Sample records for model reference approach

  1. Thermal radiation transfer calculations in combustion fields using the SLW model coupled with a modified reference approach

    NASA Astrophysics Data System (ADS)

    Darbandi, Masoud; Abrar, Bagher

    2018-01-01

    The spectral-line weighted-sum-of-gray-gases (SLW) model is a modern global model that can be used to predict thermal radiation heat transfer within combustion fields. Past SLW model users have mostly employed the reference approach to calculate the local values of the gray gases' absorption coefficient. This classical reference approach assumes that the absorption spectra of gases at different thermodynamic conditions are scalable with the absorption spectrum of the gas at a reference thermodynamic state in the domain. However, this assumption is not reasonable in combustion fields, where the gas temperature can differ greatly from the reference temperature. Consequently, the results of the SLW model combined with the classical reference approach, i.e., the classical SLW method, are highly sensitive to the reference temperature magnitude in non-isothermal combustion fields. To lessen this sensitivity, the current work combines the SLW model with a modified reference approach, which is a particular one among the eight possible reference approach forms reported recently by Solovjov et al. [DOI: 10.1016/j.jqsrt.2017.01.034, 2017]. The combination is called the "modified SLW method". This work shows that the modified reference approach, when coupled with the SLW method, can provide more accurate total emissivity calculations than the classical reference approach. This would be particularly helpful for more accurate calculation of radiation transfer in highly non-isothermal combustion fields. To demonstrate this, we use both the classical and modified SLW methods to calculate the radiation transfer in such fields. It is shown that the modified SLW method can almost eliminate the sensitivity of the results to the chosen reference temperature in treating highly non-isothermal combustion fields.
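
    As a rough, self-contained illustration of the weighted-sum-of-gray-gases bookkeeping that underlies any SLW calculation (not the paper's modified reference approach itself), the Python sketch below evaluates a total emissivity of the form epsilon = sum_j a_j (1 - exp(-kappa_j L)). The gray-gas absorption coefficients and weights are invented placeholder values; a real SLW implementation would derive them from absorption-line blackbody distribution functions evaluated at the chosen reference state.

        import numpy as np

        def slw_total_emissivity(kappa, weights, path_length):
            """Weighted-sum-of-gray-gases total emissivity.

            kappa       : gray-gas absorption coefficients [1/m] (first entry may be
                          zero to represent the transparent "clear" gas)
            weights     : corresponding gray-gas weights a_j (should sum to ~1)
            path_length : path (mean beam) length L [m]
            """
            kappa = np.asarray(kappa, dtype=float)
            weights = np.asarray(weights, dtype=float)
            return float(np.sum(weights * (1.0 - np.exp(-kappa * path_length))))

        # Placeholder gray-gas data (illustrative only; an SLW code would obtain
        # these from ALBDF tables at a reference thermodynamic state).
        kappa_j = [0.0, 0.25, 2.0, 16.0]    # 1/m, first entry = clear gas
        a_j = [0.35, 0.30, 0.20, 0.15]      # weights summing to 1
        print(slw_total_emissivity(kappa_j, a_j, path_length=1.0))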

  2. A Practical Approach to Governance and Optimization of Structured Data Elements.

    PubMed

    Collins, Sarah A; Gesner, Emily; Morgan, Steven; Mar, Perry; Maviglia, Saverio; Colburn, Doreen; Tierney, Diana; Rocha, Roberto

    2015-01-01

    Definition and configuration of clinical content in an enterprise-wide electronic health record (EHR) implementation is highly complex. Sharing of data definitions across applications within an EHR implementation project may be constrained by practical limitations, including time, tools, and expertise. However, maintaining rigor in an approach to data governance is important for sustainability and consistency. With this understanding, we have defined a practical approach for governance of structured data elements to optimize data definitions given limited resources. This approach includes a 10-step process: 1) identification of clinical topics, 2) creation of draft reference models for clinical topics, 3) scoring of downstream data needs for clinical topics, 4) prioritization of clinical topics, 5) validation of reference models for clinical topics, 6) calculation of gap analyses of the EHR compared against the reference models, 7) communication of validated reference models across project members, 8) requested revisions to the EHR based on gap analysis, 9) evaluation of usage of reference models across the project, and 10) monitoring for new evidence requiring revisions to reference models.

  3. A standard satellite control reference model

    NASA Technical Reports Server (NTRS)

    Golden, Constance

    1994-01-01

    This paper describes a Satellite Control Reference Model that provides the basis for an approach to identify where standards would be beneficial in supporting space operations functions. The background and context for the development of the model and the approach are described. A process for using this reference model to trace top level interoperability directives to specific sets of engineering interface standards that must be implemented to meet these directives is discussed. Issues in developing a 'universal' reference model are also identified.

  4. Decentralized model reference adaptive control of large flexible structures

    NASA Technical Reports Server (NTRS)

    Lee, Fu-Ming; Fong, I-Kong; Lin, Yu-Hwan

    1988-01-01

    A decentralized model reference adaptive control (DMRAC) method is developed for large flexible structures (LFS). The development follows that of a centralized model reference adaptive control for LFS, which has been shown to be feasible. The proposed method is illustrated using a simply supported beam with collocated actuators and sensors. Results show that the DMRAC can achieve either output regulation or output tracking with adequate convergence, provided the reference model inputs and their time derivatives are integrable, bounded, and approach zero as t approaches infinity.

  5. The rank correlated SLW model of gas radiation in non-uniform media

    NASA Astrophysics Data System (ADS)

    Solovjov, Vladimir P.; Andre, Frederic; Lemonnier, Denis; Webb, Brent W.

    2017-08-01

    A comprehensive theoretical treatment of possible reference approaches in modelling of radiation transfer in non-uniform gaseous media is developed within the framework of the Generalized SLW Model. The notion of absorption spectrum "correlation" adopted currently for global methods in gas radiation is critically revisited and replaced by a less restrictive concept of rank correlated spectrum. Within this framework it is shown that eight different reference approaches are possible, of which only three have been reported in the literature. Among the approaches presented is a novel Rank Correlated SLW Model, which is distinguished by the fact that i) it does not require the specification of a reference gas thermodynamic state, and ii) it preserves the emission term in the spectrally integrated Radiative Transfer Equation. Construction of this reference model requires only two absorption line blackbody distribution functions, and subdivision into gray gases can be performed using standard quadratures. Consequently, this new reference approach appears to have significant advantages over all other methods, and is, in general, a significant improvement in the global modelling of gas radiation. All reference approaches are summarized in the present work, and their use in radiative transfer prediction is demonstrated for simple example cases. Further, a detailed rigorous theoretical development of the improved methods is provided.

  6. Reconsidering Cluster Bias in Multilevel Data: A Monte Carlo Comparison of Free and Constrained Baseline Approaches.

    PubMed

    Guenole, Nigel

    2018-01-01

    The test for item level cluster bias examines the improvement in model fit that results from freeing an item's between level residual variance from a baseline model with equal within and between level factor loadings and between level residual variances fixed at zero. A potential problem is that this approach may include a misspecified unrestricted model if any non-invariance is present, but the log-likelihood difference test requires that the unrestricted model is correctly specified. A free baseline approach, where the unrestricted model includes only the restrictions needed for model identification, should lead to better decision accuracy, but no studies have examined this yet. We ran a Monte Carlo study to investigate this issue. When the referent item is unbiased, the constrained baseline approach led to true positive (power) rates similar to those of the free baseline approach but to much higher false positive (Type I error) rates. The free baseline approach should therefore be preferred when the referent indicator is unbiased. When the referent assumption is violated, the false positive rate was unacceptably high for both free and constrained baseline approaches, and the true positive rate was poor regardless of which approach was used. Neither the free nor the constrained baseline approach can be recommended when the referent indicator is biased. We recommend paying close attention to ensuring the referent indicator is unbiased in tests of cluster bias. All Mplus input and output files, R, and short Python scripts used to execute this simulation study are uploaded to an open access repository.
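
    For readers less familiar with the log-likelihood difference (likelihood-ratio) test the study relies on, the Python snippet below shows the generic calculation for comparing a constrained baseline model against a freer alternative. The log-likelihood values and the degrees-of-freedom difference are invented for illustration; in practice they would come from software such as Mplus or an R SEM package.

        from scipy.stats import chi2

        def loglik_difference_test(ll_restricted, ll_unrestricted, df_diff):
            """Likelihood-ratio (log-likelihood difference) test.

            The statistic 2 * (ll_unrestricted - ll_restricted) is referred to a
            chi-square distribution with df_diff degrees of freedom; validity
            requires the unrestricted model to be correctly specified.
            """
            lr_stat = 2.0 * (ll_unrestricted - ll_restricted)
            p_value = chi2.sf(lr_stat, df_diff)
            return lr_stat, p_value

        # Hypothetical fitted log-likelihoods: between-level residual variance of an
        # item fixed at zero (restricted) versus freely estimated (unrestricted).
        stat, p = loglik_difference_test(ll_restricted=-5230.4,
                                         ll_unrestricted=-5226.1,
                                         df_diff=1)
        print(f"LR statistic = {stat:.2f}, p = {p:.4f}")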

  7. Reconsidering Cluster Bias in Multilevel Data: A Monte Carlo Comparison of Free and Constrained Baseline Approaches

    PubMed Central

    Guenole, Nigel

    2018-01-01

    The test for item level cluster bias examines the improvement in model fit that results from freeing an item's between level residual variance from a baseline model with equal within and between level factor loadings and between level residual variances fixed at zero. A potential problem is that this approach may include a misspecified unrestricted model if any non-invariance is present, but the log-likelihood difference test requires that the unrestricted model is correctly specified. A free baseline approach, where the unrestricted model includes only the restrictions needed for model identification, should lead to better decision accuracy, but no studies have examined this yet. We ran a Monte Carlo study to investigate this issue. When the referent item is unbiased, the constrained baseline approach led to true positive (power) rates similar to those of the free baseline approach but to much higher false positive (Type I error) rates. The free baseline approach should therefore be preferred when the referent indicator is unbiased. When the referent assumption is violated, the false positive rate was unacceptably high for both free and constrained baseline approaches, and the true positive rate was poor regardless of which approach was used. Neither the free nor the constrained baseline approach can be recommended when the referent indicator is biased. We recommend paying close attention to ensuring the referent indicator is unbiased in tests of cluster bias. All Mplus input and output files, R, and short Python scripts used to execute this simulation study are uploaded to an open access repository. PMID:29551985

  8. A Direct Adaptive Control Approach in the Presence of Model Mismatch

    NASA Technical Reports Server (NTRS)

    Joshi, Suresh M.; Tao, Gang; Khong, Thuan

    2009-01-01

    This paper considers the problem of direct model reference adaptive control when the plant-model matching conditions are violated due to abnormal changes in the plant or incorrect knowledge of the plant's mathematical structure. The approach consists of direct adaptation of state feedback gains for state tracking, and simultaneous estimation of the plant-model mismatch. Because of the mismatch, the plant can no longer track the state of the original reference model, but may be able to track a new reference model that still provides satisfactory performance. The reference model is updated if the estimated plant-model mismatch exceeds a bound that is determined via robust stability and/or performance criteria. The resulting controller is a hybrid direct-indirect adaptive controller that offers asymptotic state tracking in the presence of plant-model mismatch as well as parameter deviations.

  9. Modal-space reference-model-tracking fuzzy control of earthquake excited structures

    NASA Astrophysics Data System (ADS)

    Park, Kwan-Soon; Ok, Seung-Yong

    2015-01-01

    This paper describes an adaptive modal-space reference-model-tracking fuzzy control technique for the vibration control of earthquake-excited structures. In the proposed approach, fuzzy logic is introduced to update the optimal control force so that the controlled structural response can track the desired response of a reference model. For easy and practical implementation, the reference model is constructed by assigning the target damping ratios to the first few dominant modes in modal space. The numerical simulation results demonstrate that, by redistributing the feedback control forces to the available actuators, the proposed approach achieves not only adaptive fault-tolerant control against partial actuator failures but also robust performance against variations of the uncertain system properties.

  10. Approaches to defining reference regimes for river restoration planning

    NASA Astrophysics Data System (ADS)

    Beechie, T. J.

    2014-12-01

    Reference conditions or reference regimes can be defined using three general approaches: historical analysis, contemporary reference sites, and theoretical or empirical models. For large features (e.g., floodplain channels and ponds) historical data and maps are generally reliable. For smaller features (e.g., pools and riffles in small tributaries), field data from contemporary reference sites are a reasonable surrogate for historical data. Models are generally used for features that have no historical information or present-day reference sites (e.g., beaver pond habitat). Each of these approaches contributes to a watershed-wide understanding of current biophysical conditions relative to potential conditions, which helps create a guiding vision for restoration and also helps quantify and locate the largest or most important restoration opportunities. Common uses of geomorphic and biological reference conditions include identifying key areas for habitat protection or restoration, and informing the choice of restoration targets. Examples of the use of each of these three approaches to define reference regimes in the western USA illustrate how historical information and current research highlight key restoration opportunities, focus restoration effort in areas that can produce the largest ecological benefit, and contribute to estimating restoration potential and assessing the likelihood of achieving restoration goals.

  11. Data-driven model reference control of MIMO vertical tank systems with model-free VRFT and Q-Learning.

    PubMed

    Radac, Mircea-Bogdan; Precup, Radu-Emil; Roman, Raul-Cristian

    2018-02-01

    This paper proposes a combined Virtual Reference Feedback Tuning-Q-learning model-free control approach, which tunes nonlinear static state feedback controllers to achieve output model reference tracking in an optimal control framework. The novel iterative Batch Fitted Q-learning strategy uses two neural networks to represent the value function (critic) and the controller (actor), and it is referred to as a mixed Virtual Reference Feedback Tuning-Batch Fitted Q-learning approach. Learning convergence of Q-learning schemes generally depends, among other settings, on the efficient exploration of the state-action space. Handcrafting test signals for efficient exploration is difficult even for input-output stable unknown processes. Virtual Reference Feedback Tuning can ensure that an initial stabilizing controller is learned from few input-output data; this controller can then be used to collect substantially more input-state data in a controlled mode, in a constrained environment, by compensating the process dynamics. These data are used to learn significantly superior nonlinear state-feedback neural network controllers for model reference tracking, using the proposed Batch Fitted Q-learning iterative tuning strategy, motivating the original combination of the two techniques. The mixed Virtual Reference Feedback Tuning-Batch Fitted Q-learning approach is experimentally validated for water level control of a multi-input multi-output nonlinear constrained coupled two-tank system. Discussions on the observed control behavior are offered. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
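
    The batch fitted Q-learning ingredient can be sketched generically as follows. This is not the authors' mixed VRFT-Q design; it is a minimal fitted Q iteration over logged (state, action, cost, next-state) tuples, using an off-the-shelf scikit-learn regressor as the critic and a discretized action grid for the Bellman minimization, all of which are simplifying assumptions.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def fitted_q_iteration(transitions, action_grid, gamma=0.95, n_iters=20):
            """Batch fitted Q iteration on a fixed set of transitions.

            transitions : list of (state, action, cost, next_state) tuples, with
                          states and actions as 1-D NumPy arrays (cost is minimized)
            action_grid : candidate actions used to approximate min_a' Q(s', a')
            """
            s = np.array([np.concatenate([t[0], np.atleast_1d(t[1])]) for t in transitions])
            c = np.array([t[2] for t in transitions], dtype=float)
            sp = np.array([t[3] for t in transitions])

            q = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000)
            targets = c.copy()                    # first pass: Q = immediate cost
            for _ in range(n_iters):
                q.fit(s, targets)
                # Bellman backup: minimum over the candidate action grid at next states
                q_next = np.column_stack([
                    q.predict(np.hstack([sp, np.tile(np.atleast_1d(a), (len(sp), 1))]))
                    for a in action_grid
                ])
                targets = c + gamma * q_next.min(axis=1)
            return q

    A greedy controller is then obtained by evaluating the fitted Q-function over the same action grid at the current state and applying the minimizing action.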

  12. A mapping closure for turbulent scalar mixing using a time-evolving reference field

    NASA Technical Reports Server (NTRS)

    Girimaji, Sharath S.

    1992-01-01

    A general mapping-closure approach for modeling scalar mixing in homogeneous turbulence is developed. This approach is different from the previous methods in that the reference field also evolves according to the same equations as the physical scalar field. The use of a time-evolving Gaussian reference field results in a model that is similar to the mapping closure model of Pope (1991), which is based on the methodology of Chen et al. (1989). Both models yield identical relationships between the scalar variance and higher-order moments, which are in good agreement with heat conduction simulation data and can be consistent with any type of epsilon(phi) evolution. The present methodology can be extended to any reference field whose behavior is known. The possibility of a beta-pdf reference field is explored. The shortcomings of the mapping closure methods are discussed, and the limit at which the mapping becomes invalid is identified.

  13. Model-Free Primitive-Based Iterative Learning Control Approach to Trajectory Tracking of MIMO Systems With Experimental Validation.

    PubMed

    Radac, Mircea-Bogdan; Precup, Radu-Emil; Petriu, Emil M

    2015-11-01

    This paper proposes a novel model-free trajectory tracking approach for multiple-input multiple-output (MIMO) systems based on the combination of iterative learning control (ILC) and primitives. The optimal trajectory tracking solution is obtained in terms of previously learned solutions to simple tasks called primitives. The library of primitives that are stored in memory consists of pairs of reference input/controlled output signals. The reference input primitives are optimized in a model-free ILC framework without using knowledge of the controlled process. The guaranteed convergence of the learning scheme is built upon a model-free virtual reference feedback tuning design of the feedback decoupling controller. Each new complex trajectory to be tracked is decomposed into the output primitives regarded as basis functions. The optimal reference input for the control system to track the desired trajectory is next recomposed from the reference input primitives. This is advantageous because the optimal reference input is computed straightforwardly, without the need to learn from repeated executions of the tracking task. In addition, the optimization problem specific to trajectory tracking of square MIMO systems is decomposed into a set of optimization problems assigned to each separate single-input single-output control channel, which ensures a convenient model-free decoupling. The new model-free primitive-based ILC approach is capable of planning, reasoning, and learning. A case study dealing with the model-free control tuning for a nonlinear aerodynamic system is included to validate the new approach. The experimental results are given.

  14. Output Feedback Adaptive Control of Non-Minimum Phase Systems Using Optimal Control Modification

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan; Hashemi, Kelley E.; Yucelen, Tansel; Arabi, Ehsan

    2018-01-01

    This paper describes output feedback adaptive control approaches for non-minimum phase SISO systems with relative degree 1 and non-strictly positive real (SPR) MIMO systems with uniform relative degree 1 using the optimal control modification method. It is well-known that the standard model-reference adaptive control (MRAC) cannot be used to control non-SPR plants to track an ideal SPR reference model. Due to the ideal property of asymptotic tracking, MRAC attempts an unstable pole-zero cancellation which results in unbounded signals for non-minimum phase SISO systems. The optimal control modification can be used to prevent the unstable pole-zero cancellation which results in a stable adaptation of non-minimum phase SISO systems. However, the tracking performance using this approach could suffer if the unstable zero is located far away from the imaginary axis. The tracking performance can be recovered by using an observer-based output feedback adaptive control approach which uses a Luenberger observer design to estimate the state information of the plant. Instead of explicitly specifying an ideal SPR reference model, the reference model is established from the linear quadratic optimal control to account for the non-minimum phase behavior of the plant. With this non-minimum phase reference model, the observer-based output feedback adaptive control can maintain stability as well as tracking performance. However, in the presence of the mismatch between the SPR reference model and the non-minimum phase plant, the standard MRAC results in unbounded signals, whereas a stable adaptation can be achieved with the optimal control modification. An application of output feedback adaptive control for a flexible wing aircraft illustrates the approaches.

  15. Team Approach to Staffing the Reference Center: A Speculation.

    ERIC Educational Resources Information Center

    Lawson, Mollie D.; And Others

    This document applies theories of participatory management to a proposal for a model that uses a team approach to staffing university library reference centers. In particular, the Ward Edwards Library at Central Missouri State University is examined in terms of the advantages and disadvantages of its current approach. Special attention is given to…

  16. Dynamics and control of quadcopter using linear model predictive control approach

    NASA Astrophysics Data System (ADS)

    Islam, M.; Okasha, M.; Idres, M. M.

    2017-12-01

    This paper investigates the dynamics and control of a quadcopter using the Model Predictive Control (MPC) approach. The dynamic model is of high fidelity and nonlinear, with six degrees of freedom, and includes disturbances and model uncertainties. The control approach is developed based on MPC to track different reference trajectories, ranging from simple circular to complex helical trajectories. In this control technique, a linearized model is derived and the receding horizon method is applied to generate the optimal control sequence. Although MPC is computationally expensive, it is highly effective in dealing with different types of nonlinearities and constraints, such as actuator saturation and model uncertainties. The MPC parameters (control and prediction horizons) are selected by a trial-and-error approach. Several simulation scenarios are performed to examine and evaluate the performance of the proposed control approach using the MATLAB and Simulink environment. Simulation results show that this control approach is highly effective in tracking a given reference trajectory.
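
    The receding-horizon computation mentioned above can be illustrated with a generic unconstrained linear MPC step (not the authors' quadcopter model): stack the prediction matrices, minimize a quadratic tracking cost over the horizon, and apply only the first input. The double-integrator system, weights, and horizon below are placeholders.

        import numpy as np

        def mpc_control(A, B, x0, ref_traj, Q, R, horizon):
            """Unconstrained linear MPC: return the first input of the optimal sequence.

            Predictions: x_{k+1} = A^{k+1} x0 + sum_{j<=k} A^{k-j} B u_j.
            Cost: sum_k (x_k - r_k)^T Q (x_k - r_k) + u_k^T R u_k.
            """
            n, m = B.shape
            N = horizon
            F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
            G = np.zeros((N * n, N * m))
            for k in range(N):
                for j in range(k + 1):
                    G[k*n:(k+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, k - j) @ B
            Qbar = np.kron(np.eye(N), Q)
            Rbar = np.kron(np.eye(N), R)
            r = np.asarray(ref_traj).reshape(N * n)
            # Minimize (F x0 + G U - r)^T Qbar (.) + U^T Rbar U  in closed form
            H = G.T @ Qbar @ G + Rbar
            g = G.T @ Qbar @ (F @ x0 - r)
            U = np.linalg.solve(H, -g)
            return U[:m]                # receding horizon: apply the first input only

        # Tiny double-integrator example tracking a constant position reference
        dt = 0.1
        A = np.array([[1.0, dt], [0.0, 1.0]])
        B = np.array([[0.5 * dt**2], [dt]])
        Q = np.diag([10.0, 1.0]); R = np.array([[0.1]])
        ref = np.tile(np.array([1.0, 0.0]), (20, 1))     # hold position 1, velocity 0
        print(mpc_control(A, B, x0=np.zeros(2), ref_traj=ref, Q=Q, R=R, horizon=20))

    A constrained formulation (e.g., actuator saturation) would replace the closed-form solve with a quadratic-programming step, which is where most of the computational cost arises.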

  17. An approach for modeling thermal destruction of hazardous wastes in circulating fluidized bed incinerator.

    PubMed

    Patil, M P; Sonolikar, R L

    2008-10-01

    This paper presents a detailed computational fluid dynamics (CFD) based approach for modeling thermal destruction of hazardous wastes in a circulating fluidized bed (CFB) incinerator. The model is based on an Eulerian-Lagrangian approach in which the gas phase (continuous phase) is treated in an Eulerian reference frame, whereas the waste particulate (dispersed phase) is treated in a Lagrangian reference frame. The reaction chemistry has been modeled through a mixture fraction/PDF approach. The conservation equations for mass, momentum, energy, mixture fraction and other closure equations have been solved using the general purpose CFD code FLUENT 4.5. A finite volume method on a structured grid has been used for solution of the governing equations. The model provides detailed information on the hydrodynamics (gas velocity, particulate trajectories), gas composition (CO, CO2, O2) and temperature inside the riser. The model also allows different operating scenarios to be examined in an efficient manner.

  18. Frequency Response of an Aircraft Wing with Discrete Source Damage Using Equivalent Plate Analysis

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, T.; Eldred, Lloyd B.

    2007-01-01

    An equivalent plate procedure is developed to provide a computationally efficient means of matching the stiffness and frequencies of flight vehicle wing structures for prescribed loading conditions. Several new approaches are proposed and studied to match the stiffness and first five natural frequencies of the two reference models with and without damage. One approach divides the candidate reference plate into multiple zones in which stiffness and mass can be varied using a variety of materials including aluminum, graphite-epoxy, and foam-core graphite-epoxy sandwiches. Another approach places point masses along the edge of the stiffness-matched plate to tune the natural frequencies. Both approaches are successful at matching the stiffness and natural frequencies of the reference plates and provide useful insight into determination of crucial features in equivalent plate models of aircraft wing structures.

  19. An approach for formalising the supply chain operations

    NASA Astrophysics Data System (ADS)

    Zdravković, Milan; Panetto, Hervé; Trajanović, Miroslav; Aubry, Alexis

    2011-11-01

    Reference models play an important role in the knowledge management of various complex collaboration domains (such as supply chain networks). However, they often lack semantic precision and are sometimes incomplete. In this article, we present an approach to overcome the semantic inconsistencies and incompleteness of the Supply Chain Operations Reference (SCOR) model and hence improve its usefulness and expand its application domain. First, we describe a literal web ontology language (OWL) specification of SCOR concepts (and related tools) built with the intention to preserve the original approach in the classification of process reference model entities, and hence enable effective usage in original contexts. Next, we demonstrate the system for its exploitation, specifically tools for SCOR framework browsing and rapid supply chain process configuration. Then, we describe the SCOR-Full ontology and its relations to relevant domain ontologies, and show how it can be exploited to improve the competence of the SCOR ontological framework. Finally, we elaborate the potential impact of the presented approach on the interoperability of systems in supply chain networks.

  20. Relative motion using analytical differential gravity

    NASA Technical Reports Server (NTRS)

    Gottlieb, Robert G.

    1988-01-01

    This paper presents a new approach to the computation of the motion of one satellite relative to another. The trajectory of the reference satellite is computed accurately subject to geopotential perturbations. This precise trajectory is used as a reference in computing the position of a nearby body, or bodies. The problem that arises in this approach is differencing nearly equal terms in the geopotential model, especially as the separation of the reference and nearby bodies approaches zero. By developing closed form expressions for differences in higher order and degree geopotential terms, the numerical problem inherent in the differencing approach is eliminated.

  1. On the role of modeling choices in estimation of cerebral aneurysm wall tension.

    PubMed

    Ramachandran, Manasi; Laakso, Aki; Harbaugh, Robert E; Raghavan, Madhavan L

    2012-11-15

    To assess various approaches to estimating pressure-induced wall tension in intracranial aneurysms (IA) and their effect on the stratification of subjects in a study population. Three-dimensional models of 26 IAs (9 ruptured and 17 unruptured) were segmented from Computed Tomography Angiography (CTA) images. Wall tension distributions in these patient-specific geometric models were estimated using various approaches that differed in the morphological detail utilized or the modeling choices made. For all subjects in the study population, the peak wall tension was estimated using all investigated approaches and compared to a reference approach: nonlinear finite element (FE) analysis using the Fung anisotropic model with regionally varying material fiber directions. Comparisons between approaches focused on the similarity in stratification of IAs within the population based on peak wall tension. The stratification of IAs by tension deviated to some extent from the reference approach as less geometric detail was incorporated. Interestingly, the size of the cerebral aneurysm as captured by a single size measure was the predominant determinant of peak wall tension-based stratification. Within FE approaches, simplifications to isotropy, material linearity and geometric linearity caused a gradual deviation from the reference estimates, but it was minimal and resulted in little to no impact on stratifications of IAs. Differences in modeling choices made without patient-specificity in the parameters of such models had little impact on tension-based IA stratification in this population. Increasing morphological detail did impact the estimated peak wall tension, but size was the predominant determinant. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. Advantages of a dual-tracer model over reference tissue models for binding potential measurement in tumors

    PubMed Central

    Tichauer, K M; Samkoe, K S; Klubben, W S; Hasan, T; Pogue, B W

    2012-01-01

    The quantification of tumor molecular expression in vivo could have a significant impact on informing and monitoring emerging targeted therapies in oncology. Molecular imaging of targeted tracers can be used to quantify receptor expression in the form of a binding potential (BP) if the arterial input curve or a surrogate of it is also measured. However, the assumptions of the most common approaches (reference tissue models) may not be valid for use in tumors. In this study, the validity of reference tissue models is investigated for use in tumors experimentally and in simulations. Three different tumor lines were grown subcutaneously in athymic mice and the mice were injected with a mixture of an epidermal growth factor receptor- (EGFR-) targeted fluorescent tracer and an untargeted fluorescent tracer. A one-compartment plasma input model demonstrated that the transport kinetics of both tracers were significantly different between tumors and all potential reference tissues, and using the reference tissue model resulted in a theoretical underestimation in BP of 50 ± 37%. On the other hand, the targeted and untargeted tracers demonstrated similar transport kinetics, allowing a dual-tracer approach to be employed to accurately estimate binding potential (with a theoretical error of 0.23 ± 9.07%). These findings highlight the potential for using a dual-tracer approach to quantify receptor expression in tumors with abnormal hemodynamics, possibly to inform the choice or progress of molecular cancer therapies. PMID:23022732

  3. Implementing system simulation of C3 systems using autonomous objects

    NASA Technical Reports Server (NTRS)

    Rogers, Ralph V.

    1987-01-01

    The basis of all conflict recognition in simulation is a common frame of reference. Synchronous discrete-event simulation relies on the fixed points in time as the basic frame of reference. Asynchronous discrete-event simulation relies on fixed-points in the model space as the basic frame of reference. Neither approach provides sufficient support for autonomous objects. The use of a spatial template as a frame of reference is proposed to address these insufficiencies. The concept of a spatial template is defined and an implementation approach offered. Discussed are the uses of this approach to analyze the integration of sensor data associated with Command, Control, and Communication systems.

  4. Systems, Shocks and Time Bombs

    NASA Astrophysics Data System (ADS)

    Winder, Nick

    The following sections are included: * Introduction * Modelling strategies * Are time-bomb phenomena important? * Heuristic approaches to time-bomb phenomena * Three rational approaches to TBP * Two irrational approaches * Conclusions * References

  5. Improving the Efficiency of Abdominal Aortic Aneurysm Wall Stress Computations

    PubMed Central

    Zelaya, Jaime E.; Goenezen, Sevan; Dargon, Phong T.; Azarbal, Amir-Farzin; Rugonyi, Sandra

    2014-01-01

    An abdominal aortic aneurysm (AAA) is a pathological dilation of the abdominal aorta, which carries a high mortality rate if ruptured. The most commonly used surrogate marker of rupture risk is the maximal transverse diameter of the aneurysm. More recent studies suggest that wall stress from models of patient-specific aneurysm geometries extracted, for instance, from computed tomography images may be a more accurate predictor of rupture risk and an important factor in AAA size progression. However, quantification of wall stress is typically computationally intensive and time-consuming, mainly due to the nonlinear mechanical behavior of the abdominal aortic aneurysm walls. These difficulties have limited the potential of computational models in clinical practice. To facilitate computation of wall stresses, we propose to use a linear approach that ensures equilibrium of wall stresses in the aneurysms. This proposed linear model approach is easy to implement and eliminates the burden of nonlinear computations. To assess the accuracy of our proposed approach to compute wall stresses, results from idealized and patient-specific model simulations were compared to those obtained using conventional approaches and to those of a hypothetical, reference abdominal aortic aneurysm model. For the reference model, wall mechanical properties and the initial unloaded and unstressed configuration were assumed to be known, and the resulting wall stresses were used as a reference for comparison. Our proposed linear approach accurately approximates wall stresses for varying model geometries and wall material properties. Our findings suggest that the proposed linear approach could be used as an effective, efficient, easy-to-use clinical tool to estimate patient-specific wall stresses. PMID:25007052

  6. Towards generalised reference condition models for environmental assessment: a case study on rivers in Atlantic Canada.

    PubMed

    Armanini, D G; Monk, W A; Carter, L; Cote, D; Baird, D J

    2013-08-01

    Evaluation of the ecological status of river sites in Canada is supported by building models using the reference condition approach. However, geography, data scarcity and inter-operability constraints have frustrated attempts to monitor national-scale status and trends. This problem is particularly acute in Atlantic Canada, where no ecological assessment system is currently available. Here, we present a reference condition model with regional-scale applicability based on the River Invertebrate Prediction and Classification System approach. To achieve this, we used biological monitoring data collected from wadeable streams across Atlantic Canada together with freely available, nationally consistent geographic information system (GIS) environmental data layers. For the first time, we demonstrated that it is possible to use data generated from different studies, even when collected using different sampling methods, to generate a robust predictive model. This model was successfully generated and tested using GIS-based rather than local habitat variables and showed improved performance when compared to a null model. In addition, ecological quality ratio data derived from the model responded to observed stressors in a test dataset. Implications for future large-scale implementation of river biomonitoring using a standardised approach with global application are presented.

  7. Impact of aerosol size representation on modeling aerosol-cloud interactions

    DOE PAGES

    Zhang, Y.; Easter, R. C.; Ghan, S. J.; ...

    2002-11-07

    In this study, we use a 1-D version of a climate-aerosol-chemistry model with both modal and sectional aerosol size representations to evaluate the impact of aerosol size representation on modeling aerosol-cloud interactions in shallow stratiform clouds observed during the 2nd Aerosol Characterization Experiment. Both the modal (with prognostic aerosol number and mass or prognostic aerosol number, surface area and mass, referred to as the Modal-NM and Modal-NSM) and the sectional approaches (with 12 and 36 sections) predict total number and mass for interstitial and activated particles that are generally within several percent of references from a high resolution 108-section approach. The modal approach with prognostic aerosol mass but diagnostic number (referred to as the Modal-M) cannot accurately predict the total particle number and surface areas, with deviations from the references ranging from 7% to 161%. The particle size distributions are sensitive to size representations, with normalized absolute differences of up to 12% and 37% for the 36- and 12-section approaches, and 30%, 39%, and 179% for the Modal-NSM, Modal-NM, and Modal-M, respectively. For the Modal-NSM and Modal-NM, differences from the references are primarily due to the inherent assumptions and limitations of the modal approach. In particular, they cannot resolve the abrupt size transition between the interstitial and activated aerosol fractions. For the 12- and 36-section approaches, differences are largely due to limitations of the parameterized activation for non-log-normal size distributions, plus the coarse resolution for the 12-section case. Differences are larger both with higher aerosol (i.e., less complete activation) and higher SO2 concentrations (i.e., greater modification of the initial aerosol distribution).

  8. A proposed coast-wide reference monitoring system for evaluating Wetland restoration trajectories in Louisiana

    USGS Publications Warehouse

    Steyer, G.D.; Sasser, C.E.; Visser, J.M.; Swenson, E.M.; Nyman, J.A.; Raynie, R.C.

    2003-01-01

    Wetland restoration efforts conducted in Louisiana under the Coastal Wetlands Planning, Protection and Restoration Act require monitoring the effectiveness of individual projects as well as monitoring the cumulative effects of all projects in restoring, creating, enhancing, and protecting the coastal landscape. The effectiveness of the traditional paired-reference monitoring approach in Louisiana has been limited because of difficulty in finding comparable reference sites. A multiple reference approach is proposed that uses aspects of hydrogeomorphic functional assessments and probabilistic sampling. This approach includes a suite of sites that encompass the range of ecological condition for each stratum, with projects placed on a continuum of conditions found for that stratum. Trajectories in reference sites through time are then compared with project trajectories through time. Plant community zonation complicated selection of indicators, strata, and sample size. The approach proposed could serve as a model for evaluating wetland ecosystems.

  9. A proposed coast-wide reference monitoring system for evaluating wetland restoration trajectories in Louisiana.

    PubMed

    Steyer, Gregory D; Sasser, Charles E; Visser, Jenneke M; Swenson, Erick M; Nyman, John A; Raynie, Richard C

    2003-01-01

    Wetland restoration efforts conducted in Louisiana under the Coastal Wetlands Planning, Protection and Restoration Act require monitoring the effectiveness of individual projects as well as monitoring the cumulative effects of all projects in restoring, creating, enhancing, and protecting the coastal landscape. The effectiveness of the traditional paired-reference monitoring approach in Louisiana has been limited because of difficulty in finding comparable reference sites. A multiple reference approach is proposed that uses aspects of hydrogeomorphic functional assessments and probabilistic sampling. This approach includes a suite of sites that encompass the range of ecological condition for each stratum, with projects placed on a continuum of conditions found for that stratum. Trajectories in reference sites through time are then compared with project trajectories through time. Plant community zonation complicated selection of indicators, strata, and sample size. The approach proposed could serve as a model for evaluating wetland ecosystems.

  10. The Future of Computer-Based Toxicity Prediction: Mechanism-Based Models vs. Information Mining Approaches

    EPA Science Inventory


    When we speak of computer-based toxicity prediction, we are generally referring to a broad array of approaches which rely primarily upon chemical structure ...

  11. Panel C report: Standards needed for the use of ISO Open Systems Interconnection - basic reference model

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The use of an International Standards Organization (ISO) Open Systems Interconnection (OSI) Reference Model and its relevance to interconnecting an Applications Data Service (ADS) pilot program for data sharing is discussed. A top level mapping between the conjectured ADS requirements and identified layers within the OSI Reference Model was performed. It was concluded that the OSI model represents an orderly architecture for ADS network planning and that the protocols being developed by the National Bureau of Standards offer the best available implementation approach.

  12. Model reference, sliding mode adaptive control for flexible structures

    NASA Technical Reports Server (NTRS)

    Yurkovich, S.; Ozguner, U.; Al-Abbass, F.

    1988-01-01

    A decentralized model reference adaptive approach using variable-structure sliding-mode control has been developed for the vibration suppression of large flexible structures. Local models are derived based upon the desired damping and response time in a model-following scheme, and variable-structure controllers are then designed which employ collocated angular rate and position feedback. Numerical simulations have been performed using NASA's flexible grid experimental apparatus.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farrell, Kathryn, E-mail: kfarrell@ices.utexas.edu; Oden, J. Tinsley, E-mail: oden@ices.utexas.edu; Faghihi, Danial, E-mail: danial@ices.utexas.edu

    A general adaptive modeling algorithm for selection and validation of coarse-grained models of atomistic systems is presented. A Bayesian framework is developed to address uncertainties in parameters, data, and model selection. Algorithms for computing output sensitivities to parameter variances, model evidence and posterior model plausibilities for given data, and for computing what are referred to as Occam Categories in reference to a rough measure of model simplicity, make up components of the overall approach. Computational results are provided for representative applications.
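
    For orientation, the evidence and plausibility quantities mentioned above follow the standard Bayesian model-selection relations (written generically here, not as the authors' specific algorithm):

        \pi(M_j \mid D) \;=\; \frac{p(D \mid M_j)\,\pi(M_j)}{\sum_i p(D \mid M_i)\,\pi(M_i)},
        \qquad
        p(D \mid M_j) \;=\; \int p(D \mid \theta_j, M_j)\,\pi(\theta_j \mid M_j)\, d\theta_j ,

    so each coarse-grained model's posterior plausibility weighs its evidence (the parameter-marginalized likelihood) against its prior probability, and the evidence integral automatically penalizes needlessly complex models.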

  14. Effects of non-tidal atmospheric loading on a Kalman filter-based terrestrial reference frame

    NASA Astrophysics Data System (ADS)

    Abbondanza, C.; Altamimi, Z.; Chin, T. M.; Collilieux, X.; Dach, R.; Heflin, M. B.; Gross, R. S.; König, R.; Lemoine, F. G.; MacMillan, D. S.; Parker, J. W.; van Dam, T. M.; Wu, X.

    2013-12-01

    The International Terrestrial Reference Frame (ITRF) adopts a piece-wise linear model to parameterize regularized station positions and velocities. The space-geodetic (SG) solutions from VLBI, SLR, GPS and DORIS global networks used as input in the ITRF combination process account for tidal loading deformations, but ignore the non-tidal part. As a result, the non-linear signal observed in the time series of SG-derived station positions in part reflects non-tidal loading displacements not introduced in the SG data reduction. In this analysis, the effect of non-tidal atmospheric loading (NTAL) corrections on the TRF is assessed adopting a Remove/Restore approach: (i) Focusing on the a-posteriori approach, the NTAL model derived from the National Center for Environmental Prediction (NCEP) surface pressure is removed from the SINEX files of the SG solutions used as inputs to the TRF determinations. (ii) Adopting a Kalman-filter based approach, a linear TRF is estimated combining the 4 SG solutions free from NTAL displacements. (iii) Linear fits to the NTAL displacements removed at step (i) are restored to the linear reference frame estimated at (ii). The velocity fields of the (standard) linear reference frame in which the NTAL model has not been removed and the one in which the model has been removed/restored are compared and discussed.
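
    A highly simplified numerical sketch of the remove/restore bookkeeping for a single station coordinate follows (synthetic data, Python; illustrative only, not the ITRF combination itself): the modeled loading displacement is removed before fitting the linear position/velocity parameters, and a linear fit to the removed displacement is restored afterwards.

        import numpy as np

        def remove_restore_linear_fit(t, position, loading_displacement):
            """Remove a modeled loading displacement, fit a linear (position, velocity)
            model, then restore the linear part of the removed displacement."""
            corrected = position - loading_displacement              # (i) remove
            A = np.column_stack([np.ones_like(t), t])
            pos_vel = np.linalg.lstsq(A, corrected, rcond=None)[0]   # (ii) linear frame fit
            load_trend = np.linalg.lstsq(A, loading_displacement, rcond=None)[0]
            return pos_vel + load_trend                              # (iii) restore

        # Synthetic weekly series: linear motion + seasonal loading signal + noise
        t = np.arange(0, 10, 7 / 365.25)                   # years
        loading = 4.0 * np.sin(2 * np.pi * t)              # mm, modeled (e.g. NTAL) signal
        obs = 3.0 + 2.5 * t + loading + 0.5 * np.random.randn(t.size)
        print(remove_restore_linear_fit(t, obs, loading))  # approximately [3.0, 2.5]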

  15. Control of Systems With Slow Actuators Using Time Scale Separation

    NASA Technical Reports Server (NTRS)

    Stepanyan, Vehram; Nguyen, Nhan

    2009-01-01

    This paper addresses the problem of controlling a nonlinear plant with a slow actuator using singular perturbation method. For the known plant-actuator cascaded system the proposed scheme achieves tracking of a given reference model with considerably less control demand than would otherwise result when using conventional design techniques. This is the consequence of excluding the small parameter from the actuator dynamics via time scale separation. The resulting tracking error is within the order of this small parameter. For the unknown system the adaptive counterpart is developed based on the prediction model, which is driven towards the reference model by the control design. It is proven that the prediction model tracks the reference model with an error proportional to the small parameter, while the prediction error converges to zero. The resulting closed-loop system with all prediction models and adaptive laws remains stable. The benefits of the approach are demonstrated in simulation studies and compared to conventional control approaches.

  16. Research in Distance Education: A System Modeling Approach.

    ERIC Educational Resources Information Center

    Saba, Farhad; Twitchell, David

    1988-01-01

    Describes how a computer simulation research method can be used for studying distance education systems. Topics discussed include systems research in distance education; a technique of model development using the System Dynamics approach and DYNAMO simulation language; and a computer simulation of a prototype model. (18 references) (LRW)

  17. Using physiologically based pharmacokinetic modeling to address nonlinear kinetics and changes in rodent physiology and metabolism due to aging and adaptation in deriving reference values for propylene glycol methyl ether and propylene glycol methyl ether acetate.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirman, C R.; Sweeney, Lisa M.; Corley, Rick A.

    2005-04-01

    Reference values, including an oral reference dose (RfD) and an inhalation reference concentration (RfC), were derived for propylene glycol methyl ether (PGME), and an oral RfD was derived for its acetate (PGMEA). These values were based upon transient sedation observed in F344 rats and B6C3F1 mice during a two-year inhalation study. The dose-response relationship for sedation was characterized using internal dose measures as predicted by a physiologically based pharmacokinetic (PBPK) model for PGME and its acetate. PBPK modeling was used to account for changes in rodent physiology and metabolism due to aging and adaptation, based on data collected during weeks 1, 2, 26, 52, and 78 of a chronic inhalation study. The peak concentration of PGME in richly perfused tissues was selected as the most appropriate internal dose measure based upon a consideration of the mode of action for sedation and similarities in tissue partitioning between brain and other richly perfused tissues. Internal doses (peak tissue concentrations of PGME) were designated as either no-observed-adverse-effect levels (NOAELs) or lowest-observed-adverse-effect levels (LOAELs) based upon the presence or absence of sedation at each time-point, species, and sex in the two-year study. Distributions of the NOAEL and LOAEL values expressed in terms of internal dose were characterized using an arithmetic mean and standard deviation, with the mean internal NOAEL serving as the basis for the reference values, which was then divided by appropriate uncertainty factors. Where the data permitted, chemical-specific adjustment factors were derived to replace default uncertainty factor values of ten. Nonlinear kinetics were predicted by the model in all species at PGME concentrations exceeding 100 ppm, which complicates interspecies and low-dose extrapolations. To address this complication, reference values were derived using two approaches, which differ with respect to the order in which these extrapolations were performed: (1) uncertainty factor application followed by interspecies extrapolation (PBPK modeling); and (2) interspecies extrapolation followed by uncertainty factor application. The resulting reference values for these two approaches are substantially different, with values from the former approach being 7-fold higher than those from the latter approach. Such a striking difference between the two approaches reveals an underlying issue that has received little attention in the literature regarding the application of uncertainty factors and interspecies extrapolations to compounds where saturable kinetics occur in the range of the NOAEL. Until such discussions have taken place, reference values based on the latter approach are recommended for risk assessments involving human exposures to PGME and PGMEA.
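
    The order-of-operations issue raised in the last paragraph can be shown with a toy calculation: when the external-to-internal dose relationship saturates, applying uncertainty factors before the dose conversion does not give the same answer as converting first and dividing afterwards. The Michaelis-Menten-style mapping and the numbers below are purely illustrative stand-ins for a PBPK model prediction, not values from this study.

        def internal_dose(external_dose, vmax=100.0, km=50.0):
            """Toy saturable (Michaelis-Menten-like) external-to-internal dose mapping,
            standing in for a PBPK model prediction."""
            return vmax * external_dose / (km + external_dose)

        noael_external = 300.0   # hypothetical external NOAEL (arbitrary units)
        uf_total = 30.0          # hypothetical combined uncertainty factor

        # Approach (1): apply uncertainty factors first, then convert to internal dose
        ref_a = internal_dose(noael_external / uf_total)
        # Approach (2): convert to internal dose first, then apply uncertainty factors
        ref_b = internal_dose(noael_external) / uf_total

        print(ref_a, ref_b)      # the two differ because the mapping is nonlinear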

  18. Simulation Model for the Piper PA-30 Light Maneuverable Aircraft in the Final Approach

    DOT National Transportation Integrated Search

    1971-07-01

    The report describes the Piper PA-30 'Twin Comanche' aircraft and a representative autopilot during the final approach configuration for simulation purposes. The aircraft is modeled by linearized six-degree-of-freedom perturbation equations reference...

  19. Validation of Western North America Models based on finite-frequency and ray theory imaging methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larmat, Carene; Maceira, Monica; Porritt, Robert W.

    2015-02-02

    We validate seismic models developed for western North America with a focus on the effect of imaging methods on data fit. We use the DNA09 models, for which our collaborators provide models built with both the body-wave finite-frequency (FF) approach and the ray-theory (RT) approach using the same data selection, processing, and reference models.

  20. Model-Free control performance improvement using virtual reference feedback tuning and reinforcement Q-learning

    NASA Astrophysics Data System (ADS)

    Radac, Mircea-Bogdan; Precup, Radu-Emil; Roman, Raul-Cristian

    2017-04-01

    This paper proposes the combination of two model-free controller tuning techniques, namely linear virtual reference feedback tuning (VRFT) and nonlinear state-feedback Q-learning, referred to as a new mixed VRFT-Q learning approach. VRFT is first used to find a stabilising feedback controller using input-output experimental data from the process in a model reference tracking setting. Reinforcement Q-learning is next applied in the same setting using input-state experimental data collected under perturbed VRFT to ensure good exploration. The Q-learning controller, learned with a batch fitted Q iteration algorithm, uses two neural networks, one for the Q-function estimator and one for the controller. The VRFT-Q learning approach is validated on position control of a two-degrees-of-motion open-loop stable multi-input multi-output (MIMO) aerodynamic system (AS). Extensive simulations for the two independent control channels of the MIMO AS show that the Q-learning controllers clearly improve performance over the VRFT controllers.

  1. Adaptive Control with Reference Model Modification

    NASA Technical Reports Server (NTRS)

    Stepanyan, Vahram; Krishnakumar, Kalmanje

    2012-01-01

    This paper presents a modification of the conventional model reference adaptive control (MRAC) architecture in order to improve the transient performance of the input and output signals of uncertain systems. A simple modification of the reference model is proposed by feeding back the tracking error signal. It is shown that the proposed approach guarantees tracking of the given reference command and of the reference control signal (the one that would be designed if the system were known) not only asymptotically but also in transient. Moreover, it prevents the generation of high frequency oscillations, which are unavoidable in conventional MRAC systems for large adaptation rates. The provided design guideline makes it possible to track reference commands of any magnitude from any initial position without re-tuning. The benefits of the method are demonstrated with a simulation example.
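
    The kind of reference-model modification described above can be sketched with a scalar MRAC simulation in which the reference model is augmented with tracking-error feedback. The plant parameters, adaptation rate, and the particular error-feedback gain below are illustrative choices, not the paper's exact formulation.

        import numpy as np

        def simulate_mrac(t_end=20.0, dt=1e-3, gamma=10.0, ell=5.0):
            """Scalar MRAC with an error-feedback-modified reference model.

            Plant:            xdot  = a*x + b*u              (a, b unknown to the controller)
            Reference model:  xmdot = am*xm + bm*r + ell*e   (ell*e is the modification)
            Control law:      u = kx*x + kr*r with adaptive gains kx, kr.
            """
            a, b = 1.0, 3.0          # "unknown" plant parameters (b > 0 assumed)
            am, bm = -4.0, 4.0       # desired reference-model dynamics
            x = xm = kx = kr = 0.0
            n = int(t_end / dt)
            hist = np.zeros((n, 2))
            for i in range(n):
                r = 1.0 if (i * dt) % 10.0 < 5.0 else -1.0   # square-wave command
                e = x - xm
                u = kx * x + kr * r
                # Euler integration of plant, modified reference model, adaptive laws
                x += dt * (a * x + b * u)
                xm += dt * (am * xm + bm * r + ell * e)
                kx += dt * (-gamma * e * x)
                kr += dt * (-gamma * e * r)
                hist[i] = (x, xm)
            return hist

        hist = simulate_mrac()
        print("final tracking error:", abs(hist[-1, 0] - hist[-1, 1]))

    With ell = 0 this reduces to conventional MRAC; a positive ell pulls the reference state toward the plant state, which is one way of damping the high-frequency transients mentioned in the abstract.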

  2. Simplified estimation of age-specific reference intervals for skewed data.

    PubMed

    Wright, E M; Royston, P

    1997-12-30

    Age-specific reference intervals are commonly used in medical screening and clinical practice, where interest lies in the detection of extreme values. Many different statistical approaches have been published on this topic. The advantages of a parametric method are that it necessarily produces smooth centile curves, the entire density is estimated, and an explicit formula is available for the centiles. The method described here is a simplified version of a recent approach proposed by Royston and Wright. Basic transformations of the data and multiple regression techniques are combined to model the mean, standard deviation and skewness. Using these simple tools, which are implemented in almost all statistical computer packages, age-specific reference intervals may be obtained. The scope of the method is illustrated by fitting models to several real data sets and assessing each model using goodness-of-fit techniques.
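
    A minimal version of the regression-based idea (not the exact Royston-Wright method) can be sketched as follows: model the mean as a polynomial in age, model the age-varying standard deviation from the absolute residuals, and form centiles as mean +/- z * SD at each age. The data below are synthetic and the polynomial orders are arbitrary.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(0)

        # Synthetic measurements whose mean and spread both change with age
        age = rng.uniform(16, 42, 500)
        y = 2.0 + 0.15 * age + (0.2 + 0.02 * age) * rng.standard_normal(age.size)

        # 1) Model the mean as a quadratic in age (ordinary least squares)
        X = np.column_stack([np.ones_like(age), age, age**2])
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        resid = y - X @ beta

        # 2) Model the SD by regressing absolute residuals on age and scaling by
        #    sqrt(pi/2), since E|r| = SD * sqrt(2/pi) for normal residuals
        delta = np.linalg.lstsq(X[:, :2], np.abs(resid), rcond=None)[0]
        sd_fit = (X[:, :2] @ delta) * np.sqrt(np.pi / 2)

        # 3) Age-specific 95% reference interval: mean +/- 1.96 * SD
        z = norm.ppf(0.975)
        lower, upper = X @ beta - z * sd_fit, X @ beta + z * sd_fit
        print(np.mean((y >= lower) & (y <= upper)))   # empirical coverage, ~0.95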

  3. Methods to estimate irrigated reference crop evapotranspiration - a review.

    PubMed

    Kumar, R; Jat, M K; Shankar, V

    2012-01-01

    Efficient water management of crops requires accurate irrigation scheduling which, in turn, requires accurate measurement of crop water requirements. Irrigation is applied to replenish depleted moisture for optimum plant growth. Reference evapotranspiration plays an important role in the determination of water requirements for crops and in irrigation scheduling. Various models/approaches, ranging from empirical to physically based distributed, are available for the estimation of reference evapotranspiration. Mathematical models are useful tools to estimate the evapotranspiration and water requirement of crops, which is essential information required to design or choose the best water management practices. In this paper the most commonly used models/approaches, which are suitable for the estimation of daily water requirements for agricultural crops grown in different agro-climatic regions, are reviewed. Further, an effort has been made to compare the accuracy of various widely used methods under different climatic conditions.
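
    For concreteness, one of the most widely used physically based formulations in this literature is the FAO-56 Penman-Monteith equation for daily grass reference evapotranspiration; the standard form is reproduced below for orientation (background material, not a result of this particular paper):

        ET_0 \;=\; \frac{0.408\,\Delta\,(R_n - G) \;+\; \gamma\,\dfrac{900}{T + 273}\,u_2\,(e_s - e_a)}{\Delta \;+\; \gamma\,(1 + 0.34\,u_2)}

    where ET_0 is in mm day^-1, Delta is the slope of the saturation vapour pressure curve (kPa deg C^-1), R_n the net radiation and G the soil heat flux (MJ m^-2 day^-1), gamma the psychrometric constant (kPa deg C^-1), T the mean daily air temperature at 2 m (deg C), u_2 the wind speed at 2 m (m s^-1), and e_s - e_a the vapour pressure deficit (kPa).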

  4. Observation-Oriented Modeling: Going beyond "Is It All a Matter of Chance"?

    ERIC Educational Resources Information Center

    Grice, James W.; Yepez, Maria; Wilson, Nicole L.; Shoda, Yuichi

    2017-01-01

    An alternative to null hypothesis significance testing is presented and discussed. This approach, referred to as observation-oriented modeling, is centered on model building in an effort to explicate the structures and processes believed to generate a set of observations. In terms of analysis, this novel approach complements traditional methods…

  5. Structural model constructing for optical handwritten character recognition

    NASA Astrophysics Data System (ADS)

    Khaustov, P. A.; Spitsyn, V. G.; Maksimova, E. I.

    2017-02-01

    The article is devoted to the development of algorithms for optical handwritten character recognition based on the construction of structural models. The main advantage of these algorithms is that they require only a small number of reference images. A one-pass approach to thinning of the binary character representation has been proposed, based on the joint use of the Zhang-Suen and Wu-Tsai algorithms. The effectiveness of the proposed approach is confirmed by the results of the experiments. The article includes a detailed description of the steps of the structural model construction algorithm. The proposed algorithm has been implemented in a character processing application and has been evaluated on the MNIST handwritten character database. Algorithms suitable for cases with a limited number of reference images were used for comparison.
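
    Since the thinning step builds on the classical Zhang-Suen algorithm, a compact reference implementation of that standard two-subiteration algorithm (not the authors' one-pass joint Zhang-Suen/Wu-Tsai variant) is sketched below for a 0/1 binary image stored as a NumPy array with a one-pixel background border.

        import numpy as np

        def zhang_suen_thinning(img):
            """Classical two-subiteration Zhang-Suen thinning of a 0/1 binary image."""
            img = img.copy().astype(np.uint8)
            changed = True
            while changed:
                changed = False
                for step in (0, 1):
                    to_delete = []
                    for i in range(1, img.shape[0] - 1):
                        for j in range(1, img.shape[1] - 1):
                            if img[i, j] != 1:
                                continue
                            # Neighbours P2..P9, clockwise starting from the pixel above
                            p = [img[i-1, j], img[i-1, j+1], img[i, j+1], img[i+1, j+1],
                                 img[i+1, j], img[i+1, j-1], img[i, j-1], img[i-1, j-1]]
                            b = int(sum(p))                                # nonzero neighbours
                            a = sum((p[k] == 0) and (p[(k + 1) % 8] == 1)  # 0 -> 1 transitions
                                    for k in range(8))
                            if step == 0:
                                cond = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                            else:
                                cond = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                            if 2 <= b <= 6 and a == 1 and cond:
                                to_delete.append((i, j))
                    for i, j in to_delete:
                        img[i, j] = 0
                    changed = changed or bool(to_delete)
            return img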

  6. Periodic reference tracking control approach for smart material actuators with complex hysteretic characteristics

    NASA Astrophysics Data System (ADS)

    Sun, Zhiyong; Hao, Lina; Song, Bo; Yang, Ruiguo; Cao, Ruimin; Cheng, Yu

    2016-10-01

    Micro/nano positioning technologies have been attractive for decades for their various applications in both industrial and scientific fields. The actuators employed in these technologies are typically smart material actuators, which possess inherent hysteresis that may cause systems to behave unexpectedly. Periodic reference tracking capability is fundamental for apparatuses such as scanning probe microscopes, which employ smart material actuators to generate periodic scanning motion. However, traditional controllers such as the PID method cannot guarantee accurate fast periodic scanning motion. To tackle this problem and to allow practical implementation in digital devices, this paper proposes a novel control method named the discrete extended unparallel Prandtl-Ishlinskii model based internal model (d-EUPI-IM) control approach. To tackle modeling uncertainties, the robust d-EUPI-IM control approach is investigated, and the associated sufficient stabilizing conditions are derived. The advantages of the proposed controller are: it is designed and represented in discrete form, and thus is practical for implementation in digital devices; the extended unparallel Prandtl-Ishlinskii model can precisely represent forward/inverse complex hysteretic characteristics, which reduces modeling uncertainties and benefits controller design; in addition, the internal model principle based control module can be utilized as a natural oscillator for tackling the periodic reference tracking problem. The proposed controller was verified through comparative experiments on a piezoelectric actuator platform, and convincing results have been achieved.
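
    The extended unparallel Prandtl-Ishlinskii model is not reproduced here, but the classical Prandtl-Ishlinskii construction it generalizes, a weighted superposition of play operators, can be sketched as follows; thresholds and weights are illustrative, not identified from a real actuator.

    ```python
    # Sketch of the classical Prandtl-Ishlinskii hysteresis model: a weighted
    # superposition of backlash (play) operators applied to the input signal.
    import numpy as np

    def play_operator(x, r, z0=0.0):
        """Play operator with threshold r applied to a 1D signal x."""
        z = np.empty_like(x)
        prev = z0
        for k, xk in enumerate(x):
            prev = max(xk - r, min(xk + r, prev))
            z[k] = prev
        return z

    def pi_hysteresis(x, thresholds, weights):
        return sum(w * play_operator(x, r) for w, r in zip(weights, thresholds))

    t = np.linspace(0, 4 * np.pi, 1000)
    u = np.sin(t)                         # periodic input, e.g. a scan waveform
    y = pi_hysteresis(u, thresholds=[0.0, 0.1, 0.2, 0.3],
                      weights=[0.5, 0.3, 0.15, 0.05])
    ```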

  7. Assessing the Moral Coherence and Moral Robustness of Social Systems: Proof of Concept for a Graphical Models Approach.

    PubMed

    Hoss, Frauke; London, Alex John

    2016-12-01

    This paper presents a proof of concept for a graphical models approach to assessing the moral coherence and moral robustness of systems of social interactions. "Moral coherence" refers to the degree to which the rights and duties of agents within a system are effectively respected when agents in the system comply with the rights and duties that are recognized as in force for the relevant context of interaction. "Moral robustness" refers to the degree to which a system of social interaction is configured to ensure that the interests of agents are effectively respected even in the face of noncompliance. Using the case of conscientious objection of pharmacists to filling prescriptions for emergency contraception as an example, we illustrate how a graphical models approach can help stakeholders identify structural weaknesses in systems of social interaction and evaluate the relative merits of alternate organizational structures. By illustrating the merits of a graphical models approach we hope to spur further developments in this area.

  8. Performance Optimizing Adaptive Control with Time-Varying Reference Model Modification

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Hashemi, Kelley E.

    2017-01-01

    This paper presents a new adaptive control approach that involves a performance optimization objective. The control synthesis involves the design of a performance optimizing adaptive controller from a subset of control inputs. The resulting effect of the performance optimizing adaptive controller is to modify the initial reference model into a time-varying reference model which satisfies the performance optimization requirement obtained from an optimal control problem. The time-varying reference model modification is accomplished by the real-time solutions of the time-varying Riccati and Sylvester equations coupled with the least-squares parameter estimation of the sensitivities of the performance metric. The effectiveness of the proposed method is demonstrated by an application of maneuver load alleviation control for a flexible aircraft.
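
    The reference model modification above relies on online solutions of Riccati and Sylvester equations. As a rough stand-in for that computation, the sketch below solves a steady-state continuous algebraic Riccati equation with SciPy for illustrative matrices; the paper's time-varying equations and performance-metric estimation are not reproduced.

    ```python
    # Illustrative steady-state Riccati solve (a stand-in for the paper's
    # time-varying Riccati computation). Matrices are assumptions.
    import numpy as np
    from scipy.linalg import solve_continuous_are

    A = np.array([[0.0, 1.0], [-1.0, -0.5]])
    B = np.array([[0.0], [1.0]])
    Q = np.eye(2)
    R = np.array([[1.0]])

    P = solve_continuous_are(A, B, Q, R)     # solves A'P + PA - PBR^-1B'P + Q = 0
    K = np.linalg.solve(R, B.T @ P)          # corresponding state-feedback gain
    print(K)
    ```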

  9. Robust model reference adaptive output feedback tracking for uncertain linear systems with actuator fault based on reinforced dead-zone modification.

    PubMed

    Bagherpoor, H M; Salmasi, Farzad R

    2015-07-01

    In this paper, robust model reference adaptive tracking controllers are considered for Single-Input Single-Output (SISO) and Multi-Input Multi-Output (MIMO) linear systems containing modeling uncertainties, unknown additive disturbances and actuator faults. Two new lemmas are proposed for both the SISO and MIMO cases, under which the dead-zone modification rule is improved such that the tracking error for any reference signal tends to zero in such systems. In the conventional approach, adaptation of the controller parameters ceases inside the dead-zone region, which results in a tracking error while preserving system stability. In the proposed scheme, the control signal is reinforced with an additive term based on the tracking error inside the dead-zone, which results in full reference tracking. In addition, no Fault Detection and Diagnosis (FDD) unit is needed in the proposed approach. Closed-loop system stability and zero tracking error are proved by considering a suitable Lyapunov function candidate. It is shown that the proposed control approach can assure that all the signals of the closed-loop system are bounded in faulty conditions. Finally, the validity and performance of the new schemes have been illustrated through numerical simulations of SISO and MIMO systems in the presence of actuator faults, modeling uncertainty and output disturbance. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  10. Analysis and comparison of NoSQL databases with an introduction to consistent references in big data storage systems

    NASA Astrophysics Data System (ADS)

    Dziedzic, Adam; Mulawka, Jan

    2014-11-01

    NoSQL is a new approach to data storage and manipulation. The aim of this paper is to gain more insight into NoSQL databases, as we are still in the early stages of understanding when to use them and how to use them in an appropriate way. In this submission descriptions of selected NoSQL databases are presented. Each of the databases is analysed with primary focus on its data model, data access, architecture and practical usage in real applications. Furthermore, the NoSQL databases are compared with respect to data references: relational databases offer foreign keys, whereas NoSQL databases provide only limited reference support. An intermediate model between graph theory and relational algebra which can address the problem should be created. Finally, a proposal of a new approach to the problem of inconsistent references in Big Data storage systems is introduced.

  11. Teaching Writing within the Common European Framework of Reference (CEFR): A Supplement Asynchronous Blended Learning Approach in an EFL Undergraduate Course in Egypt

    ERIC Educational Resources Information Center

    Shaarawy, Hanaa Youssef; Lotfy, Nohayer Esmat

    2013-01-01

    Based on the Common European Framework of Reference (CEFR) and following a blended learning approach (a supplement model), this article reports on a quasi-experiment where writing was taught evenly with other language skills in everyday language contexts and where asynchronous online activities were required from students to extend learning beyond…

  12. Managed Development Environment Successes for MSFC's VIPA Team

    NASA Technical Reports Server (NTRS)

    Finckenor, Jeff; Corder, Gary; Owens, James; Meehan, Jim; Tidwell, Paul H.

    2005-01-01

    This paper outlines the best practices of the Vehicle Design Team for VIPA. The functions of the VIPA Vehicle Design (VVD) discipline team are to maintain the controlled reference geometry and provide linked, simplified geometry for each of the other discipline analyses. The core of the VVD work, and the approach for VVD's first task of controlling the reference geometry, involves systems engineering, top-down, layout-based CAD modeling within a Product Data Manager (PDM) development environment. The top-down approach allows for simple control of very large, integrated assemblies and greatly enhances the ability to generate trade configurations and reuse data. The second VVD task, model simplification for analysis, is handled within the managed environment through application of the master model concept. In this approach, there is a single controlling, or master, product definition dataset. Connected to this master model are reference datasets with live geometric and expression links. The referenced models can be for drawings, manufacturing, visualization, embedded analysis, or analysis simplification. A discussion of web based interaction, including visualization, between the design and other disciplines is included. Demonstrated examples are cited, including the Space Launch Initiative development cycle, the Saturn V systems integration and verification cycle, an Orbital Space Plane study, and NASA Exploration Office studies of Shuttle derived and clean sheet launch vehicles. The VIPA Team has brought an immense amount of detailed data to bear on program issues. A central piece of that success has been the Managed Development Environment and the VVD Team approach to modeling.

  13. Comparison of Kinetic Models for Dual-Tracer Receptor Concentration Imaging in Tumors

    PubMed Central

    Hamzei, Nazanin; Samkoe, Kimberley S; Elliott, Jonathan T; Holt, Robert W; Gunn, Jason R; Hasan, Tayyaba; Pogue, Brian W; Tichauer, Kenneth M

    2014-01-01

    Molecular differences between cancerous and healthy tissue have become key targets for novel therapeutics specific to tumor receptors. However, cancer cell receptor expression can vary within and amongst different tumors, making strategies that can quantify receptor concentration in vivo critical for the progression of targeted therapies. Recently a dual-tracer imaging approach capable of providing quantitative measures of receptor concentration in vivo was developed. It relies on the simultaneous injection and imaging of receptor-targeted tracer and an untargeted tracer (to account for non-specific uptake of the targeted tracer). Early implementations of this approach have been structured on existing “reference tissue” imaging methods that have not been optimized for or validated in dual-tracer imaging. Using simulations and mouse tumor model experimental data, the salient findings in this study were that all widely used reference tissue kinetic models can be used for dual-tracer imaging, with the linearized simplified reference tissue model offering a good balance of accuracy and computational efficiency. Moreover, an alternate version of the full two-compartment reference tissue model can be employed accurately by assuming that the K1s of the targeted and untargeted tracers are similar to avoid assuming an instantaneous equilibrium between bound and free states (made by all other models). PMID:25414912

  14. A Bayesian framework for adaptive selection, calibration, and validation of coarse-grained models of atomistic systems

    NASA Astrophysics Data System (ADS)

    Farrell, Kathryn; Oden, J. Tinsley; Faghihi, Danial

    2015-08-01

    A general adaptive modeling algorithm for selection and validation of coarse-grained models of atomistic systems is presented. A Bayesian framework is developed to address uncertainties in parameters, data, and model selection. Algorithms for computing output sensitivities to parameter variances, model evidence and posterior model plausibilities for given data, and for computing what are referred to as Occam Categories in reference to a rough measure of model simplicity, make up components of the overall approach. Computational results are provided for representative applications.

  15. H∞ output tracking control of discrete-time nonlinear systems via standard neural network models.

    PubMed

    Liu, Meiqin; Zhang, Senlin; Chen, Haiyang; Sheng, Weihua

    2014-10-01

    This brief proposes an output tracking control for a class of discrete-time nonlinear systems with disturbances. A standard neural network model is used to represent discrete-time nonlinear systems whose nonlinearity satisfies the sector conditions. The H∞ control performance of the closed-loop system, including the standard neural network model, the reference model, and the state feedback controller, is analyzed using the Lyapunov-Krasovskii stability theorem and the linear matrix inequality (LMI) approach. The H∞ controller, whose parameters are obtained by solving LMIs, guarantees that the output of the closed-loop system closely tracks the output of a given reference model and reduces the influence of disturbances on the tracking error. Three numerical examples are provided to show the effectiveness of the proposed H∞ output tracking design approach.
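
    The stability conditions above are posed as LMIs; as a simplified stand-in (not the paper's H∞ conditions), the sketch below verifies stability of a toy linear closed-loop model by solving the corresponding Lyapunov equation with SciPy and checking positive definiteness.

    ```python
    # Toy Lyapunov-based stability check (stand-in for an LMI feasibility test):
    # find P solving A'P + PA = -Q and confirm P > 0. The matrix A is assumed.
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])                 # illustrative closed-loop matrix
    Q = np.eye(2)
    P = solve_continuous_lyapunov(A.T, -Q)       # solves A'P + PA = -Q
    stable = np.all(np.linalg.eigvalsh(P) > 0)
    print(stable, P)
    ```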

  16. Modeling Educational Content: The Cognitive Approach of the PALO Language

    ERIC Educational Resources Information Center

    Rodriguez-Artacho, Miguel; Verdejo Maillo, M. Felisa

    2004-01-01

    This paper presents a reference framework to describe educational material. It introduces the PALO Language as a cognitive based approach to Educational Modeling Languages (EML). In accordance with recent trends for reusability and interoperability in Learning Technologies, EML constitutes an evolution of the current content-centered…

  17. Improvement of radiology services based on the process management approach.

    PubMed

    Amaral, Creusa Sayuri Tahara; Rozenfeld, Henrique; Costa, Janaina Mascarenhas Hornos; Magon, Maria de Fátima de Andrade; Mascarenhas, Yvone Maria

    2011-06-01

    The health sector requires continuous investments to ensure the improvement of products and services from a technological standpoint, the use of new materials, equipment and tools, and the application of process management methods. Methods associated with the process management approach, such as the development of reference models of business processes, can provide significant innovations in the health sector and respond to the current market trend for modern management in this sector (Gunderman et al. (2008) [4]). This article proposes a process model for diagnostic medical X-ray imaging, from which it derives a primary reference model and describes how this information leads to gains in quality and improvements. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  18. Expert Systems for Libraries at SCIL [Small Computers in Libraries]'88.

    ERIC Educational Resources Information Center

    Kochtanek, Thomas R.; And Others

    1988-01-01

    Six brief papers on expert systems for libraries cover (1) a knowledge-based approach to database design; (2) getting started in expert systems; (3) using public domain software to develop a business reference system; (4) a music cataloging inquiry system; (5) linguistic analysis of reference transactions; and (6) a model of a reference librarian.…

  19. Quality assessment for color reproduction using a blind metric

    NASA Astrophysics Data System (ADS)

    Bringier, B.; Quintard, L.; Larabi, M.-C.

    2007-01-01

    This paper deals with image quality assessment. This field now plays an important role in various image processing applications. A number of objective image quality metrics, which correlate to varying degrees with subjective quality, have been developed during the last decade. Two categories of metrics can be distinguished: full-reference and no-reference. A full-reference metric tries to evaluate the distortion introduced to an image with regard to a reference. A no-reference approach attempts to model the judgment of image quality in a blind way. Unfortunately, a universal image quality model is not on the horizon, and empirical models established on psychophysical experimentation are generally used. In this paper, we focus only on the second category to evaluate the quality of color reproduction, and a blind metric based on human visual system modeling is introduced. The objective results are validated by single-media and cross-media subjective tests.

  20. Sparse Event Modeling with Hierarchical Bayesian Kernel Methods

    DTIC Science & Technology

    2016-01-05

    The research objective of this proposal was to develop a predictive Bayesian kernel approach to model count data based on several predictive variables. Such an approach, which we refer to as the Poisson Bayesian kernel model, is able to model the rate of occurrence of...which adds specificity to the model and can make nonlinear data more manageable. Early results show that the...

  1. Using eddy covariance and flux partitioning to assess basal, soil, and stress coefficients for crop evapotranspiration models

    USDA-ARS?s Scientific Manuscript database

    Current approaches to scheduling crop irrigation using reference evapotranspiration (ET0) recommend a dual-coefficient approach that combines basal (Kcb) and soil (Ke) coefficients with a stress coefficient (Ks) to model crop evapotranspiration (ETc), [e.g. ETc=(Ks*Kcb+Ke)*ET0]. However, indepe...
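
    A tiny worked example of the quoted dual-coefficient form, with illustrative coefficient values, is:

    ```python
    # Worked example of ETc = (Ks*Kcb + Ke) * ET0; coefficient values are illustrative.
    def crop_et(et0_mm_day: float, kcb: float, ke: float, ks: float = 1.0) -> float:
        return (ks * kcb + ke) * et0_mm_day

    print(crop_et(et0_mm_day=6.0, kcb=1.05, ke=0.10, ks=0.9))  # mm/day
    ```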

  2. Frequency analysis of a two-stage planetary gearbox using two different methodologies

    NASA Astrophysics Data System (ADS)

    Feki, Nabih; Karray, Maha; Khabou, Mohamed Tawfik; Chaari, Fakher; Haddar, Mohamed

    2017-12-01

    This paper is focused on the characterization of the frequency content of vibration signals issued from a two-stage planetary gearbox. To achieve this goal, two different methodologies are adopted: the lumped-parameter modeling approach and the phenomenological modeling approach. The two methodologies aim to describe the complex vibrations generated by a two-stage planetary gearbox. The phenomenological model describes directly the vibrations as measured by a sensor fixed outside the fixed ring gear with respect to an inertial reference frame, while results from a lumped-parameter model are referenced with respect to a rotating frame and then transferred into an inertial reference frame. Two different case studies of the two-stage planetary gear are adopted to describe the vibration and the corresponding spectra using both models. Each case presents a specific geometry and a specific spectral structure.

  3. Multi-model inference for incorporating trophic and climate uncertainty into stock assessments

    NASA Astrophysics Data System (ADS)

    Ianelli, James; Holsman, Kirstin K.; Punt, André E.; Aydin, Kerim

    2016-12-01

    Ecosystem-based fisheries management (EBFM) approaches allow a broader and more extensive consideration of objectives than is typically possible with conventional single-species approaches. Ecosystem linkages may include trophic interactions and climate change effects on productivity for the relevant species within the system. Presently, models are evolving to include a comprehensive set of fishery and ecosystem information to address these broader management considerations. The increased scope of EBFM approaches is accompanied with a greater number of plausible models to describe the systems. This can lead to harvest recommendations and biological reference points that differ considerably among models. Model selection for projections (and specific catch recommendations) often occurs through a process that tends to adopt familiar, often simpler, models without considering those that incorporate more complex ecosystem information. Multi-model inference provides a framework that resolves this dilemma by providing a means of including information from alternative, often divergent models to inform biological reference points and possible catch consequences. We apply an example of this approach to data for three species of groundfish in the Bering Sea: walleye pollock, Pacific cod, and arrowtooth flounder using three models: 1) an age-structured "conventional" single-species model, 2) an age-structured single-species model with temperature-specific weight at age, and 3) a temperature-specific multi-species stock assessment model. The latter two approaches also include consideration of alternative future climate scenarios, adding another dimension to evaluate model projection uncertainty. We show how Bayesian model-averaging methods can be used to incorporate such trophic and climate information to broaden single-species stock assessments by using an EBFM approach that may better characterize uncertainty.
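
    A minimal sketch of the Bayesian model-averaging step described above is given below; the model weights are formed from (approximate) marginal likelihoods, and the numbers are illustrative rather than taken from the Bering Sea assessments.

    ```python
    # Sketch of Bayesian model averaging across alternative assessment models:
    # posterior model weights from (approximate) marginal likelihoods are used
    # to average a biological reference point. All numbers are illustrative.
    import numpy as np

    log_marginal_lik = np.array([-1052.3, -1050.1, -1049.6])   # one per model (assumed)
    prior = np.array([1 / 3, 1 / 3, 1 / 3])

    logw = log_marginal_lik + np.log(prior)
    weights = np.exp(logw - logw.max())
    weights /= weights.sum()                  # posterior model probabilities

    ref_point_by_model = np.array([410.0, 455.0, 392.0])   # e.g. a reference biomass (kt)
    ref_point_averaged = float(np.dot(weights, ref_point_by_model))
    print(weights, ref_point_averaged)
    ```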

  4. Oil and Gas Supply Module - NEMS Documentation

    EIA Publications

    2017-01-01

    This report defines the objectives of the Oil and Gas Supply Model (OGSM), describes the model's basic approach, and provides detail on how the model works. It is intended as a reference document for model analysts, users, and the public.

  5. Linear time-dependent reference intervals where there is measurement error in the time variable-a parametric approach.

    PubMed

    Gillard, Jonathan

    2015-12-01

    This article re-examines parametric methods for the calculation of time-specific reference intervals where there is measurement error present in the time covariate. Previously published work has commonly been based on the standard ordinary least squares approach, weighted where appropriate. In fact, this is an incorrect method when measurement errors are present, and in this article we show that the use of this approach may, in certain cases, lead to referral patterns that vary with different values of the covariate. Thus, it would not be the case that all patients are treated equally; some subjects would be more likely to be referred than others, hence violating the principle of equal treatment required by the International Federation for Clinical Chemistry. We show, by using measurement error models, that reference intervals are produced that satisfy the requirement of equal treatment for all subjects. © The Author(s) 2011.

  6. Determination of reference ranges for elements in human scalp hair.

    PubMed

    Druyan, M E; Bass, D; Puchyr, R; Urek, K; Quig, D; Harmon, E; Marquardt, W

    1998-06-01

    Expected values, reference ranges, or reference limits are necessary to enable clinicians to apply analytical chemical data in the delivery of health care. Determination of reference ranges is not straightforward in terms of either selecting a reference population or performing statistical analysis. In light of logistical, scientific, and economic obstacles, it is understandable that clinical laboratories often combine approaches in developing health-associated reference values. A laboratory may choose to: 1. validate the reference ranges of other laboratories or published data from clinical research, or both, through comparison with patients' test data; 2. base the laboratory's reference values on statistical analysis of results from specimens assayed by the clinical reference laboratory itself; 3. adopt standards or recommendations of regulatory agencies and governmental bodies; 4. initiate population studies to validate transferred reference ranges or to determine them anew. Effects of external contamination and anecdotal information from clinicians may also be considered. The clinical utility of hair analysis is well accepted for some elements. For others, it remains in the realm of clinical investigation. This article elucidates an approach for the establishment of reference ranges for elements in human scalp hair. Observed levels of analytes from hair specimens from both our laboratory's total patient population and from a physician-defined healthy American population have been evaluated. Examination of levels of elements often associated with toxicity serves to exemplify the process of determining reference ranges in hair. In addition, the approach serves as a model for setting reference ranges for analytes in a variety of matrices.

  7. Osculating Relative Orbit Elements Resulting from Chief Eccentricity and J2 Perturbing Forces

    DTIC Science & Technology

    2011-03-01

    significant importance to the analytical investigation in this study and is described in depth in Section 3.1.1. There do exist approaches to mapping the...necessary to introduce the environment which the majority of models describe. 2.2.1 Inertial Reference Frame. A geocentric reference frame will be used for...closest approach, modifying the period and minima locations of the radial and in-track components. This change impacts the periodicity of the radial

  8. Quantifying Overdiagnosis in Cancer Screening: A Systematic Review to Evaluate the Methodology.

    PubMed

    Ripping, Theodora M; Ten Haaf, Kevin; Verbeek, André L M; van Ravesteyn, Nicolien T; Broeders, Mireille J M

    2017-10-01

    Overdiagnosis is the main harm of cancer screening programs but is difficult to quantify. This review aims to evaluate existing approaches to estimate the magnitude of overdiagnosis in cancer screening in order to gain insight into the strengths and limitations of these approaches and to provide researchers with guidance to obtain reliable estimates of overdiagnosis in cancer screening. A systematic review was done of primary research studies in PubMed that were published before January 1, 2016, and quantified overdiagnosis in breast cancer screening. The studies meeting inclusion criteria were then categorized by their methods to adjust for lead time and to obtain an unscreened reference population. For each approach, we provide an overview of the data required, assumptions made, limitations, and strengths. A total of 442 studies were identified in the initial search. Forty studies met the inclusion criteria for the qualitative review. We grouped the approaches to adjust for lead time in two main categories: the lead time approach and the excess incidence approach. The lead time approach was further subdivided into the mean lead time approach, lead time distribution approach, and natural history modeling. The excess incidence approach was subdivided into the cumulative incidence approach and early vs late-stage cancer approach. The approaches used to obtain an unscreened reference population were grouped into the following categories: control group of a randomized controlled trial, nonattenders, control region, extrapolation of a prescreening trend, uninvited groups, adjustment for the effect of screening, and natural history modeling. Each approach to adjust for lead time and obtain an unscreened reference population has its own strengths and limitations, which should be taken into consideration when estimating overdiagnosis. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  9. Three Methods of Estimating a Model of Group Effects: A Comparison with Reference to School Effect Studies.

    ERIC Educational Resources Information Center

    Igra, Amnon

    1980-01-01

    Three methods of estimating a model of school effects are compared: ordinary least squares; an approach based on the analysis of covariance; and a residualized input-output approach. Results are presented using a matrix algebra formulation, and advantages of the first two methods are considered. (Author/GK)

  10. Intercomparison Of Approaches For Modeling Second Order Ionospheric Corrections Using Gnss Measurements

    NASA Astrophysics Data System (ADS)

    Garcia Fernandez, M.; Butala, M.; Komjathy, A.; Desai, S. D.

    2012-12-01

    Correcting GNSS tracking data for second-order ionospheric effects has been shown to cause a southward shift in GNSS-based precise point positioning solutions by as much as 10 mm, depending on the solar cycle conditions. The most commonly used approaches for modeling the higher order ionospheric effect include (a) the use of global ionosphere maps to determine vertical total electron content (VTEC) and convert to slant TEC (STEC) assuming a thin shell ionosphere, and (b) using the dual-frequency measurements themselves to determine STEC. The latter approach benefits from not requiring ionospheric mapping functions between VTEC and STEC. However, it requires calibrations with receiver and transmitter Differential Code Biases (DCBs). We present results from comparisons of the two approaches. For the first approach, we also compare the use of VTEC observations from IONEX maps against climatological model-derived VTEC as provided by the International Reference Ionosphere (IRI2012). We consider various metrics to evaluate the relative performance of the different approaches, including station repeatability, GNSS-based reference frame recovery, and post-fit measurement residuals. Overall, the GIM-based approaches tend to provide lower noise in the second-order ionosphere corrections and positioning solutions. The use of the IONEX and IRI2012 models of VTEC provides similar results, especially in periods of low solar activity. The use of the IRI2012 model provides a convenient approach for operational scenarios by eliminating the dependence on routine updates of the GIMs, and also serves as a useful source of VTEC when IONEX maps may not be readily available.
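
    For approach (a), converting GIM-derived VTEC to slant TEC uses a thin-shell obliquity factor; a hedged sketch of that standard mapping (shell height assumed) is:

    ```python
    # Thin-shell mapping from vertical to slant TEC: STEC = VTEC / cos(z'),
    # where z' is the zenith angle at the ionospheric pierce point for an
    # assumed shell height. Input values are illustrative.
    import numpy as np

    def vtec_to_stec(vtec_tecu: float, elev_deg: float,
                     shell_height_km: float = 450.0,
                     earth_radius_km: float = 6371.0) -> float:
        z = np.radians(90.0 - elev_deg)          # zenith angle at the receiver
        sin_zp = earth_radius_km / (earth_radius_km + shell_height_km) * np.sin(z)
        return vtec_tecu / np.sqrt(1.0 - sin_zp ** 2)   # obliquity factor * VTEC

    print(vtec_to_stec(vtec_tecu=25.0, elev_deg=30.0))
    ```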

  11. Use of Two-Part Regression Calibration Model to Correct for Measurement Error in Episodically Consumed Foods in a Single-Replicate Study Design: EPIC Case Study

    PubMed Central

    Agogo, George O.; van der Voet, Hilko; Veer, Pieter van’t; Ferrari, Pietro; Leenders, Max; Muller, David C.; Sánchez-Cantalejo, Emilio; Bamia, Christina; Braaten, Tonje; Knüppel, Sven; Johansson, Ingegerd; van Eeuwijk, Fred A.; Boshuizen, Hendriek

    2014-01-01

    In epidemiologic studies, measurement error in dietary variables often attenuates the association between dietary intake and disease occurrence. To adjust for the attenuation caused by error in dietary intake, regression calibration is commonly used. To apply regression calibration, unbiased reference measurements are required. Short-term reference measurements for foods that are not consumed daily contain excess zeroes that pose challenges in the calibration model. We adapted the two-part regression calibration model, initially developed for multiple replicates of reference measurements per individual, to a single-replicate setting. We showed how to handle excess zero reference measurements with a two-step modeling approach, how to explore heteroscedasticity in the consumed amount with a variance-mean graph, how to explore nonlinearity with the generalized additive modeling (GAM) and empirical logit approaches, and how to select covariates in the calibration model. The performance of the two-part calibration model was compared with its one-part counterpart. We used vegetable intake and mortality data from the European Prospective Investigation into Cancer and Nutrition (EPIC) study. In EPIC, reference measurements were taken with 24-hour recalls. For each of the three vegetable subgroups assessed separately, correcting for error with an appropriately specified two-part calibration model resulted in about a threefold increase in the strength of association with all-cause mortality, as measured by the log hazard ratio. We further found that the standard way of including covariates in the calibration model can lead to overfitting of the two-part calibration model. Moreover, the extent of adjusting for error is influenced by the number and forms of covariates in the calibration model. For episodically consumed foods, we advise researchers to pay special attention to response distribution, nonlinearity, and covariate inclusion in specifying the calibration model. PMID:25402487
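
    A schematic of the two-part idea for an episodically consumed food is sketched below: a logistic part for the probability of non-zero intake and a linear part for the (log) amount given consumption, multiplied to give the calibrated intake. The data are simulated and the covariate handling is far simpler than in the paper.

    ```python
    # Two-part sketch: part 1 models P(intake > 0), part 2 models the amount
    # given consumption on the log scale; calibrated intake is their product.
    import numpy as np
    from sklearn.linear_model import LogisticRegression, LinearRegression

    rng = np.random.default_rng(1)
    n = 2000
    ffq = rng.gamma(2.0, 50.0, n)                    # questionnaire intake (covariate)
    consumed = rng.random(n) < 1 / (1 + np.exp(-(ffq - 100) / 40))
    amount = np.where(consumed, np.exp(3 + 0.004 * ffq + rng.normal(0, 0.5, n)), 0.0)

    X = ffq.reshape(-1, 1)
    part1 = LogisticRegression().fit(X, consumed)            # P(intake > 0 | covariate)
    pos = amount > 0
    part2 = LinearRegression().fit(X[pos], np.log(amount[pos]))  # E[log amount | > 0]

    sigma2 = np.var(np.log(amount[pos]) - part2.predict(X[pos]))
    p = part1.predict_proba(X)[:, 1]
    expected_amount = np.exp(part2.predict(X) + sigma2 / 2)  # lognormal back-transform
    calibrated_intake = p * expected_amount
    ```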

  12. Estimating Causal Effects in Mediation Analysis Using Propensity Scores

    ERIC Educational Resources Information Center

    Coffman, Donna L.

    2011-01-01

    Mediation is usually assessed by a regression-based or structural equation modeling (SEM) approach that we refer to as the classical approach. This approach relies on the assumption that there are no confounders that influence both the mediator, "M", and the outcome, "Y". This assumption holds if individuals are randomly…

  13. Continuum-Kinetic Models and Numerical Methods for Multiphase Applications

    NASA Astrophysics Data System (ADS)

    Nault, Isaac Michael

    This thesis presents a continuum-kinetic approach for modeling general problems in multiphase solid mechanics. In this context, a continuum model refers to any model, typically on the macro-scale, in which continuous state variables are used to capture the most important physics: conservation of mass, momentum, and energy. A kinetic model refers to any model, typically on the meso-scale, which captures the statistical motion and evolution of microscopic entities. Multiphase phenomena usually involve non-negligible micro- or meso-scopic effects at the interfaces between phases. The approach developed in the thesis attempts to combine the computational performance benefits of a continuum model with the physical accuracy of a kinetic model when applied to a multiphase problem. The approach is applied to modeling a single particle impact in Cold Spray, an engineering process that intimately involves the interaction of crystal grains with high-magnitude elastic waves. Such a situation can be classified as a multiphase application due to the discrete nature of grains on the spatial scale of the problem. For this application, a hyper-elastoplastic model is solved by a finite volume method with an approximate Riemann solver. The results of this model are compared for two types of plastic closure: a phenomenological macro-scale constitutive law, and a physics-based meso-scale Crystal Plasticity model.

  14. A Regression-Based Family of Measures for Full-Reference Image Quality Assessment

    NASA Astrophysics Data System (ADS)

    Oszust, Mariusz

    2016-12-01

    The advances in the development of imaging devices have resulted in the need for automatic quality evaluation of displayed visual content in a way that is consistent with human visual perception. In this paper, an approach to full-reference image quality assessment (IQA) is proposed, in which several IQA measures, representing different approaches to modelling human visual perception, are efficiently combined in order to produce an objective quality evaluation of examined images that is highly correlated with the evaluation provided by human subjects. In the paper, an optimisation problem of selecting several IQA measures for creating a regression-based IQA hybrid measure, or a multimeasure, is defined and solved using a genetic algorithm. Experimental evaluation on the four largest IQA benchmarks reveals that the multimeasures obtained using the proposed approach outperform state-of-the-art full-reference IQA techniques, including other recently developed fusion approaches.

  15. 0-6759 : developing a business process and logical model to support a tour-based travel demand model design for TxDOT.

    DOT National Transportation Integrated Search

    2013-08-01

    The Texas Department of Transportation (TxDOT) created a standardized trip-based modeling approach for travel demand modeling called the Texas Package Suite of Travel Demand Models (referred to as the Texas Package) to oversee the travel de...

  16. A reference model for model-based design of critical infrastructure protection systems

    NASA Astrophysics Data System (ADS)

    Shin, Young Don; Park, Cheol Young; Lee, Jae-Chon

    2015-05-01

    Today's battlefield environment is becoming more diverse, as unconventional warfare activities such as terrorist attacks and cyber-attacks have noticeably increased lately. The damage caused by such unconventional warfare has also turned out to be serious, particularly when the targets are critical infrastructures constructed in support of banking and finance, transportation, power, information and communication, government, and so on. Critical infrastructures are usually interconnected and thus very vulnerable to attack. Ensuring the security of critical infrastructures is therefore very important, and thus the concept of critical infrastructure protection (CIP) has emerged. Programs to realize CIP at the national level have taken the form of statutes in many countries. On the other hand, each individual critical infrastructure also needs to be protected. The objective of this paper is to study an effort to do so, which can be called the CIP system (CIPS). There could be a variety of ways to design CIPS's. Instead of considering the design of each individual CIPS, a reference model-based approach is taken in this paper. The reference model represents the design of all the CIPS's that have many design elements in common. In addition, the development of the reference model is carried out using a variety of model diagrams. The modeling language used therein is the Systems Modeling Language (SysML), which was developed and is managed by the Object Management Group (OMG) and is a de facto standard. Using SysML, the structure and operational concept of the reference model are designed to fulfil the goal of CIPS's, resulting in block definition and activity diagrams. As a case study, the operational scenario of a nuclear power plant under terrorist attack is studied using the reference model. The effectiveness of the results is also analyzed using multiple analysis models. It is thus expected that the approach taken here has some merits over the traditional design methodology of repeating requirements analysis and system design.

  17. Exploring behavior of an unusual megaherbivore: A spatially explicit foraging model of the hippopotamus

    USGS Publications Warehouse

    Lewison, R.L.; Carter, J.

    2004-01-01

    Herbivore foraging theories have been developed for and tested on herbivores across a range of sizes. Due to logistical constraints, however, little research has focused on foraging behavior of megaherbivores. Here we present a research approach that explores megaherbivore foraging behavior, and assesses the applicability of foraging theories developed on smaller herbivores to megafauna. With simulation models as reference points for the analysis of empirical data, we investigate foraging strategies of the common hippopotamus (Hippopotamus amphibius). Using a spatially explicit individual based foraging model, we apply traditional herbivore foraging strategies to a model hippopotamus, compare model output, and then relate these results to field data from wild hippopotami. Hippopotami appear to employ foraging strategies that respond to vegetation characteristics, such as vegetation quality, as well as spatial reference information, namely distance to a water source. Model predictions, field observations, and comparisons of the two support that hippopotami generally conform to the central place foraging construct. These analyses point to the applicability of general herbivore foraging concepts to megaherbivores, but also point to important differences between hippopotami and other herbivores. Our synergistic approach of models as reference points for empirical data highlights a useful method of behavioral analysis for hard-to-study megafauna. © 2003 Elsevier B.V. All rights reserved.

  18. The Cannon: A data-driven approach to Stellar Label Determination

    NASA Astrophysics Data System (ADS)

    Ness, M.; Hogg, David W.; Rix, H.-W.; Ho, Anna. Y. Q.; Zasowski, G.

    2015-07-01

    New spectroscopic surveys offer the promise of stellar parameters and abundances (“stellar labels”) for hundreds of thousands of stars; this poses a formidable spectral modeling challenge. In many cases, there is a subset of reference objects for which the stellar labels are known with high(er) fidelity. We take advantage of this with The Cannon, a new data-driven approach for determining stellar labels from spectroscopic data. The Cannon learns from the “known” labels of reference stars how the continuum-normalized spectra depend on these labels by fitting a flexible model at each wavelength; then, The Cannon uses this model to derive labels for the remaining survey stars. We illustrate The Cannon by training the model on only 542 stars in 19 clusters as reference objects, with Teff, log g, and [Fe/H] as the labels, and then applying it to the spectra of 55,000 stars from APOGEE DR10. The Cannon is very accurate. Its stellar labels compare well to the stars for which APOGEE pipeline (ASPCAP) labels are provided in DR10, with rms differences that are basically identical to the stated ASPCAP uncertainties. Beyond the reference labels, The Cannon makes no use of stellar models nor any line-list, but needs a set of reference objects that span label-space. The Cannon performs well at lower signal-to-noise, as it delivers comparably good labels even at one-ninth the APOGEE observing time. We discuss the limitations of The Cannon and its future potential, particularly, to bring different spectroscopic surveys onto a consistent scale of stellar labels.
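
    A schematic of the training step described above, a quadratic-in-labels model fit independently at every wavelength pixel, might look as follows; the optimized scatter terms and the label-inference (test) step of The Cannon are omitted.

    ```python
    # Sketch of a data-driven training step: at each wavelength pixel the
    # normalized flux of reference stars is fit as a quadratic function of
    # their labels. This is a simplified stand-in, not The Cannon's code.
    import numpy as np
    from itertools import combinations_with_replacement

    def design_matrix(labels):
        """labels: (n_stars, n_labels) -> quadratic-in-labels design matrix."""
        n, k = labels.shape
        cols = [np.ones(n)] + [labels[:, i] for i in range(k)]
        cols += [labels[:, i] * labels[:, j]
                 for i, j in combinations_with_replacement(range(k), 2)]
        return np.column_stack(cols)

    def train(labels, flux):
        """flux: (n_stars, n_pixels) continuum-normalized reference spectra."""
        M = design_matrix(labels)
        coeffs, *_ = np.linalg.lstsq(M, flux, rcond=None)   # one fit per pixel, vectorized
        return coeffs                                       # (n_terms, n_pixels)

    def predict(coeffs, trial_labels):
        return design_matrix(trial_labels) @ coeffs          # model spectra for trial labels
    ```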

  19. Mixture Modeling: Applications in Educational Psychology

    ERIC Educational Resources Information Center

    Harring, Jeffrey R.; Hodis, Flaviu A.

    2016-01-01

    Model-based clustering methods, commonly referred to as finite mixture modeling, have been applied to a wide variety of cross-sectional and longitudinal data to account for heterogeneity in population characteristics. In this article, we elucidate 2 such approaches: growth mixture modeling and latent profile analysis. Both techniques are…

  20. Improving the performance of the mass transfer-based reference evapotranspiration estimation approaches through a coupled wavelet-random forest methodology

    NASA Astrophysics Data System (ADS)

    Shiri, Jalal

    2018-06-01

    Among different reference evapotranspiration (ETo) modeling approaches, mass transfer-based methods have been less studied. These approaches utilize temperature and wind speed records. On the other hand, the empirical equations proposed in this context generally produce weak simulations, except when a local calibration is used to improve their performance. This can be a crucial drawback for those equations in case of local data scarcity for the calibration procedure. So, application of heuristic methods can be considered as a substitute for improving the performance accuracy of the mass transfer-based approaches. However, given that wind speed records usually have higher variation magnitudes than the other meteorological parameters, application of a wavelet transform for coupling with heuristic models is necessary. In the present paper, a coupled wavelet-random forest (WRF) methodology is proposed for the first time to improve the performance accuracy of the mass transfer-based ETo estimation approaches, using cross-validation data management scenarios at both local and cross-station scales. The obtained results revealed that the new coupled WRF model (with minimum scatter index values of 0.150 and 0.192 for local and external applications, respectively) improved the performance accuracy of the single RF models as well as the empirical equations to a great extent.
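
    A hedged sketch of the coupled wavelet-random forest idea is shown below: each meteorological series is decomposed into wavelet subbands, which are stacked as features for a random forest. The wavelet family, level, and forest size are assumptions, not the paper's settings.

    ```python
    # Sketch of a wavelet-random forest coupling: decompose each input series
    # into subband components (multiresolution reconstruction of wavelet
    # coefficients) and feed the subbands to a random forest regressor.
    import numpy as np
    import pywt
    from sklearn.ensemble import RandomForestRegressor

    def subbands(x, wavelet="db4", level=2):
        coeffs = pywt.wavedec(x, wavelet, level=level)
        out = []
        for i in range(len(coeffs)):
            keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
            out.append(pywt.waverec(keep, wavelet)[: len(x)])
        return np.column_stack(out)

    def fit_wrf(temperature, wind_speed, eto):
        X = np.hstack([subbands(temperature), subbands(wind_speed)])
        return RandomForestRegressor(n_estimators=300, random_state=0).fit(X, eto)
    ```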

  1. Stochastic Residual-Error Analysis For Estimating Hydrologic Model Predictive Uncertainty

    EPA Science Inventory

    A hybrid time series-nonparametric sampling approach, referred to herein as semiparametric, is presented for the estimation of model predictive uncertainty. The methodology is a two-step procedure whereby a distributed hydrologic model is first calibrated, then followed by brute ...

  2. New alternatives for reference evapotranspiration estimation in West Africa using limited weather data and ancillary data supply strategies.

    NASA Astrophysics Data System (ADS)

    Landeras, Gorka; Bekoe, Emmanuel; Ampofo, Joseph; Logah, Frederick; Diop, Mbaye; Cisse, Madiama; Shiri, Jalal

    2018-05-01

    Accurate estimation of reference evapotranspiration (ET0) is essential for the computation of crop water requirements, irrigation scheduling, and water resources management. In this context, having a battery of alternative locally calibrated ET0 estimation methods is of great interest for any irrigation advisory service. The development of irrigation advisory services will be a major breakthrough for West African agriculture. In the case of many West African countries, the high number of meteorological inputs required by the Penman-Monteith equation has been indicated as constraining. The present paper investigates, for the first time in Ghana, the estimation ability of artificial intelligence-based models (Artificial Neural Networks (ANNs) and Gene Expression Programming (GEP)) and ancillary/external approaches for modeling reference evapotranspiration (ET0) using limited weather data. According to the results of this study, GEP has emerged as a very interesting alternative for ET0 estimation at all the locations of Ghana evaluated in this study under different scenarios of meteorological data availability. The adoption of ancillary/external approaches has also been successful, particularly at the southern locations. The interesting results obtained in this study using GEP and some ancillary approaches could be a reference for future studies on ET0 estimation in West Africa.

  3. Modular Approach for Ethics

    ERIC Educational Resources Information Center

    Wyne, Mudasser F.

    2010-01-01

    It is hard to define a single set of ethics that will cover an entire computer users community. In this paper, the issue is addressed in reference to code of ethics implemented by various professionals, institutes and organizations. The paper presents a higher level model using hierarchical approach. The code developed using this approach could be…

  4. Using a logical information model-driven design process in healthcare.

    PubMed

    Cheong, Yu Chye; Bird, Linda; Tun, Nwe Ni; Brooks, Colleen

    2011-01-01

    A hybrid standards-based approach has been adopted in Singapore to develop a Logical Information Model (LIM) for healthcare information exchange. The Singapore LIM uses a combination of international standards, including ISO13606-1 (a reference model for electronic health record communication), ISO21090 (healthcare datatypes), SNOMED CT (healthcare terminology) and HL7 v2 (healthcare messaging). This logic-based design approach also incorporates mechanisms for achieving bi-directional semantic interoperability.

  5. A class of stochastic optimization problems with one quadratic & several linear objective functions and extended portfolio selection model

    NASA Astrophysics Data System (ADS)

    Xu, Jiuping; Li, Jun

    2002-09-01

    In this paper a class of stochastic multiple-objective programming problems with one quadratic objective function, several linear objective functions and linear constraints is introduced. This model is transformed into a deterministic multiple-objective nonlinear programming model by introducing the expectations of the random variables. The reference direction approach is used to deal with the linear objectives and results in a linear parametric optimization formula with a single linear objective function. This objective function is combined with the quadratic function using weighted sums. The quadratic problem is transformed into a linear (parametric) complementarity problem, the basic formula for the proposed approach. Necessary and sufficient conditions for (properly, weakly) efficient solutions and some construction characteristics of (weakly) efficient solution sets are obtained. An interactive algorithm is proposed based on the reference direction and weighted sums. By varying the parameter vector on the right-hand side of the model, the decision maker can freely search the efficient frontier of the model. An extended portfolio selection model is formed when liquidity is considered as another objective to be optimized besides expectation and risk. The interactive approach is illustrated with a practical example.

  6. MRAC Revisited: Guaranteed Performance with Reference Model Modification

    NASA Technical Reports Server (NTRS)

    Stepanyan, Vahram; Krishnakumar, Kalmaje

    2010-01-01

    This paper presents a modification of the conventional model reference adaptive control (MRAC) architecture in order to achieve guaranteed transient performance in both the output and input signals of an uncertain system. The proposed modification is based on feeding the tracking error back to the reference model. It is shown that the approach guarantees tracking of a given command and of the ideal control signal (the one that would be designed if the system were known) not only asymptotically but also in transient, by a proper selection of the error feedback gain. The method prevents the generation of high frequency oscillations that are unavoidable in conventional MRAC systems for large adaptation rates. The provided design guideline makes it possible to track a reference command of any magnitude from any initial position without re-tuning. The benefits of the method are demonstrated in simulations.
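
    A toy simulation of the idea, a first-order MRAC in which tracking-error feedback modifies the reference model trajectory, is sketched below; the plant, gains, and signals are illustrative and this is not the paper's full architecture.

    ```python
    # Toy first-order MRAC with tracking-error feedback (gain L) added to the
    # reference model. Plant parameters, gains, and the command are assumptions.
    import numpy as np

    dt, T = 0.001, 10.0
    a, b = 1.0, 3.0              # "unknown" plant: x_dot = a*x + b*u
    am, bm = -4.0, 4.0           # reference model: xm_dot = am*xm + bm*r
    gamma, L = 50.0, 10.0        # adaptation rate and error-feedback gain

    x = xm = 0.0
    kx, kr = 0.0, 0.0            # adaptive feedback/feedforward gains
    for k in range(int(T / dt)):
        t = k * dt
        r = 1.0 if t < 5 else -1.0           # step reference command
        e = x - xm
        u = kx * x + kr * r
        xm += dt * (am * xm + bm * r + L * e)   # error feedback modifies the model
        x += dt * (a * x + b * u)
        kx += dt * (-gamma * e * x)          # MRAC gradient adaptation laws
        kr += dt * (-gamma * e * r)
    print(x, xm, kx, kr)
    ```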

  7. No-reference quality assessment based on visual perception

    NASA Astrophysics Data System (ADS)

    Li, Junshan; Yang, Yawei; Hu, Shuangyan; Zhang, Jiao

    2014-11-01

    The visual quality assessment of images/videos is an ongoing hot research topic, which has become more and more important for numerous image and video processing applications with the rapid development of digital imaging and communication technologies. The goal of image quality assessment (IQA) algorithms is to automatically assess the quality of images/videos in agreement with human quality judgments. Up to now, two kinds of models have been used for IQA, namely full-reference (FR) and no-reference (NR) models. For FR models, IQA algorithms interpret image quality as fidelity or similarity with a perfect image in some perceptual space. However, the reference image is not available in many practical applications, and a NR IQA approach is desired. Considering natural vision as optimized by millions of years of evolutionary pressure, many methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychological features of the human visual system (HVS). To reach this goal, researchers try to simulate the HVS with image sparsity coding and supervised machine learning, two of its main characteristics: a typical HVS captures scenes by sparse coding and uses experiential knowledge to apperceive objects. In this paper, we propose a novel IQA approach based on visual perception. First, a standard model of the HVS is studied and analyzed, and the sparse representation of an image is computed with the model; then, the mapping between sparse codes and subjective quality scores is trained with the regression technique of the least squares support vector machine (LS-SVM), which yields a regressor that can predict image quality; finally, the visual quality of an image is predicted with the trained regressor. We validate the performance of the proposed approach on the Laboratory for Image and Video Engineering (LIVE) database, which contains the following distortion types: 227 JPEG2000 images, 233 JPEG images, 174 white noise images, 174 Gaussian blur images, and 174 fast fading images. The database includes a subjective differential mean opinion score (DMOS) for each image. The experimental results show that the proposed approach not only can assess the quality of many kinds of distorted images, but also exhibits superior accuracy and monotonicity.

  8. A methodology to enable rapid evaluation of aviation environmental impacts and aircraft technologies

    NASA Astrophysics Data System (ADS)

    Becker, Keith Frederick

    Commercial aviation has become an integral part of modern society and enables unprecedented global connectivity by increasing rapid business, cultural, and personal connectivity. In the decades following World War II, passenger travel through commercial aviation quickly grew at a rate of roughly 8% per year globally. The FAA's most recent Terminal Area Forecast predicts growth to continue at a rate of 2.5% domestically, and the market outlooks produced by Airbus and Boeing generally predict growth to continue at a rate of 5% per year globally over the next several decades, which translates into a need for up to 30,000 new aircraft produced by 2025. With such large numbers of new aircraft potentially entering service, any negative consequences of commercial aviation must undergo examination and mitigation by governing bodies so that growth may still be achieved. Options to simultaneously grow while reducing environmental impact include evolution of the commercial fleet through changes in operations, aircraft mix, and technology adoption. Methods to rapidly evaluate fleet environmental metrics are needed to enable decision makers to quickly compare the impact of different scenarios and weigh the impact of multiple policy options. As the fleet evolves, interdependencies may emerge in the form of tradeoffs between improvements in different environmental metrics as new technologies are brought into service. In order to include the impacts of these interdependencies on fleet evolution, physics-based modeling is required at the appropriate level of fidelity. Evaluation of environmental metrics in a physics-based manner can be done at the individual aircraft level, but will then not capture aggregate fleet metrics. Contrastingly, evaluation of environmental metrics at the fleet level is already being done for aircraft in the commercial fleet, but current tools and approaches require enhancement because they currently capture technology implementation through post-processing, which does not capture physical interdependencies that may arise at the aircraft-level. The goal of the work that has been conducted here was the development of a methodology to develop surrogate fleet approaches that leverage the capability of physics-based aircraft models and the development of connectivity to fleet-level analysis tools to enable rapid evaluation of fuel burn and emissions metrics. Instead of requiring development of an individual physics-based model for each vehicle in the fleet, the surrogate fleet approaches seek to reduce the number of such models needed while still accurately capturing performance of the fleet. By reducing the number of models, both development time and execution time to generate fleet-level results may also be reduced. The initial steps leading to surrogate fleet formulation were a characterization of the commercial fleet into groups based on capability followed by the selection of a reference vehicle model and a reference set of operations for each group. Next, three potential surrogate fleet approaches were formulated. These approaches include the parametric correction factor approach, in which the results of a reference vehicle model are corrected to match the aggregate results of each group; the average replacement approach, in which a new vehicle model is developed to generate aggregate results of each group, and the best-in-class replacement approach, in which results for a reference vehicle are simply substituted for the entire group. 
Once candidate surrogate fleet approaches were developed, they were each applied to and evaluated over the set of reference operations. Then each approach was evaluated for its ability to model variations in operations. Finally, the ability of each surrogate fleet approach to capture implementation of different technology suites along with corresponding interdependencies between fuel burn and emissions was evaluated using the concept of a virtual fleet to simulate the technology response of multiple aircraft families. The results of experimentation led to a down-selection of the best approach for rapidly and accurately characterizing the performance of the commercial fleet, within the bounds of accuracy accepted for current fleet evaluation methods. The parametric correction factor and average replacement approaches were shown to be successful in capturing reference fleet results as well as fleet performance with variations in operations. The best-in-class replacement approach was shown to be unacceptable as a model for the larger fleet in each of the scenarios tested. Finally, the average replacement approach was the only one that was successful in capturing the impact of technologies on a larger fleet. These results are meaningful because they show that it is possible to calculate the fuel burn and emissions of a larger fleet with a reduced number of physics-based models within acceptable bounds of accuracy. At the same time, the physics-based modeling also provides the ability to evaluate the impact of technologies on fleet-level fuel burn and emissions metrics. The value of such a capability is that multiple future fleet scenarios involving changes in both aircraft operations and technology levels may now be rapidly evaluated to inform and equip policy makers with the implications of changes on fleet-level metrics.

  9. Comparing Information Access Approaches.

    ERIC Educational Resources Information Center

    Chalmers, Matthew

    1999-01-01

    Presents a broad view of information access, drawing from philosophy and semiology in constructing a framework for comparative discussion that is used to examine the information representations that underlie four approaches to information access--information retrieval, workflow, collaborative filtering, and the path model. Contains 32 references.…

  10. Image quality assessment by preprocessing and full reference model combination

    NASA Astrophysics Data System (ADS)

    Bianco, S.; Ciocca, G.; Marini, F.; Schettini, R.

    2009-01-01

    This paper focuses on full-reference image quality assessment and presents different computational strategies aimed at improving the robustness and accuracy of some well-known and widely used state-of-the-art models, namely the Structural Similarity approach (SSIM) by Wang and Bovik and the S-CIELAB spatial-color model by Zhang and Wandell. We investigate the hypothesis that combining error images with a visual attention model could allow a better fit of the psycho-visual data of the LIVE Image Quality Assessment Database Release 2. We show that the proposed quality assessment metric better correlates with the experimental data.
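
    To make the combination strategy concrete, the sketch below pools a per-pixel SSIM map with a saliency weighting before averaging, using scikit-image's structural_similarity; the center-weighted Gaussian stands in for the visual attention model and is purely an assumption, not the authors' model.

      import numpy as np
      from skimage.metrics import structural_similarity

      def center_saliency(shape):
          """Hypothetical attention map: Gaussian bump at the image center."""
          h, w = shape
          y, x = np.mgrid[0:h, 0:w]
          g = np.exp(-(((y - h / 2) ** 2) / (0.1 * h ** 2) +
                       ((x - w / 2) ** 2) / (0.1 * w ** 2)))
          return g / g.sum()

      rng = np.random.default_rng(0)
      reference = rng.random((128, 128))
      distorted = np.clip(reference + 0.05 * rng.standard_normal((128, 128)), 0, 1)

      # full=True returns the mean SSIM and the per-pixel SSIM map.
      mean_ssim, ssim_map = structural_similarity(reference, distorted,
                                                  data_range=1.0, full=True)
      weights = center_saliency(ssim_map.shape)
      attention_pooled_ssim = float((ssim_map * weights).sum())
      print(mean_ssim, attention_pooled_ssim)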

  11. Pre-Service Teachers' Flexibility with Referent Units in Solving a Fraction Division Problem

    ERIC Educational Resources Information Center

    Lee, Mi Yeon

    2017-01-01

    This study investigated 111 pre-service teachers' (PSTs') flexibility with referent units in solving a fraction division problem using a length model. Participants' written solutions to a measurement fraction division problem were analyzed in terms of strategies and types of errors, using an inductive content analysis approach. Findings suggest…

  12. Template-based protein-protein docking exploiting pairwise interfacial residue restraints.

    PubMed

    Xue, Li C; Rodrigues, João P G L M; Dobbs, Drena; Honavar, Vasant; Bonvin, Alexandre M J J

    2017-05-01

    Although many advanced and sophisticated ab initio approaches for modeling protein-protein complexes have been proposed in past decades, template-based modeling (TBM) remains the most accurate and widely used approach, given a reliable template is available. However, there are many different ways to exploit template information in the modeling process. Here, we systematically evaluate and benchmark a TBM method that uses conserved interfacial residue pairs as docking distance restraints [referred to as alpha carbon-alpha carbon (CA-CA)-guided docking]. We compare it with two other template-based protein-protein modeling approaches, including a conserved non-pairwise interfacial residue restrained docking approach [referred to as the ambiguous interaction restraint (AIR)-guided docking] and a simple superposition-based modeling approach. Our results show that, for most cases, the CA-CA-guided docking method outperforms both superposition with refinement and the AIR-guided docking method. We emphasize the superiority of the CA-CA-guided docking on cases with medium to large conformational changes, and interactions mediated through loops, tails or disordered regions. Our results also underscore the importance of a proper refinement of superimposition models to reduce steric clashes. In summary, we provide a benchmarked TBM protocol that uses conserved pairwise interface distance as restraints in generating realistic 3D protein-protein interaction models, when reliable templates are available. The described CA-CA-guided docking protocol is based on the HADDOCK platform, which allows users to incorporate additional prior knowledge of the target system to further improve the quality of the resulting models. © The Author 2016. Published by Oxford University Press.

  13. Correction Approach for Delta Function Convolution Model Fitting of Fluorescence Decay Data in the Case of a Monoexponential Reference Fluorophore.

    PubMed

    Talbot, Clifford B; Lagarto, João; Warren, Sean; Neil, Mark A A; French, Paul M W; Dunsby, Chris

    2015-09-01

    A correction is proposed to the Delta function convolution method (DFCM) for fitting a multiexponential decay model to time-resolved fluorescence decay data using a monoexponential reference fluorophore. A theoretical analysis of the discretised DFCM multiexponential decay function shows the presence of an extra exponential decay term with the same lifetime as the reference fluorophore, which we denote as the residual reference component. This extra decay component arises as a result of the discretised convolution of one of the two terms in the modified model function required by the DFCM. The effect of the residual reference component becomes more pronounced when the fluorescence lifetime of the reference is longer than all of the individual components of the specimen under inspection and when the temporal sampling interval is not negligible compared to the quantity (1/τR - 1/τ)^(-1), where τR and τ are the fluorescence lifetimes of the reference and the specimen respectively. It is shown that the unwanted residual reference component results in systematic errors when fitting simulated data and that these errors are not present when the proposed correction is applied. The correction is also verified using real data obtained from experiment.
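
    For readers unfamiliar with the DFCM, a hedged LaTeX sketch of the standard (uncorrected) model function and the sampling condition quoted above is given below; the notation is assumed, and the corrected form derived in the paper is not reproduced here.

      % Hedged sketch (standard DFCM form; notation assumed, not quoted from the paper):
      % specimen decay I(t) = \sum_i \alpha_i e^{-t/\tau_i}, monoexponential reference of
      % lifetime \tau_R, measured reference decay R(t).
      \[
        F_{\mathrm{calc}}(t) \;=\; R(t) \otimes
          \sum_i \alpha_i \left[\, \delta(t)
            + \left(\frac{1}{\tau_R} - \frac{1}{\tau_i}\right) e^{-t/\tau_i} \right]
      \]
      % Discretising this convolution is what leaves the residual term decaying with
      % \tau_R; it is largest when \tau_R exceeds every \tau_i and when the sampling
      % interval \Delta t is not negligible compared with (\tau_R^{-1} - \tau^{-1})^{-1}.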

  14. Accuracy of complete-arch dental impressions: a new method of measuring trueness and precision.

    PubMed

    Ender, Andreas; Mehl, Albert

    2013-02-01

    A new approach to both 3-dimensional (3D) trueness and precision is necessary to assess the accuracy of intraoral digital impressions and compare them to conventionally acquired impressions. The purpose of this in vitro study was to evaluate whether a new reference scanner is capable of measuring conventional and digital intraoral complete-arch impressions for 3D accuracy. A steel reference dentate model was fabricated and measured with a reference scanner (digital reference model). Conventional impressions were made from the reference model, poured with Type IV dental stone, scanned with the reference scanner, and exported as digital models. Additionally, digital impressions of the reference model were made and the digital models were exported. Precision was measured by superimposing the digital models within each group. Superimposing the digital models on the digital reference model assessed the trueness of each impression method. Statistical significance was assessed with an independent sample t test (α=.05). The reference scanner delivered high accuracy over the entire dental arch with a precision of 1.6 ±0.6 µm and a trueness of 5.3 ±1.1 µm. Conventional impressions showed significantly higher precision (12.5 ±2.5 µm) and trueness values (20.4 ±2.2 µm) with small deviations in the second molar region (P<.001). Digital impressions were significantly less accurate with a precision of 32.4 ±9.6 µm and a trueness of 58.6 ±15.8 µm (P<.001). More systematic deviations of the digital models were visible across the entire dental arch. The new reference scanner is capable of measuring the precision and trueness of both digital and conventional complete-arch impressions. The digital impression is less accurate and shows a different pattern of deviation than the conventional impression. Copyright © 2013 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.

  15. Evaluation of different approaches to modeling the second-order ionospheric delay on GPS measurements

    NASA Astrophysics Data System (ADS)

    Garcia-Fernandez, M.; Desai, S. D.; Butala, M. D.; Komjathy, A.

    2013-12-01

    This work evaluates various approaches to compute the second order ionospheric correction (SOIC) to Global Positioning System (GPS) measurements. When estimating the reference frame using GPS, applying this correction is known to primarily affect the realization of the origin of the Earth's reference frame along the spin axis (Z coordinate). Therefore, the Z translation relative to the International Terrestrial Reference Frame 2008 is used as the metric to evaluate various published approaches to determining the slant total electron content (TEC) for the SOIC: obtaining the slant TEC directly from GPS measurements, and using the vertical TEC given by a Global Ionospheric Model (GIM) transformed to slant TEC via a mapping function. All of these approaches agree to within 1 mm if the ionospheric shell height needed in GIM-based approaches is set to 600 km. The commonly used shell height of 450 km introduces an offset of 1 to 2 mm. When the SOIC is not applied, the Z axis translation can be reasonably modeled with a ratio of +0.23 mm/TEC units of the daily median GIM vertical TEC. Also, precise point positioning (PPP) solutions (positions and clocks) determined with and without SOIC differ by less than 1 mm only if they are based upon GPS orbit and clock solutions that have consistently applied or not applied the correction, respectively. Otherwise, deviations of a few millimeters in the north component of the PPP solutions can arise due to inconsistencies with the satellite orbit and clock products, and those deviations exhibit a dependency on solar cycle conditions.
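
    A minimal sketch of the GIM-based route mentioned above, assuming the standard single-layer (thin-shell) mapping function, is shown below; the constants and the elevation example are illustrative and not taken from the paper.

      import math

      R_E = 6371.0  # mean Earth radius, km

      def slant_tec(vertical_tec_tecu: float, elevation_deg: float,
                    shell_height_km: float = 600.0) -> float:
          """Single-layer model: STEC = VTEC / cos(z'), with
          sin(z') = R_E / (R_E + h) * cos(elevation)."""
          sin_zp = R_E / (R_E + shell_height_km) * math.cos(math.radians(elevation_deg))
          return vertical_tec_tecu / math.sqrt(1.0 - sin_zp ** 2)

      # The abstract reports ~1 mm agreement with h = 600 km and a 1-2 mm Z offset for
      # the common h = 450 km; the mapping factor itself shifts accordingly:
      for h in (450.0, 600.0):
          print(h, slant_tec(20.0, elevation_deg=15.0, shell_height_km=h))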

  16. A Frequency Domain Approach to Pretest Analysis Model Correlation and Model Updating for the Mid-Frequency Range

    DTIC Science & Technology

    2009-02-01

    ...range of modal analysis and the high frequency region of statistical energy analysis, is referred to as the mid-frequency range...The averaging process is consistent with the averaging done in statistical energy analysis for stochastic systems. The FEM will always...

  17. Developing Dynamic Reference Models and a Decision Support Framework for Southeastern Ecosystems: An Integrated Approach

    DTIC Science & Technology

    2015-06-01

    ...the number of territories occupied by either a solitary male or a breeding pair at Eglin AFB from 2000 to 2013...in the context of a dynamic target. The reference sites in this study became more species rich, achieved greater abundance of understory plants, and...

  18. Communication: Density functional theory model for multi-reference systems based on the exact-exchange hole normalization

    NASA Astrophysics Data System (ADS)

    Laqua, Henryk; Kussmann, Jörg; Ochsenfeld, Christian

    2018-03-01

    The correct description of multi-reference electronic ground states within Kohn-Sham density functional theory (DFT) requires an ensemble-state representation, employing fractionally occupied orbitals. However, the use of fractional orbital occupation leads to non-normalized exact-exchange holes, resulting in large fractional-spin errors for conventional approximative density functionals. In this communication, we present a simple approach to directly include the exact-exchange-hole normalization into DFT. Compared to conventional functionals, our model strongly improves the description for multi-reference systems, while preserving the accuracy in the single-reference case. We analyze the performance of our proposed method at the example of spin-averaged atoms and spin-restricted bond dissociation energy surfaces.

  19. Communication: Density functional theory model for multi-reference systems based on the exact-exchange hole normalization.

    PubMed

    Laqua, Henryk; Kussmann, Jörg; Ochsenfeld, Christian

    2018-03-28

    The correct description of multi-reference electronic ground states within Kohn-Sham density functional theory (DFT) requires an ensemble-state representation, employing fractionally occupied orbitals. However, the use of fractional orbital occupation leads to non-normalized exact-exchange holes, resulting in large fractional-spin errors for conventional approximative density functionals. In this communication, we present a simple approach to directly include the exact-exchange-hole normalization into DFT. Compared to conventional functionals, our model strongly improves the description for multi-reference systems, while preserving the accuracy in the single-reference case. We analyze the performance of our proposed method at the example of spin-averaged atoms and spin-restricted bond dissociation energy surfaces.

  20. Tracking of multiple targets using online learning for reference model adaptation.

    PubMed

    Pernkopf, Franz

    2008-12-01

    Recently, much work has been done in multiple object tracking on the one hand and on reference model adaptation for a single-object tracker on the other hand. In this paper, we do both: tracking of multiple objects (faces of people) in a meeting scenario and online learning to incrementally update the models of the tracked objects to account for appearance changes during tracking. Additionally, we automatically initialize and terminate tracking of individual objects based on low-level features, i.e., face color, face size, and object movement. Many methods, unlike our approach, assume that the target region has been initialized by hand in the first frame. For tracking, a particle filter is incorporated to propagate sample distributions over time. We discuss the close relationship between our implemented tracker based on particle filters and genetic algorithms. Numerous experiments on meeting data demonstrate the capabilities of our tracking approach. Additionally, we provide an empirical verification of the reference model learning during tracking of indoor and outdoor scenes, which supports more robust tracking. Therefore, we report the average of the standard deviation of the trajectories over numerous tracking runs depending on the learning rate.

  1. The Relationship between Conceptions of Teaching and Approaches to Teaching

    ERIC Educational Resources Information Center

    Lam, Bick-Har; Kember, David

    2006-01-01

    The relationship between conceptions of teaching and approaches to teaching was explored in a study of 18 secondary school art teachers in Hong Kong. Conceptions of teaching approaches were fitted to a four-category model. Each of the categories was distinguished by reference to six relevant dimensions. As is the case in higher education,…

  2. Developments in Stochastic Fuel Efficient Cruise Control and Constrained Control with Applications to Aircraft

    NASA Astrophysics Data System (ADS)

    McDonough, Kevin K.

    The dissertation presents contributions to fuel-efficient control of vehicle speed and constrained control with applications to aircraft. In the first part of this dissertation, a stochastic approach to fuel-efficient vehicle speed control is developed. This approach encompasses stochastic modeling of road grade and traffic speed, modeling of fuel consumption through the use of a neural network, and the application of stochastic dynamic programming to generate vehicle speed control policies that are optimized for the trade-off between fuel consumption and travel time. The fuel economy improvements with the proposed policies are quantified through simulations and vehicle experiments. It is shown that the policies lead to the emergence of time-varying vehicle speed patterns that are referred to as time-varying cruise. Through simulations and experiments it is confirmed that these time-varying vehicle speed profiles are more fuel-efficient than driving at a comparable constant speed. Motivated by these results, a simpler strategy that is more appealing for practical implementation is also developed. This strategy relies on a finite state machine and state transition threshold optimization, and its benefits are quantified through model-based simulations and vehicle experiments. Several additional contributions are made to approaches for stochastic modeling of road grade and vehicle speed that include the use of Kullback-Leibler divergence and divergence rate and a stochastic jump-like model for the behavior of the road grade. In the second part of the dissertation, contributions to constrained control with applications to aircraft are described. Recoverable sets and integral safe sets of initial states of constrained closed-loop systems are introduced first, and computational procedures for such sets based on linear discrete-time models are given. The use of linear discrete-time models is emphasized as they lead to fast computational procedures. Examples of these sets for aircraft longitudinal and lateral dynamics are reported, and it is shown that these sets can be larger in size compared to the more commonly used safe sets. An approach to constrained maneuver planning based on chaining recoverable sets or integral safe sets is described and illustrated with a simulation example. To facilitate the application of this maneuver planning approach in aircraft loss of control (LOC) situations, when the model is only identified at the current trim condition but these sets need to be predicted at other flight conditions, the dependence trends of the safe and recoverable sets on aircraft flight conditions are characterized. The scaling procedure to estimate subsets of safe and recoverable sets at one trim condition based on their knowledge at another trim condition is defined. Finally, two control schemes that exploit integral safe sets are proposed. The first scheme, referred to as the controller state governor (CSG), resets the controller state (typically an integrator) to enforce the constraints and enlarge the set of plant states that can be recovered without constraint violation. The second scheme, referred to as the controller state and reference governor (CSRG), combines the controller state governor with the reference governor control architecture and provides the capability of simultaneously modifying the reference command and the controller state to enforce the constraints. 
Theoretical results that characterize the response properties of both schemes are presented. Examples are reported that illustrate the operation of these schemes on aircraft flight dynamics models and gas turbine engine dynamic models.

  3. A new approach of active compliance control via fuzzy logic control for multifingered robot hand

    NASA Astrophysics Data System (ADS)

    Jamil, M. F. A.; Jalani, J.; Ahmad, A.

    2016-07-01

    Safety is a vital issue in Human-Robot Interaction (HRI). In order to guarantee safety in HRI, a model reference impedance control can be a very useful approach for introducing compliant control. In particular, this paper establishes a fuzzy logic compliance control (i.e. active compliance control) to reduce impact and forces during physical interaction between humans/objects and robots. Exploiting a virtual mass-spring-damper system allows us to determine a desired compliance level by understanding the behavior of the model reference impedance control. The performance of fuzzy logic compliant control is tested in simulation for a robotic hand known as the RED Hand. The results show that fuzzy logic is a feasible control approach, particularly to control position and to provide compliant control. In addition, the fuzzy logic control allows us to simplify the controller design process (i.e. avoid complex computation) when dealing with nonlinearities and uncertainties.
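
    The virtual mass-spring-damper reference model mentioned above can be sketched in a few lines; the gains, contact-force profile, and reference position below are assumed values, and the paper's fuzzy tuning layer is deliberately omitted.

      # Sketch of a virtual mass-spring-damper impedance reference model:
      # M*x_dd + B*x_d + K*(x - x_ref) = F_ext, integrated with explicit Euler.

      M, B, K = 1.0, 8.0, 40.0     # virtual inertia, damping, stiffness (assumed)
      dt, T = 0.001, 2.0
      x_ref = 0.05                 # commanded finger position, m (assumed)

      x, xd = 0.0, 0.0
      trajectory = []
      for k in range(int(T / dt)):
          t = k * dt
          F_ext = 2.0 if 0.5 < t < 1.0 else 0.0    # contact-force pulse, N
          xdd = (F_ext - B * xd - K * (x - x_ref)) / M
          xd += xdd * dt
          x += xd * dt
          trajectory.append(x)

      print(f"position without contact ≈ {trajectory[-1]:.4f} m, "
            f"max deflection under load ≈ {max(trajectory):.4f} m")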

  4. Effectiveness of Training Model Capacity Building for Entrepreneurship Women Based Empowerment Community

    ERIC Educational Resources Information Center

    Idawati; Mahmud, Alimuddin; Dirawan, Gufran Darma

    2016-01-01

    The purpose of this research was to determine the effectiveness of a community-based training model for capacity building of women's entrepreneurship. The research followed a Research and Development approach, referring to the development research model of Romiszowki (1996) combined with the development model of Sugiono (2011); it was…

  5. Selection of low-variance expressed Malus x domestica (apple) genes for use as quantitative PCR reference genes (housekeepers)

    USDA-ARS?s Scientific Manuscript database

    To accurately measure gene expression using PCR-based approaches, there is the need for reference genes that have low variance in expression (housekeeping genes) to normalise the data for RNA quantity and quality. For non-model species such as Malus x domestica (apples), previously, the selection of...

  6. An Examination of Pay Facets and Referent Groups for Assessing Pay Satisfaction of Male Elementary School Principals

    ERIC Educational Resources Information Center

    Young, I. Phillip; Young, Karen Holsey; Okhremtchouk, Irina; Castaneda, Jose Moreno

    2009-01-01

    Pay satisfaction was assessed according to different facets (pay level, benefits, pay structure, and pay raises) and potential referent groups (teachers and elementary school principals) for a random sample of male elementary school principals. A structural model approach was used that considers facets of the pay process, potential others as…

  7. COMPUTATIONAL MODELING OF SIGNALING PATHWAYS MEDIATING CELL CYCLE AND APOPTOTIC RESPONSES TO IONIZING RADIATION MEDIATED DNA DAMAGE

    EPA Science Inventory

    Demonstrated the use of a computational systems biology approach to model dose response relationships. Also discussed how the biologically motivated dose response models have only limited reference to the underlying molecular level. Discussed the integration of Computational S...

  8. Time Series ARIMA Models of Undergraduate Grade Point Average.

    ERIC Educational Resources Information Center

    Rogers, Bruce G.

    The Auto-Regressive Integrated Moving Average (ARIMA) Models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…

  9. DSLM Instructional Approach to Conceptual Change Involving Thermal Expansion.

    ERIC Educational Resources Information Center

    She, Hsiao-Ching

    2003-01-01

    Examines the process of student conceptual change regarding thermal expansion using the Dual Situated Learning Model (DSLM) as an instructional approach. Indicates that DSLM promotes conceptual change and holds great potential to facilitate the process through classroom instruction at all levels. (Contains 38 references.) (Author/NB)

  10. Reference set design for relational modeling of fuzzy systems

    NASA Astrophysics Data System (ADS)

    Lapohos, Tibor; Buchal, Ralph O.

    1994-10-01

    One of the keys to the successful relational modeling of fuzzy systems is the proper design of fuzzy reference sets. This has been discussed throughout the literature. In the context of modeling a stochastic system, we analyze the problem numerically. First, we briefly describe the relational model and present the performance of the modeling in the most trivial case: the reference sets are triangle shaped. Next, we present a known fuzzy reference set generator algorithm (FRSGA) which is based on the fuzzy c-means (Fc-M) clustering algorithm. In the second section of this chapter we improve the previous FRSGA by adding a constraint to the Fc-M algorithm (modified Fc-M or MFc-M): two cluster centers are forced to coincide with the domain limits. This is needed to obtain properly shaped extreme linguistic reference values. We apply this algorithm to uniformly discretized domains of the variables involved. The fuzziness of the reference sets produced by both Fc-M and MFc-M is determined by a parameter, which in our experiments is modified iteratively. Each time, a new model is created and its performance analyzed. For certain algorithm parameter values, both of these algorithms have shortcomings. To eliminate the drawbacks of these two approaches, we develop a completely new generator algorithm for reference sets which we call Polyline. This algorithm and its performance are described in the last section. In all three cases, the modeling is performed for a variety of operators used in the inference engine and two defuzzification methods. Therefore our results depend neither on the system model order nor on the experimental setup.

  11. An Optimal Control Modification to Model-Reference Adaptive Control for Fast Adaptation

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Krishnakumar, Kalmanje; Boskovic, Jovan

    2008-01-01

    This paper presents a method that can achieve fast adaptation for a class of model-reference adaptive control. It is well-known that standard model-reference adaptive control exhibits high-gain control behaviors when a large adaptive gain is used to achieve fast adaptation in order to reduce tracking error rapidly. High gain control creates high-frequency oscillations that can excite unmodeled dynamics and can lead to instability. The fast adaptation approach is based on the minimization of the squares of the tracking error, which is formulated as an optimal control problem. The necessary condition of optimality is used to derive an adaptive law using the gradient method. This adaptive law is shown to result in uniform boundedness of the tracking error by means of Lyapunov's direct method. Furthermore, this adaptive law allows a large adaptive gain to be used without causing undesired high-gain control effects. The method is shown to be more robust than standard model-reference adaptive control. Simulations demonstrate the effectiveness of the proposed method.
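
    Because the abstract does not give the modified adaptive law itself, the sketch below implements only the standard Lyapunov-based model-reference adaptive law for a scalar plant, to make the terms (reference model, tracking error, adaptive gain) concrete; all plant and model numbers are assumed, and the optimal control modification is not reproduced here.

      import numpy as np

      a, b = 1.0, 3.0            # plant (unknown to the controller): xdot = a*x + b*u
      a_m, b_m = -4.0, 4.0       # reference model: xm_dot = a_m*xm + b_m*r
      gamma = 10.0               # adaptive gain (large gamma = fast adaptation)
      dt, T = 0.001, 10.0

      x = xm = 0.0
      kx, kr = 0.0, 0.0          # adaptive feedback / feedforward gains
      for k in range(int(T / dt)):
          t = k * dt
          r = 1.0 if (t % 4.0) < 2.0 else -1.0     # square-wave reference command
          u = kx * x + kr * r
          e = x - xm                               # tracking error
          # Lyapunov-based adaptive laws (sign of b assumed known):
          kx -= gamma * x * e * np.sign(b) * dt
          kr -= gamma * r * e * np.sign(b) * dt
          x  += (a * x + b * u) * dt
          xm += (a_m * xm + b_m * r) * dt

      print(f"final tracking error {x - xm:+.4f}, gains kx={kx:.3f}, kr={kr:.3f}")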

  12. A mixture approach to the acoustic properties of a macroscopically inhomogeneous porous aluminum in the equivalent fluid approximation.

    PubMed

    Sacristan, C J; Dupont, T; Sicot, O; Leclaire, P; Verdière, K; Panneton, R; Gong, X L

    2016-10-01

    The acoustic properties of an air-saturated macroscopically inhomogeneous aluminum foam in the equivalent fluid approximation are studied. A reference sample built by forcing a highly compressible melamine foam with conical shape inside a constant diameter rigid tube is studied first. In this process, a radial compression varying with depth is applied. With the help of an assumption on the compressed pore geometry, properties of the reference sample can be modelled everywhere in the thickness and it is possible to use the classical transfer matrix method as theoretical reference. In the mixture approach, the material is viewed as a mixture of two known materials placed in a patchwork configuration and with proportions of each varying with depth. The properties are derived from the use of a mixing law. For the reference sample, the classical transfer matrix method is used to validate the experimental results. These results are used to validate the mixture approach. The mixture approach is then used to characterize a porous aluminium for which only the properties of the external faces are known. A porosity profile is needed and is obtained from the simulated annealing optimization process.

  13. SU-E-T-459: Dosimetric Consequences of Rotated Elliptical Proton Spots in Modeling In-Air Proton Fluence for Calculating Doses in Water of Proton Pencil Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matysiak, W; Yeung, D; Hsi, W

    2014-06-01

    Purpose: We present a study of dosimetric consequences on doses in water in modeling in-air proton fluence independently along principal axes for rotated elliptical spots. Methods: Phase-space parameters for modeling in-air fluence are the position sigma for the spatial distribution, the angle sigma for the angular distribution, and the correlation between position and angle distributions. Proton spots of the McLaren proton therapy system were measured at five locations near the isocenter for the energies of 180 MeV and 250 MeV. An elongated elliptical spot rotated with respect to the principal axes was observed for the 180 MeV, while a circular-like spot was observed for the 250 MeV. In the first approach, the phase-space parameters were derived in the principal axes without rotation. In the second approach, the phase space parameters were derived in the reference frame with axes rotated to coincide with the major axes of the elliptical spot. Monte-Carlo simulations with derived phase-space parameters using both approaches to tally doses in water were performed and analyzed. Results: For the rotated elliptical 180 MeV spots, the position sigmas were 3.6 mm and 3.2 mm in principal axes, but were 4.3 mm and 2.0 mm when the reference frame was rotated. Measured spots fitted poorly the uncorrelated 2D Gaussian, but the quality of fit was significantly improved after the reference frame was rotated. As a result, phase space parameters in the rotated frame were more appropriate for modeling in-air proton fluence of 180 MeV protons. Considerable differences were observed in Monte Carlo simulated dose distributions in water with phase-space parameters obtained with the two approaches. Conclusion: For rotated elliptical proton spots, phase-space parameters obtained in the rotated reference frame are better for modeling in-air proton fluence, and can be introduced into treatment planning systems.

  14. Parameter Recovery for the 1-P HGLLM with Non-Normally Distributed Level-3 Residuals

    ERIC Educational Resources Information Center

    Kara, Yusuf; Kamata, Akihito

    2017-01-01

    A multilevel Rasch model using a hierarchical generalized linear model is one approach to multilevel item response theory (IRT) modeling and is referred to as a one-parameter hierarchical generalized linear logistic model (1-P HGLLM). Although it has the flexibility to model nested structure of data with covariates, the model assumes the normality…

  15. Reference Data Layers for Earth and Environmental Science: History, Frameworks, Science Needs, Approaches, and New Technologies

    NASA Astrophysics Data System (ADS)

    Lenhardt, W. C.

    2015-12-01

    Global Mapping Project, Web-enabled Landsat Data (WELD), International Satellite Land Surface Climatology Project (ISLSCP), hydrology, solid earth dynamics, sedimentary geology, climate modeling, integrated assessments and so on all have needs for or have worked to develop consistently integrated data layers for Earth and environmental science. This paper will present an overview of an abstract notion of data layers of this type, what we are referring to as reference data layers for Earth and environmental science, highlight some historical examples, and delve into new approaches. The concept of reference data layers in this context combines data availability, cyberinfrastructure and data science, as well as domain science drivers. We argue that current advances in cyberinfrastructure such as iPython notebooks and integrated science processing environments such as iPlant's Discovery Environment coupled with vast arrays of new data sources warrant another look at how to create, maintain, and provide reference data layers. The goal is to provide a context for understanding science needs for reference data layers to conduct their research. In addition to the topics described above, this presentation will also outline some of the challenges to and present some ideas for new approaches to addressing these needs. Promoting the idea of reference data layers is relevant to a number of existing related activities such as EarthCube, RDA, ESIP, the nascent NSF Regional Big Data Innovation Hubs and others.

  16. Solving multi-objective optimization problems in conservation with the reference point method

    PubMed Central

    Dujardin, Yann; Chadès, Iadine

    2018-01-01

    Managing the biodiversity extinction crisis requires wise decision-making processes able to account for the limited resources available. In most decision problems in conservation biology, several conflicting objectives have to be taken into account. Most methods used in conservation either provide suboptimal solutions or use strong assumptions about the decision-maker’s preferences. Our paper reviews some of the existing approaches to solve multi-objective decision problems and presents new multi-objective linear programming formulations of two multi-objective optimization problems in conservation, allowing the use of a reference point approach. Reference point approaches solve multi-objective optimization problems by interactively representing the preferences of the decision-maker with a point in the criteria (objectives) space, called the reference point. We modelled and solved the following two problems in conservation: a dynamic multi-species management problem under uncertainty and a spatial allocation resource management problem. Results show that the reference point method outperforms classic methods while illustrating the use of an interactive methodology for solving combinatorial problems with multiple objectives. The method is general and can be adapted to a wide range of ecological combinatorial problems. PMID:29293650
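
    The reference point idea can be illustrated on a toy two-objective linear program using the achievement scalarizing construction; the problem, the aspiration point, and the augmentation weight below are invented for illustration and are not the authors' conservation models.

      from scipy.optimize import linprog

      # Maximise f1 = x1 and f2 = x2 subject to a shared budget x1 + 2*x2 <= 10.
      # The decision-maker's reference (aspiration) point is (f1, f2) = (8, 4).
      ref = (8.0, 4.0)
      rho = 1e-3  # small augmentation weight, a common trick to avoid weakly optimal points

      # Decision variables: [x1, x2, t], with t <= f_i - ref_i for each objective;
      # maximising t pushes both objectives as close to (or past) the reference
      # point as the constraints allow.
      c = [-rho, -rho, -1.0]                       # linprog minimises
      A_ub = [[1.0, 2.0, 0.0],                     # budget constraint
              [-1.0, 0.0, 1.0],                    # t - (x1 - ref1) <= 0
              [0.0, -1.0, 1.0]]                    # t - (x2 - ref2) <= 0
      b_ub = [10.0, -ref[0], -ref[1]]
      res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                    bounds=[(0, None), (0, None), (None, None)], method="highs")
      x1, x2, t = res.x
      print(f"solution f = ({x1:.2f}, {x2:.2f}), common shortfall t = {t:.2f}")
      # Expected: (6, 2) — both objectives fall short of the aspiration by the same
      # amount, the balanced compromise the reference point method seeks.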

  17. Urban Modelling with Typological Approach. Case Study: Merida, Yucatan, Mexico

    NASA Astrophysics Data System (ADS)

    Rodriguez, A.

    2017-08-01

    In three-dimensional models of urban historical reconstruction, lost contextual architecture is difficult to reconstruct because, in contrast to the most important monuments, few written references are available for it. This is the case of Merida, Yucatan, Mexico during the Colonial Era (1542-1810), which has lost much of its heritage. An alternative that offers a hypothetical view of these elements is a typological-parametric definition that allows a 3D modeling approach to the most common features of this heritage evidence.

  18. Emergence of self and other in perception and action: an event-control approach.

    PubMed

    Jordan, J Scott

    2003-12-01

    The present paper analyzes the regularities referred to via the concept 'self.' This is important, for cognitive science traditionally models the self as a cognitive mediator between perceptual inputs and behavioral outputs. This leads to the assertion that the self causes action. Recent findings in social psychology indicate this is not the case and, as a consequence, certain cognitive scientists model the self as being epiphenomenal. In contrast, the present paper proposes an alternative approach (i.e., the event-control approach) that is based on recently discovered regularities between perception and action. Specifically, these regularities indicate that perception and action planning utilize common neural resources. This leads to a coupling of perception, planning, and action in which the first two constitute aspects of a single system (i.e., the distal-event system) that is able to pre-specify and detect distal events. This distal-event system is then coupled with action (i.e., effector-control systems) in a constraining, as opposed to 'causal' manner. This model has implications for how we conceptualize the manner in which one infers the intentions of another, anticipates the intentions of another, and possibly even experiences another. In conclusion, it is argued that it may be possible to map the concept 'self' onto the regularities referred to in the event-control model, not in order to reify 'the self' as a causal mechanism, but to demonstrate its status as a useful concept that refers to regularities that are part of the natural order.

  19. Weakly supervised automatic segmentation and 3D modeling of the knee joint from MR images

    NASA Astrophysics Data System (ADS)

    Amami, Amal; Ben Azouz, Zouhour

    2013-12-01

    Automatic segmentation and 3D modeling of the knee joint from MR images, is a challenging task. Most of the existing techniques require the tedious manual segmentation of a training set of MRIs. We present an approach that necessitates the manual segmentation of one MR image. It is based on a volumetric active appearance model. First, a dense tetrahedral mesh is automatically created on a reference MR image that is arbitrary selected. Second, a pairwise non-rigid registration between each MRI from a training set and the reference MRI is computed. The non-rigid registration is based on a piece-wise affine deformation using the created tetrahedral mesh. The minimum description length is then used to bring all the MR images into a correspondence. An average image and tetrahedral mesh, as well as a set of main modes of variations, are generated using the established correspondence. Any manual segmentation of the average MRI can be mapped to other MR images using the AAM. The proposed approach has the advantage of simultaneously generating 3D reconstructions of the surface as well as a 3D solid model of the knee joint. The generated surfaces and tetrahedral meshes present the interesting property of fulfilling a correspondence between different MR images. This paper shows preliminary results of the proposed approach. It demonstrates the automatic segmentation and 3D reconstruction of a knee joint obtained by mapping a manual segmentation of a reference image.

  20. Adapted random sampling patterns for accelerated MRI.

    PubMed

    Knoll, Florian; Clason, Christian; Diwoky, Clemens; Stollberger, Rudolf

    2011-02-01

    Variable density random sampling patterns have recently become increasingly popular for accelerated imaging strategies, as they lead to incoherent aliasing artifacts. However, the design of these sampling patterns is still an open problem. Current strategies use model assumptions like polynomials of different order to generate a probability density function that is then used to generate the sampling pattern. This approach relies on the optimization of design parameters which is very time consuming and therefore impractical for daily clinical use. This work presents a new approach that generates sampling patterns by making use of power spectra of existing reference data sets and hence requires neither parameter tuning nor an a priori mathematical model of the density of sampling points. The approach is validated with downsampling experiments, as well as with accelerated in vivo measurements. The proposed approach is compared with established sampling patterns, and the generalization potential is tested by using a range of reference images. Quantitative evaluation is performed for the downsampling experiments using RMS differences to the original, fully sampled data set. Our results demonstrate that the image quality of the method presented in this paper is comparable to that of an established model-based strategy when optimization of the model parameter is carried out and yields superior results to non-optimized model parameters. However, no random sampling pattern showed superior performance when compared to conventional Cartesian subsampling for the considered reconstruction strategy.
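
    A minimal sketch of the pattern-generation idea described above: take the power spectrum of a reference data set, normalize it into a sampling density, and draw a variable-density random mask from it. The synthetic reference image and acceleration factor are placeholders, not the paper's data or settings.

      import numpy as np

      rng = np.random.default_rng(1)
      reference = rng.random((128, 128))            # placeholder for a reference image

      # Power spectrum of the reference data (k-space energy distribution).
      spectrum = np.abs(np.fft.fftshift(np.fft.fft2(reference))) ** 2
      density = spectrum / spectrum.sum()           # normalise into a probability map

      acceleration = 4                              # keep ~25% of k-space samples
      n_keep = reference.size // acceleration
      flat_idx = rng.choice(reference.size, size=n_keep, replace=False,
                            p=density.ravel())
      mask = np.zeros(reference.size, dtype=bool)
      mask[flat_idx] = True
      mask = mask.reshape(reference.shape)
      print(f"sampling mask keeps {mask.mean():.1%} of k-space points")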

  1. Assessing FAO-56 dual crop coefficients using eddy covariance flux partitioning

    USDA-ARS?s Scientific Manuscript database

    Current approaches to scheduling crop irrigation using reference evapotranspiration (ET0) recommend using a dual-coefficient approach using basal (Kcb) and soil (Ke) coefficients along with a stress coefficient (Ks) to model crop evapotranspiration (ETc), [e.g. ETc=(Ks*Kcb+Ke)*ET0]. However, determi...
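
    The quoted dual-coefficient relation ETc = (Ks*Kcb + Ke)*ET0 is simple enough to express directly; the coefficient values in the example below are illustrative rather than crop-specific FAO-56 tabulated values.

      def dual_coefficient_etc(et0_mm: float, kcb: float, ke: float,
                               ks: float = 1.0) -> float:
          """Crop evapotranspiration (mm/day) from reference ET0, basal (Kcb),
          soil-evaporation (Ke) and water-stress (Ks) coefficients."""
          return (ks * kcb + ke) * et0_mm

      # Example: ET0 = 6 mm/day, mid-season basal coefficient 1.10, wet soil surface
      # (Ke = 0.10), mild stress (Ks = 0.9) -> ETc ≈ 6.5 mm/day.
      print(dual_coefficient_etc(6.0, kcb=1.10, ke=0.10, ks=0.9))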

  2. Organizational Effectiveness in Libraries: A Review and Some Suggestions.

    ERIC Educational Resources Information Center

    Aversa, Elizabeth

    1981-01-01

    Reviews some approaches to organizational effectiveness suggested by organizational theorists, reports on the applications of these theories in libraries, develops some hypotheses regarding the assessment of performance in libraries, and describes a model which synthesizes some of the approaches. A 52-item reference list is attached. (Author/JL)

  3. KALREF—A Kalman filter and time series approach to the International Terrestrial Reference Frame realization

    NASA Astrophysics Data System (ADS)

    Wu, Xiaoping; Abbondanza, Claudio; Altamimi, Zuheir; Chin, T. Mike; Collilieux, Xavier; Gross, Richard S.; Heflin, Michael B.; Jiang, Yan; Parker, Jay W.

    2015-05-01

    The current International Terrestrial Reference Frame is based on a piecewise linear site motion model and realized by reference epoch coordinates and velocities for a global set of stations. Although linear motions due to tectonic plates and glacial isostatic adjustment dominate geodetic signals, at today's millimeter precisions, nonlinear motions due to earthquakes, volcanic activities, ice mass losses, sea level rise, hydrological changes, and other processes become significant. Monitoring these (sometimes rapid) changes calls for consistent and precise realization of the terrestrial reference frame (TRF) quasi-instantaneously. Here, we use a Kalman filter and smoother approach to combine time series from four space geodetic techniques to realize an experimental TRF through weekly time series of geocentric coordinates. In addition to secular, periodic, and stochastic components for station coordinates, the Kalman filter state variables also include daily Earth orientation parameters and transformation parameters from input data frames to the combined TRF. Local tie measurements among colocated stations are used at their known or nominal epochs of observation, with comotion constraints applied to almost all colocated stations. The filter/smoother approach unifies different geodetic time series in a single geocentric frame. Fragmented and multitechnique tracking records at colocation sites are bridged together to form longer and coherent motion time series. While the time series approach to TRF reflects the reality of a changing Earth more closely than the linear approximation model, the filter/smoother is computationally powerful and flexible enough to facilitate incorporation of other data types and more advanced characterization of stochastic behavior of geodetic time series.
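
    As a toy sketch of the filtering idea, the snippet below runs a two-state (position, velocity) Kalman filter over simulated weekly coordinate observations for a single station; all noise levels and the simulated data are invented, and the multi-technique combination, Earth orientation parameters, and transformation parameters of the actual system are omitted.

      import numpy as np

      dt = 7.0 / 365.25                                 # weekly step, in years
      F = np.array([[1.0, dt], [0.0, 1.0]])             # state transition
      Q = np.diag([1e-6, 1e-8])                         # process noise (assumed)
      H = np.array([[1.0, 0.0]])                        # we observe position only
      R = np.array([[4e-6]])                            # obs. noise, (2 mm)^2

      rng = np.random.default_rng(2)
      truth_vel = 0.02                                  # 2 cm/yr secular motion
      weeks = 520
      obs = truth_vel * dt * np.arange(weeks) + 2e-3 * rng.standard_normal(weeks)

      x = np.zeros(2)                                   # state: [position, velocity]
      P = np.diag([1.0, 1.0])
      for z in obs:
          # predict
          x = F @ x
          P = F @ P @ F.T + Q
          # update with this week's coordinate observation
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x = x + K @ (np.array([z]) - H @ x)
          P = (np.eye(2) - K @ H) @ P

      print(f"estimated velocity {x[1]*1000:.1f} mm/yr (truth {truth_vel*1000:.0f} mm/yr)")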

  4. Estimating Air-Manganese Exposures in Two Ohio Towns ...

    EPA Pesticide Factsheets

    Manganese (Mn), a nutrient required for normal metabolic function, is also a persistent air pollutant and a known neurotoxin at high concentrations. Elevated exposures can result in a number of motor and cognitive deficits. Quantifying chronic personal exposures in residential populations studied by environmental epidemiologists can be time-consuming and expensive. We developed an approach for quantifying chronic exposures for two towns (Marietta and East Liverpool, Ohio) with elevated air Mn concentrations (air-Mn) related to ambient emissions from industrial processes. This was accomplished through the use of measured and modeled data in the communities studied. A novel approach was developed because one of the facilities lacked emissions data for the purposes of modeling. A unit emission rate was assumed over the surface area of both source facilities, and offsite concentrations at receptor residences and air monitoring sites were estimated with the American Meteorological Society/Environmental Protection Agency Regulatory Model (AERMOD). Ratios of all modeled receptor points were created, and a long-running air monitor was identified as a reference location. All ratios were normalized to the reference location. Long-term averages at all residential receptor points were calculated using modeled ratios and data from the reference monitoring location. Modeled five-year average air-Mn exposures ranged from 0.03-1.61 µg/m3 in Marietta and 0.01-6.32 µg/m3 in East Liverpool.
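
    The ratio-normalization arithmetic summarized above reduces to a few lines; the modeled unit-emission concentrations and the measured reference average below are hypothetical numbers used only to show the calculation.

      # Unit-emission dispersion-model concentrations at each receptor are expressed as
      # ratios to the modeled value at a long-running reference monitor, then scaled by
      # the measured long-term average at that monitor.
      modeled_unit_emission = {          # AERMOD-style output for a unit emission rate
          "reference_monitor": 0.80,     # (arbitrary model units)
          "residence_A": 0.40,
          "residence_B": 1.20,
      }
      measured_ref_avg_ugm3 = 0.50       # long-term average air-Mn at the reference monitor

      ratios = {site: c / modeled_unit_emission["reference_monitor"]
                for site, c in modeled_unit_emission.items()}
      estimated_exposure = {site: r * measured_ref_avg_ugm3 for site, r in ratios.items()}
      print(estimated_exposure)          # residence_A ≈ 0.25, residence_B ≈ 0.75 µg/m3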

  5. Vectorial model for guided-mode resonance gratings

    NASA Astrophysics Data System (ADS)

    Fehrembach, A.-L.; Gralak, B.; Sentenac, A.

    2018-04-01

    We propose a self-consistent vectorial method, based on a Green's function technique, to describe the resonances that appear in guided-mode resonance gratings. The model provides intuitive expressions of the reflectivity and transmittivity matrices of the structure, involving coupling integrals between the modes of a planar reference structure and radiative modes. When one mode is excited, the diffracted field for a suitable polarization can be written as the sum of a resonant and a nonresonant term, thus extending the intuitive approach used to explain the Fano shape of the resonance in scalar configurations. When two modes are excited, we derive a physical analysis in a configuration which requires a vectorial approach. We provide numerical validations of our model. From a technical point of view, we show how the Green's tensor of our planar reference structure can be expressed as two scalar Green's functions, and how to deal with the singularity of the Green's tensor.

  6. Introduction to the special issue: parsimony and redundancy in models of language.

    PubMed

    Wiechmann, Daniel; Kerz, Elma; Snider, Neal; Jaeger, T Florian

    2013-09-01

    One of the most fundamental goals in linguistic theory is to understand the nature of linguistic knowledge, that is, the representations and mechanisms that figure in a cognitively plausible model of human language-processing. The past 50 years have witnessed the development and refinement of various theories about what kind of 'stuff' human knowledge of language consists of, and technological advances now permit the development of increasingly sophisticated computational models implementing key assumptions of different theories from both rationalist and empiricist perspectives. The present special issue does not aim to present or discuss the arguments for and against the two epistemological stances or discuss evidence that supports either of them (cf. Bod, Hay, & Jannedy, 2003; Christiansen & Chater, 2008; Hauser, Chomsky, & Fitch, 2002; Oaksford & Chater, 2007; O'Donnell, Hauser, & Fitch, 2005). Rather, the research presented in this issue, which we label usage-based here, conceives of linguistic knowledge as being induced from experience. According to the strongest of such accounts, the acquisition and processing of language can be explained with reference to general cognitive mechanisms alone (rather than with reference to innate language-specific mechanisms). Defined in these terms, usage-based approaches encompass approaches referred to as experience-based, performance-based and/or emergentist approaches (Arnon & Snider, 2010; Bannard, Lieven, & Tomasello, 2009; Bannard & Matthews, 2008; Chater & Manning, 2006; Clark & Lappin, 2010; Gerken, Wilson, & Lewis, 2005; Gomez, 2002;

  7. A reference model for space data system interconnection services

    NASA Astrophysics Data System (ADS)

    Pietras, John; Theis, Gerhard

    1993-03-01

    The widespread adoption of standard packet-based data communication protocols and services for spaceflight missions provides the foundation for other standard space data handling services. These space data handling services can be defined as increasingly sophisticated processing of data or information received from lower-level services, using a layering approach made famous in the International Organization for Standardization (ISO) Open System Interconnection Reference Model (OSI-RM). The Space Data System Interconnection Reference Model (SDSI-RM) incorporates the conventions of the OSIRM to provide a framework within which a complete set of space data handling services can be defined. The use of the SDSI-RM is illustrated through its application to data handling services and protocols that have been defined by, or are under consideration by, the Consultative Committee for Space Data Systems (CCSDS).

  8. A reference model for space data system interconnection services

    NASA Technical Reports Server (NTRS)

    Pietras, John; Theis, Gerhard

    1993-01-01

    The widespread adoption of standard packet-based data communication protocols and services for spaceflight missions provides the foundation for other standard space data handling services. These space data handling services can be defined as increasingly sophisticated processing of data or information received from lower-level services, using a layering approach made famous in the International Organization for Standardization (ISO) Open System Interconnection Reference Model (OSI-RM). The Space Data System Interconnection Reference Model (SDSI-RM) incorporates the conventions of the OSIRM to provide a framework within which a complete set of space data handling services can be defined. The use of the SDSI-RM is illustrated through its application to data handling services and protocols that have been defined by, or are under consideration by, the Consultative Committee for Space Data Systems (CCSDS).

  9. A Numerical-Analytical Approach to Modeling the Axial Rotation of the Earth

    NASA Astrophysics Data System (ADS)

    Markov, Yu. G.; Perepelkin, V. V.; Rykhlova, L. V.; Filippova, A. S.

    2018-04-01

    A model for the non-uniform axial rotation of the Earth is studied using a celestial-mechanical approach and numerical simulations. The application of an approximate model containing a small number of parameters to predict variations of the axial rotation velocity of the Earth over short time intervals is justified. This approximate model is obtained by averaging variable parameters that are subject to small variations due to non-stationarity of the perturbing factors. The model is verified and compared with predictions over a long time interval published by the International Earth Rotation and Reference Systems Service (IERS).

  10. Polarizable Force Fields and Polarizable Continuum Model: A Fluctuating Charges/PCM Approach. 1. Theory and Implementation.

    PubMed

    Lipparini, Filippo; Barone, Vincenzo

    2011-11-08

    We present a combined fluctuating charges-polarizable continuum model approach to describe molecules in solution. Both static and dynamic approaches are discussed: analytical first and second derivatives are shown as well as an extended lagrangian for molecular dynamics simulations. In particular, we use the polarizable continuum model to provide nonperiodic boundary conditions for molecular dynamics simulations of aqueous solutions. The extended lagrangian method is extensively discussed, with specific reference to the fluctuating charge model, from a numerical point of view by means of several examples, and a rationalization of the behavior found is presented. Several prototypical applications are shown, especially regarding solvation of ions and polar molecules in water.

  11. Defining Chlorophyll-a Reference Conditions in European Lakes

    PubMed Central

    Alves, Maria Helena; Argillier, Christine; van den Berg, Marcel; Buzzi, Fabio; Hoehn, Eberhard; de Hoyos, Caridad; Karottki, Ivan; Laplace-Treyture, Christophe; Solheim, Anne Lyche; Ortiz-Casas, José; Ott, Ingmar; Phillips, Geoff; Pilke, Ansa; Pádua, João; Remec-Rekar, Spela; Riedmüller, Ursula; Schaumburg, Jochen; Serrano, Maria Luisa; Soszka, Hanna; Tierney, Deirdre; Urbanič, Gorazd; Wolfram, Georg

    2010-01-01

    The concept of “reference conditions” describes the benchmark against which current conditions are compared when assessing the status of water bodies. In this paper we focus on the establishment of reference conditions for European lakes according to a phytoplankton biomass indicator—the concentration of chlorophyll-a. A mostly spatial approach (selection of existing lakes with no or minor human impact) was used to set the reference conditions for chlorophyll-a values, supplemented by historical data, paleolimnological investigations and modelling. The work resulted in definition of reference conditions and the boundary between “high” and “good” status for 15 main lake types and five ecoregions of Europe: Alpine, Atlantic, Central/Baltic, Mediterranean, and Northern. Additionally, empirical models were developed for estimating site-specific reference chlorophyll-a concentrations from a set of potential predictor variables. The results were recently formulated into the EU legislation, marking the first attempt in international water policy to move from chemical quality standards to ecological quality targets. PMID:20401659

  12. A User-Centered Approach to Adaptive Hypertext Based on an Information Relevance Model

    NASA Technical Reports Server (NTRS)

    Mathe, Nathalie; Chen, James

    1994-01-01

    Rapid and effective access to information in large electronic documentation systems can be facilitated if information relevant in an individual user's context can be automatically supplied to this user. However, most of this knowledge on contextual relevance is not found within the contents of documents; rather, it is established incrementally by users during information access. We propose a new model for interactively learning contextual relevance during information retrieval, and incrementally adapting retrieved information to individual user profiles. The model, called a relevance network, records the relevance of references based on user feedback for specific queries and user profiles. It also generalizes such knowledge to later derive relevant references for similar queries and profiles. The relevance network lets users filter information by context of relevance. Compared to other approaches, it does not require any prior knowledge or training. More importantly, our approach to adaptivity is user-centered. It facilitates acceptance and understanding by users by giving them shared control over the adaptation without disturbing their primary task. Users easily control when to adapt and when to use the adapted system. Lastly, the model is independent of the particular application used to access information, and supports sharing of adaptations among users.

  13. Visual guidance of mobile platforms

    NASA Astrophysics Data System (ADS)

    Blissett, Rodney J.

    1993-12-01

    Two systems are described and results presented demonstrating aspects of real-time visual guidance of autonomous mobile platforms. The first approach incorporates prior knowledge in the form of rigid geometrical models linking visual references within the environment. The second approach is based on a continuous synthesis of information extracted from image tokens to generate a coarse-grained world model, from which potential obstacles are inferred. The use of these techniques in workplace applications is discussed.

  14. Building Multiclass Classifiers for Remote Homology Detection and Fold Recognition

    DTIC Science & Technology

    2006-04-05

    classes. In this study we evaluate the effectiveness of one of these formulations that was developed by Crammer and Singer [9], which leads to...significantly more complex model can be learned by directly applying the Crammer-Singer multiclass formulation on the outputs of the binary classifiers...will refer to this as the Crammer-Singer (CS) model. Comparing the scaling approach to the Crammer-Singer approach we can see that the Crammer-Singer
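
    The Crammer-Singer multiclass formulation discussed in this report is available in scikit-learn through LinearSVC's multi_class option; the snippet below shows minimal usage on a toy dataset (not the protein remote-homology features of the report).

      from sklearn.datasets import load_iris
      from sklearn.model_selection import train_test_split
      from sklearn.svm import LinearSVC

      X, y = load_iris(return_X_y=True)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      # multi_class="crammer_singer" optimises the joint multiclass objective rather
      # than training independent one-vs-rest binary classifiers.
      clf = LinearSVC(multi_class="crammer_singer", C=1.0, max_iter=10000)
      clf.fit(X_tr, y_tr)
      print(f"test accuracy: {clf.score(X_te, y_te):.3f}")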

  15. SELECTING DISCRIMINANT FUNCTION MODELS FOR PREDICTING THE EXPECTED RICHNESS OF AQUATIC MACROINVERTEBRATES

    EPA Science Inventory

    1. The predictive modelling approach to bioassessment estimates the macroinvertebrate assemblage expected at a stream site if it were in a minimally disturbed reference condition. The difference between expected and observed assemblages then measures the departure of the site fro...

  16. A novel semi-transductive learning framework for efficient atypicality detection in chest radiographs

    NASA Astrophysics Data System (ADS)

    Alzubaidi, Mohammad; Balasubramanian, Vineeth; Patel, Ameet; Panchanathan, Sethuraman; Black, John A., Jr.

    2012-03-01

    Inductive learning refers to machine learning algorithms that learn a model from a set of training data instances. Any test instance is then classified by comparing it to the learned model. When the set of training instances lend themselves well to modeling, the use of a model substantially reduces the computation cost of classification. However, some training data sets are complex, and do not lend themselves well to modeling. Transductive learning refers to machine learning algorithms that classify test instances by comparing them to all of the training instances, without creating an explicit model. This can produce better classification performance, but at a much higher computational cost. Medical images vary greatly across human populations, constituting a data set that does not lend itself well to modeling. Our previous work showed that the wide variations seen across training sets of "normal" chest radiographs make it difficult to successfully classify test radiographs with an inductive (modeling) approach, and that a transductive approach leads to much better performance in detecting atypical regions. The problem with the transductive approach is its high computational cost. This paper develops and demonstrates a novel semi-transductive framework that can address the unique challenges of atypicality detection in chest radiographs. The proposed framework combines the superior performance of transductive methods with the reduced computational cost of inductive methods. Our results show that the proposed semitransductive approach provides both effective and efficient detection of atypical regions within a set of chest radiographs previously labeled by Mayo Clinic expert thoracic radiologists.

  17. Moving towards ecosystem-based fisheries management: Options for parameterizing multi-species biological reference points

    NASA Astrophysics Data System (ADS)

    Moffitt, Elizabeth A.; Punt, André E.; Holsman, Kirstin; Aydin, Kerim Y.; Ianelli, James N.; Ortiz, Ivonne

    2016-12-01

    Multi-species models can improve our understanding of the effects of fishing so that it is possible to make informed and transparent decisions regarding fishery impacts. Broad application of multi-species assessment models to support ecosystem-based fisheries management (EBFM) requires the development and testing of multi-species biological reference points (MBRPs) for use in harvest-control rules. We outline and contrast several possible MBRPs that range from those that can be readily used in current frameworks to those belonging to a broader EBFM context. We demonstrate each of the possible MBRPs using a simple two species model, motivated by walleye pollock (Gadus chalcogrammus) and Pacific cod (Gadus macrocephalus) in the eastern Bering Sea, to illustrate differences among methods. The MBRPs we outline each differ in how they approach the multiple, potentially conflicting management objectives and trade-offs of EBFM. These options for MBRPs allow multi-species models to be readily adapted for EBFM across a diversity of management mandates and approaches.

  18. The 'Geographic Emission Benchmark' model: a baseline approach to measuring emissions associated with deforestation and degradation.

    PubMed

    Kim, Oh Seok; Newell, Joshua P

    2015-10-01

    This paper proposes a new land-change model, the Geographic Emission Benchmark (GEB), as an approach to quantify land-cover changes associated with deforestation and forest degradation. The GEB is designed to determine 'baseline' activity data for reference levels. Unlike other models that forecast business-as-usual future deforestation, the GEB internally (1) characterizes 'forest' and 'deforestation' with minimal processing and ground-truthing and (2) identifies 'deforestation hotspots' using open-source spatial methods to estimate regional rates of deforestation. The GEB also characterizes forest degradation and identifies leakage belts. This paper compares the accuracy of GEB with GEOMOD, a popular land-change model used in the UN-REDD (Reducing Emissions from Deforestation and Forest Degradation) Program. Using a case study of the Chinese tropics for comparison, GEB's projection is more accurate than GEOMOD's, as measured by Figure of Merit. Thus, the GEB produces baseline activity data that are moderately accurate for the setting of reference levels.

  19. Quality control for quantitative PCR based on amplification compatibility test.

    PubMed

    Tichopad, Ales; Bar, Tzachi; Pecen, Ladislav; Kitchen, Robert R; Kubista, Mikael; Pfaffl, Michael W

    2010-04-01

    Quantitative PCR (qPCR) is a routinely used method for the accurate quantification of nucleic acids. Yet it may generate erroneous results if the amplification process is obscured by inhibition or by generation of aberrant side-products such as primer dimers. Several methods have been established to control for pre-processing performance that rely on the introduction of a co-amplified reference sequence; however, there is currently no method to allow for reliable control of the amplification process without directly modifying the sample mix. Herein we present a statistical approach based on multivariate analysis of the amplification response data generated in real-time. The amplification trajectory in its most resolved and dynamic phase is fitted with a suitable model. Two parameters of this model, related to amplification efficiency, are then used for calculation of the Z-score statistics. Each studied sample is compared to a predefined reference set of reactions, typically calibration reactions. A probabilistic decision for each individual Z-score is then used to identify the majority of inhibited reactions in our experiments. We compare this approach to univariate methods using only the sample specific amplification efficiency as reporter of the compatibility. We demonstrate improved identification performance using the multivariate approach compared to the univariate approach. Finally we stress that the performance of the amplification compatibility test as a quality control procedure depends on the quality of the reference set. Copyright 2010 Elsevier Inc. All rights reserved.
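
    A rough Python sketch of the kind of workflow described above, under several assumptions: the amplification curve is fitted with a generic four-parameter logistic model, two fitted parameters stand in for the efficiency-related parameters, and the Z-score threshold is arbitrary. Function names and parameter choices are illustrative, not taken from the paper.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import norm

        def sigmoid(c, f0, fmax, c_half, k):
            """Four-parameter logistic model of an amplification curve (fluorescence vs cycle)."""
            return f0 + fmax / (1.0 + np.exp(-(c - c_half) / k))

        def efficiency_params(cycles, fluorescence):
            """Fit the dynamic phase and return two efficiency-related parameters (illustrative choice)."""
            p0 = [fluorescence.min(), fluorescence.max(), cycles.mean(), 1.5]
            popt, _ = curve_fit(sigmoid, cycles, fluorescence, p0=p0, maxfev=5000)
            return popt[3], popt[1]          # slope parameter k and plateau height

        def z_scores(sample_params, reference_params):
            """Z-score of each parameter relative to the reference (calibration) set."""
            ref = np.asarray(reference_params)
            return (np.asarray(sample_params) - ref.mean(axis=0)) / ref.std(axis=0, ddof=1)

        def is_incompatible(z, alpha=0.01):
            """Flag a reaction if either Z-score is improbable under the reference distribution."""
            return bool(np.any(norm.sf(np.abs(z)) * 2 < alpha))

        # Example: simulated curves; the first eight serve as the reference (calibration) set.
        rng = np.random.default_rng(0)
        cycles = np.arange(1, 41, dtype=float)
        curves = [sigmoid(cycles, 0.05, 1.0, 22 + rng.normal(0, 0.5), 1.6) + rng.normal(0, 0.01, 40)
                  for _ in range(12)]
        ref_params = [efficiency_params(cycles, c) for c in curves[:8]]
        for c in curves[8:]:
            print(is_incompatible(z_scores(efficiency_params(cycles, c), ref_params)))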

  20. A Blended Learning Approach to Teaching Project Management: A Model for Active Participation and Involvement--Insights from Norway

    ERIC Educational Resources Information Center

    Hussein, Bassam A.

    2015-01-01

    The paper demonstrates and evaluates the effectiveness of a blended learning approach to create a meaningful learning environment. We use the term blended learning approach in this paper to refer to the use of multiple or hybrid instructional methods that emphasize the role of learners as contributors to the learning process rather than recipients…

  1. Fuzzy model-based servo and model following control for nonlinear systems.

    PubMed

    Ohtake, Hiroshi; Tanaka, Kazuo; Wang, Hua O

    2009-12-01

    This correspondence presents servo and nonlinear model following controls for a class of nonlinear systems using the Takagi-Sugeno fuzzy model-based control approach. First, the construction method of the augmented fuzzy system for continuous-time nonlinear systems is proposed by differentiating the original nonlinear system. Second, the dynamic fuzzy servo controller and the dynamic fuzzy model following controller, which can make outputs of the nonlinear system converge to target points and to outputs of the reference system, respectively, are introduced. Finally, the servo and model following controller design conditions are given in terms of linear matrix inequalities. Design examples illustrate the utility of this approach.

  2. Modelling Greenland Outlet Glaciers

    NASA Technical Reports Server (NTRS)

    vanderVeen, Cornelis; Abdalati, Waleed (Technical Monitor)

    2001-01-01

    The objective of this project was to develop simple yet realistic models of Greenland outlet glaciers to better understand ongoing changes and to identify possible causes for these changes. Several approaches can be taken to evaluate the interaction between climate forcing and ice dynamics, and the consequent ice-sheet response, which may involve changes in flow style. To evaluate the ice-sheet response to mass-balance forcing, Van der Veen (Journal of Geophysical Research, in press) makes the assumption that this response can be considered a perturbation on the reference state and may be evaluated separately from how this reference state evolves over time. Mass-balance forcing has an immediate effect on the ice sheet. Initially, the rate of thickness change as compared to the reference state equals the perturbation in snowfall or ablation. If the forcing persists, the ice sheet responds dynamically, adjusting the rate at which ice is evacuated from the interior to the margins, to achieve a new equilibrium. For large ice sheets, this dynamic adjustment may last for thousands of years, with the magnitude of change decreasing steadily over time as a new equilibrium is approached. This response can be described using kinematic wave theory. This theory, modified to pertain to Greenland drainage basins, was used to evaluate possible ice-sheet responses to perturbations in surface mass balance. The reference state is defined based on measurements along the central flowline of Petermann Glacier in north-west Greenland, and perturbations on this state are considered. The advantage of this approach is that the particulars of the dynamical flow regime need not be explicitly known but are incorporated through the parameterization of the reference ice flux or longitudinal velocity profile. The results of the kinematic wave model indicate that significant rates of thickness change can occur immediately after the prescribed change in surface mass balance but adjustments in flow rapidly diminish these rates to a few cm/yr at most. The time scale for adjustment is of the order of a thousand years or so.

  3. Potential for Inclusion of Information Encountering within Information Literacy Models

    ERIC Educational Resources Information Center

    Erdelez, Sanda; Basic, Josipa; Levitov, Deborah D.

    2011-01-01

    Introduction: Information encountering (finding information while searching for some other information), is a type of opportunistic discovery of information that complements purposeful approaches to finding information. The motivation for this paper was to determine if the current models of information literacy instruction refer to information…

  4. Ecosystem approach to fisheries: Exploring environmental and trophic effects on Maximum Sustainable Yield (MSY) reference point estimates

    PubMed Central

    Kumar, Rajeev; Pitcher, Tony J.; Varkey, Divya A.

    2017-01-01

    We present a comprehensive analysis of estimation of fisheries Maximum Sustainable Yield (MSY) reference points using an ecosystem model built for Mille Lacs Lake, the second largest lake within Minnesota, USA. Data from single-species modelling output, extensive annual sampling for species abundances, annual catch-survey, stomach-content analysis for predator-prey interactions, and expert opinions were brought together within the framework of an Ecopath with Ecosim (EwE) ecosystem model. An increase in the lake water temperature was observed in the last few decades; therefore, we also incorporated a temperature forcing function in the EwE model to capture the influences of changing temperature on the species composition and food web. The EwE model was fitted to abundance and catch time-series for the period 1985 to 2006. Using the ecosystem model, we estimated reference points for most of the fished species in the lake at single-species as well as ecosystem levels with and without considering the influence of temperature change; therefore, our analysis investigated the trophic and temperature effects on the reference points. The paper concludes that reference points such as MSY are not stationary, but change when (1) environmental conditions alter species productivity and (2) fishing on predators alters the compensatory response of their prey. Thus, it is necessary for management to re-estimate or re-evaluate the reference points when changes in environmental conditions and/or major shifts in species abundance or community structure are observed. PMID:28957387

  5. Development of an Assessment Model for Sustainable Supply Chain Management in Batik Industry

    NASA Astrophysics Data System (ADS)

    Mubiena, G. F.; Ma’ruf, A.

    2018-03-01

    This research proposes a dynamic assessment model for sustainable supply chain management in the batik industry. The proposed model identifies the dynamic relationships between the economic, environmental and social aspects. The economic aspect refers to the supply chain operations reference model. The environmental aspect uses carbon emissions and liquid waste as the assessment attributes, while the social aspect focuses on employees' welfare. The lean manufacturing concept was implemented as an alternative approach to sustainability. The simulation result shows that the average sustainability score over 5 years increased from 65.3% to 70%. Future experiments will be conducted on design improvements to reach the company's target sustainability score.

  6. Word learning emerges from the interaction of online referent selection and slow associative learning.

    PubMed

    McMurray, Bob; Horst, Jessica S; Samuelson, Larissa K

    2012-10-01

    Classic approaches to word learning emphasize referential ambiguity: In naming situations, a novel word could refer to many possible objects, properties, actions, and so forth. To solve this, researchers have posited constraints, and inference strategies, but assume that determining the referent of a novel word is isomorphic to learning. We present an alternative in which referent selection is an online process and independent of long-term learning. We illustrate this theoretical approach with a dynamic associative model in which referent selection emerges from real-time competition between referents and learning is associative (Hebbian). This model accounts for a range of findings including the differences in expressive and receptive vocabulary, cross-situational learning under high degrees of ambiguity, accelerating (vocabulary explosion) and decelerating (power law) learning, fast mapping by mutual exclusivity (and differences in bilinguals), improvements in familiar word recognition with development, and correlations between speed of processing and learning. Together it suggests that (a) association learning buttressed by dynamic competition can account for much of the literature; (b) familiar word recognition is subserved by the same processes that identify the referents of novel words (fast mapping); (c) online competition may allow the children to leverage information available in the task to augment performance despite slow learning; (d) in complex systems, associative learning is highly multifaceted; and (e) learning and referent selection, though logically distinct, can be subtly related. It suggests more sophisticated ways of describing the interaction between situation- and developmental-time processes and points to the need for considering such interactions as a primary determinant of development. PsycINFO Database Record (c) 2012 APA, all rights reserved.
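
    A toy Python sketch of the general idea, separating online referent selection (real-time competition among the objects present on a trial) from slow Hebbian association learning. The update rule, learning rate and temperature are illustrative assumptions, not the paper's actual model; word i is taken to name object i in the toy world.

        import numpy as np

        rng = np.random.default_rng(1)
        n_words, n_objects = 10, 10
        W = np.full((n_words, n_objects), 0.01)          # slowly growing word-object associations

        def select_referent(word, present_objects, temperature=0.2):
            """Online referent selection: competition among the objects present on this trial."""
            strengths = W[word, present_objects]
            probs = np.exp(strengths / temperature)
            probs /= probs.sum()
            return present_objects[rng.choice(len(present_objects), p=probs)]

        def learn(word, chosen, rate=0.05):
            """Slow associative (Hebbian) update, logically distinct from the selection step."""
            W[word, chosen] += rate * (1.0 - W[word, chosen])

        # Cross-situational training: each trial names one object among several present.
        for _ in range(3000):
            target = rng.integers(n_objects)
            present = np.unique(np.append(rng.integers(n_objects, size=3), target))
            chosen = select_referent(target, present)     # in-the-moment guess, may be wrong
            learn(target, chosen)                         # learning accumulates across trials

        print((W.argmax(axis=1) == np.arange(n_words)).mean())  # fraction of words mapped correctly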

  7. Approaches in highly parameterized inversion: bgaPEST, a Bayesian geostatistical approach implementation with PEST: documentation and instructions

    USGS Publications Warehouse

    Fienen, Michael N.; D'Oria, Marco; Doherty, John E.; Hunt, Randall J.

    2013-01-01

    The application bgaPEST is a highly parameterized inversion software package implementing the Bayesian Geostatistical Approach in a framework compatible with the parameter estimation suite PEST. Highly parameterized inversion refers to cases in which parameters are distributed in space or time and are correlated with one another. The Bayesian aspect of bgaPEST is related to Bayesian probability theory in which prior information about parameters is formally revised on the basis of the calibration dataset used for the inversion. Conceptually, this approach formalizes the conditionality of estimated parameters on the specific data and model available. The geostatistical component of the method refers to the way in which prior information about the parameters is used. A geostatistical autocorrelation function is used to enforce structure on the parameters to avoid overfitting and unrealistic results. Bayesian Geostatistical Approach is designed to provide the smoothest solution that is consistent with the data. Optionally, users can specify a level of fit or estimate a balance between fit and model complexity informed by the data. Groundwater and surface-water applications are used as examples in this text, but the possible uses of bgaPEST extend to any distributed parameter applications.
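
    The following toy Python sketch illustrates the flavor of a Bayesian geostatistical estimate, not bgaPEST itself: a spatially distributed parameter field is recovered from a few noisy point observations using an exponential autocorrelation prior, yielding the smoothest field consistent with the data. The correlation length, noise level and grid are arbitrary assumptions.

        import numpy as np

        # Toy highly parameterized inversion: estimate a 1-D parameter field m from
        # sparse, noisy observations d = G @ m_true + noise, with a geostatistical
        # (exponential covariance) prior enforcing spatial structure.
        n = 60
        x = np.linspace(0.0, 1.0, n)
        Q = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)     # prior covariance, correlation length 0.2

        rng = np.random.default_rng(2)
        m_true = np.linalg.cholesky(Q + 1e-10 * np.eye(n)) @ rng.normal(size=n)
        G = np.zeros((12, n))
        G[np.arange(12), rng.choice(n, 12, replace=False)] = 1.0   # 12 point observations
        d = G @ m_true + 0.05 * rng.normal(size=12)

        # Linear Bayesian (kriging-like) estimate: smoothest field consistent with the data.
        R = 0.05 ** 2 * np.eye(12)                              # observation error covariance
        m_hat = Q @ G.T @ np.linalg.solve(G @ Q @ G.T + R, d)

        print(np.round(np.corrcoef(m_true, m_hat)[0, 1], 3))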

  8. Relations between water physico-chemistry and benthic algal communities in a northern Canadian watershed: defining reference conditions using multiple descriptors of community structure.

    PubMed

    Thomas, Kathryn E; Hall, Roland I; Scrimgeour, Garry J

    2015-09-01

    Defining reference conditions is central to identifying environmental effects of anthropogenic activities. Using a watershed approach, we quantified reference conditions for benthic algal communities and their relations to physico-chemical conditions in rivers in the South Nahanni River watershed, NWT, Canada, in 2008 and 2009. We also compared the ability of three descriptors that vary in terms of analytical costs to define algal community structure based on relative abundances of (i) all algal taxa, (ii) only diatom taxa, and (iii) photosynthetic pigments. Ordination analyses showed that variance in algal community structure was strongly related to gradients in environmental variables describing water physico-chemistry, stream habitats, and sub-watershed structure. Water physico-chemistry and local watershed-scale descriptors differed significantly between algal communities from sites in the Selwyn Mountain ecoregion compared to sites in the Nahanni-Hyland ecoregions. Distinct differences in algal community types between ecoregions were apparent irrespective of whether algal community structure was defined using all algal taxa, diatom taxa, or photosynthetic pigments. Two algal community types were highly predictable using environmental variables, a core consideration in the development of Reference Condition Approach (RCA) models. These results suggest that assessments of environmental impacts could be completed using RCA models for each ecoregion. We suggest that use of algal pigments, a high through-put analysis, is a promising alternative compared to more labor-intensive and costly taxonomic approaches for defining algal community structure.

  9. Feature weighting using particle swarm optimization for learning vector quantization classifier

    NASA Astrophysics Data System (ADS)

    Dongoran, A.; Rahmadani, S.; Zarlis, M.; Zakarias

    2018-03-01

    This paper discusses and proposes a method of feature weighting for classification tasks on the competitive-learning artificial neural network LVQ. The feature weighting method searches for attribute weights using particle swarm optimization (PSO) so as to influence the resulting output. The method is then applied to the LVQ classifier and tested on 3 datasets obtained from the UCI Machine Learning Repository. Accuracy is then analyzed for two approaches: the first uses LVQ1, referred to as LVQ-Classifier, and the second, referred to as PSOFW-LVQ, is the proposed model. The results show that the PSO algorithm is capable of finding attribute weights that increase the accuracy of the LVQ classifier.
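
    A minimal Python sketch of the idea, with two simplifications relative to the paper: a nearest-prototype rule (class means) stands in for the LVQ1 classifier, and a bare-bones particle swarm searches for the feature-weight vector. The dataset choice, swarm coefficients and iteration counts are illustrative assumptions.

        import numpy as np
        from sklearn.datasets import load_iris
        from sklearn.model_selection import train_test_split

        # Nearest-prototype stand-in for the LVQ classifier: one prototype per class,
        # with per-feature weights applied inside the distance computation.
        X, y = load_iris(return_X_y=True)
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
        classes = np.unique(ytr)
        prototypes = np.array([Xtr[ytr == c].mean(axis=0) for c in classes])

        def accuracy(weights, X_, y_):
            d = ((X_[:, None, :] - prototypes[None, :, :]) ** 2 * weights).sum(axis=2)
            return (classes[d.argmin(axis=1)] == y_).mean()

        # Minimal particle swarm over the feature-weight vector (fitness on training data).
        rng = np.random.default_rng(0)
        n_particles, n_features = 20, X.shape[1]
        pos = rng.uniform(0.0, 1.0, (n_particles, n_features))
        vel = np.zeros_like(pos)
        pbest = pos.copy()
        pbest_val = np.array([accuracy(p, Xtr, ytr) for p in pos])
        gbest = pbest[pbest_val.argmax()].copy()

        for _ in range(50):
            r1, r2 = rng.uniform(size=pos.shape), rng.uniform(size=pos.shape)
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, 0.0, 1.0)
            vals = np.array([accuracy(p, Xtr, ytr) for p in pos])
            better = vals > pbest_val
            pbest[better], pbest_val[better] = pos[better], vals[better]
            gbest = pbest[pbest_val.argmax()].copy()

        print("unweighted test accuracy:", accuracy(np.ones(n_features), Xte, yte))
        print("PSO-weighted test accuracy:", accuracy(gbest, Xte, yte))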

  10. Recent advances in QM/MM free energy calculations using reference potentials.

    PubMed

    Duarte, Fernanda; Amrein, Beat A; Blaha-Nelson, David; Kamerlin, Shina C L

    2015-05-01

    Recent years have seen enormous progress in the development of methods for modeling (bio)molecular systems. This has allowed for the simulation of ever larger and more complex systems. However, as such complexity increases, the requirements needed for these models to be accurate and physically meaningful become more and more difficult to fulfill. The use of simplified models to describe complex biological systems has long been shown to be an effective way to overcome some of the limitations associated with this computational cost in a rational way. Hybrid QM/MM approaches have rapidly become one of the most popular computational tools for studying chemical reactivity in biomolecular systems. However, the high cost involved in performing high-level QM calculations has limited the applicability of these approaches when calculating free energies of chemical processes. In this review, we present some of the advances in using reference potentials and mean field approximations to accelerate high-level QM/MM calculations. We present illustrative applications of these approaches and discuss challenges and future perspectives for the field. The use of physically-based simplifications has been shown to effectively reduce the cost of high-level QM/MM calculations. In particular, lower-level reference potentials enable one to reduce the cost of expensive free energy calculations, thus expanding the scope of problems that can be addressed. As was already demonstrated 40 years ago, the usage of simplified models still allows one to obtain cutting-edge results with substantially reduced computational cost. This article is part of a Special Issue entitled Recent developments of molecular dynamics. Copyright © 2014. Published by Elsevier B.V.

  11. A Poisson hierarchical modelling approach to detecting copy number variation in sequence coverage data.

    PubMed

    Sepúlveda, Nuno; Campino, Susana G; Assefa, Samuel A; Sutherland, Colin J; Pain, Arnab; Clark, Taane G

    2013-02-26

    The advent of next generation sequencing technology has accelerated efforts to map and catalogue copy number variation (CNV) in genomes of important micro-organisms for public health. A typical analysis of the sequence data involves mapping reads onto a reference genome, calculating the respective coverage, and detecting regions with too-low or too-high coverage (deletions and amplifications, respectively). Current CNV detection methods rely on statistical assumptions (e.g., a Poisson model) that may not hold in general, or require fine-tuning the underlying algorithms to detect known hits. We propose a new CNV detection methodology based on two Poisson hierarchical models, the Poisson-Gamma and Poisson-Lognormal, with the advantage of being sufficiently flexible to describe different data patterns, whilst robust against deviations from the often assumed Poisson model. Using sequence coverage data of 7 Plasmodium falciparum malaria genomes (3D7 reference strain, HB3, DD2, 7G8, GB4, OX005, and OX006), we showed that empirical coverage distributions are intrinsically asymmetric and overdispersed in relation to the Poisson model. We also demonstrated a low baseline false positive rate for the proposed methodology using 3D7 resequencing data and simulation. When applied to the non-reference isolate data, our approach detected known CNV hits, including an amplification of the PfMDR1 locus in DD2 and a large deletion in the CLAG3.2 gene in GB4, and putative novel CNV regions. When compared to the recently available FREEC and cn.MOPS approaches, our findings were more concordant with putative hits from the highest quality array data for the 7G8 and GB4 isolates. In summary, the proposed methodology brings an increase in flexibility, robustness, accuracy and statistical rigour to CNV detection using sequence coverage data.
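
    A toy Python sketch of a Poisson-Gamma style CNV screen on simulated coverage, for illustration only: the Poisson-Gamma mixture is equivalent to a negative binomial, which is fitted here by the method of moments and used to flag windows with improbably low or high coverage. The simulated data, the injected CNVs and the threshold are assumptions, not the paper's hierarchical model or data.

        import numpy as np
        from scipy import stats

        # Simulated windowed read coverage: overdispersed baseline plus two injected CNVs.
        rng = np.random.default_rng(3)
        coverage = rng.negative_binomial(n=20, p=20 / (20 + 60.0), size=5000)  # mean ~60, var > mean
        coverage[1000:1020] *= 2          # simulated amplification
        coverage[3000:3020] //= 4         # simulated deletion

        # Method-of-moments fit of the negative binomial (Poisson-Gamma) to the bulk of the genome.
        m, v = coverage.mean(), coverage.var()
        p = m / v                          # requires v > m, i.e. overdispersion relative to Poisson
        n = m * p / (1.0 - p)

        # Two-sided tail probabilities per window; very small values suggest candidate CNVs.
        lo = stats.nbinom.cdf(coverage, n, p)
        hi = stats.nbinom.sf(coverage - 1, n, p)
        candidates = np.where(np.minimum(lo, hi) < 1e-4)[0]
        print(candidates[:10], candidates[-10:])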

  12. Strategic development of a multivariate calibration model for the uniformity testing of tablets by transmission NIR analysis.

    PubMed

    Sasakura, D; Nakayama, K; Sakamoto, T; Chikuma, T

    2015-05-01

    The use of transmission near infrared spectroscopy (TNIRS) is of particular interest in the pharmaceutical industry because TNIRS does not require sample preparation and can analyze several tens of tablet samples in an hour. It has the capability to measure all relevant information from a tablet while still on the production line. However, TNIRS has a narrow spectral range and overtone vibrations often overlap. To perform content uniformity testing of tablets by TNIRS, various properties of the tableting process need to be analyzed with a multivariate prediction model, such as partial least squares regression modeling. One issue is that typical approaches rely on several hundred reference samples as the basis of the method rather than on a strategically designed calibration set. This means that many batches are needed to prepare the reference samples, which requires time and is not cost effective. Our group investigated the concentration dependence of the calibration model with a strategic design. Consequently, we developed a more effective approach to the TNIRS calibration model than the existing methodology.
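
    A small Python sketch of a PLS calibration of the kind discussed above, using synthetic "spectra" built from two overlapping bands; the scikit-learn PLSRegression estimator and a cross-validated RMSE are used for illustration. The spectral model, sample sizes and component count are arbitrary assumptions, not the study's data.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_score

        # Synthetic calibration set: API content plus an overlapping excipient band and noise.
        rng = np.random.default_rng(4)
        wavelengths = np.linspace(0, 1, 200)
        n_samples = 60
        content = rng.uniform(80, 120, n_samples)            # % of label claim (reference values)
        band_api = np.exp(-((wavelengths - 0.4) / 0.05) ** 2)
        band_excipient = np.exp(-((wavelengths - 0.5) / 0.08) ** 2)
        spectra = (content[:, None] * band_api
                   + rng.uniform(50, 70, (n_samples, 1)) * band_excipient
                   + rng.normal(0, 0.5, (n_samples, 200)))

        # Partial least squares calibration, assessed by 5-fold cross-validation.
        pls = PLSRegression(n_components=3)
        rmse_cv = -cross_val_score(pls, spectra, content, cv=5,
                                   scoring="neg_root_mean_squared_error").mean()
        print(f"cross-validated RMSE: {rmse_cv:.2f} % of label claim")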

  13. A color prediction model for imagery analysis

    NASA Technical Reports Server (NTRS)

    Skaley, J. E.; Fisher, J. R.; Hardy, E. E.

    1977-01-01

    A simple model has been devised to selectively construct several points within a scene using multispectral imagery. The model correlates black-and-white density values to color components of diazo film so as to maximize the color contrast of two or three points per composite. The CIE (Commission Internationale de l'Eclairage) color coordinate system is used as a quantitative reference to locate these points in color space. Superimposed on this quantitative reference is a perceptional framework which functionally contrasts color values in a psychophysical sense. This methodology permits a more quantitative approach to the manual interpretation of multispectral imagery while resulting in improved accuracy and lower costs.

  14. Detailed clinical models: representing knowledge, data and semantics in healthcare information technology.

    PubMed

    Goossen, William T F

    2014-07-01

    This paper will present an overview of the developmental effort in harmonizing clinical knowledge modeling using Detailed Clinical Models (DCMs), and will explain how it can contribute to the preservation of Electronic Health Record (EHR) data. Clinical knowledge modeling is vital for the management and preservation of EHRs and data. Such modeling provides common data elements and terminology binding with the intention of capturing and managing clinical information over time and location, independent of technology. Any EHR data exchange without agreed clinical knowledge modeling will potentially result in loss of information. There have been many past attempts to model clinical knowledge for the benefit of semantic interoperability using standardized data representation and common terminologies. The objective of each project is similar with respect to consistent representation of clinical data, using standardized terminologies, and an overall logical approach. However, the conceptual, logical, and technical expressions are quite different in one clinical knowledge modeling approach versus another. There currently are synergies under the Clinical Information Modeling Initiative (CIMI) to create a harmonized reference model for clinical knowledge models. The goal of the CIMI is to create a reference model and formalisms based on, for instance, the DCM (ISO/TS 13972), among other work. A global repository of DCMs may potentially be established in the future.

  15. Direct model reference adaptive control of a flexible robotic manipulator

    NASA Technical Reports Server (NTRS)

    Meldrum, D. R.

    1985-01-01

    Quick, precise control of a flexible manipulator in a space environment is essential for future Space Station repair and satellite servicing. Numerous control algorithms have proven successful in controlling rigid manipulators with colocated sensors and actuators; however, few have been tested on a flexible manipulator with noncolocated sensors and actuators. In this thesis, a model reference adaptive control (MRAC) scheme based on command generator tracker theory is designed for a flexible manipulator. Quicker, more precise tracking results are expected over nonadaptive control laws for this MRAC approach. Equations of motion in modal coordinates are derived for a single-link, flexible manipulator with an actuator at the pinned-end and a sensor at the free end. An MRAC is designed with the objective of controlling the torquing actuator so that the tip position follows a trajectory that is prescribed by the reference model. An appealing feature of this direct MRAC law is that it allows the reference model to have fewer states than the plant itself. Direct adaptive control also adjusts the controller parameters directly with knowledge of only the plant output and input signals.
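
    For readers unfamiliar with MRAC, the following Python sketch shows a standard scalar direct model-reference adaptive controller with Lyapunov-type update laws, not the thesis's command generator tracker design: the adaptive gains are adjusted from the tracking error so that the plant output follows the reference model. The plant and reference-model parameters are illustrative.

        import numpy as np

        # Scalar direct MRAC: the plant x' = a*x + b*u is driven so that its output tracks
        # a reference model xm' = am*xm + bm*r; the controller uses only measured signals
        # and the sign of b (a and b themselves are treated as unknown).
        a, b = 1.0, 3.0                 # "unknown" plant (open-loop unstable)
        am, bm = -4.0, 4.0              # stable reference model
        gamma, dt = 2.0, 1e-3
        x = xm = 0.0
        theta_x = theta_r = 0.0         # adaptive feedback and feedforward gains

        log = []
        for k in range(int(20 / dt)):
            t = k * dt
            r = 1.0 if int(t) % 4 < 2 else -1.0        # square-wave command
            u = theta_x * x + theta_r * r
            e = x - xm                                  # tracking error
            # Lyapunov-based adaptation laws
            theta_x += dt * (-gamma * e * x * np.sign(b))
            theta_r += dt * (-gamma * e * r * np.sign(b))
            x += dt * (a * x + b * u)                   # plant (Euler integration)
            xm += dt * (am * xm + bm * r)               # reference model
            log.append(abs(e))

        print("mean |e| first 2 s:", np.round(np.mean(log[:2000]), 3),
              "last 2 s:", np.round(np.mean(log[-2000:]), 3))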

  16. Abstraction Techniques for Parameterized Verification

    DTIC Science & Technology

    2006-11-01

    approach for applying model checking to unbounded systems is to extract finite state models from them using conservative abstraction techniques...Multiple Reference Processes...Adding Monitor Processes...model checking to complex pieces of code like device drivers depends on the use of abstraction methods. An abstraction method extracts a small finite

  17. A full-spectral Bayesian reconstruction approach based on the material decomposition model applied in dual-energy computed tomography.

    PubMed

    Cai, C; Rodet, T; Legoupil, S; Mohammad-Djafari, A

    2013-11-01

    Dual-energy computed tomography (DECT) makes it possible to get two fractions of basis materials without segmentation. One is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Referring to Bayesian inferences, the decomposition fractions and observation variance are estimated by using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also necessary to have accurate spectrum information about the source-detector system. When dealing with experimental data, the spectrum can be predicted by a Monte Carlo simulator. For the materials between water and bone, less than 5% separation errors are observed on the estimated decomposition fractions. The proposed approach is a statistical reconstruction approach based on a nonlinear forward model accounting for the full beam polychromaticity and applied directly to the projections without taking the negative log. Compared to the approaches based on linear forward models and the BHA correction approaches, it has advantages in noise robustness and reconstruction accuracy.

  18. Approach to Computer Implementation of Mathematical Model of 3-Phase Induction Motor

    NASA Astrophysics Data System (ADS)

    Pustovetov, M. Yu

    2018-03-01

    This article discusses the development of a computer model of an induction motor based on the mathematical model in a three-phase stator reference frame. It uses an approach that allows two methods to be combined during preparation of the computer model: visual circuit programming (in the form of electrical schematics) and logical programming (in the form of block diagrams). The approach enables easy integration of the induction motor model as part of more complex models of electrical complexes and systems. The developed computer model gives the user access to the beginning and the end of the winding of each of the three phases of the stator and rotor. This property is particularly important when considering asymmetric modes of operation or when the motor is powered by the special circuitry of semiconductor converters.

  19. Comparison of 3 estimation methods of mycophenolic acid AUC based on a limited sampling strategy in renal transplant patients.

    PubMed

    Hulin, Anne; Blanchet, Benoît; Audard, Vincent; Barau, Caroline; Furlan, Valérie; Durrbach, Antoine; Taïeb, Fabrice; Lang, Philippe; Grimbert, Philippe; Tod, Michel

    2009-04-01

    A significant relationship between mycophenolic acid (MPA) area under the plasma concentration-time curve (AUC) and the risk for rejection has been reported. Based on 3 concentration measurements, 3 approaches have been proposed for the estimation of MPA AUC, involving either a multilinear regression approach model (MLRA) or a Bayesian estimation using either gamma absorption or zero-order absorption population models. The aim of the study was to compare the 3 approaches for the estimation of MPA AUC in 150 renal transplant patients treated with mycophenolate mofetil and tacrolimus. The population parameters were determined in 77 patients (learning study). The AUC estimation methods were compared in the learning population and in 73 patients from another center (validation study). In the latter study, the reference AUCs were estimated by the trapezoidal rule on 8 measurements. MPA concentrations were measured by liquid chromatography. The gamma absorption model gave the best fit. In the learning study, the AUCs estimated by both Bayesian methods were very similar, whereas the multilinear approach was highly correlated but yielded estimates about 20% lower than Bayesian methods. This resulted in dosing recommendations differing by 250 mg/12 h or more in 27% of cases. In the validation study, AUC estimates based on the Bayesian method with gamma absorption model and multilinear regression approach model were, respectively, 12% higher and 7% lower than the reference values. To conclude, the bicompartmental model with gamma absorption rate gave the best fit. The 3 AUC estimation methods are highly correlated but not concordant. For a given patient, the same estimation method should always be used.
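
    A toy Python sketch of the limited-sampling idea using the multilinear regression approach: a regression of AUC on three early concentrations is fitted on a learning set of simulated profiles and validated against the trapezoidal reference AUC. The pharmacokinetic model, sampling times and noise level are assumptions for illustration, not the study's data.

        import numpy as np

        rng = np.random.default_rng(5)
        t_full = np.array([0, 0.33, 0.66, 1, 2, 3, 6, 9, 12.0])      # rich sampling times (h)

        def simulate_profile():
            """One-compartment oral-absorption profile with random PK parameters (illustrative)."""
            dose, ka = 1000.0, rng.uniform(1.5, 4.0)
            ke, vd = rng.uniform(0.1, 0.3), rng.uniform(40, 80)
            c = dose * ka / (vd * (ka - ke)) * (np.exp(-ke * t_full) - np.exp(-ka * t_full))
            return c * rng.lognormal(0.0, 0.05, size=t_full.size)     # multiplicative assay noise

        profiles = np.array([simulate_profile() for _ in range(150)])
        # Reference AUC(0-12 h): trapezoidal rule over all sampling points.
        auc_ref = np.sum((profiles[:, 1:] + profiles[:, :-1]) / 2 * np.diff(t_full), axis=1)

        # Limited sampling: AUC ~ a0 + a1*C(1 h) + a2*C(2 h) + a3*C(3 h), i.e. columns 3, 4, 5.
        X = np.column_stack([np.ones(150), profiles[:, 3], profiles[:, 4], profiles[:, 5]])
        coef, *_ = np.linalg.lstsq(X[:100], auc_ref[:100], rcond=None)   # learning set
        auc_lss = X[100:] @ coef                                         # validation set
        bias = np.mean((auc_lss - auc_ref[100:]) / auc_ref[100:]) * 100
        print(f"mean bias of limited-sampling estimate: {bias:+.1f}%")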

  20. SkyFACT: high-dimensional modeling of gamma-ray emission with adaptive templates and penalized likelihoods

    NASA Astrophysics Data System (ADS)

    Storm, Emma; Weniger, Christoph; Calore, Francesca

    2017-08-01

    We present SkyFACT (Sky Factorization with Adaptive Constrained Templates), a new approach for studying, modeling and decomposing diffuse gamma-ray emission. Like most previous analyses, the approach relies on predictions from cosmic-ray propagation codes like GALPROP and DRAGON. However, in contrast to previous approaches, we account for the fact that models are not perfect and allow for a very large number (≳ 10^5) of nuisance parameters to parameterize these imperfections. We combine methods of image reconstruction and adaptive spatio-spectral template regression in one coherent hybrid approach. To this end, we use penalized Poisson likelihood regression, with regularization functions that are motivated by the maximum entropy method. We introduce methods to efficiently handle the high dimensionality of the convex optimization problem as well as the associated semi-sparse covariance matrix, using the L-BFGS-B algorithm and Cholesky factorization. We test the method both on synthetic data as well as on gamma-ray emission from the inner Galaxy, |l| < 90° and |b| < 20°, as observed by the Fermi Large Area Telescope. We finally define a simple reference model that removes most of the residual emission from the inner Galaxy, based on conventional diffuse emission components as well as components for the Fermi bubbles, the Fermi Galactic center excess, and extended sources along the Galactic disk. Variants of this reference model can serve as basis for future studies of diffuse emission in and outside the Galactic disk.
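
    A much-reduced Python sketch of penalized Poisson likelihood template regression with L-BFGS-B, in the spirit of the approach described above: per-pixel modulation parameters act as nuisance parameters on one template, and a simple quadratic penalty stands in for the maximum-entropy-motivated regularizer. The templates, penalty strength and dimensions are illustrative assumptions, not SkyFACT itself.

        import numpy as np
        from scipy.optimize import minimize

        # Toy data: counts drawn from a sum of two "diffuse model" templates, where the
        # first template is modulated pixel-by-pixel to mimic model imperfections.
        rng = np.random.default_rng(6)
        npix = 400
        templates = np.abs(rng.normal(1.0, 0.3, (2, npix)))
        true_mod = 1.0 + 0.3 * np.sin(np.linspace(0, 6 * np.pi, npix))
        counts = rng.poisson(true_mod * templates[0] * 5 + templates[1] * 2)

        def neg_penalized_loglike(theta, lam=5.0):
            mod = theta[:npix]                            # per-pixel modulation of template 0
            norms = theta[npix:]                          # global template normalizations
            mu = mod * templates[0] * norms[0] + templates[1] * norms[1]
            loglike = np.sum(counts * np.log(mu) - mu)    # Poisson log-likelihood (up to a constant)
            penalty = lam * np.sum((mod - 1.0) ** 2)      # stand-in regularizer keeping mod near 1
            return -(loglike - penalty)

        x0 = np.concatenate([np.ones(npix), [1.0, 1.0]])
        res = minimize(neg_penalized_loglike, x0, method="L-BFGS-B",
                       bounds=[(1e-3, None)] * (npix + 2))
        print(res.success, res.x[npix:])                  # fitted normalizations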

  1. Evaluating Innovation and Navigating Unseen Boundaries: Systems, Processes and People

    ERIC Educational Resources Information Center

    Fleet, Alma; De Gioia, Katey; Madden, Lorraine; Semann, Anthony

    2018-01-01

    This paper illustrates an evaluation model emerging from Australian research. With reference to a range of contexts, its usefulness is demonstrated through application to two professional development initiatives designed to improve continuity of learning in the context of the transition to school. The model reconceptualises approaches to…

  2. Cost Accounting and Analysis for University Libraries

    ERIC Educational Resources Information Center

    Leimkuhler, Ferdinand F.; Cooper, Michael D.

    1971-01-01

    The approach to library planning studied in this paper is the use of accounting models to measure library costs and implement program budgets. A cost-flow model for a university library is developed and tested with historical data from the General Library at the University of California, Berkeley. (4 references) (Author)

  3. Student Assistance Programs: New Approaches for Reducing Adolescent Substance Abuse.

    ERIC Educational Resources Information Center

    Moore, David D.; Forster, Jerald R.

    1993-01-01

    Describes school-based Student Assistance Programs (SAPs), which are designed to reduce adolescents' substance abuse. Notes that SAPs, modeled after Employee Assistance Programs in workplace, are identifying, assessing, referring, and managing cases of substance-abusing students. Sees adoption of SAP model as accelerating in response to growing…

  4. Implementation of a School-Wide Approach to Critical Thinking Instruction.

    ERIC Educational Resources Information Center

    Kassem, Cherrie L.

    2000-01-01

    To improve students' critical-thinking skills, an interdisciplinary team of educators collaborated with a specialist. The result: a new model for infusing thinking-skills instruction. This paper describes the change process, the CRTA model's evolution, derivation of its acronym, and early qualitative results. (Contains 31 references.) (MLH)

  5. Perturbational treatment of spin-orbit coupling for generally applicable high-level multi-reference methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mai, Sebastian; Marquetand, Philipp; González, Leticia

    2014-08-21

    An efficient perturbational treatment of spin-orbit coupling within the framework of high-level multi-reference techniques has been implemented in the most recent version of the COLUMBUS quantum chemistry package, extending the existing fully variational two-component (2c) multi-reference configuration interaction singles and doubles (MRCISD) method. The proposed scheme follows related implementations of quasi-degenerate perturbation theory (QDPT) model space techniques. Our model space is built either from uncontracted, large-scale scalar relativistic MRCISD wavefunctions or based on the scalar-relativistic solutions of the linear-response-theory-based multi-configurational averaged quadratic coupled cluster method (LRT-MRAQCC). The latter approach allows for a consistent, approximately size-consistent and size-extensive treatment of spin-orbit coupling. The approach is described in detail and compared to a number of related techniques. The inherent accuracy of the QDPT approach is validated by comparing cuts of the potential energy surfaces of acrolein and its S, Se, and Te analogues with the corresponding data obtained from matching fully variational spin-orbit MRCISD calculations. The conceptual availability of approximate analytic gradients with respect to geometrical displacements is an attractive feature of the 2c-QDPT-MRCISD and 2c-QDPT-LRT-MRAQCC methods for structure optimization and ab initio molecular dynamics simulations.

  6. De novo assembly of the transcriptome of the non-model plant Streptocarpus rexii employing a novel heuristic to recover locus-specific transcript clusters.

    PubMed

    Chiara, Matteo; Horner, David S; Spada, Alberto

    2013-01-01

    De novo transcriptome characterization from Next Generation Sequencing data has become an important approach in the study of non-model plants. Despite notable advances in the assembly of short reads, the clustering of transcripts into unigene-like (locus-specific) clusters remains a somewhat neglected subject. Indeed, closely related paralogous transcripts are often merged into single clusters by current approaches. Here, a novel heuristic method for locus-specific clustering is compared to that implemented in the de novo assembler Oases, using the same initial transcript collections, derived from Arabidopsis thaliana and the developmental model Streptocarpus rexii. We show that the proposed approach improves cluster specificity in the A. thaliana dataset for which the reference genome is available. Furthermore, for the S. rexii data our filtered transcript collection matches a larger number of distinct annotated loci in reference genomes than the Oases set, while containing a reduced overall number of loci. A detailed discussion of advantages and limitations of our approach in processing de novo transcriptome reconstructions is presented. The proposed method should be widely applicable to other organisms, irrespective of the transcript assembly method employed. The S. rexii transcriptome is available as a sophisticated and augmented publicly available online database.

  7. The transport of drug in fibrosis. Comment on "Towards a unified approach in the modeling of fibrosis: A review with research perspectives" by Martine Ben Amar and Carlo Bianca

    NASA Astrophysics Data System (ADS)

    Ivancevic, Vladimir

    2016-07-01

    The topic of the review article [1] is the derivation of a multiscale paradigm for the modeling of fibrosis. First, the biological process of physiological and pathological fibrosis, including therapeutic actions, is reviewed. Fibrosis can be a consequence of tissue damage, infections and autoimmune diseases, foreign material, or tumors. Some questions regarding the pathogenesis, progression and possible regression of fibrosis remain open. At each scale of observation, different theoretical tools coming from computational, mathematical and physical biology have been proposed. However, a complete framework that takes into account the different mechanisms occurring at different scales is still missing. Therefore, with the main aim of defining a multiscale approach for the modeling of fibrosis, the authors of [1] have presented different top-down and bottom-up approaches that have been developed in the literature. Specifically, their description refers to models for fibrosis diseases based on ordinary and partial differential equations, agents [2], thermostatted kinetic theory [3-5], coarse-grained structures [6-8] and constitutive laws for fibrous collagen networks [9]. A critical analysis has been addressed for all frameworks discussed in the paper. Open problems and future research directions referring to both the biological and modeling aspects of fibrosis are presented. The paper concludes with the ambitious aim of a multiscale model.

  8. New Mechanistic Models of Long Term Evolution of Microstructure and Mechanical Properties of Nickel Based Alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kruzic, Jamie J.; Evans, T. Matthew; Greaney, P. Alex

    The report describes the development of a discrete element method (DEM) based modeling approach to quantitatively predict deformation and failure of typical nickel based superalloys. A series of experimental data, including microstructure and mechanical property characterization at 600°C, was collected for a relatively simple, model solid solution Ni-20Cr alloy (Nimonic 75) to determine inputs for the model and provide data for model validation. Nimonic 75 was considered ideal for this study because it is a certified tensile and creep reference material. A series of new DEM modeling approaches were developed to capture the complexity of metal deformation, including cubic elastic anisotropy and plastic deformation both with and without strain hardening. Our model approaches were implemented into a commercially available DEM code, PFC3D, that is commonly used by engineers. It is envisioned that once further developed, this new DEM modeling approach can be adapted to a wide range of engineering applications.

  9. Prediction of biochar yield from cattle manure pyrolysis via least squares support vector machine intelligent approach.

    PubMed

    Cao, Hongliang; Xin, Ya; Yuan, Qiaoxia

    2016-02-01

    To conveniently predict the biochar yield from cattle manure pyrolysis, an intelligent modeling approach was introduced in this research. A traditional artificial neural network (ANN) model and a novel least squares support vector machine (LS-SVM) model were developed. For the identification and prediction evaluation of the models, a data set of 33 experimental data points was used, which were obtained using a laboratory-scale fixed bed reaction system. The results demonstrated that the intelligent modeling approach is convenient and effective for the prediction of the biochar yield. In particular, the novel LS-SVM model has more satisfactory predictive performance and better robustness than the traditional ANN model. The introduction and application of the LS-SVM modeling method provides a successful example and a good reference for modeling the cattle manure pyrolysis process and other similar processes. Copyright © 2015 Elsevier Ltd. All rights reserved.
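
    Since LS-SVM regression may be less familiar than standard SVR, here is a minimal Python sketch of the method itself: with a squared-error loss and equality constraints, training reduces to solving a single linear system in the dual variables instead of a quadratic program. The synthetic yield data and hyperparameters are assumptions for illustration; they are not the paper's data.

        import numpy as np

        def rbf_kernel(A, B, sigma):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2 * sigma ** 2))

        def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
            """Solve the LS-SVM dual system [[0, 1^T], [1, K + I/gamma]] [b, alpha]^T = [0, y]^T."""
            n = len(y)
            K = rbf_kernel(X, X, sigma)
            A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                          [np.ones((n, 1)), K + np.eye(n) / gamma]])
            sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
            return sol[0], sol[1:]                       # bias b and dual coefficients alpha

        def lssvm_predict(Xnew, X, b, alpha, sigma=1.0):
            return rbf_kernel(Xnew, X, sigma) @ alpha + b

        # Illustrative use: predict yield from (temperature, heating rate, residence time).
        rng = np.random.default_rng(7)
        X = rng.uniform([300, 5, 10], [700, 30, 90], size=(33, 3))
        X = (X - X.mean(0)) / X.std(0)                               # scale features
        y = 60 - 8 * X[:, 0] + 2 * X[:, 1] + rng.normal(0, 1, 33)    # synthetic yield (%), not real data
        b, alpha = lssvm_fit(X[:25], y[:25])
        rmse = np.sqrt(np.mean((lssvm_predict(X[25:], X[:25], b, alpha) - y[25:]) ** 2))
        print(f"hold-out RMSE: {rmse:.2f}")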

  10. Benchmark of Dynamic Electron Correlation Models for Seniority-Zero Wave Functions and Their Application to Thermochemistry.

    PubMed

    Boguslawski, Katharina; Tecmer, Paweł

    2017-12-12

    Wave functions restricted to electron-pair states are promising models to describe static/nondynamic electron correlation effects encountered, for instance, in bond-dissociation processes and transition-metal and actinide chemistry. To reach spectroscopic accuracy, however, the missing dynamic electron correlation effects that cannot be described by electron-pair states need to be included a posteriori. In this Article, we extend the previously presented perturbation theory models with an Antisymmetric Product of 1-reference orbital Geminal (AP1roG) reference function that allows us to describe both static/nondynamic and dynamic electron correlation effects. Specifically, our perturbation theory models combine a diagonal and off-diagonal zero-order Hamiltonian, a single-reference and multireference dual state, and different excitation operators used to construct the projection manifold. We benchmark all proposed models as well as an a posteriori Linearized Coupled Cluster correction on top of AP1roG against CR-CC(2,3) reference data for reaction energies of several closed-shell molecules that are extrapolated to the basis set limit. Moreover, we test the performance of our new methods for multiple bond breaking processes in the homonuclear N2, C2, and F2 dimers as well as the heteronuclear BN, CO, and CN+ dimers against MRCI-SD, MRCI-SD+Q, and CR-CC(2,3) reference data. Our numerical results indicate that the best performance is obtained from a Linearized Coupled Cluster correction as well as second-order perturbation theory corrections employing a diagonal and off-diagonal zero-order Hamiltonian and a single-determinant dual state. These dynamic corrections on top of AP1roG provide substantial improvements for binding energies and spectroscopic properties obtained with the AP1roG approach, while allowing us to approach chemical accuracy for reaction energies involving closed-shell species.

  11. An Implicit Model Development Process for Bounding External, Seemingly Intangible/Non-Quantifiable Factors

    DTIC Science & Technology

    2017-06-01

    This research expands the modeling and simulation (M and S) body of knowledge through the development of an Implicit Model Development Process (IMDP)...When augmented to traditional Model Development Processes (MDP), the IMDP enables the development of models that can address a broader array of...where a broader, more holistic approach of defining a model's referent is achieved. Next, the IMDP codifies the process for implementing the improved model

  12. Integrative health care method based on combined complementary medical practices: rehabilitative acupuncture, homeopathy and chiropractic.

    PubMed

    Rodríguez-van Lier, María Esperanza; Simón, Luis Manuel Hernández; Gómez, Rosa Estela López; Escalante, Ignacio Peón

    2014-01-01

    There are various models of health care, such as the epidemiological, psychosocial, sociological, economic, Neuman systems, cognitive medicine, ecological, ayurvedic and supraparadigmatic models, among others. All of them seek to combine one or more elements to integrate a model of health care. The article presents a systemic approach to health care with complementary medicines (rehabilitative acupuncture, homeopathy and chiropractic) through the application of a holistic, integrated method of care. A participatory action research study was carried out from January 2012 to January 2013, with a comprehensive approach in 64 patients using the clinical method. Environmental, biological, emotional and behavioral aspects were included to identify, recognize and integrate the form of manifestation of the disease. The etiologic and precipitating factors were then ordered coherently, and the vulnerability of the patients as well as their structural alterations were identified and classified as immediate, mediate or late. Patients were referred among the three disciplines, rehabilitative acupuncture, homeopathy and chiropractic, with referrals and counter-referrals between them; at each visit, the current state of health and the clinical and behavioral changes presented were assessed to decide the area of care to which the patient would be forwarded to continue treatment. The 64 patients rotated through the 3 areas, attending an average of 30 visits for rehabilitative acupuncture, 12 for homeopathy and 10 for chiropractic. The changes presented were attitudinal, behavioral, clinical and organic. The model of care was multifaceted and interdisciplinary, with an individualized therapeutic approach and a holistic view to carry out a comprehensive diagnosis and provide quality health care to the population.

  13. Developing primary care in Hong Kong: evidence into practice and the development of reference frameworks.

    PubMed

    Griffiths, Sian M; Lee, Jeff P M

    2012-10-01

    Enhancing primary care is one of the proposals put forward in the Healthcare Reform Consultation Document "Your Health, Your Life" issued in March 2008. In 2009, the Working Group on Primary Care, chaired by the Secretary for Food and Health, recommended the development of age-group and disease-specific primary care conceptual models and reference frameworks. Drawing on international experience and best evidence, the Task Force on Conceptual Model and Preventive Protocols of the Working Group on Primary Care has developed two reference frameworks for the management of two common chronic diseases in Hong Kong, namely diabetes and hypertension, in primary care settings. Adopting a population approach for the prevention and control of diabetes and hypertension across the life course, the reference frameworks aim to provide evidence-based and appropriate recommendations for the provision of continuing and comprehensive care for patients with chronic diseases in the community.

  14. Putting the Horse Back in Front of the Cart: Using Visions and Decisions about High-Quality Learning Experiences to Drive Course Design

    ERIC Educational Resources Information Center

    Allen, Deborah; Tanner, Kimberly

    2007-01-01

    This article discusses a systematic approach to designing significant learning experiences, often referred to as the "backward design process," which has been popularized by Wiggins and McTighe (1998) and is included as a central feature of L. Dee Fink's model for integrated course design (Fink, 2003). The process is referred to as backward…

  15. CD-SEM real time bias correction using reference metrology based modeling

    NASA Astrophysics Data System (ADS)

    Ukraintsev, V.; Banke, W.; Zagorodnev, G.; Archie, C.; Rana, N.; Pavlovsky, V.; Smirnov, V.; Briginas, I.; Katnani, A.; Vaid, A.

    2018-03-01

    Accuracy of patterning impacts yield, IC performance and technology time to market. Accuracy of patterning relies on optical proximity correction (OPC) models built using CD-SEM inputs and intra-die critical dimension (CD) control based on CD-SEM. Sub-nanometer measurement uncertainty (MU) of CD-SEM is required for current technologies. Reported design and process related bias variation of CD-SEM is in the range of several nanometers. Reference metrology and numerical modeling are used to correct SEM. Both methods are too slow to be used for real time bias correction. We report on real time CD-SEM bias correction using empirical models based on reference metrology (RM) data. A significant amount of currently untapped information (sidewall angle, corner rounding, etc.) is obtainable from SEM waveforms. Using additional RM information provided for a specific technology (design rules, materials, processes), CD extraction algorithms can be pre-built and then used in real time for accurate CD extraction from regular CD-SEM images. The art and challenge of SEM modeling is in finding a robust correlation between SEM waveform features and bias of CD-SEM as well as in minimizing the RM inputs needed to create an accurate (within the design and process space) model. The new approach was applied to improve CD-SEM accuracy of 45 nm GATE and 32 nm MET1 OPC 1D models. In both cases MU of the state-of-the-art CD-SEM has been improved by 3x and reduced to a nanometer level. A similar approach can be applied to 2D (end of line, contours, etc.) and 3D (sidewall angle, corner rounding, etc.) cases.

  16. An X-Band Radar Terrain Feature Detection Method for Low-Altitude SVS Operations and Calibration Using LiDAR

    NASA Technical Reports Server (NTRS)

    Young, Steve; UijtdeHaag, Maarten; Campbell, Jacob

    2004-01-01

    To enable safe use of Synthetic Vision Systems at low altitudes, real-time range-to-terrain measurements may be required to ensure the integrity of terrain models stored in the system. This paper reviews and extends previous work describing the application of x-band radar to terrain model integrity monitoring. A method of terrain feature extraction and a transformation of the features to a common reference domain are proposed. Expected error distributions for the extracted features are required to establish appropriate thresholds whereby a consistency-checking function can trigger an alert. A calibration-based approach is presented that can be used to obtain these distributions. To verify the approach, NASA's DC-8 airborne science platform was used to collect data from two mapping sensors. An Airborne Laser Terrain Mapping (ALTM) sensor was installed in the cargo bay of the DC-8. After processing, the ALTM produced a reference terrain model with a vertical accuracy of less than one meter. Also installed was a commercial-off-the-shelf x-band radar in the nose radome of the DC-8. Although primarily designed to measure precipitation, the radar also provides estimates of terrain reflectivity at low altitudes. Using the ALTM data as the reference, errors in features extracted from the radar are estimated. A method to estimate errors in features extracted from the terrain model is also presented.

  17. Evidence-based ergonomics: a model and conceptual structure proposal.

    PubMed

    Silveira, Dierci Marcio

    2012-01-01

    In Human Factors and Ergonomics Science (HFES), it is difficult to identify the best approach for tackling the workplace and systems design problems that need to be solved, and the transdisciplinary and multidisciplinary question of "How to solve the human factors and ergonomics problems that are identified?" has also been raised. The proposition of this study is to combine the theoretical approach of Sustainability Science, the taxonomy of the Human Factors and Ergonomics (HFE) discipline and the framework of Evidence-Based Medicine in an attempt to apply them to Human Factors and Ergonomics. Applications of ontologies are known in the fields of medical research and computer science. By scrutinizing the key requirements for structuring HFES knowledge, a reference model was designed. First, the important requirements for HFES concept structuring, as regarded by Meister, were identified. Second, an evidence-based ergonomics framework was developed as a reference model composed of six levels based on these requirements. Third, a mapping tool using linguistic resources was devised to translate human work, systems environments and the complexities inherent to their hierarchical relationships, to support future development at Level 2 of the reference model and to meet the two major challenges for HFES, namely, identifying what problems should be addressed in HFE as an autonomous science and proposing solutions by integrating concepts and methods applied in HFES for those problems.

  18. Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinlvas; Bakhtiari-Nejad, Maryam

    2009-01-01

    This paper presents a method for estimating time delay margin for model-reference adaptive control of systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in a form of an input-time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound of time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide a reasonably accurate and yet not too conservative time delay margin estimation.
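
    The record does not reproduce the paper's bound. The sketch below only illustrates the matrix measure (logarithmic norm) computation on which such estimates rest, together with the classical delay-independent sufficient condition; the linearized matrices are invented for illustration, and this is not the authors' time delay margin formula.

      import numpy as np

      def matrix_measure_2(A):
          """2-norm matrix measure (logarithmic norm): largest eigenvalue of (A + A^T)/2."""
          return np.linalg.eigvalsh(0.5 * (A + A.T)).max()

      # Hypothetical locally linearized closed-loop matrices over a small time window:
      # x_dot(t) = A x(t) + A_d x(t - tau)   (input-time-delay form assumed for illustration)
      A   = np.array([[0.0, 1.0], [-4.0, -2.5]])
      A_d = np.array([[0.0, 0.0], [-0.5, 0.0]])

      mu  = matrix_measure_2(A)
      nAd = np.linalg.norm(A_d, 2)

      # Classical delay-independent sufficient condition (not the paper's delay margin bound):
      # mu(A) + ||A_d|| < 0. A finite time delay margin estimate refines this kind of test.
      print("mu_2(A) =", mu, " ||A_d|| =", nAd, " sufficient condition:", mu + nAd < 0)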

  19. Adaptive Performance Seeking Control Using Fuzzy Model Reference Learning Control and Positive Gradient Control

    NASA Technical Reports Server (NTRS)

    Kopasakis, George

    1997-01-01

    Performance Seeking Control attempts to find the operating condition that will generate optimal performance and control the plant at that operating condition. In this paper a nonlinear multivariable Adaptive Performance Seeking Control (APSC) methodology will be developed and it will be demonstrated on a nonlinear system. The APSC is comprised of the Positive Gradient Control (PGC) and the Fuzzy Model Reference Learning Control (FMRLC). The PGC computes the positive gradients of the desired performance function with respect to the control inputs in order to drive the plant set points to the operating point that will produce optimal performance. The PGC approach will be derived in this paper. The feedback control of the plant is performed by the FMRLC. For the FMRLC, the conventional fuzzy model reference learning control methodology is utilized, with guidelines generated here for the effective tuning of the FMRLC controller.
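
    As a rough sketch of the positive-gradient idea (not the paper's derivation), the snippet below estimates the gradient of a performance function with respect to the plant set points by finite differences and steps the set points in the direction of increasing performance; the performance function, step size and dimensions are hypothetical.

      import numpy as np

      def performance(setpoints):
          """Hypothetical performance function to be maximized (e.g., efficiency)."""
          x, y = setpoints
          return -(x - 2.0) ** 2 - 0.5 * (y - 1.0) ** 2

      def positive_gradient_step(setpoints, eta=0.1, eps=1e-3):
          """Move the set points along the (finite-difference) positive gradient."""
          grad = np.zeros_like(setpoints)
          for i in range(len(setpoints)):
              d = np.zeros_like(setpoints)
              d[i] = eps
              grad[i] = (performance(setpoints + d) - performance(setpoints - d)) / (2 * eps)
          return setpoints + eta * grad

      sp = np.array([0.0, 0.0])
      for _ in range(200):                 # drive set points toward the optimal operating point
          sp = positive_gradient_step(sp)
      print(sp)                            # approaches [2.0, 1.0] for this toy function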

  20. Three Tier Unified Process Model for Requirement Negotiations and Stakeholder Collaborations

    NASA Astrophysics Data System (ADS)

    Niazi, Muhammad Ashraf Khan; Abbas, Muhammad; Shahzad, Muhammad

    2012-11-01

    This research paper is focused on carrying out a pragmatic qualitative analysis of various models and approaches of requirements negotiations (a sub-process of the requirements management plan, which is an output of scope management's collect requirements process) and studies stakeholder collaboration methodologies (i.e. from within the communication management knowledge area). The experiential analysis encompasses two tiers; the first tier refers to the weighted scoring model, while the second tier focuses on the development of SWOT matrices on the basis of the findings of the weighted scoring model for selecting an appropriate requirements negotiation model, as sketched below. Finally the results are simulated with the help of statistical pie charts. On the basis of the simulated results of prevalent models and approaches of negotiations, a unified approach for requirements negotiations and stakeholder collaborations is proposed, where the collaboration methodologies are embedded into the selected requirements negotiation model as internal parameters of the proposed process alongside some external required parameters like MBTI, opportunity analysis etc.
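
    The record does not list the actual criteria or weights. The snippet below is a generic first-tier weighted scoring calculation of the kind described; the candidate negotiation models, criteria, weights and scores are all illustrative assumptions.

      # Hypothetical first-tier weighted scoring of requirements-negotiation models.
      criteria_weights = {"stakeholder coverage": 0.4, "tool support": 0.25,
                          "scalability": 0.2, "ease of adoption": 0.15}

      # Scores (1-5) per candidate model; names and values are illustrative only.
      candidates = {
          "Candidate model A": {"stakeholder coverage": 5, "tool support": 4,
                                "scalability": 3, "ease of adoption": 3},
          "Candidate model B": {"stakeholder coverage": 4, "tool support": 5,
                                "scalability": 3, "ease of adoption": 4},
          "Candidate model C": {"stakeholder coverage": 3, "tool support": 3,
                                "scalability": 4, "ease of adoption": 4},
      }

      weighted = {name: sum(criteria_weights[c] * s for c, s in scores.items())
                  for name, scores in candidates.items()}
      for name, total in sorted(weighted.items(), key=lambda kv: -kv[1]):
          print(f"{name}: {total:.2f}")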

  1. Model reference tracking control of an aircraft: a robust adaptive approach

    NASA Astrophysics Data System (ADS)

    Tanyer, Ilker; Tatlicioglu, Enver; Zergeroglu, Erkan

    2017-05-01

    This work presents the design and the corresponding analysis of a nonlinear robust adaptive controller for model reference tracking of an aircraft that has parametric uncertainties in its system matrices and additive state- and/or time-dependent nonlinear disturbance-like terms in its dynamics. Specifically, a robust integral of the sign of the error feedback term and an adaptive term are fused with a proportional integral controller. Lyapunov-based stability analysis techniques are utilised to prove global asymptotic convergence of the output tracking error. Extensive numerical simulations are presented to illustrate the performance of the proposed robust adaptive controller.

  2. Performance Optimizing Multi-Objective Adaptive Control with Time-Varying Model Reference Modification

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Hashemi, Kelley E.; Yucelen, Tansel; Arabi, Ehsan

    2017-01-01

    This paper presents a new adaptive control approach that involves a performance optimization objective. The problem is cast as a multi-objective optimal control. The control synthesis involves the design of a performance optimizing controller from a subset of control inputs. The effect of the performance optimizing controller is to introduce an uncertainty into the system that can degrade tracking of the reference model. An adaptive controller from the remaining control inputs is designed to reduce the effect of the uncertainty while maintaining a notion of performance optimization in the adaptive control system.

  3. 14 CFR 23.73 - Reference landing approach speed.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Reference landing approach speed. 23.73... Reference landing approach speed. (a) For normal, utility, and acrobatic category reciprocating engine-powered airplanes of 6,000 pounds or less maximum weight, the reference landing approach speed, VREF, must...

  4. 14 CFR 23.73 - Reference landing approach speed.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Reference landing approach speed. 23.73... Reference landing approach speed. (a) For normal, utility, and acrobatic category reciprocating engine-powered airplanes of 6,000 pounds or less maximum weight, the reference landing approach speed, VREF, must...

  5. Mapping dominant runoff processes: an evaluation of different approaches using similarity measures and synthetic runoff simulations

    NASA Astrophysics Data System (ADS)

    Antonetti, Manuel; Buss, Rahel; Scherrer, Simon; Margreth, Michael; Zappa, Massimiliano

    2016-07-01

    The identification of landscapes with similar hydrological behaviour is useful for runoff and flood predictions in small ungauged catchments. An established method for landscape classification is based on the concept of dominant runoff process (DRP). The various DRP-mapping approaches differ with respect to the time and data required for mapping. Manual approaches based on expert knowledge are reliable but time-consuming, whereas automatic GIS-based approaches are easier to implement but rely on simplifications which restrict their application range. To what extent these simplifications are applicable in other catchments is unclear. More information is also needed on how the different complexities of automatic DRP-mapping approaches affect hydrological simulations. In this paper, three automatic approaches were used to map two catchments on the Swiss Plateau. The resulting maps were compared to reference maps obtained with manual mapping. Measures of agreement and association, a class comparison, and a deviation map were derived. The automatically derived DRP maps were used in synthetic runoff simulations with an adapted version of the PREVAH hydrological model, and the simulation results were compared with those from simulations using the reference maps. The DRP maps derived with the automatic approach with the highest complexity and data requirement were the most similar to the reference maps, while those derived with simplified approaches without original soil information differed significantly in terms of both the extent and the distribution of the DRPs. The runoff simulations derived from the simpler DRP maps were more uncertain due to inaccuracies in the input data and their coarse resolution, but problems were also linked with the use of topography as a proxy for the storage capacity of soils. The perception of the intensity of the DRP classes also seems to vary among the different authors, and a standardised definition of DRPs is still lacking. Furthermore, we argue that expert knowledge should be used not only for model building and constraining, but also in the landscape classification phase.

  6. Some Approaches to Development and the Indian Dilemma.

    ERIC Educational Resources Information Center

    Pickett, Lloyd C.

    In reference to the American Indian's problem of maintaining his values while trying to participate in the economy of the larger society, the role of the change agent was explored via review of some 15 development models derived from the economic, political, sociological, and applied sciences. Included in the review were models which approached…

  7. A Mental Models Approach to Assessing Public Understanding of Zika Virus, Guatemala.

    PubMed

    Southwell, Brian G; Ray, Sarah E; Vazquez, Natasha N; Ligorria, Tere; Kelly, Bridget J

    2018-05-01

    Mental models are cognitive representations of phenomena that can constrain efforts to reduce infectious disease. In a study of Zika virus awareness in Guatemala, many participants referred to experiences with other mosquitoborne diseases during discussions of Zika virus. These results highlight the importance of past experiences for Zika virus understanding.

  8. No-Reference Image Quality Assessment by Wide-Perceptual-Domain Scorer Ensemble Method.

    PubMed

    Liu, Tsung-Jung; Liu, Kuan-Hsien

    2018-03-01

    A no-reference (NR) learning-based approach to assess image quality is presented in this paper. The devised features are extracted from wide perceptual domains, including brightness, contrast, color, distortion, and texture. These features are used to train a model (scorer) which can predict scores. The scorer selection algorithms are utilized to help simplify the proposed system. In the final stage, the ensemble method is used to combine the prediction results from selected scorers. Two multiple-scale versions of the proposed approach are also presented along with the single-scale one. They turn out to have better performances than the original single-scale method. Because of having features from five different domains at multiple image scales and using the outputs (scores) from selected score prediction models as features for multi-scale or cross-scale fusion (i.e., ensemble), the proposed NR image quality assessment models are robust with respect to more than 24 image distortion types. They also can be used on the evaluation of images with authentic distortions. The extensive experiments on three well-known and representative databases confirm the performance robustness of our proposed model.
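
    The exact features, scorers and selection algorithm are not given in the record. The sketch below only illustrates the general scorer-ensemble idea with two simple hand-crafted features (mean brightness, RMS contrast), two regression scorers, and a plain average of their predictions; the data are synthetic and the feature/scorer choices are assumptions.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.svm import SVR

      def simple_features(img):
          """Toy perceptual features: mean brightness and RMS contrast (illustrative only)."""
          return np.array([img.mean(), img.std()])

      rng = np.random.default_rng(1)
      images = rng.uniform(0, 1, size=(200, 32, 32))            # synthetic grayscale "images"
      mos = 2.0 * images.mean(axis=(1, 2)) + images.std(axis=(1, 2)) + rng.normal(0, 0.05, 200)

      X = np.array([simple_features(im) for im in images])
      scorers = [RandomForestRegressor(n_estimators=50, random_state=0), SVR(C=1.0)]
      for s in scorers:
          s.fit(X, mos)

      def ensemble_score(img):
          """Combine the selected scorers' predictions (here: simple averaging)."""
          x = simple_features(img).reshape(1, -1)
          return float(np.mean([s.predict(x)[0] for s in scorers]))

      print(ensemble_score(images[0]))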

  9. Pseudo-conformer models for linear molecules: Joint treatment of spectroscopic, electron diffraction and ab initio data for the C3O2 molecule

    NASA Astrophysics Data System (ADS)

    Tarasov, Yury I.; Kochikov, Igor V.

    2018-06-01

    Dynamic analysis of molecules with large-amplitude motions (LAM) based on the pseudo-conformer approach has been successfully applied to various molecules. Floppy linear molecules present a special class of molecular structures that possess a pair of conjugate LAM coordinates but allow one-dimensional treatment. In this paper, the treatment previously developed for semirigid molecules is applied to the carbon suboxide molecule. This molecule, characterized by extremely large CCC bending, has been thoroughly investigated by spectroscopic and ab initio methods. However, the earlier electron diffraction investigations were performed within a static approach, obtaining thermally averaged parameters. In this paper we apply a procedure aimed at obtaining a short list of self-consistent reference geometry parameters of a molecule, while all thermally averaged parameters are calculated based on the reference geometry, relaxation dependencies and quadratic and cubic force constants. We show that such a model satisfactorily describes the available electron diffraction evidence with various QC bending potential energy functions when the r.m.s. CCC angle is in the interval 151 ± 2°. This leads to a self-consistent molecular model satisfying spectroscopic and GED data. The parameters for the linear reference geometry have been determined as re(CO) = 1.161(2) Å and re(CC) = 1.273(2) Å.

  10. RANS Simulation (Rotating Reference Frame Model [RRF]) of Single Full Scale DOE RM1 MHK Turbine

    DOE Data Explorer

    Javaherchi, Teymour; Stelzenmuller, Nick; Aliseda, Alberto

    2013-04-10

    Attached are the .cas and .dat files for the Reynolds Averaged Navier-Stokes (RANS) simulation of a single full scale DOE RM1 turbine implemented in the ANSYS FLUENT CFD package. In this case study, taking advantage of the symmetry of the DOE RM1 geometry, only half of the geometry is modeled using the (Single) Rotating Reference Frame model [RRF]. In this model the RANS equations, coupled with the k-ω turbulence closure model, are solved in the rotating reference frame. The actual geometry of the turbine blade is included and the turbulent boundary layer along the blade span is simulated using the wall-function approach. The rotation of the blade is modeled by applying periodic boundary conditions to the planes of symmetry. This case study simulates the performance and flow field in both the near and far wake of the device at the desired operating conditions. The results of these simulations showed good agreement with the only publicly available numerical simulation of the device, done at NREL. Please see the attached paper.

  11. DeltaSA tool for source apportionment benchmarking, description and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Pernigotti, D.; Belis, C. A.

    2018-05-01

    DeltaSA is an R-package and a Java on-line tool developed at the EC-Joint Research Centre to assist and benchmark source apportionment applications. Its key functionalities support two critical tasks in this kind of study: the assignment of a factor to a source in factor analytical models (source identification) and the model performance evaluation. The source identification is based on the similarity between a given factor and source chemical profiles from public databases. The model performance evaluation is based on statistical indicators used to compare model output with reference values generated in intercomparison exercises. The reference values are calculated as the ensemble average of the results reported by participants that have passed a set of testing criteria based on chemical profile and time series similarity. In this study, a sensitivity analysis of the model performance criteria is accomplished using the results of a synthetic dataset where "a priori" references are available. The consensus-modulated standard deviation punc is the best choice for the model performance evaluation when a conservative approach is adopted.

  12. Body composition indices of a load-capacity model: gender- and BMI-specific reference curves.

    PubMed

    Siervo, Mario; Prado, Carla M; Mire, Emily; Broyles, Stephanie; Wells, Jonathan C K; Heymsfield, Steven; Katzmarzyk, Peter T

    2015-05-01

    Fat mass (FM) and fat-free mass (FFM) are frequently measured to define body composition phenotypes. The load-capacity model integrates the effects of both FM and FFM to improve disease-risk prediction. We aimed to derive age-, gender- and BMI-specific reference curves of load-capacity model indices in an adult population (≥18 years). Cross-sectional study. Dual-energy X-ray absorptiometry was used to measure FM, FFM, appendicular skeletal muscle mass (ASM) and truncal fat mass (TrFM). Two metabolic load-capacity indices were calculated: ratio of FM (kg) to FFM (kg) and ratio of TrFM (kg) to ASM (kg). Age-standardised reference curves, stratified by gender and BMI (<25.0 kg/m2, 25.0-29.9 kg/m2, ≥30.0 kg/m2), were constructed using an LMS approach. Percentiles of the reference curves were 5th, 15th, 25th, 50th, 75th, 85th and 95th. Secondary analysis of data from the 1999-2004 National Health and Nutrition Examination Survey (NHANES). The population included 6580 females and 6656 males. The unweighted proportions of obesity in males and females were 25.5 % and 34.7 %, respectively. The average values of both FM:FFM and TrFM:ASM were greater in female and obese subjects. Gender and BMI influenced the shape of the association of age with FM:FFM and TrFM:ASM, as a curvilinear relationship was observed in female and obese subjects. Menopause appeared to modify the steepness of the reference curves of both indices. This is a novel risk-stratification approach integrating the effects of high adiposity and low muscle mass which may be particularly useful to identify cases of sarcopenic obesity and improve disease-risk prediction.
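
    A minimal sketch of the two load-capacity indices and of how an LMS-type reference curve converts an individual value into a z-score and percentile is given below; the LMS parameters are invented placeholders, not the published reference curves.

      from math import log
      from statistics import NormalDist

      def load_capacity_indices(fm_kg, ffm_kg, trfm_kg, asm_kg):
          """Metabolic load-capacity indices from DXA compartments."""
          return {"FM:FFM": fm_kg / ffm_kg, "TrFM:ASM": trfm_kg / asm_kg}

      def lms_z(value, L, M, S):
          """Cole's LMS transformation: z = ((value/M)**L - 1) / (L*S); log form when L == 0."""
          if L == 0:
              return log(value / M) / S
          return ((value / M) ** L - 1.0) / (L * S)

      idx = load_capacity_indices(fm_kg=28.0, ffm_kg=52.0, trfm_kg=14.0, asm_kg=21.0)
      # Placeholder LMS parameters for FM:FFM in one age/sex/BMI stratum (illustration only).
      z = lms_z(idx["FM:FFM"], L=-0.5, M=0.45, S=0.25)
      print(idx, "z =", round(z, 2), "percentile =", round(100 * NormalDist().cdf(z), 1))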

  13. Word learning emerges from the interaction of online referent selection and slow associative learning

    PubMed Central

    McMurray, Bob; Horst, Jessica S.; Samuelson, Larissa K.

    2013-01-01

    Classic approaches to word learning emphasize the problem of referential ambiguity: in any naming situation the referent of a novel word must be selected from many possible objects, properties, actions, etc. To solve this problem, researchers have posited numerous constraints, and inference strategies, but assume that determining the referent of a novel word is isomorphic to learning. We present an alternative model in which referent selection is an online process that is independent of long-term learning. This two timescale approach creates significant power in the developing system. We illustrate this with a dynamic associative model in which referent selection is simulated as dynamic competition between competing referents, and learning is simulated using associative (Hebbian) learning. This model can account for a range of findings including the delay in expressive vocabulary relative to receptive vocabulary, learning under high degrees of referential ambiguity using cross-situational statistics, accelerating (vocabulary explosion) and decelerating (power-law) learning rates, fast-mapping by mutual exclusivity (and differences in bilinguals), improvements in familiar word recognition with development, and correlations between individual differences in speed of processing and learning. Five theoretical points are illustrated. 1) Word learning does not require specialized processes – general association learning buttressed by dynamic competition can account for much of the literature. 2) The processes of recognizing familiar words are not different than those that support novel words (e.g., fast-mapping). 3) Online competition may allow the network (or child) to leverage information available in the task to augment performance or behavior despite what might be relatively slow learning or poor representations. 4) Even associative learning is more complex than previously thought – a major contributor to performance is the pruning of incorrect associations between words and referents. 5) Finally, the model illustrates that learning and referent selection/word recognition, though logically distinct, can be deeply and subtly related as phenomena like speed of processing and mutual exclusivity may derive in part from the way learning shapes the system. As a whole, this suggests more sophisticated ways of describing the interaction between situation- and developmental-time processes and points to the need for considering such interactions as a primary determinant of development and processing in children. PMID:23088341
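
    A toy sketch of the two-timescale idea follows: online referent selection is modelled as a competition among visible referents (here a softmax choice over association strengths, one possible reading of "dynamic competition"), and slow associative learning as Hebbian strengthening of the chosen word-referent link. Parameters and dimensions are arbitrary and not taken from the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      n_words, n_objects = 5, 5
      assoc = np.full((n_words, n_objects), 0.1)          # weak initial word-object associations

      def select_referent(word, visible_objects, temperature=0.2):
          """Online referent selection: competition among visible referents (softmax choice)."""
          strengths = assoc[word, visible_objects] / temperature
          p = np.exp(strengths - strengths.max())
          p /= p.sum()
          return visible_objects[rng.choice(len(visible_objects), p=p)]

      def hebbian_update(word, chosen, rate=0.05):
          """Slow associative learning: strengthen only the word-referent pair that won."""
          assoc[word, chosen] += rate * (1.0 - assoc[word, chosen])

      # Cross-situational exposure: each trial presents word w with its true referent (object w)
      # among distractors, so referential ambiguity is only resolved gradually over trials.
      for _ in range(2000):
          w = rng.integers(n_words)
          visible = np.unique(np.append(rng.integers(n_objects, size=2), w))
          hebbian_update(w, select_referent(w, visible))

      print(np.round(assoc, 2))   # diagonal (correct pairs) should dominate after training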

  14. A reduced-order nonlinear sliding mode observer for vehicle slip angle and tyre forces

    NASA Astrophysics Data System (ADS)

    Chen, Yuhang; Ji, Yunfeng; Guo, Konghui

    2014-12-01

    In this paper, a reduced-order sliding mode observer (RO-SMO) is developed for vehicle state estimation. Several improvements are achieved in this paper. First, the reference model accuracy is improved by considering vehicle load transfers and using a precise nonlinear tyre model 'UniTire'. Second, without the reference model accuracy degraded, the computing burden of the state observer is decreased by a reduced-order approach. Third, nonlinear system damping is integrated into the SMO to speed convergence and reduce chattering. The proposed RO-SMO is evaluated through simulation and experiments based on an in-wheel motor electric vehicle. The results show that the proposed observer accurately predicts the vehicle states.

  15. A General Model for Performance Evaluation in DS-CDMA Systems with Variable Spreading Factors

    NASA Astrophysics Data System (ADS)

    Chiaraluce, Franco; Gambi, Ennio; Righi, Giorgia

    This paper extends previous analytical approaches for the study of CDMA systems to the relevant case of multipath environments where users can operate at different bit rates. This scenario is of interest for the Wideband CDMA strategy employed in UMTS, and the model permits the performance comparison of classic and more innovative spreading signals. The method is based on the characteristic function approach, which allows the various kinds of interference to be modelled accurately. Some numerical examples are given with reference to the ITU-R M.1225 Recommendation, but the analysis could be extended to different channel descriptions.

  16. Conclusions about children's reporting accuracy for energy and macronutrients over multiple interviews depend on the analytic approach for comparing reported information to reference information.

    PubMed

    Baxter, Suzanne Domel; Smith, Albert F; Hardin, James W; Nichols, Michele D

    2007-04-01

    Validation study data are used to illustrate that conclusions about children's reporting accuracy for energy and macronutrients over multiple interviews (ie, time) depend on the analytic approach for comparing reported and reference information-conventional, which disregards accuracy of reported items and amounts, or reporting-error-sensitive, which classifies reported items as matches (eaten) or intrusions (not eaten), and amounts as corresponding or overreported. Children were observed eating school meals on 1 day (n=12), or 2 (n=13) or 3 (n=79) nonconsecutive days separated by >or=25 days, and interviewed in the morning after each observation day about intake the previous day. Reference (observed) and reported information were transformed to energy and macronutrients (ie, protein, carbohydrate, and fat), and compared. For energy and each macronutrient: report rates (reported/reference), correspondence rates (genuine accuracy measures), and inflation ratios (error measures). Mixed-model analyses. Using the conventional approach for analyzing energy and macronutrients, report rates did not vary systematically over interviews (all four P values >0.61). Using the reporting-error-sensitive approach for analyzing energy and macronutrients, correspondence rates increased over interviews (all four P values <0.04), indicating that reporting accuracy improved over time; inflation ratios decreased, although not significantly, over interviews, also suggesting that reporting accuracy improved over time. Correspondence rates were lower than report rates, indicating that reporting accuracy was worse than implied by conventional measures. When analyzed using the reporting-error-sensitive approach, children's dietary reporting accuracy for energy and macronutrients improved over time, but the conventional approach masked improvements and overestimated accuracy. The reporting-error-sensitive approach is recommended when analyzing data from validation studies of dietary reporting accuracy for energy and macronutrients.

  17. Conclusions about children’s reporting accuracy for energy and macronutrients over multiple interviews depend on the analytic approach for comparing reported information to reference information

    PubMed Central

    Baxter, Suzanne Domel; Smith, Albert F.; Hardin, James W.; Nichols, Michele D.

    2008-01-01

    Objective Validation-study data are used to illustrate that conclusions about children’s reporting accuracy for energy and macronutrients over multiple interviews (ie, time) depend on the analytic approach for comparing reported and reference information—conventional, which disregards accuracy of reported items and amounts, or reporting-error-sensitive, which classifies reported items as matches (eaten) or intrusions (not eaten), and amounts as corresponding or overreported. Subjects and design Children were observed eating school meals on one day (n = 12), or two (n = 13) or three (n = 79) nonconsecutive days separated by ≥25 days, and interviewed in the morning after each observation day about intake the previous day. Reference (observed) and reported information were transformed to energy and macronutrients (protein, carbohydrate, fat), and compared. Main outcome measures For energy and each macronutrient: report rates (reported/reference), correspondence rates (genuine accuracy measures), inflation ratios (error measures). Statistical analyses Mixed-model analyses. Results Using the conventional approach for analyzing energy and macronutrients, report rates did not vary systematically over interviews (Ps > .61). Using the reporting-error-sensitive approach for analyzing energy and macronutrients, correspondence rates increased over interviews (Ps < .04), indicating that reporting accuracy improved over time; inflation ratios decreased, although not significantly, over interviews, also suggesting that reporting accuracy improved over time. Correspondence rates were lower than report rates, indicating that reporting accuracy was worse than implied by conventional measures. Conclusions When analyzed using the reporting-error-sensitive approach, children’s dietary reporting accuracy for energy and macronutrients improved over time, but the conventional approach masked improvements and overestimated accuracy. Applications The reporting-error-sensitive approach is recommended when analyzing data from validation studies of dietary reporting accuracy for energy and macronutrients. PMID:17383265

  18. Recent advances in QM/MM free energy calculations using reference potentials☆

    PubMed Central

    Duarte, Fernanda; Amrein, Beat A.; Blaha-Nelson, David; Kamerlin, Shina C.L.

    2015-01-01

    Background Recent years have seen enormous progress in the development of methods for modeling (bio)molecular systems. This has allowed for the simulation of ever larger and more complex systems. However, as such complexity increases, the requirements needed for these models to be accurate and physically meaningful become more and more difficult to fulfill. The use of simplified models to describe complex biological systems has long been shown to be an effective way to overcome some of the limitations associated with this computational cost in a rational way. Scope of review Hybrid QM/MM approaches have rapidly become one of the most popular computational tools for studying chemical reactivity in biomolecular systems. However, the high cost involved in performing high-level QM calculations has limited the applicability of these approaches when calculating free energies of chemical processes. In this review, we present some of the advances in using reference potentials and mean field approximations to accelerate high-level QM/MM calculations. We present illustrative applications of these approaches and discuss challenges and future perspectives for the field. Major conclusions The use of physically-based simplifications has shown to effectively reduce the cost of high-level QM/MM calculations. In particular, lower-level reference potentials enable one to reduce the cost of expensive free energy calculations, thus expanding the scope of problems that can be addressed. General significance As was already demonstrated 40 years ago, the usage of simplified models still allows one to obtain cutting edge results with substantially reduced computational cost. This article is part of a Special Issue entitled Recent developments of molecular dynamics. PMID:25038480

  19. A comparison of three approaches for simulating fine-scale surface winds in support of wildland fire management: Part I. Model formulation and comparison against measurements

    Treesearch

    Jason M. Forthofer; Bret W. Butler; Natalie S. Wagenbrenner

    2014-01-01

    For this study three types of wind models have been defined for simulating surface wind flow in support of wildland fire management: (1) a uniform wind field (typically acquired from coarse-resolution (~4 km) weather service forecast models); (2) a newly developed mass-conserving model and (3) a newly developed mass- and momentum-conserving model (referred to as the...

  20. The process of recovery from bipolar I disorder: a qualitative analysis of personal accounts in relation to an integrative cognitive model.

    PubMed

    Mansell, Warren; Powell, Seth; Pedley, Rebecca; Thomas, Nia; Jones, Sarah Amelia

    2010-06-01

    This study explored the process of recovery from bipolar I disorder from a phenomenological and cognitive perspective. A semi-structured interview was coded and analysed using interpretative phenomenological analysis. Eleven individuals over the age of 30 with a history of bipolar disorder were selected on the basis of having remained free from relapse, and without hospitalization for at least 2 years, as confirmed by a diagnostic interview (Standardised Interview for DSM-IV; SCID-I). This arbitrary and equivocal criterion for 'recovery' provided an objective method of defining the sample for the study. The analysis revealed two overarching themes formed from four themes each. Ambivalent approaches referred to approaches that participants felt had both positive and negative consequences: avoidance of mania, taking medication, prior illness versus current wellness, and sense of identity following diagnosis. Helpful approaches referred to approaches that were seen as universally helpful: understanding, life-style fundamentals, social support and companionship, and social change. These themes were then interpreted in the light of the existing literature and an integrative cognitive model of bipolar disorder. Limitations and future research directions are discussed.

  1. Time-lapse three-dimensional inversion of complex conductivity data using an active time constrained (ATC) approach

    USGS Publications Warehouse

    Karaoulis, M.; Revil, A.; Werkema, D.D.; Minsley, B.J.; Woodruff, W.F.; Kemna, A.

    2011-01-01

    Induced polarization (more precisely the magnitude and phase of impedance of the subsurface) is measured using a network of electrodes located at the ground surface or in boreholes. This method yields important information related to the distribution of permeability and contaminants in the shallow subsurface. We propose a new time-lapse 3-D modelling and inversion algorithm to image the evolution of complex conductivity over time. We discretize the subsurface using hexahedron cells. Each cell is assigned a complex resistivity or conductivity value. Using the finite-element approach, we model the in-phase and out-of-phase (quadrature) electrical potentials on the 3-D grid, which are then transformed into apparent complex resistivity. Inhomogeneous Dirichlet boundary conditions are used at the boundary of the domain. The calculation of the Jacobian matrix is based on the principles of reciprocity. The goal of time-lapse inversion is to determine the change in the complex resistivity of each cell of the spatial grid as a function of time. Each model along the time axis is called a 'reference space model'. This approach can be simplified into an inverse problem looking for the optimum of several reference space models using the approximation that the material properties vary linearly in time between two subsequent reference models. Regularizations in both space domain and time domain reduce inversion artefacts and improve the stability of the inversion problem. In addition, the use of the time-lapse equations allows the simultaneous inversion of data obtained at different times in just one inversion step (4-D inversion). The advantages of this new inversion algorithm are demonstrated on synthetic time-lapse data resulting from the simulation of a salt tracer test in a heterogeneous random material described by an anisotropic semi-variogram. © 2011 The Authors, Geophysical Journal International © 2011 RAS.
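
    A compact way to write the kind of space- and time-regularized objective described above (an illustrative form only; the authors' exact functional is not given in this record) is

      \Phi(\mathbf{m}_1,\dots,\mathbf{m}_T) = \sum_{t=1}^{T} \left\| \mathbf{W}_d \left[ \mathbf{d}_t - f(\mathbf{m}_t) \right] \right\|^2
        + \lambda \sum_{t=1}^{T} \left\| \mathbf{W}_s \mathbf{m}_t \right\|^2
        + \tau \sum_{t=2}^{T} \left\| \mathbf{m}_t - \mathbf{m}_{t-1} \right\|^2 ,

    where m_t is the complex-conductivity model at time step t (a 'reference space model'), f is the forward operator, W_d and W_s are data and spatial-smoothness weighting operators, and λ and τ control the space- and time-domain regularization; minimizing over all m_t simultaneously corresponds to the 4-D inversion mentioned above.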

  2. Accounting for regional variation in both natural environment and human disturbance to improve performance of multimetric indices of lotic benthic diatoms.

    PubMed

    Tang, Tao; Stevenson, R Jan; Infante, Dana M

    2016-10-15

    Regional variation in both natural environment and human disturbance can influence performance of ecological assessments. In this study we calculated 5 types of benthic diatom multimetric indices (MMIs) with 3 different approaches to account for variation in ecological assessments. We used: site groups defined by ecoregions or diatom typologies; the same or different sets of metrics among site groups; and unmodeled or modeled MMIs, where models accounted for natural variation in metrics within site groups by calculating an expected reference condition for each metric and each site. We used data from the USEPA's National Rivers and Streams Assessment to calculate the MMIs and evaluate changes in MMI performance. MMI performance was evaluated with indices of precision, bias, responsiveness, sensitivity and relevancy which were respectively measured as MMI variation among reference sites, effects of natural variables on MMIs, difference between MMIs at reference and highly disturbed sites, percent of highly disturbed sites properly classified, and relation of MMIs to human disturbance and stressors. All 5 types of MMIs showed considerable discrimination ability. Using different metrics among ecoregions sometimes reduced precision, but it consistently increased responsiveness, sensitivity, and relevancy. Site specific metric modeling reduced bias and increased responsiveness. Combined use of different metrics among site groups and site specific modeling significantly improved MMI performance irrespective of site grouping approach. Compared to ecoregion site classification, grouping sites based on diatom typologies improved precision, but did not improve overall performance of MMIs if we accounted for natural variation in metrics with site specific models. We conclude that using different metrics among ecoregions and site specific metric modeling improve MMI performance, particularly when used together. Applications of these MMI approaches in ecological assessments introduced a tradeoff with assessment consistency when metrics differed across site groups, but they justified the convenient and consistent use of ecoregions. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. An analysis of approach navigation accuracy and guidance requirements for the grand tour mission to the outer planets

    NASA Technical Reports Server (NTRS)

    Jones, D. W.

    1971-01-01

    The navigation and guidance process for the Jupiter, Saturn and Uranus planetary encounter phases of the 1977 Grand Tour interior mission was simulated. Reference approach navigation accuracies were defined and the relative information content of the various observation types were evaluated. Reference encounter guidance requirements were defined, sensitivities to assumed simulation model parameters were determined and the adequacy of the linear estimation theory was assessed. A linear sequential estimator was used to provide an estimate of the augmented state vector, consisting of the six state variables of position and velocity plus the three components of a planet position bias. The guidance process was simulated using a nonspherical model of the execution errors. Computation algorithms which simulate the navigation and guidance process were derived from theory and implemented into two research-oriented computer programs, written in FORTRAN.

  4. Integrative Modeling of Electrical Properties of Pacemaker Cardiac Cells

    NASA Astrophysics Data System (ADS)

    Grigoriev, M.; Babich, L.

    2016-06-01

    This work presents the modeling of the electrical properties of pacemaker (sinus) cardiac cells. Special attention is paid to the electrical potential arising from the transmembrane current of Na+, K+ and Ca2+ ions. This potential is calculated using the NaCaX model. In this respect, the molar concentration of ions in the intercellular space, which is calculated on the basis of the GENTEX model, is essential. The combined use of two different models allows this approach to be referred to as integrative modeling.

  5. Comparison of different synthetic 5-min rainfall time series on the results of rainfall runoff simulations in urban drainage modelling

    NASA Astrophysics Data System (ADS)

    Krämer, Stefan; Rohde, Sophia; Schröder, Kai; Belli, Aslan; Maßmann, Stefanie; Schönfeld, Martin; Henkel, Erik; Fuchs, Lothar

    2015-04-01

    The design of urban drainage systems with numerical simulation models requires long, continuous rainfall time series with high temporal resolution. However, suitable observed time series are rare. As a result, usual design concepts often use uncertain or unsuitable rainfall data, which renders them uneconomic or unsustainable. An expedient alternative to observed data is the use of long, synthetic rainfall time series as input for the simulation models. Within the project SYNOPSE, several different methods to generate synthetic rainfall data as input for urban drainage modelling are advanced, tested, and compared. Synthetic rainfall time series from three different precipitation model approaches - one parametric stochastic model (alternating renewal approach), one non-parametric stochastic model (resampling approach), and one downscaling approach from a regional climate model - are provided for three catchments with different sewer system characteristics in different climate regions in Germany: Hamburg (northern Germany): maritime climate, mean annual rainfall 770 mm; combined sewer system length 1,729 km (city centre of Hamburg), storm water sewer system length (Hamburg Harburg) 168 km. Brunswick (Lower Saxony, northern Germany): transitional climate from maritime to continental, mean annual rainfall 618 mm; sewer system length 278 km, connected impervious area 379 ha, height difference 27 m. Freiburg im Breisgau (southern Germany): Central European transitional climate, mean annual rainfall 908 mm; sewer system length 794 km, connected impervious area 1,546 ha, height difference 284 m. Hydrodynamic models are set up for each catchment to simulate rainfall runoff processes in the sewer systems. Long term event time series are extracted from the three different synthetic rainfall time series (comprising up to 600 years of continuous rainfall) provided for each catchment and from observed gauge rainfall (reference rainfall) according to national hydraulic design standards. The synthetic and reference long term event time series are used as rainfall input for the hydrodynamic sewer models. For comparison of the synthetic rainfall time series against the reference rainfall and against each other, the number of surcharged manholes, the number of surcharges per manhole, and the average surcharge volume per manhole are applied as hydraulic performance criteria. The results are discussed and assessed to answer the following questions: Are the synthetic rainfall approaches suitable to generate high resolution rainfall series and do they produce - in combination with numerical rainfall runoff models - valid results for the design of urban drainage systems? What are the bounds of uncertainty in the runoff results depending on the synthetic rainfall model and on the climate region? The work is carried out within the SYNOPSE project, funded by the German Federal Ministry of Education and Research (BMBF).

  6. The CMMI Product Suite and International Standards

    DTIC Science & Technology

    2006-07-01

    standards: “2.3 Reference Documents 2.3.1 Applicable ISO/IEC documents, including ISO/IEC 12207 and ISO/IEC 15504.” “3.1 Development User Requirements...related international standards such as ISO 9001:2000, 12207, 15288... Key Supplements Needed...the Measurement Framework in ISO/IEC 15504; and the Process Reference Model included in ISO/IEC 12207. A possible approach has been developed for

  7. SkyFACT: high-dimensional modeling of gamma-ray emission with adaptive templates and penalized likelihoods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Storm, Emma; Weniger, Christoph; Calore, Francesca, E-mail: e.m.storm@uva.nl, E-mail: c.weniger@uva.nl, E-mail: francesca.calore@lapth.cnrs.fr

    We present SkyFACT (Sky Factorization with Adaptive Constrained Templates), a new approach for studying, modeling and decomposing diffuse gamma-ray emission. Like most previous analyses, the approach relies on predictions from cosmic-ray propagation codes like GALPROP and DRAGON. However, in contrast to previous approaches, we account for the fact that models are not perfect and allow for a very large number (≳ 10^5) of nuisance parameters to parameterize these imperfections. We combine methods of image reconstruction and adaptive spatio-spectral template regression in one coherent hybrid approach. To this end, we use penalized Poisson likelihood regression, with regularization functions that are motivated by the maximum entropy method. We introduce methods to efficiently handle the high dimensionality of the convex optimization problem as well as the associated semi-sparse covariance matrix, using the L-BFGS-B algorithm and Cholesky factorization. We test the method both on synthetic data as well as on gamma-ray emission from the inner Galaxy, |ℓ| < 90° and |b| < 20°, as observed by the Fermi Large Area Telescope. We finally define a simple reference model that removes most of the residual emission from the inner Galaxy, based on conventional diffuse emission components as well as components for the Fermi bubbles, the Fermi Galactic center excess, and extended sources along the Galactic disk. Variants of this reference model can serve as basis for future studies of diffuse emission in and outside the Galactic disk.
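
    Schematically, penalized Poisson likelihood regression of this kind minimizes an objective of the form (illustrative notation; not the exact SkyFACT functional)

      -2\ln L(\boldsymbol{\theta}) = 2 \sum_{i} \left[ \mu_i(\boldsymbol{\theta}) - d_i + d_i \ln \frac{d_i}{\mu_i(\boldsymbol{\theta})} \right] + \sum_{k} \lambda_k R_k(\boldsymbol{\theta}) ,

    where d_i are the observed counts in pixel/energy bin i, μ_i(θ) is the model expectation built from the adaptive templates and their nuisance parameters θ, and the R_k are maximum-entropy-motivated regularization terms with strengths λ_k; the resulting high-dimensional convex problem is then solved with L-BFGS-B as stated above.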

  8. Model correlation and damage location for large space truss structures: Secant method development and evaluation

    NASA Technical Reports Server (NTRS)

    Smith, Suzanne Weaver; Beattie, Christopher A.

    1991-01-01

    On-orbit testing of a large space structure will be required to complete the certification of any mathematical model for the structure dynamic response. The process of establishing a mathematical model that matches measured structure response is referred to as model correlation. Most model correlation approaches have an identification technique to determine structural characteristics from the measurements of the structure response. This problem is approached with one particular class of identification techniques - matrix adjustment methods - which use measured data to produce an optimal update of the structure property matrix, often the stiffness matrix. New methods were developed for identification to handle problems of the size and complexity expected for large space structures. Further development and refinement of these secant-method identification algorithms were undertaken. Also, evaluation of these techniques as an approach for model correlation and damage location was initiated.

  9. Humidity-corrected Arrhenius equation: The reference condition approach.

    PubMed

    Naveršnik, Klemen; Jurečič, Rok

    2016-03-16

    Accelerated and stress stability data are often used to predict the shelf life of pharmaceuticals. Temperature, combined with humidity, accelerates chemical decomposition, and the Arrhenius equation is used to extrapolate accelerated stability results to long-term stability. Statistical estimation of the humidity-corrected Arrhenius equation is not straightforward due to its non-linearity. A two-stage nonlinear fitting approach is used in practice, followed by a prediction stage. We developed a single-stage statistical procedure, called the reference condition approach, which has better statistical properties (less collinearity, direct estimation of uncertainty, narrower prediction interval) and is significantly easier to use, compared to the existing approaches. Our statistical model was populated with data from a 35-day stress stability study on a laboratory batch of vitamin tablets and required a mere 30 laboratory assay determinations. The stability prediction agreed well with the actual 24-month long-term stability of the product. The approach has high potential to assist product formulation, specification setting and stability statements. Copyright © 2016 Elsevier B.V. All rights reserved.
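
    The record does not reproduce the model equations. As a minimal sketch of the reference-condition idea, the code below parameterizes the degradation rate relative to a chosen reference temperature and humidity, so the fitted parameters are the rate at the reference condition plus activation-energy and humidity sensitivities, and estimates them in a single stage by nonlinear least squares. The data points and the exact functional form are assumptions made for illustration.

      import numpy as np
      from scipy.optimize import curve_fit

      R = 8.314  # J/(mol*K)
      T_REF, RH_REF = 298.15, 60.0   # chosen reference condition (25 degC / 60 %RH)

      def rate(X, k_ref, Ea, b):
          """Humidity-corrected Arrhenius rate written relative to the reference condition:
             k = k_ref * exp(-Ea/R * (1/T - 1/T_ref) + b*(RH - RH_ref))."""
          T, RH = X
          return k_ref * np.exp(-Ea / R * (1.0 / T - 1.0 / T_REF) + b * (RH - RH_REF))

      # Hypothetical stress-study degradation rates (%/day) at several T/RH conditions.
      T  = np.array([313.15, 323.15, 333.15, 313.15, 333.15])
      RH = np.array([75.0,   75.0,   75.0,   40.0,   40.0])
      k  = np.array([0.012,  0.030,  0.070,  0.006,  0.035])

      popt, pcov = curve_fit(rate, (T, RH), k, p0=[0.001, 8e4, 0.02])
      k_ref, Ea, b = popt
      print("rate at reference condition: %.4f %%/day, Ea = %.0f J/mol, b = %.3f per %%RH"
            % (k_ref, Ea, b))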

  10. Definition and Proposed Realization of the International Height Reference System (IHRS)

    NASA Astrophysics Data System (ADS)

    Ihde, Johannes; Sánchez, Laura; Barzaghi, Riccardo; Drewes, Hermann; Foerste, Christoph; Gruber, Thomas; Liebsch, Gunter; Marti, Urs; Pail, Roland; Sideris, Michael

    2017-05-01

    Studying, understanding and modelling global change require geodetic reference frames with an order of accuracy higher than the magnitude of the effects to be actually studied and with high consistency and reliability worldwide. The International Association of Geodesy, taking care of providing a precise geodetic infrastructure for monitoring the Earth system, promotes the implementation of an integrated global geodetic reference frame that provides a reliable frame for consistent analysis and modelling of global phenomena and processes affecting the Earth's gravity field, the Earth's surface geometry and the Earth's rotation. The definition, realization, maintenance and wide utilization of the International Terrestrial Reference System guarantee a globally unified geometric reference frame with an accuracy at the millimetre level. An equivalent high-precision global physical reference frame that supports the reliable description of changes in the Earth's gravity field (such as sea level variations, mass displacements, processes associated with geophysical fluids) is missing. This paper addresses the theoretical foundations supporting the implementation of such a physical reference surface in terms of an International Height Reference System and provides guidance for the coming activities required for the practical and sustainable realization of this system. Based on conceptual approaches of physical geodesy, the requirements for a unified global height reference system are derived. In accordance with the practice, its realization as the International Height Reference Frame is designed. Further steps for the implementation are also proposed.

  11. Development of methods for establishing nutrient criteria in lakes and reservoirs: A review.

    PubMed

    Huo, Shouliang; Ma, Chunzi; Xi, Beidou; Zhang, Yali; Wu, Fengchang; Liu, Hongliang

    2018-05-01

    Nutrient criteria provide a scientific foundation for the comprehensive evaluation, prevention, control and management of water eutrophication. In this review, the literature was examined to systematically evaluate the benefits, drawbacks, and applications of statistical analysis, paleolimnological reconstruction, stressor-response model, and model inference approaches for nutrient criteria determination. The developments and challenges in the determination of nutrient criteria in lakes and reservoirs are presented. Reference lakes can reflect the original states of lakes, but reference sites are often unavailable. Using the paleolimnological reconstruction method, it is often difficult to reconstruct the historical nutrient conditions of shallow lakes in which the sediments are easily disturbed. The model inference approach requires sufficient data to identify the appropriate equations and characterize a waterbody or group of waterbodies, thereby increasing the difficulty of establishing nutrient criteria. The stressor-response model is a potential development direction for nutrient criteria determination, and the mechanisms of stressor-response models should be studied further. Based on studies of the relationships among water ecological criteria, eutrophication, nutrient criteria and plankton, methods for determining nutrient criteria should be closely integrated with water management requirements. Copyright © 2017. Published by Elsevier B.V.

  12. Deep water tsunami simulation at global scale using an elastoacoustic approach

    NASA Astrophysics Data System (ADS)

    Salazar Monroy, E. F.; Ramirez-Guzman, L.; Bielak, J.; Sanchez-Sesma, F. J.

    2017-12-01

    In this work, we present the results of the first stage of a tsunami global simulation project using an elastoacoustic approach. The solid-fluid interaction, an approximation that is only valid on a global scale and at large distances from the coast, is modelled using a finite element scheme for a 2D geometry. Comparing analytic and numerical solutions, we observe a good fit for a homogeneous domain - with an extension of 20 km - using 15 points per wavelength. Subsequently, we performed 2D realizations taking a section from a global 3D model and projecting the Tohoku-Oki source obtained by the USGS. The 3D global model uses ETOPO1 and the Preliminary Reference Earth Model (Dziewonski and Anderson, 1981). We analysed 3 cross sections, defined using DART buoys as a reference for each section (i.e., initial and final profile point). The surface water elevation obtained with this coupling strategy is constrained to low frequencies (0.2 Hz). We expect that this coupling strategy can be extended toward higher frequencies and more realistic scenarios considering other geometries (i.e., 3D) and a complete domain (i.e., surface and deep).

  13. Sharing reference data and including cows in the reference population improve genomic predictions in Danish Jersey.

    PubMed

    Su, G; Ma, P; Nielsen, U S; Aamand, G P; Wiggans, G; Guldbrandtsen, B; Lund, M S

    2016-06-01

    Small reference populations limit the accuracy of genomic prediction in numerically small breeds, such as the Danish Jersey. The objective of this study was to investigate two approaches to improve genomic prediction by increasing the size of the reference population in Danish Jersey. The first approach was to include North American Jersey bulls in the Danish Jersey reference population. The second was to genotype cows and use them as reference animals. The validation of genomic prediction was carried out on bulls and cows, respectively. In validation on bulls, about 300 Danish bulls (depending on traits) born in 2005 and later were used as validation data, and the reference populations were: (1) about 1050 Danish bulls, (2) about 1050 Danish bulls and about 1150 US bulls. In validation on cows, about 3000 Danish cows from 87 young half-sib families were used as validation data, and the reference populations were: (1) about 1250 Danish bulls, (2) about 1250 Danish bulls and about 1150 US bulls, (3) about 1250 Danish bulls and about 4800 cows, (4) about 1250 Danish bulls, 1150 US bulls and 4800 Danish cows. A genomic best linear unbiased prediction model was used to predict breeding values. De-regressed proofs were used as response variables. In the validation on bulls for eight traits, the joint DK-US bull reference population led to higher reliability of genomic prediction than the DK bull reference population for six traits, but not for fertility and longevity. Averaged over the eight traits, the gain was 3 percentage points. In the validation on cows for six traits (fertility and longevity were not available), the gain from inclusion of US bulls in the reference population was 6.6 percentage points on average over the six traits, and the gain from inclusion of cows was 8.2 percentage points. However, the gains from cows and US bulls were not additive. The total gain of including both US bulls and Danish cows was 10.5 percentage points. The results indicate that sharing reference data and including cows in the reference population are efficient approaches to increase the reliability of genomic prediction. Therefore, genomic selection is promising for numerically small populations.

  14. Impact of the choice of the precipitation reference data set on climate model selection and the resulting climate change signal

    NASA Astrophysics Data System (ADS)

    Gampe, D.; Ludwig, R.

    2017-12-01

    Regional Climate Models (RCMs) that downscale General Circulation Models (GCMs) are the primary tool to project future climate and serve as input to many impact models to assess the related changes and impacts under such climate conditions. Such RCMs are made available through the Coordinated Regional climate Downscaling Experiment (CORDEX). The ensemble of models provides a range of possible future climate changes around the ensemble mean climate change signal. The model outputs, however, are prone to biases compared to regional observations. A bias correction of these deviations is a crucial step in the impact modelling chain to allow the reproduction of historic conditions of, e.g., river discharge. However, the detection and quantification of model biases are highly dependent on the selected regional reference data set. Additionally, in practice, due to computational constraints it is usually not feasible to consider the entire ensemble of climate simulations with all members as input for impact models which provide information to support decision-making. Although more and more studies focus on model selection based on the preservation of the climate model spread, a selection based on validity, i.e. the representation of the historic conditions, is still a widely applied approach. In this study, several available reference data sets for precipitation are selected to detect the model bias for the reference period 1989 - 2008 over the alpine catchment of the Adige River located in Northern Italy. The reference data sets originate from various sources, such as station data or reanalysis. These data sets are remapped to the common RCM grid at 0.11° resolution and several indicators, such as dry and wet spells, extreme precipitation and general climatology, are calculated to evaluate the capability of the RCMs to reproduce the historical conditions. The resulting RCM spread is compared against the spread of the reference data set to determine the related uncertainties and detect potential model biases with respect to each reference data set. The RCMs are then ranked based on various statistical measures for each indicator and a score matrix is derived to select a subset of RCMs. We show the impact and importance of the reference data set with respect to the resulting climate change signal on the catchment scale.

  15. Selection of reference standard during method development using the analytical hierarchy process.

    PubMed

    Sun, Wan-yang; Tong, Ling; Li, Dong-xiang; Huang, Jing-yi; Zhou, Shui-ping; Sun, Henry; Bi, Kai-shun

    2015-03-25

    A reference standard is critical for ensuring reliable and accurate method performance. One important issue is how to select the ideal one from the alternatives. Unlike the optimization of parameters, the criteria for the reference standard cannot be measured directly. The aim of this paper is to recommend a quantitative approach for the selection of reference standards during method development based on the analytical hierarchy process (AHP) as a decision-making tool. Six alternative single reference standards were assessed in the quantitative analysis of six phenolic acids from Salvia miltiorrhiza and its preparations by using ultra-performance liquid chromatography. The AHP model simultaneously considered six criteria related to reference standard characteristics and method performance: feasibility to obtain, abundance in samples, chemical stability, accuracy, precision and robustness. The priority of each alternative was calculated using the standard AHP analysis method. The results showed that protocatechuic aldehyde is the ideal reference standard, and rosmarinic acid, with about 79.8% of that priority, is the second choice. The determination results successfully verified the evaluation ability of this model. The AHP allowed us to comprehensively consider the benefits and risks of the alternatives. It is an effective and practical tool for the optimization of reference standards during method development. Copyright © 2015 Elsevier B.V. All rights reserved.
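
    A minimal sketch of the standard AHP calculation (pairwise comparison matrix, principal-eigenvector priorities, consistency ratio) is given below; the comparison values and the reduced number of criteria are invented for illustration and are not those of the cited study.

      import numpy as np

      # Hypothetical pairwise comparison matrix for 3 criteria (Saaty 1-9 scale), reciprocal by construction.
      A = np.array([[1.0,  3.0, 5.0],
                    [1/3., 1.0, 2.0],
                    [1/5., 1/2., 1.0]])

      eigvals, eigvecs = np.linalg.eig(A)
      k = np.argmax(eigvals.real)
      weights = np.abs(eigvecs[:, k].real)
      weights /= weights.sum()                      # priority vector (principal eigenvector)

      n = A.shape[0]
      CI = (eigvals.real[k] - n) / (n - 1)          # consistency index
      RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]  # Saaty's random index (tabulated)
      print("weights:", np.round(weights, 3), " CR =", round(CI / RI, 3))  # CR < 0.1 is acceptable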

  16. Comprehensive Aspectual UML approach to support AspectJ.

    PubMed

    Magableh, Aws; Shukur, Zarina; Ali, Noorazean Mohd

    2014-01-01

    Unified Modeling Language is the most popular and widely used Object-Oriented modelling language in the IT industry. This study focuses on investigating the ability to expand UML to some extent to model crosscutting concerns (Aspects) to support AspectJ. Through a comprehensive literature review, we identify and extensively examine all the available Aspect-Oriented UML modelling approaches and find that the existing Aspect-Oriented Design Modelling approaches using UML cannot be considered to provide a framework for a comprehensive Aspectual UML modelling approach and also that there is a lack of adequate Aspect-Oriented tool support. This study also proposes a set of Aspectual UML semantic rules and attempts to generate AspectJ pseudocode from UML diagrams. The proposed Aspectual UML modelling approach is formally evaluated using a focus group to test six hypotheses regarding performance; a "good design" criteria-based evaluation to assess the quality of the design; and an AspectJ-based evaluation as a reference measurement-based evaluation. The results of the focus group evaluation confirm all the hypotheses put forward regarding the proposed approach. The proposed approach provides a comprehensive set of Aspectual UML structural and behavioral diagrams, which are designed and implemented based on a comprehensive and detailed set of AspectJ programming constructs.

  17. Comprehensive Aspectual UML Approach to Support AspectJ

    PubMed Central

    Magableh, Aws; Shukur, Zarina; Mohd. Ali, Noorazean

    2014-01-01

    Unified Modeling Language is the most popular and widely used Object-Oriented modelling language in the IT industry. This study focuses on investigating the ability to expand UML to some extent to model crosscutting concerns (Aspects) to support AspectJ. Through a comprehensive literature review, we identify and extensively examine all the available Aspect-Oriented UML modelling approaches and find that the existing Aspect-Oriented Design Modelling approaches using UML cannot be considered to provide a framework for a comprehensive Aspectual UML modelling approach and also that there is a lack of adequate Aspect-Oriented tool support. This study also proposes a set of Aspectual UML semantic rules and attempts to generate AspectJ pseudocode from UML diagrams. The proposed Aspectual UML modelling approach is formally evaluated using a focus group to test six hypotheses regarding performance; a “good design” criteria-based evaluation to assess the quality of the design; and an AspectJ-based evaluation as a reference measurement-based evaluation. The results of the focus group evaluation confirm all the hypotheses put forward regarding the proposed approach. The proposed approach provides a comprehensive set of Aspectual UML structural and behavioral diagrams, which are designed and implemented based on a comprehensive and detailed set of AspectJ programming constructs. PMID:25136656

  18. Improving accuracy and power with transfer learning using a meta-analytic database.

    PubMed

    Schwartz, Yannick; Varoquaux, Gaël; Pallier, Christophe; Pinel, Philippe; Poline, Jean-Baptiste; Thirion, Bertrand

    2012-01-01

    Typical cohorts in brain imaging studies are not large enough for systematic testing of all the information contained in the images. To build testable working hypotheses, investigators thus rely on analysis of previous work, sometimes formalized in a so-called meta-analysis. In brain imaging, this approach underlies the specification of regions of interest (ROIs) that are usually selected on the basis of the coordinates of previously detected effects. In this paper, we propose to use a database of images, rather than coordinates, and frame the problem as transfer learning: learning a discriminant model on a reference task to apply it to a different but related new task. To facilitate statistical analysis of small cohorts, we use a sparse discriminant model that selects predictive voxels on the reference task and thus provides a principled procedure to define ROIs. The benefits of our approach are twofold. First, it uses the reference database for prediction, i.e., to provide potential biomarkers in a clinical setting. Second, it increases statistical power on the new task. We demonstrate on a set of 18 pairs of functional MRI experimental conditions that our approach gives good prediction. In addition, on a specific transfer situation involving different scanners at different locations, we show that voxel selection based on transfer learning leads to higher detection power on small cohorts.
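
    The voxel-selection step can be mimicked with a short sketch. The code below is not the authors' pipeline; the data are synthetic placeholders and the L1-penalised logistic regression is only one plausible choice of sparse discriminant model.

```python
# Minimal sketch of the transfer idea: a sparse (L1-penalised) model fitted on a
# large "reference" task keeps only a few predictive voxels; those voxels form a
# data-driven ROI that is reused for a small new task.  Dimensions, data and the
# classifier choice are placeholder assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_ref, n_new, n_voxels = 300, 30, 500
w_true = np.zeros(n_voxels)
w_true[:10] = 2.0                                   # only 10 voxels carry signal

X_ref = rng.normal(size=(n_ref, n_voxels))
y_ref = (X_ref @ w_true + rng.normal(size=n_ref) > 0).astype(int)
X_new = rng.normal(size=(n_new, n_voxels))
y_new = (X_new @ w_true + rng.normal(size=n_new) > 0).astype(int)

# Sparse model on the reference task defines the ROI (voxel selection).
ref_model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X_ref, y_ref)
roi = np.flatnonzero(ref_model.coef_[0])
if roi.size == 0:                                   # fallback if the penalty removes everything
    roi = np.argsort(np.abs(ref_model.coef_[0]))[-10:]
print("voxels selected on the reference task:", roi.size)

# The small new cohort is analysed only within the transferred ROI,
# which reduces the dimensionality and increases statistical power.
new_model = LogisticRegression().fit(X_new[:, roi], y_new)
print("accuracy on the new task (toy data):", round(new_model.score(X_new[:, roi], y_new), 3))
```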

  19. Assessing noninferiority in a three-arm trial using the Bayesian approach.

    PubMed

    Ghosh, Pulak; Nathoo, Farouk; Gönen, Mithat; Tiwari, Ram C

    2011-07-10

    Non-inferiority trials, which aim to demonstrate that a test product is not worse than a competitor by more than a pre-specified small amount, are of great importance to the pharmaceutical community. As a result, methodology for designing and analyzing such trials is required, and developing new methods for such analysis is an important area of statistical research. The three-arm trial consists of a placebo, a reference and an experimental treatment, and simultaneously tests the superiority of the reference over the placebo along with comparing this reference to an experimental treatment. In this paper, we consider the analysis of non-inferiority trials using Bayesian methods which incorporate both parametric as well as semi-parametric models. The resulting testing approach is both flexible and robust. The benefit of the proposed Bayesian methods is assessed via simulation, based on a study examining home-based blood pressure interventions. Copyright © 2011 John Wiley & Sons, Ltd.

  20. Region-Based Prediction for Image Compression in the Cloud.

    PubMed

    Begaint, Jean; Thoreau, Dominique; Guillotel, Philippe; Guillemot, Christine

    2018-04-01

    With the increasing number of images stored in the cloud, external image similarities can be leveraged to compress images efficiently by exploiting inter-image correlations. In this paper, we propose a novel image prediction scheme for cloud storage. Unlike current state-of-the-art methods, we use a semi-local approach to exploit inter-image correlation. The reference image is first segmented into multiple planar regions determined from matched local features and super-pixels. The geometric and photometric disparities between the matched regions of the reference image and the current image are then compensated. Finally, multiple references are generated from the estimated compensation models and organized in a pseudo-sequence to differentially encode the input image using classical video coding tools. Experimental results demonstrate that the proposed approach yields significant rate-distortion performance improvements compared with current image inter-coding solutions such as High Efficiency Video Coding.

  1. Discretization-dependent model for weakly connected excitable media

    NASA Astrophysics Data System (ADS)

    Arroyo, Pedro André; Alonso, Sergio; Weber dos Santos, Rodrigo

    2018-03-01

    Pattern formation has been widely observed in extended chemical and biological processes. Although the biochemical systems are highly heterogeneous, homogenized continuum approaches formed by partial differential equations have been employed frequently. Such approaches are usually justified by the difference of scales between the heterogeneities and the characteristic spatial size of the patterns. Under different conditions, for example under weak coupling, discrete models are more adequate. However, discrete models may be less manageable, for instance in terms of numerical implementation and mesh generation, than the associated continuum models. Here we study a model that approaches discreteness while permitting computer implementation on general unstructured meshes. The model is cast as a partial differential equation but with a parameter that depends not only on the sizes of the heterogeneities, as in the case of quasicontinuum models, but also on the discretization mesh. Therefore, we refer to it as a discretization-dependent model. We validate the approach in a generic excitable medium that simulates three different phenomena: the propagation of membrane action potentials in cardiac tissue, propagation in myelinated axons of neurons, and concentration waves in chemical microemulsions.
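
    A toy version of such an excitable medium helps to fix ideas. The sketch below is not the paper's model: it is a generic FitzHugh-Nagumo medium on a regular 1D grid in which the effective coupling is written explicitly as a function of the grid spacing, mimicking a discretization-dependent parameter; all values are illustrative.

```python
# Minimal sketch (assumptions only): a FitzHugh-Nagumo excitable medium on a 1D
# grid where the effective coupling depends on the grid spacing dx, so the same
# equations behave differently under different discretizations.
import numpy as np

nx, dx, dt, steps = 200, 0.5, 0.01, 4000
a, b, eps, D = 0.1, 0.5, 0.01, 1.0

u = np.zeros(nx)          # activator (membrane-potential-like variable)
v = np.zeros(nx)          # recovery variable
u[:5] = 1.0               # stimulate the left edge to launch a wave

coupling = D / dx**2      # discretization-dependent coupling strength

for _ in range(steps):
    lap = np.zeros(nx)
    lap[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]
    lap[0], lap[-1] = u[1] - u[0], u[-2] - u[-1]   # no-flux boundaries
    du = u * (1 - u) * (u - a) - v + coupling * lap
    dv = eps * (b * u - v)
    u, v = u + dt * du, v + dt * dv

print("excited nodes (u > 0.5) after the run:", int(np.sum(u > 0.5)))
```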

  2. A one-model approach based on relaxed combinations of inputs for evaluating input congestion in DEA

    NASA Astrophysics Data System (ADS)

    Khodabakhshi, Mohammad

    2009-08-01

    This paper provides a one-model approach to input congestion based on the input relaxation model developed in data envelopment analysis (e.g. [G.R. Jahanshahloo, M. Khodabakhshi, Suitable combination of inputs for improving outputs in DEA with determining input congestion -- Considering textile industry of China, Applied Mathematics and Computation (1) (2004) 263-273; G.R. Jahanshahloo, M. Khodabakhshi, Determining assurance interval for non-Archimedean ele improving outputs model in DEA, Applied Mathematics and Computation 151 (2) (2004) 501-506; M. Khodabakhshi, A super-efficiency model based on improved outputs in data envelopment analysis, Applied Mathematics and Computation 184 (2) (2007) 695-703; M. Khodabakhshi, M. Asgharian, An input relaxation measure of efficiency in stochastic data analysis, Applied Mathematical Modelling 33 (2009) 2010-2023]). This approach reduces the three problems that must be solved with the two-model approach introduced in the first of the above-mentioned references to two problems, which is certainly important from a computational point of view. The model is applied to a set of data extracted from the ISI database to estimate the input congestion of 12 Canadian business schools.

  3. Computational Fluid Dynamics Simulation of Flows in an Oxidation Ditch Driven by a New Surface Aerator.

    PubMed

    Huang, Weidong; Li, Kun; Wang, Gan; Wang, Yingzhe

    2013-11-01

    In this article, we present a newly designed inverse umbrella surface aerator and test its performance in driving the flow of an oxidation ditch. Results show that it drives the oxidation ditch better than the original design, with a higher average velocity and a more uniform flow field. We also present a computational fluid dynamics model for predicting the flow field in an oxidation ditch driven by a surface aerator. An improved momentum source term approach for simulating the flow field of the oxidation ditch driven by an inverse umbrella surface aerator was developed and validated through experiments. Four turbulence models were investigated with the approach, namely the standard k-ɛ model, the RNG k-ɛ model, the realizable k-ɛ model and the Reynolds stress model, and the predicted data were compared with those calculated with the multiple rotating reference frame approach (MRF) and the sliding mesh approach (SM). Results of the momentum source term approach are in good agreement with the experimental data, and its prediction accuracy is better than that of MRF and close to that of SM. It is also found that the momentum source term approach has lower computational expenses, is simpler to preprocess, and is easier to use.

  4. Analysis of composite plates by using mechanics of structure genome and comparison with ANSYS

    NASA Astrophysics Data System (ADS)

    Zhao, Banghua

    Motivated by a recently developed concept, the Structure Genome (SG), which is defined as the smallest mathematical building block of a structure, a new approach named Mechanics of Structure Genome (MSG) to model and analyze composite plates is introduced. MSG is implemented in a general-purpose code named SwiftComp(TM), which provides the constitutive models needed in structural analysis by homogenization and pointwise local fields by dehomogenization. To improve the user friendliness of SwiftComp(TM), a simple graphic user interface (GUI) based on the ANSYS Mechanical APDL platform, called the ANSYS-SwiftComp GUI, is developed, which provides a convenient way to create common or arbitrary customized SG models in ANSYS and invoke SwiftComp(TM) to perform homogenization and dehomogenization. The global structural analysis can also be handled in ANSYS after homogenization, which can predict the global behavior and provide the inputs needed for dehomogenization. To demonstrate the accuracy and efficiency of the MSG approach, several numerical cases are studied and compared using both MSG and ANSYS. In the ANSYS approach, 3D solid element models (ANSYS 3D approach) are used as reference models, and the 2D shell element models created by ANSYS Composite PrepPost (ACP approach) are compared with the MSG approach. The results of the MSG approach agree well with the ANSYS 3D approach while being as efficient as the ACP approach. Therefore, the MSG approach provides an efficient and accurate new way to model composite plates.

  5. Second-order sliding mode controller with model reference adaptation for automatic train operation

    NASA Astrophysics Data System (ADS)

    Ganesan, M.; Ezhilarasi, D.; Benni, Jijo

    2017-11-01

    In this paper, a new approach to model-reference-based adaptive second-order sliding mode control, together with adaptive state feedback, is presented to control the longitudinal dynamic motion of a high-speed train for automatic train operation, with the objective of minimal-jerk travel for the passengers. The nonlinear dynamic model for the longitudinal motion of the train, which comprises locomotive and coach subsystems, is constructed as a multiple point-mass model by considering the forces acting on the vehicle. An adaptation scheme using the Lyapunov criterion is derived to tune the controller gains by considering a linear, stable reference model that ensures the stability of the system in closed loop. The effectiveness of the controller tracking performance is tested under uncertain passenger load, coupler-draft gear parameters, variations in propulsion resistance coefficients, and environmental disturbances due to side wind and wet rail conditions. The results demonstrate improved tracking performance of the proposed control scheme, with the least jerk under maximum parameter uncertainties, when compared to constant-gain second-order sliding mode control.
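
    The model-reference adaptation idea behind such a scheme can be shown in a few lines. The sketch below is a generic textbook MIT-rule example for a scalar first-order plant, not the paper's second-order sliding mode controller; all parameter values are illustrative.

```python
# Minimal sketch of model reference adaptive control for a first-order plant
# with unknown gain: an adaptive feedforward gain is tuned (MIT rule) so that
# the plant output tracks a stable reference model.
import numpy as np

dt, T = 0.01, 40.0
a = 2.0                   # known pole of plant and reference model
k_plant = 0.5             # unknown plant gain:   dy/dt  = -a*y  + k_plant*u
k_model = 2.0             # reference model gain: dym/dt = -a*ym + k_model*r
gamma = 0.5               # adaptation gain

y = ym = theta = 0.0      # theta is the adaptive feedforward gain, u = theta * r
for k in range(int(T / dt)):
    r = 1.0 if (k * dt) % 10 < 5 else -1.0        # square-wave reference command
    u = theta * r
    e = y - ym                                    # tracking error w.r.t. the model output
    theta -= gamma * e * ym * dt                  # MIT rule: d(theta)/dt = -gamma * e * ym
    y  += dt * (-a * y  + k_plant * u)
    ym += dt * (-a * ym + k_model * r)

print("adaptive gain theta:", round(theta, 3), "(ideal value k_model/k_plant = 4.0)")
print("final tracking error:", round(y - ym, 4))
```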

  6. The Usability Analysis of an E-Learning Environment

    ERIC Educational Resources Information Center

    Torun, Fulya; Tekedere, Hakan

    2015-01-01

    In this research, an E-learning environment is developed for the teacher candidates taking the course on Scientific Research Methods. The course contents were adapted to one of the constructivist approach models referred to as 5E, and an expert opinion was received for the compliance of this model. A usability analysis was also performed to…

  7. A Model-Based Architecture Approach to Ship Design Linking Capability Needs to System Solutions

    DTIC Science & Technology

    2012-06-01

    [Acronym list from the report: NSSM = NATO Sea Sparrow Missile, RAM = Rolling Airframe Missile, CIWS = Close-In Weapon System, 3D = Three Dimensional, Ps = Probability of Survival, PHit = …] …example effectiveness model. The primary MOP is the inverse of the probability of taking a hit (1 - PHit), which, in this study, will be referred to as

  8. Medical Team Training: Using Simulation as a Teaching Strategy for Group Work

    ERIC Educational Resources Information Center

    Moyer, Michael R.; Brown, Rhonda Douglas

    2011-01-01

    Described is an innovative approach currently being used to inspire group work, specifically a medical team training model, referred to as The Simulation Model, which includes as its major components: (1) Prior Training in Group Work of Medical Team Members; (2) Simulation in Teams or Groups; (3) Multidisciplinary Teamwork; (4) Team Leader…

  9. Liquid Fuels Market Module - NEMS Documentation

    EIA Publications

    2017-01-01

    Defines the objectives of the Liquid Fuels Market Model (LFMM), describes its basic approach, and provides detail on how it works. This report is intended as a reference document for model analysts, users, and the public. This edition of the LFMM reflects changes made to the module over the past two years for the Annual Energy Outlook 2016.

  10. Understanding the Common Elements of Evidence-Based Practice: Misconceptions and Clinical Examples

    ERIC Educational Resources Information Center

    Chorpita, Bruce F.; Becker, Kimberly D.; Daleiden, Eric L.

    2007-01-01

    In this article, the authors proposed a distillation and matching model (DMM) that describes how evidence-based treatment operations can be conceptualized at a lower order level of analysis than simply by their manuals. Also referred to as the "common elements" approach, this model demonstrates the feasibility of coding and identifying the…

  11. An Associative Index Model for the Results List Based on Vannevar Bush's Selection Concept

    ERIC Educational Resources Information Center

    Cole, Charles; Julien, Charles-Antoine; Leide, John E.

    2010-01-01

    Introduction: We define the results list problem in information search and suggest the "associative index model", an ad-hoc, user-derived indexing solution based on Vannevar Bush's description of an associative indexing approach for his memex machine. We further define what selection means in indexing terms with reference to Charles…

  12. Fables, Fancies and Failures in Cross-Cultural Training.

    ERIC Educational Resources Information Center

    Downs, James F.

    1969-01-01

    Several different approaches have been taken to cross-cultural training in Peace Corps Training programs. Three of these might be referred to as the intellectual model (consisting of lectures on the host country culture), the area simulation model (placing the trainees in a surrounding which in some way resembles the country in which they will be…

  13. Nonlinear dynamic analysis of flexible multibody systems

    NASA Technical Reports Server (NTRS)

    Bauchau, Olivier A.; Kang, Nam Kook

    1991-01-01

    Two approaches are developed to analyze the dynamic behavior of flexible multibody systems. In the first approach each body is modeled with a modal methodology in a local non-inertial frame of reference, whereas in the second approach, each body is modeled with a finite element methodology in the inertial frame. In both cases, the interaction among the various elastic bodies is represented by constraint equations. The two approaches were compared for accuracy and efficiency: the first approach is preferable when the nonlinearities are not too strong but it becomes cumbersome and expensive to use when many modes must be used. The second approach is more general and easier to implement but could result in high computation costs for a large system. The constraints should be enforced in a time derivative fashion for better accuracy and stability.

  14. Temperature and solute-transport simulation in streamflow using a Lagrangian reference frame

    USGS Publications Warehouse

    Jobson, Harvey E.

    1980-01-01

    A computer program for simulating one-dimensional, unsteady temperature and solute transport in a river has been developed and documented for general use. The solution approach to the convective-diffusion equation uses a moving reference frame (Lagrangian) which greatly simplifies the mathematics of the solution procedure and dramatically reduces errors caused by numerical dispersion. The model documentation is presented as a series of four programs of increasing complexity. The conservative transport model can be used to route a single conservative substance. The simplified temperature model is used to predict water temperature in rivers when only temperature and windspeed data are available. The complete temperature model is highly accurate but requires rather complete meteorological data. Finally, the 10-parameter model can be used to route as many as 10 interacting constituents through a river reach. (USGS)
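
    The benefit of the moving (Lagrangian) reference frame is easy to see in a toy routing example. The sketch below is not the USGS program; it only illustrates how parcels carried with the flow avoid Eulerian advection, so that only source/sink terms need to be integrated along each parcel path. Velocity, decay rate and the initial pulse are illustrative assumptions.

```python
# Minimal sketch of Lagrangian solute routing: concentration parcels are
# advected with the flow, so numerical dispersion from grid-based advection is
# avoided and only reaction/decay is evaluated on each parcel.
import numpy as np

u, k_decay = 0.5, 0.005         # flow velocity (m/s) and first-order decay (1/s)
dt, steps = 1.0, 300

x = np.linspace(0.0, 1000.0, 201)           # parcel positions along the reach (m)
c = np.exp(-((x - 200.0) / 50.0) ** 2)      # initial concentration pulse

for _ in range(steps):
    x = x + u * dt                          # parcels move with the flow (no grid transport)
    c = c * np.exp(-k_decay * dt)           # source/sink terms evaluated on each parcel

peak = x[np.argmax(c)]
print(f"peak location after {steps * dt:.0f} s: {peak:.1f} m, peak concentration: {c.max():.3f}")
```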

  15. A Theoretical Frame of Reference for Rational-Emotive Psychotherapy and Its Application to the Problems of the Under-Achiever.

    ERIC Educational Resources Information Center

    Konietzko, Kurt

    The Rational Emotive Approach centers upon a model in which the human being is seen as a series of systems constantly interacting with others to keep itself functioning. Underlying this approach is the view that it is never the event, but our view of it, which creates the emotional response. Many irrational, culturally structured beliefs cause…

  16. Structure and thermodynamics of a mixture of patchy and spherical colloids: A multi-body association theory with complete reference fluid information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bansal, Artee; Asthagiri, D.; Cox, Kenneth R.

    A mixture of solvent particles with short-range, directional interactions and solute particles with short-range, isotropic interactions that can bond multiple times is of fundamental interest in understanding liquids and colloidal mixtures. Because of multi-body correlations, predicting the structure and thermodynamics of such systems remains a challenge. Earlier Marshall and Chapman [J. Chem. Phys. 139, 104904 (2013)] developed a theory wherein association effects due to interactions multiply the partition function for clustering of particles in a reference hard-sphere system. The multi-body effects are incorporated in the clustering process, which in their work was obtained in the absence of the bulk medium. The bulk solvent effects were then modeled approximately within a second-order perturbation approach. However, their approach is inadequate at high densities and for large association strengths. Based on the idea that the clustering of solvent in a defined coordination volume around the solute is related to occupancy statistics in that defined coordination volume, we develop an approach to incorporate the complete information about hard-sphere clustering in a bulk solvent at the density of interest. The occupancy probabilities are obtained from enhanced sampling simulations but we also develop a concise parametric form to model these probabilities using the quasichemical theory of solutions. We show that incorporating the complete reference information results in an approach that can predict the bonding state and thermodynamics of the colloidal solute for a wide range of system conditions.

  17. Setting nutrient thresholds to support an ecological assessment based on nutrient enrichment, potential primary production and undesirable disturbance.

    PubMed

    Devlin, Michelle; Painting, Suzanne; Best, Mike

    2007-01-01

    The EU Water Framework Directive recognises that ecological status is supported by the prevailing physico-chemical conditions in each water body. This paper describes an approach to providing guidance on setting thresholds for nutrients, taking account of the biological response to nutrient enrichment evident in different types of water. Indices of pressure, state and impact are used to achieve a robust nutrient (nitrogen) threshold by considering each individual index relative to a defined standard, scale or threshold. These indices include winter nitrogen concentrations relative to a predetermined reference value; the potential of the waterbody to support phytoplankton growth (estimated as primary production); and detection of an undesirable disturbance (measured as dissolved oxygen). Proposed reference values are based on a combination of historical records, offshore (limited human influence) nutrient concentrations, literature values and modelled data. Statistical confidence is based on a number of attributes, including the distance of confidence limits away from a reference threshold and how well the model is populated with real data. This evidence-based approach ensures that nutrient thresholds are based on knowledge of real and measurable biological responses in transitional and coastal waters.

  18. Adapt-Mix: learning local genetic correlation structure improves summary statistics-based analyses

    PubMed Central

    Park, Danny S.; Brown, Brielin; Eng, Celeste; Huntsman, Scott; Hu, Donglei; Torgerson, Dara G.; Burchard, Esteban G.; Zaitlen, Noah

    2015-01-01

    Motivation: Approaches to identifying new risk loci, training risk prediction models, imputing untyped variants and fine-mapping causal variants from summary statistics of genome-wide association studies are playing an increasingly important role in the human genetics community. Current summary statistics-based methods rely on global ‘best guess’ reference panels to model the genetic correlation structure of the dataset being studied. This approach, especially in admixed populations, has the potential to produce misleading results, ignores variation in local structure and is not feasible when appropriate reference panels are missing or small. Here, we develop a method, Adapt-Mix, that combines information across all available reference panels to produce estimates of local genetic correlation structure for summary statistics-based methods in arbitrary populations. Results: We applied Adapt-Mix to estimate the genetic correlation structure of both admixed and non-admixed individuals using simulated and real data. We evaluated our method by measuring the performance of two summary statistics-based methods: imputation and joint-testing. When using our method as opposed to the current standard of ‘best guess’ reference panels, we observed a 28% decrease in mean-squared error for imputation and a 73.7% decrease in mean-squared error for joint-testing. Availability and implementation: Our method is publicly available in a software package called ADAPT-Mix available at https://github.com/dpark27/adapt_mix. Contact: noah.zaitlen@ucsf.edu PMID:26072481

  19. Thermodynamics of mixtures of patchy and spherical colloids of different sizes: A multi-body association theory with complete reference fluid information.

    PubMed

    Bansal, Artee; Valiya Parambathu, Arjun; Asthagiri, D; Cox, Kenneth R; Chapman, Walter G

    2017-04-28

    We present a theory to predict the structure and thermodynamics of mixtures of colloids of different diameters, building on our earlier work [A. Bansal et al., J. Chem. Phys. 145, 074904 (2016)] that considered mixtures with all particles constrained to have the same size. The patchy, solvent particles have short-range directional interactions, while the solute particles have short-range isotropic interactions. The hard-sphere mixture without any association site forms the reference fluid. An important ingredient within the multi-body association theory is the description of clustering of the reference solvent around the reference solute. Here we account for the physical, multi-body clusters of the reference solvent around the reference solute in terms of occupancy statistics in a defined observation volume. These occupancy probabilities are obtained from enhanced sampling simulations, but we also present statistical mechanical models to estimate these probabilities with limited simulation data. Relative to an approach that describes only up to three-body correlations in the reference, incorporating the complete reference information better predicts the bonding state and thermodynamics of the physical solute for a wide range of system conditions. Importantly, analysis of the residual chemical potential of the infinitely dilute solute from molecular simulation and theory shows that whereas the chemical potential is somewhat insensitive to the description of the structure of the reference fluid, the energetic and entropic contributions are not, with the results from the complete reference approach being in better agreement with particle simulations.
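
    The occupancy-statistics ingredient can be illustrated with a small counting sketch. The code below is not the authors' analysis; the coordinates are random placeholders standing in for simulation snapshots, and the observation radius and box size are made-up values.

```python
# Minimal sketch of gathering occupancy statistics for a reference fluid: for
# each frame, count how many solvent centres fall inside a defined observation
# (coordination) volume around the solute, then estimate the probabilities p_n.
import numpy as np

rng = np.random.default_rng(1)
box, r_obs, n_solvent, n_frames = 10.0, 1.5, 200, 500
solute = np.full(3, box / 2)                    # solute fixed at the box centre

counts = []
for _ in range(n_frames):
    solvent = rng.uniform(0.0, box, size=(n_solvent, 3))   # placeholder snapshot
    d = solvent - solute
    d -= box * np.round(d / box)                # minimum-image convention
    r = np.linalg.norm(d, axis=1)
    counts.append(int(np.sum(r < r_obs)))

counts = np.array(counts)
p_n = np.bincount(counts, minlength=counts.max() + 1) / n_frames   # occupancy probabilities
print("mean occupancy:", counts.mean(), "p_n:", np.round(p_n, 3))
```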

  20. Thermodynamics of mixtures of patchy and spherical colloids of different sizes: A multi-body association theory with complete reference fluid information

    NASA Astrophysics Data System (ADS)

    Bansal, Artee; Valiya Parambathu, Arjun; Asthagiri, D.; Cox, Kenneth R.; Chapman, Walter G.

    2017-04-01

    We present a theory to predict the structure and thermodynamics of mixtures of colloids of different diameters, building on our earlier work [A. Bansal et al., J. Chem. Phys. 145, 074904 (2016)] that considered mixtures with all particles constrained to have the same size. The patchy, solvent particles have short-range directional interactions, while the solute particles have short-range isotropic interactions. The hard-sphere mixture without any association site forms the reference fluid. An important ingredient within the multi-body association theory is the description of clustering of the reference solvent around the reference solute. Here we account for the physical, multi-body clusters of the reference solvent around the reference solute in terms of occupancy statistics in a defined observation volume. These occupancy probabilities are obtained from enhanced sampling simulations, but we also present statistical mechanical models to estimate these probabilities with limited simulation data. Relative to an approach that describes only up to three-body correlations in the reference, incorporating the complete reference information better predicts the bonding state and thermodynamics of the physical solute for a wide range of system conditions. Importantly, analysis of the residual chemical potential of the infinitely dilute solute from molecular simulation and theory shows that whereas the chemical potential is somewhat insensitive to the description of the structure of the reference fluid, the energetic and entropic contributions are not, with the results from the complete reference approach being in better agreement with particle simulations.

  1. Advanced scatter search approach and its application in a sequencing problem of mixed-model assembly lines in a case company

    NASA Astrophysics Data System (ADS)

    Liu, Qiong; Wang, Wen-xi; Zhu, Ke-ren; Zhang, Chao-yong; Rao, Yun-qing

    2014-11-01

    Mixed-model assembly line sequencing is significant in reducing the production time and overall cost of production. To improve production efficiency, a mathematical model aiming simultaneously to minimize overtime, idle time and total set-up costs is developed. To obtain high-quality and stable solutions, an advanced scatter search approach is proposed. In the proposed algorithm, a new diversification generation method based on a genetic algorithm is presented to generate a set of potentially diverse and high-quality initial solutions. Many methods, including reference set update, subset generation, solution combination and improvement methods, are designed to maintain the diversification of populations and to obtain high-quality ideal solutions. The proposed model and algorithm are applied and validated in a case company. The results indicate that the proposed advanced scatter search approach is significant for mixed-model assembly line sequencing in this company.

  2. Accuracy of Digital vs Conventional Implant Impression Approach: A Three-Dimensional Comparative In Vitro Analysis.

    PubMed

    Basaki, Kinga; Alkumru, Hasan; De Souza, Grace; Finer, Yoav

    To assess the three-dimensional (3D) accuracy and clinical acceptability of implant definitive casts fabricated using a digital impression approach and to compare the results with those of a conventional impression method in a partially edentulous condition. A mandibular reference model was fabricated with implants in the first premolar and molar positions to simulate a patient with bilateral posterior edentulism. Ten implant-level impressions per method were made using either an intraoral scanner with scanning abutments for the digital approach or an open-tray technique and polyvinylsiloxane material for the conventional approach. 3D analysis and comparison of implant location on resultant definitive casts were performed using laser scanner and quality control software. The inter-implant distances and interimplant angulations for each implant pair were measured for the reference model and for each definitive cast (n = 20 per group); these measurements were compared to calculate the magnitude of error in 3D for each definitive cast. The influence of implant angulation on definitive cast accuracy was evaluated for both digital and conventional approaches. Statistical analysis was performed using t test (α = .05) for implant position and angulation. Clinical qualitative assessment of accuracy was done via the assessment of the passivity of a master verification stent for each implant pair, and significance was analyzed using chi-square test (α = .05). A 3D error of implant positioning was observed for the two impression techniques vs the reference model, with mean ± standard deviation (SD) error of 116 ± 94 μm and 56 ± 29 μm for the digital and conventional approaches, respectively (P = .01). In contrast, the inter-implant angulation errors were not significantly different between the two techniques (P = .83). Implant angulation did not have a significant influence on definitive cast accuracy within either technique (P = .64). The verification stent demonstrated acceptable passive fit for 11 out of 20 casts and 18 out of 20 casts for the digital and conventional methods, respectively (P = .01). Definitive casts fabricated using the digital impression approach were less accurate than those fabricated from the conventional impression approach for this simulated clinical scenario. A significant number of definitive casts generated by the digital technique did not meet clinically acceptable accuracy for the fabrication of a multiple implant-supported restoration.

  3. Assessing statistical differences between parameters estimates in Partial Least Squares path modeling.

    PubMed

    Rodríguez-Entrena, Macario; Schuberth, Florian; Gelhard, Carsten

    2018-01-01

    Structural equation modeling using partial least squares (PLS-SEM) has become a mainstream modeling approach in various disciplines. Nevertheless, prior literature still lacks practical guidance on how to properly test for differences between parameter estimates. Whereas existing techniques such as parametric and non-parametric approaches in PLS multi-group analysis only allow assessing differences between parameters that are estimated for different subpopulations, the study at hand introduces a technique that also allows assessing whether two parameter estimates derived from the same sample are statistically different. To illustrate this advancement to PLS-SEM, we particularly refer to a reduced version of the well-established technology acceptance model.
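
    The general idea of comparing two estimates from the same sample can be sketched with a case-resampling bootstrap. The code below is an illustration, not the authors' procedure: an ordinary regression stands in for the PLS path model and the data are synthetic placeholders.

```python
# Minimal sketch: bootstrap the same sample, re-estimate both coefficients each
# time, and build a percentile confidence interval for their difference.
import numpy as np

rng = np.random.default_rng(42)
n = 300
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 0.5 * x1 + 0.3 * x2 + rng.normal(scale=1.0, size=n)
X = np.column_stack([np.ones(n), x1, x2])

def coefs(Xb, yb):
    return np.linalg.lstsq(Xb, yb, rcond=None)[0]

b = coefs(X, y)
diffs = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                    # resample cases with replacement
    bb = coefs(X[idx], y[idx])
    diffs.append(bb[1] - bb[2])                    # difference of the two coefficients

lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"estimated difference: {b[1] - b[2]:.3f}, 95% bootstrap CI: [{lo:.3f}, {hi:.3f}]")
print("statistically different at the 5% level:", not (lo <= 0.0 <= hi))
```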

  4. Single-phase power distribution system power flow and fault analysis

    NASA Technical Reports Server (NTRS)

    Halpin, S. M.; Grigsby, L. L.

    1992-01-01

    Alternative methods for power flow and fault analysis of single-phase distribution systems are presented. The algorithms for both power flow and fault analysis utilize a generalized approach to network modeling. The generalized admittance matrix, formed using elements of linear graph theory, is an accurate network model for all possible single-phase network configurations. Unlike the standard nodal admittance matrix formulation algorithms, the generalized approach uses generalized component models for the transmission line and transformer. The standard assumption of a common node voltage reference point is not required to construct the generalized admittance matrix. Therefore, truly accurate simulation results can be obtained for networks that cannot be modeled using traditional techniques.

  5. Multimodal Fusion with Reference: Searching for Joint Neuromarkers of Working Memory Deficits in Schizophrenia

    PubMed Central

    Qi, Shile; Calhoun, Vince D.; van Erp, Theo G. M.; Bustillo, Juan; Damaraju, Eswar; Turner, Jessica A.; Du, Yuhui; Chen, Jiayu; Yu, Qingbao; Mathalon, Daniel H.; Ford, Judith M.; Voyvodic, James; Mueller, Bryon A.; Belger, Aysenil; McEwen, Sarah; Potkin, Steven G.; Preda, Adrian; Jiang, Tianzi

    2017-01-01

    Multimodal fusion is an effective approach to take advantage of cross-information among multiple imaging data to better understand brain diseases. However, most current fusion approaches are blind, without adopting any prior information. There is increasing interest in uncovering the neurocognitive mapping of specific behavioral measurements onto enriched brain imaging data; hence, a supervised, goal-directed model that uses a priori information as a reference to guide multimodal data fusion is needed and a natural option. Here we proposed a fusion-with-reference model, called “multi-site canonical correlation analysis with reference plus joint independent component analysis” (MCCAR+jICA), which can precisely identify co-varying multimodal imaging patterns closely related to reference information, such as cognitive scores. In a 3-way fusion simulation, the proposed method was compared with its alternatives on estimation accuracy of both target component decomposition and modality linkage detection. MCCAR+jICA outperforms the others with higher precision. In human imaging data, working memory performance was utilized as a reference to investigate the covarying functional and structural brain patterns among 3 modalities and how they are impaired in schizophrenia. Two independent cohorts (294 and 83 subjects, respectively) were used. Interestingly, similar brain maps were identified between the two cohorts, with substantial overlap in the executive control networks in fMRI, the salience network in sMRI, and major white matter tracts in dMRI. These regions have been linked with working memory deficits in schizophrenia in multiple reports, while MCCAR+jICA further verified them in a repeatable, joint manner, demonstrating the potential of such results to identify potential neuromarkers for mental disorders. PMID:28708547

  6. Tailoring periodical collections to meet institutional needs.

    PubMed Central

    Delman, B S

    1984-01-01

    A system for tailoring journal collections to meet institutional needs is described. The approach is based on the view that reference work and collection development are variant and complementary forms of the same library function; both tasks have as their objective a literature response to information problems. Utilizing the tools and procedures of the reference search in response to a specific collection development problem topic, the author created a model ranked list of relevant journals. Finally, by linking the model to certain operational and environmental factors in three different health care organizations, he tailored the collection to meet the institutions' respective information needs. PMID:6375775

  7. Stochastic modelling of temperatures affecting the in situ performance of a solar-assisted heat pump: The multivariate approach and physical interpretation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loveday, D.L.; Craggs, C.

    Box-Jenkins-based multivariate stochastic modeling is carried out using data recorded from a domestic heating system. The system comprises an air-source heat pump sited in the roof space of a house, solar assistance being provided by the conventional tile roof acting as a radiation absorber. Multivariate models are presented which illustrate the time-dependent relationships between three air temperatures - at external ambient, at entry to, and at exit from, the heat pump evaporator. Using a deterministic modeling approach, physical interpretations are placed on the results of the multivariate technique. It is concluded that the multivariate Box-Jenkins approach is a suitable technique for building thermal analysis. Application to multivariate model-based control is discussed, with particular reference to building energy management systems. It is further concluded that stochastic modeling of data drawn from a short monitoring period offers a means of retrofitting an advanced model-based control system in existing buildings, which could be used to optimize energy savings. An approach to system simulation is suggested.

  8. Combined estimation of kappa and shear-wave velocity profile of the Japanese rock reference

    NASA Astrophysics Data System (ADS)

    Poggi, Valerio; Edwards, Benjamin; Fäh, Donat

    2013-04-01

    The definition of a common soil or rock reference is a key issue in probabilistic seismic hazard analysis (PSHA), microzonation studies, local site-response analysis and, more generally, when predicted or observed ground motion is compared for sites of different characteristics. A scaling procedure, which accounts for a common reference, is then necessary to avoid bias induced by the differences in the local geology. Nowadays methods requiring the definition of a reference condition generally prescribe the characteristics of a rock reference, calibrated using indirect estimation methods based on geology or on surface proxies. In most cases, a unique average shear-wave velocity value is prescribed (e.g. Vs30 = 800 m/s, as for class A of EUROCODE8). Some attempts at defining the whole shape of a reference rock velocity profile have been described, often without a clear physical justification of how such a selection was performed. Moreover, in spite of its relevance in affecting the high-frequency part of the spectrum, the definition of the associated reference attenuation is in most cases missing or, when present, still remains quite uncertain. In this study we propose an approach that is based on the comparison between empirical anelastic amplification functions from spectral modeling of earthquakes and average S-wave velocities computed using the quarter-wavelength approach. The method is an extension of the approach originally proposed by Poggi et al. (2011) for Switzerland, and is here applied to Japan. For the analysis we make use of a selection of 36 stiff-soil and rock sites from the Japanese KiK-net network, for which a measured velocity profile is available. With respect to the previous study, however, we now analyze separately the elastic and anelastic contributions of the estimated empirical amplification. In a first step - which is consistent with the original work - only the elastic part of the amplification spectrum is considered. This procedure allows the retrieval of the shape of the velocity profile that is characterized by no relative amplification within the network. Subsequently, the contribution of intrinsic attenuation is analyzed, disaggregated from the anelastic function by using the frequency-independent (and site-dependent) attenuation operator kappa (κ). By comparing the dependency of κ with the quarter-wavelength velocity at selected sites, a frequency-dependent predictive equation is established to model the attenuation characteristics of an arbitrary rock or stiff-soil velocity model, such as the reference model obtained in the first step. The result of this application can be used to model the site-dependent attenuation for any rock and stiff-soil site for which an estimation of the velocity profile or its corresponding quarter-wavelength velocity representation is available. As an additional output of the present study, we also propose a simplified method to estimate kappa from the average velocity estimates over the first 30 m (Vs30). We provide an example of such predictions for a range of Vs30 velocities up to 2000 m/s.
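
    The quarter-wavelength average velocity used throughout this approach is a simple travel-time calculation and can be sketched directly. The layered profile below is made up for illustration; it is not one of the KiK-net sites.

```python
# Minimal sketch of the quarter-wavelength average velocity: for each frequency
# f, find the depth z(f) whose vertical S-wave travel time equals 1/(4f); the
# quarter-wavelength velocity is z(f) divided by that travel time.
import numpy as np

# Illustrative layered profile: thickness (m) and shear-wave velocity (m/s).
thickness = np.array([5.0, 15.0, 30.0, 1e4])      # last layer acts as a half-space
vs        = np.array([200.0, 400.0, 800.0, 1500.0])

def qwl_velocity(freq):
    t_target = 1.0 / (4.0 * freq)                 # quarter-period travel time
    t, z = 0.0, 0.0
    for h, v in zip(thickness, vs):
        dt = h / v
        if t + dt >= t_target:                    # target depth lies in this layer
            z += (t_target - t) * v
            return z / t_target
        t += dt
        z += h
    return z / t                                  # profile exhausted (very low frequency)

for f in (0.5, 1.0, 5.0, 10.0):
    print(f"f = {f:5.1f} Hz  ->  quarter-wavelength velocity = {qwl_velocity(f):7.1f} m/s")
```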

  9. A Poisson hierarchical modelling approach to detecting copy number variation in sequence coverage data

    PubMed Central

    2013-01-01

    Background The advent of next generation sequencing technology has accelerated efforts to map and catalogue copy number variation (CNV) in genomes of important micro-organisms for public health. A typical analysis of the sequence data involves mapping reads onto a reference genome, calculating the respective coverage, and detecting regions with too-low or too-high coverage (deletions and amplifications, respectively). Current CNV detection methods rely on statistical assumptions (e.g., a Poisson model) that may not hold in general, or require fine-tuning the underlying algorithms to detect known hits. We propose a new CNV detection methodology based on two Poisson hierarchical models, the Poisson-Gamma and Poisson-Lognormal, with the advantage of being sufficiently flexible to describe different data patterns, whilst robust against deviations from the often assumed Poisson model. Results Using sequence coverage data of 7 Plasmodium falciparum malaria genomes (3D7 reference strain, HB3, DD2, 7G8, GB4, OX005, and OX006), we showed that empirical coverage distributions are intrinsically asymmetric and overdispersed in relation to the Poisson model. We also demonstrated a low baseline false positive rate for the proposed methodology using 3D7 resequencing data and simulation. When applied to the non-reference isolate data, our approach detected known CNV hits, including an amplification of the PfMDR1 locus in DD2 and a large deletion in the CLAG3.2 gene in GB4, and putative novel CNV regions. When compared to the recently available FREEC and cn.MOPS approaches, our findings were more concordant with putative hits from the highest quality array data for the 7G8 and GB4 isolates. Conclusions In summary, the proposed methodology brings an increase in flexibility, robustness, accuracy and statistical rigour to CNV detection using sequence coverage data. PMID:23442253
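
    A toy version of the overdispersion argument can be sketched with an explicit negative binomial (Poisson-Gamma) null model. The snippet below is an illustration only, not the paper's inference procedure: coverage values are simulated and the fit is a simple method-of-moments estimate.

```python
# Minimal sketch: fit an overdispersed (negative binomial) null model to window
# coverage and flag windows falling outside its central quantiles as putative
# deletions or amplifications.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
coverage = rng.negative_binomial(n=20, p=20 / (20 + 100), size=1000)   # mean ~100, overdispersed
coverage[400:420] = rng.poisson(35, 20)       # simulated deletion
coverage[700:710] = rng.poisson(260, 10)      # simulated amplification

mean, var = coverage.mean(), coverage.var()
p_hat = mean / var                            # method-of-moments NB fit (requires var > mean)
n_hat = mean * p_hat / (1.0 - p_hat)

lo = stats.nbinom.ppf(0.001, n_hat, p_hat)    # lower/upper null quantiles
hi = stats.nbinom.ppf(0.999, n_hat, p_hat)
deletions = np.flatnonzero(coverage < lo)
amplifications = np.flatnonzero(coverage > hi)
print("putative deletion windows:", deletions.size, "putative amplification windows:", amplifications.size)
```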

  10. Construction of a pulse-coupled dipole network capable of fear-like and relief-like responses

    NASA Astrophysics Data System (ADS)

    Lungsi Sharma, B.

    2016-07-01

    The challenge for neuroscience as an interdisciplinary programme is the integration of ideas among the disciplines to achieve a common goal. This paper deals with the problem of deriving a pulse-coupled neural network that is capable of demonstrating behavioural responses (fear-like and relief-like). Current pulse-coupled neural networks are designed mostly for engineering applications, particularly image processing. The network presented here was constructed using the method of minimal anatomies. The behavioural response of a level-coded, activity-based model was used as a reference. Although the spiking-based model and the activity-based model operate at different scales, the use of the model-reference principle means that what is referenced is the model's functional properties. It is demonstrated that this strategy of dissection and systematic construction is effective in the functional design of a pulse-coupled neural network system with nonlinear signalling. The differential equations for the elastic weights in the reference model are replicated geometrically in the pulse-coupled network. The network reflects a possible solution to the problem of punishment and avoidance. The network developed in this work is a new network topology for pulse-coupled neural networks. Therefore, the model-reference principle is a powerful tool in connecting neuroscience disciplines. The continuity of concepts and phenomena is further maintained by systematic construction using methods like the method of minimal anatomies.

  11. A One-System Theory Which is Not Propositional.

    PubMed

    Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R

    2009-04-01

    We argue that the propositional and link-based approaches to human contingency learning represent different levels of analysis because propositional reasoning requires a basis, which is plausibly provided by a link-based architecture. Moreover, in their attempt to compare two general classes of models (link-based and propositional), Mitchell et al. have referred to only two generic models and ignored the large variety of different models within each class.

  12. Simplified models vs. effective field theory approaches in dark matter searches

    NASA Astrophysics Data System (ADS)

    De Simone, Andrea; Jacques, Thomas

    2016-07-01

    In this review we discuss and compare the usage of simplified models and Effective Field Theory (EFT) approaches in dark matter searches. We provide a state of the art description on the subject of EFTs and simplified models, especially in the context of collider searches for dark matter, but also with implications for direct and indirect detection searches, with the aim of constituting a common language for future comparisons between different strategies. The material is presented in a form that is as self-contained as possible, so that it may serve as an introductory review for the newcomer as well as a reference guide for the practitioner.

  13. A Kalman filter approach for the determination of celestial reference frames

    NASA Astrophysics Data System (ADS)

    Soja, Benedikt; Gross, Richard; Jacobs, Christopher; Chin, Toshio; Karbon, Maria; Nilsson, Tobias; Heinkelmann, Robert; Schuh, Harald

    2017-04-01

    The coordinate model of radio sources in International Celestial Reference Frames (ICRF), such as the ICRF2, has traditionally been a constant offset. While this is sufficient for a large part of the radio sources considering current accuracy requirements, several sources exhibit significant temporal coordinate variations. In particular, the group of the so-called special handling sources is characterized by large fluctuations in the source positions. For these sources, and for several from the "others" category of radio sources, a coordinate model that goes beyond a constant offset would be beneficial. However, due to the sheer number of radio sources in catalogs like the ICRF2, and even more so with the upcoming ICRF3, it is difficult to find the most appropriate coordinate model for every single radio source. For this reason, we have developed a time series approach to the determination of celestial reference frames (CRF). We feed the radio source coordinates derived from single very long baseline interferometry (VLBI) sessions sequentially into a Kalman filter and smoother, retaining their full covariances. The estimation of the source coordinates is carried out with a temporal resolution identical to that of the input data, i.e. usually 1-4 days. The coordinates are assumed to behave like random walk processes, an assumption which has already successfully been made for the determination of terrestrial reference frames such as the JTRF2014. To be able to apply the most suitable process noise value to every single radio source, their statistical properties are analyzed by computing their Allan standard deviations (ADEV). In addition to the determination of process noise values, the ADEV allows conclusions to be drawn as to whether the variations in certain radio source positions deviate significantly from random walk processes. Our investigations also deal with other means of source characterization, such as the structure index, in order to derive a suitable process noise model. The Kalman filter CRFs resulting from the different approaches are compared with each other, with the original radio source position time series, and with a traditional CRF solution in which the constant source positions are estimated in a global least squares adjustment.
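
    The core of this approach, a Kalman filter driven by a random-walk process model, fits in a few lines. The sketch below is a generic scalar illustration with made-up noise levels, not the authors' VLBI analysis.

```python
# Minimal sketch of a Kalman filter with a random-walk state model: the state is
# a single coordinate, the process noise sets how much it may wander between
# sessions, and each session supplies one noisy measurement.
import numpy as np

rng = np.random.default_rng(3)
n_sessions = 100
q = 0.02 ** 2        # process noise variance per session (random-walk driving noise)
r = 0.10 ** 2        # measurement noise variance (session position uncertainty)

truth = np.cumsum(rng.normal(0.0, np.sqrt(q), n_sessions))   # simulated wandering coordinate
obs = truth + rng.normal(0.0, np.sqrt(r), n_sessions)        # session estimates

x, p = obs[0], r                    # initialise with the first session
filtered = [x]
for z in obs[1:]:
    p = p + q                       # predict: random walk inflates the variance
    k = p / (p + r)                 # Kalman gain
    x = x + k * (z - x)             # update with the new session measurement
    p = (1.0 - k) * p
    filtered.append(x)

filtered = np.array(filtered)
print("RMS error, raw sessions:", np.sqrt(np.mean((obs - truth) ** 2)).round(4))
print("RMS error, filtered    :", np.sqrt(np.mean((filtered - truth) ** 2)).round(4))
```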

  14. Indoor positioning using differential Wi-Fi lateration

    NASA Astrophysics Data System (ADS)

    Retscher, Guenther; Tatschl, Thomas

    2017-12-01

    For Wi-Fi positioning, location fingerprinting or (tri)lateration is usually employed, whereby the received signal strengths (RSSs) of the surrounding Wi-Fi Access Points (APs) are scanned on the mobile device and used to perform localization. Within the scope of this study, the position of a mobile user is determined on the basis of lateration. Two new differential approaches are developed and compared to two common models, i.e., the one-slope and multi-wall models, for the conversion of the measured RSS of the Wi-Fi signals into ranges. The two novel methods are termed DWi-Fi, as they are derived from either the well-known DGPS or the VLBI positioning principle. They make use of a network of reference stations deployed in the area of interest. From continuous RSS observations at these reference stations, correction parameters are derived and applied by the user in real time. This approach leads to a reduced influence of temporal and spatial variations and various propagation effects on the positioning result. In practical use cases conducted in a multi-storey office building with three different smartphones, it is shown that the two DWi-Fi approaches outperform the common models, as static positioning yielded position errors of about 5 m on average under good spatial conditions.
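
    The processing chain of range conversion, differential correction and lateration can be illustrated end to end. The sketch below is one plausible reading of the differential idea, not the authors' implementation; the AP layout, path-loss parameters and common-mode bias are all invented for the example.

```python
# Minimal sketch: RSS is converted to range with the one-slope model; a
# reference station at a known position observes, per AP, how far the measured
# RSS deviates from the model prediction, and the user applies that correction
# before nonlinear least-squares lateration.
import numpy as np
from scipy.optimize import least_squares

p0, n_exp = -40.0, 2.5                           # one-slope model: RSS at 1 m, path-loss exponent
model_rss = lambda d: p0 - 10 * n_exp * np.log10(d)
rss_to_range = lambda rss: 10 ** ((p0 - rss) / (10 * n_exp))

aps = np.array([[0.0, 0.0], [20.0, 0.0], [0.0, 15.0], [20.0, 15.0]])   # AP positions (m)
ref_station = np.array([5.0, 5.0])               # reference station with known coordinates
user_true = np.array([12.0, 9.0])

bias = -4.0                                      # common-mode deviation (e.g. extra attenuation)
measure = lambda pos: model_rss(np.linalg.norm(aps - pos, axis=1)) + bias
rss_ref, rss_user = measure(ref_station), measure(user_true)

# Differential correction per AP: model-predicted RSS at the reference station
# minus the RSS actually observed there.
corr = model_rss(np.linalg.norm(aps - ref_station, axis=1)) - rss_ref
ranges = rss_to_range(rss_user + corr)           # corrected ranges for the user

residual = lambda p: np.linalg.norm(aps - p, axis=1) - ranges
est = least_squares(residual, x0=np.array([10.0, 7.5])).x
print("estimated user position:", np.round(est, 2), "true position:", user_true)
```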

  15. A global reference for caesarean section rates (C-Model): a multicountry cross-sectional study.

    PubMed

    Souza, J P; Betran, A P; Dumont, A; de Mucio, B; Gibbs Pickens, C M; Deneux-Tharaux, C; Ortiz-Panozo, E; Sullivan, E; Ota, E; Togoobaatar, G; Carroli, G; Knight, H; Zhang, J; Cecatti, J G; Vogel, J P; Jayaratne, K; Leal, M C; Gissler, M; Morisaki, N; Lack, N; Oladapo, O T; Tunçalp, Ö; Lumbiganon, P; Mori, R; Quintana, S; Costa Passos, A D; Marcolin, A C; Zongo, A; Blondel, B; Hernández, B; Hogue, C J; Prunet, C; Landman, C; Ochir, C; Cuesta, C; Pileggi-Castro, C; Walker, D; Alves, D; Abalos, E; Moises, Ecd; Vieira, E M; Duarte, G; Perdona, G; Gurol-Urganci, I; Takahiko, K; Moscovici, L; Campodonico, L; Oliveira-Ciabati, L; Laopaiboon, M; Danansuriya, M; Nakamura-Pereira, M; Costa, M L; Torloni, M R; Kramer, M R; Borges, P; Olkhanud, P B; Pérez-Cuevas, R; Agampodi, S B; Mittal, S; Serruya, S; Bataglia, V; Li, Z; Temmerman, M; Gülmezoglu, A M

    2016-02-01

    Objective: To generate a global reference for caesarean section (CS) rates at health facilities. Design: Cross-sectional study. Setting: Health facilities from 43 countries. Population: Thirty-eight thousand three hundred and twenty-four women giving birth from 22 countries for model building and 10,045,875 women giving birth from 43 countries for model testing. Methods: We hypothesised that mathematical models could determine the relationship between clinical-obstetric characteristics and CS. These models generated probabilities of CS that could be compared with the observed CS rates. We devised a three-step approach to generate the global benchmark of CS rates at health facilities: creation of a multi-country reference population, building mathematical models, and testing these models. Main outcome measures: Area under the ROC curves, diagnostic odds ratio, expected CS rate, observed CS rate. Results: According to the different versions of the model, areas under the ROC curves suggested a good discriminatory capacity of C-Model, with summary estimates ranging from 0.832 to 0.844. The C-Model was able to generate expected CS rates adjusted for the case-mix of the obstetric population. We have also prepared an e-calculator to facilitate use of C-Model (www.who.int/reproductivehealth/publications/maternal_perinatal_health/c-model/en/). Conclusions: This article describes the development of a global reference for CS rates. Based on maternal characteristics, this tool was able to generate an individualised expected CS rate for health facilities or groups of health facilities. With C-Model, obstetric teams, health system managers, health facilities, health insurance companies, and governments can produce a customised reference CS rate for assessing use (and overuse) of CS. The C-Model provides a customized benchmark for caesarean section rates in health facilities and systems. © 2015 World Health Organization; licensed by John Wiley & Sons Ltd on behalf of Royal College of Obstetricians and Gynaecologists.

  16. Spectral multivariate calibration without laboratory prepared or determined reference analyte values.

    PubMed

    Ottaway, Josh; Farrell, Jeremy A; Kalivas, John H

    2013-02-05

    An essential part of calibration is establishing the analyte calibration reference samples. These samples must characterize the sample matrix and measurement conditions (chemical, physical, instrumental, and environmental) of any sample to be predicted. Calibration usually requires measuring spectra for numerous reference samples in addition to determining the corresponding analyte reference values. Both tasks are typically time-consuming and costly. This paper reports on a method named pure component Tikhonov regularization (PCTR) that does not require laboratory-prepared or laboratory-determined reference values. Instead, an analyte pure component spectrum is used in conjunction with nonanalyte spectra for calibration. Nonanalyte spectra can be from different sources, including pure component interference samples, blanks, and constant analyte samples. The approach is also applicable to calibration maintenance when the analyte pure component spectrum is measured in one set of conditions and nonanalyte spectra are measured in new conditions. The PCTR method balances the trade-offs between calibration model shrinkage and the degree of orthogonality to the nonanalyte content (model direction) in order to obtain accurate predictions. Using visible and near-infrared (NIR) spectral data sets, the PCTR results are comparable to those obtained using ridge regression (RR) with reference calibration sets. The flexibility of PCTR also allows including reference samples if such samples are available.

  17. An incompressible fluid flow model with mutual information for MR image registration

    NASA Astrophysics Data System (ADS)

    Tsai, Leo; Chang, Herng-Hua

    2013-03-01

    Image registration is one of the fundamental and essential tasks within image processing. It is a process of determining the correspondence between structures in two images, which are called the template image and the reference image, respectively. The challenge of registration is to find an optimal geometric transformation between corresponding image data. This paper develops a new MR image registration algorithm that uses a closed incompressible viscous fluid model associated with mutual information. In our approach, we treat the image pixels as the fluid elements of a viscous fluid flow governed by the nonlinear Navier-Stokes partial differential equation (PDE). We replace the pressure term with the body force mainly used to guide the transformation with a weighting coefficient, which is expressed by the mutual information between the template and reference images. To solve this modified Navier-Stokes PDE, we adopted the fast numerical techniques proposed by Seibold. The registration process of updating the body force, the velocity and deformation fields is repeated until the mutual information weight reaches a prescribed threshold. We applied our approach to the BrainWeb and real MR images. Consistent with the theory of the proposed fluid model, we found that our method accurately transformed the template images into the reference images based on the intensity flow. Experimental results indicate that our method has potential for a wide variety of medical image registration applications.
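
    The mutual information weight that drives the body force can be estimated from the joint intensity histogram of the two images. The snippet below is a minimal, generic MI estimator of this kind; the bin count and normalisation are assumptions, and it is not the paper's implementation.

    ```python
    import numpy as np

    def mutual_information(img_a, img_b, bins=32):
        """Mutual information from the joint intensity histogram of two images
        (a simplified stand-in for the weighting term described in the abstract)."""
        hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pxy = hist / hist.sum()                 # joint probability estimate
        px = pxy.sum(axis=1, keepdims=True)     # marginal of image A
        py = pxy.sum(axis=0, keepdims=True)     # marginal of image B
        nz = pxy > 0                            # avoid log(0)
        return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
    ```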

  18. Towards inverse modeling of turbidity currents: The inverse lock-exchange problem

    NASA Astrophysics Data System (ADS)

    Lesshafft, Lutz; Meiburg, Eckart; Kneller, Ben; Marsden, Alison

    2011-04-01

    A new approach is introduced for turbidite modeling, leveraging the potential of computational fluid dynamics methods to simulate the flow processes that led to turbidite formation. The practical use of numerical flow simulation for the purpose of turbidite modeling has so far been hindered by the need to specify parameters and initial flow conditions that are a priori unknown. The present study proposes a method to determine optimal simulation parameters via an automated optimization process. An iterative procedure matches deposit predictions from successive flow simulations against available localized reference data, as in practice may be obtained from well logs, and aims at convergence towards the best-fit scenario. The final result is a prediction of the entire deposit thickness and local grain size distribution. The optimization strategy is based on a derivative-free, surrogate-based technique. Direct numerical simulations are performed to compute the flow dynamics. A proof of concept is successfully conducted for the simple test case of a two-dimensional lock-exchange turbidity current. The optimization approach is demonstrated to accurately retrieve the initial conditions used in a reference calculation.

  19. Technical Report: Algorithm and Implementation for Quasispecies Abundance Inference with Confidence Intervals from Metagenomic Sequence Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McLoughlin, Kevin

    2016-01-11

    This report describes the design and implementation of an algorithm for estimating relative microbial abundances, together with confidence limits, using data from metagenomic DNA sequencing. For the background behind this project and a detailed discussion of our modeling approach for metagenomic data, we refer the reader to our earlier technical report, dated March 4, 2014. Briefly, we described a fully Bayesian generative model for paired-end sequence read data, incorporating the effects of the relative abundances, the distribution of sequence fragment lengths, fragment position bias, sequencing errors and variations between the sampled genomes and the nearest reference genomes. A distinctive feature of our modeling approach is the use of a Chinese restaurant process (CRP) to describe the selection of genomes to be sampled, and thus the relative abundances. The CRP component is desirable for fitting abundances to reads that may map ambiguously to multiple targets, because it naturally leads to sparse solutions that select the best representative from each set of nearly equivalent genomes.
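
    The sparsity-inducing behaviour of the CRP prior comes from its "rich-get-richer" seating rule. The sketch below samples assignments from a plain CRP with concentration parameter alpha; it illustrates the prior itself, not the report's full Bayesian model.

    ```python
    import random

    def crp_assignments(n_reads, alpha=1.0, seed=0):
        """Chinese restaurant process: each read joins an existing genome 'table'
        with probability proportional to its current count, or opens a new table
        with probability proportional to alpha."""
        rng = random.Random(seed)
        counts = []                       # reads assigned to each genome so far
        assignments = []
        for i in range(n_reads):
            weights = counts + [alpha]    # existing tables plus a new one
            r = rng.uniform(0, i + alpha) # total mass is i + alpha
            k, acc = 0, 0.0
            for k, w in enumerate(weights):
                acc += w
                if r <= acc:
                    break
            if k == len(counts):
                counts.append(1)          # a new genome is selected
            else:
                counts[k] += 1
            assignments.append(k)
        return assignments, counts
    ```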

  20. A cloud-based approach for interoperable electronic health records (EHRs).

    PubMed

    Bahga, Arshdeep; Madisetti, Vijay K

    2013-09-01

    We present a cloud-based approach for the design of interoperable electronic health record (EHR) systems. Cloud computing environments provide several benefits to all the stakeholders in the healthcare ecosystem (patients, providers, payers, etc.). Lack of data interoperability standards and solutions has been a major obstacle in the exchange of healthcare data between different stakeholders. We propose an EHR system - cloud health information systems technology architecture (CHISTAR) - that achieves semantic interoperability through the use of a generic design methodology, which uses a reference model that defines a general-purpose set of data structures and an archetype model that defines the clinical data attributes. CHISTAR application components are designed using the cloud component model approach that comprises loosely coupled components that communicate asynchronously. In this paper, we describe the high-level design of CHISTAR and the approaches for semantic interoperability, data integration, and security.

  1. Deriving video content type from HEVC bitstream semantics

    NASA Astrophysics Data System (ADS)

    Nightingale, James; Wang, Qi; Grecos, Christos; Goma, Sergio R.

    2014-05-01

    As network service providers seek to improve customer satisfaction and retention levels, they are increasingly moving from traditional quality of service (QoS) driven delivery models to customer-centred quality of experience (QoE) delivery models. QoS models only consider metrics derived from the network; however, QoE models also consider metrics derived from within the video sequence itself. Various spatial and temporal characteristics of a video sequence have been proposed, both individually and in combination, to derive methods of classifying video content either on a continuous scale or as a set of discrete classes. QoE models can be divided into three broad categories: full reference, reduced reference and no-reference models. Due to the need to have the original video available at the client for comparison, full reference metrics are of limited practical value in adaptive real-time video applications. Reduced reference metrics often require metadata to be transmitted with the bitstream, while no-reference metrics typically operate in the decompressed domain at the client side and require significant processing to extract spatial and temporal features. This paper proposes a heuristic, no-reference approach to video content classification which is specific to HEVC encoded bitstreams. The HEVC encoder already makes use of spatial characteristics to determine partitioning of coding units and temporal characteristics to determine the splitting of prediction units. We derive a function which approximates the spatio-temporal characteristics of the video sequence by using the weighted averages of the depth at which the coding unit quadtree is split and the prediction mode decision made by the encoder to estimate spatial and temporal characteristics respectively. Since the video content type of a sequence is determined by using high level information parsed from the video stream, spatio-temporal characteristics are identified without the need for full decoding and can be used in a timely manner to aid decision making in QoE oriented adaptive real time streaming.
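
    A minimal sketch of the kind of spatial proxy described above: the per-frame, area-weighted average of coding-unit split depths stands in for spatial complexity, and an analogous weighted share of prediction modes can stand in for temporal complexity. The weighting and variable names are assumptions, not the paper's exact function.

    ```python
    import numpy as np

    def weighted_mean_depth(cu_depths, cu_areas):
        """Area-weighted average coding-unit quadtree split depth over a frame,
        used here as a proxy for spatial complexity (illustrative only)."""
        depths = np.asarray(cu_depths, dtype=float)
        areas = np.asarray(cu_areas, dtype=float)
        return float((depths * areas).sum() / areas.sum())

    # Analogously, an area-weighted share of inter/skip prediction-unit modes per
    # frame can serve as the temporal-complexity proxy, both parsed from the
    # bitstream without full decoding.
    ```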

  2. Applicability of the polynomial chaos expansion method for personalization of a cardiovascular pulse wave propagation model.

    PubMed

    Huberts, W; Donders, W P; Delhaas, T; van de Vosse, F N

    2014-12-01

    Patient-specific modeling requires model personalization, which can be achieved in an efficient manner by parameter fixing and parameter prioritization. An efficient variance-based method uses generalized polynomial chaos expansion (gPCE), but it has not been applied in the context of model personalization, nor has it ever been compared with standard variance-based methods for models with many parameters. In this work, we apply the gPCE method to a previously reported pulse wave propagation model and compare the conclusions for model personalization with those of a reference analysis performed with Saltelli's efficient Monte Carlo method. We furthermore differentiate two approaches for obtaining the expansion coefficients: one based on spectral projection (gPCE-P) and one based on least squares regression (gPCE-R). It was found that in general the gPCE yields conclusions similar to those of the reference analysis but at much lower cost, as long as the polynomial metamodel does not contain unnecessary high-order terms. Furthermore, the gPCE-R approach generally yielded better results than gPCE-P. The weak performance of the gPCE-P can be attributed to the assessment of the expansion coefficients using the Smolyak algorithm, which might be hampered by the high number of model parameters and/or by possible non-smoothness in the output space. Copyright © 2014 John Wiley & Sons, Ltd.
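
    The regression route (gPCE-R) amounts to fitting an orthonormal polynomial basis to model evaluations by least squares and reading variance contributions off the coefficients. The sketch below assumes independent parameters rescaled to uniform distributions on [-1, 1] and a total-degree Legendre basis; it illustrates the general technique, not the pulse wave propagation model used in the paper.

    ```python
    import numpy as np
    from numpy.polynomial import legendre as npleg
    from itertools import product

    def legendre_norm(x, n):
        """Orthonormal Legendre polynomial of degree n on [-1, 1] (uniform weight)."""
        c = np.zeros(n + 1)
        c[n] = 1.0
        return np.sqrt(2 * n + 1) * npleg.legval(x, c)

    def gpce_regression(samples, outputs, degree=2):
        """Fit a total-degree gPCE metamodel by least squares (gPCE-R style) and
        return coefficients plus first-order Sobol-type sensitivity indices."""
        n, d = samples.shape
        multi_idx = [m for m in product(range(degree + 1), repeat=d) if sum(m) <= degree]
        Psi = np.column_stack([
            np.prod([legendre_norm(samples[:, j], m[j]) for j in range(d)], axis=0)
            for m in multi_idx
        ])
        coefs, *_ = np.linalg.lstsq(Psi, outputs, rcond=None)
        var_total = np.sum(coefs[1:] ** 2)           # constant term excluded
        sobol_first = [
            np.sum([c ** 2 for m, c in zip(multi_idx, coefs)
                    if m[i] > 0 and sum(m) == m[i]]) / var_total
            for i in range(d)
        ]
        return coefs, sobol_first
    ```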

  3. Anchor Selection Strategies for DIF Analysis: Review, Assessment, and New Approaches

    ERIC Educational Resources Information Center

    Kopf, Julia; Zeileis, Achim; Strobl, Carolin

    2015-01-01

    Differential item functioning (DIF) indicates the violation of the invariance assumption, for instance, in models based on item response theory (IRT). For item-wise DIF analysis using IRT, a common metric for the item parameters of the groups that are to be compared (e.g., for the reference and the focal group) is necessary. In the Rasch model,…

  4. Prospective Elementary Teachers' Perceptions of the Processes of Modeling: A Case Study

    ERIC Educational Resources Information Center

    Fazio, Claudio; Di Paola, Benedetto; Guastella, Ivan

    2012-01-01

    In this paper we discuss a study on the approaches to modeling of students of the 4-year elementary school teacher program at the University of Palermo, Italy. The answers to a specially designed questionnaire are analyzed on the basis of an "a priori" analysis made using a general scheme of reference on the epistemology of mathematics…

  5. Deformation integrity monitoring for GNSS positioning services including local, regional and large scale hazard monitoring - the Karlsruhe approach and software(MONIKA)

    NASA Astrophysics Data System (ADS)

    Jaeger, R.

    2007-05-01

    GNSS-positioning services like SAPOS/ascos in Germany, and many others in Europe, America and worldwide, typically achieve within a short time interdisciplinary, country-wide use for precise geo-referencing, replacing traditional low-order geodetic networks. It therefore becomes necessary to detect possible changes of the reference stations' coordinates ad hoc. The GNSS-reference-station MONitoring by the KArlsruhe approach and software (MONIKA) is designed for that task. The developments at Karlsruhe University of Applied Sciences, in cooperation with the State Survey of Baden-Württemberg, are further motivated by the official 2006 resolution of the German state survey departments' association (Arbeitsgemeinschaft der Vermessungsverwaltungen Deutschland, AdV) on coordinate monitoring as a quality-control duty of the GNSS-positioning service provider. The presented approach can - besides the coordinate control of GNSS-positioning services - also be used to set up any GNSS service for the tasks of an area-wide geodynamic and natural disaster-prevention service. The mathematical model of the approach, which enables a multivariate and multi-epochal design, takes the GNSS observations of the service's RINEX data as input, followed by fully automatic processing of baselines and/or sessions and a near-online set-up of epoch-state vectors and their covariance matrices in a rigorous 3D network adjustment. In large-scale and long-term monitoring situations, geodynamic standard trends (datum drift, plate movements, etc.) are accordingly considered and included in the mathematical model of MONIKA. The coordinate-based deformation monitoring, as the third step of the stepwise adjustments, is based on the above epoch-state vectors and - after splitting off geodynamic trends - on multivariate and multi-epochal congruency testing. As long as no other information exists, all points are assumed to be stable and congruent reference points. Stations that are a priori assumed to be moving - in this way local monitoring areas can be included - are monitored and analyzed relative to the stable reference points. In this way, a high sensitivity for the detection of GNSS station displacements can be achieved, both for points assumed stable and for a priori moving points. The results of the concept are demonstrated with a monitoring example using the MONIKA software in the 300 x 300 km area of the state of Baden-Württemberg, Germany.

  6. Multicomponent quantitative spectroscopic analysis without reference substances based on ICA modelling.

    PubMed

    Monakhova, Yulia B; Mushtakova, Svetlana P

    2017-05-01

    A fast and reliable spectroscopic method for multicomponent quantitative analysis of targeted compounds with overlapping signals in complex mixtures has been established. The innovative analytical approach is based on the preliminary chemometric extraction of qualitative and quantitative information from UV-vis and IR spectral profiles of a calibration system using independent component analysis (ICA). Using this quantitative model and ICA resolution results of spectral profiling of "unknown" model mixtures, the absolute analyte concentrations in multicomponent mixtures and authentic samples were then calculated without reference solutions. Good recoveries generally between 95% and 105% were obtained. The method can be applied to any spectroscopic data that obey the Beer-Lambert-Bouguer law. The proposed method was tested on analysis of vitamins and caffeine in energy drinks and aromatic hydrocarbons in motor fuel with 10% error. The results demonstrated that the proposed method is a promising tool for rapid simultaneous multicomponent analysis in the case of spectral overlap and the absence/inaccessibility of reference materials.
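
    A heavily simplified sketch of the quantification idea: ICA resolves the calibration spectra into components, the mixing coefficients are scaled against known calibration concentrations, and the scaled coefficients are then read off for unknown mixtures. It assumes the resolved components have already been matched one-to-one to analytes; names and shapes are hypothetical, and this is not the authors' full procedure.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    def ica_quantify(calib_spectra, calib_conc, test_spectra, n_components):
        """Resolve mixture spectra with ICA, scale the mixing coefficients against
        known calibration concentrations, then predict concentrations for unknown
        mixtures without reference solutions for the test set (sketch only)."""
        ica = FastICA(n_components=n_components, random_state=0)
        ica.fit(calib_spectra)                       # rows = spectra, columns = wavelengths
        A_calib = ica.transform(calib_spectra)       # per-sample component weights (calibration)
        A_test = ica.transform(test_spectra)         # per-sample component weights (unknowns)
        # One scale factor per analyte: concentration ~ scale * mixing coefficient,
        # assuming component j has been matched to analyte j.
        scale = np.array([
            np.linalg.lstsq(A_calib[:, [j]], calib_conc[:, j], rcond=None)[0][0]
            for j in range(calib_conc.shape[1])
        ])
        return A_test * scale
    ```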

  7. A view to the future of natural gas and electricity: An integrated modeling approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cole, Wesley J.; Medlock, Kenneth B.; Jani, Aditya

    This paper demonstrates the value of integrating two highly spatially resolved models: the Rice World Gas Trade Model (RWGTM) of the natural gas sector and the Regional Energy Deployment System (ReEDS) model of the U.S. electricity sector. The RWGTM passes electricity-sector natural gas prices to the ReEDS model, while the ReEDS model returns electricity-sector natural gas demand to the RWGTM. The two models successfully converge to a solution under reference scenario conditions. We present electricity-sector and natural gas sector evolution using the integrated models for this reference scenario. This paper demonstrates that the integrated models produced similar national-level results as when running in a stand-alone form, but that regional and state-level results can vary considerably. As we highlight, these regional differences have potentially significant implications for electric sector planners especially in the wake of substantive policy changes for the sector (e.g., the Clean Power Plan).

  8. A view to the future of natural gas and electricity: An integrated modeling approach

    DOE PAGES

    Cole, Wesley J.; Medlock, Kenneth B.; Jani, Aditya

    2016-03-17

    This paper demonstrates the value of integrating two highly spatially resolved models: the Rice World Gas Trade Model (RWGTM) of the natural gas sector and the Regional Energy Deployment System (ReEDS) model of the U.S. electricity sector. The RWGTM passes electricity-sector natural gas prices to the ReEDS model, while the ReEDS model returns electricity-sector natural gas demand to the RWGTM. The two models successfully converge to a solution under reference scenario conditions. We present electricity-sector and natural gas sector evolution using the integrated models for this reference scenario. This paper demonstrates that the integrated models produced similar national-level results as when running in a stand-alone form, but that regional and state-level results can vary considerably. As we highlight, these regional differences have potentially significant implications for electric sector planners especially in the wake of substantive policy changes for the sector (e.g., the Clean Power Plan).

  9. Survey of Anomaly Detection Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ng, B

    This survey defines the problem of anomaly detection and provides an overview of existing methods. The methods are categorized into two general classes: generative and discriminative. A generative approach involves building a model that represents the joint distribution of the input features and the output labels of system behavior (e.g., normal or anomalous) and then applies the model to formulate a decision rule for detecting anomalies. On the other hand, a discriminative approach aims directly to find the decision rule, with the smallest error rate, that distinguishes between normal and anomalous behavior. For each approach, we give an overview of popular techniques and provide references to state-of-the-art applications.
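
    As a concrete instance of the generative class, one can fit a simple distribution to normal data and flag low-likelihood points. The sketch below uses a single multivariate Gaussian with a negative log-likelihood score; it is only an illustration of the category, not a method endorsed by the survey.

    ```python
    import numpy as np

    def fit_gaussian(X):
        """Generative step: model normal behaviour with a multivariate Gaussian."""
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularised covariance
        return mu, cov

    def anomaly_scores(X, mu, cov):
        """Decision rule: score points by negative log-likelihood under the normal
        model; points above a chosen threshold are flagged as anomalous."""
        diff = X - mu
        inv = np.linalg.inv(cov)
        mahal = np.einsum('ij,jk,ik->i', diff, inv, diff)   # squared Mahalanobis distance
        logdet = np.linalg.slogdet(cov)[1]
        d = X.shape[1]
        return 0.5 * (mahal + logdet + d * np.log(2 * np.pi))
    ```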

  10. Modeling Cross-Situational Word–Referent Learning: Prior Questions

    PubMed Central

    Yu, Chen; Smith, Linda B.

    2013-01-01

    Both adults and young children possess powerful statistical computation capabilities—they can infer the referent of a word from highly ambiguous contexts involving many words and many referents by aggregating cross-situational statistical information across contexts. This ability has been explained by models of hypothesis testing and by models of associative learning. This article describes a series of simulation studies and analyses designed to understand the different learning mechanisms posited by the 2 classes of models and their relation to each other. Variants of a hypothesis-testing model and a simple or dumb associative mechanism were examined under different specifications of information selection, computation, and decision. Critically, these 3 components of the models interact in complex ways. The models illustrate a fundamental tradeoff between amount of data input and powerful computations: With the selection of more information, dumb associative models can mimic the powerful learning that is accomplished by hypothesis-testing models with fewer data. However, because of the interactions among the component parts of the models, the associative model can mimic various hypothesis-testing models, producing the same learning patterns but through different internal components. The simulations argue for the importance of a compositional approach to human statistical learning: the experimental decomposition of the processes that contribute to statistical learning in human learners and models with the internal components that can be evaluated independently and together. PMID:22229490
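
    The "dumb" associative end of the model space can be illustrated with a bare co-occurrence counter: word-referent counts are accumulated across ambiguous situations, and the strongest associate wins. This is a toy variant for illustration, not the parameterised models analysed in the article.

    ```python
    from collections import defaultdict

    def associative_learner(situations):
        """Accumulate word-referent co-occurrence counts across ambiguous
        situations; the referent of a word is taken to be its highest-count
        associate (a toy 'dumb' associative mechanism)."""
        counts = defaultdict(lambda: defaultdict(int))
        for words, referents in situations:      # each situation pairs several words
            for w in words:                      # with several candidate referents
                for r in referents:
                    counts[w][r] += 1
        return {w: max(assoc, key=assoc.get) for w, assoc in counts.items()}

    # Example: "ball" maps to BALL because it co-occurred with BALL in both
    # situations but with DOG and CUP only once each.
    lexicon = associative_learner([
        (["ball", "dog"], ["BALL", "DOG"]),
        (["ball", "cup"], ["BALL", "CUP"]),
    ])
    ```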

  11. Robust Path Planning and Feedback Design Under Stochastic Uncertainty

    NASA Technical Reports Server (NTRS)

    Blackmore, Lars

    2008-01-01

    Autonomous vehicles require optimal path planning algorithms to achieve mission goals while avoiding obstacles and being robust to uncertainties. The uncertainties arise from exogenous disturbances, modeling errors, and sensor noise, which can be characterized via stochastic models. Previous work defined a notion of robustness in a stochastic setting by using the concept of chance constraints. This requires that mission constraint violation can occur with a probability less than a prescribed value. In this paper we describe a novel method for optimal chance constrained path planning with feedback design. The approach optimizes both the reference trajectory to be followed and the feedback controller used to reject uncertainty. Our method extends recent results in constrained control synthesis based on convex optimization to solve control problems with nonconvex constraints. This extension is essential for path planning problems, which inherently have nonconvex obstacle avoidance constraints. Unlike previous approaches to chance constrained path planning, the new approach optimizes the feedback gain as well as the reference trajectory. The key idea is to couple a fast, nonconvex solver that does not take into account uncertainty, with existing robust approaches that apply only to convex feasible regions. By alternating between robust and nonrobust solutions, the new algorithm guarantees convergence to a global optimum. We apply the new method to an unmanned aircraft and show simulation results that demonstrate the efficacy of the approach.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Byerly, Benjamin L.; Stanley, Floyd; Spencer, Khal

    In our study, a certified plutonium metal reference material (CRM 126) with a known production history is examined using analytical methods that are commonly employed in nuclear forensics for provenancing and attribution. Moreover, the measured plutonium isotopic composition and actinide assay are consistent with values reported on the reference material certificate. Model ages from U/Pu and Am/Pu chronometers agree with the documented production timeline. Finally, these results confirm the utility of these analytical methods and highlight the importance of a holistic approach for forensic study of unknown materials.

  13. Outcomes of senior reach gatekeeper referrals: comparison of the Spokane gatekeeper program, Colorado Senior Reach, and Mid-Kansas Senior Outreach.

    PubMed

    Bartsch, David A; Rodgers, Vicki K; Strong, Don

    2013-01-01

    Outcomes of older adults referred for care management and mental health services through the senior reach gatekeeper model of case finding were examined in this study and compared with the Spokane gatekeeper model. Colorado Senior Reach and the Mid-Kansas Senior Outreach (MKSO) programs are the two Senior Reach Gatekeeper programs modeled after the Spokane program, employing the same community education and gatekeeper model and providing mental health treatment for elderly adults in need of support. The three mature programs were compared on seniors served, isolation, and depression ratings. Nontraditional community gatekeepers were trained and referred seniors in need. Findings indicate that individuals served by the two Senior Reach Gatekeeper programs demonstrated significant improvements: isolation indicators such as social isolation decreased, and depression symptoms and suicide ideation also decreased. These findings for the two Senior Reach Gatekeeper programs demonstrate that the gatekeeper approach to training community partners worked in referring at-risk seniors in need, in meeting their needs, and in having a positive impact on their lives.

  14. Acidity in DMSO from the embedded cluster integral equation quantum solvation model.

    PubMed

    Heil, Jochen; Tomazic, Daniel; Egbers, Simon; Kast, Stefan M

    2014-04-01

    The embedded cluster reference interaction site model (EC-RISM) is applied to the prediction of acidity constants of organic molecules in dimethyl sulfoxide (DMSO) solution. EC-RISM is based on a self-consistent treatment of the solute's electronic structure and the solvent's structure by coupling quantum-chemical calculations with three-dimensional (3D) RISM integral equation theory. We compare available DMSO force fields with reference calculations obtained using the polarizable continuum model (PCM). The results are evaluated statistically using two different approaches to eliminating the proton contribution: a linear regression model and an analysis of pK(a) shifts for compound pairs. Suitable levels of theory for the integral equation methodology are benchmarked. The results are further analyzed and illustrated by visualizing solvent site distribution functions and comparing them with an aqueous environment.

  15. [Use of theories and models on papers of a Latin-American journal in public health, 2000 to 2004].

    PubMed

    Cabrera Arana, Gustavo Alonso

    2007-12-01

    To characterize the frequency and type of use of theories or models in papers of a Latin-American journal in public health between 2000 and 2004. The Revista de Saúde Pública was chosen because of its history of periodic publication without interruption and its current impact on the scientific communication of the area. A standard procedure was applied for reading and classifying articles in an arbitrary typology of four levels, according to the depth of the use of models or theoretical references to describe problems or issues, to formulate methods and to discuss results. Of 482 articles included, 421 (87%) were research studies, 42 (9%) reviews or special contributions and 19 (4%) opinion texts or essays. Of 421 research studies, 286 (68%) had a quantitative focus, 110 (26%) qualitative and 25 (6%) mixed. Reference to theories or models is uncommon; only 90 (19%) articles mentioned a theory or model. According to the depth of the use, 29 (6%) were classified as type I, 9 (2%) as type II, 6 (1.3%) as type III and the 46 remaining texts (9.5%) as type IV. Reference to models was nine-fold more frequent than the use of theoretical references. The ideal use, type IV, occurred in one of every ten articles studied. It is important to make explicit the theoretical and model frameworks used when approaching topics, formulating hypotheses, designing methods and discussing findings in papers.

  16. Complex index of refraction estimation from degree of polarization with diffuse scattering consideration.

    PubMed

    Zhan, Hanyu; Voelz, David G; Cho, Sang-Yeon; Xiao, Xifeng

    2015-11-20

    The estimation of the refractive index from optical scattering off a target's surface is an important task for remote sensing applications. Optical polarimetry is an approach that shows promise for refractive index estimation. However, this estimation often relies on polarimetric models that are limited to specular targets involving single surface scattering. Here, an analytic model is developed for the degree of polarization (DOP) associated with reflection from a rough surface that includes the effect of diffuse scattering. A multiplicative factor is derived to account for the diffuse component and evaluation of the model indicates that diffuse scattering can significantly affect the DOP values. The scattering model is used in a new approach for refractive index estimation from a series of DOP values that involves jointly estimating n, k, and ρ(d)with a nonlinear equation solver. The approach is shown to work well with simulation data and additive noise. When applied to laboratory-measured DOP values, the approach produces significantly improved index estimation results relative to reference values.

  17. Predictive simulation of bidirectional Glenn shunt using a hybrid blood vessel model.

    PubMed

    Li, Hao; Leow, Wee Kheng; Chiu, Ing-Sh

    2009-01-01

    This paper proposes a method for performing predictive simulation of cardiac surgery. It applies a hybrid approach to model the deformation of blood vessels. The hybrid blood vessel model consists of a reference Cosserat rod and a surface mesh. The reference Cosserat rod models the blood vessel's global bending, stretching, twisting and shearing in a physically correct manner, and the surface mesh models the surface details of the blood vessel. In this way, the deformation of blood vessels can be computed efficiently and accurately. Our predictive simulation system can produce complex surgical results given a small amount of user input. It allows the surgeon to easily explore various surgical options and evaluate them. Tests of the system using bidirectional Glenn shunt (BDG) as an application example show that the results produced by the system are similar to real surgical results.

  18. Requirements engineering for cross-sectional information chain models

    PubMed Central

    Hübner, U; Cruel, E; Gök, M; Garthaus, M; Zimansky, M; Remmers, H; Rienhoff, O

    2012-01-01

    Despite the wealth of literature on requirements engineering, little is known about engineering very generic, innovative and emerging requirements, such as those for cross-sectional information chains. The IKM health project aims at building information chain reference models for the care of patients with chronic wounds, cancer-related pain and back pain. Our question therefore was how to appropriately capture information and process requirements that are both generally applicable and practically useful. To this end, we started with recommendations from clinical guidelines and put them up for discussion in Delphi surveys and expert interviews. Despite the heterogeneity we encountered in all three methods, it was possible to obtain requirements suitable for building reference models. We evaluated three modelling languages and then chose to write the models in UML (class and activity diagrams). On the basis of the current project results, the pros and cons of our approach are discussed. PMID:24199080

  19. Report of the 90-day study on human exploration of the Moon and Mars

    NASA Technical Reports Server (NTRS)

    1989-01-01

    The basic mission sequence to achieve the President's goal is clear: begin with Space Station Freedom in the 1990's, return to the Moon to stay early in the next century, and then journey to Mars. Five reference approaches are modeled, building on past programs and recent studies, to reflect wide-ranging strategies that incorporate varied program objectives, schedules, technologies, and resource availabilities. The reference approaches are (1) balance and speed; (2) the earliest possible landing on Mars; (3) reduce logistics from Earth; (4) schedule adapted to Space Station Freedom; and (5) reduced scales. The study and programmatic assessment have shown that the Human Exploration Initiative is indeed a feasible approach to achieving the President's goals. Several reasonable alternatives exist, but a long-range commitment and significant resources will be required. However, the value of the program and the benefits to the Nation are immeasurable.

  20. The effects of topography on magma chamber deformation models: Application to Mt. Etna and radar interferometry

    NASA Astrophysics Data System (ADS)

    Williams, Charles A.; Wadge, Geoff

    We have used a three-dimensional elastic finite element model to examine the effects of topography on the surface deformation predicted by models of magma chamber deflation. We used the topography of Mt. Etna to control the geometry of our model, and compared the finite element results to those predicted by an analytical solution for a pressurized sphere in an elastic half-space. Topography has a significant effect on the predicted surface deformation for both displacement profiles and synthetic interferograms. Not only are the predicted displacement magnitudes significantly different, but also the map-view patterns of displacement. It is possible to match the predicted displacement magnitudes fairly well by adjusting the elevation of a reference surface; however, the horizontal pattern of deformation is still significantly different. Thus, inversions based on constant-elevation reference surfaces may not properly estimate the horizontal position of a magma chamber. We have investigated an approach where the elevation of the reference surface varies for each computation point, corresponding to topography. For vertical displacements and tilts this method provides a good fit to the finite element results, and thus may form the basis for an inversion scheme. For radial displacements, a constant reference elevation provides a better fit to the numerical results.
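
    The analytical half-space benchmark referred to above is the classical Mogi point-source solution. The sketch below evaluates it with a per-point reference elevation, mimicking the varying-reference-surface idea discussed in the abstract; the volume-change form of the formulas is used and the topography handling is schematic, not the finite element model itself.

    ```python
    import numpy as np

    def mogi_displacements(r, source_depth, dV, nu=0.25, elev=0.0):
        """Mogi point-source surface displacements (pressurized sphere in an
        elastic half-space). `elev` lets the reference-surface elevation vary per
        observation point, so points higher on the edifice sit farther from the
        chamber (a sketch of the varying-reference-elevation idea)."""
        d = source_depth + elev                     # effective depth below each point
        R3 = (r**2 + d**2) ** 1.5
        u_r = (1.0 - nu) * dV * r / (np.pi * R3)    # radial (horizontal) displacement
        u_z = (1.0 - nu) * dV * d / (np.pi * R3)    # vertical displacement (deflation: dV < 0)
        return u_r, u_z

    # Example: predicted deformation 3 km from the source axis, with and without
    # a 1 km topographic offset of the reference surface.
    print(mogi_displacements(r=3000.0, source_depth=5000.0, dV=-1e6))
    print(mogi_displacements(r=3000.0, source_depth=5000.0, dV=-1e6, elev=1000.0))
    ```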

  1. Application of two passive strategies on the load mitigation of large offshore wind turbines

    NASA Astrophysics Data System (ADS)

    Shirzadeh, Rasoul; Kühn, Martin

    2016-09-01

    This study presents the numerical results of two passive strategies to reduce the support structure loads of a large offshore wind turbine. In the first approach, an omnidirectional tuned mass damper is designed and implemented in the tower top to alleviate the structural vibrations. In the second approach, a viscous fluid damper model which is diagonally attached to the tower at two points is developed. Aeroelastic simulations are performed for the offshore 10 MW INNWIND.EU reference wind turbine mounted on a jacket structure. Lifetime damage equivalent loads are evaluated at the tower base and compared with those for the reference wind turbine. The results show that the integrated design can extend the lifetime of the support structure.

  2. An unsupervised approach for measuring myocardial perfusion in MR image sequences

    NASA Astrophysics Data System (ADS)

    Discher, Antoine; Rougon, Nicolas; Preteux, Francoise

    2005-08-01

    Quantitatively assessing myocardial perfusion is a key issue for the diagnosis, therapeutic planning and patient follow-up of cardiovascular diseases. To this end, perfusion MRI (p-MRI) has emerged as a valuable clinical investigation tool thanks to its ability to dynamically image the first pass of a contrast bolus in the framework of stress/rest exams. However, reliable techniques for automatically computing regional first-pass curves from 2D short-axis cardiac p-MRI sequences remain to be elaborated. We address this problem and develop an unsupervised four-step approach comprising: (i) a coarse spatio-temporal segmentation step, which automatically detects a region of interest for the heart over the whole sequence and selects a reference frame with maximal myocardium contrast; (ii) a model-based variational segmentation step of the reference frame, yielding a bi-ventricular partition of the heart into left ventricle, right ventricle and myocardium components; (iii) a respiratory/cardiac motion artifact compensation step using a novel region-driven intensity-based non-rigid registration technique, which elastically propagates the reference bi-ventricular segmentation over the whole sequence; (iv) a measurement step, delivering first-pass curves over each region of a segmental model of the myocardium. The performance of this approach is assessed over a database of 15 normal and pathological subjects, and compared with perfusion measurements delivered by an MRI manufacturer software package based on manual delineations by a medical expert.

  3. Computational Fluid Dynamics Simulation of Flows in an Oxidation Ditch Driven by a New Surface Aerator

    PubMed Central

    Huang, Weidong; Li, Kun; Wang, Gan; Wang, Yingzhe

    2013-01-01

    In this article, we present a newly designed inverse umbrella surface aerator and test its performance in driving the flow of an oxidation ditch. Results show that it has a better performance in driving the oxidation ditch than the original one, with higher average velocity and a more uniform flow field. We also present a computational fluid dynamics model for predicting the flow field in an oxidation ditch driven by a surface aerator. The improved momentum source term approach to simulate the flow field of the oxidation ditch driven by an inverse umbrella surface aerator was developed and validated through experiments. Four kinds of turbulence models were investigated with the approach, including the standard k−ɛ model, RNG k−ɛ model, realizable k−ɛ model, and Reynolds stress model, and the predicted data were compared with those calculated with the multiple rotating reference frame approach (MRF) and sliding mesh approach (SM). Results of the momentum source term approach are in good agreement with the experimental data, and its prediction accuracy is better than MRF and close to SM. It is also found that the momentum source term approach has lower computational expense, is simpler to preprocess, and is easier to use. PMID:24302850

  4. Emotional valence and contextual affordances flexibly shape approach-avoidance movements

    PubMed Central

    Saraiva, Ana Carolina; Schüür, Friederike; Bestmann, Sven

    2013-01-01

    Behavior is influenced by the emotional content—or valence—of stimuli in our environment. Positive stimuli facilitate approach, whereas negative stimuli facilitate defensive actions such as avoidance (flight) and attack (fight). Facilitation of approach or avoidance movements may also be influenced by whether it is the self that moves relative to a stimulus (self-reference) or the stimulus that moves relative to the self (object-reference), adding flexibility and context-dependence to behavior. Alternatively, facilitation of approach avoidance movements may happen in a pre-defined and muscle-specific way, whereby arm flexion is faster to approach positive (e.g., flexing the arm brings a stimulus closer) and arm extension faster to avoid negative stimuli (e.g., extending the arm moves the stimulus away). While this allows for relatively fast responses, it may compromise the flexibility offered by contextual influences. Here we asked under which conditions approach-avoidance actions are influenced by contextual factors (i.e., reference-frame). We manipulated the reference-frame in which actions occurred by asking participants to move a symbolic manikin (representing the self) toward or away from a positive or negative stimulus, and move a stimulus toward or away from the manikin. We also controlled for the type of movements used to approach or avoid in each reference. We show that the reference-frame influences approach-avoidance actions to emotional stimuli, but additionally we find muscle-specificity for negative stimuli in self-reference contexts. We speculate this muscle-specificity may be a fast and adaptive response to threatening stimuli. Our results confirm that approach-avoidance behavior is flexible and reference-frame dependent, but can be muscle-specific depending on the context and valence of the stimulus. Reference-frame and stimulus-evaluation are key factors in guiding approach-avoidance behavior toward emotional stimuli in our environment. PMID:24379794

  5. Multiscale Simulations of Protein Landscapes: Using Coarse Grained Models as Reference Potentials to Full Explicit Models

    PubMed Central

    Messer, Benjamin M.; Roca, Maite; Chu, Zhen T.; Vicatos, Spyridon; Kilshtain, Alexandra Vardi; Warshel, Arieh

    2009-01-01

    Evaluating the free energy landscape of proteins and the corresponding functional aspects presents a major challenge for computer simulation approaches. This challenge is due to the complexity of the landscape and the enormous computer time needed for converging simulations. The use of simplified coarse grained (CG) folding models offers an effective way of sampling the landscape; such a treatment, however, may not give the correct description of the effect of the actual protein residues. A general way around this problem that has been put forward in our early work (Fan et al, Theor Chem Acc (1999) 103:77-80) uses the CG model as a reference potential for free energy calculations of different properties of the explicit model. This method is refined and extended here, focusing on improving the electrostatic treatment and on demonstrating key applications. These applications include: evaluation of changes in folding energy upon mutations, calculation of transition-state binding free energies (which are crucial for rational enzyme design), evaluation of the catalytic landscape and simulation of the time-dependent response to pH changes. Furthermore, the general potential of our approach in overcoming major challenges in studies of structure-function correlation in proteins is discussed. PMID:20052756
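
    The reference-potential step rests on a standard free energy perturbation identity: configurations are sampled on the cheap CG surface and the explicit (full) model is recovered through an exponential average of the energy gap. In its general form (not the authors' specific electrostatic refinements):

    ```latex
    \Delta A_{\mathrm{CG}\rightarrow\mathrm{full}}
      = -k_{\mathrm B}T \,\ln\!\left\langle
          \exp\!\left[-\frac{E_{\mathrm{full}}(\mathbf{x}) - E_{\mathrm{CG}}(\mathbf{x})}{k_{\mathrm B}T}\right]
        \right\rangle_{\mathrm{CG}}
    ```

    Here the angle brackets denote an average over configurations x sampled with the CG reference potential, so the expensive explicit-model energies only need to be evaluated on that sample.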

  6. Hamiltonian methods of modeling and control of AC microgrids with spinning machines and inverters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthews, Ronald C.; Weaver, Wayne W.; Robinett, Rush D.

    This study presents a novel approach to the modeling and control of AC microgrids that contain spinning machines, power electronic inverters and energy storage devices. The inverters in the system can adjust their frequencies and power angles very quickly, so the modeling focuses on establishing a common reference frequency and angle in the microgrid based on the spinning machines. From this dynamic model, a nonlinear Hamiltonian surface shaping and power flow control method is applied and shown to stabilize the system. From this approach, the energy flow in the system is used to show the energy storage device requirements and limitations for the system. This paper first describes the model for a single bus AC microgrid with a Hamiltonian control, then extends this model and control to a more general class of multiple bus AC microgrids. Finally, simulation results demonstrate the efficacy of the approach in stabilizing and optimizing the microgrid.

  7. Hamiltonian methods of modeling and control of AC microgrids with spinning machines and inverters

    DOE PAGES

    Matthews, Ronald C.; Weaver, Wayne W.; Robinett, Rush D.; ...

    2017-12-22

    This study presents a novel approach to the modeling and control of AC microgrids that contain spinning machines, power electronic inverters and energy storage devices. The inverters in the system can adjust their frequencies and power angles very quickly, so the modeling focuses on establishing a common reference frequency and angle in the microgrid based on the spinning machines. From this dynamic model, a nonlinear Hamiltonian surface shaping and power flow control method is applied and shown to stabilize the system. From this approach, the energy flow in the system is used to show the energy storage device requirements and limitations for the system. This paper first describes the model for a single bus AC microgrid with a Hamiltonian control, then extends this model and control to a more general class of multiple bus AC microgrids. Finally, simulation results demonstrate the efficacy of the approach in stabilizing and optimizing the microgrid.

  8. Didactical suggestion for a Dynamic Hybrid Intelligent e-Learning Environment (DHILE) applying the PENTHA ID Model

    NASA Astrophysics Data System (ADS)

    dall'Acqua, Luisa

    2011-08-01

    The teleology of our research is to propose a solution to the request of "innovative, creative teaching", proposing a methodology to educate creative Students in a society characterized by multiple reference points and hyper dynamic knowledge, continuously subject to reviews and discussions. We apply a multi-prospective Instructional Design Model (PENTHA ID Model), defined and developed by our research group, which adopts a hybrid pedagogical approach, consisting of elements of didactical connectivism intertwined with aspects of social constructivism and enactivism. The contribution proposes an e-course structure and approach, applying the theoretical design principles of the above mentioned ID Model, describing methods, techniques, technologies and assessment criteria for the definition of lesson modes in an e-course.

  9. Getting Started with TQM.

    ERIC Educational Resources Information Center

    Freeston, Kenneth R.

    1992-01-01

    Tired of disjointed programs and projects, the staff of Newtown (Connecticut) Public Schools developed their own Success-Oriented School Model, blending elements of Deming's 14 points with William Glasser's approach to quality. To obtain quality outcomes means stressing continuous improvement and staying close to the customer. (six references)…

  10. Characterization of the rainbow trout transcriptome using Sanger and 454-Pyrosequencing approaches

    USDA-ARS?s Scientific Manuscript database

    BACKGROUND: Rainbow trout is an important fish species for aquaculture and a model species for research investigations associated with carcinogenesis, comparative immunology, toxicology and the evolutionary biology. However, to date there is no genome reference sequence to facilitate the development...

  11. Characterization of the rainbow trout transcriptome using Sanger and 454-pyrosequencing approaches

    USDA-ARS?s Scientific Manuscript database

    Background: Rainbow trout is an important fish for aquaculture and recreational fisheries and serves as a model species for research investigations associated with carcinogenesis, comparative immunology, toxicology and the evolutionary biology. However, to date there is no genome reference sequence...

  12. Outdoor Experiences and Sustainability

    ERIC Educational Resources Information Center

    Prince, Heather E.

    2017-01-01

    Positive outdoor teaching and learning experiences and sound pedagogical approaches undoubtedly have contributed towards an understanding of environmental sustainability but it is not always clear how, and to what extent, education can translate into action. This article argues, with reference to social learning theory, that role modelling,…

  13. Comparison of SMOS and SMAP Soil Moisture Retrieval Approaches Using Tower-based Radiometer Data over a Vineyard Field

    NASA Technical Reports Server (NTRS)

    Miernecki, Maciej; Wigneron, Jean-Pierre; Lopez-Baeza, Ernesto; Kerr, Yann; DeJeu, Richard; DeLannoy, Gabielle J. M.; Jackson, Tom J.; O'Neill, Peggy E.; Shwank, Mike; Moran, Roberto Fernandez

    2014-01-01

    The objective of this study was to compare several approaches to soil moisture (SM) retrieval using L-band microwave radiometry. The comparison was based on a brightness temperature (TB) data set acquired since 2010 by the L-band radiometer ELBARA-II over a vineyard field at the Valencia Anchor Station (VAS) site. ELBARA-II, provided by the European Space Agency (ESA) within the scientific program of the SMOS (Soil Moisture and Ocean Salinity) mission, measures multiangular TB data at horizontal and vertical polarization for a range of incidence angles (30°-60°). Based on a three-year data set (2010-2012), several SM retrieval approaches developed for spaceborne missions including AMSR-E (Advanced Microwave Scanning Radiometer for EOS), SMAP (Soil Moisture Active Passive) and SMOS were compared. The approaches include: the Single Channel Algorithm (SCA) for horizontal (SCA-H) and vertical (SCA-V) polarizations, the Dual Channel Algorithm (DCA), the Land Parameter Retrieval Model (LPRM) and two simplified approaches based on statistical regressions (referred to as 'Mattar' and 'Saleh'). Time series of vegetation indices required for three of the algorithms (SCA-H, SCA-V and Mattar) were obtained from MODIS observations. The SM retrievals were evaluated against reference SM values estimated from a multi-angular 2-parameter inversion approach. The results obtained with the current baseline algorithms developed for SMAP (SCA-H and -V) are in very good agreement with the reference SM data set derived from the multi-angular observations (R² around 0.90, RMSE varying between 0.035 and 0.056 m³/m³ for several retrieval configurations). This result showed that, provided the relationship between vegetation optical depth and a remotely-sensed vegetation index can be calibrated, the SCA algorithms can provide results very close to those obtained from multi-angular observations in this study area. The approaches based on statistical regressions provided similar results and the best accuracy was obtained with the Saleh methods based on either bi-angular or bipolarization observations (R² around 0.93, RMSE around 0.035 m³/m³). The LPRM and DCA algorithms were found to be slightly less successful in retrieving the 'reference' SM time series (R² around 0.75, RMSE around 0.055 m³/m³). However, the two above approaches have the great advantage of not requiring any model calibrations prior to the SM retrievals.
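
    The single channel algorithms invert the tau-omega radiative transfer model for soil reflectivity, which is subsequently converted to soil moisture through a dielectric model. The sketch below shows only the reflectivity inversion step with placeholder parameter values; it is a generic SCA-style illustration, not the exact configuration used in this comparison.

    ```python
    import numpy as np

    def sca_soil_reflectivity(tb, t_surf, tau, omega=0.05, theta_deg=40.0):
        """Invert the tau-omega emission model for soil reflectivity from a
        single-polarization brightness temperature (Single Channel Algorithm
        style). The conversion of reflectivity to soil moisture via a dielectric
        mixing model and roughness correction is omitted."""
        gamma = np.exp(-tau / np.cos(np.radians(theta_deg)))  # canopy transmissivity
        veg = (1.0 - omega) * (1.0 - gamma)                   # canopy emission term
        # Forward model: tb = t_surf * (veg * (1 + r * gamma) + (1 - r) * gamma)
        r = (tb / t_surf - veg - gamma) / (gamma * (veg - 1.0))
        return r

    # Example with placeholder values: TB = 250 K, surface temperature 295 K,
    # vegetation optical depth 0.1.
    print(sca_soil_reflectivity(tb=250.0, t_surf=295.0, tau=0.1))
    ```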

  14. A logical approach to semantic interoperability in healthcare.

    PubMed

    Bird, Linda; Brooks, Colleen; Cheong, Yu Chye; Tun, Nwe Ni

    2011-01-01

    Singapore is in the process of rolling out a number of national e-health initiatives, including the National Electronic Health Record (NEHR). A critical enabler in the journey towards semantic interoperability is a Logical Information Model (LIM) that harmonises the semantics of the information structure with the terminology. The Singapore LIM uses a combination of international standards, including ISO 13606-1 (a reference model for electronic health record communication), ISO 21090 (healthcare datatypes), and SNOMED CT (healthcare terminology). The LIM is accompanied by a logical design approach, used to generate interoperability artifacts, and incorporates mechanisms for achieving unidirectional and bidirectional semantic interoperability.

  15. Convergence and Divergence in a Multi-Model Ensemble of Terrestrial Ecosystem Models in North America

    NASA Astrophysics Data System (ADS)

    Dungan, J. L.; Wang, W.; Hashimoto, H.; Michaelis, A.; Milesi, C.; Ichii, K.; Nemani, R. R.

    2009-12-01

    In support of NACP, we are conducting an ensemble modeling exercise using the Terrestrial Observation and Prediction System (TOPS) to evaluate uncertainties among ecosystem models, satellite datasets, and in-situ measurements. The models used in the experiment include public-domain versions of Biome-BGC, LPJ, TOPS-BGC, and CASA, driven by a consistent set of climate fields for North America at 8km resolution and daily/monthly time steps over the period of 1982-2006. The reference datasets include MODIS Gross Primary Production (GPP) and Net Primary Production (NPP) products, Fluxnet measurements, and other observational data. The simulation results and the reference datasets are consistently processed and systematically compared in the climate (temperature-precipitation) space; in particular, an alternative to the Taylor diagram is developed to facilitate model-data intercomparisons in multi-dimensional space. The key findings of this study indicate that: the simulated GPP/NPP fluxes are in general agreement with observations over forests, but are biased low (underestimated) over non-forest types; large uncertainties of biomass and soil carbon stocks are found among the models (and reference datasets), often induced by seemingly “small” differences in model parameters and implementation details; the simulated Net Ecosystem Production (NEP) mainly responds to non-respiratory disturbances (e.g. fire) in the models and therefore is difficult to compare with flux data; and the seasonality and interannual variability of NEP varies significantly among models and reference datasets. These findings highlight the problem inherent in relying on only one modeling approach to map surface carbon fluxes and emphasize the pressing necessity of expanded and enhanced monitoring systems to narrow critical structural and parametrical uncertainties among ecosystem models.

  16. Rate determination from vector observations

    NASA Technical Reports Server (NTRS)

    Weiss, Jerold L.

    1993-01-01

    Vector observations are a common class of attitude data provided by a wide variety of attitude sensors. Attitude determination from vector observations is a well-understood process and numerous algorithms such as the TRIAD algorithm exist. These algorithms require measurement of the line of sight (LOS) vector to reference objects and knowledge of the LOS directions in some predetermined reference frame. Once attitude is determined, it is a simple matter to synthesize vehicle rate using some form of lead-lag filter, and then use it for vehicle stabilization. Many situations arise, however, in which rate knowledge is required but knowledge of the nominal LOS directions is not available. This paper presents two methods for determining spacecraft angular rates from vector observations without a priori knowledge of the vector directions. The first approach uses an extended Kalman filter with a spacecraft dynamic model and a kinematic model representing the motion of the observed LOS vectors. The second approach uses a 'differential' TRIAD algorithm to compute the incremental direction cosine matrix, from which vehicle rate is then derived.
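
    For reference, the classical TRIAD construction mentioned above builds an orthonormal triad from each pair of vectors and composes them into an attitude matrix. The sketch below is the textbook form; the 'differential' variant for rate estimation is only indicated in a comment, since the paper's exact formulation is not reproduced here.

    ```python
    import numpy as np

    def triad(b1, b2, r1, r2):
        """TRIAD attitude determination: build orthonormal triads from two vector
        observations in the body frame (b1, b2) and the reference frame (r1, r2)
        and return the body-from-reference direction cosine matrix."""
        def triad_frame(v1, v2):
            t1 = v1 / np.linalg.norm(v1)
            t2 = np.cross(v1, v2)
            t2 /= np.linalg.norm(t2)
            t3 = np.cross(t1, t2)
            return np.column_stack([t1, t2, t3])
        return triad_frame(b1, b2) @ triad_frame(r1, r2).T

    # A 'differential' variant applies the same construction to one pair of lines
    # of sight observed at two closely spaced times, yielding an incremental
    # direction cosine matrix from which body rate can be extracted without
    # knowing the reference directions.
    ```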

  17. Pay-to-participate funding schemes in human cell and tissue clinical studies.

    PubMed

    Sipp, Douglas

    2012-11-01

    Funding support for clinical research is traditionally obtained from any of several sources, including government agencies, industry, not-for-profit foundations, philanthropies and charitable and advocacy organizations. In recent history, there have also been a limited number of cases in which clinical research programs were established with funding provided directly by patients in return for the ability to participate as nonrandomized subjects. This approach to clinical research funding, which I refer to here as the 'pay-to-participate' model, has been both criticized and rationalized on ethical grounds, with reference to its implications for issues including equipoise, therapeutic misconception, justice, autonomy and risk-benefit balance. Discussion of the scientific implications of this funding scheme, however, has been more limited. I will briefly review the history of the pay-to-participate model in the context of experimental cell and tissue treatments to date and highlight the many ethical and, particularly, scientific challenges that unavoidably confound this approach to the funding and conduct of clinical research.

  18. Dissolution curve comparisons through the F(2) parameter, a Bayesian extension of the f(2) statistic.

    PubMed

    Novick, Steven; Shen, Yan; Yang, Harry; Peterson, John; LeBlond, Dave; Altan, Stan

    2015-01-01

    Dissolution (or in vitro release) studies constitute an important aspect of pharmaceutical drug development. One important use of such studies is for justifying a biowaiver for post-approval changes which requires establishing equivalence between the new and old product. We propose a statistically rigorous modeling approach for this purpose based on the estimation of what we refer to as the F2 parameter, an extension of the commonly used f2 statistic. A Bayesian test procedure is proposed in relation to a set of composite hypotheses that capture the similarity requirement on the absolute mean differences between test and reference dissolution profiles. Several examples are provided to illustrate the application. Results of our simulation study comparing the performance of f2 and the proposed method show that our Bayesian approach is comparable to or in many cases superior to the f2 statistic as a decision rule. Further useful extensions of the method, such as the use of continuous-time dissolution modeling, are considered.
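
    For context, the classical f2 similarity factor that the F2 parameter extends is computed directly from the mean squared difference between the two profiles; a minimal version is sketched below (the Bayesian F2 procedure itself, which models the underlying mean profiles and their uncertainty, is not shown).

    ```python
    import numpy as np

    def f2_similarity(reference, test):
        """Classical f2 similarity factor between reference and test dissolution
        profiles (percent dissolved at matched time points); f2 >= 50 is the
        conventional similarity criterion."""
        reference = np.asarray(reference, dtype=float)
        test = np.asarray(test, dtype=float)
        msd = np.mean((reference - test) ** 2)        # mean squared difference
        return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

    # Identical profiles give f2 = 100; larger differences lower the value.
    print(f2_similarity([20, 45, 70, 90], [18, 42, 68, 88]))
    ```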

  19. Neural network-based model reference adaptive control system.

    PubMed

    Patino, H D; Liu, D

    2000-01-01

    In this paper, an approach to model reference adaptive control based on neural networks is proposed and analyzed for a class of first-order continuous-time nonlinear dynamical systems. The controller structure can employ either a radial basis function network or a feedforward neural network to compensate adaptively the nonlinearities in the plant. A stable controller-parameter adjustment mechanism, which is determined using the Lyapunov theory, is constructed using a sigma-modification-type updating law. The evaluation of control error in terms of the neural network learning error is performed. That is, the control error converges asymptotically to a neighborhood of zero, whose size is evaluated and depends on the approximation error of the neural network. In the design and analysis of neural network-based control systems, it is important to take into account the neural network learning error and its influence on the control error of the plant. Simulation results showing the feasibility and performance of the proposed approach are given.
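
    A minimal numerical sketch of the scheme described above for a scalar plant: an RBF network compensates the unknown nonlinearity, the tracking error drives the weight updates, and a sigma-modification term keeps the weights bounded. Gains, basis functions, and the plant nonlinearity are placeholders, not values from the paper.

    ```python
    import numpy as np

    def rbf(x, centers, width=1.0):
        """Gaussian radial basis functions evaluated at scalar state x."""
        return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

    def simulate_mrac(r_cmd, dt=0.01, steps=5000, a_m=2.0, b_m=2.0,
                      gamma=50.0, sigma=0.01):
        """Model reference adaptive control of a first-order plant with an RBF
        network and a sigma-modification update law (illustrative sketch)."""
        centers = np.linspace(-2.0, 2.0, 11)
        w = np.zeros_like(centers)                   # network weights
        x, x_m = 0.0, 0.0
        f_true = lambda x: np.sin(x) * x             # unknown plant nonlinearity
        for _ in range(steps):
            phi = rbf(x, centers)
            u = -w @ phi - a_m * x + b_m * r_cmd     # control law with NN compensation
            e = x - x_m                              # tracking error
            w += dt * gamma * (phi * e - sigma * w)  # sigma-modification update
            x += dt * (f_true(x) + u)                # plant (Euler step)
            x_m += dt * (-a_m * x_m + b_m * r_cmd)   # stable reference model
        return x, x_m, w

    x, x_m, _ = simulate_mrac(r_cmd=1.0)
    print(f"plant state {x:.3f}, reference state {x_m:.3f}")
    ```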

  20. Business process architectures: overview, comparison and framework

    NASA Astrophysics Data System (ADS)

    Dijkman, Remco; Vanderfeesten, Irene; Reijers, Hajo A.

    2016-02-01

    With the uptake of business process modelling in practice, the demand grows for guidelines that lead to consistent and integrated collections of process models. The notion of a business process architecture has been explicitly proposed to address this. This paper provides an overview of the prevailing approaches to design a business process architecture. Furthermore, it includes evaluations of the usability and use of the identified approaches. Finally, it presents a framework for business process architecture design that can be used to develop a concrete architecture. The use and usability were evaluated in two ways. First, a survey was conducted among 39 practitioners, in which the opinion of the practitioners on the use and usefulness of the approaches was evaluated. Second, four case studies were conducted, in which process architectures from practice were analysed to determine the approaches or elements of approaches that were used in their design. Both evaluations showed that practitioners have a preference for using approaches that are based on reference models and approaches that are based on the identification of business functions or business objects. At the same time, the evaluations showed that practitioners use these approaches in combination, rather than selecting a single approach.

  1. Low energy stage study. Volume 2: Requirements and candidate propulsion modes. [orbital launching of shuttle payloads

    NASA Technical Reports Server (NTRS)

    1978-01-01

    A payload mission model covering 129 launches was examined and compared against the space transportation system shuttle standard orbit inclinations and a shuttle launch site implementation schedule. Based on this examination and comparison, a set of six reference missions were defined in terms of spacecraft weight and velocity requirements to deliver the payload from a 296 km circular Shuttle standard orbit to the spacecraft's planned orbit. Payload characteristics and requirements representative of the model payloads included in the regime bounded by each of the six reference missions were determined. A set of launch cost envelopes were developed and defined based on the characteristics of existing/planned Shuttle upper stages and expendable launch systems in terms of launch cost and velocity delivered. These six reference missions were used to define the requirements for the candidate propulsion modes, which were developed and screened to determine the propulsion approaches for conceptual design.

  2. Vector-model-supported approach in prostate plan optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Eva Sau Fan; Department of Health Technology and Informatics, The Hong Kong Polytechnic University; Wu, Vincent Wing Cheung

    The lengthy time consumed in traditional manual plan optimization can limit the use of step-and-shoot intensity-modulated radiotherapy/volumetric-modulated radiotherapy (S&S IMRT/VMAT). A vector model for retrieving similar radiotherapy cases was developed based on the structural and physiologic features extracted from the Digital Imaging and Communications in Medicine (DICOM) files. Planning parameters were retrieved from the selected similar reference case and applied to the test case to bypass the gradual adjustment of planning parameters. Therefore, the planning time spent on the traditional trial-and-error manual optimization approach in the beginning of optimization could be reduced. Each S&S IMRT/VMAT prostate reference database comprised 100 previously treated cases. Prostate cases were replanned with both traditional optimization and vector-model-supported optimization based on the oncologists' clinical dose prescriptions. A total of 360 plans, which consisted of 30 cases of S&S IMRT, 30 cases of 1-arc VMAT, and 30 cases of 2-arc VMAT plans including first optimization and final optimization with/without vector-model-supported optimization, were compared using the 2-sided t-test and paired Wilcoxon signed rank test, with a significance level of 0.05 and a false discovery rate of less than 0.05. For S&S IMRT, 1-arc VMAT, and 2-arc VMAT prostate plans, there was a significant reduction of almost 50% in planning time and iteration number with vector-model-supported optimization. When the first optimization plans were compared, 2-arc VMAT prostate plans had better plan quality than 1-arc VMAT plans. The volume receiving 35 Gy in the femoral head for 2-arc VMAT plans was reduced with the vector-model-supported optimization compared with the traditional manual optimization approach. Otherwise, the quality of plans from both approaches was comparable. Vector-model-supported optimization was shown to offer much shortened planning time and iteration number without compromising the plan quality.
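    The retrieval step can be pictured with a small sketch, not the authors' implementation: each reference case is summarized by a feature vector (hypothetical structural features such as target volume and organ-overlap fractions), and the most similar stored case to a new patient is found by cosine similarity so that its planning parameters can seed the optimizer.

```python
import numpy as np

# Hypothetical feature vectors per reference case: [PTV volume (cc),
# rectum-PTV overlap fraction, bladder-PTV overlap fraction]
reference_features = np.array([
    [78.0, 0.12, 0.20],
    [95.0, 0.18, 0.25],
    [60.0, 0.08, 0.15],
])
reference_plan_params = ["params_case_0", "params_case_1", "params_case_2"]

def retrieve_similar_case(new_case, features, plans):
    # z-score normalisation so the volume (cc) does not dominate the fractions
    mu, sd = features.mean(axis=0), features.std(axis=0)
    f = (features - mu) / sd
    q = (np.asarray(new_case) - mu) / sd
    cos = f @ q / (np.linalg.norm(f, axis=1) * np.linalg.norm(q))
    best = int(np.argmax(cos))
    return best, plans[best]

idx, params = retrieve_similar_case([82.0, 0.14, 0.21],
                                    reference_features, reference_plan_params)
print(f"most similar reference case: {idx} -> seed optimization with {params}")
```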

  3. Travel time tomography with local image regularization by sparsity constrained dictionary learning

    NASA Astrophysics Data System (ADS)

    Bianco, M.; Gerstoft, P.

    2017-12-01

    We propose a regularization approach for 2D seismic travel time tomography which models small rectangular groups of slowness pixels, within an overall or `global' slowness image, as sparse linear combinations of atoms from a dictionary. The groups of slowness pixels are referred to as patches and a dictionary corresponds to a collection of functions or `atoms' describing the slowness in each patch. These functions could for example be wavelets. The patch regularization is incorporated into the global slowness image. The global image models the broad features, while the local patch images incorporate prior information from the dictionary. Further, high resolution slowness within patches is permitted if the travel times from the global estimates support it. The proposed approach is formulated as an algorithm, which is repeated until convergence is achieved: 1) From travel times, find the global slowness image with a minimum energy constraint on the pixel variance relative to a reference. 2) Find the patch level solutions to fit the global estimate as a sparse linear combination of dictionary atoms. 3) Update the reference as the weighted average of the patch level solutions. This approach relies on the redundancy of the patches in the seismic image. Redundancy means that the patches are repetitions of a finite number of patterns, which are described by the dictionary atoms. Redundancy in the earth's structure was demonstrated in previous works in seismics where dictionaries of wavelet functions regularized inversion. We further exploit redundancy of the patches by using dictionary learning algorithms, a form of unsupervised machine learning, to estimate optimal dictionaries from the data in parallel with the inversion. We demonstrate our approach on densely but irregularly sampled synthetic seismic images.
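    The patch-level step, representing each small patch as a sparse combination of learned dictionary atoms, can be sketched with scikit-learn's dictionary-learning tools; the random 'slowness image', patch size and sparsity level below are placeholders rather than the authors' data or solver.

```python
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

rng = np.random.default_rng(0)
slowness = rng.normal(size=(64, 64))            # placeholder global slowness image

# Extract small overlapping patches and remove their means
patches = extract_patches_2d(slowness, (8, 8))
X = patches.reshape(len(patches), -1)
X -= X.mean(axis=1, keepdims=True)

# Learn a dictionary of atoms from the patches (unsupervised)
dico = MiniBatchDictionaryLearning(n_components=32, batch_size=64, random_state=0)
D = dico.fit(X).components_

# Sparse-code each patch with orthogonal matching pursuit (few nonzero atoms)
codes = sparse_encode(X, D, algorithm="omp", n_nonzero_coefs=3)
reconstruction = codes @ D
print("mean patch reconstruction error:", np.mean(np.abs(reconstruction - X)))
```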

  4. Aerocapture Performance Analysis for a Neptune-Triton Exploration Mission

    NASA Technical Reports Server (NTRS)

    Starr, Brett R.; Westhelle, Carlos H.; Masciarelli, James P.

    2004-01-01

    A systems analysis has been conducted for a Neptune-Triton Exploration Mission in which aerocapture is used to capture a spacecraft at Neptune. Aerocapture uses aerodynamic drag instead of propulsion to decelerate from the interplanetary approach trajectory to a captured orbit during a single pass through the atmosphere. After capture, propulsion is used to move the spacecraft from the initial captured orbit to the desired science orbit. A preliminary assessment identified that a spacecraft with a lift to drag ratio of 0.8 was required for aerocapture. Performance analyses of the 0.8 L/D vehicle were performed using a high fidelity flight simulation within a Monte Carlo executive to determine mission success statistics. The simulation was the Program to Optimize Simulated Trajectories (POST) modified to include Neptune specific atmospheric and planet models, spacecraft aerodynamic characteristics, and interplanetary trajectory models. To these were added autonomous guidance and pseudo flight controller models. The Monte Carlo analyses incorporated approach trajectory delivery errors, aerodynamic characteristics uncertainties, and atmospheric density variations. Monte Carlo analyses were performed for a reference set of uncertainties and sets of uncertainties modified to produce increased and reduced atmospheric variability. For the reference uncertainties, the 0.8 L/D flatbottom ellipsled vehicle achieves 100% successful capture and has a 99.87% probability of attaining the science orbit with a 360 m/s delta-V budget for apoapsis and periapsis adjustment. Monte Carlo analyses were also performed for a guidance system that modulates both bank angle and angle of attack with the reference set of uncertainties. An alpha and bank modulation guidance system reduces the 99.87 percentile delta-V by 173 m/s (48%), to 187 m/s, for the reference set of uncertainties.

  5. Reference tissue modeling with parameter coupling: application to a study of SERT binding in HIV

    NASA Astrophysics Data System (ADS)

    Endres, Christopher J.; Hammoud, Dima A.; Pomper, Martin G.

    2011-04-01

    When applicable, it is generally preferred to evaluate positron emission tomography (PET) studies using a reference tissue-based approach as that avoids the need for invasive arterial blood sampling. However, most reference tissue methods have been shown to have a bias that is dependent on the level of tracer binding, and the variability of parameter estimates may be substantially affected by noise level. In a study of serotonin transporter (SERT) binding in HIV dementia, it was determined that applying parameter coupling to the simplified reference tissue model (SRTM) reduced the variability of parameter estimates and yielded the strongest between-group significant differences in SERT binding. The use of parameter coupling makes the application of SRTM more consistent with conventional blood input models and reduces the total number of fitted parameters, thus should yield more robust parameter estimates. Here, we provide a detailed evaluation of the application of parameter constraint and parameter coupling to [11C]DASB PET studies. Five quantitative methods, including three methods that constrain the reference tissue clearance (kr2) to a common value across regions were applied to the clinical and simulated data to compare measurement of the tracer binding potential (BPND). Compared with standard SRTM, either coupling of kr2 across regions or constraining kr2 to a first-pass estimate improved the sensitivity of SRTM to measuring a significant difference in BPND between patients and controls. Parameter coupling was particularly effective in reducing the variance of parameter estimates, which was less than 50% of the variance obtained with standard SRTM. A linear approach was also improved when constraining kr2 to a first-pass estimate, although the SRTM-based methods yielded stronger significant differences when applied to the clinical study. This work shows that parameter coupling reduces the variance of parameter estimates and may better discriminate between-group differences in specific binding.

  6. RANS Simulation (Rotating Reference Frame Model [RRF]) of Single Lab-Scaled DOE RM1 MHK Turbine

    DOE Data Explorer

    Javaherchi, Teymour; Stelzenmuller, Nick; Aliseda, Alberto; Seydel, Joseph

    2014-04-15

    Attached are the .cas and .dat files for the Reynolds Averaged Navier-Stokes (RANS) simulation of a single lab-scaled DOE RM1 turbine implemented in the ANSYS FLUENT CFD package. The lab-scaled DOE RM1 is a re-designed geometry, based on the full scale DOE RM1 design, producing the same power output as the full scale model while operating at matched Tip Speed Ratio values at reachable laboratory Reynolds numbers (see attached paper). In this case study, taking advantage of the symmetry of the lab-scaled DOE RM1 geometry, only half of the geometry is modeled using the (Single) Rotating Reference Frame model [RRF]. In this model the RANS equations, coupled with the k-ω turbulence closure model, are solved in the rotating reference frame. The actual geometry of the turbine blade is included and the turbulent boundary layer along the blade span is simulated using a wall-function approach. The rotation of the blade is modeled by applying periodic boundary conditions to the planes of symmetry. This case study simulates the performance and flow field in the near and far wake of the device at the desired operating conditions. The results of these simulations were validated against in-house experimental data. Please see the attached paper.

  7. An Environmental Management Maturity Model of Construction Programs Using the AHP-Entropy Approach.

    PubMed

    Bai, Libiao; Wang, Hailing; Huang, Ning; Du, Qiang; Huang, Youdan

    2018-06-23

    The accelerating process of urbanization in China has led to considerable opportunities for the development of construction projects; however, environmental issues have become an important constraint on the implementation of these projects. To quantitatively describe the environmental management capabilities of such projects, this paper proposes a 2-dimensional Environmental Management Maturity Model of Construction Program (EMMMCP) based on an analysis of existing projects, group management theory and a management maturity model. In this model, a synergetic process was included to compensate for the lack of consideration of synergies in previous studies, and it was involved in the construction of the first dimension, i.e., the environmental management index system. The second dimension, i.e., the maturity level of environmental management, was then constructed by redefining the hierarchical characteristics of construction program (CP) environmental management maturity. Additionally, a mathematical solution to this proposed model was derived via the Analytic Hierarchy Process (AHP)-entropy approach. To verify the effectiveness and feasibility of this proposed model, a computational experiment was conducted, and the results show that this approach could not only measure the individual levels of different processes, but also achieve the most important objective of providing a reference for stakeholders when making decisions on the environmental management of construction programs, which indicates that this model is reasonable for evaluating the level of environmental management maturity in CP. To our knowledge, this paper is the first study to evaluate the environmental management maturity levels of CP, which would fill the gap between project program management and environmental management and provide a reference for relevant management personnel to enhance their environmental management capabilities.
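    The entropy part of the AHP-entropy weighting can be illustrated with the standard entropy-weight formulas applied to an invented decision matrix (rows are construction programs, columns are environmental-management indicators); the scores below are not from the paper.

```python
import numpy as np

def entropy_weights(decision_matrix):
    """Objective indicator weights from Shannon entropy (more dispersion -> more weight)."""
    X = np.asarray(decision_matrix, dtype=float)
    m, n = X.shape
    P = X / X.sum(axis=0)                       # column-normalised proportions
    k = 1.0 / np.log(m)
    # treat 0 * log(0) as 0
    ent = -k * np.sum(np.where(P > 0, P * np.log(P), 0.0), axis=0)
    d = 1.0 - ent                               # degree of diversification
    return d / d.sum()

# Invented scores of 4 construction programs on 3 indicators
scores = [[0.7, 0.55, 0.9],
          [0.6, 0.80, 0.4],
          [0.9, 0.60, 0.7],
          [0.5, 0.75, 0.6]]
print(entropy_weights(scores))                  # weights sum to 1
```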

  8. Reweighting anthropometric data using a nearest neighbour approach.

    PubMed

    Kumar, Kannan Anil; Parkinson, Matthew B

    2018-07-01

    When designing products and environments, detailed data on body size and shape are seldom available for the specific user population. One way to mitigate this issue is to reweight available data such that they provide an accurate estimate of the target population of interest. This is done by assigning a statistical weight to each individual in the reference data, increasing or decreasing their influence on statistical models of the whole. This paper presents a new approach to reweighting these data. Instead of stratified sampling, the proposed method uses a clustering algorithm to identify relationships between the detailed and reference populations using their height, mass, and body mass index (BMI). The newly weighted data are shown to provide more accurate estimates than traditional approaches. The improved accuracy that accompanies this method provides designers with an alternative to data synthesis techniques as they seek appropriate data to guide their design practice. Practitioner Summary: Design practice is best guided by data on body size and shape that accurately represents the target user population. This research presents an alternative to data synthesis (e.g. regression or proportionality constants) for adapting data from one population for use in modelling another.
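    The reweighting idea can be sketched as follows: each member of the target population is matched to its nearest reference individual in standardized (stature, mass, BMI) space, and the match counts become the statistical weights. The synthetic populations and the KD-tree matching below are illustrative assumptions, not the authors' clustering algorithm.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)

def make_population(n, mean_stature, mean_mass):
    stature = rng.normal(mean_stature, 70, n)        # mm
    mass = rng.normal(mean_mass, 12, n)              # kg
    bmi = mass / (stature / 1000.0) ** 2
    return np.column_stack([stature, mass, bmi])

reference = make_population(5000, 1755, 78)          # detailed reference survey
target = make_population(800, 1700, 85)              # target user population

# Standardize on the reference so all three measures contribute comparably
mu, sd = reference.mean(axis=0), reference.std(axis=0)
tree = cKDTree((reference - mu) / sd)
_, nearest = tree.query((target - mu) / sd, k=1)

# Weight of each reference individual = how often it is the nearest match
weights = np.bincount(nearest, minlength=len(reference)).astype(float)
weights /= weights.sum()

print("weighted mean mass (kg):", np.average(reference[:, 1], weights=weights))
print("target mean mass   (kg):", target[:, 1].mean())
```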

  9. Usage analysis of user files in UNIX

    NASA Technical Reports Server (NTRS)

    Devarakonda, Murthy V.; Iyer, Ravishankar K.

    1987-01-01

    Presented is a user-oriented analysis of short term file usage in a 4.2 BSD UNIX environment. The key aspect of this analysis is a characterization of users and files, which is a departure from the traditional approach of analyzing file references. Two characterization measures are employed: accesses-per-byte (combining fraction of a file referenced and number of references) and file size. This new approach is shown to distinguish differences in files as well as users, which can be used in efficient file system design, and in creating realistic test workloads for simulations. A multi-stage gamma distribution is shown to closely model the file usage measures. Even though overall file sharing is small, some files belonging to a bulletin board system are accessed by many users, simultaneously and otherwise. Over 50% of users referenced files owned by other users, and over 80% of all files were involved in such references. Based on the differences in files and users, suggestions to improve the system performance were also made.
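    As an illustration of the distributional claim, a gamma distribution can be fitted to a file-usage measure with scipy; the synthetic accesses-per-byte values and the single-stage fit are simplifications of the paper's multi-stage gamma model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic accesses-per-byte values standing in for measured file usage
accesses_per_byte = rng.gamma(shape=0.8, scale=2.5, size=2000)

shape, loc, scale = stats.gamma.fit(accesses_per_byte, floc=0)
print(f"fitted gamma: shape={shape:.2f}, scale={scale:.2f}")

# Goodness of fit via Kolmogorov-Smirnov against the fitted distribution
ks = stats.kstest(accesses_per_byte, "gamma", args=(shape, loc, scale))
print(f"KS statistic={ks.statistic:.3f}, p-value={ks.pvalue:.3f}")
```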

  10. Foundations of modelling of nonequilibrium low-temperature plasmas

    NASA Astrophysics Data System (ADS)

    Alves, L. L.; Bogaerts, A.; Guerra, V.; Turner, M. M.

    2018-02-01

    This work explains the need for plasma models, introduces arguments for choosing the type of model that better fits the purpose of each study, and presents the basics of the most common nonequilibrium low-temperature plasma models and the information available from each one, along with an extensive list of references for complementary in-depth reading. The paper presents the following models, organised according to the level of multi-dimensional description of the plasma: kinetic models, based on either a statistical particle-in-cell/Monte-Carlo approach or the solution to the Boltzmann equation (in the latter case, special focus is given to the description of the electron kinetics); multi-fluid models, based on the solution to the hydrodynamic equations; global (spatially-average) models, based on the solution to the particle and energy rate-balance equations for the main plasma species, usually including a very complete reaction chemistry; mesoscopic models for plasma-surface interaction, adopting either a deterministic approach or a stochastic dynamical Monte-Carlo approach. For each plasma model, the paper puts forward the physics context, introduces the fundamental equations, presents advantages and limitations, also from a numerical perspective, and illustrates its application with some examples. Whenever pertinent, the interconnection between models is also discussed, in view of multi-scale hybrid approaches.

  11. An evolutionary morphological approach for software development cost estimation.

    PubMed

    Araújo, Ricardo de A; Oliveira, Adriano L I; Soares, Sergio; Meira, Silvio

    2012-08-01

    In this work we present an evolutionary morphological approach to solve the software development cost estimation (SDCE) problem. The proposed approach consists of a hybrid artificial neuron based on the framework of mathematical morphology (MM) with algebraic foundations in complete lattice theory (CLT), referred to as the dilation-erosion perceptron (DEP). Also, we present an evolutionary learning process, called DEP(MGA), using a modified genetic algorithm (MGA) to design the DEP model, because a drawback arises in the classical learning process of the DEP from the gradient estimation of morphological operators, which are not differentiable in the usual way. Furthermore, an experimental analysis is conducted with the proposed model using five complex SDCE problems and three well-known performance metrics, demonstrating good performance of the DEP model to solve SDCE problems. Copyright © 2012 Elsevier Ltd. All rights reserved.
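    The forward pass of a dilation-erosion perceptron can be sketched as a convex combination of a morphological dilation and erosion of the input, following the usual lattice-based definition; the weights and mixing factor below are arbitrary illustrations, and the MGA training loop is omitted.

```python
import numpy as np

def dep_forward(x, a, b, lam):
    """Dilation-erosion perceptron: convex mix of a dilation and an erosion.

    dilation: max_j (x_j + a_j)   erosion: min_j (x_j + b_j)
    lam in [0, 1] balances the two morphological operators.
    """
    x = np.asarray(x, dtype=float)
    dilation = np.max(x + a)
    erosion = np.min(x + b)
    return lam * dilation + (1.0 - lam) * erosion

# Illustrative input (e.g., normalised project features) and parameters
x = np.array([0.2, 0.7, 0.4])
a = np.array([0.1, -0.3, 0.0])       # dilation structuring element
b = np.array([-0.2, 0.4, 0.1])       # erosion structuring element
print(dep_forward(x, a, b, lam=0.6))
```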

  12. A causal examination of the effects of confounding factors on multimetric indices

    USGS Publications Warehouse

    Schoolmaster, Donald R.; Grace, James B.; Schweiger, E. William; Mitchell, Brian R.; Guntenspergen, Glenn R.

    2013-01-01

    The development of multimetric indices (MMIs) as a means of providing integrative measures of ecosystem condition is becoming widespread. An increasingly recognized problem for the interpretability of MMIs is controlling for the potentially confounding influences of environmental covariates. Most common approaches to handling covariates are based on simple notions of statistical control, leaving the causal implications of covariates and their adjustment unstated. In this paper, we use graphical models to examine some of the potential impacts of environmental covariates on the observed signals between human disturbance and potential response metrics. Using simulations based on various causal networks, we show how environmental covariates can both obscure and exaggerate the effects of human disturbance on individual metrics. We then examine from a causal interpretation standpoint the common practice of adjusting ecological metrics for environmental influences using only the set of sites deemed to be in reference condition. We present and examine the performance of an alternative approach to metric adjustment that uses the whole set of sites and models both environmental and human disturbance effects simultaneously. The findings from our analyses indicate that failing to model and adjust metrics can result in a systematic bias towards those metrics in which environmental covariates function to artificially strengthen the metric–disturbance relationship, resulting in MMIs that do not accurately measure impacts of human disturbance. We also find that a “whole-set modeling approach” requires fewer assumptions and is more efficient with the given information than the more commonly applied “reference-set” approach.
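    The contrast between the two adjustment strategies can be sketched on simulated data: the reference-set approach estimates the covariate effect from reference sites only and then relates the adjusted metric to disturbance, while the whole-set approach models covariate and disturbance effects simultaneously on all sites. The data-generating values below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1000
covariate = rng.normal(size=n)                               # natural environmental gradient
# Disturbance partly tracks the covariate, which is what creates the confounding
disturbance = np.clip(0.5 + 0.3 * covariate + rng.normal(0, 0.15, n), 0, 1)
metric = 2.0 - 1.5 * disturbance + 0.8 * covariate + rng.normal(0, 0.3, n)

# Reference-set approach: covariate effect fitted on least-disturbed sites only,
# then the adjusted metric is regressed on disturbance
reference_sites = disturbance < 0.2
slope_ref = np.polyfit(covariate[reference_sites], metric[reference_sites], 1)[0]
adjusted = metric - slope_ref * covariate
effect_refset = np.polyfit(disturbance, adjusted, 1)[0]

# Whole-set approach: covariate and disturbance modelled simultaneously on all sites
X = np.column_stack([np.ones(n), covariate, disturbance])
effect_wholeset = np.linalg.lstsq(X, metric, rcond=None)[0][2]

print("true disturbance effect    : -1.50")
print(f"reference-set estimate     : {effect_refset:.2f}")
print(f"whole-set (joint) estimate : {effect_wholeset:.2f}")
```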

  13. Economic Evaluation of Voice Recognition (VR) for the Clinician’s Desktop at the Naval Hospital Roosevelt Roads

    DTIC Science & Technology

    1997-09-01

    first PC-based, very large vocabulary dictation system with a continuous natural language free flow approach to speech recognition. (This system allows...indicating the likelihood that a particular stored HMM reference model is the best match for the input. This approach is called the Baum-Welch...InfoCentral, and Envoy 1.0; and Lotus Development Corp.’s SmartSuite 3, Approach 3.0, and Organizer. 2. IBM At a press conference in New York in June 1997, IBM

  14. Constrained Null Space Component Analysis for Semiblind Source Separation Problem.

    PubMed

    Hwang, Wen-Liang; Lu, Keng-Shih; Ho, Jinn

    2018-02-01

    The blind source separation (BSS) problem extracts unknown sources from observations of their unknown mixtures. A current trend in BSS is the semiblind approach, which incorporates prior information on sources or how the sources are mixed. The constrained independent component analysis (ICA) approach has been studied to impose constraints on the famous ICA framework. We introduced an alternative approach based on the null space component analysis (NCA) framework and referred to the approach as the c-NCA approach. We also presented the c-NCA algorithm that uses signal-dependent semidefinite operators, which is a bilinear mapping, as signatures for operator design in the c-NCA approach. Theoretically, we showed that the source estimation of the c-NCA algorithm converges with a convergence rate dependent on the decay of the sequence, obtained by applying the estimated operators on corresponding sources. The c-NCA can be formulated as a deterministic constrained optimization method, and thus, it can take advantage of solvers developed in the optimization community for solving the BSS problem. As examples, we demonstrated that electroencephalogram interference rejection problems can be solved by the c-NCA with proximal splitting algorithms by incorporating a sparsity-enforcing separation model and considering the case when reference signals are available.

  15. Enhancing Teaching through Constructive Alignment.

    ERIC Educational Resources Information Center

    Biggs, John

    1996-01-01

    An approach to college-level instructional design that incorporates the principles of constructivism, termed "constructive alignment," is described. The process is then illustrated with reference to a professional development unit in educational psychology for teachers, but the model is viewed as generalizable to most units or programs in higher…

  16. Relative Deprivation and the Gender Wage Gap.

    ERIC Educational Resources Information Center

    Jackson, Linda A.

    1989-01-01

    Discusses how gender differences in the value of pay, based on relative deprivation theory, explain women's paradoxical contentment with lower wages. Presents a model of pay satisfaction to integrate value-based and comparative-referent explanations of the relationship between gender and pay satisfaction. Discusses economic approaches to the…

  17. The Learning Cycle and College Science Teaching.

    ERIC Educational Resources Information Center

    Barman, Charles R.; Allard, David W.

    Originally developed in an elementary science program called the Science Curriculum Improvement Study, the learning cycle (LC) teaching approach involves students in an active learning process modeled on four elements of Jean Piaget's theory of cognitive development: physical experience, referring to the biological growth of the central nervous…

  18. Centralization vs. Decentralization: A Location Analysis Approach for Librarians

    ERIC Educational Resources Information Center

    Raffel, Jeffrey; Shishko, Robert

    1972-01-01

    An application of location theory to the question of centralized versus decentralized library facilities for a university, with relevance for special libraries is presented. The analysis provides models for a single library, for two or more libraries, or for decentralized facilities. (6 references) (Author/NH)

  19. Modeling Learning Processes in Lexical CALL.

    ERIC Educational Resources Information Center

    Goodfellow, Robin; Laurillard, Diana

    1994-01-01

    Studies the performance of a novice Spanish student using a Computer-assisted language learning (CALL) system designed for vocabulary enlargement. Results indicate that introspective evidence may be used to validate performance data within a theoretical framework that characterizes the learning approach as "surface" or "deep." (25 references)…

  20. Prescribing an Exercise Program and Motivating Older Adults To Comply.

    ERIC Educational Resources Information Center

    Resnick, Barbara

    2001-01-01

    To help motivate older adults to initiate and adhere to an exercise program, a seven-step approach was developed: education about benefits, screening, goal setting, exposure to exercise, exposure to role models, verbal encouragement from credible sources, and reinforcement and rewards. (Contains 65 references.) (SK)

  1. Conditional parametric models for storm sewer runoff

    NASA Astrophysics Data System (ADS)

    Jonsdottir, H.; Nielsen, H. Aa; Madsen, H.; Eliasson, J.; Palsson, O. P.; Nielsen, M. K.

    2007-05-01

    The method of conditional parametric modeling is introduced for flow prediction in a sewage system. It is a well-known fact that in hydrological modeling the response (runoff) to input (precipitation) varies depending on soil moisture and several other factors. Consequently, nonlinear input-output models are needed. The model formulation described in this paper is similar to traditional linear models such as finite impulse response (FIR) and autoregressive exogenous (ARX) models, except that the parameters vary as a function of some external variables. The parameter variation is modeled by local lines, using kernels for local linear regression. As such, the method might be referred to as a nearest neighbor method. The results achieved in this study were compared to results from the conventional linear methods, FIR and ARX. The increase in the coefficient of determination is substantial. Furthermore, the new approach conserves the mass balance better. Hence this new approach looks promising for various hydrological models and analysis.
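    The core idea, regression coefficients that vary smoothly with an external variable, can be sketched with a kernel-weighted least-squares fit at each conditioning value; the synthetic rainfall-runoff relationship and the Gaussian kernel bandwidth are illustrative assumptions rather than the paper's model.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
precip = rng.gamma(2.0, 1.0, n)                 # input (precipitation)
wetness = rng.uniform(0, 1, n)                  # external variable (soil moisture proxy)
true_gain = 0.2 + 0.8 * wetness                 # runoff coefficient grows with wetness
runoff = true_gain * precip + rng.normal(0, 0.1, n)

def local_coefficients(z0, bandwidth=0.1):
    """Weighted least squares of runoff on precipitation, localised at wetness = z0."""
    w = np.exp(-0.5 * ((wetness - z0) / bandwidth) ** 2)
    X = np.column_stack([np.ones(n), precip])
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * runoff, rcond=None)
    return beta                                  # [intercept(z0), gain(z0)]

for z0 in (0.2, 0.5, 0.8):
    b0, b1 = local_coefficients(z0)
    print(f"wetness={z0:.1f}: estimated gain={b1:.2f} (true {0.2 + 0.8 * z0:.2f})")
```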

  2. Conducting requirements analyses for research using routinely collected health data: a model driven approach.

    PubMed

    de Lusignan, Simon; Cashman, Josephine; Poh, Norman; Michalakidis, Georgios; Mason, Aaron; Desombre, Terry; Krause, Paul

    2012-01-01

    Medical research increasingly requires the linkage of data from different sources. Conducting a requirements analysis for a new application is an established part of software engineering, but rarely reported in the biomedical literature; and no generic approaches have been published as to how to link heterogeneous health data. Literature review, followed by a consensus process to define how requirements for research using multiple data sources might be modeled. We have developed a requirements analysis: i-ScheDULEs. The first components of the modeling process are indexing and creating a rich picture of the research study. Secondly, we developed a series of reference models of progressive complexity: data flow diagrams (DFD) to define data requirements; unified modeling language (UML) use case diagrams to capture study-specific and governance requirements; and finally, business process models, using business process modeling notation (BPMN). These requirements and their associated models should become part of research study protocols.

  3. REDD+ emissions estimation and reporting: dealing with uncertainty

    NASA Astrophysics Data System (ADS)

    Pelletier, Johanne; Martin, Davy; Potvin, Catherine

    2013-09-01

    The United Nations Framework Convention on Climate Change (UNFCCC) defined the technical and financial modalities of policy approaches and incentives to reduce emissions from deforestation and forest degradation in developing countries (REDD+). Substantial technical challenges hinder precise and accurate estimation of forest-related emissions and removals, as well as the setting and assessment of reference levels. These challenges could limit country participation in REDD+, especially if REDD+ emission reductions were to meet quality standards required to serve as compliance grade offsets for developed countries’ emissions. Using Panama as a case study, we tested the matrix approach proposed by Bucki et al (2012 Environ. Res. Lett. 7 024005) to perform sensitivity and uncertainty analysis distinguishing between ‘modelling sources’ of uncertainty, which refers to model-specific parameters and assumptions, and ‘recurring sources’ of uncertainty, which refers to random and systematic errors in emission factors and activity data. The sensitivity analysis estimated differences in the resulting fluxes ranging from 4.2% to 262.2% of the reference emission level. The classification of fallows and the carbon stock increment or carbon accumulation of intact forest lands were the two key parameters showing the largest sensitivity. The highest error propagated using Monte Carlo simulations was caused by modelling sources of uncertainty, which calls for special attention to ensure consistency in REDD+ reporting which is essential for securing environmental integrity. Due to the role of these modelling sources of uncertainty, the adoption of strict rules for estimation and reporting would favour comparability of emission reductions between countries. We believe that a reduction of the bias in emission factors will arise, among other things, from a globally concerted effort to improve allometric equations for tropical forests. Public access to datasets and methodology used to evaluate reference level and emission reductions would strengthen the credibility of the system by promoting accountability and transparency. To secure conservativeness and deal with uncertainty, we consider the need for further research using real data available to developing countries to test the applicability of conservative discounts including the trend uncertainty and other possible options that would allow real incentives and stimulate improvements over time. Finally, we argue that REDD+ result-based actions assessed on the basis of a dashboard of performance indicators, not only in ‘tonnes CO2 equ. per year’ might provide a more holistic approach, at least until better accuracy and certainty of forest carbon stocks emission and removal estimates to support a REDD+ policy can be reached.
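    The recurring (random) error sources can be made concrete with a small Monte Carlo sketch that propagates uncertainty in activity data and an emission factor into a deforestation emission estimate; the magnitudes below are invented for illustration and are not the Panama values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Invented example: deforested area (activity data) and carbon emission factor
area_ha = rng.normal(10_000, 1_500, n)            # ha/yr, roughly 15% relative uncertainty
ef_tco2_per_ha = rng.normal(450, 90, n)           # tCO2/ha, roughly 20% relative uncertainty

emissions = area_ha * ef_tco2_per_ha / 1e6        # Mt CO2/yr
low, med, high = np.percentile(emissions, [2.5, 50, 97.5])
print(f"median {med:.2f} Mt CO2/yr, 95% interval [{low:.2f}, {high:.2f}]")
```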

  4. Extended evaluation on the ES-D3 cell differentiation assay combined with the BeWo transport model, to predict relative developmental toxicity of triazole compounds.

    PubMed

    Li, Hequn; Flick, Burkhard; Rietjens, Ivonne M C M; Louisse, Jochem; Schneider, Steffen; van Ravenzwaay, Bennard

    2016-05-01

    The mouse embryonic stem D3 (ES-D3) cell differentiation assay is based on the morphometric measurement of cardiomyocyte differentiation and is a promising tool to detect developmental toxicity of compounds. The BeWo transport model, consisting of BeWo b30 cells grown on transwell inserts and mimicking the placental barrier, is useful to determine relative placental transport velocities of compounds. We have previously demonstrated the usefulness of the ES-D3 cell differentiation assay in combination with the in vitro BeWo transport model to predict the relative in vivo developmental toxicity potencies of a set of reference azole compounds. To further evaluate this combined in vitro toxicokinetic and toxicodynamic approach, we combined ES-D3 cell differentiation data of six novel triazoles with relative transport rates obtained from the BeWo model and compared the obtained ranking to the developmental toxicity ranking as derived from in vivo data. The data show that the combined in vitro approach provided a correct prediction for in vivo developmental toxicity, whereas the ES-D3 cell differentiation assay as stand-alone did not. In conclusion, we have validated the combined in vitro approach for developmental toxicity, which we have previously developed with a set of reference azoles, for a set of six novel triazoles. We suggest that this combined model, which takes both toxicodynamic and toxicokinetic aspects into account, should be further validated for other chemical classes of developmental toxicants.

  5. The effect of meteorological data on atmospheric pressure loading corrections in VLBI data analysis

    NASA Astrophysics Data System (ADS)

    Balidakis, Kyriakos; Glaser, Susanne; Karbon, Maria; Soja, Benedikt; Nilsson, Tobias; Lu, Cuixian; Anderson, James; Liu, Li; Andres Mora-Diaz, Julian; Raposo-Pulido, Virginia; Xu, Minghui; Heinkelmann, Robert; Schuh, Harald

    2015-04-01

    Earth's crustal deformation is a manifestation of numerous geophysical processes, which entail the atmosphere and ocean general circulation and tidal attraction, climate change, and the hydrological cycle. The present study deals with the elastic deformations induced by atmospheric pressure variations. At geodetic sites, APL (Atmospheric Pressure Loading) results in displacements covering a wide range of temporal scales, which is undesirable when rigorous geodetic/geophysical analysis is intended. Hence, it is of paramount importance that the APL signal is removed at the observation level in the space geodetic data analysis. In this study, elastic non-tidal components of loading displacements were calculated in the local topocentric frame for all VLBI (Very Long Baseline Interferometry) stations with respect to the center-of-figure of the solid Earth surface and the center-of-mass of the total Earth system. The response of the Earth to the load variation at the surface was computed by convolving Farrell's Green's function with the homogenized in situ surface pressure observations (in the time span 1979-2014) after the subtraction of the reference pressure and the S1, S2 and S3 thermal tidal signals. The reference pressure was calculated through a hypsometric adjustment of the absolute pressure level determined from World Meteorological Organization stations in the vicinity of each VLBI observatory. The tidal contribution was calculated following the 2010 International Earth Rotation and Reference Systems Service conventions. Afterwards, this approach was implemented into the VLBI software VieVS@GFZ and the entirety of available VLBI sessions was analyzed. We rationalize our new approach on the basis that the potential error budget is substantially reduced, since several common errors are not applicable in our approach, e.g. those due to the finite resolution of NWM (Numerical Weather Models), the accuracy of the orography model necessary for adjusting the former as well as the inconsistencies between them, and the interpolation scheme which yields the elastic deformations. Differences of the resulting TRF (Terrestrial Reference Frame) determinations and other products derived from VLBI analysis between the approach followed here and the one employing NWM data for obtaining the input pressure fields are illustrated. The providers of the atmospheric pressure loading models employed for our comparisons are GSFC/NASA, the University of Luxembourg, the University of Strasbourg, the Technical University of Vienna and GeoForschungsZentrum of Potsdam.

  6. Identification of candidate reference chemicals for in vitro steroidogenesis assays.

    PubMed

    Pinto, Caroline Lucia; Markey, Kristan; Dix, David; Browne, Patience

    2018-03-01

    The Endocrine Disruptor Screening Program (EDSP) is transitioning from traditional testing methods to integrating ToxCast/Tox21 in vitro high-throughput screening assays for identifying chemicals with endocrine bioactivity. The ToxCast high-throughput H295R steroidogenesis assay may potentially replace the low-throughput assays currently used in the EDSP Tier 1 battery to detect chemicals that alter the synthesis of androgens and estrogens. Herein, we describe an approach for identifying in vitro candidate reference chemicals that affect the production of androgens and estrogens in models of steroidogenesis. Candidate reference chemicals were identified from a review of H295R and gonad-derived in vitro assays used in methods validation and published in the scientific literature. A total of 29 chemicals affecting androgen and estrogen levels satisfied all criteria for positive reference chemicals, while an additional set of 21 and 15 chemicals partially fulfilled criteria for positive reference chemicals for androgens and estrogens, respectively. The identified chemicals included pesticides, pharmaceuticals, industrial and naturally-occurring chemicals with the capability to increase or decrease the levels of the sex hormones in vitro. Additionally, 14 and 15 compounds were identified as potential negative reference chemicals for effects on androgens and estrogens, respectively. These candidate reference chemicals will be informative for performance-based validation of in vitro steroidogenesis models. Copyright © 2017. Published by Elsevier Ltd.

  7. Science-based approach for credible accounting of mitigation in managed forests.

    PubMed

    Grassi, Giacomo; Pilli, Roberto; House, Jo; Federici, Sandro; Kurz, Werner A

    2018-05-17

    The credibility and effectiveness of country climate targets under the Paris Agreement requires that, in all greenhouse gas (GHG) sectors, the accounted mitigation outcomes reflect genuine deviations from the type and magnitude of activities generating emissions in the base year or baseline. This is challenging for the forestry sector, as the future net emissions can change irrespective of actual management activities, because of age-related stand dynamics resulting from past management and natural disturbances. The solution implemented under the Kyoto Protocol (2013-2020) was accounting mitigation as deviation from a projected (forward-looking) "forest reference level", which considered the age-related dynamics but also allowed including the assumed future implementation of approved policies. This caused controversies, as unverifiable counterfactual scenarios with inflated future harvest could lead to credits where no change in management has actually occurred, or conversely, failing to reflect in the accounts a policy-driven increase in net emissions. Instead, here we describe an approach to set reference levels based on the projected continuation of documented historical forest management practice, i.e. reflecting age-related dynamics but not the future impact of policies. We illustrate a possible method to implement this approach at the level of the European Union (EU) using the Carbon Budget Model. Using EU country data, we show that forest sinks between 2013 and 2016 were greater than that assumed in the 2013-2020 EU reference level under the Kyoto Protocol, which would lead to credits of 110-120 Mt CO2/year (capped at 70-80 Mt CO2/year, equivalent to 1.3% of 1990 EU total emissions). By modelling the continuation of management practice documented historically (2000-2009), we show that these credits are mostly due to the inclusion in the reference levels of policy-assumed harvest increases that never materialized. With our proposed approach, harvest is expected to increase (12% in 2030 at EU-level, relative to 2000-2009), but more slowly than in current forest reference levels, and only because of age-related dynamics, i.e. increased growing stocks in maturing forests. Our science-based approach, compatible with the EU post-2020 climate legislation, helps to ensure that only genuine deviations from the continuation of historically documented forest management practices are accounted toward climate targets, therefore enhancing the consistency and comparability across GHG sectors. It provides flexibility for countries to increase harvest in future reference levels when justified by age-related dynamics. It offers a policy-neutral solution to the polarized debate on forest accounting (especially on bioenergy) and supports the credibility of forest sector mitigation under the Paris Agreement.

  8. A model for dynamic allocation of human attention among multiple tasks

    NASA Technical Reports Server (NTRS)

    Sheridan, T. B.; Tulga, M. K.

    1978-01-01

    The problem of multi-task attention allocation, with special reference to aircraft piloting, is discussed along with the experimental paradigm used to characterize this situation and the experimental results obtained in the first phase of the research. A qualitative description of an approach to mathematical modeling and some results obtained with it are also presented to indicate what aspects of the model are most promising. Two appendices are given which (1) discuss the model in relation to graph theory and optimization and (2) specify the optimization algorithm of the model.

  9. Fast auto-focus scheme based on optical defocus fitting model

    NASA Astrophysics Data System (ADS)

    Wang, Yeru; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting; Cen, Min

    2018-04-01

    An optical defocus fitting model-based (ODFM) auto-focus scheme is proposed. Considering the basic optical defocus principle, the optical defocus fitting model is derived to approximate the potential-focus position. With this accurate modelling, the proposed auto-focus scheme can make the stepping motor approach the focal plane more accurately and rapidly. Two fitting positions are first determined for an arbitrary initial stepping motor position. Three images (the initial image and two fitting images) at these positions are then collected to estimate the potential-focus position based on the proposed ODFM method. Around the estimated potential-focus position, two reference images are recorded. The auto-focus procedure is then completed by processing these two reference images and the potential-focus image to confirm the in-focus position using a contrast-based method. Experimental results prove that the proposed scheme can complete auto-focus within only 5 to 7 steps with good performance even under low-light conditions.

  10. An Approach To Using All Location Tagged Numerical Data Sets As Continuous Fields With User-Assigned Continuity As A Basis For User-Driven Data Assimilation

    NASA Astrophysics Data System (ADS)

    Vernon, F.; Arrott, M.; Orcutt, J. A.; Mueller, C.; Case, J.; De Wardener, G.; Kerfoot, J.; Schofield, O.

    2013-12-01

    Any approach sophisticated enough to handle a variety of data sources and scales, yet easy enough to promote wide use and mainstream adoption, must address the following mappings: - From the authored domain of observation to the requested domain of interest; - From the authored spatiotemporal resolution to the requested resolution; and - From the representation of data placed on a wide variety of discrete mesh types to the use of that data as a continuous field with a selectable continuity. The Open Geospatial Consortium's (OGC) Reference Model[1], with its direct association with the ISO 19000 series standards, provides a comprehensive foundation to represent all data on any type of mesh structure, aka "Discrete Coverages". The Reference Model also provides the specification for the core operations required to utilize any Discrete Coverage. The FEniCS Project[2] provides a comprehensive model for how to represent the Basis Functions on mesh structures as "Degrees of Freedom" to present discrete data as continuous fields with variable continuity. In this talk, we will present the research and development that the OOI Cyberinfrastructure Project is pursuing to integrate these approaches into a comprehensive Application Programming Interface (API) to author, acquire and operate on the broad range of data formulations from time series, trajectories and tables through to time-variant finite difference grids and finite element meshes.

  11. A method for using real world data in breast cancer modeling.

    PubMed

    Pobiruchin, Monika; Bochum, Sylvia; Martens, Uwe M; Kieser, Meinhard; Schramm, Wendelin

    2016-04-01

    Today, hospitals and other health care-related institutions are accumulating a growing volume of real-world clinical data. Such data offer new possibilities for the generation of disease models for health economic evaluation. In this article, we propose a new approach to leverage cancer registry data for the development of Markov models. Records of breast cancer patients from a clinical cancer registry were used to construct a real-world-data-driven disease model. We describe a model generation process which maps database structures to disease state definitions based on medical expert knowledge. Software was programmed in Java to automatically derive a model structure and transition probabilities. We illustrate our method with the reconstruction of a published breast cancer reference model derived primarily from clinical study data. In doing so, we exported longitudinal patient data from a clinical cancer registry covering eight years. The patient cohort (n=892) comprised HER2-positive and HER2-negative women treated with or without Trastuzumab. The models generated with this method for the respective patient cohorts were comparable to the reference model in their structure and treatment effects. However, our computed disease models reflect a more detailed picture of the transition probabilities, especially for disease free survival and recurrence. Our work presents an approach to extract Markov models semi-automatically using real world data from a clinical cancer registry. Health care decision makers may benefit from more realistic disease models to improve health care-related planning and actions based on their own data. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
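    The semi-automatic derivation of transition probabilities can be sketched by counting state-to-state moves in longitudinal registry records and row-normalising the counts; the three-state structure and the toy patient histories are placeholders, not the published breast cancer model.

```python
import numpy as np

STATES = ["disease_free", "recurrence", "death"]
INDEX = {s: i for i, s in enumerate(STATES)}

# Toy longitudinal histories: one list of yearly states per registry patient
histories = [
    ["disease_free", "disease_free", "recurrence", "death"],
    ["disease_free", "disease_free", "disease_free", "disease_free"],
    ["disease_free", "recurrence", "recurrence", "death"],
]

counts = np.zeros((len(STATES), len(STATES)))
for h in histories:
    for current, following in zip(h, h[1:]):
        counts[INDEX[current], INDEX[following]] += 1

# Row-normalise to annual transition probabilities; the absorbing state stays put
row_sums = counts.sum(axis=1, keepdims=True)
P = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
P[INDEX["death"], INDEX["death"]] = 1.0
print(P.round(2))
```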

  12. Automated verbal credibility assessment of intentions: The model statement technique and predictive modeling

    PubMed Central

    van der Toolen, Yaloe; Vrij, Aldert; Arntz, Arnoud; Verschuere, Bruno

    2018-01-01

    Summary: Recently, verbal credibility assessment has been extended to the detection of deceptive intentions, the use of a model statement, and predictive modeling. The current investigation combines these 3 elements to detect deceptive intentions on a large scale. Participants read a model statement and wrote a truthful or deceptive statement about their planned weekend activities (Experiment 1). With the use of linguistic features for machine learning, more than 80% of the participants were classified correctly. Exploratory analyses suggested that liars included more person and location references than truth‐tellers. Experiment 2 examined whether these findings replicated on independent‐sample data. The classification accuracies remained well above chance level but dropped to 63%. Experiment 2 corroborated the finding that liars' statements are richer in location and person references than truth‐tellers' statements. Together, these findings suggest that liars may over‐prepare their statements. Predictive modeling shows promise as an automated veracity assessment approach but needs validation on independent data. PMID:29861544
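    The predictive-modeling step, classifying statements as truthful or deceptive from linguistic features, can be sketched with a bag-of-words pipeline and cross-validation; the tiny repeated statement list and the TF-IDF/logistic-regression choice are illustrative stand-ins for the authors' feature set and learner.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Toy planned-weekend statements (1 = deceptive, 0 = truthful), repeated only so
# that 5-fold cross-validation has enough samples to run
statements = [
    "I will meet my sister Anna at the central station and then visit the museum",
    "Saturday I am just staying home and catching up on sleep",
    "My friend Tom and I are driving to the lake near his parents' house at noon",
    "I plan to do some shopping and maybe watch a film",
    "We will have dinner with my aunt Maria at her flat on Elm Street at eight",
    "I might go for a run if the weather is fine",
] * 10
labels = [1, 0, 1, 0, 1, 0] * 10

pipeline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                         LogisticRegression(max_iter=1000))
scores = cross_val_score(pipeline, statements, labels, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```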

  13. Space shuttle propulsion estimation development verification

    NASA Technical Reports Server (NTRS)

    Rogers, Robert M.

    1989-01-01

    The application of extended Kalman filtering to estimating the Space Shuttle propulsion performance, i.e., specific impulse, from flight data in a post-flight processing computer program is detailed. The flight data used include inertial platform acceleration, SRB head pressure, SSME chamber pressure and flow rates, and ground based radar tracking data. The key feature in this application is the model used for the SRBs, which is a nominal or reference quasi-static internal ballistics model normalized to the propellant burn depth. Dynamic states of mass overboard and propellant burn depth are included in the filter model to account for real-time deviations from the reference model used. Aerodynamic, plume, wind and main engine uncertainties are also included for an integrated system model. Assuming uncertainty within the propulsion system model and attempting to estimate its deviations represents a new application of parameter estimation for rocket-powered vehicles. Illustrations from the results of applying this estimation approach to several missions show good quality propulsion estimates.
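    The post-flight estimation idea, a filter tracking the deviation of a performance parameter from a reference model, can be sketched on a toy one-dimensional problem with a linear Kalman filter (the linear special case of the extended filter used here); the dynamics, noise levels and the 'performance scale' state are illustrative assumptions, not the flight software.

```python
import numpy as np

rng = np.random.default_rng(5)
dt, n = 0.1, 300
a_ref = 20.0 + 0.05 * np.arange(n)          # reference-model acceleration profile (m/s^2)
true_scale = 0.97                           # true performance deviation from reference

# Simulate "flight data": velocity built from the true scale, measured with radar noise
true_v = np.cumsum(true_scale * a_ref * dt)
meas_v = true_v + rng.normal(0, 2.0, n)

# State x = [velocity, scale]; the scale is a slow random-walk deviation from 1.0
x = np.array([0.0, 1.0])
P = np.diag([10.0, 0.05])
Q = np.diag([0.1, 1e-6])
R = 4.0
H = np.array([[1.0, 0.0]])

for k in range(n):
    # Predict: v += scale * a_ref * dt, scale unchanged
    F = np.array([[1.0, a_ref[k] * dt],
                  [0.0, 1.0]])
    x = np.array([x[0] + x[1] * a_ref[k] * dt, x[1]])
    P = F @ P @ F.T + Q
    # Update with the velocity measurement
    y = meas_v[k] - x[0]
    S = H @ P @ H.T + R
    K = (P @ H.T) / S
    x = x + (K * y).ravel()
    P = (np.eye(2) - K @ H) @ P

print(f"estimated performance scale: {x[1]:.3f} (true {true_scale})")
```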

  14. Contribution of the International Reference Ionosphere to the progress of the ionospheric representation

    NASA Astrophysics Data System (ADS)

    Bilitza, Dieter

    2017-04-01

    The International Reference Ionosphere (IRI), a joint project of the Committee on Space Research (COSPAR) and the International Union of Radio Science (URSI), is a data-based reference model for the ionosphere and since 2014 it is also recognized as the ISO (International Standardization Organization) standard for the ionosphere. The model is a synthesis of most of the available and reliable observations of ionospheric parameters combining ground and space measurements. This presentation reviews the steady progress in achieving a more and more accurate representation of the ionospheric plasma parameters accomplished during the last decade of IRI model improvements. Understandably, a data-based model is only as good as the data foundation on which it is built. We will discuss areas where we are in need of more data to obtain a more solid and continuous data foundation in space and time. We will also take a look at still existing discrepancies between simultaneous measurements of the same parameter with different measurement techniques and discuss the approach taken in the IRI model to deal with these conflicts. In conclusion we will provide an outlook at development activities that may result in significant future improvements of the accurate representation of the ionosphere in the IRI model.

  15. Sensitivity tests to define the source apportionment performance criteria in the DeltaSA tool

    NASA Astrophysics Data System (ADS)

    Pernigotti, Denise; Belis, Claudio A.

    2017-04-01

    Identification and quantification of the contribution of emission sources to a given area is a key task for the design of abatement strategies. Moreover, European member states are obliged to report this kind of information for zones where the pollution levels exceed the limit values. At present, little is known about the performance and uncertainty of the variety of methodologies used for source apportionment and the comparability between the results of studies using different approaches. The source apportionment Delta (SA Delta) is a tool developed by the EC-JRC to support particulate matter source apportionment modellers in the identification of sources (for factor analysis studies) and/or in the measure of their performance. The source identification is performed by the tool measuring the proximity of any user chemical profile to preloaded repository data (SPECIATE and SPECIEUROPE). The model performance criteria are based on standard statistical indexes calculated by comparing participants' source contribution estimates and their time series with preloaded reference data. Those preloaded data refer to previous European SA intercomparison exercises: the first with real world data (22 participants), the second with synthetic data (25 participants) and the last with real world data which was also extended to Chemical Transport Models (38 receptor models and 4 CTMs). The references used for the model performances are 'true' (predefined by JRC) for the synthetic exercise, while they are calculated as the ensemble average of the participants' results in the real world intercomparisons. The candidates used for each source ensemble reference calculation were selected among participants' results based on a number of consistency checks plus the similarity of their chemical profiles to the repository measured data. The estimation of the ensemble reference uncertainty is crucial in order to evaluate the users' performances against it. For this reason a sensitivity analysis on different methods to estimate the ensemble references' uncertainties was performed by re-analyzing the synthetic intercomparison dataset, the only one where 'true' reference and ensemble reference contributions were both present. The Delta SA is now available on-line and will be presented, with a critical discussion of the sensitivity analysis on the ensemble reference uncertainty. In particular, the degree of mutual agreement among participants on the presence of a certain source should be taken into account. Moreover, the importance of the synthetic intercomparisons for catching common biases of receptor models will also be stressed.
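    A minimal sketch of the performance-testing idea, under simplifying assumptions: the ensemble reference for a source is taken as the average of accepted candidate results, its uncertainty is estimated from their spread, and a participant's source contribution estimate is scored with a z-score against it. The numbers are invented, and the actual DeltaSA criteria and uncertainty estimation are more elaborate.

```python
import numpy as np

# Invented source contribution estimates (ug/m3) from accepted candidate participants
candidates = np.array([2.1, 2.4, 1.9, 2.6, 2.2, 2.0])

ref = candidates.mean()                                       # ensemble reference value
u_ref = candidates.std(ddof=1) / np.sqrt(len(candidates))     # its standard uncertainty

def z_score(participant_value, participant_uncertainty):
    """|z| <= 2 is a typical acceptance criterion in intercomparison exercises."""
    return (participant_value - ref) / np.sqrt(participant_uncertainty**2 + u_ref**2)

print(f"ensemble reference = {ref:.2f} +/- {u_ref:.2f} ug/m3")
print("participant z-score:", round(z_score(2.9, 0.3), 2))
```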

  16. ALT-114 and ALT-118 Alternative Approaches to NIST ...

    EPA Pesticide Factsheets

    In 2016, US EPA approved two separate alternatives (ALT 114 and ALT 118) for the preparation and certification of Hydrogen Chloride (HCl) and Mercury (Hg) cylinder reference gas standards that can serve as EPA Protocol gases where EPA Protocol gases are required but unavailable. The alternatives were necessary due to the unavailability of NIST reference materials (SRM, NTRM, CRM or RGM) or VSL reference materials (VSL PRM or VSL CRM), the reference materials identified in EPA’s Green Book as necessary to establish the traceability of EPA Protocol gases. ALT 114 and ALT 118 provide a pathway for gas vendors to prepare and certify traceable gas cylinder standards for use in certifying Hg and HCl CEMS. In this presentation, EPA will describe the mechanics and requirements of the performance-based approach, provide an update on the availability of these gas standards and also discuss the potential for producing and certifying gas standards for other compounds using this approach. This presentation discusses the importance of NIST-traceable reference gases relative to regulatory source compliance emissions monitoring. Specifically, this presentation discusses two new approaches for making necessary reference gases available in the absence of NIST reference materials. Moreover, these approaches provide a way to rapidly make available new reference gases for additional HAPS regulatory compliance emissions measurement and monitoring.

  17. Enterprise Reference Library

    NASA Technical Reports Server (NTRS)

    Bickham, Grandin; Saile, Lynn; Havelka, Jacque; Fitts, Mary

    2011-01-01

    Introduction: Johnson Space Center (JSC) offers two extensive libraries that contain journals, research literature and electronic resources. Searching capabilities are available to individuals residing onsite or through a librarian's search. Many individuals have rich collections of references, but no mechanism exists to share reference libraries across researchers, projects, or directorates. Likewise, information regarding which references are provided to which individuals is not available, resulting in duplicate requests, redundant labor costs and associated copying fees. In addition, this tends to limit collaboration between colleagues and promotes the establishment of individual, unshared silos of information. The Integrated Medical Model (IMM) team has utilized a centralized reference management tool during the development, test, and operational phases of this project. The Enterprise Reference Library project expands the capabilities developed for IMM to address the above issues and enhance collaboration across JSC. Method: After significant market analysis for a multi-user reference management tool, no available commercial tool was found to meet this need, so a software program was built around a commercial tool, Reference Manager 12 by The Thomson Corporation. A use case approach guided the requirements development phase. The premise of the design is that individuals use their own reference management software and export to SharePoint when their library is incorporated into the Enterprise Reference Library. This results in a searchable user-specific library application. An accompanying share folder will warehouse the electronic full-text articles, which allows the global user community to access full-text articles. Discussion: An enterprise reference library solution can provide a multidisciplinary collection of full-text articles. This approach improves efficiency in obtaining and storing reference material while greatly reducing labor, purchasing and duplication costs. Most importantly, increasing collaboration across research groups provides unprecedented access to information relevant to NASA's mission. Conclusion: This project is an expansion and cost-effective leveraging of the existing JSC centralized library. Adding keyword and author search capabilities and an alert function for notifications about new articles, based on users' profiles, represent examples of future enhancements.

  18. Implementing parallel spreadsheet models for health policy decisions: The impact of unintentional errors on model projections

    PubMed Central

    Bailey, Stephanie L.; Bono, Rose S.; Nash, Denis; Kimmel, April D.

    2018-01-01

    Background Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. Methods We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. Results We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Conclusions Standard error-checking techniques may not identify all errors in spreadsheet-based models. Comparing parallel model versions can aid in identifying unintentional errors and promoting reliable model projections, particularly when resources are limited. PMID:29570737
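    The core comparison step described in this abstract can be sketched in a few lines, assuming the projections have already been exported from each spreadsheet version into arrays (the version names and the 5% materiality threshold follow the abstract; the numbers and consensus rule are illustrative assumptions):

        import numpy as np

        # Hypothetical projections from three parallel model versions
        # (named single cells, column/row references, named matrices), e.g.
        # people retained at successive stages of the HIV care continuum.
        versions = {
            "named_cells":  np.array([1000.0, 820.0, 640.0, 410.0]),
            "col_row_refs": np.array([1000.0, 860.0, 700.0, 520.0]),
            "named_matrix": np.array([1000.0, 820.0, 640.0, 410.0]),
        }

        # Treat output as error-free only where all versions agree; otherwise report
        # each version's percentage difference from the consensus (median) value.
        consensus = np.median(np.vstack(list(versions.values())), axis=0)
        for name, output in versions.items():
            pct_diff = 100.0 * (output - consensus) / consensus
            material = np.abs(pct_diff) > 5.0   # +/-5% defines a material error
            print(f"{name:12s} diff: {np.round(pct_diff, 1)}  material error: {material.any()}")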

  19. Implementing parallel spreadsheet models for health policy decisions: The impact of unintentional errors on model projections.

    PubMed

    Bailey, Stephanie L; Bono, Rose S; Nash, Denis; Kimmel, April D

    2018-01-01

    Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Standard error-checking techniques may not identify all errors in spreadsheet-based models. Comparing parallel model versions can aid in identifying unintentional errors and promoting reliable model projections, particularly when resources are limited.

  20. Application of a modeling approach to designate soil and soil organic carbon loss to wind erosion on long-term monitoring sites (BDF) in Northern Germany

    NASA Astrophysics Data System (ADS)

    Nerger, Rainer; Funk, Roger; Cordsen, Eckhard; Fohrer, Nicola

    2017-04-01

    Soil organic carbon (SOC) loss is a serious problem in maize monoculture areas of Northern Germany. Sites of the soil monitoring network (SMN) "Boden-Dauerbeobachtung" show long-term soil and SOC losses, which cannot be explained by conventional SOC balances nor by other non-aeolian causes. Using a process-based model, the main objective was to determine whether these losses can be explained by wind erosion. Over the long-term context of 10 years, wind erosion was not measured directly but was often observed. A suitable estimation approach linked high-quality soil/farming monitoring data with wind erosion modeling results. The model SWEEP, validated for German sandy soils, was selected and driven with 10-minute wind speed data. Two similar local SMN study sites were compared: site A was characterized by high SOC loss and was often affected by wind erosion, while the reference site B was not. At site A, soil mass and SOC stock decreased by 49.4 and 2.44 kg m-2, respectively, from 1999 to 2009. Using SWEEP, a total soil loss of 48.9 kg m-2 resulted for 16 erosion events (max. single event 12.6 kg m-2). A share of 78% was transported by suspension, with a SOC enrichment ratio (ER) of 2.96 (saltation ER 0.98), comparable to the literature. At the reference site, measured and modeled topsoil losses were minimal. The good agreement between monitoring and modeling results suggests that wind erosion caused significant long-term soil and SOC losses. The approach uses results of prior studies and is applicable to similar well-studied sites without other noteworthy SOC losses.

  1. Coping with Trial-to-Trial Variability of Event Related Signals: A Bayesian Inference Approach

    NASA Technical Reports Server (NTRS)

    Ding, Mingzhou; Chen, Youghong; Knuth, Kevin H.; Bressler, Steven L.; Schroeder, Charles E.

    2005-01-01

    In electro-neurophysiology, single-trial brain responses to a sensory stimulus or a motor act are commonly assumed to result from the linear superposition of a stereotypic event-related signal (e.g. the event-related potential or ERP) that is invariant across trials and some ongoing brain activity often referred to as noise. To extract the signal, one performs an ensemble average of the brain responses over many identical trials to attenuate the noise. To date, this simple signal-plus-noise (SPN) model has been the dominant approach in cognitive neuroscience. Mounting empirical evidence has shown that the assumptions underlying this model may be overly simplistic. More realistic models have been proposed that account for the trial-to-trial variability of the event-related signal as well as the possibility of multiple differentially varying components within a given ERP waveform. The variable-signal-plus-noise (VSPN) model, which has been demonstrated to provide the foundation for separation and characterization of multiple differentially varying components, has the potential to provide a rich source of information for questions related to neural functions that complement the SPN model. Thus, being able to estimate the amplitude and latency of each ERP component on a trial-by-trial basis provides a critical link between the perceived benefits of the VSPN model and its many concrete applications. In this paper we describe a Bayesian approach to deal with this issue and the resulting strategy is referred to as the differentially Variable Component Analysis (dVCA). We compare the performance of dVCA on simulated data with Independent Component Analysis (ICA) and analyze neurobiological recordings from monkeys performing cognitive tasks.

  2. Analysis of the sensitivity properties of a model of vector-borne bubonic plague.

    PubMed

    Buzby, Megan; Neckels, David; Antolin, Michael F; Estep, Donald

    2008-09-06

    Model sensitivity is a key to evaluation of mathematical models in ecology and evolution, especially in complex models with numerous parameters. In this paper, we use some recently developed methods for sensitivity analysis to study the parameter sensitivity of a model of vector-borne bubonic plague in a rodent population proposed by Keeling & Gilligan. The new sensitivity tools are based on a variational analysis involving the adjoint equation. The new approach provides a relatively inexpensive way to obtain derivative information about model output with respect to parameters. We use this approach to determine the sensitivity of a quantity of interest (the force of infection from rats and their fleas to humans) to various model parameters, determine a region over which linearization at a specific parameter reference point is valid, develop a global picture of the output surface, and search for maxima and minima in a given region in the parameter space.
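    As a point of comparison for the sensitivity question raised here, the sketch below computes parameter sensitivities of a toy quantity of interest by plain central finite differences; this is deliberately not the adjoint-based variational method of the paper, and the function and parameter values are made-up illustrations.

        import numpy as np

        def quantity_of_interest(params):
            """Toy stand-in for a model output (e.g. a force of infection)
            as a smooth function of three parameters."""
            beta, gamma, k = params
            return beta * k / (gamma + k)

        def central_difference_sensitivities(f, params, rel_step=1e-4):
            """dQ/dp_i at a reference point; two model evaluations per parameter."""
            params = np.asarray(params, dtype=float)
            grads = np.zeros_like(params)
            for i in range(params.size):
                h = rel_step * max(abs(params[i]), 1.0)
                plus, minus = params.copy(), params.copy()
                plus[i] += h
                minus[i] -= h
                grads[i] = (f(plus) - f(minus)) / (2.0 * h)
            return grads

        ref_point = [0.6, 0.1, 2.0]   # hypothetical reference parameter values
        grads = central_difference_sensitivities(quantity_of_interest, ref_point)
        # Relative sensitivities make parameters with different units comparable.
        rel = grads * np.asarray(ref_point) / quantity_of_interest(ref_point)
        print("dQ/dp:", np.round(grads, 4), " relative:", np.round(rel, 3))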

  3. Adopting public health approaches to communication disability: challenges for the education of speech-language pathologists.

    PubMed

    Wylie, Karen; McAllister, Lindy; Davidson, Bronwyn; Marshall, Julie; Law, James

    2014-01-01

    Public health approaches to communication disability challenge the profession of speech-language pathology (SLP) to reconsider both frames of reference for practice and models of education. This paper reviews the impetus for public health approaches to communication disability and considers how public health is, and could be, incorporated into SLP education, both now and in the future. The paper describes tensions between clinical services, which have become increasingly specialized, and public health approaches that offer a broader view of communication disability and communication disability prevention. It presents a discussion of these tensions and asserts that public health approaches to communication are themselves a specialist field, requiring specific knowledge and skills. The authors suggest the use of the term 'communication disability public health' to refer to this type of work and offer a preliminary definition in order to advance discussion. Examples from three countries are provided of how some SLP degree programmes are integrating public health into the SLP curriculum. Alternative models of training for communication disability public health that may be relevant in the future in different contexts and countries are presented, prompting the SLP profession to consider whether communication disability public health is a field of practice for speech-language pathologists or whether it has broader workforce implications. The paper concludes with some suggestions for the future which may advance thinking, research and practice in communication disability public health. © 2015 S. Karger AG, Basel.

  4. Forensic investigation of plutonium metal: a case study of CRM 126

    DOE PAGES

    Byerly, Benjamin L.; Stanley, Floyd; Spencer, Khal; ...

    2016-11-01

    In our study, a certified plutonium metal reference material (CRM 126) with a known production history is examined using analytical methods that are commonly employed in nuclear forensics for provenancing and attribution. The measured plutonium isotopic composition and actinide assay are consistent with the values reported on the reference material certificate. Model ages from U/Pu and Am/Pu chronometers agree with the documented production timeline. These results confirm the utility of these analytical methods and highlight the importance of a holistic approach for the forensic study of unknown materials.

  5. Crisis management: an extended reference framework for decision makers.

    PubMed

    Carone, Alessandro; Iorio, Luigi Di

    2013-01-01

    The paper discusses a reference framework for capabilities supporting effective crisis management. This framework has been developed by joining field experience with knowledge of organisational models for crisis management and of executives' empowerment, coaching and behavioural analysis. The paper is aimed at offering further insight to executives on critical success factors and means for managing crisis situations by extending the scope of analysis to human behaviour, to emotions and fears, and to their correlation with decision making. It is further intended to help executives become familiar with these factors and to facilitate a path towards emotional awareness.

  6. A New Method with General Diagnostic Utility for the Calculation of Immunoglobulin G Avidity

    PubMed Central

    Korhonen, Maria H.; Brunstein, John; Haario, Heikki; Katnikov, Alexei; Rescaldani, Roberto; Hedman, Klaus

    1999-01-01

    The reference method for immunoglobulin G (IgG) avidity determination includes reagent-consuming serum titration. Aiming at better IgG avidity diagnostics, we applied a logistic model for the reproduction of antibody titration curves. This method was tested with well-characterized serum panels for cytomegalovirus, Epstein-Barr virus, rubella virus, parvovirus B19, and Toxoplasma gondii. This approach for IgG avidity calculation is generally applicable and attains the diagnostic performance of the reference method while being less laborious and twice as cost-effective. PMID:10473525
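    The titration-curve idea can be illustrated with a short sketch. It assumes a four-parameter logistic shape for the absorbance-versus-dilution curve and the common definition of an avidity index as the ratio of end-point titers from urea-treated and untreated wells; both are assumptions of this illustration rather than the exact model of the paper, and all data values are invented.

        import numpy as np
        from scipy.optimize import curve_fit, brentq

        def logistic4(log_dil, top, bottom, mid, slope):
            """Four-parameter logistic absorbance vs. log10 dilution."""
            return bottom + (top - bottom) / (1.0 + 10 ** (slope * (log_dil - mid)))

        def endpoint_titer(log_dil, absorbance, cutoff=0.3):
            """Fit the curve and return the dilution at which it crosses the cutoff."""
            p0 = [absorbance.max(), absorbance.min(), np.median(log_dil), 1.0]
            popt, _ = curve_fit(logistic4, log_dil, absorbance, p0=p0, maxfev=10000)
            return 10 ** brentq(lambda x: logistic4(x, *popt) - cutoff,
                                log_dil.min(), log_dil.max())

        # Hypothetical titrations of one serum, with and without a urea wash.
        log_dil = np.log10([100, 300, 900, 2700, 8100, 24300, 72900])
        untreated = np.array([2.10, 1.90, 1.50, 0.90, 0.45, 0.20, 0.10])
        urea = np.array([1.80, 1.30, 0.70, 0.35, 0.18, 0.10, 0.08])

        avidity_index = endpoint_titer(log_dil, urea) / endpoint_titer(log_dil, untreated)
        print(f"avidity index: {avidity_index:.2f}")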

  7. Book Selection, Collection Development, and Bounded Rationality.

    ERIC Educational Resources Information Center

    Schwartz, Charles A.

    1989-01-01

    Reviews previously proposed schemes of classical rationality in book selection, describes new approaches to rational choice behavior, and presents a model of book selection based on bounded rationality in a garbage can decision process. The role of tacit knowledge and symbolic content in the selection process are also discussed. (102 references)…

  8. Employee Assistance Programs: Effective Tools for Counseling Employees.

    ERIC Educational Resources Information Center

    Kraft, Ed

    1991-01-01

    College employee assistance program designs demonstrate the varied needs of a workforce. Whatever the model, the helping approach remains to (1) identify problem employees through performance-related issues; (2) refer them to the assistance program for further intervention; and (3) follow up with employee and supervisor to ensure a successful…

  9. Design, Development, and Validation of Learning Objects

    ERIC Educational Resources Information Center

    Nugent, Gwen; Soh, Leen-Kiat; Samal, Ashok

    2006-01-01

    A learning object is a small, stand-alone, mediated content resource that can be reused in multiple instructional contexts. In this article, we describe our approach to design, develop, and validate Shareable Content Object Reference Model (SCORM) compliant learning objects for undergraduate computer science education. We discuss the advantages of…

  10. Instantaneous progression reference frame for calculating pelvis rotations: Reliable and anatomically-meaningful results independent of the direction of movement.

    PubMed

    Kainz, Hans; Lloyd, David G; Walsh, Henry P J; Carty, Christopher P

    2016-05-01

    In motion analysis, pelvis angles are conventionally calculated as the rotations between the pelvis and the laboratory reference frame. This approach assumes that the participant's motion is along the anterior-posterior axis of the laboratory reference frame. When this assumption is violated, interpretation of pelvis angles becomes problematic. In this paper a new approach for calculating pelvis angles, based on the rotations between the pelvis and an instantaneous progression reference frame, is introduced. At every time-point, the tangent to the trajectory of the midpoint of the pelvis, projected into the horizontal plane of the laboratory reference frame, was used to define the anterior-posterior axis of the instantaneous progression reference frame. This new approach, combined with the rotation-obliquity-tilt rotation sequence, was compared to the conventional approach using the rotation-obliquity-tilt and tilt-obliquity-rotation sequences. Four different movement tasks performed by eight healthy adults were analysed. The instantaneous progression reference frame approach was the only approach that showed reliable and anatomically meaningful results for all analysed movement tasks (mean root-mean-square differences below 5°, differences in pelvis angles at pre-defined gait events below 10°). Both rotation sequences combined with the conventional approach led to unreliable results as soon as the participant's motion was not along the anterior-posterior laboratory axis (mean root-mean-square differences up to 30°, differences in pelvis angles at pre-defined gait events up to 45°). The instantaneous progression reference frame approach enables the gait analysis community to analyse pelvis angles for movements that do not follow the anterior-posterior axis of the laboratory reference frame. Copyright © 2016 Elsevier B.V. All rights reserved.
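    A minimal sketch of the frame construction described here, under simplifying assumptions (heading taken from a planar finite-difference tangent, pelvis rotation reduced to its yaw about the vertical axis; variable names and the curved-walking example are illustrative):

        import numpy as np

        def progression_heading(midpoint_xy):
            """Heading (rad) of the tangent to the pelvis-midpoint trajectory,
            projected into the horizontal plane of the laboratory frame."""
            velocity = np.gradient(midpoint_xy, axis=0)   # finite-difference tangent
            return np.arctan2(velocity[:, 1], velocity[:, 0])

        def pelvis_yaw(anterior_xy):
            """Yaw of the pelvis anterior axis in the laboratory frame."""
            return np.arctan2(anterior_xy[:, 1], anterior_xy[:, 0])

        # Hypothetical walk along a 90-degree curve: lab-frame pelvis midpoint and
        # the horizontal projection of the pelvis anterior axis at each frame.
        t = np.linspace(0.0, np.pi / 2.0, 200)
        midpoint_xy = np.column_stack([np.sin(t), 1.0 - np.cos(t)])
        anterior_xy = np.column_stack([np.cos(t), np.sin(t)])

        # Conventional approach: rotation relative to the fixed laboratory x-axis.
        lab_rotation = np.degrees(pelvis_yaw(anterior_xy))

        # Instantaneous progression approach: rotation relative to the current heading.
        prog_rotation = np.degrees(pelvis_yaw(anterior_xy) - progression_heading(midpoint_xy))

        print("lab-frame rotation range (deg):", round(lab_rotation.min(), 1), round(lab_rotation.max(), 1))
        print("progression-frame rotation range (deg):", round(prog_rotation.min(), 1), round(prog_rotation.max(), 1))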

  11. Human-Robot Interaction: A Survey

    DTIC Science & Technology

    2007-01-01

    breaks with the monolithic sense-plan-act loop of a centralized system, and instead uses distributed sense-response loops to generate appropriate...one of the first modern robots, courtesy of SRI International, Menlo Park, CA [279]; Kismet — an anthropomorphic robot with exaggerated emotion...linguistics. A common autonomy approach is sometimes referred to as the sense-plan-act model of decision-making [196]. This model has been a target

  12. On the consistency of tomographically imaged lower mantle slabs

    NASA Astrophysics Data System (ADS)

    Shephard, Grace E.; Matthews, Kara J.; Hosseini, Kasra; Domeier, Mathew

    2017-04-01

    Over the last few decades numerous seismic tomography models have been published, each constructed with particular choices of data input, parameterization and reference model. The broader geoscience community is increasingly utilizing these models, or a selection thereof, to interpret Earth's mantle structure and processes. It follows that seismically identified remnants of subducted slabs have been used to validate, test or refine relative plate motions, absolute plate reference frames, and mantle sinking rates. With an increasing number of models to include, or exclude, the question arises: how robust is a given positive seismic anomaly, inferred to be a slab, across a given suite of tomography models? Here we generate a series of "vote maps" for the lower mantle by comparing 14 seismic tomography models, including 7 S-wave and 7 P-wave models. Considerations include the retention or removal of the mean, the use of a consistent or variable reference model, the statistical value which defines the slab "contour", and the effect of depth interpolation. Preliminary results will be presented that address the depth, location and degree of agreement between seismic tomography models, both for the 14 models combined and between the P-wave and S-wave subsets. The analysis also permits a broader discussion of slab volumes and subduction flux. Whilst the location and geometry of some slabs match documented regions of long-lived subduction, other features do not, illustrating the importance of a robust approach to slab identification.
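    The vote-map construction lends itself to a compact sketch. The grids below are random stand-ins for tomography models and the slab "contour" is taken as anomalies above a per-model percentile, one of the statistical choices the abstract mentions; all numbers are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical depth slice (lat x lon) of dVs/Vs (%) from 14 tomography models.
        n_models, shape = 14, (90, 180)
        models = rng.normal(0.0, 0.5, size=(n_models, *shape))
        models[:, 30:45, 60:90] += 0.8   # a fast (slab-like) anomaly common to all models

        # One vote per model wherever its anomaly exceeds that model's own threshold,
        # here the 70th percentile of its positive values (a tunable contour choice).
        votes = np.zeros(shape, dtype=int)
        for m in models:
            threshold = np.percentile(m[m > 0], 70)
            votes += (m > threshold).astype(int)

        print("maximum agreement:", votes.max(), "of", n_models, "models")
        print("cells where all models agree:", int((votes == n_models).sum()))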

  13. Multi-Component Molecular-Level Body Composition Reference Methods: Evolving Concepts and Future Directions

    PubMed Central

    Heymsfield, Steven B.; Ebbeling, Cara B.; Zheng, Jolene; Pietrobelli, Angelo; Strauss, Boyd J.; Silva, Analiza M.; Ludwig, David S.

    2015-01-01

    Excess adiposity is the main phenotypic feature that defines human obesity and that plays a pathophysiological role in most chronic diseases. Measuring the amount of fat mass present is thus a central aspect of studying obesity at the individual and population levels. Nevertheless, a consensus is lacking among investigators on a single accepted “reference” approach for quantifying fat mass in vivo. While the research community generally relies on the multicomponent body-volume class of “reference” models for quantifying fat mass, no definable guide discerns among different applied equations for partitioning the four (fat, water, protein, and mineral mass) or more quantified components, standardizes “adjustment” or measurement system approaches for model-required labeled water dilution volumes and bone mineral mass estimates, or firmly establishes the body temperature at which model physical properties are assumed. The resulting differing reference strategies for quantifying body composition in vivo lead to small, but under some circumstances important, differences in the amount of measured body fat. Recent technological advances highlight opportunities to expand model applications to new subject groups and measured components such as total body protein. The current report reviews the historical evolution of multicomponent body volume-based methods in the context of prevailing uncertainties and future potential. PMID:25645009

  14. A taxonomy for the evolution of human settlements on the moon and Mars

    NASA Technical Reports Server (NTRS)

    Roberts, Barney B.; Mandell, Humboldt C.

    1991-01-01

    A proposed structure is described for partnerships with shared interests and investments to develop the technology and approach for evolutionary surface systems for the moon and Mars. Five models are presented for cooperation, with specific references to the technical evolutionary path of the surface systems. The models encompass the standard customer/provider relationship, a concept for exclusive government use, a joint venture with a government-sponsored non-SEI market, a technology joint-development approach, and a redundancy model to ensure competitive pricing. The models emphasize the nonaerospace components of the settlement technologies and the decentralized nature of surface systems that make the project suitable for private industrial development by several companies. It is concluded that the taxonomy should be considered when examining collaborative opportunities for lunar and Martian settlement.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Esfahani, M. Nasr; Yilmaz, M.; Sonne, M. R.

    The trend towards nanomechanical resonator sensors with increasing sensitivity raises the need to address challenges encountered in the modeling of their mechanical behavior. Selecting the best approach for mechanical response modeling amongst the various potential computational solid mechanics methods is subject to controversy. A guideline for the selection of the appropriate approach for a specific set of geometry and mechanical properties is needed. In this study, geometrical limitations in frequency response modeling of flexural nanomechanical resonators are investigated. The deviation of the Euler and Timoshenko beam theories from numerical techniques, including finite element modeling and the Surface Cauchy-Born technique, is studied. The results provide a limit beyond which the surface energy contribution dominates the mechanical behavior. Using the Surface Cauchy-Born technique as the reference, a maximum error on the order of 50% is reported for high-aspect-ratio resonators.
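    For the beam-theory side of this comparison, a short sketch of the Euler-Bernoulli fundamental flexural frequency of a clamped-free beam with a rectangular cross-section is given below; the silicon-like dimensions and properties are assumptions for illustration, and the surface-energy effects that the study highlights are deliberately not included.

        import math

        def cantilever_fundamental_frequency(length, width, thickness, youngs_modulus, density):
            """Fundamental flexural frequency (Hz) of a clamped-free Euler-Bernoulli beam."""
            lam1 = 1.8751                              # first root of the cantilever frequency equation
            area = width * thickness
            inertia = width * thickness ** 3 / 12.0    # second moment, bending across the thickness
            return (lam1 ** 2 / (2.0 * math.pi * length ** 2)) * math.sqrt(
                youngs_modulus * inertia / (density * area))

        # Hypothetical silicon nanobeam: 10 um long, 300 nm wide, 100 nm thick.
        f1 = cantilever_fundamental_frequency(10e-6, 300e-9, 100e-9,
                                              youngs_modulus=169e9, density=2330.0)
        print(f"fundamental frequency: {f1 / 1e6:.2f} MHz")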

  16. An Internet Protocol-Based Software System for Real-Time, Closed-Loop, Multi-Spacecraft Mission Simulation Applications

    NASA Technical Reports Server (NTRS)

    Davis, George; Cary, Everett; Higinbotham, John; Burns, Richard; Hogie, Keith; Hallahan, Francis

    2003-01-01

    The paper will provide an overview of the web-based distributed simulation software system developed for end-to-end, multi-spacecraft mission design, analysis, and test at the NASA Goddard Space Flight Center (GSFC). This software system was developed for an internal research and development (IR&D) activity at GSFC called the Distributed Space Systems (DSS) Distributed Synthesis Environment (DSE). The long-term goal of the DSS-DSE is to integrate existing GSFC stand-alone test beds, models, and simulation systems to create a "hands on", end-to-end simulation environment for mission design, trade studies and simulations. The short-term goal of the DSE was therefore to develop the system architecture, and then to prototype the core software simulation capability based on a distributed computing approach, with demonstrations of some key capabilities by the end of Fiscal Year 2002 (FY02). To achieve the DSS-DSE IR&D objective, the team adopted a reference model and mission upon which FY02 capabilities were developed. The software was prototyped according to the reference model, and demonstrations were conducted for the reference mission to validate interfaces, concepts, etc. The reference model, illustrated in Fig. 1, included both space and ground elements, with functional capabilities such as spacecraft dynamics and control, science data collection, space-to-space and space-to-ground communications, mission operations, science operations, and data processing, archival and distribution addressed.

  17. A Distributed Simulation Software System for Multi-Spacecraft Missions

    NASA Technical Reports Server (NTRS)

    Burns, Richard; Davis, George; Cary, Everett

    2003-01-01

    The paper will provide an overview of the web-based distributed simulation software system developed for end-to-end, multi-spacecraft mission design, analysis, and test at the NASA Goddard Space Flight Center (GSFC). This software system was developed for an internal research and development (IR&D) activity at GSFC called the Distributed Space Systems (DSS) Distributed Synthesis Environment (DSE). The long-term goal of the DSS-DSE is to integrate existing GSFC stand-alone test beds, models, and simulation systems to create a "hands on", end-to-end simulation environment for mission design, trade studies and simulations. The short-term goal of the DSE was therefore to develop the system architecture, and then to prototype the core software simulation capability based on a distributed computing approach, with demonstrations of some key capabilities by the end of Fiscal Year 2002 (FY02). To achieve the DSS-DSE IR&D objective, the team adopted a reference model and mission upon which FY02 capabilities were developed. The software was prototyped according to the reference model, and demonstrations were conducted for the reference mission to validate interfaces, concepts, etc. The reference model, illustrated in Fig. 1, included both space and ground elements, with functional capabilities such as spacecraft dynamics and control, science data collection, space-to-space and space-to-ground communications, mission operations, science operations, and data processing, archival and distribution addressed.

  18. An Adaptive Critic Approach to Reference Model Adaptation

    NASA Technical Reports Server (NTRS)

    Krishnakumar, K.; Limes, G.; Gundy-Burlet, K.; Bryant, D.

    2003-01-01

    Neural networks have been successfully used for implementing control architectures for different applications. In this work, we examine a neural network augmented adaptive critic as a Level 2 intelligent controller for a C-17 aircraft. This intelligent control architecture utilizes an adaptive critic to tune the parameters of a reference model, which is then used to define the angular rate command for a Level 1 intelligent controller. The present architecture is implemented on a high-fidelity non-linear model of a C-17 aircraft. The goal of this research is to improve the performance of the C-17 under degraded conditions such as control failures and battle damage. Pilot ratings using a motion-based simulation facility are included in this paper. The benefits of using an adaptive critic are documented using time response comparisons for severe damage situations.

  19. Adaptive Locally Optimum Processing for Interference Suppression from Communication and Undersea Surveillance Signals

    DTIC Science & Technology

    1994-07-01

    1993. "Analysis of the 1730-1732. Track - Before - Detect Approach to Target Detection using Pixel Statistics", to appear in IEEE Transactions Scholz, J...large surveillance arrays. One approach to combining energy in different spatial cells is track - before - detect . References to examples appear in the next... track - before - detect problem. The results obtained are not expected to depend strongly on model details. In particular, the structure of the tracking

  20. Towards an integrated approach to natural hazards risk assessment using GIS: with reference to bushfires.

    PubMed

    Chen, Keping; Blong, Russell; Jacobson, Carol

    2003-04-01

    This paper develops a GIS-based integrated approach to risk assessment in natural hazards, with reference to bushfires. The challenges for undertaking this approach have three components: data integration, risk assessment tasks, and risk decision-making. First, data integration in GIS is a fundamental step for subsequent risk assessment tasks and risk decision-making. A series of spatial data integration issues within GIS such as geographical scales and data models are addressed. Particularly, the integration of both physical environmental data and socioeconomic data is examined with an example linking remotely sensed data and areal census data in GIS. Second, specific risk assessment tasks, such as hazard behavior simulation and vulnerability assessment, should be undertaken in order to understand complex hazard risks and provide support for risk decision-making. For risk assessment tasks involving heterogeneous data sources, the selection of spatial analysis units is important. Third, risk decision-making concerns spatial preferences and/or patterns, and a multicriteria evaluation (MCE)-GIS typology for risk decision-making is presented that incorporates three perspectives: spatial data types, data models, and methods development. Both conventional MCE methods and artificial intelligence-based methods with GIS are identified to facilitate spatial risk decision-making in a rational and interpretable way. Finally, the paper concludes that the integrated approach can be used to assist risk management of natural hazards, in theory and in practice.

  1. Comments on “A Unified Representation of Deep Moist Convection in Numerical Modeling of the Atmosphere. Part I”

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Guang; Fan, Jiwen; Xu, Kuan-Man

    2015-06-01

    Arakawa and Wu (2013, hereafter referred to as AW13) recently developed a formal approach to a unified parameterization of atmospheric convection for high-resolution numerical models. The work is based on ideas formulated by Arakawa et al. (2011). It lays the foundation for a new parameterization pathway in the era of high-resolution numerical modeling of the atmosphere. The key parameter in this approach is the convective cloud fraction σ. In conventional parameterization, it is assumed that σ << 1. This assumption is no longer valid when the horizontal resolution of numerical models approaches a few to a few tens of kilometers, since in such situations the convective cloud fraction can be comparable to unity. Therefore, they argue that the conventional approach to parameterizing convective transport must include a factor (1 - σ) in order to unify the parameterization for the full range of model resolutions, so that it is scale-aware and valid for large convective cloud fractions. While AW13's approach provides important guidance for future convective parameterization development, in this note we intend to show that the conventional approach already has this scale-awareness factor (1 - σ) built in, although not recognized for the last forty years. Therefore, it should work well even in situations of large convective cloud fractions in high-resolution numerical models.

  2. Automated model-based quantitative analysis of phantoms with spherical inserts in FDG PET scans.

    PubMed

    Ulrich, Ethan J; Sunderland, John J; Smith, Brian J; Mohiuddin, Imran; Parkhurst, Jessica; Plichta, Kristin A; Buatti, John M; Beichel, Reinhard R

    2018-01-01

    Quality control plays an increasingly important role in quantitative PET imaging and is typically performed using phantoms. The purpose of this work was to develop and validate a fully automated analysis method for two common PET/CT quality assurance phantoms: the NEMA NU-2 IQ and SNMMI/CTN oncology phantom. The algorithm was designed to only utilize the PET scan to enable the analysis of phantoms with thin-walled inserts. We introduce a model-based method for automated analysis of phantoms with spherical inserts. Models are first constructed for each type of phantom to be analyzed. A robust insert detection algorithm uses the model to locate all inserts inside the phantom. First, candidates for inserts are detected using a scale-space detection approach. Second, candidates are given an initial label using a score-based optimization algorithm. Third, a robust model fitting step aligns the phantom model to the initial labeling and fixes incorrect labels. Finally, the detected insert locations are refined and measurements are taken for each insert and several background regions. In addition, an approach for automated selection of NEMA and CTN phantom models is presented. The method was evaluated on a diverse set of 15 NEMA and 20 CTN phantom PET/CT scans. NEMA phantoms were filled with radioactive tracer solution at 9.7:1 activity ratio over background, and CTN phantoms were filled with 4:1 and 2:1 activity ratio over background. For quantitative evaluation, an independent reference standard was generated by two experts using PET/CT scans of the phantoms. In addition, the automated approach was compared against manual analysis, which represents the current clinical standard approach, of the PET phantom scans by four experts. The automated analysis method successfully detected and measured all inserts in all test phantom scans. It is a deterministic algorithm (zero variability), and the insert detection RMS error (i.e., bias) was 0.97, 1.12, and 1.48 mm for phantom activity ratios 9.7:1, 4:1, and 2:1, respectively. For all phantoms and at all contrast ratios, the average RMS error was found to be significantly lower for the proposed automated method compared to the manual analysis of the phantom scans. The uptake measurements produced by the automated method showed high correlation with the independent reference standard (R^2 ≥ 0.9987). In addition, the average computing time for the automated method was 30.6 s and was found to be significantly lower (P ≪ 0.001) compared to manual analysis (mean: 247.8 s). The proposed automated approach was found to have less error when measured against the independent reference than the manual approach. It can be easily adapted to other phantoms with spherical inserts. In addition, it eliminates inter- and intraoperator variability in PET phantom analysis and is significantly more time efficient, and therefore, represents a promising approach to facilitate and simplify PET standardization and harmonization efforts. © 2017 American Association of Physicists in Medicine.
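    The insert-detection step can be illustrated with a generic scale-space (Laplacian-of-Gaussian) blob search. This is only a sketch of the general technique named in the abstract, not the published algorithm; the synthetic volume, thresholds, and suppression radius are all assumptions.

        import numpy as np
        from scipy import ndimage

        rng = np.random.default_rng(2)

        # Synthetic PET-like volume: uniform warm background plus two hot spherical inserts.
        vol = rng.normal(1.0, 0.05, size=(64, 64, 64))
        zz, yy, xx = np.indices(vol.shape)
        for centre, radius in [((20, 20, 20), 5), ((44, 40, 30), 8)]:
            sphere = ((zz - centre[0])**2 + (yy - centre[1])**2 + (xx - centre[2])**2) <= radius**2
            vol[sphere] += 4.0

        # Scale-space Laplacian-of-Gaussian search: a bright blob of radius r responds
        # most strongly near sigma ~ r / sqrt(3).
        candidates = []
        for sigma in (2.0, 3.0, 4.0, 5.0):
            response = -sigma**2 * ndimage.gaussian_laplace(vol, sigma=sigma)  # scale-normalised
            peaks = (response == ndimage.maximum_filter(response, size=7)) & (response > 0.5)
            candidates += [(float(response[p]), sigma, p) for p in zip(*np.nonzero(peaks))]

        # Greedy non-maximum suppression: keep the strongest candidate per neighbourhood.
        detected = []
        for score, sigma, pos in sorted(candidates, reverse=True):
            if all(np.linalg.norm(np.subtract(pos, kept)) > 10 for _, _, kept in detected):
                detected.append((score, sigma, pos))

        for score, sigma, pos in detected:
            print(f"insert near {pos}, best sigma {sigma}, estimated radius ~{sigma * 3 ** 0.5:.1f} voxels")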

  3. Integrating Intracellular Dynamics Using CompuCell3D and Bionetsolver: Applications to Multiscale Modelling of Cancer Cell Growth and Invasion

    PubMed Central

    Andasari, Vivi; Roper, Ryan T.; Swat, Maciej H.; Chaplain, Mark A. J.

    2012-01-01

    In this paper we present a multiscale, individual-based simulation environment that integrates CompuCell3D for lattice-based modelling on the cellular level and Bionetsolver for intracellular modelling. CompuCell3D or CC3D provides an implementation of the lattice-based Cellular Potts Model or CPM (also known as the Glazier-Graner-Hogeweg or GGH model) and a Monte Carlo method based on the Metropolis algorithm for system evolution. The integration of CC3D for cellular systems with Bionetsolver for subcellular systems enables us to develop a multiscale mathematical model and to study the evolution of cell behaviour due to the dynamics inside of the cells, capturing aspects of cell behaviour and interaction that are not possible using continuum approaches. We then apply this multiscale modelling technique to a model of cancer growth and invasion, based on a previously published model of Ramis-Conde et al. (2008) where individual cell behaviour is driven by a molecular network describing the dynamics of E-cadherin and β-catenin. In this model, which we refer to as the centre-based model, an alternative individual-based modelling technique was used, namely, a lattice-free approach. In many respects, the GGH or CPM methodology and the approach of the centre-based model have the same overall goal, that is to mimic behaviours and interactions of biological cells. Although the mathematical foundations and computational implementations of the two approaches are very different, the results of the presented simulations are compatible with each other, suggesting that by using individual-based approaches we can formulate a natural way of describing complex multi-cell, multiscale models. The ability to easily reproduce results of one modelling approach using an alternative approach is also essential from a model cross-validation standpoint and also helps to identify any modelling artefacts specific to a given computational approach. PMID:22461894

  4. Strategic parameter-driven routing models for multidestination traffic in telecommunication networks.

    PubMed

    Lee, Y; Tien, J M

    2001-01-01

    We present mathematical models that determine the optimal parameters for strategically routing multidestination traffic in an end-to-end network setting. Multidestination traffic refers to a traffic type that can be routed to any one of a multiple number of destinations. A growing number of communication services is based on multidestination routing. In this parameter-driven approach, a multidestination call is routed to one of the candidate destination nodes in accordance with predetermined decision parameters associated with each candidate node. We present three different approaches: (1) a link utilization (LU) approach, (2) a network cost (NC) approach, and (3) a combined parametric (CP) approach. The LU approach provides the solution that would result in an optimally balanced link utilization, whereas the NC approach provides the least expensive way to route traffic to destinations. The CP approach, on the other hand, provides multiple solutions that help leverage link utilization and cost. The LU approach has in fact been implemented by a long distance carrier resulting in a considerable efficiency improvement in its international direct services, as summarized.

  5. Mixture models for detecting differentially expressed genes in microarrays.

    PubMed

    Jones, Liat Ben-Tovim; Bean, Richard; McLachlan, Geoffrey J; Zhu, Justin Xi

    2006-10-01

    An important and common problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. As this problem concerns the selection of significant genes from a large pool of candidate genes, it needs to be carried out within the framework of multiple hypothesis testing. In this paper, we focus on the use of mixture models to handle the multiplicity issue. With this approach, a measure of the local FDR (false discovery rate) is provided for each gene. An attractive feature of the mixture model approach is that it provides a framework for the estimation of the prior probability that a gene is not differentially expressed, and this probability can subsequently be used in forming a decision rule. The rule can also be formed to take the false negative rate into account. We apply this approach to a well-known publicly available data set on breast cancer, and discuss our findings with reference to other approaches.
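    A toy sketch of the mixture-model idea for the local FDR is given below; it assumes per-gene z-statistics and a two-component normal mixture (null versus differentially expressed), as a generic stand-in rather than the specific mixture formulation of the paper.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(3)

        # Hypothetical per-gene z-statistics: 90% null genes, 10% differentially expressed.
        z = np.concatenate([rng.normal(0.0, 1.0, size=9000),
                            rng.normal(3.0, 1.0, size=1000)]).reshape(-1, 1)

        # Fit a two-component Gaussian mixture; call the component with mean nearest 0 "null".
        gm = GaussianMixture(n_components=2, random_state=0).fit(z)
        null_comp = int(np.argmin(np.abs(gm.means_.ravel())))

        # Local FDR for each gene: posterior probability of belonging to the null component.
        local_fdr = gm.predict_proba(z)[:, null_comp]

        # Decision rule: call a gene differentially expressed when its local FDR is small.
        called = local_fdr < 0.2
        print(f"genes called: {called.sum()}, estimated prior P(null) = {gm.weights_[null_comp]:.2f}")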

  6. Adaptive fuzzy logic controller with direct action type structures for InnoSAT attitude control system

    NASA Astrophysics Data System (ADS)

    Bakri, F. A.; Mashor, M. Y.; Sharun, S. M.; Bibi Sarpinah, S. N.; Abu Bakar, Z.

    2016-10-01

    This study proposes an adaptive fuzzy controller for the attitude control system (ACS) of the Innovative Satellite (InnoSAT) based on a direct action type structure. In order to study new methods used in satellite attitude control, this paper presents three structures of controllers: Fuzzy PI, Fuzzy PD and conventional Fuzzy PID. The objective of this work is to compare the time response and tracking performance among the three different structures of controllers. The parameters of the controllers were tuned on-line by an adjustment mechanism, an approach similar to PID error minimization, which reduces the error between the actual output and the reference model output. This paper also presents Model Reference Adaptive Control (MRAC) as a control scheme for time-varying systems where the performance specifications are given in terms of the reference model. All the controllers were tested using the InnoSAT system under several operating conditions such as disturbance, varying gain, measurement noise and time delay. In conclusion, among all considered DA-type structures, the AFPID controller was observed to be the best structure since it outperformed the other controllers in most conditions.
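    A minimal sketch of the model reference adaptive control idea mentioned here, using the classic MIT-rule gradient update on a first-order plant; the plant, reference model, adaptation gain and square-wave command are illustrative assumptions, not the InnoSAT dynamics or the paper's fuzzy adjustment mechanism.

        # First-order plant dy/dt = -a*y + b*u with unknown input gain b, and
        # reference model dym/dt = -am*ym + bm*r (here a = am, so a feedforward
        # gain theta = bm/b gives perfect model following).
        a, b = 2.0, 1.0
        am, bm = 2.0, 2.0
        gamma = 0.5                  # adaptation gain
        dt, T = 0.001, 20.0

        y = ym = theta = 0.0         # theta is the adjustable feedforward gain
        for k in range(int(T / dt)):
            t = k * dt
            r = 1.0 if (t % 10.0) < 5.0 else -1.0   # square-wave reference command
            u = theta * r                           # controller: u = theta * r
            e = y - ym                              # error w.r.t. the reference model output
            theta += dt * (-gamma * e * ym)         # MIT rule; sensitivity de/dtheta ~ ym here
            y += dt * (-a * y + b * u)
            ym += dt * (-am * ym + bm * r)

        print(f"adapted gain theta = {theta:.3f} (ideal bm/b = {bm / b:.3f})")
        print(f"final model-following error = {y - ym:.4f}")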

  7. Safety assessment of plant varieties using transcriptomics profiling and a one-class classifier.

    PubMed

    van Dijk, Jeroen P; de Mello, Carla Souza; Voorhuijzen, Marleen M; Hutten, Ronald C B; Arisi, Ana Carolina Maisonnave; Jansen, Jeroen J; Buydens, Lutgarde M C; van der Voet, Hilko; Kok, Esther J

    2014-10-01

    An important part of the current hazard identification of novel plant varieties is comparative targeted analysis of the novel and reference varieties. Comparative analysis will become much more informative with unbiased analytical approaches, e.g. omics profiling. Data analysis estimating the similarity of new varieties to a reference baseline class of known safe varieties would subsequently greatly facilitate hazard identification. Further biological and eventually toxicological analysis would then only be necessary for varieties that fall outside this reference class. For this purpose, a one-class classifier tool was explored to assess and classify transcriptome profiles of potato (Solanum tuberosum) varieties in a model study. Profiles of six different varieties, from two locations of growth and two years of harvest, including biological and technical replication, were used to build the model. Two scenarios were applied, representing evaluation of a 'different' variety and a 'similar' variety. Within the model, higher class distances resulted for the 'different' test set compared with the 'similar' test set. The present study may contribute to a more global hazard identification of novel plant varieties. Copyright © 2014 Elsevier Inc. All rights reserved.
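    The one-class classification step can be sketched generically. The snippet below trains a one-class SVM on simulated reference-class profiles and scores a 'similar' and a 'different' test variety; the classifier choice, feature counts and simulated shifts are all assumptions standing in for the tool used in the paper.

        import numpy as np
        from sklearn.svm import OneClassSVM

        rng = np.random.default_rng(4)

        # Simulated expression summaries (samples x selected genes) for the reference
        # baseline class of known-safe varieties, plus 'similar' and 'different'
        # test varieties (scenarios analogous to those in the abstract).
        reference = rng.normal(0.0, 1.0, size=(60, 20))
        similar_test = rng.normal(0.0, 1.0, size=(4, 20))
        different_test = rng.normal(1.5, 1.5, size=(4, 20))

        # One-class model trained only on the reference class.
        clf = OneClassSVM(nu=0.1, kernel="rbf", gamma="scale").fit(reference)

        # Positive decision scores fall inside the reference class, negative outside;
        # only 'outside' varieties would be flagged for further biological analysis.
        print("similar variety scores:  ", np.round(clf.decision_function(similar_test), 2))
        print("different variety scores:", np.round(clf.decision_function(different_test), 2))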

  8. State observer-based sliding mode control for semi-active hydro-pneumatic suspension

    NASA Astrophysics Data System (ADS)

    Ren, Hongbin; Chen, Sizhong; Zhao, Yuzhuang; Liu, Gang; Yang, Lin

    2016-02-01

    This paper proposes an improved virtual reference model for semi-active suspension to coordinate vehicle ride comfort and handling stability. The reference model combines the virtues of sky-hook and ground-hook control logic, and the hybrid coefficient is tuned according to the longitudinal and lateral acceleration so as to improve vehicle stability, especially in high-speed conditions. A suspension state observer based on the unscented Kalman filter is designed. A sliding mode controller (SMC) is developed to track the states of the reference model. The stability of the SMC strategy is proven by means of a Lyapunov function, taking into account the nonlinear damper characteristics and sprung mass variation of the vehicle. Finally, the performance of the controller is demonstrated under three typical working conditions: random road excitation, a speed bump road, and sharp acceleration and braking. The simulation results indicate that, compared with a traditional passive suspension, the proposed control algorithm can offer better coordination between vehicle ride comfort and handling stability. This approach provides a viable alternative to costlier active suspension control systems for commercial vehicles.
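    The hybrid sky-hook/ground-hook logic of the reference model can be sketched in a few lines; the gains, damping limits, sign conventions and hybrid-coefficient schedule below are illustrative assumptions rather than the paper's tuned values.

        import numpy as np

        def hybrid_reference_force(v_body, v_wheel, v_rel, alpha,
                                   c_sky=1500.0, c_ground=1200.0,
                                   c_min=300.0, c_max=3000.0):
            """Desired semi-active damper force from a hybrid sky-hook/ground-hook law.

            alpha in [0, 1]: 1 -> pure sky-hook (ride comfort), 0 -> pure ground-hook
            (road holding). A semi-active damper can only dissipate energy, so the
            desired force is clipped to what the damper can deliver at the current
            relative velocity v_rel.
            """
            f_desired = -(alpha * c_sky * v_body) + (1.0 - alpha) * c_ground * v_wheel
            lo, hi = sorted((c_min * v_rel, c_max * v_rel))
            return float(np.clip(f_desired, lo, hi))

        def hybrid_coefficient(a_long, a_lat, a_ref=4.0):
            """Shift towards ground-hook (stability) as the manoeuvre gets harder."""
            return float(np.clip(1.0 - np.hypot(a_long, a_lat) / a_ref, 0.0, 1.0))

        alpha = hybrid_coefficient(a_long=3.0, a_lat=2.0)
        force = hybrid_reference_force(v_body=0.3, v_wheel=-0.5, v_rel=0.8, alpha=alpha)
        print(f"alpha = {alpha:.2f}, reference damper force = {force:.0f} N")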

  9. Reliable Detection of Herpes Simplex Virus Sequence Variation by High-Throughput Resequencing.

    PubMed

    Morse, Alison M; Calabro, Kaitlyn R; Fear, Justin M; Bloom, David C; McIntyre, Lauren M

    2017-08-16

    High-throughput sequencing (HTS) has resulted in data for a number of herpes simplex virus (HSV) laboratory strains and clinical isolates. The knowledge of these sequences has been critical for investigating viral pathogenicity. However, the assembly of complete herpesviral genomes, including HSV, is complicated due to the existence of large repeat regions and arrays of smaller reiterated sequences that are commonly found in these genomes. In addition, the inherent genetic variation in populations of isolates for viruses and other microorganisms presents an additional challenge to many existing HTS sequence assembly pipelines. Here, we evaluate two approaches for the identification of genetic variants in HSV1 strains using Illumina short read sequencing data. The first, a reference-based approach, identifies variants from reads aligned to a reference sequence and the second, a de novo assembly approach, identifies variants from reads aligned to de novo assembled consensus sequences. Of critical importance for both approaches is the reduction in the number of low complexity regions through the construction of a non-redundant reference genome. We compared variants identified in the two methods. Our results indicate that approximately 85% of variants are identified regardless of the approach. The reference-based approach to variant discovery captures an additional 15% representing variants divergent from the HSV1 reference possibly due to viral passage. Reference-based approaches are significantly less labor-intensive and identify variants across the genome where de novo assembly-based approaches are limited to regions where contigs have been successfully assembled. In addition, regions of poor quality assembly can lead to false variant identification in de novo consensus sequences. For viruses with a well-assembled reference genome, a reference-based approach is recommended.

  10. Sensor trustworthiness in uncertain time varying stochastic environments

    NASA Astrophysics Data System (ADS)

    Verma, Ajay; Fernandes, Ronald; Vadakkeveedu, Kalyan

    2011-06-01

    Persistent surveillance applications require unattended sensors deployed in remote regions to track and monitor some physical stimulant of interest that can be modeled as the output of a time-varying stochastic process. However, the accuracy or trustworthiness of the information received through a remote and unattended sensor or sensor network cannot be readily assumed, since sensors may become disabled, corrupted, or even compromised, resulting in unreliable information. The aim of this paper is to develop an information-theory-based metric to determine sensor trustworthiness from the sensor data in an uncertain and time-varying stochastic environment. In this paper we show an information-theory-based determination of sensor data trustworthiness using an adaptive stochastic reference sensor model that tracks the sensor performance for the time-varying physical feature and provides a baseline model that is used to compare and analyze the observed sensor output. We present an approach in which relative entropy is used for reference model adaptation and for determination of the divergence of the sensor signal from the estimated reference baseline. We show that KL-divergence is a useful metric that can be successfully used in the determination of sensor failures or sensor malice of various types.
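    A minimal illustration of the divergence metric named here: the KL divergence between the empirical distribution of a sensor's recent readings and a reference baseline. The histogram binning, window sizes and the drift injected into the 'suspect' sensor are all assumptions for illustration.

        import numpy as np
        from scipy.stats import entropy

        rng = np.random.default_rng(5)

        def kl_from_reference(readings, reference_samples, bins):
            """KL divergence D(sensor || reference) over a shared histogram support."""
            p, _ = np.histogram(readings, bins=bins)
            q, _ = np.histogram(reference_samples, bins=bins)
            p = p + 1e-9                      # avoid empty bins
            q = q + 1e-9
            return entropy(p, q)              # scipy normalises and computes sum(p*log(p/q))

        # Baseline distribution implied by the reference sensor model of the stimulant.
        reference = rng.normal(20.0, 2.0, size=5000)
        bins = np.linspace(10.0, 40.0, 41)

        healthy_sensor = rng.normal(20.0, 2.0, size=500)    # consistent with the baseline
        drifting_sensor = rng.normal(26.0, 2.0, size=500)   # biased / possibly compromised

        print(f"healthy  D_KL = {kl_from_reference(healthy_sensor, reference, bins):.3f}")
        print(f"drifting D_KL = {kl_from_reference(drifting_sensor, reference, bins):.3f}")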

  11. Time-dependent density functional theory (TD-DFT) coupled with reference interaction site model self-consistent field explicitly including spatial electron density distribution (RISM-SCF-SEDD)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yokogawa, D., E-mail: d.yokogawa@chem.nagoya-u.ac.jp; Institute of Transformative Bio-Molecules

    2016-09-07

    The theoretical design of bright bio-imaging molecules is a rapidly progressing approach. However, because of the system size and the required computational accuracy, the number of theoretical studies is, to our knowledge, limited. To overcome these difficulties, we developed a new method based on the reference interaction site model self-consistent field explicitly including spatial electron density distribution and time-dependent density functional theory. We applied it to the calculation of indole and 5-cyanoindole at ground and excited states in gas and solution phases. The changes in the optimized geometries were clearly explained with resonance structures and the Stokes shift was correctly reproduced.

  12. Sustainability assessment through analogical models: The approach of aerobic living-organism

    NASA Astrophysics Data System (ADS)

    Dassisti, Michele

    2014-10-01

    Most scientific discoveries of human beings borrow ideas and inspiration from nature. This point gives the rationale of the sustainability assessment approach presented here, based on the aerobic living-organism (ALO) analogy already developed by the author, which is founded on the basic assumption that it is reasonable and effective to draw an analogy between a system organized by humans (say, a manufacturing system, an enterprise, etc.) and an aerobic living organism for several decision-making scopes. The critical review of the ALO conceptual model already developed is discussed here through an example of an Italian small enterprise manufacturing metal components for civil furniture, to assess its feasibility for sustainability appraisal.

  13. From particle systems to learning processes. Comment on "Collective learning modeling based on the kinetic theory of active particles" by Diletta Burini, Silvana De Lillo, and Livio Gibelli

    NASA Astrophysics Data System (ADS)

    Lachowicz, Mirosław

    2016-03-01

    The very stimulating paper [6] discusses an approach to perception and learning in a large population of living agents. The approach is based on a generalization of kinetic theory methods in which the interactions between agents are described in terms of game theory. Such an approach was already discussed in Ref. [2-4] (see also references therein) in various contexts. The processes of perception and learning are based on the interactions between agents and therefore the general kinetic theory is a suitable tool for modeling them. However the main question that rises is how the perception and learning processes may be treated in the mathematical modeling. How may we precisely deliver suitable mathematical structures that are able to capture various aspects of perception and learning?

  14. On neural networks in identification and control of dynamic systems

    NASA Technical Reports Server (NTRS)

    Phan, Minh; Juang, Jer-Nan; Hyland, David C.

    1993-01-01

    This paper presents a discussion of the applicability of neural networks in the identification and control of dynamic systems. Emphasis is placed on the understanding of how the neural networks handle linear systems and how the new approach is related to conventional system identification and control methods. Extensions of the approach to nonlinear systems are then made. The paper explains the fundamental concepts of neural networks in their simplest terms. Among the topics discussed are feedforward and recurrent networks in relation to the standard state-space and observer models, linear and nonlinear auto-regressive models, linear predictors, one-step-ahead control, and model reference adaptive control for linear and nonlinear systems. Numerical examples are presented to illustrate the application of these important concepts.

  15. Efficient Testing Combining Design of Experiment and Learn-to-Fly Strategies

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.; Brandon, Jay M.

    2017-01-01

    Rapid modeling and efficient testing methods are important in a number of aerospace applications. In this study efficient testing strategies were evaluated in a wind tunnel test environment and combined to suggest a promising approach for both ground-based and flight-based experiments. Benefits of using Design of Experiment techniques, well established in scientific, military, and manufacturing applications are evaluated in combination with newly developing methods for global nonlinear modeling. The nonlinear modeling methods, referred to as Learn-to-Fly methods, utilize fuzzy logic and multivariate orthogonal function techniques that have been successfully demonstrated in flight test. The blended approach presented has a focus on experiment design and identifies a sequential testing process with clearly defined completion metrics that produce increased testing efficiency.

  16. Regionalization of land use impact models for life cycle assessment: Recommendations for their use on the global scale and their applicability to Brazil

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pavan, Ana Laura Raymundo, E-mail: laurarpavan@gmail.com; Ometto, Aldo Roberto; Department of Production Engineering, São Carlos School of Engineering, University of São Paulo, Av. Trabalhador São-Carlense 400, São Carlos 13566-590, SP

    Life Cycle Assessment (LCA) is the main technique for evaluating the environmental impacts of product life cycles. A major challenge in the field of LCA is spatial and temporal differentiation in Life Cycle Impact Assessment (LCIA) methods, especially for impacts resulting from land occupation and land transformation. Land use characterization modeling has advanced considerably over the last two decades and many approaches have recently included crucial aspects such as geographic differentiation. Nevertheless, characterization models have so far not been systematically reviewed and evaluated to determine their applicability to South America. Given that Brazil is the largest country in South America, this paper analyzes the main international characterization models currently available in the literature, with a view to recommending regionalized models applicable on a global scale for land use life cycle impact assessments, and discusses their feasibility for regionalized assessment in Brazil. The analytical methodology involves classification based on the following criteria: midpoint/endpoint approach, scope of application, area of data collection, biogeographical differentiation, definition of recovery time and reference situation; followed by an evaluation of thirteen scientific robustness and environmental relevance subcriteria. In terms of scope of application, 25% of the models were developed for the European context and 50% have a global scope. There is no consensus in the literature about the definition of parameters such as biogeographical differentiation and reference situation, and our review indicates that 35% of the models use ecoregion division while 40% use the concept of potential natural vegetation. Four characterization models show high scores in terms of scientific robustness and environmental relevance. These models are recommended for application in land use life cycle impact assessments, and also to serve as references for the development or adaptation of regional methodological procedures for Brazil. - Highlights: • A discussion is made on performing regionalized impact assessments using spatial differentiation in LCA. • A review is made of 20 characterization models for land use impacts in Life Cycle Impact Assessment. • Four characterization models are recommended according to different land use impact pathways for application in Brazil.

  17. On prognostic models, artificial intelligence and censored observations.

    PubMed

    Anand, S S; Hamilton, P W; Hughes, J G; Bell, D A

    2001-03-01

    The development of prognostic models for assisting medical practitioners with decision making is not a trivial task. Models need to possess a number of desirable characteristics, and few, if any, current modelling approaches based on statistics or artificial intelligence can produce models that display all these characteristics. The inability of modelling techniques to provide truly useful models has led to interest in these models being purely academic in nature. This in turn has resulted in only a very small percentage of the models that have been developed being deployed in practice. On the other hand, new modelling paradigms are being proposed continuously within the machine learning and statistical community, and claims, often based on inadequate evaluation, are made about their superiority over traditional modelling methods. We believe that for new modelling approaches to deliver true net benefits over traditional techniques, an evaluation-centric approach to their development is essential. In this paper we present such an evaluation-centric approach to developing extensions to the basic k-nearest neighbour (k-NN) paradigm. We use standard statistical techniques to enhance the distance metric used and a framework based on evidence theory to obtain a prediction for the target example from the outcomes of the retrieved exemplars. We refer to this new k-NN algorithm as Censored k-NN (Ck-NN). This reflects the enhancements made to k-NN that are aimed at providing a means for handling censored observations within k-NN.
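    The Ck-NN enhancements themselves (the statistically enhanced distance metric and the evidence-theory combination of retrieved exemplars) are not reproduced here; the following minimal sketch only illustrates the distance-weighted k-NN baseline that the paper extends, with made-up data and an assumed helper name knn_predict.

        # Distance-weighted k-NN baseline (illustrative; Ck-NN's enhanced metric and
        # evidence-theory outcome combination are not implemented here).
        import numpy as np

        def knn_predict(X_train, y_train, x_query, k=5, eps=1e-9):
            d = np.linalg.norm(X_train - x_query, axis=1)    # Euclidean distances
            idx = np.argsort(d)[:k]                          # k nearest exemplars
            w = 1.0 / (d[idx] + eps)                         # closer exemplars weigh more
            return float(np.dot(w, y_train[idx]) / w.sum())  # weighted outcome estimate

        # Toy usage with made-up data.
        rng = np.random.default_rng(1)
        X = rng.standard_normal((100, 4))
        y = (X[:, 0] > 0).astype(float)                      # illustrative binary outcome
        print(knn_predict(X, y, rng.standard_normal(4)))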

  18. Design and tolerance analysis of a transmission sphere by interferometer model

    NASA Astrophysics Data System (ADS)

    Peng, Wei-Jei; Ho, Cheng-Fong; Lin, Wen-Lung; Yu, Zong-Ru; Huang, Chien-Yao; Hsu, Wei-Yao

    2015-09-01

    The design of a 6-in, f/2.2 transmission sphere for Fizeau interferometry is presented in this paper. To predict the actual performance during the design phase, we build an interferometer model combined with tolerance analysis in Zemax. Evaluating focus imaging alone is not enough for a double-pass optical system. Thus, we study an interferometer model that includes the system error and the wavefronts reflected from the reference surface and the tested surface. Firstly, we generate a deformation map of the tested surface. Using multiple configurations in Zemax, we obtain the test wavefront and the reference wavefront reflected from the tested surface and the reference surface of the transmission sphere, respectively. According to the theory of interferometry, we subtract the two wavefronts to acquire the phase of the tested surface. Zernike polynomials are applied to convert the map from phase to sag and to remove piston, tilt and power. The restored map is the same as the original map because no system error exists. Secondly, perturbed tolerances including lens fabrication and assembly are considered. The system error occurs because the test and reference beams are no longer perfectly common-path. The restored map is inaccurate once the system error is added. Although the system error can be subtracted by calibration, it should still be controlled within a small range to avoid calibration error. Generally the reference wavefront error, including the system error and the irregularity of the reference surface of a 6-in transmission sphere, is measured to within peak-to-valley (PV) 0.1 λ (λ = 0.6328 µm), which is not easy to achieve. Consequently, it is necessary to predict the value of the system error before manufacture. Finally, a prototype is developed and tested against a reference surface with PV 0.1 λ irregularity.
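    The wavefront subtraction and low-order removal described above can be sketched numerically as follows. This is only a minimal illustration under assumed inputs (a synthetic test wavefront and a Zernike-like basis limited to piston, tilt and power), not the authors' Zemax model.

        # Illustrative only: subtract the reference wavefront from the test wavefront,
        # then remove piston, tilt and power by a least-squares fit over the pupil.
        import numpy as np

        n = 256
        y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
        pupil = x**2 + y**2 <= 1.0

        def remove_piston_tilt_power(phase, mask):
            r2 = x**2 + y**2
            basis = np.stack([np.ones_like(x), x, y, 2.0 * r2 - 1.0], axis=-1)  # lowest Zernike-like terms
            coeffs, *_ = np.linalg.lstsq(basis[mask], phase[mask], rcond=None)
            return np.where(mask, phase - basis @ coeffs, 0.0), coeffs

        # Synthetic wavefronts in waves: a small astigmatism-like surface error plus tilt.
        test_wf = 0.05 * (x**2 - y**2) + 0.02 * x
        ref_wf = 0.02 * x                                     # common-path contribution
        phase = test_wf - ref_wf                              # interferometric subtraction
        restored, low_order = remove_piston_tilt_power(phase, pupil)
        print("PV of restored map (waves):", np.ptp(restored[pupil]))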

  19. Modeling and Simulation Verification, Validation and Accreditation (VV&A): A New Undertaking for the Exploration Systems Mission Directorate

    NASA Technical Reports Server (NTRS)

    Prill, Mark E.

    2005-01-01

    The overall presentation was focused to provide, for the Verification, Validation, and Accreditation (VV&A) session audience, a snapshot review of the Exploration Systems Mission Directorate's (ESMD) investigation into implementation of a modeling and simulation (M&S) VV&A program. The presentation provides some legacy ESMD reference material, including information on the then-current organizational structure and the M&S (Simulation Based Acquisition (SBA)) focus contained therein, to provide a context for the proposed M&S VV&A approach. This reference material briefly highlights the SBA goals and objectives, and outlines FY05 M&S development and implementation consistent with the Subjective Assessment, Constructive Assessment, Operator-in-the-Loop Assessment, Hardware-in-the-Loop Assessment, and In-Service Operations Assessment M&S construct, the NASA Exploration Information Ontology Model (NExIOM) data model, and integration with the Windchill-based Integrated Collaborative Environment (ICE). The presentation then addresses the ESMD team's initial conclusions regarding an M&S VV&A program, summarizes the general VV&A implementation approach anticipated, and outlines some of the recognized VV&A program challenges, all within a broader context of the overarching Integrated Modeling and Simulation (IM&S) environment at both the ESMD and Agency (NASA) levels. The presentation concludes with a status on the current M&S organization's progress to date relative to the recommended IM&S implementation activity.

  20. A Flexible and Accurate Genotype Imputation Method for the Next Generation of Genome-Wide Association Studies

    PubMed Central

    Howie, Bryan N.; Donnelly, Peter; Marchini, Jonathan

    2009-01-01

    Genotype imputation methods are now being widely used in the analysis of genome-wide association studies. Most imputation analyses to date have used the HapMap as a reference dataset, but new reference panels (such as controls genotyped on multiple SNP chips and densely typed samples from the 1,000 Genomes Project) will soon allow a broader range of SNPs to be imputed with higher accuracy, thereby increasing power. We describe a genotype imputation method (IMPUTE version 2) that is designed to address the challenges presented by these new datasets. The main innovation of our approach is a flexible modelling framework that increases accuracy and combines information across multiple reference panels while remaining computationally feasible. We find that IMPUTE v2 attains higher accuracy than other methods when the HapMap provides the sole reference panel, but that the size of the panel constrains the improvements that can be made. We also find that imputation accuracy can be greatly enhanced by expanding the reference panel to contain thousands of chromosomes and that IMPUTE v2 outperforms other methods in this setting at both rare and common SNPs, with overall error rates that are 15%–20% lower than those of the closest competing method. One particularly challenging aspect of next-generation association studies is to integrate information across multiple reference panels genotyped on different sets of SNPs; we show that our approach to this problem has practical advantages over other suggested solutions. PMID:19543373

  1. A mathematical representation of an advanced helicopter for piloted simulator investigations of control system and display variations

    NASA Technical Reports Server (NTRS)

    Aiken, E. W.

    1980-01-01

    A mathematical model of an advanced helicopter is described. The model is suitable for use in control/display research involving piloted simulation. The general design approach for the six degree of freedom equations of motion is to use the full set of nonlinear gravitational and inertial terms of the equations and to express the aerodynamic forces and moments as the reference values and first order terms of a Taylor series expansion about a reference trajectory defined as a function of longitudinal airspeed. Provisions for several different specific and generic flight control systems are included in the model. The logic required to drive various flight control and weapon delivery symbols on a pilot's electronic display is also provided. Finally, the model includes a simplified representation of low altitude wind and turbulence effects. This model was used in a piloted simulator investigation of the effects of control system and display variations for an attack helicopter mission.
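    In general notation (a hedged paraphrase, since the report's exact symbols are not reproduced here), the first-order expansion described above can be written for an aerodynamic force or moment component \(F_a\) as

        \[
        F_a(\mathbf{x},\boldsymbol{\delta}) \approx F_a\big(\mathbf{x}_{\mathrm{ref}}(V),\boldsymbol{\delta}_{\mathrm{ref}}(V)\big)
          + \left.\frac{\partial F_a}{\partial \mathbf{x}}\right|_{\mathrm{ref}}\big(\mathbf{x}-\mathbf{x}_{\mathrm{ref}}(V)\big)
          + \left.\frac{\partial F_a}{\partial \boldsymbol{\delta}}\right|_{\mathrm{ref}}\big(\boldsymbol{\delta}-\boldsymbol{\delta}_{\mathrm{ref}}(V)\big),
        \]

    where \(\mathbf{x}\) is the vehicle state vector, \(\boldsymbol{\delta}\) the control vector, and \(V\) the longitudinal airspeed that parameterizes the reference trajectory, while the gravitational and inertial terms of the equations of motion are retained in full nonlinear form.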

  2. Data Assimilation - Advances and Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Brian J.

    2014-07-30

    This presentation provides an overview of data assimilation (model calibration) for complex computer experiments. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Utilization of surrogate models and empirical adjustment for model form error in code calibration form the basis for the statistical methodology considered. The role of probabilistic code calibration in supporting code validation is discussed. Incorporation of model form uncertainty in rigorous uncertainty quantification (UQ) analyses is also addressed. Design criteria used within a batch sequential design algorithm are introduced for efficiently achieving predictive maturity and improved code calibration. Predictive maturity refers to obtaining stable predictive inference with calibrated computer codes. These approaches allow for augmentation of initial experiment designs for collecting new physical data. A standard framework for data assimilation is presented and techniques for updating the posterior distribution of the state variables based on particle filtering and the ensemble Kalman filter are introduced.
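    As a hedged illustration of one of the update techniques named above, the sketch below implements a stochastic ensemble Kalman filter analysis step with perturbed observations; the state dimension, observation operator and error levels are placeholders rather than values from the presentation.

        # Illustrative stochastic ensemble Kalman filter analysis step (perturbed observations).
        import numpy as np

        def enkf_update(ensemble, H, y_obs, obs_err_std, rng):
            """ensemble: (n_members, n_state); H: (n_obs, n_state); y_obs: (n_obs,)."""
            n_members = ensemble.shape[0]
            X = ensemble - ensemble.mean(axis=0)              # state anomalies
            Y = (H @ ensemble.T).T                            # predicted observations
            Yp = Y - Y.mean(axis=0)                           # observation anomalies
            R = np.eye(len(y_obs)) * obs_err_std**2           # observation error covariance
            Pyy = Yp.T @ Yp / (n_members - 1) + R
            Pxy = X.T @ Yp / (n_members - 1)
            K = Pxy @ np.linalg.inv(Pyy)                      # Kalman gain
            perturbed = y_obs + rng.normal(0.0, obs_err_std, (n_members, len(y_obs)))
            return ensemble + (perturbed - Y) @ K.T           # updated (posterior) ensemble

        # Toy usage: 50 members, 3 state variables, the first of which is observed.
        rng = np.random.default_rng(0)
        prior = rng.normal(1.0, 0.5, size=(50, 3))
        H = np.array([[1.0, 0.0, 0.0]])
        posterior = enkf_update(prior, H, y_obs=np.array([1.2]), obs_err_std=0.1, rng=rng)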

  3. Diffusion-controlled reference material for VOC emissions testing: proof of concept.

    PubMed

    Cox, S S; Liu, Z; Little, J C; Howard-Reed, C; Nabinger, S J; Persily, A

    2010-10-01

    Because of concerns about indoor air quality, there is growing awareness of the need to reduce the rate at which indoor materials and products emit volatile organic compounds (VOCs). To meet consumer demand for low emitting products, manufacturers are increasingly submitting materials to independent laboratories for emissions testing. However, the same product tested by different laboratories can result in very different emissions profiles because of a general lack of test validation procedures. There is a need for a reference material that can be used as a known emissions source and that will have the same emission rate when tested by different laboratories under the same conditions. A reference material was created by loading toluene into a polymethyl pentene film. A fundamental emissions model was used to predict the toluene emissions profile. Measured VOC emissions profiles using small-chamber emissions tests compared reasonably well to the emissions profile predicted using the emissions model, demonstrating the feasibility of the proposed approach to create a diffusion-controlled reference material. To calibrate emissions test chambers and improve the reproducibility of VOC emission measurements among different laboratories, a reference material has been created using a polymer film loaded with a representative VOC. Initial results show that the film's VOC emission profile measured in a conventional test chamber compares well to predictions based on independently determined material/chemical properties and a fundamental emissions model. The use of such reference materials has the potential to build consensus and confidence in emissions testing as well as 'level the playing field' for product testing laboratories and manufacturers.

  4. Business Process Modelling is an Essential Part of a Requirements Analysis. Contribution of EFMI Primary Care Working Group.

    PubMed

    de Lusignan, S; Krause, P; Michalakidis, G; Vicente, M Tristan; Thompson, S; McGilchrist, M; Sullivan, F; van Royen, P; Agreus, L; Desombre, T; Taweel, A; Delaney, B

    2012-01-01

    To perform a requirements analysis of the barriers to conducting research linking primary care, genetic and cancer data. We extended our initial data-centric approach to include socio-cultural and business requirements. We created reference models of core data requirements common to most studies using unified modelling language (UML), dataflow diagrams (DFD) and business process modelling notation (BPMN). We conducted a stakeholder analysis and constructed DFD and UML diagrams for use cases based on simulated research studies. We used research output as a sensitivity analysis. Differences between the reference model and the use cases identified study-specific data requirements. The stakeholder analysis identified: tensions, changes in specification, some indifference from data providers and enthusiastic informaticians urging inclusion of socio-cultural context. We identified requirements to collect information at three levels: micro (data items, which need to be semantically interoperable), meso (the medical record and data extraction), and macro (the health system and socio-cultural issues). BPMN clarified complex business requirements among data providers and vendors, and additional geographical requirements for patients to be represented in both linked datasets. High quality research output was the norm for most repositories. Reference models provide high-level schemata of the core data requirements. However, business requirements modelling identifies stakeholder issues and what needs to be addressed to enable participation.

  5. Least Squares Solution of Small Sample Multiple-Master PSInSAR System

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Ding, Xiao Li; Lu, Zhong

    2010-03-01

    In this paper we propose a least squares based approach for multi-temporal SAR interferometry that allows the deformation rate to be estimated without phase unwrapping. The approach utilizes a series of multi-master wrapped differential interferograms with short baselines and focuses only on the arcs constructed between two nearby points at which there are no phase ambiguities. During the estimation an outlier detector is used to identify and remove the arcs with phase ambiguities, and the pseudoinverse of the a priori variance component matrix is taken as the weight of the correlated observations in the model. The parameters at the points can be obtained by an indirect adjustment model with constraints when several reference points are available. The proposed approach is verified with a set of simulated data.
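    The weighted least-squares step described above can be sketched as follows; the design matrix, variance components and data are placeholders (the paper's arc construction, outlier detection and constraint handling are not reproduced).

        # Illustrative weighted least squares with the pseudoinverse of an a priori
        # cofactor matrix as the weight; design matrix and data are placeholders.
        import numpy as np

        def weighted_lsq(A, l, Q):
            """Estimate x from l = A x + e, where e has (possibly singular) cofactor matrix Q."""
            W = np.linalg.pinv(Q)                      # pseudoinverse as weight matrix
            x_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ l)
            return x_hat, l - A @ x_hat                # estimates and residuals

        # Toy example: 6 arc observations, 2 unknowns (e.g., deformation rate and a DEM error term).
        rng = np.random.default_rng(2)
        A = rng.standard_normal((6, 2))
        Q = np.diag([1.0, 1.0, 2.0, 2.0, 4.0, 4.0])    # a priori variances (correlations omitted)
        x_true = np.array([0.003, 0.5])
        l = A @ x_true + rng.multivariate_normal(np.zeros(6), 0.01 * Q)
        x_hat, v = weighted_lsq(A, l, Q)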

  6. Matching Real and Synthetic Panoramic Images Using a Variant of Geometric Hashing

    NASA Astrophysics Data System (ADS)

    Li-Chee-Ming, J.; Armenakis, C.

    2017-05-01

    This work demonstrates an approach to automatically initialize a visual model-based tracker, and recover from lost tracking, without prior camera pose information. These approaches are commonly referred to as tracking-by-detection. Previous tracking-by-detection techniques used either fiducials (i.e. landmarks or markers) or the object's texture. The main contribution of this work is the development of a tracking-by-detection algorithm that is based solely on natural geometric features. A variant of geometric hashing, a model-to-image registration algorithm, is proposed that searches for a matching panoramic image from a database of synthetic panoramic images captured in a 3D virtual environment. The approach identifies corresponding features between the matched panoramic images. The corresponding features are to be used in a photogrammetric space resection to estimate the camera pose. The experiments apply this algorithm to initialize a model-based tracker in an indoor environment using the 3D CAD model of the building.

  7. Electrophysiological Responses to Expectancy Violations in Semantic and Gambling Tasks: A Comparison of Different EEG Reference Approaches

    PubMed Central

    Li, Ya; Wang, Yongchun; Zhang, Baoqiang; Wang, Yonghui; Zhou, Xiaolin

    2018-01-01

    Dynamically evaluating the outcomes of our actions and thoughts is a fundamental cognitive ability. Given its excellent temporal resolution, the event-related potential (ERP) technology has been used to address this issue. The feedback-related negativity (FRN) component of ERPs has been studied intensively with the averaged linked mastoid reference method (LM). However, it is unknown whether FRN can be induced by an expectancy violation in an antonym relations context and whether LM is the most suitable reference approach. To address these issues, the current research directly compared the ERP components induced by expectancy violations in antonym expectation and gambling tasks with a within-subjects design and investigated the effect of the reference approach on the experimental effects. Specifically, we systematically compared the influence of the LM, reference electrode standardization technique (REST) and average reference (AVE) approaches on the amplitude, scalp distribution and magnitude of ERP effects as a function of expectancy violation type. The expectancy deviation in the antonym expectation task elicited an N400 effect that differed from the FRN effect induced in the gambling task; this difference was confirmed by all the three reference methods. Both the amplitudes of the ERP effects (N400 and FRN) and the magnitude as the expectancy violation increased were greater under the LM approach than those under the REST approach, followed by those under the AVE approach. Based on the statistical results, the electrode sites that showed the N400 and FRN effects critically depended on the reference method, and the results of the REST analysis were consistent with previous ERP studies. Combined with evidence from simulation studies, we suggest that REST is an optional reference method to be used in future ERP data analysis. PMID:29615858
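    Two of the reference schemes compared above reduce to simple array operations, sketched below with placeholder channel labels; REST requires a volume-conductor head model and is therefore not reproduced here.

        # Illustrative re-referencing: linked mastoids (LM) and average reference (AVE).
        import numpy as np

        def rereference_linked_mastoids(data, ch_names, mastoids=("M1", "M2")):
            """data: (n_channels, n_samples); subtract the mean of the two mastoid channels."""
            m_idx = [ch_names.index(m) for m in mastoids]
            return data - data[m_idx].mean(axis=0, keepdims=True)

        def rereference_average(data):
            """Subtract the instantaneous mean over all channels (average reference)."""
            return data - data.mean(axis=0, keepdims=True)

        # Toy usage with placeholder channel labels.
        ch_names = ["Fz", "Cz", "Pz", "M1", "M2"]
        eeg = np.random.default_rng(3).standard_normal((5, 1000))
        eeg_lm = rereference_linked_mastoids(eeg, ch_names)
        eeg_ave = rereference_average(eeg)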

  8. Estimating daily climatologies for climate indices derived from climate model data and observations

    PubMed Central

    Mahlstein, Irina; Spirig, Christoph; Liniger, Mark A; Appenzeller, Christof

    2015-01-01

    Climate indices help to describe the past, present, and future climate. They are usually more closely related to possible impacts and are therefore more illustrative to users than simple climate means. Indices are often based on daily data series and thresholds. It is shown that percentile-based thresholds are sensitive to the method of computation, and so are the climatological daily mean and the daily standard deviation, which are used for bias corrections of daily climate model data. Sample size issues of either the observed reference period or the model data lead to uncertainties in these estimations. A large number of past ensemble seasonal forecasts, called hindcasts, is used to explore these sampling uncertainties and to compare two different approaches. Based on a perfect model approach it is shown that a fitting approach can substantially improve the estimates of daily climatologies of percentile-based thresholds over land areas, as well as of the mean and the variability. These improvements are relevant for bias removal in long-range forecasts or predictions of climate indices based on percentile thresholds. The method also shows potential for use in climate change studies. Key points: more robust estimates of daily climate characteristics; statistical fitting approach; based on a perfect model approach. PMID:26042192
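    A minimal sketch of the two estimation routes being compared, an empirical percentile versus a percentile from a fitted distribution, is given below; a normal fit and a 30-value sample are assumed purely for illustration and may differ from the paper's fitting approach.

        # Illustrative comparison of an empirical percentile threshold with one from a
        # fitted distribution (a normal fit is assumed here; the paper's fit may differ).
        import numpy as np
        from scipy import stats

        def empirical_threshold(sample, q=90):
            return np.percentile(sample, q)

        def fitted_threshold(sample, q=90):
            mu, sigma = stats.norm.fit(sample)
            return stats.norm.ppf(q / 100.0, loc=mu, scale=sigma)

        rng = np.random.default_rng(4)
        truth = stats.norm.ppf(0.9, loc=15.0, scale=5.0)         # "true" 90th percentile
        errs_emp, errs_fit = [], []
        for _ in range(1000):                                     # repeated small samples
            sample = rng.normal(15.0, 5.0, size=30)               # e.g., 30 daily values
            errs_emp.append(empirical_threshold(sample) - truth)
            errs_fit.append(fitted_threshold(sample) - truth)
        print("RMSE empirical:", np.sqrt(np.mean(np.square(errs_emp))))
        print("RMSE fitted   :", np.sqrt(np.mean(np.square(errs_fit))))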

  9. Establishing gene models from the Pinus pinaster genome using gene capture and BAC sequencing.

    PubMed

    Seoane-Zonjic, Pedro; Cañas, Rafael A; Bautista, Rocío; Gómez-Maldonado, Josefa; Arrillaga, Isabel; Fernández-Pozo, Noé; Claros, M Gonzalo; Cánovas, Francisco M; Ávila, Concepción

    2016-02-27

    In the era of high-throughput DNA sequencing, assembling and understanding gymnosperm mega-genomes remains a challenge. Although drafts of three conifer genomes have recently been published, this number is too low to understand the full complexity of conifer genomes. Using techniques focused on specific genes, gene models can be established that can aid in the assembly of gene-rich regions, and this information can be used to compare genomes and understand functional evolution. In this study, gene capture technology combined with BAC isolation and sequencing was used as an experimental approach to establish de novo gene structures without a reference genome. Probes were designed for 866 maritime pine transcripts to sequence genes captured from genomic DNA. The gene models were constructed using GeneAssembler, a new bioinformatic pipeline, which reconstructed over 82% of the gene structures, and a high proportion (85%) of the captured gene models contained sequences from the promoter regulatory region. In a parallel experiment, the P. pinaster BAC library was screened to isolate clones containing genes whose cDNA sequences were already available. BAC clones containing the asparagine synthetase, sucrose synthase and xyloglucan endotransglycosylase gene sequences were isolated and used in this study. The gene models derived from the gene capture approach were compared with the genomic sequences derived from the BAC clones. This combined approach is a particularly efficient way to capture the genomic structures of gene families with a small number of members. The experimental approach used in this study is a valuable combined technique to study genomic gene structures in species for which a reference genome is unavailable. It can be used to establish exon/intron boundaries in unknown gene structures, to reconstruct incomplete genes and to obtain promoter sequences that can be used for transcriptional studies. A bioinformatics algorithm (GeneAssembler) is also provided as a Ruby gem for this class of analyses.

  10. The hadronic interaction model EPOS

    NASA Astrophysics Data System (ADS)

    Werner, Klaus

    2008-01-01

    EPOS is a sophisticated multiple scattering approach based on partons and Pomerons (parton ladders), with special emphasis on high parton densities. The latter aspect, particularly important in proton-nucleus or nucleus-nucleus collisions, is taken care of via an effective treatment of Pomeron-Pomeron interactions, referred to as parton ladder splitting. In addition, collective effects are introduced after separating the high density central core from the peripheral corona. EPOS is the successor of the NEXUS model.

  11. A hybrid multigroup neutron-pattern model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pogosbekyan, L.R.; Lysov, D.A.

    In this paper, we use the general approach to construct a multigroup hybrid model for the neutron pattern. The equations are given together with a reasonably economic and simple iterative method of solving them. The algorithm can be used to calculate the pattern and the functionals as well as to correct the constants from the experimental data and to adapt the support over the constants to the engineering programs by reference to precision ones.

  12. NASA: Model development for human factors interfacing

    NASA Technical Reports Server (NTRS)

    Smith, L. L.

    1984-01-01

    The results of an intensive literature review in the general topics of human error analysis, stress and job performance, and accident and safety analysis revealed no usable techniques or approaches for analyzing human error in ground or space operations tasks. A task review model is described and proposed to be developed in order to reduce the degree of labor intensiveness in ground and space operations tasks. An extensive number of annotated references are provided.

  13. Survey of air cargo forecasting techniques

    NASA Technical Reports Server (NTRS)

    Kuhlthan, A. R.; Vermuri, R. S.

    1978-01-01

    Forecasting techniques currently in use in estimating or predicting the demand for air cargo in various markets are discussed with emphasis on the fundamentals of the different forecasting approaches. References to specific studies are cited when appropriate. The effectiveness of current methods is evaluated and several prospects for future activities or approaches are suggested. Appendices contain summary-type analyses of about 50 specific publications on forecasting, and selected bibliographies on air cargo forecasting, air passenger demand forecasting, and general demand and modal-split modeling.

  14. Evaluation of uncertainty in the adjustment of fundamental constants

    NASA Astrophysics Data System (ADS)

    Bodnar, Olha; Elster, Clemens; Fischer, Joachim; Possolo, Antonio; Toman, Blaza

    2016-02-01

    Combining multiple measurement results for the same quantity is an important task in metrology and in many other areas. Examples include the determination of fundamental constants, the calculation of reference values in interlaboratory comparisons, or the meta-analysis of clinical studies. However, neither the GUM nor its supplements give any guidance for this task. Various approaches are applied such as weighted least-squares in conjunction with the Birge ratio or random effects models. While the former approach, which is based on a location-scale model, is particularly popular in metrology, the latter represents a standard tool used in statistics for meta-analysis. We investigate the reliability and robustness of the location-scale model and the random effects model with particular focus on resulting coverage or credible intervals. The interval estimates are obtained by adopting a Bayesian point of view in conjunction with a non-informative prior that is determined by a currently favored principle for selecting non-informative priors. Both approaches are compared by applying them to simulated data as well as to data for the Planck constant and the Newtonian constant of gravitation. Our results suggest that the proposed Bayesian inference based on the random effects model is more reliable and less sensitive to model misspecifications than the approach based on the location-scale model.
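    The weighted least-squares combination with the Birge ratio mentioned above can be sketched as follows; the Bayesian random-effects analysis of the paper is not reproduced, and the numerical values below are hypothetical.

        # Illustrative weighted-mean combination with a Birge-ratio consistency check.
        import numpy as np

        def weighted_mean_birge(x, u):
            """x: measured values, u: their standard uncertainties (same length)."""
            w = 1.0 / u**2
            xbar = np.sum(w * x) / np.sum(w)                 # weighted mean
            u_int = np.sqrt(1.0 / np.sum(w))                 # internal uncertainty
            birge = np.sqrt(np.sum(w * (x - xbar) ** 2) / (len(x) - 1))
            u_adj = u_int * max(birge, 1.0)                  # inflate if data are overdispersed
            return xbar, u_int, birge, u_adj

        # Hypothetical interlaboratory values (not real constants data).
        x = np.array([6.67430, 6.67408, 6.67455, 6.67380])
        u = np.array([0.00015, 0.00024, 0.00013, 0.00020])
        print(weighted_mean_birge(x, u))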

  15. Fusion of Location Fingerprinting and Trilateration Based on the Example of Differential Wi-Fi Positioning

    NASA Astrophysics Data System (ADS)

    Retscher, G.

    2017-09-01

    Positioning of mobile users in indoor environments with Wireless Fidelity (Wi-Fi) has become very popular, whereby location fingerprinting and trilateration are the most commonly employed methods. In both, the received signal strength (RSS) values of the surrounding access points (APs) are scanned and used to estimate the user's position. Within the scope of this study the advantageous qualities of both methods are identified and selected to benefit their combination. By a fusion of these technologies a higher performance for Wi-Fi positioning is achievable. For that purpose, a novel approach based on the well-known Differential GPS (DGPS) principle of operation is developed and applied. This approach for user localization and tracking is termed Differential Wi-Fi (DWi-Fi) by analogy with DGPS. From reference stations deployed in the area of interest, differential measurement corrections are derived and applied at the mobile user side. Hence, range or coordinate corrections can be estimated from a network of reference station observations, as is done in common CORS GNSS networks. A low-cost realization with Raspberry Pi units is employed for these reference stations. These units serve at the same time as APs broadcasting Wi-Fi signals and as reference stations scanning the receivable Wi-Fi signals of the surrounding APs. As the RSS measurements are carried out continuously at the reference stations, dynamically changing maps of RSS distributions, so-called radio maps, are derived. As in location fingerprinting, these radio maps represent the RSS fingerprints at certain locations. From the areal modelling of the correction parameters in combination with the dynamically updated radio maps, the location of the user can be estimated in real time. The novel approach is presented and its performance demonstrated in this paper.
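    A minimal sketch of the DGPS-style differential idea described above is given below: a reference station compares its live RSS readings with its stored radio-map entry and broadcasts per-AP corrections that the mobile user applies before fingerprint matching. The AP identifiers, dBm values and function names are assumptions for illustration, not values from the study.

        # Illustrative DGPS-style RSS corrections (AP names and dBm values are made up).
        def rss_corrections(reference_live, reference_map):
            """Per-AP correction = stored radio-map RSS minus currently measured RSS."""
            return {ap: reference_map[ap] - rss
                    for ap, rss in reference_live.items() if ap in reference_map}

        def apply_corrections(user_scan, corrections):
            """Shift the user's scan by the broadcast corrections before fingerprint matching."""
            return {ap: rss + corrections.get(ap, 0.0) for ap, rss in user_scan.items()}

        ref_map = {"AP1": -55.0, "AP2": -68.0, "AP3": -72.0}    # radio-map values at the station
        ref_live = {"AP1": -58.0, "AP2": -66.0, "AP3": -75.0}   # what the station measures now
        user_scan = {"AP1": -61.0, "AP2": -70.0, "AP3": -80.0}  # uncorrected user scan
        print(apply_corrections(user_scan, rss_corrections(ref_live, ref_map)))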

  16. A comprehensive approach to identify reliable reference gene candidates to investigate the link between alcoholism and endocrinology in Sprague-Dawley rats.

    PubMed

    Taki, Faten A; Abdel-Rahman, Abdel A; Zhang, Baohong

    2014-01-01

    Gender and hormonal differences are often correlated with alcohol dependence and related complications like addiction and breast cancer. Estrogen (E2) is an important sex hormone because it serves as a key protein involved in organism-level signaling pathways. Alcoholism has been reported to affect estrogen receptor signaling; however, identifying the players involved in such a multi-faceted syndrome is complex and requires an interdisciplinary approach. In many situations, preliminary investigations include straightforward yet informative biotechniques such as gene expression analyses using quantitative real-time PCR (qRT-PCR). The validity of qRT-PCR-based conclusions is affected by the choice of reliable internal controls. With this in mind, we compiled a list of 15 commonly used housekeeping genes (HKGs) as potential reference gene candidates in rat biological models. A comprehensive comparison among five statistical approaches (geNorm, dCt method, NormFinder, BestKeeper, and RefFinder) was performed to identify the minimal number as well as the most stable reference genes required for reliable normalization in experimental rat groups that comprised sham operated (SO) rats and ovariectomized rats in the absence (OVX) or presence of E2 (OVXE2). These rat groups were subdivided into subgroups that received alcohol in a liquid diet or an isocaloric control liquid diet for 12 weeks. Our results showed that U87, 5S rRNA, GAPDH, and U5a were the most reliable gene candidates for reference genes in heart and brain tissue. However, a different gene stability ranking was specific for each tissue input combination. The present preliminary findings highlight the variability in reference gene rankings across different experimental conditions and analytic methods and constitute a fundamental step for gene expression assays.
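    As one illustrative piece of the comparison above, the sketch below computes a geNorm-style stability value M (the average standard deviation of pairwise log2 expression ratios); the other four ranking approaches and the real expression data are not reproduced, and the toy values are made up.

        # Illustrative geNorm-style stability value M: the average standard deviation of
        # pairwise log2 expression ratios across samples (lower M = more stable candidate).
        import numpy as np

        def genorm_m_values(expr):
            """expr: (n_genes, n_samples) relative expression quantities on a linear scale."""
            log_expr = np.log2(expr)
            n_genes = expr.shape[0]
            M = np.zeros(n_genes)
            for j in range(n_genes):
                sds = [np.std(log_expr[j] - log_expr[k], ddof=1)
                       for k in range(n_genes) if k != j]
                M[j] = np.mean(sds)
            return M

        # Toy data: 4 candidate reference genes x 12 samples with different variabilities.
        rng = np.random.default_rng(5)
        scales = np.array([0.1, 0.2, 0.5, 0.15])[:, None]
        expr = 2.0 ** rng.normal(loc=5.0, scale=scales, size=(4, 12))
        print(genorm_m_values(expr))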

  17. An Equivalent cross-section Framework for improving computational efficiency in Distributed Hydrologic Modelling

    NASA Astrophysics Data System (ADS)

    Khan, Urooj; Tuteja, Narendra; Ajami, Hoori; Sharma, Ashish

    2014-05-01

    While the potential uses and benefits of distributed catchment simulation models are undeniable, their practical usage is often hindered by the computational resources they demand. To reduce the computational time/effort in distributed hydrological modelling, a new approach of modelling over an equivalent cross-section is investigated, where topographical and physiographic properties of first-order sub-basins are aggregated to constitute modelling elements. To formulate an equivalent cross-section, a homogenization test is conducted to assess the loss in accuracy when averaging topographic and physiographic variables, i.e. length, slope, soil depth and soil type. The homogenization test indicates that the accuracy lost in weighting the soil type is greatest; therefore it needs to be weighted in a systematic manner to formulate equivalent cross-sections. If the soil type remains the same within the sub-basin, a single equivalent cross-section is formulated for the entire sub-basin. If the soil type follows a specific pattern, i.e. different soil types near the centre of the river, the middle of the hillslope and the ridge line, three equivalent cross-sections (left bank, right bank and head water) are required. If the soil types are complex and do not follow any specific pattern, multiple equivalent cross-sections are required based on the number of soil types. The equivalent cross-sections are formulated for a series of first-order sub-basins by implementing different weighting methods for the topographic and physiographic variables of landforms within the entire or part of a hillslope. The formulated equivalent cross-sections are then simulated using a 2-dimensional, Richards' equation based distributed hydrological model. The simulated fluxes are multiplied by the weighted area of each equivalent cross-section to calculate the total fluxes from the sub-basins. The simulated fluxes include horizontal flow, transpiration, soil evaporation, deep drainage and soil moisture. To assess the accuracy of the equivalent cross-section approach, the sub-basins are also divided into equally spaced multiple hillslope cross-sections. These cross-sections are simulated in a fully distributed setting using the 2-dimensional, Richards' equation based distributed hydrological model. The simulated fluxes are multiplied by the contributing area of each cross-section to obtain the total fluxes from each sub-basin, referred to as reference fluxes. The equivalent cross-section approach is investigated for seven first-order sub-basins of the McLaughlin catchment of the Snowy River, NSW, Australia, and evaluated in the Wagga-Wagga experimental catchment. Our results show that the simulated fluxes using the equivalent cross-section approach are very close to the reference fluxes, whereas the computational time is reduced by a factor of ~4 to ~22 in comparison to the fully distributed setting. The transpiration and soil evaporation are the dominant fluxes and constitute ~85% of actual rainfall. Overall, the accuracy achieved in the dominant fluxes is higher than in the other fluxes. The simulated soil moisture from the equivalent cross-section approach is compared with in-situ soil moisture observations in the Wagga-Wagga experimental catchment in NSW, and the results are found to be consistent. Our results illustrate that the equivalent cross-section approach reduces the computational time significantly while maintaining the same order of accuracy in predicting the hydrological fluxes. As a result, this approach provides great potential for the implementation of distributed hydrological models at regional scales.
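    The aggregation step described above, multiplying per-cross-section fluxes by their weighted contributing areas to obtain sub-basin totals, can be sketched as follows; the flux names, areas and depths are illustrative assumptions, not values from the study.

        # Illustrative aggregation of per-cross-section fluxes to sub-basin totals.
        def subbasin_totals(cross_sections):
            """cross_sections: list of dicts with 'area_km2' and a 'fluxes' dict of depths in mm."""
            totals = {}
            for xs in cross_sections:
                for name, depth_mm in xs["fluxes"].items():
                    # depth [mm] x area [km^2] x 1e3 -> volume [m^3]
                    totals[name] = totals.get(name, 0.0) + depth_mm * xs["area_km2"] * 1e3
            return totals

        left_bank = {"area_km2": 3.2, "fluxes": {"transpiration": 410.0, "deep_drainage": 35.0}}
        right_bank = {"area_km2": 2.7, "fluxes": {"transpiration": 380.0, "deep_drainage": 48.0}}
        headwater = {"area_km2": 1.1, "fluxes": {"transpiration": 440.0, "deep_drainage": 20.0}}
        print(subbasin_totals([left_bank, right_bank, headwater]))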

  18. Measuring Emergent Organizational Properties: A Structural Equation Modeling Test of Self- versus Group-Referent Perceptions

    ERIC Educational Resources Information Center

    Goddard, Roger D.; LoGerfo, Laura F.

    2007-01-01

    This article presents a theoretical rationale and empirical evidence regarding the validity of scores obtained from two competing approaches to operationalizing scale items to measure emergent organizational properties. The authors consider whether items in scales intended to measure organizational properties should prompt survey takers to provide…

  19. An Electronic Service Quality Reference Model for Designing E-Commerce Websites Which Maximizes Customer Satisfaction

    ERIC Educational Resources Information Center

    Shaheen, Amer N.

    2011-01-01

    This research investigated Electronic Service Quality (E-SQ) features that contribute to customer satisfaction in an online environment. The aim was to develop an approach which improves E-CRM processes and enhances online customer satisfaction. The research design adopted mixed methods involving qualitative and quantitative methods to…

  20. A System Dynamics Approach to Modelling the Degradation of Biochemical Oxygen Demand in A Constructed Wetland Receiving Stormwater Runoff

    DTIC Science & Technology

    1995-12-01

    are often collectively referred to as bacteria or prokaryotes. The eukaryote group consists of plants, animals and protists (algae, fungi and...primary microorganisms important in the treatment of wastewaters are the protists from the eukaryote group, and bacteria, or prokaryotes. Bacteria are

  1. A Potted History of PPP with the Help of "ELT Journal"

    ERIC Educational Resources Information Center

    Anderson, Jason

    2017-01-01

    This article charts the chequered history of the PPP model (Presentation, Practice, Production) in English language teaching, told partly through reference to articles in "ELT Journal." As well as documenting its origins at the dawn of communicative language teaching (and not in audiolingual approaches, as some have suggested), I chart…

  2. Towards Model-Driven End-User Development in CALL

    ERIC Educational Resources Information Center

    Farmer, Rod; Gruba, Paul

    2006-01-01

    The purpose of this article is to introduce end-user development (EUD) processes to the CALL software development community. EUD refers to the active participation of end-users, as non-professional developers, in the software development life cycle. Unlike formal software engineering approaches, the focus in EUD on means/ends development is…

  3. Using Landscape Hierarchies To Guide Restoration Of Disturbed Ecosystems

    Treesearch

    Brian J. Palik; Charles P. Goebel; Katherine L. Kirkman; Larry West

    2000-01-01

    Reestablishing native plant communities is an important focus of ecosystem restoration. In complex landscapes containing a diversity of ecosystem types, restoration requires a set of reference vegetation conditions for the ecosystems of concern, and a predictive model to relate plant community composition to physical variables. Restoration also requires an approach for...

  4. A GENERATIVE SKETCH OF BURMESE.

    ERIC Educational Resources Information Center

    BURLING, ROBBINS

    ASSUMING THAT A GENERATIVE APPROACH PROVIDES A FAIRLY DIRECT AND SIMPLE DESCRIPTION OF LINGUISTIC DATA, THE AUTHOR TAKES A TRADITIONAL BURMESE GRAMMAR (W. CORNYN'S "OUTLINE OF BURMESE GRAMMAR," REFERRED TO AS OBG THROUGHOUT THE PAPER) AND REWORKS IT INTO A GENERATIVE FRAMEWORK BASED ON A MODEL BY CHOMSKY. THE STUDY IS DIVIDED INTO FIVE SECTIONS,…

  5. Presentation-Practice-Production and Task-Based Learning in the Light of Second Language Learning Theories.

    ERIC Educational Resources Information Center

    Ritchie, Graeme

    2003-01-01

    Features of presentation-practice-production (PPP) and task-based learning (TBL) models for language teaching are discussed with reference to language learning theories. Pre-selection of target structures, use of controlled repetition, and explicit grammar instruction in a PPP lesson are given. Suggests TBL approaches afford greater learning…

  6. The Controversial Classroom: Institutional Resources and Pedagogical Strategies for a Race Relations Course.

    ERIC Educational Resources Information Center

    Wahl, Ana-Maria; Perez, Eduardo T.; Deegan, Mary Jo; Sanchez, Thomas W.; Applegate, Cheryl

    2000-01-01

    Offers a model for a collective strategy that can be used to deal more effectively with problems associated with race relations courses. Presents a multidimensional analysis of the constraints that create problems for race relations instructors and highlights a multidimensional approach to minimizing these problems. Includes references. (CMK)

  7. Reflective Lesson Planning in Refresher Training Programs for Experienced Physics Teachers.

    ERIC Educational Resources Information Center

    Chung, C. M.; And Others

    1995-01-01

    Reports on a refresher training program that introduces experienced physics teachers to a reflective lesson-planning model and a more constructivist approach to physics teaching. Three instructional strategies developed by participants in the program and the corresponding suggestions made by their peers are presented and analyzed. (29 references)…

  8. Integrated driver modelling considering state transition feature for individual adaptation of driver assistance systems

    NASA Astrophysics Data System (ADS)

    Raksincharoensak, Pongsathorn; Khaisongkram, Wathanyoo; Nagai, Masao; Shimosaka, Masamichi; Mori, Taketoshi; Sato, Tomomasa

    2010-12-01

    This paper describes the modelling of naturalistic driving behaviour in real-world traffic scenarios, based on driving data collected via an experimental automobile equipped with a continuous sensing drive recorder. This paper focuses on the longitudinal driving situations which are classified into five categories - car following, braking, free following, decelerating and stopping - and are referred to as driving states. Here, the model is assumed to be represented by a state flow diagram. Statistical machine learning of driver-vehicle-environment system model based on driving database is conducted by a discriminative modelling approach called boosting sequential labelling method.

  9. The results of a limited study of approaches to the design, fabrication, and testing of a dynamic model of the NASA IOC space station. Executive summary

    NASA Technical Reports Server (NTRS)

    Brooks, George W.

    1985-01-01

    The options for the design, construction, and testing of a dynamic model of the space station were evaluated. Since the definition of the space station structure is still evolving, the Initial Operating Capacity (IOC) reference configuration was used as the general guideline. The results of the studies treat: general considerations of the need for and use of a dynamic model; factors which deal with the model design and construction; and a proposed system for supporting the dynamic model in the planned Large Spacecraft Laboratory.

  10. An integrative formal model of motivation and decision making: The MGPM*.

    PubMed

    Ballard, Timothy; Yeo, Gillian; Loft, Shayne; Vancouver, Jeffrey B; Neal, Andrew

    2016-09-01

    We develop and test an integrative formal model of motivation and decision making. The model, referred to as the extended multiple-goal pursuit model (MGPM*), is an integration of the multiple-goal pursuit model (Vancouver, Weinhardt, & Schmidt, 2010) and decision field theory (Busemeyer & Townsend, 1993). Simulations of the model generated predictions regarding the effects of goal type (approach vs. avoidance), risk, and time sensitivity on prioritization. We tested these predictions in an experiment in which participants pursued different combinations of approach and avoidance goals under different levels of risk. The empirical results were consistent with the predictions of the MGPM*. Specifically, participants pursuing 1 approach and 1 avoidance goal shifted priority from the approach to the avoidance goal over time. Among participants pursuing 2 approach goals, those with low time sensitivity prioritized the goal with the larger discrepancy, whereas those with high time sensitivity prioritized the goal with the smaller discrepancy. Participants pursuing 2 avoidance goals generally prioritized the goal with the smaller discrepancy. Finally, all of these effects became weaker as the level of risk increased. We used quantitative model comparison to show that the MGPM* explained the data better than the original multiple-goal pursuit model, and that the major extensions from the original model were justified. The MGPM* represents a step forward in the development of a general theory of decision making during multiple-goal pursuit. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  11. Evaluating the adequacy of a reference site pool for ecological assessments in environmentally complex regions

    USGS Publications Warehouse

    Ode, Peter R.; Rehn, Andrew C.; Mazor, Raphael D.; Schiff, Kenneth C.; Stein, Eric D.; May, Jason; Brown, Larry R.; Herbst, David B.; Gillette, D.D.; Lunde, Kevin; Hawkins, Charles P.

    2016-01-01

    Many advances in the field of bioassessment have focused on approaches for objectively selecting the pool of reference sites used to establish expectations for healthy waterbodies, but little emphasis has been placed on ways to evaluate the suitability of the reference-site pool for its intended applications (e.g., compliance assessment vs ambient monitoring). These evaluations are critical because an inadequately evaluated reference pool may bias assessments in some settings. We present an approach for evaluating the adequacy of a reference-site pool for supporting biotic-index development in environmentally heterogeneous and pervasively altered regions. We followed common approaches for selecting sites with low levels of anthropogenic stress to screen 1985 candidate stream reaches to create a pool of 590 reference sites for assessing the biological integrity of streams in California, USA. We assessed the resulting pool of reference sites against 2 performance criteria. First, we evaluated how well the reference-site pool represented the range of natural gradients present in the entire population of streams as estimated by sites sampled through probabilistic surveys. Second, we evaluated the degree to which we were successful in rejecting sites influenced by anthropogenic stress by comparing biological metric scores at reference sites with the most vs fewest potential sources of stress. Using this approach, we established a reference-site pool with low levels of human-associated stress and broad coverage of environmental heterogeneity. This approach should be widely applicable and customizable to particular regional or programmatic needs.

  12. The openEHR Java reference implementation project.

    PubMed

    Chen, Rong; Klein, Gunnar

    2007-01-01

    The openEHR foundation has developed an innovative design for interoperable and future-proof Electronic Health Record (EHR) systems based on a dual model approach with a stable reference information model complemented by archetypes for specific clinical purposes. A team from Sweden has implemented all the stable specifications in the Java programming language and donated the source code to the openEHR foundation. It was adopted as the openEHR Java Reference Implementation in March 2005 and released under open source licenses. This encourages early EHR implementation projects around the world and a number of groups have already started to use this code. The early Java implementation experience has also led to the publication of the openEHR Java Implementation Technology Specification. A number of design changes to the specifications and important minor corrections have been directly initiated by the implementation project over the last two years. The Java Implementation has been important for the validation and improvement of the openEHR design specifications and provides building blocks for future EHR systems.

  13. A Review of Calibration Transfer Practices and Instrument Differences in Spectroscopy.

    PubMed

    Workman, Jerome J

    2018-03-01

    Calibration transfer for use with spectroscopic instruments, particularly for near-infrared, infrared, and Raman analysis, has been the subject of multiple articles, research papers, book chapters, and technical reviews. There has been a myriad of approaches published and claims made for resolving the problems associated with transferring calibrations; however, the capability of attaining identical results over time from two or more instruments using an identical calibration still eludes technologists. Calibration transfer, in a precise definition, refers to a series of analytical approaches or chemometric techniques used to attempt to apply a single spectral database, and the calibration model developed using that database, for two or more instruments, with statistically retained accuracy and precision. Ideally, one would develop a single calibration for any particular application, and move it indiscriminately across instruments and achieve identical analysis or prediction results. There are many technical aspects involved in such precision calibration transfer, related to the measuring instrument reproducibility and repeatability, the reference chemical values used for the calibration, the multivariate mathematics used for calibration, and sample presentation repeatability and reproducibility. Ideally, a multivariate model developed on a single instrument would provide a statistically identical analysis when used on other instruments following transfer. This paper reviews common calibration transfer techniques, mostly related to instrument differences, and the mathematics of the uncertainty between instruments when making spectroscopic measurements of identical samples. It does not specifically address calibration maintenance or reference laboratory differences.

  14. Building and using a statistical 3D motion atlas for analyzing myocardial contraction in MRI

    NASA Astrophysics Data System (ADS)

    Rougon, Nicolas F.; Petitjean, Caroline; Preteux, Francoise J.

    2004-05-01

    We address the issue of modeling and quantifying myocardial contraction from 4D MR sequences, and present an unsupervised approach for building and using a statistical 3D motion atlas for the normal heart. This approach relies on a state-of-the-art variational non-rigid registration (NRR) technique using generalized information measures, which allows for robust intra-subject motion estimation and inter-subject anatomical alignment. The atlas is built from a collection of jointly acquired tagged and cine MR exams in short- and long-axis views. Subject-specific non-parametric motion estimates are first obtained by incremental NRR of tagged images onto the end-diastolic (ED) frame. Individual motion data are then transformed into the coordinate system of a reference subject using subject-to-reference mappings derived by NRR of cine ED images. Finally, principal component analysis of the aligned motion data is performed for each cardiac phase, yielding a mean model and a set of eigenfields encoding kinematic variability. The latter define an organ-dedicated hierarchical motion basis which enables parametric motion measurement from arbitrary tagged MR exams. To this end, the atlas is transformed into subject coordinates by reference-to-subject NRR of ED cine frames. Atlas-based motion estimation is then achieved by parametric NRR of tagged images onto the ED frame, yielding a compact description of myocardial contraction during diastole.

  15. Amicus Plato, sed magis amica veritas: plots must obey the laws they refer to and models shall describe biophysical reality!

    PubMed

    Katkov, Igor I

    2011-06-01

    In the companion paper, we discussed in detail proper linearization, calculation of the inactive osmotic volume, and analysis of the results on Boyle-van't Hoff plots. In this Letter, we briefly address some common errors and misconceptions in osmotic modeling and propose some approaches, namely: (1) the inapplicability of the Kedem-Katchalsky formalism model with regard to the cryobiophysical reality, (2) calculation of the membrane hydraulic conductivity L(p) in the presence of permeable solutes, (3) proper linearization of the Arrhenius plots for the solute membrane permeability, (4) the erroneous use of the term "toxicity" for cryoprotective agents, and (5) the advantages of the relativistic permeability (RP) approach developed by us vs. the traditional ("classic") 2-parameter model. Copyright © 2011 Elsevier Inc. All rights reserved.

  16. Singlet-paired coupled cluster theory for open shells

    NASA Astrophysics Data System (ADS)

    Gomez, John A.; Henderson, Thomas M.; Scuseria, Gustavo E.

    2016-06-01

    Restricted single-reference coupled cluster theory truncated to single and double excitations accurately describes weakly correlated systems, but often breaks down in the presence of static or strong correlation. Good coupled cluster energies in the presence of degeneracies can be obtained by using a symmetry-broken reference, such as unrestricted Hartree-Fock, but at the cost of good quantum numbers. A large body of work has shown that modifying the coupled cluster ansatz allows for the treatment of strong correlation within a single-reference, symmetry-adapted framework. The recently introduced singlet-paired coupled cluster doubles (CCD0) method is one such model, which recovers correct behavior for strong correlation without requiring symmetry breaking in the reference. Here, we extend singlet-paired coupled cluster for application to open shells via restricted open-shell singlet-paired coupled cluster singles and doubles (ROCCSD0). The ROCCSD0 approach retains the benefits of standard coupled cluster theory and recovers correct behavior for strongly correlated, open-shell systems using a spin-preserving ROHF reference.

  18. Rapid discrimination and quantification of alkaloids in Corydalis Tuber by near-infrared spectroscopy.

    PubMed

    Lu, Hai-yan; Wang, Shi-sheng; Cai, Rui; Meng, Yu; Xie, Xin; Zhao, Wei-jie

    2012-02-05

    With the application of near-infrared spectroscopy (NIRS), convenient and rapid methods for the determination of alkaloids in Corydalis Tuber extract and for the classification of samples from different locations have been developed. Five different samples were collected according to their geographical origin. Second-derivative (2-Der) pre-treatment with 17 smoothing points was applied to the spectra, the first-to-scaling range algorithm was adopted as the optimal approach, and the classification model was constructed over the wavelength ranges of 4582-4270 cm⁻¹, 5562-4976 cm⁻¹ and 7000-7467 cm⁻¹ with a high recognition rate. For the prediction models, the partial least squares (PLS) algorithm was utilized with an HPLC-UV reference method, and the optimum models were obtained after adjustment. The pre-processing methods for the calibration models were COE for protopine, min-max normalization for palmatine and MSC for tetrahydropalmatine, respectively. The root mean square errors of cross-validation (RMSECV) for protopine, palmatine and tetrahydropalmatine were 0.884, 1.83 and 3.23 mg/g, and the correlation coefficients (R²) were 99.75, 98.41 and 97.34%, respectively. A t-test was applied; for the tetrahydropalmatine model, there was no significant difference between the NIR predictions and the HPLC reference method at the 95% confidence interval (t = 0.746).
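    A minimal sketch of the PLS calibration and cross-validation workflow described above is given below using scikit-learn; the spectra and reference concentrations are synthetic, and the paper's pre-treatments (second derivative, MSC, min-max normalization, COE) are not reproduced.

        # Illustrative PLS calibration with 10-fold cross-validated RMSECV (synthetic data).
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(6)
        n_samples, n_points = 60, 300
        concentration = rng.uniform(1.0, 12.0, n_samples)          # mg/g reference values (e.g., HPLC)
        pure_band = np.exp(-((np.arange(n_points) - 150) / 30.0) ** 2)
        spectra = np.outer(concentration, pure_band) + 0.02 * rng.standard_normal((n_samples, n_points))

        pls = PLSRegression(n_components=4)
        y_cv = cross_val_predict(pls, spectra, concentration, cv=10).ravel()
        rmsecv = np.sqrt(np.mean((y_cv - concentration) ** 2))
        r2 = np.corrcoef(y_cv, concentration)[0, 1] ** 2
        print(f"RMSECV = {rmsecv:.3f} mg/g, R^2 = {r2:.4f}")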

  19. Phase-field modeling of stress-induced instabilities

    NASA Astrophysics Data System (ADS)

    Kassner, Klaus; Misbah, Chaouqi; Müller, Judith; Kappey, Jens; Kohlert, Peter

    2001-03-01

    A phase-field approach describing the dynamics of a strained solid in contact with its melt is developed. Using a formulation that is independent of the state of reference chosen for the displacement field, we write down the elastic energy in an unambiguous fashion, thus obtaining an entire class of models. According to the choice of reference state, the particular model emerging from this class will become equivalent to one of the two independently constructed models on which brief accounts have been given recently [J. Müller and M. Grant, Phys. Rev. Lett. 82, 1736 (1999); K. Kassner and C. Misbah, Europhys. Lett. 46, 217 (1999)]. We show that our phase-field approach recovers the sharp-interface limit corresponding to the continuum model equations describing the Asaro-Tiller-Grinfeld instability. Moreover, we use our model to derive hitherto unknown sharp-interface equations for a situation including a field of body forces. The numerical utility of the phase-field approach is demonstrated by reproducing some known results and by comparison with a sharp-interface simulation. We then proceed to investigate the dynamics of extended systems within the phase-field model which contains an inherent lower length cutoff, thus avoiding cusp singularities. It is found that a periodic array of grooves generically evolves into a superstructure which arises from a series of imperfect period doublings. For wave numbers close to the fastest-growing mode of the linear instability, the first period doubling can be obtained analytically. Both the dynamics of an initially periodic array and a random initial structure can be described as a coarsening process with winning grooves temporarily accelerating whereas losing ones decelerate and even reverse their direction of motion. In the absence of gravity, the end state of a laterally finite system is a single groove growing at constant velocity, as long as no secondary instabilities arise (that we have not been able to see with our code). With gravity, several grooves are possible, all of which are bound to stop eventually. A laterally infinite system approaches a scaling state in the absence of gravity and probably with gravity, too.

  20. An Approach for Validating Actinide and Fission Product Burnup Credit Criticality Safety Analyses: Criticality (k eff) Predictions

    DOE PAGES

    Scaglione, John M.; Mueller, Don E.; Wagner, John C.

    2014-12-01

    One of the most important remaining challenges associated with expanded implementation of burnup credit in the United States is the validation of depletion and criticality calculations used in the safety evaluation—in particular, the availability and use of applicable measured data to support validation, especially for fission products (FPs). Applicants and regulatory reviewers have been constrained by both a scarcity of data and a lack of clear technical basis or approach for use of the data. This paper describes a validation approach for commercial spent nuclear fuel (SNF) criticality safety (k eff) evaluations based on best-available data and methods and applies the approach for representative SNF storage and transport configurations/conditions to demonstrate its usage and applicability, as well as to provide reference bias results. The criticality validation approach utilizes not only available laboratory critical experiment (LCE) data from the International Handbook of Evaluated Criticality Safety Benchmark Experiments and the French Haut Taux de Combustion program to support validation of the principal actinides but also calculated sensitivities, nuclear data uncertainties, and limited available FP LCE data to predict and verify individual biases for relevant minor actinides and FPs. The results demonstrate that (a) sufficient critical experiment data exist to adequately validate k eff calculations via conventional validation approaches for the primary actinides, (b) sensitivity-based critical experiment selection is more appropriate for generating accurate application model bias and uncertainty, and (c) calculated sensitivities and nuclear data uncertainties can be used for generating conservative estimates of bias for minor actinides and FPs. Results based on the SCALE 6.1 and the ENDF/B-VII.0 cross-section libraries indicate that a conservative estimate of the bias for the minor actinides and FPs is 1.5% of their worth within the application model. Finally, this paper provides a detailed description of the approach and its technical bases, describes the application of the approach for representative pressurized water reactor and boiling water reactor safety analysis models, and provides reference bias results based on the prerelease SCALE 6.1 code package and ENDF/B-VII nuclear cross-section data.

  1. From concepts, theory, and evidence of heterogeneity of treatment effects to methodological approaches: a primer.

    PubMed

    Willke, Richard J; Zheng, Zhiyuan; Subedi, Prasun; Althin, Rikard; Mullins, C Daniel

    2012-12-13

    Implicit in the growing interest in patient-centered outcomes research is a growing need for better evidence regarding how responses to a given intervention or treatment may vary across patients, referred to as heterogeneity of treatment effect (HTE). A variety of methods are available for exploring HTE, each associated with unique strengths and limitations. This paper reviews a selected set of methodological approaches to understanding HTE, focusing largely but not exclusively on their uses with randomized trial data. It is oriented for the "intermediate" outcomes researcher, who may already be familiar with some methods, but would value a systematic overview of both more and less familiar methods with attention to when and why they may be used. Drawing from the biomedical, statistical, epidemiological and econometrics literature, we describe the steps involved in choosing an HTE approach, focusing on whether the intent of the analysis is for exploratory, initial testing, or confirmatory testing purposes. We also map HTE methodological approaches to data considerations as well as the strengths and limitations of each approach. Methods reviewed include formal subgroup analysis, meta-analysis and meta-regression, various types of predictive risk modeling including classification and regression tree analysis, series of n-of-1 trials, latent growth and growth mixture models, quantile regression, and selected non-parametric methods. In addition to an overview of each HTE method, examples and references are provided for further reading. By guiding the selection of the methods and analysis, this review is meant to better enable outcomes researchers to understand and explore aspects of HTE in the context of patient-centered outcomes research.

  2. Evaluating Variability and Uncertainty of Geological Strength Index at a Specific Site

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Aladejare, Adeyemi Emman

    2016-09-01

    Geological Strength Index (GSI) is an important parameter for estimating rock mass properties. GSI can be estimated from quantitative GSI chart, as an alternative to the direct observational method which requires vast geological experience of rock. GSI chart was developed from past observations and engineering experience, with either empiricism or some theoretical simplifications. The GSI chart thereby contains model uncertainty which arises from its development. The presence of such model uncertainty affects the GSI estimated from GSI chart at a specific site; it is, therefore, imperative to quantify and incorporate the model uncertainty during GSI estimation from the GSI chart. A major challenge for quantifying the GSI chart model uncertainty is a lack of the original datasets that have been used to develop the GSI chart, since the GSI chart was developed from past experience without referring to specific datasets. This paper intends to tackle this problem by developing a Bayesian approach for quantifying the model uncertainty in GSI chart when using it to estimate GSI at a specific site. The model uncertainty in the GSI chart and the inherent spatial variability in GSI are modeled explicitly in the Bayesian approach. The Bayesian approach generates equivalent samples of GSI from the integrated knowledge of GSI chart, prior knowledge and observation data available from site investigation. Equations are derived for the Bayesian approach, and the proposed approach is illustrated using data from a drill and blast tunnel project. The proposed approach effectively tackles the problem of how to quantify the model uncertainty that arises from using GSI chart for characterization of site-specific GSI in a transparent manner.
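
    As a rough illustration of the type of Bayesian updating described above (not the authors' actual formulation), the sketch below combines a prior belief about the site-specific GSI with chart-based observations whose chart model uncertainty enters as an additive error term; all numbers are assumed values.

    ```python
    # Illustrative conjugate Gaussian update: prior GSI belief + chart readings carrying model uncertainty.
    import numpy as np

    mu0, sd0 = 55.0, 10.0                       # prior mean and std of site GSI (assumed)
    sd_chart = 6.0                              # std lumping chart model error and spatial variability (assumed)
    obs = np.array([48.0, 52.0, 50.0, 57.0])    # GSI values read off the chart at the site (assumed)

    n = obs.size
    post_var = 1.0 / (1.0 / sd0**2 + n / sd_chart**2)
    post_mean = post_var * (mu0 / sd0**2 + obs.sum() / sd_chart**2)

    # "equivalent samples" of site-specific GSI drawn from the posterior
    samples = np.random.default_rng(1).normal(post_mean, np.sqrt(post_var), size=10_000)
    print(post_mean, np.sqrt(post_var), np.percentile(samples, [5, 50, 95]))
    ```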

  3. Elements of episodic-like memory in animal models.

    PubMed

    Crystal, Jonathon D

    2009-03-01

    Representations of unique events from one's past constitute the content of episodic memories. A number of studies with non-human animals have revealed that animals remember specific episodes from their past (referred to as episodic-like memory). The development of animal models of memory holds enormous potential for gaining insight into the biological bases of human memory. Specifically, given the extensive knowledge of the rodent brain, the development of rodent models of episodic memory would open new opportunities to explore the neuroanatomical, neurochemical, neurophysiological, and molecular mechanisms of memory. Development of such animal models holds enormous potential for studying functional changes in episodic memory in animal models of Alzheimer's disease, amnesia, and other human memory pathologies. This article reviews several approaches that have been used to assess episodic-like memory in animals. The approaches reviewed include the discrimination of what, where, and when in a radial arm maze, dissociation of recollection and familiarity, object recognition, binding, unexpected questions, and anticipation of a reproductive state. The diversity of approaches may promote the development of converging lines of evidence on the difficult problem of assessing episodic-like memory in animals.

  4. A brief introduction to mixed effects modelling and multi-model inference in ecology

    PubMed Central

    Donaldson, Lynda; Correa-Cano, Maria Eugenia; Goodwin, Cecily E.D.

    2018-01-01

    The use of linear mixed effects models (LMMs) is increasingly common in the analysis of biological data. Whilst LMMs offer a flexible approach to modelling a broad range of data types, ecological data are often complex and require complex model structures, and the fitting and interpretation of such models is not always straightforward. The ability to achieve robust biological inference requires that practitioners know how and when to apply these tools. Here, we provide a general overview of current methods for the application of LMMs to biological data, and highlight the typical pitfalls that can be encountered in the statistical modelling process. We tackle several issues regarding methods of model selection, with particular reference to the use of information theory and multi-model inference in ecology. We offer practical solutions and direct the reader to key references that provide further technical detail for those seeking a deeper understanding. This overview should serve as a widely accessible code of best practice for applying LMMs to complex biological problems and model structures, and in doing so improve the robustness of conclusions drawn from studies investigating ecological and evolutionary questions. PMID:29844961

  5. A brief introduction to mixed effects modelling and multi-model inference in ecology.

    PubMed

    Harrison, Xavier A; Donaldson, Lynda; Correa-Cano, Maria Eugenia; Evans, Julian; Fisher, David N; Goodwin, Cecily E D; Robinson, Beth S; Hodgson, David J; Inger, Richard

    2018-01-01

    The use of linear mixed effects models (LMMs) is increasingly common in the analysis of biological data. Whilst LMMs offer a flexible approach to modelling a broad range of data types, ecological data are often complex and require complex model structures, and the fitting and interpretation of such models is not always straightforward. The ability to achieve robust biological inference requires that practitioners know how and when to apply these tools. Here, we provide a general overview of current methods for the application of LMMs to biological data, and highlight the typical pitfalls that can be encountered in the statistical modelling process. We tackle several issues regarding methods of model selection, with particular reference to the use of information theory and multi-model inference in ecology. We offer practical solutions and direct the reader to key references that provide further technical detail for those seeking a deeper understanding. This overview should serve as a widely accessible code of best practice for applying LMMs to complex biological problems and model structures, and in doing so improve the robustness of conclusions drawn from studies investigating ecological and evolutionary questions.
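
    For readers wanting a concrete starting point, the snippet below fits a simple LMM with a fixed treatment effect and a random intercept per site using statsmodels; the variable names and data are illustrative, not taken from the paper.

    ```python
    # Minimal linear mixed effects model: fixed effect of treatment, random intercept per site (synthetic data).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(42)
    n_sites, n_per_site = 12, 20
    site = np.repeat(np.arange(n_sites), n_per_site)
    treatment = rng.normal(size=site.size)
    y = 2.0 + 0.5 * treatment + rng.normal(0, 1.0, n_sites)[site] + rng.normal(0, 0.8, site.size)

    df = pd.DataFrame({"y": y, "treatment": treatment, "site": site})
    fit = smf.mixedlm("y ~ treatment", df, groups=df["site"]).fit()
    print(fit.summary())
    ```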

  6. Comparison of different synthetic 5-min rainfall time series regarding their suitability for urban drainage modelling

    NASA Astrophysics Data System (ADS)

    van der Heijden, Sven; Callau Poduje, Ana; Müller, Hannes; Shehu, Bora; Haberlandt, Uwe; Lorenz, Manuel; Wagner, Sven; Kunstmann, Harald; Müller, Thomas; Mosthaf, Tobias; Bárdossy, András

    2015-04-01

    For the design and operation of urban drainage systems with numerical simulation models, long, continuous precipitation time series with high temporal resolution are necessary. Suitable observed time series are rare. As a result, intelligent design concepts often use uncertain or unsuitable precipitation data, which renders them uneconomic or unsustainable. An expedient alternative to observed data is the use of long, synthetic rainfall time series as input for the simulation models. Within the project SYNOPSE, several different methods to generate synthetic precipitation data for urban drainage modelling are advanced, tested, and compared. The presented study compares four different approaches of precipitation models regarding their ability to reproduce rainfall and runoff characteristics. These include one parametric stochastic model (alternating renewal approach), one non-parametric stochastic model (resampling approach), one downscaling approach from a regional climate model, and one disaggregation approach based on daily precipitation measurements. All four models produce long precipitation time series with a temporal resolution of five minutes. The synthetic time series are first compared to observed rainfall reference time series. Comparison criteria include event based statistics like mean dry spell and wet spell duration, wet spell amount and intensity, long term means of precipitation sum and number of events, and extreme value distributions for different durations. Then they are compared regarding simulated discharge characteristics using an urban hydrological model on a fictitious sewage network. First results show that all rainfall models are in principle suitable, but with different strengths and weaknesses regarding the different rainfall and runoff characteristics considered.
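
    One of the event-based comparison criteria listed above, the mean wet- and dry-spell duration, can be computed from a 5-min series with a few lines of array logic; the sketch below uses a synthetic series and the simple assumption that any non-zero interval is wet.

    ```python
    # Wet/dry spell statistics for a 5-minute rainfall series (synthetic data, wet = rain > 0).
    import numpy as np

    rng = np.random.default_rng(3)
    rain = rng.choice([0.0, 0.2, 1.5], size=10_000, p=[0.90, 0.07, 0.03])   # mm per 5-min interval

    wet = rain > 0.0
    starts = np.concatenate(([0], np.flatnonzero(np.diff(wet.astype(int))) + 1))  # start index of each run
    lengths = np.diff(np.concatenate((starts, [wet.size])))                       # run lengths
    is_wet_run = wet[starts]

    mean_wet_spell = lengths[is_wet_run].mean() * 5.0      # minutes
    mean_dry_spell = lengths[~is_wet_run].mean() * 5.0
    wet_spell_amounts = [rain[s:s + l].sum() for s, l in zip(starts[is_wet_run], lengths[is_wet_run])]
    print(mean_wet_spell, mean_dry_spell, np.mean(wet_spell_amounts))
    ```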

  7. On some methods for improving time of reachability sets computation for the dynamic system control problem

    NASA Astrophysics Data System (ADS)

    Zimovets, Artem; Matviychuk, Alexander; Ushakov, Vladimir

    2016-12-01

    The paper presents two different approaches to reducing the computation time of reachability sets. The first approach uses different data structures for storing the reachability sets in computer memory for calculation in single-threaded mode. The second approach is based on using parallel algorithms with reference to the data structures from the first approach. Within the framework of this paper, a parallel algorithm for approximate reachability set calculation on a computer with SMP architecture is proposed. The results of numerical modelling are presented in the form of tables which demonstrate the high efficiency of parallel computing technology and also show how computing time depends on the data structure used.

  8. An acoustic-convective splitting-based approach for the Kapila two-phase flow model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eikelder, M.F.P. ten, E-mail: m.f.p.teneikelder@tudelft.nl; Eindhoven University of Technology, Department of Mathematics and Computer Science, P.O. Box 513, 5600 MB Eindhoven; Daude, F.

    In this paper we propose a new acoustic-convective splitting-based numerical scheme for the Kapila five-equation two-phase flow model. The splitting operator decouples the acoustic waves and convective waves. The resulting two submodels are alternately numerically solved to approximate the solution of the entire model. The Lagrangian form of the acoustic submodel is numerically solved using an HLLC-type Riemann solver whereas the convective part is approximated with an upwind scheme. The result is a simple method which allows for a general equation of state. Numerical computations are performed for standard two-phase shock tube problems. A comparison is made with a non-splitting approach. The results are in good agreement with reference results and exact solutions.

  9. How do reference montage and electrodes setup affect the measured scalp EEG potentials?

    NASA Astrophysics Data System (ADS)

    Hu, Shiang; Lai, Yongxiu; Valdes-Sosa, Pedro A.; Bringas-Vega, Maria L.; Yao, Dezhong

    2018-04-01

    Objective. Human scalp electroencephalogram (EEG) is widely applied in cognitive neuroscience and clinical studies due to its non-invasiveness and ultra-high time resolution. However, the representativeness of the measured EEG potentials for the underlying neural activities is still a problem under debate. This study aims to investigate systematically how both reference montage and electrodes setup affect the accuracy of EEG potentials. Approach. First, the standard EEG potentials are generated by the forward calculation with a single dipole in the neural source space, for eleven channel numbers (10, 16, 21, 32, 64, 85, 96, 128, 129, 257, 335). Here, the reference is the ideal infinity implicitly determined by forward theory. Then, the standard EEG potentials are transformed to recordings with different references including five mono-polar references (Left earlobe, Fz, Pz, Oz, Cz), and three re-references (linked mastoids (LM), average reference (AR) and reference electrode standardization technique (REST)). Finally, the relative errors between the standard EEG potentials and the transformed ones are evaluated in terms of channel number, scalp regions, electrodes layout, dipole source position and orientation, as well as sensor noise and head model. Main results. Mono-polar reference recordings usually show large distortions; thus, a re-reference after online mono-polar recording should generally be adopted to mitigate this effect. Among the three re-references, REST is generally superior to AR for all factors compared, and LM performs worst. REST is insensitive to head model perturbation. AR is subject to electrodes coverage and dipole orientation but has no close relation with channel number. Significance. These results indicate that REST would be the first choice of re-reference and AR may be an alternative option for the high sensor noise case. Our findings may provide helpful suggestions for cognitive neuroscientists and clinicians on how to obtain the EEG potentials as accurately as possible.
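
    Two of the re-references compared in this study, the average reference and linked mastoids, are simple linear transforms of the recorded potentials; the sketch below applies both to a synthetic recording (channel count, mastoid indices and data are assumptions; REST additionally requires a head model and is not shown).

    ```python
    # Re-referencing a multichannel EEG recording: average reference (AR) and linked mastoids (LM).
    import numpy as np

    n_channels, n_samples = 32, 1000
    eeg = np.random.default_rng(0).normal(size=(n_channels, n_samples))   # synthetic recording
    m1, m2 = 30, 31                                                       # assumed indices of the mastoid channels

    eeg_ar = eeg - eeg.mean(axis=0, keepdims=True)       # subtract the instantaneous mean over all channels
    eeg_lm = eeg - 0.5 * (eeg[m1] + eeg[m2])             # subtract the mean of the two mastoid channels

    print(np.abs(eeg_ar.mean(axis=0)).max())             # AR data sum to ~0 across channels at every sample
    ```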

  10. Assessing the distinguishable cluster approximation based on the triple bond-breaking in the nitrogen molecule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rishi, Varun; Perera, Ajith; Bartlett, Rodney J., E-mail: bartlett@qtp.ufl.edu

    2016-03-28

    Obtaining the correct potential energy curves for the dissociation of multiple bonds is a challenging problem for ab initio methods which are affected by the choice of a spin-restricted reference function. Coupled cluster (CC) methods such as CCSD (coupled cluster singles and doubles model) and CCSD(T) (CCSD + perturbative triples) correctly predict the geometry and properties at equilibrium but the process of bond dissociation, particularly when more than one bond is simultaneously broken, is much more complicated. New modifications of CC theory suggest that the deleterious role of the reference function can be diminished, provided a particular subset of terms is retained in the CC equations. The Distinguishable Cluster (DC) approach of Kats and Manby [J. Chem. Phys. 139, 021102 (2013)] seemingly overcomes the deficiencies for some bond-dissociation problems and might be of use in quasi-degenerate situations in general. DC along with other approximate coupled cluster methods such as ACCD (approximate coupled cluster doubles), ACP-D45, ACP-D14, 2CC, and pCCSD(α, β) (all defined in text) falls under a category of methods that are basically obtained by the deletion of some quadratic terms in the double excitation amplitude equation for CCD/CCSD (coupled cluster doubles model/coupled cluster singles and doubles model). Here these approximate methods, particularly those based on the DC approach, are studied in detail for the nitrogen molecule bond-breaking. The N₂ problem is further addressed with conventional single reference methods but based on spatial symmetry-broken restricted Hartree–Fock (HF) solutions to assess the use of these references for correlated calculations in the situation where CC methods using fully symmetry adapted SCF solutions fail. The distinguishable cluster method is generalized: 1) to different orbitals for different spins (unrestricted HF based DCD and DCSD), 2) by adding triples correction perturbatively (DCSD(T)) and iteratively (DCSDT-n), and 3) via an excited state approximation through the equation of motion (EOM) approach (EOM-DCD, EOM-DCSD). The EOM-CC method is used to identify lower-energy CC solutions to overcome singularities in the CC potential energy curves. It is also shown that UHF based CC and DC methods behave very similarly in bond-breaking of N₂, and that using spatially broken but spin preserving SCF references makes the CCSD solutions better than those for DCSD.

  11. Global retrieval of soil moisture and vegetation properties using data-driven methods

    NASA Astrophysics Data System (ADS)

    Rodriguez-Fernandez, Nemesio; Richaume, Philippe; Kerr, Yann

    2017-04-01

    Data-driven methods such as neural networks (NNs) are a powerful tool to retrieve soil moisture from multi-wavelength remote sensing observations at global scale. In this presentation we will review a number of recent results regarding the retrieval of soil moisture with the Soil Moisture and Ocean Salinity (SMOS) satellite, either using SMOS brightness temperatures as input data for the retrieval or using SMOS soil moisture retrievals as reference dataset for the training. The presentation will discuss several possibilities for both the input datasets and the datasets to be used as reference for the supervised learning phase. Regarding the input datasets, it will be shown that NNs take advantage of the synergy of SMOS data and data from other sensors such as the Advanced Scatterometer (ASCAT, active microwaves) and MODIS (visible and infra red). NNs have also been successfully used to construct long time series of soil moisture from the Advanced Microwave Scanning Radiometer - Earth Observing System (AMSR-E) and SMOS. A NN with input data from AMSR-E observations and SMOS soil moisture as reference for the training was used to construct a dataset sharing a similar climatology and without a significant bias with respect to SMOS soil moisture. Regarding the reference data to train the data-driven retrievals, we will show different possibilities depending on the application. Using actual in situ measurements is challenging at global scale due to the scarce distribution of sensors. In contrast, in situ measurements have been successfully used to retrieve SM at continental scale in North America, where the density of in situ measurement stations is high. Using global land surface models to train the NN constitutes an interesting alternative to implement new remote sensing surface datasets. In addition, these datasets can be used to perform data assimilation into the model used as reference for the training. This approach has recently been tested at the European Centre for Medium-Range Weather Forecasts (ECMWF). Finally, retrievals using radiative transfer models can also be used as a reference SM dataset for the training phase. This approach was used to retrieve soil moisture from AMSR-E, as mentioned above, and also to implement the official European Space Agency (ESA) SMOS soil moisture product in Near-Real-Time. We will finish with a discussion of the retrieval of vegetation parameters from SMOS observations using data-driven methods.
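
    In the spirit of the data-driven retrievals described here, the snippet below trains a small feed-forward network to map brightness temperatures to a reference soil moisture dataset; the number of input channels, the synthetic data and the network size are assumptions for illustration only.

    ```python
    # Illustrative NN retrieval: brightness temperatures (inputs) -> reference soil moisture (target).
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(7)
    tb = rng.normal(250.0, 20.0, size=(5000, 8))                           # synthetic brightness temperatures
    sm_ref = 0.5 - 0.001 * tb.mean(axis=1) + rng.normal(0, 0.02, 5000)     # synthetic reference soil moisture

    X_tr, X_te, y_tr, y_te = train_test_split(tb, sm_ref, test_size=0.3, random_state=0)
    nn = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0).fit(X_tr, y_tr)

    rmse = np.sqrt(np.mean((nn.predict(X_te) - y_te) ** 2))
    print(f"test RMSE = {rmse:.4f} m3/m3")
    ```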

  12. Planetary Dynamos

    NASA Astrophysics Data System (ADS)

    Gaur, Vinod K.

    The article begins with a reference to the first rational approaches to explaining the Earth's magnetic field, notably Elsasser's application of magnetohydrodynamics, followed by brief outlines of the characteristics of planetary magnetic fields and of the potentially insightful homopolar dynamo, which illuminates the basic issues: the theoretical requirements of asymmetry and finite conductivity for sustaining the dynamo process. It concludes with sections on dynamo modelling and, in particular, the geodynamo, after explaining some of the evocative physical processes mediated by the Lorentz force, the behaviour of a flux tube embedded in a perfectly conducting fluid (using Alfvén's theorem), and the traditional intermediate approaches to investigating dynamo processes using the more tractable kinematic models.

  13. Wind-tunnel results of the aerodynamic characteristics of a 1/8-scale model of a twin engine short-haul transport. [in Langley V/STOL tunnel

    NASA Technical Reports Server (NTRS)

    Paulson, J. W., Jr.

    1977-01-01

    A wind tunnel test was conducted in the Langley V/STOL tunnel to define the aerodynamic characteristics of a 1/8-scale twin-engine short haul transport. The model was tested in both the cruise and approach configurations with various control surfaces deflected. Data were obtained out of ground effect for the cruise configuration and both in and out of ground effect for the approach configuration. These data are intended to be a reference point to begin the analysis of the flight characteristics of the NASA terminal configured vehicle (TCV) and are presented without analysis.

  14. Neural network-based motion control of an underactuated wheeled inverted pendulum model.

    PubMed

    Yang, Chenguang; Li, Zhijun; Cui, Rongxin; Xu, Bugong

    2014-11-01

    In this paper, automatic motion control is investigated for one of the wheeled inverted pendulum (WIP) models that have been widely applied for modeling a large range of modern two-wheeled vehicles. First, the underactuated WIP model is decomposed into a fully actuated second order subsystem Σa consisting of planar movement of vehicle forward and yaw angular motions, and a nonactuated first order subsystem Σb of pendulum motion. Due to the unknown dynamics of subsystem Σa and the universal approximation ability of neural networks (NNs), an adaptive NN scheme has been employed for motion control of subsystem Σa. The model reference approach has been used, with the reference model optimized by the finite-time linear quadratic regulation technique. The pendulum motion in the passive subsystem Σb is indirectly controlled using the dynamic coupling with the planar forward motion of subsystem Σa, such that satisfactory tracking of a set pendulum tilt angle can be guaranteed. Rigorous theoretical analysis has been established, and simulation studies have been performed to demonstrate the developed method.

  15. Consistency Analysis of Genome-Scale Models of Bacterial Metabolism: A Metamodel Approach

    PubMed Central

    Ponce-de-Leon, Miguel; Calle-Espinosa, Jorge; Peretó, Juli; Montero, Francisco

    2015-01-01

    Genome-scale metabolic models usually contain inconsistencies that manifest as blocked reactions and gap metabolites. With the purpose to detect recurrent inconsistencies in metabolic models, a large-scale analysis was performed using a previously published dataset of 130 genome-scale models. The results showed that a large number of reactions (~22%) are blocked in all the models where they are present. To unravel the nature of such inconsistencies a metamodel was constructed by joining the 130 models in a single network. This metamodel was manually curated using the unconnected modules approach, and then, it was used as a reference network to perform a gap-filling on each individual genome-scale model. Finally, a set of 36 models that had not been considered during the construction of the metamodel was used, as a proof of concept, to extend the metamodel with new biochemical information, and to assess its impact on gap-filling results. The analysis performed on the metamodel led to the following conclusions: 1) the recurrent inconsistencies found in the models were already present in the metabolic database used during the reconstruction process; 2) the presence of inconsistencies in a metabolic database can be propagated to the reconstructed models; 3) there are reactions not manifested as blocked which are active as a consequence of some classes of artifacts; and 4) the results of an automatic gap-filling are highly dependent on the consistency and completeness of the metamodel or metabolic database used as the reference network. In conclusion, the consistency analysis should be applied to metabolic databases in order to detect and fill gaps as well as to detect and remove artifacts and redundant information. PMID:26629901
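
    A reaction is usually flagged as blocked when its flux is forced to zero under steady-state mass balance; the toy sketch below illustrates that check with a made-up stoichiometric matrix and linear programming (real analyses would use a constraint-based modeling package and the full metamodel).

    ```python
    # Toy blocked-reaction check: a reaction is blocked if its maximum and minimum steady-state
    # fluxes (S v = 0 with bounds on v) are both zero. The stoichiometry here is illustrative only.
    import numpy as np
    from scipy.optimize import linprog

    S = np.array([[ 1, -1,  0, 0],
                  [ 0,  1, -1, 0],
                  [ 0,  0,  0, 1]])        # the metabolite in row 3 is produced by reaction 4 but never consumed
    bounds = [(0, 10)] * 4                 # irreversible reactions with an arbitrary upper bound

    def flux_extreme(j, sense):            # sense = +1 -> minimum of v_j, sense = -1 -> maximum of v_j
        c = np.zeros(4)
        c[j] = sense
        res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs")
        return sense * res.fun

    for j in range(4):
        vmin, vmax = flux_extreme(j, +1), flux_extreme(j, -1)
        blocked = abs(vmin) < 1e-9 and abs(vmax) < 1e-9
        print(f"reaction {j + 1}: flux range [{vmin:.2f}, {vmax:.2f}]", "BLOCKED" if blocked else "")
    ```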

  16. A novel multivariate approach using science-based calibration for direct coating thickness determination in real-time NIR process monitoring.

    PubMed

    Möltgen, C-V; Herdling, T; Reich, G

    2013-11-01

    This study demonstrates an approach, using science-based calibration (SBC), for direct coating thickness determination on heart-shaped tablets in real-time. Near-Infrared (NIR) spectra were collected during four full industrial pan coating operations. The tablets were coated with a thin hydroxypropyl methylcellulose (HPMC) film up to a film thickness of 28 μm. The application of SBC permits the calibration of the NIR spectral data without using costly determined reference values. This is due to the fact that SBC combines classical methods to estimate the coating signal and statistical methods for the noise estimation. The approach enabled the use of NIR for the measurement of the film thickness increase from around 8 to 28 μm of four independent batches in real-time. The developed model provided a spectroscopic limit of detection for the coating thickness of 0.64 ± 0.03 μm root-mean square (RMS). In the commonly used statistical methods for calibration, such as Partial Least Squares (PLS), sufficiently varying reference values are needed for calibration. For thin non-functional coatings this is a challenge because the quality of the model depends on the accuracy of the selected calibration standards. The obvious and simple approach of SBC eliminates many of the problems associated with the conventional statistical methods and offers an alternative for multivariate calibration. Copyright © 2013 Elsevier B.V. All rights reserved.

  17. Novel Approaches for Phylogenetic Inference from Morphological Data and Total-Evidence Dating in Squamate Reptiles (Lizards, Snakes, and Amphisbaenians).

    PubMed

    Pyron, R Alexander

    2017-01-01

    Here, I combine previously underutilized models and priors to perform more biologically realistic phylogenetic inference from morphological data, with an example from squamate reptiles. When coding morphological characters, it is often possible to denote ordered states with explicit reference to observed or hypothetical ancestral conditions. Using this logic, we can integrate across character-state labels and estimate meaningful rates of forward and backward transitions from plesiomorphy to apomorphy. I refer to this approach as MkA, for “asymmetric.” The MkA model incorporates the biological reality of limited reversal for many phylogenetically informative characters, and significantly increases likelihoods in the empirical data sets. Despite this, the phylogeny of Squamata remains contentious. Total-evidence analyses using combined morphological and molecular data and the MkA approach tend toward recent consensus estimates supporting a nested Iguania. However, support for this topology is not unambiguous across data sets or analyses, and no mechanism has been proposed to explain the widespread incongruence between partitions, or the hidden support for various topologies in those partitions. Furthermore, different morphological data sets produced by different authors contain both different characters and different states for the same or similar characters, resulting in drastically different placements for many important fossil lineages. Effort is needed to standardize ontology for morphology, resolve incongruence, and estimate a robust phylogeny. The MkA approach provides a preliminary avenue for investigating morphological evolution while accounting for temporal evidence and asymmetry in character-state changes.

  18. Climate, orography and scale controls on flood frequency in Triveneto (Italy)

    NASA Astrophysics Data System (ADS)

    Persiano, Simone; Castellarin, Attilio; Salinas, Jose Luis; Domeneghetti, Alessio; Brath, Armando

    2016-05-01

    The growing concern about the possible effects of climate change on flood frequency regime is leading Authorities to review previously proposed reference procedures for design-flood estimation, such as national flood frequency models. Our study focuses on Triveneto, a broad geographical region in North-eastern Italy. A reference procedure for design flood estimation in Triveneto is available from the Italian CNR research project "VA.PI.", which considered Triveneto as a single homogeneous region and developed a regional model using annual maximum series (AMS) of peak discharges that were collected up to the 1980s by the former Italian Hydrometeorological Service. We consider a very detailed AMS database that we recently compiled for 76 catchments located in Triveneto. All 76 study catchments are characterized in terms of several geomorphologic and climatic descriptors. The objective of our study is threefold: (1) to inspect climatic and scale controls on flood frequency regime; (2) to verify the possible presence of changes in flood frequency regime by looking at changes in time of regional L-moments of annual maximum floods; (3) to develop an updated reference procedure for design flood estimation in Triveneto by using a focused-pooling approach (i.e. Region of Influence, RoI). Our study leads to the following conclusions: (1) climatic and scale controls on flood frequency regime in Triveneto are similar to the controls that were recently found in Europe; (2) a single year characterized by extreme floods can have a remarkable influence on regional flood frequency models and analyses for detecting possible changes in flood frequency regime; (3) no significant change was detected in the flood frequency regime, yet an update of the existing reference procedure for design flood estimation is highly recommended and we propose the RoI approach for properly representing climate and scale controls on flood frequency in Triveneto, which cannot be regarded as a single homogeneous region.
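
    The regional L-moments referred to above are commonly estimated from sample probability-weighted moments; the sketch below shows that calculation for a single annual-maximum series, using textbook estimators and synthetic data.

    ```python
    # Sample L-moments (mean, L-CV, L-skewness) of an annual maximum flood series
    # via unbiased probability-weighted moments; the data are synthetic.
    import numpy as np

    def sample_l_moments(x):
        x = np.sort(np.asarray(x, dtype=float))
        n = x.size
        i = np.arange(1, n + 1)
        b0 = x.mean()
        b1 = np.sum((i - 1) / (n - 1) * x) / n
        b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
        l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
        return l1, l2 / l1, l3 / l2        # mean, L-CV (tau), L-skewness (tau3)

    ams = np.random.default_rng(5).gumbel(loc=100.0, scale=30.0, size=40)   # synthetic AMS, m3/s
    print(sample_l_moments(ams))
    ```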

  19. A Scalable Approach to Modeling Cascading Risk in the MDAP Network

    DTIC Science & Technology

    2014-04-30

    The quality of topic modeling can be measured using perplexity (Asuncion et al., 2009), a measure of a model's ability to infer the topics in a document collection. Reference: Asuncion, A., Welling, M., Smyth, P., & Teh, Y. W. (2009). On Smoothing and Inference for Topic Models. Proceedings of…

  20. FRAP Analysis: Accounting for Bleaching during Image Capture

    PubMed Central

    Wu, Jun; Shekhar, Nandini; Lele, Pushkar P.; Lele, Tanmay P.

    2012-01-01

    The analysis of Fluorescence Recovery After Photobleaching (FRAP) experiments involves mathematical modeling of the fluorescence recovery process. An important feature of FRAP experiments that tends to be ignored in the modeling is that there can be a significant loss of fluorescence due to bleaching during image capture. In this paper, we explicitly include the effects of bleaching during image capture in the model for the recovery process, instead of correcting for the effects of bleaching using reference measurements. Using experimental examples, we demonstrate the usefulness of such an approach in FRAP analysis. PMID:22912750
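
    To make the modelling idea concrete, the sketch below fits a single-exponential recovery in which every captured image removes a fixed fraction of fluorescence; the model form, parameter values and data are illustrative assumptions, not the authors' equations.

    ```python
    # Illustrative FRAP fit that folds acquisition bleaching into the recovery model:
    # each image capture multiplies the signal by a retention factor q (0 < q <= 1).
    import numpy as np
    from scipy.optimize import curve_fit

    dt = 0.5                            # seconds between images (assumed)
    t = np.arange(60) * dt
    n = np.arange(60)                   # image index

    def model(t_and_n, f_inf, k, q):
        t, n = t_and_n
        return f_inf * (1.0 - np.exp(-k * t)) * q ** n   # recovery x cumulative acquisition bleaching

    rng = np.random.default_rng(2)
    data = model((t, n), 0.8, 0.15, 0.995) + rng.normal(0, 0.01, t.size)   # synthetic "measured" curve

    popt, _ = curve_fit(model, (t, n), data, p0=(1.0, 0.1, 0.99), bounds=([0, 0, 0.9], [1.5, 5.0, 1.0]))
    print("f_inf, k, q =", popt)
    ```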

  1. MaCH-Admix: Genotype Imputation for Admixed Populations

    PubMed Central

    Liu, Eric Yi; Li, Mingyao; Wang, Wei; Li, Yun

    2012-01-01

    Imputation in admixed populations is an important problem but challenging due to the complex linkage disequilibrium (LD) pattern. The emergence of large reference panels such as that from the 1,000 Genomes Project enables more accurate imputation in general, and in particular for admixed populations and for uncommon variants. To efficiently benefit from these large reference panels, one key issue to consider in modern genotype imputation framework is the selection of effective reference panels. In this work, we consider a number of methods for effective reference panel construction inside a hidden Markov model and specific to each target individual. These methods fall into two categories: identity-by-state (IBS) based and ancestry-weighted approach. We evaluated the performance on individuals from recently admixed populations. Our target samples include 8,421 African Americans and 3,587 Hispanic Americans from the Women’s Health Initiative, which allow assessment of imputation quality for uncommon variants. Our experiments include both large and small reference panels; large, medium, and small target samples; and in genome regions of varying levels of LD. We also include BEAGLE and IMPUTE2 for comparison. Experiment results with large reference panel suggest that our novel piecewise IBS method yields consistently higher imputation quality than other methods/software. The advantage is particularly noteworthy among uncommon variants where we observe up to 5.1% information gain with the difference being highly significant (Wilcoxon signed rank test P-value < 0.0001). Our work is the first that considers various sensible approaches for imputation in admixed populations and presents a comprehensive comparison. PMID:23074066
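
    The identity-by-state scoring behind the IBS-based reference selection can be illustrated in a few lines: for each reference haplotype, count allele matches with the target over a marker window and keep the closest ones; the panel size, window and data below are synthetic assumptions.

    ```python
    # Toy IBS-based selection of reference haplotypes for one target individual.
    import numpy as np

    rng = np.random.default_rng(11)
    n_ref, n_markers = 200, 500
    reference = rng.integers(0, 2, size=(n_ref, n_markers))   # reference haplotypes (0/1 alleles)
    target = rng.integers(0, 2, size=n_markers)               # target haplotype in the same window

    ibs = (reference == target).mean(axis=1)                  # fraction of matching alleles per reference haplotype
    best = np.argsort(ibs)[::-1][:40]                         # keep the 40 most similar haplotypes
    print("top IBS scores:", np.round(ibs[best][:5], 3))
    ```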

  2. Global plate motion frames: Toward a unified model

    NASA Astrophysics Data System (ADS)

    Torsvik, Trond H.; Müller, R. Dietmar; van der Voo, Rob; Steinberger, Bernhard; Gaina, Carmen

    2008-09-01

    Plate tectonics constitutes our primary framework for understanding how the Earth works over geological timescales. High-resolution mapping of relative plate motions based on marine geophysical data has followed the discovery of geomagnetic reversals, mid-ocean ridges, transform faults, and seafloor spreading, cementing the plate tectonic paradigm. However, so-called "absolute plate motions," describing how the fragments of the outer shell of the Earth have moved relative to a reference system such as the Earth's mantle, are still poorly understood. Accurate absolute plate motion models are essential surface boundary conditions for mantle convection models as well as for understanding past ocean circulation and climate as continent-ocean distributions change with time. A fundamental problem with deciphering absolute plate motions is that the Earth's rotation axis and the averaged magnetic dipole axis are not necessarily fixed to the mantle reference system. Absolute plate motion models based on volcanic hot spot tracks are largely confined to the last 130 Ma and ideally would require knowledge about the motions within the convecting mantle. In contrast, models based on paleomagnetic data reflect plate motion relative to the magnetic dipole axis for most of Earth's history but cannot provide paleolongitudes because of the axial symmetry of the Earth's magnetic dipole field. We analyze four different reference frames (paleomagnetic, African fixed hot spot, African moving hot spot, and global moving hot spot), discuss their uncertainties, and develop a unifying approach for connecting a hot spot track system and a paleomagnetic absolute plate reference system into a "hybrid" model for the time period from the assembly of Pangea (˜320 Ma) to the present. For the last 100 Ma we use a moving hot spot reference frame that takes mantle convection into account, and we connect this to a pre-100 Ma global paleomagnetic frame adjusted 5° in longitude to smooth the reference frame transition. Using plate driving force arguments and the mapping of reconstructed large igneous provinces to core-mantle boundary topography, we argue that continental paleolongitudes can be constrained with reasonable confidence.

  3. Solid-phase cadmium speciation in soil using L3-edge XANES spectroscopy with partial least-squares regression.

    PubMed

    Siebers, Nina; Kruse, Jens; Eckhardt, Kai-Uwe; Hu, Yongfeng; Leinweber, Peter

    2012-07-01

    Cadmium (Cd) has a high toxicity, and resolving its speciation in soil is challenging but essential for estimating the environmental risk. In this study, partial least-squares (PLS) regression was tested for its capability to deconvolute Cd L3-edge X-ray absorption near-edge structure (XANES) spectra of multi-compound mixtures. For this, a library of Cd reference compound spectra and a spectrum of a soil sample were acquired. A good coefficient of determination (R²) for the Cd compounds in mixtures was obtained for the PLS model using binary and ternary mixtures of various Cd reference compounds, proving the validity of this approach. In order to describe complex systems like soil, multi-compound mixtures of a variety of Cd compounds must be included in the PLS model. The obtained PLS regression model was then applied to a highly Cd-contaminated soil, revealing Cd3(PO4)2 (36.1%), Cd(NO3)2·4H2O (24.5%), Cd(OH)2 (21.7%), CdCO3 (17.1%) and CdCl2 (0.4%). These preliminary results proved that PLS regression is a promising approach for a direct determination of Cd speciation in the solid phase of a soil sample.

  4. SPM analysis of parametric (R)-[11C]PK11195 binding images: plasma input versus reference tissue parametric methods.

    PubMed

    Schuitemaker, Alie; van Berckel, Bart N M; Kropholler, Marc A; Veltman, Dick J; Scheltens, Philip; Jonker, Cees; Lammertsma, Adriaan A; Boellaard, Ronald

    2007-05-01

    (R)-[11C]PK11195 has been used for quantifying cerebral microglial activation in vivo. In previous studies, both plasma input and reference tissue methods have been used, usually in combination with a region of interest (ROI) approach. Definition of ROIs, however, can be laborious and prone to interobserver variation. In addition, results are only obtained for predefined areas and (unexpected) signals in undefined areas may be missed. On the other hand, standard pharmacokinetic models are too sensitive to noise to calculate (R)-[11C]PK11195 binding on a voxel-by-voxel basis. Linearised versions of both plasma input and reference tissue models have been described, and these are more suitable for parametric imaging. The purpose of this study was to compare the performance of these plasma input and reference tissue parametric methods on the outcome of statistical parametric mapping (SPM) analysis of (R)-[11C]PK11195 binding. Dynamic (R)-[11C]PK11195 PET scans with arterial blood sampling were performed in 7 younger and 11 elderly healthy subjects. Parametric images of volume of distribution (Vd) and binding potential (BP) were generated using linearised versions of plasma input (Logan) and reference tissue (Reference Parametric Mapping) models. Images were compared at the group level using SPM with a two-sample t-test per voxel, both with and without proportional scaling. Parametric BP images without scaling provided the most sensitive framework for determining differences in (R)-[11C]PK11195 binding between younger and elderly subjects. Vd images could only demonstrate differences in (R)-[11C]PK11195 binding when analysed with proportional scaling due to intersubject variation in K1/k2 (blood-brain barrier transport and non-specific binding).
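
    For orientation, the plasma input (Logan) method mentioned here estimates the total volume of distribution as the late-time slope of a linearized plot; the schematic calculation below uses synthetic time-activity curves from a one-tissue model and an arbitrary choice of the linear range, so it illustrates the mechanics rather than the study's pipeline.

    ```python
    # Schematic Logan graphical analysis: Vd is the asymptotic slope of
    #   (integral of C_tissue / C_tissue) versus (integral of C_plasma / C_tissue).
    import numpy as np

    def cumint(y, x):   # cumulative trapezoidal integral
        return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))))

    t = np.linspace(0.1, 60.0, 120)                             # minutes (synthetic sampling)
    cp = 50.0 * np.exp(-0.2 * t) + 5.0 * np.exp(-0.01 * t)      # synthetic plasma input function
    k1, k2 = 0.3, 0.1                                           # one-tissue model, true Vd = k1 / k2 = 3
    ct = k1 * np.exp(-k2 * t) * cumint(cp * np.exp(k2 * t), t)  # tissue time-activity curve

    x_logan = cumint(cp, t)[1:] / ct[1:]
    y_logan = cumint(ct, t)[1:] / ct[1:]
    t_star = 60                                                 # assumed start index of the linear segment
    vd = np.polyfit(x_logan[t_star:], y_logan[t_star:], 1)[0]
    print("estimated Vd ~", round(vd, 2))
    ```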

  5. Evaluating the Cassandra NoSQL Database Approach for Genomic Data Persistency.

    PubMed

    Aniceto, Rodrigo; Xavier, Rene; Guimarães, Valeria; Hondo, Fernanda; Holanda, Maristela; Walter, Maria Emilia; Lifschitz, Sérgio

    2015-01-01

    Rapid advances in high-throughput sequencing techniques have created interesting computational challenges in bioinformatics. One of them refers to management of massive amounts of data generated by automatic sequencers. We need to deal with the persistency of genomic data, particularly storing and analyzing these large-scale processed data. To find an alternative to the frequently considered relational database model becomes a compelling task. Other data models may be more effective when dealing with a very large amount of nonconventional data, especially for writing and retrieving operations. In this paper, we discuss the Cassandra NoSQL database approach for storing genomic data. We perform an analysis of persistency and I/O operations with real data, using the Cassandra database system. We also compare the results obtained with a classical relational database system and another NoSQL database approach, MongoDB.
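
    As a minimal illustration of the column-family approach discussed (not the schema used by the authors), the snippet below stores and retrieves a processed sequencing record with the Python cassandra-driver; the keyspace, table and fields are hypothetical.

    ```python
    # Hypothetical Cassandra schema for processed sequencing reads (Python cassandra-driver).
    from cassandra.cluster import Cluster

    cluster = Cluster(["127.0.0.1"])        # assumes a local Cassandra node
    session = cluster.connect()
    session.execute("""
        CREATE KEYSPACE IF NOT EXISTS genomics
        WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
    """)
    session.set_keyspace("genomics")
    session.execute("""
        CREATE TABLE IF NOT EXISTS reads (
            sample_id text, read_id text, sequence text, quality text,
            PRIMARY KEY (sample_id, read_id)
        )
    """)

    session.execute(
        "INSERT INTO reads (sample_id, read_id, sequence, quality) VALUES (%s, %s, %s, %s)",
        ("S001", "r000001", "ACGTACGTTA", "IIIIIIIIII"),
    )
    for row in session.execute("SELECT read_id, sequence FROM reads WHERE sample_id = %s", ("S001",)):
        print(row.read_id, row.sequence)
    ```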

  6. Iterative learning-based decentralized adaptive tracker for large-scale systems: a digital redesign approach.

    PubMed

    Tsai, Jason Sheng-Hong; Du, Yan-Yi; Huang, Pei-Hsiang; Guo, Shu-Mei; Shieh, Leang-San; Chen, Yuhua

    2011-07-01

    In this paper, a digital redesign methodology of the iterative learning-based decentralized adaptive tracker is proposed to improve the dynamic performance of sampled-data linear large-scale control systems consisting of N interconnected multi-input multi-output subsystems, so that the system output will follow any trajectory which may not be presented by the analytic reference model initially. To overcome the interference of each sub-system and simplify the controller design, the proposed model reference decentralized adaptive control scheme constructs a decoupled well-designed reference model first. Then, according to the well-designed model, this paper develops a digital decentralized adaptive tracker based on the optimal analog control and prediction-based digital redesign technique for the sampled-data large-scale coupling system. In order to enhance the tracking performance of the digital tracker at specified sampling instants, we apply the iterative learning control (ILC) to train the control input via continual learning. As a result, the proposed iterative learning-based decentralized adaptive tracker not only has robust closed-loop decoupled property but also possesses good tracking performance at both transient and steady state. Besides, evolutionary programming is applied to search for a good learning gain to speed up the learning process of ILC. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
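
    The iterative learning idea used to refine the control input can be shown on a toy single-input system: a P-type update adds a learning gain times the previous trial's tracking error to the stored input trajectory. The system, gain and reference below are assumptions for illustration, not the paper's large-scale decentralized design.

    ```python
    # Toy P-type iterative learning control on a scalar discrete-time system x[k+1] = a x[k] + b u[k].
    import numpy as np

    a, b, N = 0.9, 0.5, 50
    y_ref = np.sin(2 * np.pi * np.arange(N) / N)    # reference trajectory to track (assumed)

    def simulate(u):
        x, y = 0.0, np.zeros(N)
        for k in range(N):
            y[k] = x
            x = a * x + b * u[k]
        return y

    u, gamma = np.zeros(N), 0.8                     # stored input trajectory and learning gain (assumed)
    for trial in range(30):
        e = y_ref - simulate(u)
        u[:-1] += gamma * e[1:]                     # P-type update with the one-step-ahead error

    print("final RMS tracking error:", np.sqrt(np.mean((y_ref - simulate(u)) ** 2)))
    ```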

  7. Approaches to surface complexation modeling of Uranium(VI) adsorption on aquifer sediments

    NASA Astrophysics Data System (ADS)

    Davis, James A.; Meece, David E.; Kohler, Matthias; Curtis, Gary P.

    2004-09-01

    Uranium(VI) adsorption onto aquifer sediments was studied in batch experiments as a function of pH and U(VI) and dissolved carbonate concentrations in artificial groundwater solutions. The sediments were collected from an alluvial aquifer at a location upgradient of contamination from a former uranium mill operation at Naturita, Colorado (USA). The ranges of aqueous chemical conditions used in the U(VI) adsorption experiments (pH 6.9 to 7.9; U(VI) concentration 2.5 · 10 -8 to 1 · 10 -5 M; partial pressure of carbon dioxide gas 0.05 to 6.8%) were based on the spatial variation in chemical conditions observed in 1999-2000 in the Naturita alluvial aquifer. The major minerals in the sediments were quartz, feldspars, and calcite, with minor amounts of magnetite and clay minerals. Quartz grains commonly exhibited coatings that were greater than 10 nm in thickness and composed of an illite-smectite clay with occluded ferrihydrite and goethite nanoparticles. Chemical extractions of quartz grains removed from the sediments were used to estimate the masses of iron and aluminum present in the coatings. Various surface complexation modeling approaches were compared in terms of the ability to describe the U(VI) experimental data and the data requirements for model application to the sediments. Published models for U(VI) adsorption on reference minerals were applied to predict U(VI) adsorption based on assumptions about the sediment surface composition and physical properties (e.g., surface area and electrical double layer). Predictions from these models were highly variable, with results overpredicting or underpredicting the experimental data, depending on the assumptions used to apply the model. Although the models for reference minerals are supported by detailed experimental studies (and in ideal cases, surface spectroscopy), the results suggest that errors are caused in applying the models directly to the sediments by uncertain knowledge of: 1) the proportion and types of surface functional groups available for adsorption in the surface coatings; 2) the electric field at the mineral-water interface; and 3) surface reactions of major ions in the aqueous phase, such as Ca 2+, Mg 2+, HCO 3-, SO 42-, H 4SiO 4, and organic acids. In contrast, a semi-empirical surface complexation modeling approach can be used to describe the U(VI) experimental data more precisely as a function of aqueous chemical conditions. This approach is useful as a tool to describe the variation in U(VI) retardation as a function of chemical conditions in field-scale reactive transport simulations, and the approach can be used at other field sites. However, the semi-empirical approach is limited by the site-specific nature of the model parameters.

  8. Approaches to surface complexation modeling of Uranium(VI) adsorption on aquifer sediments

    USGS Publications Warehouse

    Davis, J.A.; Meece, D.E.; Kohler, M.; Curtis, G.P.

    2004-01-01

    Uranium(VI) adsorption onto aquifer sediments was studied in batch experiments as a function of pH and U(VI) and dissolved carbonate concentrations in artificial groundwater solutions. The sediments were collected from an alluvial aquifer at a location upgradient of contamination from a former uranium mill operation at Naturita, Colorado (USA). The ranges of aqueous chemical conditions used in the U(VI) adsorption experiments (pH 6.9 to 7.9; U(VI) concentration 2.5 × 10⁻⁸ to 1 × 10⁻⁵ M; partial pressure of carbon dioxide gas 0.05 to 6.8%) were based on the spatial variation in chemical conditions observed in 1999-2000 in the Naturita alluvial aquifer. The major minerals in the sediments were quartz, feldspars, and calcite, with minor amounts of magnetite and clay minerals. Quartz grains commonly exhibited coatings that were greater than 10 nm in thickness and composed of an illite-smectite clay with occluded ferrihydrite and goethite nanoparticles. Chemical extractions of quartz grains removed from the sediments were used to estimate the masses of iron and aluminum present in the coatings. Various surface complexation modeling approaches were compared in terms of the ability to describe the U(VI) experimental data and the data requirements for model application to the sediments. Published models for U(VI) adsorption on reference minerals were applied to predict U(VI) adsorption based on assumptions about the sediment surface composition and physical properties (e.g., surface area and electrical double layer). Predictions from these models were highly variable, with results overpredicting or underpredicting the experimental data, depending on the assumptions used to apply the model. Although the models for reference minerals are supported by detailed experimental studies (and in ideal cases, surface spectroscopy), the results suggest that errors are caused in applying the models directly to the sediments by uncertain knowledge of: 1) the proportion and types of surface functional groups available for adsorption in the surface coatings; 2) the electric field at the mineral-water interface; and 3) surface reactions of major ions in the aqueous phase, such as Ca2+, Mg2+, HCO3-, SO42-, H4SiO4, and organic acids. In contrast, a semi-empirical surface complexation modeling approach can be used to describe the U(VI) experimental data more precisely as a function of aqueous chemical conditions. This approach is useful as a tool to describe the variation in U(VI) retardation as a function of chemical conditions in field-scale reactive transport simulations, and the approach can be used at other field sites. However, the semi-empirical approach is limited by the site-specific nature of the model parameters. © 2004 Elsevier Ltd.

  9. Modeling Non-Linear Material Properties in Composite Materials

    DTIC Science & Technology

    2016-06-28

    Figure 2: Implementation of multiscale enrichment into FEA. The standard FEA shape function N_I corresponds to the mth degree of freedom, with details varying depending on the governing method. In this presentation we focus on the FEA approach; Reference [4] gives complete details.

  10. Renormalization Group (RG) in Turbulence: Historical and Comparative Perspective

    NASA Technical Reports Server (NTRS)

    Zhou, Ye; McComb, W. David; Vahala, George

    1997-01-01

    The term renormalization and renormalization group are explained by reference to various physical systems. The extension of renormalization group to turbulence is then discussed; first as a comprehensive review and second concentrating on the technical details of a few selected approaches. We conclude with a discussion of the relevance and application of renormalization group to turbulence modelling.

  11. 12 CFR Appendix C to Part 3 - Capital Adequacy Guidelines for Banks: Internal-Ratings-Based and Advanced Measurement Approaches

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    .... Excluded mortgage exposure means any one- to four-family residential pre-sold construction loan for a... model development. In this context, backtesting is one form of out-of-sample testing. Bank holding... one party (the protection purchaser) to transfer the credit risk of one or more exposures (reference...

  12. The Development of SCORM-Conformant Learning Content Based on the Learning Cycle Using Participatory Design

    ERIC Educational Resources Information Center

    Su, C. Y.; Chiu, C. H.; Wang, T. I.

    2010-01-01

    This study incorporates the 5E learning cycle strategy to design and develop Sharable Content Object Reference Model-conformant materials for elementary science education. The 5E learning cycle that supports the constructivist approach has been widely applied in science education. The strategy consists of five phases: engagement, exploration,…

  13. A Machine Learning Approach to Measurement of Text Readability for EFL Learners Using Various Linguistic Features

    ERIC Educational Resources Information Center

    Kotani, Katsunori; Yoshimi, Takehiko; Isahara, Hitoshi

    2011-01-01

    The present paper introduces and evaluates a readability measurement method designed for learners of EFL (English as a foreign language). The proposed readability measurement method (a regression model) estimates the text readability based on linguistic features, such as lexical, syntactic and discourse features. Text readability refers to the…

  14. Evaluating the All-Ages Lead Model Using Site-Specific Data: Approaches and Challenges

    EPA Science Inventory

    Lead (Pb) exposure continues to be a problem in the United States. Even after years of progress in reducing environmental levels, CDC estimates at least 500,000 U.S. children ages 1-5 years have blood Pb levels (BLL) above the CDC reference level of 5 µg/dL. Childhood Pb ex...

  15. Dimensional Comparisons: An Experimental Approach to the Internal/External Frame of Reference Model.

    ERIC Educational Resources Information Center

    Moller, Jens; Koller, Olaf

    2001-01-01

    Three experimental studies investigated the psychological processes underlying the effects of achievement in one domain on self-perceived competence in another. In Study 1, high achievement in one domain led to lower self-perceived competence in the other. Study 2 showed inverse effects on self-perceived competence based on achievement feedback.…

  16. Adoption by Policy Makers of Knowledge from Educational Research: An Alternative Perspective

    ERIC Educational Resources Information Center

    Brown, Chris

    2012-01-01

    The phrase knowledge adoption refers to the ways in which policymakers take up and use evidence. Whilst frameworks and models have been put forward to explain knowledge adoption activity, this paper argues that current approaches are flawed and do not address the complexities affecting the successful realisation of knowledge-adoption efforts.…

  17. Dimensionality of the Latent Structure and Item Selection via Latent Class Multidimensional IRT Models

    ERIC Educational Resources Information Center

    Bartolucci, F.; Montanari, G. E.; Pandolfi, S.

    2012-01-01

    With reference to a questionnaire aimed at assessing the performance of Italian nursing homes on the basis of the health conditions of their patients, we investigate two relevant issues: dimensionality of the latent structure and discriminating power of the items composing the questionnaire. The approach is based on a multidimensional item…

  18. Parsing in a Dynamical System: An Attractor-Based Account of the Interaction of Lexical and Structural Constraints in Sentence Processing.

    ERIC Educational Resources Information Center

    Tabor, Whitney; And Others

    1997-01-01

    Proposes a dynamical systems approach to parsing in which syntactic hypotheses are associated with attractors in a metric space. The experiments discussed documented various contingent frequency effects that cut across traditional linguistic grains, each of which was predicted by the dynamical systems model. (47 references) (Author/CK)

  19. Family Support Center Village: A Unique Approach for Low-Income Single Women with Children

    ERIC Educational Resources Information Center

    Graber, Helen V.; Wolfe, Jayne L.

    2004-01-01

    The Family Support Center, recognizing the need for single women with children to maintain stability, has developed a program referred to as the Family Support Center Village, which incorporates a service enriched co-housing model. The "Village" will be the catalyst for these mothers' self-sufficiency and will provide opportunities to develop…

  20. Reference-free determination of tissue absorption coefficient by modulation transfer function characterization in spatial frequency domain.

    PubMed

    Chen, Weiting; Zhao, Huijuan; Li, Tongxin; Yan, Panpan; Zhao, Kuanxin; Qi, Caixia; Gao, Feng

    2017-08-08

    Spatial frequency domain (SFD) measurement allows rapid and non-contact wide-field imaging of tissue optical properties, and has thus become a potential tool for assessing physiological parameters and therapeutic responses during photodynamic therapy of skin diseases. Conventional SFD measurement requires a reference measurement within the same experimental scenario as the test one to calibrate the mismatch between the real measurements and the model predictions. Due to individual physical and geometrical differences among tissues, organs and patients, an ideal reference measurement might be unavailable in clinical trials. To address this problem, we present a reference-free SFD determination of the absorption coefficient that is based on modulation transfer function (MTF) characterization. Instead of the absolute amplitude used in conventional SFD approaches, we employ the MTF to characterize the propagation of the modulated light in tissue. With such a dimensionless relative quantity, the measurements can be naturally matched to the model predictions without calibrating the illumination intensity. By constructing a three-dimensional database that portrays the MTF as a function of the optical properties (both the absorption coefficient μa and the reduced scattering coefficient μs′) and the spatial frequency, a look-up table approach or a least-squares curve-fitting method is readily applied to recover the absorption coefficient from a single frequency or from multiple frequencies, respectively. Simulation studies have verified the feasibility of the proposed reference-free method and evaluated its accuracy in the absorption recovery. Experimental validations have been performed on homogeneous tissue-mimicking phantoms with μa ranging from 0.01 to 0.07 mm^-1 and μs′ = 1.0 or 2.0 mm^-1. The results show maximum errors of 4.86% and 7% for μs′ = 1.0 mm^-1 and μs′ = 2.0 mm^-1, respectively. We have also presented quantitative ex vivo imaging of human lung cancer in a subcutaneous xenograft mouse model for further validation, and observed high absorption contrast in the tumor region. The proposed method can be applied to rapid and accurate determination of the absorption coefficient, and better yet, in a reference-free way. We believe this reference-free strategy will facilitate the clinical translation of SFD measurement toward enhanced intraoperative hemodynamic monitoring and personalized treatment planning in photodynamic therapy.
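
    The recovery step described above (look-up table or least-squares fit against a precomputed MTF database) can be sketched as follows. The forward model used here is a made-up monotonic surrogate, not the diffusion or Monte Carlo model that would actually populate the authors' three-dimensional database, and all parameter values are assumptions.

    ```python
    # Illustrative sketch only: recovering an absorption coefficient by
    # least-squares fitting of measured MTF values against a forward model.
    import numpy as np
    from scipy.optimize import least_squares

    def mtf_surrogate(mua, musp, freqs):
        """Hypothetical stand-in for MTF(mu_a, mu_s', f); monotone in mu_a."""
        return np.exp(-mua / (0.05 + freqs)) / (1.0 + freqs / musp)

    freqs = np.array([0.05, 0.1, 0.2, 0.3])   # spatial frequencies, mm^-1 (assumed)
    musp_known = 1.0                          # reduced scattering, mm^-1 (assumed known)
    mua_true = 0.03                           # mm^-1, value to recover

    measured = mtf_surrogate(mua_true, musp_known, freqs)  # stands in for a measurement

    res = least_squares(lambda m: mtf_surrogate(m[0], musp_known, freqs) - measured,
                        x0=[0.01], bounds=([0.001], [0.1]))
    print("recovered mu_a:", res.x[0])
    ```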

  1. IIR filtering based adaptive active vibration control methodology with online secondary path modeling using PZT actuators

    NASA Astrophysics Data System (ADS)

    Boz, Utku; Basdogan, Ipek

    2015-12-01

    Structural vibrations are a major cause of noise problems, discomfort and mechanical failures in aerospace, automotive and marine systems, which are mainly composed of plate-like structures. In order to reduce structural vibrations on these structures, active vibration control (AVC) is an effective approach. Adaptive filtering methodologies are preferred in AVC due to their ability to adjust themselves to the varying dynamics of the structure during operation. The filtered-X LMS (FXLMS) algorithm is a simple adaptive filtering algorithm widely implemented in active control applications. Proper implementation of FXLMS requires the availability of a reference signal to mimic the disturbance and a model of the dynamics between the control actuator and the error sensor, namely the secondary path. However, the controller output could interfere with the reference signal, and the secondary path dynamics may change during operation. This interference problem can be resolved by using an infinite impulse response (IIR) filter, which feeds one or more previous control signals back to the controller output, and the changing secondary path dynamics can be updated using an online modeling technique. In this paper, an IIR-filtering-based filtered-U LMS (FULMS) controller is combined with an online secondary path modeling algorithm to suppress the vibrations of a plate-like structure. The results are validated through numerical and experimental studies. The results show that the FULMS with online secondary path modeling approach has greater vibration rejection capability and a higher convergence rate than its FXLMS counterpart.
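
    For readers unfamiliar with the filtered-reference idea that FULMS builds on, a minimal FXLMS loop is sketched below, assuming a known FIR secondary path and a perfect secondary-path model; the IIR (FULMS) variant and the online secondary-path modelling used in the paper are not reproduced. Plant and path coefficients are hypothetical.

    ```python
    # Minimal FXLMS sketch (numpy only); all paths are hypothetical FIR filters.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 5000
    x = rng.standard_normal(N)             # reference signal (mimics the disturbance)
    P = np.array([0.6, -0.3, 0.1])         # primary path (hypothetical)
    S = np.array([0.5, 0.25])              # secondary path (hypothetical)
    S_hat = S.copy()                       # assume the secondary-path model is exact

    L, mu = 8, 0.01                        # adaptive filter length and step size
    w = np.zeros(L)
    x_buf  = np.zeros(L)                   # recent reference samples
    xf_buf = np.zeros(L)                   # recent filtered-reference samples
    y_buf  = np.zeros(len(S))              # recent controller outputs
    d = np.convolve(x, P)[:N]              # disturbance at the error sensor
    e_hist = np.zeros(N)

    for n in range(N):
        x_buf = np.roll(x_buf, 1); x_buf[0] = x[n]
        y = w @ x_buf                      # control signal
        y_buf = np.roll(y_buf, 1); y_buf[0] = y
        e = d[n] - S @ y_buf               # residual at the error sensor
        xf = S_hat @ x_buf[:len(S_hat)]    # reference filtered through S_hat
        xf_buf = np.roll(xf_buf, 1); xf_buf[0] = xf
        w += mu * e * xf_buf               # FXLMS weight update
        e_hist[n] = e

    print("mean |e|, first 500 samples:", np.abs(e_hist[:500]).mean(),
          " last 500 samples:", np.abs(e_hist[-500:]).mean())
    ```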

  2. Moving to a Modernized Height Reference System in Canada: Rationale, Status and Plans

    NASA Astrophysics Data System (ADS)

    Veronneau, M.; Huang, J.

    2007-05-01

    A modern society depends on a common coordinate reference system through which geospatial information can be interrelated and exploited reliably. For height measurements this requires the ability to measure mean sea level elevations easily, accurately, and at the lowest possible cost. The current national reference system for elevations, the Canadian Geodetic Vertical Datum of 1928 (CGVD28), offers only partial geographic coverage of the Canadian territory and is affected by inaccuracies that are becoming more apparent as users move to space-based technologies such as GPS. Furthermore, the maintenance and expansion of the national vertical network using spirit-levelling, a costly, time-consuming and labour-intensive proposition, has only been minimally funded over the past decade. It is now generally accepted that the most sustainable alternative for the realization of a national vertical datum is a gravimetric geoid model. This approach defines the datum in relation to an ellipsoid, making it compatible with space-based technologies for positioning. While simplifying access to heights above mean sea level all across the Canadian territory, this approach imposes additional demands on the quality of the geoid model. These are being met by recent and upcoming space gravimetry missions that have been and will be measuring the Earth's gravity field with increasing and unprecedented accuracy. To maintain compatibility with the CGVD28 datum materialized at benchmarks, the current first-order levelling can be readjusted by constraining geoid heights at selected stations of the Canadian Base Network. The new reference would change CGVD28 heights of benchmarks by up to 1 m across Canada. However, local height differences between benchmarks would maintain a relative precision of a few cm or better. CGVD28 will co-exist with the new height reference as long as it is required, but it will undoubtedly disappear as benchmarks are destroyed over time. The adoption of GNSS technologies for positioning should naturally move users to the new height reference and offer the possibility of transferring heights over longer distances, within the precision of the geoid model. This transition will also reduce user dependency on a dense network of benchmarks and offer the possibility for geodetic agencies to provide the reference frame with a reduced number of 3D control points. While the rationale for moving to a modernized height system is easily understood, the acceptance of the new system by users will only occur gradually as they adopt new technologies and procedures to access the height reference. A stakeholder consultation indicates user readiness and an implementation plan is starting to unfold. This presentation will look at the current state of the geoid model and control networks that will support the modernized height system. Results of the consultation and the recommendations regarding the roles and responsibilities of the various stakeholders involved in implementing the transition will also be reported.
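
    The practical core of a geoid-based datum is the standard relation between the GNSS-derived ellipsoidal height h, the geoid height N from the gravimetric geoid model, and the orthometric (mean-sea-level) height H:

    ```latex
    % Orthometric height from GNSS ellipsoidal height and geoid height
    H \;\approx\; h - N
    ```

    So a user with a GNSS receiver and access to the geoid model obtains heights above mean sea level without a nearby benchmark, which is precisely what reduces the dependency on dense levelling networks described above.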

  3. Computational model of precision grip in Parkinson's disease: a utility based approach

    PubMed Central

    Gupta, Ankur; Balasubramani, Pragathi P.; Chakravarthy, V. Srinivasa

    2013-01-01

    We propose a computational model of Precision Grip (PG) performance in normal subjects and Parkinson's Disease (PD) patients. Prior studies on grip force generation in PD patients show an increase in grip force during ON medication and an increase in the variability of the grip force during OFF medication (Ingvarsson et al., 1997; Fellows et al., 1998). Changes in grip force generation in dopamine-deficient PD conditions strongly suggest a contribution of the Basal Ganglia, a deep brain system having a crucial role in translating dopamine signals to decision making. The present approach is to treat the problem of modeling grip force generation as a problem of action selection, which is one of the key functions of the Basal Ganglia. The model consists of two components: (1) the sensory-motor loop component, and (2) the Basal Ganglia component. The sensory-motor loop component converts a reference position and a reference grip force into lift force and grip force profiles, respectively. These two forces cooperate in grip-lifting a load. The sensory-motor loop component also includes a plant model that represents the interaction between the two fingers involved in PG and the object to be lifted. The Basal Ganglia component is modeled using Reinforcement Learning, with the significant difference that the action selection is performed using a utility distribution instead of a purely value-based distribution, thereby incorporating risk-based decision making. The proposed model is able to account for the PG results from normal and PD patients accurately (Ingvarsson et al., 1997; Fellows et al., 1998). To our knowledge, this is the first model of PG under PD conditions. PMID:24348373
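
    A common way to realise utility-based (risk-sensitive) action selection, of the general kind the abstract refers to, is a softmax over a mean-variance utility. The sketch below is illustrative only; the exact utility function, its parameters and the mapping to grip-force levels in the paper may differ.

    ```python
    # Sketch of risk-sensitive, utility-based action selection:
    # softmax over U(a) = Q(a) - kappa * sqrt(Var(a)). All values are hypothetical.
    import numpy as np

    def select_action(q_mean, q_var, kappa=0.5, beta=5.0, rng=None):
        """Pick an action from a softmax over a mean-variance utility."""
        rng = rng or np.random.default_rng()
        utility = q_mean - kappa * np.sqrt(q_var)
        p = np.exp(beta * (utility - utility.max()))
        p /= p.sum()
        return rng.choice(len(q_mean), p=p), p

    q_mean = np.array([0.8, 1.0, 0.9])     # hypothetical action values (e.g. grip-force levels)
    q_var  = np.array([0.01, 0.30, 0.05])  # hypothetical outcome variance per action
    a, p = select_action(q_mean, q_var)
    print("selection probabilities:", np.round(p, 3), " chosen action:", a)
    ```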

  4. Single-particle cryo-EM using alignment by classification (ABC): the structure of Lumbricus terrestris haemoglobin.

    PubMed

    Afanasyev, Pavel; Seer-Linnemayr, Charlotte; Ravelli, Raimond B G; Matadeen, Rishi; De Carlo, Sacha; Alewijnse, Bart; Portugal, Rodrigo V; Pannu, Navraj S; Schatz, Michael; van Heel, Marin

    2017-09-01

    Single-particle cryogenic electron microscopy (cryo-EM) can now yield near-atomic resolution structures of biological complexes. However, the reference-based alignment algorithms commonly used in cryo-EM suffer from reference bias, limiting their applicability (also known as the 'Einstein from random noise' problem). Low-dose cryo-EM therefore requires robust and objective approaches to reveal the structural information contained in the extremely noisy data, especially when dealing with small structures. A reference-free pipeline is presented for obtaining near-atomic resolution three-dimensional reconstructions from heterogeneous ('four-dimensional') cryo-EM data sets. The methodologies integrated in this pipeline include a posteriori camera correction, movie-based full-data-set contrast transfer function determination, movie-alignment algorithms, (Fourier-space) multivariate statistical data compression and unsupervised classification, 'random-startup' three-dimensional reconstructions, four-dimensional structural refinements and Fourier shell correlation criteria for evaluating anisotropic resolution. The procedures exclusively use information emerging from the data set itself, without external 'starting models'. Euler-angle assignments are performed by angular reconstitution rather than by the inherently slower projection-matching approaches. The comprehensive 'ABC-4D' pipeline is based on the two-dimensional reference-free 'alignment by classification' (ABC) approach, where similar images in similar orientations are grouped by unsupervised classification. Some fundamental differences between X-ray crystallography versus single-particle cryo-EM data collection and data processing are discussed. The structure of the giant haemoglobin from Lumbricus terrestris at a global resolution of ∼3.8 Å is presented as an example of the use of the ABC-4D procedure.

  5. Health care managers' views on and approaches to implementing models for improving care processes.

    PubMed

    Andreasson, Jörgen; Eriksson, Andrea; Dellve, Lotta

    2016-03-01

    To develop a deeper understanding of health-care managers' views on and approaches to the implementation of models for improving care processes. In health care, there are difficulties in implementing models for improving care processes that have been decided on by upper management. Leadership approaches to this implementation can affect the outcome. In-depth interviews with first- and second-line managers in Swedish hospitals were conducted and analysed using grounded theory. 'Coaching for participation' emerged as a central theme for managers in handling top-down initiated process development. The vertical approach in this coaching addresses how managers attempt to sustain unit integrity through adapting and translating orders from top management. The horizontal approach in the coaching refers to managers' strategies for motivating and engaging their employees in implementation work. Implementing models for improving care processes requires a coaching leadership built on close manager-employee interaction, mindfulness regarding the pace of change at the unit level, managers with the competence to share responsibility with their teams and engaged employees with the competence to share responsibility for improving the care processes, and organisational structures that support process-oriented work. Implications for nursing management include the importance of giving nurse managers knowledge of change management. © 2015 John Wiley & Sons Ltd.

  6. An approach to the analysis of health care needs and resources.

    PubMed

    Stone, D H

    1980-09-01

    A semi-quantitative method of analysing the relationship between health care resources and need is described. It utilises the Donabedian model which expresses needs and resources in equivalent units in order to estimate the ratio of resources to need. The optimum resource/need ratio is regarded as that pertaining to the reference population; deviation from this optimum ratio in the subunits of the reference population is interpreted as a manifestation of inequitable resource distribution. An example is presented of the application of the method to the Greater Glasgow Health Board and its five constituent districts for the years 1974-77. It is argued that this method might, without undermining the principle of geographical equity, meet some of the objections to the more rigid 'formula' approach in the report of the Resource Allocation Working Party (RAWP) and in the Scottish Health Authorities Revenue Equalisation (SHARE) report.
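
    The arithmetic behind the resource/need ratio is straightforward; a minimal sketch, with invented district figures, is given below. The reference population's ratio is taken as the optimum and each sub-unit is expressed relative to it.

    ```python
    # Minimal sketch of the resource-to-need ratio idea: needs and resources in
    # comparable units, with the reference population's ratio taken as the optimum.
    # All figures are invented for illustration.
    districts = {
        # district: (resources, need) in equivalent units (hypothetical)
        "A": (120.0, 100.0),
        "B": ( 80.0, 110.0),
        "C": ( 95.0,  90.0),
    }
    ref_resources = sum(r for r, _ in districts.values())
    ref_need      = sum(n for _, n in districts.values())
    ref_ratio = ref_resources / ref_need   # optimum = reference population's ratio

    for name, (res, need) in districts.items():
        ratio = res / need
        print(f"district {name}: ratio={ratio:.2f}, relative to reference={ratio / ref_ratio:.2f}")
    ```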

  7. Self-reference and predictive, normative and prescriptive approaches in applications of systems thinking in social sciences—(Survey)

    NASA Astrophysics Data System (ADS)

    Mesjasz, Czesław

    2000-05-01

    Cybernetics, systems thinking and systems theory have been viewed as instruments for enhancing the predictive, normative and prescriptive capabilities of the social sciences, ranging from micro-scale management to various references to the global system. Descriptions, explanations and predictions achieved through various systems ideas were also viewed as supportive of the potential governance of social phenomena. The main aim of the paper is to examine the possible applications of modern systems thinking in predictive, normative and prescriptive approaches in the modern social sciences, ranging from management theory to global studies. Attention is paid not only to "classical" mathematical systems models but also to the role of predictive, normative and prescriptive interpretations of analogies and metaphors associated with the application of classical ("first order cybernetics") and modern ("second order cybernetics", "complexity theory") systems thinking in the social sciences.

  8. Query Health: standards-based, cross-platform population health surveillance

    PubMed Central

    Klann, Jeffrey G; Buck, Michael D; Brown, Jeffrey; Hadley, Marc; Elmore, Richard; Weber, Griffin M; Murphy, Shawn N

    2014-01-01

    Objective Understanding population-level health trends is essential to effectively monitor and improve public health. The Office of the National Coordinator for Health Information Technology (ONC) Query Health initiative is a collaboration to develop a national architecture for distributed, population-level health queries across diverse clinical systems with disparate data models. Here we review Query Health activities, including a standards-based methodology, an open-source reference implementation, and three pilot projects. Materials and methods Query Health defined a standards-based approach for distributed population health queries, using an ontology based on the Quality Data Model and Consolidated Clinical Document Architecture, Health Quality Measures Format (HQMF) as the query language, the Query Envelope as the secure transport layer, and the Quality Reporting Document Architecture as the result language. Results We implemented this approach using Informatics for Integrating Biology and the Bedside (i2b2) and hQuery for data analytics and PopMedNet for access control, secure query distribution, and response. We deployed the reference implementation at three pilot sites: two public health departments (New York City and Massachusetts) and one pilot designed to support Food and Drug Administration post-market safety surveillance activities. The pilots were successful, although improved cross-platform data normalization is needed. Discussions This initiative resulted in a standards-based methodology for population health queries, a reference implementation, and revision of the HQMF standard. It also informed future directions regarding interoperability and data access for ONC's Data Access Framework initiative. Conclusions Query Health was a test of the learning health system that supplied a functional methodology and reference implementation for distributed population health queries that has been validated at three sites. PMID:24699371

  9. Query Health: standards-based, cross-platform population health surveillance.

    PubMed

    Klann, Jeffrey G; Buck, Michael D; Brown, Jeffrey; Hadley, Marc; Elmore, Richard; Weber, Griffin M; Murphy, Shawn N

    2014-01-01

    Understanding population-level health trends is essential to effectively monitor and improve public health. The Office of the National Coordinator for Health Information Technology (ONC) Query Health initiative is a collaboration to develop a national architecture for distributed, population-level health queries across diverse clinical systems with disparate data models. Here we review Query Health activities, including a standards-based methodology, an open-source reference implementation, and three pilot projects. Query Health defined a standards-based approach for distributed population health queries, using an ontology based on the Quality Data Model and Consolidated Clinical Document Architecture, Health Quality Measures Format (HQMF) as the query language, the Query Envelope as the secure transport layer, and the Quality Reporting Document Architecture as the result language. We implemented this approach using Informatics for Integrating Biology and the Bedside (i2b2) and hQuery for data analytics and PopMedNet for access control, secure query distribution, and response. We deployed the reference implementation at three pilot sites: two public health departments (New York City and Massachusetts) and one pilot designed to support Food and Drug Administration post-market safety surveillance activities. The pilots were successful, although improved cross-platform data normalization is needed. This initiative resulted in a standards-based methodology for population health queries, a reference implementation, and revision of the HQMF standard. It also informed future directions regarding interoperability and data access for ONC's Data Access Framework initiative. Query Health was a test of the learning health system that supplied a functional methodology and reference implementation for distributed population health queries that has been validated at three sites. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  10. A Stigmergy Approach for Open Source Software Developer Community Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Xiaohui; Beaver, Justin M; Potok, Thomas E

    2009-01-01

    The stigmergy collaboration approach provides a hypothesized explanation of how online groups work together. In this research, we presented a stigmergy approach for building an agent-based open source software (OSS) developer community collaboration simulation. We used a group of actors who collaborate on OSS projects as our frame of reference and investigated how the choices actors make in contributing their work to the projects determine the global status of the OSS projects as a whole. In our simulation, the forum posts and project code served as the digital pheromone, and the modified Pierre-Paul Grasse pheromone model was used to compute the developer agents' behavior selection probability.
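
    A toy version of the pheromone mechanism, with evaporation and reinforcement, is sketched below. It is a generic stigmergy illustration; the specific modified Pierre-Paul Grasse model, its parameters and the way forum posts and code are converted into pheromone in the paper are not reproduced.

    ```python
    # Toy stigmergy sketch: project activity acts as a decaying "digital pheromone"
    # and an agent picks a project with probability proportional to its pheromone.
    # All values are hypothetical.
    import numpy as np

    rng = np.random.default_rng(1)
    pheromone = np.array([5.0, 1.0, 2.0])   # initial activity on three OSS projects
    rho = 0.1                                # evaporation rate per step
    deposit = 1.0                            # pheromone added by each contribution

    for step in range(50):
        p = pheromone / pheromone.sum()      # selection probability per project
        chosen = rng.choice(len(pheromone), p=p)
        pheromone *= (1.0 - rho)             # evaporation
        pheromone[chosen] += deposit         # reinforcement by the new contribution

    print("final pheromone levels:", np.round(pheromone, 2))
    print("final selection probabilities:", np.round(pheromone / pheromone.sum(), 2))
    ```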

  11. Vineyard water status assessment using on-the-go thermal imaging and machine learning.

    PubMed

    Gutiérrez, Salvador; Diago, María P; Fernández-Novales, Juan; Tardaguila, Javier

    2018-01-01

    The high impact of irrigation on crop quality and yield in grapevine makes the development of plant water status monitoring systems an essential issue in the context of sustainable viticulture. This study presents an on-the-go approach for the estimation of vineyard water status using thermal imaging and machine learning. The experiments were conducted during seven different weeks from July to September in season 2016. A thermal camera was embedded on an all-terrain vehicle moving at 5 km/h to take on-the-go thermal images of the vineyard canopy at 1.2 m of distance and 1.0 m from the ground. The two sides of the canopy were measured for the development of side-specific and global models. Stem water potential was acquired and used as the reference method. Additionally, reference temperatures Tdry and Twet were determined for the calculation of two thermal indices: the crop water stress index (CWSI) and the Jones index (Ig). Prediction models were built with and without considering the reference temperatures as input of the training algorithms. When using the reference temperatures, the best models yielded determination coefficients R2 of 0.61 and 0.58 for cross validation and prediction (RMSE values of 0.190 MPa and 0.204 MPa), respectively. Nevertheless, when the reference temperatures were not considered in the training of the models, their performance statistics responded in the same way, returning R2 values up to 0.62 and 0.65 for cross validation and prediction (RMSE values of 0.190 MPa and 0.184 MPa), respectively. The outcomes provided by the machine learning algorithms support the use of thermal imaging for fast, reliable estimation of vineyard water status, even suppressing the necessity of supervised acquisition of reference temperatures. The newly developed on-the-go method can be very useful in the grape and wine industry for assessing and mapping vineyard water status.
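
    The two thermal indices named above have standard definitions in terms of canopy temperature and the dry and wet reference temperatures, which can be computed directly; the temperatures in the sketch are hypothetical.

    ```python
    # Standard definitions of the two thermal indices; example temperatures in deg C
    # are hypothetical.
    def cwsi(t_canopy, t_dry, t_wet):
        """Crop water stress index: 0 (unstressed) .. 1 (fully stressed)."""
        return (t_canopy - t_wet) / (t_dry - t_wet)

    def jones_ig(t_canopy, t_dry, t_wet):
        """Jones index Ig, proportional to stomatal conductance."""
        return (t_dry - t_canopy) / (t_canopy - t_wet)

    t_dry, t_wet, t_canopy = 38.0, 24.0, 31.5
    print("CWSI =", round(cwsi(t_canopy, t_dry, t_wet), 3))
    print("Ig   =", round(jones_ig(t_canopy, t_dry, t_wet), 3))
    ```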

  12. Vineyard water status assessment using on-the-go thermal imaging and machine learning

    PubMed Central

    Gutiérrez, Salvador; Diago, María P.; Fernández-Novales, Juan

    2018-01-01

    The high impact of irrigation on crop quality and yield in grapevine makes the development of plant water status monitoring systems an essential issue in the context of sustainable viticulture. This study presents an on-the-go approach for the estimation of vineyard water status using thermal imaging and machine learning. The experiments were conducted during seven different weeks from July to September in season 2016. A thermal camera was embedded on an all-terrain vehicle moving at 5 km/h to take on-the-go thermal images of the vineyard canopy at 1.2 m of distance and 1.0 m from the ground. The two sides of the canopy were measured for the development of side-specific and global models. Stem water potential was acquired and used as the reference method. Additionally, reference temperatures Tdry and Twet were determined for the calculation of two thermal indices: the crop water stress index (CWSI) and the Jones index (Ig). Prediction models were built with and without considering the reference temperatures as input of the training algorithms. When using the reference temperatures, the best models yielded determination coefficients R2 of 0.61 and 0.58 for cross validation and prediction (RMSE values of 0.190 MPa and 0.204 MPa), respectively. Nevertheless, when the reference temperatures were not considered in the training of the models, their performance statistics responded in the same way, returning R2 values up to 0.62 and 0.65 for cross validation and prediction (RMSE values of 0.190 MPa and 0.184 MPa), respectively. The outcomes provided by the machine learning algorithms support the use of thermal imaging for fast, reliable estimation of vineyard water status, even suppressing the necessity of supervised acquisition of reference temperatures. The newly developed on-the-go method can be very useful in the grape and wine industry for assessing and mapping vineyard water status. PMID:29389982

  13. Electroencephalography (EEG) forward modeling via H(div) finite element sources with focal interpolation.

    PubMed

    Pursiainen, S; Vorwerk, J; Wolters, C H

    2016-12-21

    The goal of this study is to develop focal, accurate and robust finite element method (FEM) based approaches which can predict the electric potential on the surface of the computational domain given its structure and internal primary source current distribution. While conducting an EEG evaluation, the placement of source currents to the geometrically complex grey matter compartment is a challenging but necessary task to avoid forward errors attributable to tissue conductivity jumps. Here, this task is approached via a mathematically rigorous formulation, in which the current field is modeled via divergence conforming H(div) basis functions. Both linear and quadratic functions are used while the potential field is discretized via the standard linear Lagrangian (nodal) basis. The resulting model includes dipolar sources which are interpolated into a random set of positions and orientations utilizing two alternative approaches: the position based optimization (PBO) and the mean position/orientation (MPO) method. These results demonstrate that the present dipolar approach can reach or even surpass, at least in some respects, the accuracy of two classical reference methods, the partial integration (PI) and St. Venant (SV) approach which utilize monopolar loads instead of dipolar currents.

  14. The Importance of Neighborhood Scheme Selection in Agent-based Tumor Growth Modeling.

    PubMed

    Tzedakis, Georgios; Tzamali, Eleftheria; Marias, Kostas; Sakkalis, Vangelis

    2015-01-01

    Modeling tumor growth has proven a very challenging problem, mainly due to the fact that tumors are highly complex systems that involve dynamic interactions spanning multiple scales both in time and space. The desire to describe interactions at various scales has given rise to modeling approaches that use both continuous and discrete variables, known as hybrid approaches. This work refers to a hybrid model on a 2D square lattice focusing on cell movement dynamics, as they play an important role in tumor morphology, invasion and metastasis and are considered indicators of the stage of malignancy used for early prognosis and effective treatment. Considering various distributions of the microenvironment, we explore how Neumann vs. Moore neighborhood schemes affect tumor growth and morphology. The results indicate that neighborhood selection is critical under specific conditions that include i) an increased hapto-/chemotactic coefficient, ii) a rugged microenvironment and iii) ECM degradation.
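
    The two lattice neighbourhood schemes being compared are easy to state explicitly; a minimal sketch follows (the weighting of candidate sites by nutrient, ECM or chemotactic fields used in the hybrid model is not shown).

    ```python
    # Von Neumann (4-connected) vs Moore (8-connected) neighbourhoods on an n x n lattice.
    def neumann_neighbors(i, j, n):
        """4-connected neighbours of cell (i, j)."""
        cand = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
        return [(a, b) for a, b in cand if 0 <= a < n and 0 <= b < n]

    def moore_neighbors(i, j, n):
        """8-connected neighbours of cell (i, j)."""
        cand = [(i + di, j + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if not (di == 0 and dj == 0)]
        return [(a, b) for a, b in cand if 0 <= a < n and 0 <= b < n]

    # A migrating tumour cell would pick its next site from one of these sets,
    # weighted e.g. by local nutrient or ECM values.
    print(len(neumann_neighbors(5, 5, 10)), len(moore_neighbors(5, 5, 10)))  # 4, 8
    ```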

  15. On Multifunctional Collaborative Methods in Engineering Science

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.

    2001-01-01

    Multifunctional methodologies and analysis procedures are formulated for interfacing diverse subdomain idealizations including multi-fidelity modeling methods and multi-discipline analysis methods. These methods, based on the method of weighted residuals, ensure accurate compatibility of primary and secondary variables across the subdomain interfaces. Methods are developed using diverse mathematical modeling (i.e., finite difference and finite element methods) and multi-fidelity modeling among the subdomains. Several benchmark scalar-field and vector-field problems in engineering science are presented with extensions to multidisciplinary problems. Results for all problems presented are in overall good agreement with the exact analytical solution or the reference numerical solution. Based on the results, the integrated modeling approach using the finite element method for multi-fidelity discretization among the subdomains is identified as most robust. The multiple method approach is advantageous when interfacing diverse disciplines in which each of the method's strengths are utilized.

  16. A DG approach to the numerical solution of the Stein-Stein stochastic volatility option pricing model

    NASA Astrophysics Data System (ADS)

    Hozman, J.; Tichý, T.

    2017-12-01

    Stochastic volatility models capture real-world features of options better than the classical Black-Scholes treatment. Here we focus on pricing European-style options under the Stein-Stein stochastic volatility model, where the option value depends on time, on the price of the underlying asset and on the volatility as a function of a mean-reverting Ornstein-Uhlenbeck process. A standard mathematical approach to this model leads to a non-stationary second-order degenerate partial differential equation in two spatial variables, completed by a system of boundary and terminal conditions. In order to improve the numerical valuation process for such a pricing equation, we propose a numerical technique based on the discontinuous Galerkin method and the Crank-Nicolson scheme. Finally, reference numerical experiments on real market data illustrate comprehensive empirical findings on options with stochastic volatility.
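
    As a minimal illustration of the Crank-Nicolson time stepping mentioned above, the sketch below applies it to a one-dimensional heat-type equation with a sparse solve per step. The actual Stein-Stein pricing problem is a degenerate PDE in two spatial variables and is discretised with a discontinuous Galerkin method in space, which is not reproduced here; all grid and coefficient choices are illustrative.

    ```python
    # Crank-Nicolson time stepping for u_t = a * u_xx on (0, L) with zero boundaries.
    # Purely an illustration of the time scheme, not the Stein-Stein pricing solver.
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import spsolve

    a, L, T = 0.5, 1.0, 0.1
    nx, nt = 100, 200
    dx, dt = L / nx, T / nt
    x = np.linspace(0.0, L, nx + 1)

    u = np.sin(np.pi * x)                          # smooth profile vanishing at the boundaries
    lam = a * dt / dx**2
    A = diags([-lam / 2, 1 + lam, -lam / 2], [-1, 0, 1], shape=(nx - 1, nx - 1), format="csc")
    B = diags([ lam / 2, 1 - lam,  lam / 2], [-1, 0, 1], shape=(nx - 1, nx - 1), format="csc")

    for _ in range(nt):
        u[1:-1] = spsolve(A, B @ u[1:-1])          # implicit/explicit average per step

    print("solution at the mid-point after time stepping:", round(u[nx // 2], 4))
    ```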

  17. Accuracy assessment for a multi-parameter optical calliper in on line automotive applications

    NASA Astrophysics Data System (ADS)

    D'Emilia, G.; Di Gasbarro, D.; Gaspari, A.; Natale, E.

    2017-08-01

    In this work, a methodological approach based on the evaluation of measurement uncertainty is applied to an experimental test case related to the automotive sector. The uncertainty model for different measurement procedures of a high-accuracy optical gauge is discussed in order to identify the best measuring performance of the system for on-line applications, even as measurement requirements become more stringent. In particular, with reference to the industrial production and control strategies of high-performing turbochargers, two uncertainty models to be used with the optical calliper are proposed, discussed and compared. The models are based on an integrated approach between measurement methods and production best practices to emphasize their mutual coherence. The paper shows the advantages of the considerations that measurement uncertainty modelling provides for controlling uncertainty propagation on all the indirect measurements useful for statistical production control, on which further improvements can be based.

  18. A reference skeletal dosimetry model for an adult male radionuclide therapy patient based on three-dimensional imaging and paired-image radiation transport

    NASA Astrophysics Data System (ADS)

    Shah, Amish P.

    The need for improved patient-specificity of skeletal dose estimates is widely recognized in radionuclide therapy. Current clinical models for marrow dose are based on skeletal mass estimates from a variety of sources and linear chord-length distributions that do not account for particle escape into cortical bone. To predict marrow dose, these clinical models use a scheme that requires separate calculations of cumulated activity and radionuclide S values. Selection of an appropriate S value is generally limited to one of only three sources, all of which use as input the trabecular microstructure of an individual measured 25 years ago, and the tissue mass derived from different individuals measured 75 years ago. Our study proposed a new modeling approach to marrow dosimetry---the Paired Image Radiation Transport (PIRT) model---that properly accounts for both the trabecular microstructure and the cortical macrostructure of each skeletal site in a reference male radionuclide patient. The PIRT model, as applied within EGSnrc, requires two sets of input geometry: (1) an infinite voxel array of segmented microimages of the spongiosa acquired via microCT; and (2) a segmented ex-vivo CT image of the bone site macrostructure defining both the spongiosa (marrow, endosteum, and trabeculae) and the cortical bone cortex. Our study also proposed revising reference skeletal dosimetry models for the adult male cancer patient. Skeletal site-specific radionuclide S values were obtained for a 66-year-old male reference patient. The derivation of total skeletal S values was unique in that the necessary skeletal mass and electron dosimetry calculations were formulated from the same source bone site over the entire skeleton. We conclude that paired-image radiation-transport techniques provide an adoptable method by which the intricate, anisotropic trabecular microstructure of the skeletal site and the physical size and shape of the bone can be handled together, for improved compilation of reference radionuclide S values. We also conclude that this comprehensive model for the adult male cancer patient should be implemented for use in patient-specific calculations for radionuclide dosimetry of the skeleton.

  19. Dynamic Fuzzy Model Development for a Drum-type Boiler-turbine Plant Through GK Clustering

    NASA Astrophysics Data System (ADS)

    Habbi, Ahcène; Zelmat, Mimoun

    2008-10-01

    This paper discusses a TS fuzzy model identification method for an industrial drum-type boiler plant using the GK fuzzy clustering approach. The fuzzy model is constructed from a set of input-output data that covers a wide operating range of the physical plant. The reference data are generated using a complex first-principles-based mathematical model that describes the key dynamical properties of the boiler-turbine dynamics. The proposed fuzzy model is derived by means of a fuzzy clustering method with particular attention to structural flexibility and model interpretability issues. This may provide a basis for a new way to design model-based control and diagnosis mechanisms for the complex nonlinear plant.

  20. Technosocial Predictive Analytics in Support of Naturalistic Decision Making

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanfilippo, Antonio P.; Cowell, Andrew J.; Malone, Elizabeth L.

    2009-06-23

    A main challenge we face in fostering sustainable growth is to anticipate outcomes through predictive and proactive analysis across domains as diverse as energy, security, the environment, health and finance, in order to maximize opportunities, influence outcomes and counter adversities. The goal of this paper is to present new methods for anticipatory analytical thinking which address this challenge through the development of a multi-perspective approach to predictive modeling as the core of a creative decision-making process. This approach is uniquely multidisciplinary in that it strives to create decision advantage through the integration of human and physical models, and leverages knowledge management and visual analytics to support creative thinking by facilitating interoperable knowledge inputs and enhancing the user's cognitive access. We describe a prototype system which implements this approach and exemplify its functionality with reference to a use case in which predictive modeling is paired with analytic gaming to support collaborative decision-making in the domain of agricultural land management.

  1. Solar Plus: A Holistic Approach to Distributed Solar PV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    OShaughnessy, Eric J.; Ardani, Kristen B.; Cutler, Dylan S.

    Solar 'plus' refers to an emerging approach to distributed solar photovoltaic (PV) deployment that uses energy storage and controllable devices to optimize customer economics. The solar plus approach increases customer system value through technologies such as electric batteries, smart domestic water heaters, smart air-conditioner (AC) units, and electric vehicles. We use an NREL optimization model to explore the customer-side economics of solar plus under various utility rate structures and net metering rates. We explore optimal solar plus applications in five case studies with different net metering rates and rate structures. The model deploys different configurations of PV, batteries, smart domestic water heaters, and smart AC units in response to different rate structures and customer load profiles. The results indicate that solar plus improves the customer economics of PV and may mitigate some of the negative impacts of evolving rate structures on PV economics. Solar plus may become an increasingly viable model for optimizing PV customer economics in an evolving rate environment.

  2. Exploring the Physics of Unstable Nuclei

    NASA Astrophysics Data System (ADS)

    Volya, Alexander

    In this presentation the Continuum Shell Model (CSM) approach is advertised as a powerful theoretical tool for studying physics of unstable nuclei. The approach is illustrated using 17O as an example, which is followed by a brief presentation of the general CSM formalism. The successes of the CSM are highlighted and references are provided throughout the text. As an example, the CSM is applied perturbatively to 20O allowing one to explore the effects of continuum on positions of weakly bound states and low-lying resonances, as well as to discern some effects of threshold discontinuity.

  3. An Interactive Strategy for Solving Multi-Criteria Decision Making of Sustainable Land Revitalization Planning Problem

    NASA Astrophysics Data System (ADS)

    Mayasari, Ruth; Mawengkang, Herman; Gomar Purba, Ronal

    2018-02-01

    Land revitalization refers to the comprehensive renovation of farmland, waterways, roads, forests or villages to improve the quality of plantations, raise the productivity of the plantation area and improve agricultural production conditions and the environment. The objective of sustainable land revitalization planning is to facilitate environmentally, socially, and economically viable land use. Therefore it is reasonable to use a participatory approach to fulfil the plan. This paper addresses a multi-criteria decision aid to model such a planning problem; we then develop an interactive approach for solving the problem.

  4. Pulse fracture simulation in shale rock reservoirs: DEM and FEM-DEM approaches

    NASA Astrophysics Data System (ADS)

    González, José Manuel; Zárate, Francisco; Oñate, Eugenio

    2018-07-01

    In this paper we analyze the capabilities of two numerical techniques based on DEM and FEM-DEM approaches for the simulation of fracture in shale rock caused by a pulse of pressure. We have studied the evolution of fracture in several fracture scenarios related to the initial stress state in the soil or the pressure pulse peak. Fracture length and type of failure have been taken as reference for validating the models. The results obtained show a good approximation to FEM results from the literature.

  5. Integrated Assessment and the Relation Between Land-Use Change and Climate Change

    DOE R&D Accomplishments Database

    Dale, V. H.

    1994-10-07

    Integrated assessment is an approach that is useful in evaluating the consequences of global climate change. Understanding the consequences requires knowledge of the relationship between land-use change and climate change. Methodologies for assessing the contribution of land-use change to atmospheric CO2 concentrations are considered with reference to a particular case study area: south and southeast Asia. The use of models to evaluate the consequences of climate change on forests must also consider an assessment approach. Each of these points is discussed in the following four sections.

  6. 'Constraint consistency' at all orders in cosmological perturbation theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nandi, Debottam; Shankaranarayanan, S., E-mail: debottam@iisertvm.ac.in, E-mail: shanki@iisertvm.ac.in

    2015-08-01

    We study the equivalence of two approaches to cosmological perturbation theory at all orders, order-by-order Einstein's equations and the reduced action, for different models of inflation. We point out a crucial consistency check, which we refer to as the 'constraint consistency' condition, that needs to be satisfied in order for the two approaches to lead to an identical single-variable equation of motion. The method we propose here is a quick and efficient way to check this consistency for any model, including modified gravity models. Our analysis points out an important feature which is crucial for inflationary model building: all 'constraint'-inconsistent models have higher-order Ostrogradsky instabilities, but the reverse is not true. In other words, one can have models in which the lapse function and shift vector remain constraints, though these models may still have Ostrogradsky instabilities. We also obtain the single-variable equation for a non-canonical scalar field in the limit of power-law inflation for the second-order perturbed variables.

  7. Numerical prediction of kinetic model for enzymatic hydrolysis of cellulose using DAE-QMOM approach

    NASA Astrophysics Data System (ADS)

    Jamil, N. M.; Wang, Q.

    2016-06-01

    Bioethanol production from lignocellulosic biomass consists of three fundamental processes: pre-treatment, enzymatic hydrolysis, and fermentation. In the enzymatic hydrolysis phase, the enzymes break the cellulose chains into sugar in the form of cellobiose or glucose. A currently proposed kinetic model for enzymatic hydrolysis of cellulose that uses a population balance equation (PBE) mechanism was studied. The complexity of the model, due to its integro-differential equations, makes it difficult to find the analytical solution. Therefore, we solved the full PBE model numerically by using the DAE-QMOM approach. The computation was carried out using MATLAB software. The numerical results were compared to the asymptotic solution developed in the author's previous paper and to the results of Griggs et al. Besides confirming that the findings were consistent with those references, some significant characteristics were also captured. The PBE model for the enzymatic hydrolysis process can be solved using the DAE-QMOM method. Also, an improved understanding of the physical insights of the model was achieved.

  8. An Enhanced Engineering Perspective of Global Climate Systems and Statistical Formulation of Terrestrial CO2 Exchanges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Yuanshun; Baek, Seung H.; Garcia-Diza, Alberto

    2012-01-01

    This paper designs a comprehensive approach, based on the engineering machine/system concept, to model, analyze, and assess the level of CO2 exchange between the atmosphere and terrestrial ecosystems, which is an important factor in understanding changes in global climate. The focus of this article is on spatial patterns and on the correlation between levels of CO2 fluxes and a variety of influencing factors in eco-environments. The engineering/machine concept used is a system protocol that includes the sequential activities of design, test, observe, and model. This concept is applied to explicitly include various influencing factors and interactions associated with CO2 fluxes. To formulate effective models of a large and complex climate system, this article introduces a modeling technique referred to as Stochastic Filtering Analysis of Variance (SF-ANOVA). The CO2 flux data observed from some sites of AmeriFlux are used to illustrate and validate the analysis, prediction and globalization capabilities of the proposed engineering approach and the SF-ANOVA technique. The SF-ANOVA modeling approach was compared to stepwise regression, ridge regression, and neural networks. The comparison indicated that the proposed approach is a valid and effective tool with similar accuracy and less complexity than the other procedures.

  9. Torus Approach in Gravity Field Determination from Simulated GOCE Gravity Gradients

    NASA Astrophysics Data System (ADS)

    Liu, Huanling; Wen, Hanjiang; Xu, Xinyu; Zhu, Guangbin

    2016-08-01

    In the Torus approach, observations are projected onto nominal orbits with constant radius and inclination, and lumped coefficients provide a linear relationship between the observations and the spherical harmonic coefficients. Based on this relationship, a two-dimensional FFT and block-diagonal least-squares adjustment are used to recover the Earth's gravity field model. The Earth's gravity field model complete to degree and order 200 is recovered using simulated satellite gravity gradients on a torus grid, and the degree median error is smaller than 10^-18, which shows the effectiveness of the Torus approach. EGM2008 is employed as the reference model, and the gravity field model is resolved using simulated observations without noise given on GOCE orbits of 61 days. The error from reduction and interpolation can be mitigated by iterations. Due to the polar gap, the precision of the low-order coefficients is lower. Without considering these coefficients, the maximum geoid degree error and cumulative error are 0.022 mm and 0.099 mm, respectively. The Earth's gravity field model is also recovered from simulated observations with white noise of 5 mE/Hz^1/2, which is compared to that from the direct method. In conclusion, it is demonstrated that the Torus approach is a valid method for processing the massive amount of GOCE gravity gradients.

  10. Open Pit Mine 3D Mapping by TLS and Digital Photogrammetry: 3D Model Update Thanks to a SLAM Based Approach

    NASA Astrophysics Data System (ADS)

    Vassena, G.; Clerici, A.

    2018-05-01

    The state of the art of 3D surveying technologies, if correctly applied, allows one to obtain 3D coloured models of large open pit mines using different technologies such as terrestrial laser scanning (TLS) with images, combined with UAV-based digital photogrammetry. GNSS and/or total stations are also currently used to georeference the model. The University of Brescia has carried out a project to map in 3D an open pit mine located in Botticino, a famous marble extraction site close to Brescia in northern Italy. Terrestrial laser scanner 3D point clouds combined with RGB images and digital photogrammetry from a UAV have been used to map a large part of the quarry. By rigorous and well-known procedures, a 3D point cloud and mesh model have been obtained using an easy and rigorous approach. After the description of the combined mapping process, the paper describes the innovative process proposed for the daily/weekly update of the model itself. To realize this task, a SLAM technology approach is described, using an innovative instrument capable of running an automatic localization process and real-time, on-the-field change detection analysis.

  11. Qualitative modelling for the Caeté Mangrove Estuary (North Brazil): a preliminary approach to an integrated eco-social analysis

    NASA Astrophysics Data System (ADS)

    Ortiz, Marco; Wolff, Matthias

    2004-10-01

    The sustainability of different integrated management regimes for the mangrove ecosystem of the Caeté Estuary (North Brazil) was assessed using a holistic theoretical framework. As a way to demonstrate that the behaviour and trajectory of complex whole systems are not epiphenomenal to the properties of their small parts, a set of conceptual models ranging from more reductionistic to more holistic was enunciated. These models integrate the scientific information published to date for this mangrove ecosystem. The sustainability of different management scenarios (forestry and fishery) was assessed. Since the exploitation of mangrove trees is not allowed under Brazilian law, forestry was included only for simulation purposes. The model simulations revealed that sustainability predictions of reductionistic models should not be extrapolated to holistic approaches. Forestry and fishery activities seem to be sustainable only if they are self-damped. The exploitation of the two mangrove species Rhizophora mangle and Avicennia germinans does not appear to be sustainable, thus a rotational harvest is recommended. A similar conclusion holds for the exploitation of invertebrate species. Our results suggest that more studies should be focused on the estimation of maximum sustainable yield based on a multispecies approach. Any reference to holistic sustainability based on reductionistic approaches may distort our understanding of complex natural ecosystems.

  12. A CASE STUDY OF THE REFERENCE CONDITION APPROACH TO NITROGEN MANAGEMENT IN ESTUARIES

    EPA Science Inventory

    One way to estimate estuarine response to changes in nitrogen loading in coastal systems is by using a reference approach. This talk details the application of paleoecological analysis and use of historical data to estimate reference loads of nitrogen to New Bedford Harbor (NBH),...

  13. Dynamical prediction of flu seasonality driven by ambient temperature: influenza vs. common cold

    NASA Astrophysics Data System (ADS)

    Postnikov, Eugene B.

    2016-01-01

    This work presents a comparative analysis of Influenzanet data for influenza itself and common cold in the Netherlands during the last 5 years, from the point of view of modelling by linearised SIRS equations parametrically driven by the ambient temperature. It is argued that this approach allows for the forecast of common cold, but not of influenza in a strict sense. The difference in their kinetic models is discussed with reference to the clinical background.
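
    A temperature-driven SIRS model of the general kind described above can be sketched in a few lines; the functional form of beta(T), the seasonal temperature curve and all parameter values below are illustrative assumptions rather than the calibrated model of the paper.

    ```python
    # SIRS model with a transmission rate modulated by ambient temperature (sketch).
    import numpy as np
    from scipy.integrate import solve_ivp

    def temperature(t):                      # seasonal temperature, deg C (assumed shape)
        return 10.0 + 10.0 * np.cos(2 * np.pi * t / 365.0)

    def sirs(t, y, beta0=0.4, k=0.03, gamma=0.2, xi=1.0 / 60.0):
        s, i, r = y
        beta = beta0 * np.exp(-k * temperature(t))   # colder -> higher transmission (assumed)
        return [-beta * s * i + xi * r,
                 beta * s * i - gamma * i,
                 gamma * i - xi * r]

    sol = solve_ivp(sirs, [0, 730], [0.99, 0.01, 0.0], max_step=1.0)
    peak_day = sol.t[np.argmax(sol.y[1])]
    print("first incidence peak near day", int(peak_day))
    ```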

  14. Gerris Flow Solver: Implementation and Application

    DTIC Science & Technology

    2013-05-12

    2010), as well as tsunamis (Popinet 2011; 2012). The OMEGA model (Bacon et al., 2000; Boybeyi et al., 2001) took a different approach to adaptivity...application of the model system to problems of interest. Cited References: D. P. Bacon, N. N. Ahmad, et al. (2000), A dynamically adapting weather...Geophysical Union, Washington, DC, 1–16. Z. Boybeyi, N. N. Ahmad, D. P. Bacon, T. J. Dunn, M. S. Hall, P. C. S. Lee, R. A. Sarma, and T. R. Wait (2001

  15. Using a Hierarchical Approach to Model Regional Source Sink Dynamics for Neotropical Nearctic Songbirds to Inform Management Practices on Department of Defense Installations

    DTIC Science & Technology

    2017-03-20

    comparison with the more intensive demographic study. We found support for spatial variation in productivity at both location and station scales. At location...the larger intensive demographic monitoring study, we also fit a productivity model that included a covariate calculated for the 12 stations included...Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise, does not necessarily

  16. Thermodynamical properties of liquid lanthanides-A variational approach

    NASA Astrophysics Data System (ADS)

    Patel, H. P.; Thakor, P. B.; Sonvane, Y. A.

    2015-06-01

    Thermodynamical properties such as the entropy (S), internal energy (E) and Helmholtz free energy (F) of liquid lanthanides, obtained using a variational principle based on the Gibbs-Bogoliubov (GB) inequality with a Percus-Yevick hard-sphere reference system, are reported in the present investigation. To describe the electron-ion interaction we have used our newly constructed parameter-free model potential along with the Sarkar et al. local field correction function. Lastly, we conclude that our newly constructed model potential is capable of explaining the thermodynamical properties of liquid lanthanides.
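
    The variational principle referred to here is the Gibbs-Bogoliubov inequality, which bounds the Helmholtz free energy of the real system by that of the hard-sphere reference system plus the reference-ensemble average of the difference of the Hamiltonians:

    ```latex
    % Gibbs-Bogoliubov bound with a hard-sphere (HS) reference system
    F \;\le\; F_{\mathrm{HS}} \;+\; \left\langle H - H_{\mathrm{HS}} \right\rangle_{\mathrm{HS}}
    ```

    The hard-sphere diameter (equivalently, the packing fraction) is then varied to minimise the right-hand side, and the entropy and internal energy follow from the optimised free energy.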

  17. Thermodynamical properties of liquid lanthanides-A variational approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patel, H. P.; Department of Applied Physics, S. V. National Institute of Technology, Surat 395 007, Gujarat; Thakor, P. B., E-mail: pbthakor@rediffmail.com

    2015-06-24

    Thermodynamical properties such as the entropy (S), internal energy (E) and Helmholtz free energy (F) of liquid lanthanides, obtained using a variational principle based on the Gibbs-Bogoliubov (GB) inequality with a Percus-Yevick hard-sphere reference system, are reported in the present investigation. To describe the electron-ion interaction we have used our newly constructed parameter-free model potential along with the Sarkar et al. local field correction function. Lastly, we conclude that our newly constructed model potential is capable of explaining the thermodynamical properties of liquid lanthanides.

  18. Application of thin plate splines for accurate regional ionosphere modeling with multi-GNSS data

    NASA Astrophysics Data System (ADS)

    Krypiak-Gregorczyk, Anna; Wielgosz, Pawel; Borkowski, Andrzej

    2016-04-01

    GNSS-derived regional ionosphere models are widely used in precise positioning as well as in ionosphere and space weather studies. However, their accuracy is often not sufficient to support precise positioning, RTK in particular. In this paper, we present a new approach that uses solely carrier-phase multi-GNSS observables and thin plate splines (TPS) for accurate ionospheric TEC modeling. TPS is the closed-form solution of a variational problem minimizing both the sum of squared second derivatives of a smoothing function and the deviation between data points and this function. This approach is used in the UWM-rt1 regional ionosphere model developed at UWM in Olsztyn. The model provides ionospheric TEC maps with high spatial and temporal resolution: 0.2 x 0.2 degrees and 2.5 minutes, respectively. For TEC estimation, EPN and EUPOS reference station data are used. The maps are available with a delay of 15-60 minutes. In this paper we compare the performance of the UWM-rt1 model with IGS global and CODE regional ionosphere maps during the ionospheric storm of March 17, 2015. During this storm, the TEC level over Europe doubled compared to earlier quiet days. The performance of the UWM-rt1 model was validated by (a) comparison to reference double-differenced ionospheric corrections over selected baselines, and (b) analysis of post-fit residuals of calibrated carrier-phase geometry-free observational arcs at selected test stations. The results show a very good performance of the UWM-rt1 model. The post-fit residuals obtained for the UWM maps are lower by one order of magnitude compared to the IGS maps. The accuracy of UWM-rt1-derived TEC maps is estimated at 0.5 TECU. This may be directly translated to the user positioning domain.
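
    Thin-plate-spline interpolation of TEC samples onto a regional grid can be sketched in a few lines. The example below is only illustrative: it uses scipy's RBFInterpolator with a thin-plate-spline kernel on synthetic pierce-point data, and is not the UWM-rt1 implementation or its actual smoothing strategy.

    ```python
    # Illustrative thin-plate-spline fit of TEC samples onto a regular lat/lon grid.
    # Synthetic data; not the UWM-rt1 model or its estimation procedure.
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(0)
    # Hypothetical ionospheric pierce points (lat, lon in degrees) and TEC values (TECU).
    pts = np.column_stack([rng.uniform(45, 60, 200),    # latitude
                           rng.uniform(10, 30, 200)])   # longitude
    tec = 15.0 + 5.0 * np.sin(np.radians(pts[:, 0])) + rng.normal(0.0, 0.3, 200)

    # Thin plate spline with a small smoothing term to balance fit and roughness.
    tps = RBFInterpolator(pts, tec, kernel='thin_plate_spline', smoothing=1e-2)

    # Evaluate on a 0.2 x 0.2 degree grid, matching the map resolution quoted above.
    lat, lon = np.meshgrid(np.arange(45, 60, 0.2), np.arange(10, 30, 0.2), indexing='ij')
    grid = np.column_stack([lat.ravel(), lon.ravel()])
    tec_map = tps(grid).reshape(lat.shape)
    print(tec_map.shape, tec_map.mean())
    ```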

  19. A new hybrid approach for MHC genotyping: high-throughput NGS and long read MinION nanopore sequencing, with application to the non-model vertebrate Alpine chamois (Rupicapra rupicapra).

    PubMed

    Fuselli, S; Baptista, R P; Panziera, A; Magi, A; Guglielmi, S; Tonin, R; Benazzo, A; Bauzer, L G; Mazzoni, C J; Bertorelle, G

    2018-03-24

    The major histocompatibility complex (MHC) acts as an interface between the immune system and infectious diseases. Accurate characterization and genotyping of the extremely variable MHC loci are challenging, especially without a reference sequence. We designed a combination of long-range PCR, Illumina short-read, and Oxford Nanopore MinION long-read approaches to capture the genetic variation of the MHC II DRB locus in an Italian population of the Alpine chamois (Rupicapra rupicapra). We utilized long-range PCR to generate a 9 kb fragment of the DRB locus. Amplicons from six different individuals were fragmented, tagged, and simultaneously sequenced with Illumina MiSeq. One of these amplicons was sequenced with the MinION device, which produced long reads covering the entire amplified fragment. A pipeline that combines short and long reads resolved several short tandem repeats and homopolymers and produced a de novo reference, which was then used to map and genotype the short reads from all individuals. The assembled DRB locus showed a high level of polymorphism and the presence of a recombination breakpoint. Our results suggest that an amplicon-based NGS approach coupled with single-molecule MinION nanopore sequencing can efficiently achieve both the assembly and the genotyping of complex genomic regions in multiple individuals in the absence of a reference sequence.
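
    The core idea of combining the two read types (long reads give contiguity, short reads give per-base accuracy) can be illustrated with a toy consensus-polishing step. The function below is a conceptual sketch only: it assumes the short reads have already been aligned to the long-read draft and simply takes a majority vote per position, which is far simpler than the published pipeline.

    ```python
    # Toy illustration of polishing a long-read draft with aligned short-read bases.
    # Conceptual sketch, not the published pipeline: real polishing handles indels,
    # quality scores, and homopolymer errors with dedicated tools.
    from collections import Counter

    def polish_draft(draft, pileup, min_depth=3):
        """Return a polished consensus sequence.

        draft  : str, the long-read (e.g. MinION) draft sequence.
        pileup : list of lists, pileup[i] holds the short-read bases aligned to
                 draft position i (assumed to be precomputed by a mapper).
        """
        polished = []
        for i, ref_base in enumerate(draft):
            bases = pileup[i] if i < len(pileup) else []
            if len(bases) >= min_depth:
                # Majority vote among the short reads covering this position.
                polished.append(Counter(bases).most_common(1)[0][0])
            else:
                # Too little short-read support: keep the draft base.
                polished.append(ref_base)
        return "".join(polished)

    # Hypothetical example: short reads correct a likely homopolymer error.
    draft = "ACGTTTA"
    pileup = [list("AAAA"), list("CCCC"), list("GGGG"),
              list("TTTT"), list("TTTT"), list("GGGT"), list("AAAA")]
    print(polish_draft(draft, pileup))  # -> "ACGTTGA"
    ```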

  20. Robust tracking of a virtual electrode on a coronary sinus catheter for atrial fibrillation ablation procedures

    NASA Astrophysics Data System (ADS)

    Wu, Wen; Chen, Terrence; Strobel, Norbert; Comaniciu, Dorin

    2012-02-01

    Catheter tracking in X-ray fluoroscopic images has become more important in interventional applications for atrial fibrillation (AF) ablation procedures. It provides real-time guidance for the physicians and can be used as a reference for motion compensation applications. In this paper, we propose a novel approach to track a virtual electrode (VE), a non-existent electrode on the coronary sinus (CS) catheter located more proximally than any real electrode. Successful tracking of the VE can provide more accurate motion information than tracking of real electrodes. To achieve VE tracking, we first model the CS catheter as a set of electrodes, which are detected by our previously published learning-based approach [1]. The tracked electrodes are then used to generate the hypotheses for tracking the VE. Model-based hypotheses are fused and evaluated by a Bayesian framework. Evaluation has been conducted on a database of clinical AF ablation data including challenging scenarios such as low signal-to-noise ratio (SNR), occlusion, and nonrigid deformation. Our approach obtains a median error of 0.54 mm, and 90% of the evaluated data have errors of less than 1.67 mm. The speed of our tracking algorithm reaches 6 frames per second on most data. Our study on motion compensation shows that using the VE as a reference provides a good point to detect non-physiological catheter motion during AF ablation procedures [2].
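
    The hypothesis-fusion step can be illustrated with a small numerical sketch: candidate virtual-electrode positions are generated by extrapolating the tracked real electrodes along the catheter, each candidate is weighted by an assumed likelihood (here a simple Gaussian motion prior around the previous estimate), and the weighted mean is taken as the fused estimate. The extrapolation rule, the likelihood, and all parameters below are assumptions made for illustration, not the authors' published framework.

    ```python
    # Toy sketch of fusing virtual-electrode (VE) position hypotheses.
    # The extrapolation rule and Gaussian weighting are illustrative assumptions.
    import numpy as np

    def ve_hypotheses(electrodes, offsets=(0.8, 1.0, 1.2)):
        """Extrapolate beyond the most proximal electrode along the catheter direction.

        electrodes : (N, 2) array of tracked electrode positions (distal -> proximal).
        offsets    : scale factors on the last inter-electrode spacing (assumed).
        """
        direction = electrodes[-1] - electrodes[-2]
        return np.array([electrodes[-1] + s * direction for s in offsets])

    def fuse(hypotheses, prev_ve, sigma=2.0):
        """Weight each hypothesis by a Gaussian motion prior around the previous VE."""
        d2 = np.sum((hypotheses - prev_ve) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * sigma ** 2))
        w /= w.sum()
        return (w[:, None] * hypotheses).sum(axis=0)

    electrodes = np.array([[10.0, 12.0], [14.0, 13.0], [18.0, 14.0], [22.0, 15.0]])
    prev_ve = np.array([26.5, 16.2])
    print(fuse(ve_hypotheses(electrodes), prev_ve))
    ```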
