Sample records for model tsam compared

  1. Projecting Future Scheduled Airline Demand, Schedules and NGATS Benefits Using TSAM

    NASA Technical Reports Server (NTRS)

    Dollyhigh, Samuel; Smith, Jeremy; Viken, Jeff; Trani, Antonio; Baik, Hojong; Hinze, Nickolas; Ashiabor, Senanu

    2006-01-01

The Transportation Systems Analysis Model (TSAM), developed by Virginia Tech's Air Transportation Systems Lab and NASA Langley, can provide detailed analysis of the effects of a full range of NASA and FAA aviation projects on the demand for air travel. TSAM has been used to project the passenger demand for very light jet (VLJ) air taxi service, scheduled airline demand growth and future schedules, Next Generation Air Transportation System (NGATS) benefits, and future passenger revenues for the Airport and Airway Trust Fund. TSAM can project the resulting demand when new vehicles and/or technology are inserted into the long-distance (100 or more miles one-way) transportation system, as well as changes in demand resulting from fare yield increases or decreases, airport transit times, scheduled flight times, ticket taxes, reductions or increases in flight delays, and so on. TSAM models all long-distance travel in the contiguous U.S. and determines the mode choice of the traveler based on detailed trip costs, travel time, schedule frequency, purpose of the trip (business or non-business), and household income level of the traveler. Demand is modeled at the county level, with an airport choice module providing up to three airports as part of the mode choice. Future enplanements at airports can be projected for different scenarios. A Fratar algorithm and a schedule generator are applied to generate future flight schedules. This paper presents the application of TSAM to modeling future scheduled air passenger demand and the resulting airline schedules, the impact of NGATS goals and objectives on passenger demand, and projections of passenger fee receipts for several scenarios for the FAA Airport and Airway Trust Fund.
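    The Fratar step mentioned in this abstract is, at its core, iterative proportional fitting of a base origin-destination trip table to projected future row and column totals. The sketch below is a minimal illustration of that balancing idea, not the TSAM implementation; the airports, growth factors and trip counts are made-up numbers.

```python
import numpy as np

def fratar(base_od, target_prod, target_attr, iters=100):
    """Balance a base O-D trip table so its row sums approach target trip
    productions and its column sums approach target attractions
    (Fratar / iterative proportional fitting)."""
    t = base_od.astype(float).copy()
    for _ in range(iters):
        t *= (target_prod / t.sum(axis=1))[:, None]   # scale rows to productions
        t *= (target_attr / t.sum(axis=0))[None, :]   # scale columns to attractions
    return t

# Toy 3-airport example with uneven projected growth (illustrative numbers).
base = np.array([[0.0, 100.0,  50.0],
                 [80.0,  0.0,  70.0],
                 [60.0, 90.0,   0.0]])
prod = base.sum(axis=1) * np.array([1.5, 1.2, 1.1])   # future departures per airport
attr = base.sum(axis=0) * (prod.sum() / base.sum())   # arrivals, consistent total
future = fratar(base, prod, attr)
```

    The target productions and attractions must share the same grand total for the iteration to converge; a schedule generator would then convert the balanced trip table into flights.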

  2. NAS Demand Predictions, Transportation Systems Analysis Model (TSAM) Compared with Other Forecasts

    NASA Technical Reports Server (NTRS)

    Viken, Jeff; Dollyhigh, Samuel; Smith, Jeremy; Trani, Antonio; Baik, Hojong; Hinze, Nicholas; Ashiabor, Senanu

    2006-01-01

    The current work incorporates the Transportation Systems Analysis Model (TSAM) to predict the future demand for airline travel. TSAM is a multi-mode, national model that predicts the demand for all long-distance travel at the county level based upon population and demographics. The model conducts a mode choice analysis to compute the demand for commercial airline travel based upon the traveler's trip purpose, value of time, and the cost and time of the trip. The county demand for airline travel is then aggregated (or distributed) to the airport level, and the enplanement demand at commercial airports is modeled. With the growth in flight demand, and utilizing current airline flight schedules, the Fratar algorithm is used to develop future flight schedules in the NAS. The projected flights can then be flown through air transportation simulators to quantify the ability of the NAS to meet future demand. A major strength of the TSAM analysis is that scenario planning can be conducted to quantify capacity requirements at individual airports based upon different future scenarios. Different demographic scenarios can be analyzed to model the sensitivity of demand to them. Also, it is fairly well known, but not well modeled at the airport level, that the demand for travel is highly dependent on the cost of travel, or the fare yield of the airline industry. The FAA projects the fare yield (in constant-year dollars) to keep decreasing into the future. The magnitude and/or direction of these projections can be suspect in light of the general lack of airline profits and the large rises in airline fuel costs. Also, changes in travel time and convenience influence the demand for air travel, especially for business travel. Future planners cannot easily conduct sensitivity studies of future demand with the FAA TAF data, nor with the Boeing or Airbus projections. In TSAM, many factors can be parameterized and various demand sensitivities can be predicted for future travel. 
These resulting demand scenarios can be incorporated into future flight schedules, thereby providing a quantifiable demand for flights in the NAS for a range of futures. In addition, new future airline business scenarios are investigated that illustrate when direct flights can replace connecting flights and when larger aircraft can be substituted, where justified by demand.

  3. Utilizing Traveler Demand Modeling to Predict Future Commercial Flight Schedules in the NAS

    NASA Technical Reports Server (NTRS)

    Viken, Jeff; Dollyhigh, Samuel; Smith, Jeremy; Trani, Antonio; Baik, Hojong; Hinze, Nicholas; Ashiabor, Senanu

    2006-01-01

    The current work incorporates the Transportation Systems Analysis Model (TSAM) to predict the future demand for airline travel. TSAM is a multi-mode, national model that predicts the demand for all long-distance travel at the county level based upon population and demographics. The model conducts a mode choice analysis to compute the demand for commercial airline travel based upon the traveler's trip purpose, value of time, and the cost and time of the trip. The county demand for airline travel is then aggregated (or distributed) to the airport level, and the enplanement demand at commercial airports is modeled. With the growth in flight demand, and utilizing current airline flight schedules, the Fratar algorithm is used to develop future flight schedules in the NAS. The projected flights can then be flown through air transportation simulators to quantify the ability of the NAS to meet future demand. A major strength of the TSAM analysis is that scenario planning can be conducted to quantify capacity requirements at individual airports based upon different future scenarios. Different demographic scenarios can be analyzed to model the sensitivity of demand to them. Also, it is fairly well known, but not well modeled at the airport level, that the demand for travel is highly dependent on the cost of travel, or the fare yield of the airline industry. The FAA projects the fare yield (in constant-year dollars) to keep decreasing into the future. The magnitude and/or direction of these projections can be suspect in light of the general lack of airline profits and the large rises in airline fuel costs. Also, changes in travel time and convenience influence the demand for air travel, especially for business travel. Future planners cannot easily conduct sensitivity studies of future demand with the FAA TAF data, nor with the Boeing or Airbus projections. In TSAM, many factors can be parameterized and various demand sensitivities can be predicted for future travel. 
These resulting demand scenarios can be incorporated into future flight schedules, thereby providing a quantifiable demand for flights in the NAS for a range of futures. In addition, new future airline business scenarios are investigated that illustrate when direct flights can replace connecting flights and when larger aircraft can be substituted, where justified by demand.

  4. CRISPR/Cas9 cleavage efficiency regression through boosting algorithms and Markov sequence profiling.

    PubMed

    Peng, Hui; Zheng, Yi; Blumenstein, Michael; Tao, Dacheng; Li, Jinyan

    2018-04-16

    The CRISPR/Cas9 system is a widely used genome editing tool. A prediction problem of great interest for this system is how to select optimal single guide RNAs (sgRNAs) such that their cleavage efficiency is high while the off-target effect is low. This work proposed a two-step averaging method (TSAM) for the regression of cleavage efficiencies of a set of sgRNAs by averaging the predicted efficiency scores of a boosting algorithm and those of a support vector machine (SVM). We also proposed to use profiled Markov properties as novel features to capture the global characteristics of sgRNAs. These new features are combined with the outstanding features ranked by the boosting algorithm for the training of the SVM regressor. TSAM improved the mean Spearman correlation coefficients compared with the state-of-the-art performance on benchmark datasets containing thousands of human, mouse and zebrafish sgRNAs. Our method can also be converted to make binary distinctions between efficient and inefficient sgRNAs, with performance superior to that of existing methods. The analysis reveals that highly efficient sgRNAs have a lower melting temperature at the middle of the spacer, cut at parts of the genome closer to the 5'-end, and contain more 'A' but fewer 'G' compared with inefficient ones. Comprehensive further analysis also demonstrates that our tool can predict an sgRNA's cutting efficiency with consistently good performance whether it is expressed from a U6 promoter in cells or from a T7 promoter in vitro. An online tool is available at http://www.aai-bioinfo.com/CRISPR/. Python and Matlab source codes are freely available at https://github.com/penn-hui/TSAM. Contact: Jinyan.Li@uts.edu.au. Supplementary data are available at Bioinformatics online.
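    The two-step averaging idea itself is simple to sketch: train a boosting regressor and an SVM regressor on the same feature matrix and average their predicted efficiency scores. The scikit-learn sketch below uses synthetic stand-in features; it is not the authors' pipeline, which builds spacer-sequence and Markov-profile features and tunes both learners.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.svm import SVR

# Synthetic stand-ins for sgRNA feature vectors and cleavage efficiencies.
rng = np.random.default_rng(0)
X = rng.random((200, 8))
y = 2.0 * X[:, 0] + np.sin(3.0 * X[:, 1]) + rng.normal(0, 0.05, 200)

X_train, y_train, X_test = X[:150], y[:150], X[150:]
gbr = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
svr = SVR(kernel="rbf", C=10.0).fit(X_train, y_train)

# Two-step averaging: the final score is the mean of the two regressors.
pred = 0.5 * (gbr.predict(X_test) + svr.predict(X_test))
```

    Averaging two regressors with different inductive biases often stabilizes ranking metrics such as the Spearman correlation the paper reports.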

  5. Identification and Characterization of Three Orchid MADS-Box Genes of the AP1/AGL9 Subfamily during Floral Transition

    PubMed Central

    Yu, Hao; Goh, Chong Jin

    2000-01-01

    Gene expression associated with in vitro floral transition in an orchid hybrid (Dendrobium grex Madame Thong-In) was investigated by differential display. One clone, orchid transitional growth related gene 7 (otg7), encoding a new MADS-box gene, was identified as specifically expressed in the transitional shoot apical meristem (TSAM). Using this clone as a probe, three orchid MADS-box genes, DOMADS1, DOMADS2, and DOMADS3, were subsequently isolated from the TSAM cDNA library. Phylogenetic analyses show that DOMADS1 and DOMADS2 are new members of the AGL2 subfamily and SQUA subfamily, respectively. DOMADS3 contains the signature amino acids shared by members of the independent OSMADS1 subfamily, which is separated from the AGL2 subfamily. All three DOMADS genes were expressed in the TSAM during floral transition and later in mature flowers. DOMADS1 RNA was uniformly expressed in both the inflorescence meristem and the floral primordium and was later localized in all of the floral organs. DOMADS2 showed a novel expression pattern that has not been previously characterized for any other MADS-box gene. DOMADS2 transcript was expressed as early as the 6-week-old vegetative shoot apical meristem, in which obvious morphological change toward floral development had yet to occur. It was expressed throughout the process of floral transition and later in the columns of mature flowers. The onset of DOMADS3 transcription was in the early TSAM, at the stage before the differentiation of the first flower primordium. Later, DOMADS3 transcript was detectable only in the pedicel tissues. Our results suggest that the DOMADS genes play important roles in the process of floral transition. PMID:10938351

  6. Meeting Air Transportation Demand in 2025 by Using Larger Aircraft and Alternative Routing to Complement NextGen Operational Improvements

    NASA Technical Reports Server (NTRS)

    Smith, Jeremy C.; Guerreiro, Nelson M.; Viken, Jeffrey K.; Dollyhigh, Samuel M.; Fenbert, James W.

    2010-01-01

    A study was performed that investigates the use of larger aircraft and alternative routing to complement the capacity benefits expected from the Next Generation Air Transportation System (NextGen) in 2025. National Airspace System (NAS) delays for the 2025 demand projected by the Transportation Systems Analysis Model (TSAM) were assessed using NASA's Airspace Concept Evaluation System (ACES). The shift in demand from commercial airline to automobile and from one airline route to another was investigated by adding the route delays determined from the ACES simulation to the travel times used in the TSAM and re-generating new flight scenarios. The ACES simulation results from this study determined that NextGen Operational Improvements alone do not provide sufficient airport capacity to meet the projected demand for passenger air travel in 2025 without significant system delays. Using larger aircraft with more seats on high-demand routes and introducing new direct routes, where demand warrants, significantly reduces delays, complementing NextGen improvements. Another significant finding of this study is that the adaptive behavior of passengers in avoiding congested airline routes is an important factor when projecting demand for transportation systems. Passengers will choose an alternative mode of transportation or alternative airline routes to avoid congested routes, thereby reducing delays to acceptable levels for the 2025 scenario; the penalty is that alternative routes and the option to drive increase overall trip time by 0.4% and may be less convenient than the first-choice route.

  7. Methyl galbanate, a novel inhibitor of nitric oxide production in mouse macrophage RAW264.7 cells.

    PubMed

    Kohno, Susumu; Murata, Tomiyasu; Sugiura, Ayumi; Ito, Chihiro; Iranshahi, Mehrdad; Hikita, Kiyomi; Kaneda, Norio

    2011-04-01

    It is well known that inflammation is associated with various neurodegenerative diseases, such as Parkinson's disease and Alzheimer's disease. An inflammatory mediator, nitric oxide (NO), is produced by inducible NO synthase (iNOS) in microglia and seems to be one of the possible causes of neurodegeneration. Several natural and synthetic compounds which exert anti-inflammatory effects by inhibiting NO production have been reported to date. The aim of this work was to investigate whether any of the 6 terpenoid coumarins (methyl galbanate, galbanic acid, farnesiferol A, badrakemone, umbelliprenin, and aurapten) isolated from Ferula szowitsiana DC. have inhibitory activity against NO production in RAW264.7 mouse macrophage cells stimulated with lipopolysaccharide (LPS) and interferon-γ (IFN-γ). Of the 6 terpenoid coumarins tested, methyl galbanate significantly decreased NO production in LPS/IFN-γ-stimulated RAW264.7 cells. In the presence of methyl galbanate, LPS/IFN-γ-induced iNOS mRNA expression was significantly decreased to 52% of the level found with LPS/IFN-γ stimulation alone. Methyl galbanate slightly attenuated COX-2 mRNA expression. Using the RAW264.7-tsAM5NE co-culture system, we showed that methyl galbanate protected neuronally differentiated tsAM5NE cells from NO-induced cell death by inhibiting the production of NO. Our finding suggests that methyl galbanate may be useful for developing a new drug against neurodegenerative diseases.

  8. Projected Demand and Potential Impacts to the National Airspace System of Autonomous, Electric, On-Demand Small Aircraft

    NASA Technical Reports Server (NTRS)

    Smith, Jeremy C.; Viken, Jeffrey K.; Guerreiro, Nelson M.; Dollyhigh, Samuel M.; Fenbert, James W.; Hartman, Christopher L.; Kwa, Teck-Seng; Moore, Mark D.

    2012-01-01

    Electric propulsion and autonomy are technology frontiers that offer tremendous potential to achieve low operating costs for small aircraft. Such technologies enable vehicles that are simple and safe to operate and that could dramatically improve regional transportation accessibility and speed through point-to-point operations. This analysis develops an understanding of the potential traffic volume and National Airspace System (NAS) capacity for small on-demand aircraft operations. Future demand projections use the Transportation Systems Analysis Model (TSAM), a tool suite developed by NASA and the Transportation Laboratory of Virginia Polytechnic Institute. Demand projections from TSAM contain the mode of travel, number of trips, and geographic distribution of trips. For this study, the mode of travel can be commercial aircraft, automobile, or on-demand aircraft. NASA's Airspace Concept Evaluation System (ACES) is used to assess NAS impact. This simulation takes a schedule that includes all flights - commercial passenger and cargo, conventional General Aviation, and on-demand small aircraft - and operates them in the simulated NAS. The results of this analysis project very large trip numbers for an on-demand air transportation system competitive with automobiles in cost per passenger mile. The significance is that this type of air transportation can enhance mobility for communities that currently lack access to commercial air transportation. Another significant finding is that the large numbers of operations can have an impact on the current NAS infrastructure used by commercial airlines and cargo operators, even if on-demand traffic does not use the 28 airports in the Continental U.S. designated as large hubs by the FAA. Some smaller airports will experience greater demand than their current capacity allows and will require upgrading. 
In addition, in future years, as demand grows and vehicle performance improves, other non-conventional facilities, such as short runways incorporated into shopping-mall or transportation-hub parking areas, could provide additional capacity and convenience.

  9. Identification and Analysis of National Airspace System Resource Constraints

    NASA Technical Reports Server (NTRS)

    Smith, Jeremy C.; Marien, Ty V.; Viken, Jeffery K.; Neitzke, Kurt W.; Kwa, Tech-Seng; Dollyhigh, Samuel M.; Fenbert, James W.; Hinze, Nicolas K.

    2015-01-01

    This analysis is the deliverable for the Airspace Systems Program, Systems Analysis Integration and Evaluation Project Milestone for the Systems and Portfolio Analysis (SPA) focus area SPA.4.06, Identification and Analysis of National Airspace System (NAS) Resource Constraints and Mitigation Strategies: "Identify choke points in the current and future NAS. Choke points refer to any areas in the en route, terminal, oceanic, airport, and surface operations that constrain actual demand in current and projected future operations. Use the Common Scenarios based on Transportation Systems Analysis Model (TSAM) projections of future demand developed under SPA.4.04 Tools, Methods and Scenarios Development. Analyze causes, including operational and physical constraints." The NASA analysis is complementary to a NASA Research Announcement (NRA), "Development of Tools and Analysis to Evaluate Choke Points in the National Airspace System," Contract # NNA3AB95C, awarded to Logistics Management Institute, Sept. 2013.

  10. Sympatry influence in the interaction of Trypanosoma cruzi with triatomine.

    PubMed

    Dworak, Elaine Schultz; Araújo, Silvana Marques de; Gomes, Mônica Lúcia; Massago, Miyoko; Ferreira, Érika Cristina; Toledo, Max Jean de Ornelas

    2017-01-01

    Trypanosoma cruzi, the etiologic agent of Chagas disease, is widely distributed in nature, circulating between triatomine bugs and sylvatic mammals, and has large genetic diversity. Both the vector species and the genetic lineages of T. cruzi present a varied geographical distribution. This study aimed to verify the influence of sympatry on the interaction of T. cruzi with triatomines. The behavior of the strains PR2256 (T. cruzi II) and AM14 (T. cruzi IV) was studied in Triatoma sordida (TS) and Rhodnius robustus (RR). Eleven fifth-stage nymphs were fed by artificial xenodiagnosis with 5.6 × 10³ blood trypomastigotes/0.1 mL of each T. cruzi strain. Every 20 days, their excreta were examined for up to 100 days, and every 30 days, the intestinal content was examined for up to 120 days, by parasitological (fresh examination and differential count with Giemsa-stained smears) and molecular (PCR) methods. Rates of infectivity, metacyclogenesis and mortality, and the mean number of parasites per insect and of excreted parasites were determined. The sympatric groups RR+AM14 and TS+PR2256 showed higher values for the four parameters, except for mortality rate, which was higher (27.3%) in the TS+AM14 group. General infectivity was 72.7%, mainly proven by PCR, in the following decreasing order: RR+AM14 (100%), TS+PR2256 (81.8%), RR+PR2256 (72.7%) and TS+AM14 (36.4%). Our working hypothesis was confirmed, as higher infectivity and vector capacity (flagellate production and elimination of infective metacyclic forms) were recorded in the groups that contained sympatric T. cruzi lineages and triatomine species.

  11. Understanding Air Transportation Market Dynamics Using a Search Algorithm for Calibrating Travel Demand and Price

    NASA Technical Reports Server (NTRS)

    Kumar, Vivek; Horio, Brant M.; DeCicco, Anthony H.; Hasan, Shahab; Stouffer, Virginia L.; Smith, Jeremy C.; Guerreiro, Nelson M.

    2015-01-01

    This paper presents a search-algorithm-based framework to calibrate origin-destination (O-D) market-specific airline ticket demands and prices for the Air Transportation System (ATS). This framework is used for calibrating an agent-based model of the air ticket buy-sell process, the Airline Evolutionary Simulation (Airline EVOS), which has fidelity of detail that accounts for airline and consumer behaviors and the interdependencies they share between themselves and the NAS. More specifically, the algorithm simultaneously calibrates demand and airfares for each O-D market to within a specified threshold of a pre-specified target value. The proposed algorithm is illustrated with market data targets provided by the Transportation Systems Analysis Model (TSAM) and the Airline Origin and Destination Survey (DB1B). Although we specify these models and data sources for this calibration exercise, the methods described in this paper are applicable to calibrating any low-level model of the ATS to data from some other demand forecast model. We argue that using a calibration algorithm such as the one we present here to synchronize ATS models with specialized demand forecast models is a powerful tool for establishing credible baseline conditions in experiments analyzing the effects of proposed policy changes to the ATS.
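    The core of such a calibration search can be illustrated for a single market as a fixed-point iteration: adjust the fare until the low-level model's simulated demand lands within a threshold of the target (e.g., a TSAM projection). The demand function, elasticity values, and numbers below are hypothetical placeholders, not Airline EVOS behavior.

```python
def calibrate_fare(target_demand, demand_fn, fare,
                   assumed_elasticity=-1.0, tol=1e-6, max_iter=200):
    """Iteratively move the fare until simulated demand matches the target.
    Uses a constant-elasticity update rule; the true demand curve only
    needs to be downward-sloping for the iteration to home in."""
    for _ in range(max_iter):
        d = demand_fn(fare)
        if abs(d - target_demand) / target_demand < tol:
            break
        # if demand is above target, this factor is > 1 and raises the fare
        fare *= (target_demand / d) ** (1.0 / assumed_elasticity)
    return fare, d

# Hypothetical O-D market whose true elasticity (-1.2) differs from the
# assumed one (-1.0), so several iterations are needed to converge.
simulated = lambda f: 5.0e6 * f ** -1.2
fare, demand = calibrate_fare(2.0e4, simulated, fare=150.0)
```

    Running many such per-market searches in parallel, with demand and fare coupled across markets, is what makes the full problem require the framework the paper describes.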

  12. An Integrated Framework for Modeling Air Carrier Behavior, Policy, and Impacts in the U.S. Air Transportation System

    NASA Technical Reports Server (NTRS)

    Horio, Brant M.; Kumar, Vivek; DeCicco, Anthony H.; Hasan, Shahab; Stouffer, Virginia L.; Smith, Jeremy C.; Guerreiro, Nelson M.

    2015-01-01

    The implementation of the Next Generation Air Transportation System (NextGen) in the United States is an ongoing challenge for policymakers due to the complexity of the air transportation system (ATS), with its broad array of stakeholders and the dynamic interdependencies between them. The successful implementation of NextGen has a hard dependency on the active participation of U.S. commercial airlines. To assist policymakers in identifying potential policy designs that facilitate the implementation of NextGen, the National Aeronautics and Space Administration (NASA) and LMI developed a research framework called the Air Transportation System Evolutionary Simulation (ATS-EVOS). This framework integrates large empirical data sets with multiple specialized models to simulate the evolution of the airline response to potential future policies and explore consequential impacts on ATS performance and market dynamics. In the ATS-EVOS configuration presented here, we leverage the Transportation Systems Analysis Model (TSAM), the Airline Evolutionary Simulation (AIRLINE-EVOS), the Airspace Concept Evaluation System (ACES), and the Aviation Environmental Design Tool (AEDT), all of which enable this research to comprehensively represent the complex facets of the ATS and its participants. We validated this baseline configuration of ATS-EVOS against Airline Origin and Destination Survey (DB1B) data and subject matter expert opinion, and we verified the ATS-EVOS framework and agent behavior logic through scenario-based experiments that explored potential implementations of a carbon tax, a congestion pricing policy, and the dynamics of equipage of new technology by airlines. These experiments demonstrated ATS-EVOS's capability to respond to a wide range of potential NextGen-related policies and its utility to decision makers in gaining insights for effective policy design.

  13. Methylene blue not ferrocene: Optimal reporters for electrochemical detection of protease activity.

    PubMed

    González-Fernández, Eva; Avlonitis, Nicolaos; Murray, Alan F; Mount, Andrew R; Bradley, Mark

    2016-10-15

    Electrochemical peptide-based biosensors are attracting significant attention for the detection and analysis of proteins. Here we report the optimisation and evaluation of an electrochemical biosensor for the detection of protease activity using self-assembled monolayers (SAMs) on gold surfaces, with trypsin as a model protease. The principle of detection was the specific proteolytic cleavage of redox-tagged peptides by trypsin, which causes the release of the redox reporter, resulting in a decrease of the peak current as measured by square wave voltammetry. A systematic enhancement of detection was achieved through optimisation of the properties of the redox-tagged peptide; this included, for the first time, a side-by-side study of the applicability of two of the most commonly applied redox reporters used in developing electrochemical biosensors, ferrocene and methylene blue, along with the effects of changing both the nature of the spacer and the composition of the SAM. Methylene blue-tagged peptides combined with a polyethylene-glycol (PEG) based spacer were shown to be the best platform for trypsin detection, leading to the highest-fidelity signals, characterised by the highest sensitivity (signal gain) and a much more stable background than that registered when using ferrocene as a reporter. A ternary SAM (T-SAM) configuration, which included a PEG-based dithiol, minimised the non-specific adsorption of other proteins and was sensitive towards trypsin in the clinically relevant range, with a Limit of Detection (LoD) of 250 pM. Kinetic analysis of the electrochemical response with time showed a good fit to a Michaelis-Menten surface cleavage model, enabling the extraction of values for kcat and KM. Fitting to this model enabled quantitative determination of the solution concentration of trypsin across the entire measurement range. 
Studies using an enzyme inhibitor and a range of possible real-world interferents demonstrated a selective response to trypsin cleavage. This indicates that a PEG-based peptide, employing methylene blue as the redox reporter and deposited on an electrode in a ternary SAM configuration, is a suitable platform for developing clinically relevant, quantitative electrochemical peptide-based protease biosensing.
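    The Michaelis-Menten analysis described above can be reproduced in outline by a nonlinear least-squares fit of v = Vmax·[S]/(KM + [S]) to concentration-response data. The data below are synthetic placeholders with arbitrary units, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def mm_rate(conc, vmax, km):
    # Michaelis-Menten: v = Vmax * [S] / (Km + [S])
    return vmax * conc / (km + conc)

# Synthetic trypsin-concentration vs. initial-rate data (illustrative units),
# generated from known parameters plus 2% multiplicative noise.
conc = np.array([0.25, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
true_vmax, true_km = 3.2, 4.0
rng = np.random.default_rng(1)
rate = mm_rate(conc, true_vmax, true_km) * (1 + rng.normal(0, 0.02, conc.size))

# Recover Vmax and Km by nonlinear least squares.
(vmax_fit, km_fit), _ = curve_fit(mm_rate, conc, rate, p0=(1.0, 1.0))
```

    With the fitted curve in hand, an unknown analyte concentration can be read back from a measured signal, which is how fitting to the model enables quantitative determination across the measurement range.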

  14. Glacial morphology in the Chinese Pamir: Connections among climate, erosion, topography, lithology and exhumation

    NASA Astrophysics Data System (ADS)

    Schoenbohm, Lindsay M.; Chen, Jie; Stutz, Jamey; Sobel, Edward R.; Thiede, Rasmus C.; Kirby, Benjamin; Strecker, Manfred R.

    2014-09-01

    Modification of the landscape by glacial erosion reflects the dynamic interplay of climate, through temperature, precipitation, and prevailing wind direction, and tectonics, through rock uplift and exhumation rate, lithology, and range and fault geometry. We investigate these relationships in the northeast Pamir Mountains using mapping and dating of moraines and terraces to determine the glacial history. We analyze modern glacial morphology to determine glacier area, spacing, headwall relief, debris cover, and equilibrium line altitude (ELA) using the area-altitude balance ratio (AABR), toe-to-headwall altitude ratio (THAR) and toe-to-summit altitude method (TSAM) for 156 glaciers, and compare this to lithologic, tectonic, and climatic data. We observe a pronounced asymmetry in glacial ELA, area, debris cover, and headwall relief that we interpret to reflect both structural and climatic control: glaciers on the downwind (eastern) side of the range are larger, more debris covered, have steeper headwalls, and tend to erode headward, truncating the smaller glaciers of the upwind, fault-controlled side of the range. We explain this by the transfer of moisture deep into the range as wind-blown or avalanched snow, and by limitations imposed on glacial area on the upwind side of the range by the geometry of the Kongur extensional system (KES). The correspondence between rapid exhumation along the KES and maxima in glacier debris cover and headwall relief and minima in all measures of ELA suggests that taller glacier headwalls develop in response to more rapid exhumation rates. However, we find that glaciers in the Muji valley did not extend beyond the range front until at least 43 ka, in contrast to extensive glaciation since 300 ka in the south around the high peaks, a pattern which does not clearly reflect uplift rate. 
Instead, the difference in glacial history and the presence of large peaks (Muztagh Ata and Kongur Shan) with flanking glaciers likely reflect lithologic control (i.e., the location of crustal gneiss domes) and the formation of peaks that rise above the ELA and escape the glacial buzzsaw.

  15. Calculation of former ELA depressions in the Himalaya - a comparative analysis

    NASA Astrophysics Data System (ADS)

    Wagner, M.

    2009-04-01

    For the reconstruction of former Equilibrium Line Altitudes (ELA) and ELA depressions in the Himalaya, the group of Toe-to-Summit-Altitude Methods (TSAM) is best suited. In this investigation the Kuhle (1986) method, which is particularly tailored to extreme high-mountain relief, as well as the widely used Höfer (1879) and Louis (1954/55) methods, have been applied. By applying the relief-specific correction factor FSD (factor for snowline deviation) in the Kuhle method, it is possible to simulate the shifting position of the ELA within the vertical extent of the glacier as a function of relief characteristics and glacier type. The results of this work, carried out along the Kali Gandaki in central Nepal, illustrate that, as a rule, the Louis method yields the highest ELAs and the lowest ELA depressions, while the Höfer method yields the lowest ELAs and the highest ELA depressions. In agreement with the literature, the Louis method tends to overestimate the ELA, since using the maximum peak height, especially for large glaciers in mountain ranges with high relief energy, places the upper limit of the glacier too high. With respect to the Höfer method, a suspicion already voiced by Höfer (1879) himself can be confirmed: the ridge crests typical of the Himalaya, which lie at high elevation and descend toward the valley at only a slight gradient, lead to an ELA that is too low. The statement of Gross et al. (1976) that the Höfer method leads to an overestimation of the ELA, however, must clearly be disputed. The reason for that statement lies in an erroneous computation, within the Höfer method, of the mean ridge height above the ELA and consequently of the ELA itself, based on the mistaken assumption that the ELA could not otherwise be calculated without circular reasoning (Gross et al. 1976). 
As this study shows, the Kuhle method mediates between the empirically too-high values of the Louis method and the too-low values of the Höfer method, because of its intermediate definition of the upper limit of the accumulation zone. Additionally, through the FSD, the Kuhle method allows a high degree of adaptation to the extreme Himalayan relief and, within limits, to changes in the relief configuration arising from the transverse-valley character of the Kali Gandaki. The results of the Kuhle method can therefore be affirmed as showing the greatest conformity with the actual values of the ELA and the ELA depression. References: Gross, G., H. Kerschner, G. Patzelt (1976): Methodische Untersuchungen über die Schneegrenze in alpinen Gletschergebieten. Zeitschrift für Gletscherkunde und Glazialgeologie 12 (2): 223-251. Höfer von Heimhalt, H. (1879): Gletscher- und Eiszeitstudien. Sitzungsberichte der Akademie der Wissenschaften in Wien, Mathematisch-Naturwissenschaftliche Klasse, Abteilung I, Biologie, Mineralogie, Erdkunde 79: 331-367. Kuhle, M. (1986): Schneegrenzberechnung und typologische Klassifikation von Gletschern anhand spezifischer Reliefparameter. Petermanns Geographische Mitteilungen 130: 41-51. Louis, H. (1954/55): Schneegrenze und Schneegrenzbestimmung. Geographisches Taschenbuch 1954/55: 414-418.
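    The contrast between the methods can be made concrete with their simplified defining formulas: the Louis method takes the midpoint between the glacier toe and the highest summit, while the Höfer method takes the midpoint between the toe and the mean ridge height, which is why Louis yields the higher ELA whenever the summit stands above the mean ridge. The sketch below uses illustrative elevations, not measured values, and omits the FSD-corrected Kuhle calculation, which is more involved.

```python
def ela_louis(toe_m, summit_m):
    # Louis (1954/55): midpoint between glacier toe and highest summit
    return (toe_m + summit_m) / 2.0

def ela_hoefer(toe_m, mean_ridge_m):
    # Höfer (1879): midpoint between glacier toe and mean ridge height
    return (toe_m + mean_ridge_m) / 2.0

# Illustrative Himalayan-scale elevations in m a.s.l. (not measured data):
# a former glacier toe well below today's, under a high summit.
former_toe, mean_ridge, summit = 2500.0, 6100.0, 8100.0
modern_ela = 5600.0

former_ela_louis = ela_louis(former_toe, summit)      # 5300.0 m
former_ela_hoefer = ela_hoefer(former_toe, mean_ridge)  # 4300.0 m

# ELA depression = modern ELA minus former ELA: the higher Louis estimate
# of the former ELA gives the smaller depression, as the abstract states.
depression_louis = modern_ela - former_ela_louis      # 300.0 m
depression_hoefer = modern_ela - former_ela_hoefer    # 1300.0 m
```

    In high-relief terrain the gap between summit and mean ridge height is large, so the two methods can disagree by many hundreds of metres, which motivates the relief-adapted Kuhle approach.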

  16. An Analysis Technique/Automated Tool for Comparing and Tracking Analysis Modes of Different Finite Element Models

    NASA Technical Reports Server (NTRS)

    Towner, Robert L.; Band, Jonathan L.

    2012-01-01

    An analysis technique was developed to compare and track mode shapes for different Finite Element Models. The technique may be applied to a variety of structural dynamics analyses, including model reduction validation (comparing unreduced and reduced models), mode tracking for various parametric analyses (e.g., launch vehicle model dispersion analysis to identify sensitivities to modal gain for Guidance, Navigation, and Control), comparing models of different mesh fidelity (e.g., a coarse model for a preliminary analysis compared to a higher-fidelity model for a detailed analysis) and mode tracking for a structure with properties that change over time (e.g., a launch vehicle from liftoff through end-of-burn, with propellant being expended during the flight). Mode shapes for different models are compared and tracked using several numerical indicators, including traditional Cross-Orthogonality and Modal Assurance Criteria approaches, as well as numerical indicators obtained by comparing modal strain energy and kinetic energy distributions. This analysis technique has been used to reliably identify correlated mode shapes for complex Finite Element Models that would otherwise be difficult to compare using traditional techniques. This improved approach also utilizes an adaptive mode tracking algorithm that allows for automated tracking when working with complex models and/or comparing a large group of models.
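
    The Modal Assurance Criterion used here has a standard closed form, MAC(i, j) = |phi_i^T phi_j|^2 / ((phi_i^T phi_i)(phi_j^T phi_j)). A minimal numpy sketch of MAC-based mode tracking follows; it is not the authors' tool (which also uses mass-weighted Cross-Orthogonality and strain/kinetic energy distributions), and the function names and toy mode shapes are illustrative only.

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two mode-shape matrices.
    phi_a: (n_dof, n_modes_a), phi_b: (n_dof, n_modes_b).
    Returns an (n_modes_a, n_modes_b) matrix of values in [0, 1]."""
    num = np.abs(phi_a.T @ phi_b) ** 2
    den = np.outer(np.sum(phi_a * phi_a, axis=0), np.sum(phi_b * phi_b, axis=0))
    return num / den

def track_modes(phi_ref, phi_new):
    """Pair each reference mode with the new-model mode of highest MAC."""
    return np.argmax(mac(phi_ref, phi_new), axis=1)

# Toy example: model B has the same two modes as model A, but in swapped order.
phi_a = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, -1.0]])
phi_b = phi_a[:, ::-1]
print(track_modes(phi_a, phi_b))   # -> [1 0]
```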

  17. Comparative Protein Structure Modeling Using MODELLER

    PubMed Central

    Webb, Benjamin; Sali, Andrej

    2016-01-01

    Comparative protein structure modeling predicts the three-dimensional structure of a given protein sequence (target) based primarily on its alignment to one or more proteins of known structure (templates). The prediction process consists of fold assignment, target-template alignment, model building, and model evaluation. This unit describes how to calculate comparative models using the program MODELLER and how to use the ModBase database of such models, and discusses all four steps of comparative modeling, frequently observed errors, and some applications. Modeling lactate dehydrogenase from Trichomonas vaginalis (TvLDH) is described as an example. The download and installation of the MODELLER software is also described. PMID:27322406

  18. Comparative Protein Structure Modeling Using MODELLER.

    PubMed

    Webb, Benjamin; Sali, Andrej

    2014-09-08

    Functional characterization of a protein sequence is one of the most frequent problems in biology. This task is usually facilitated by accurate three-dimensional (3-D) structure of the studied protein. In the absence of an experimentally determined structure, comparative or homology modeling can sometimes provide a useful 3-D model for a protein that is related to at least one known protein structure. Comparative modeling predicts the 3-D structure of a given protein sequence (target) based primarily on its alignment to one or more proteins of known structure (templates). The prediction process consists of fold assignment, target-template alignment, model building, and model evaluation. This unit describes how to calculate comparative models using the program MODELLER and discusses all four steps of comparative modeling, frequently observed errors, and some applications. Modeling lactate dehydrogenase from Trichomonas vaginalis (TvLDH) is described as an example. The download and installation of the MODELLER software is also described. Copyright © 2014 John Wiley & Sons, Inc.

  19. Comprehensive stroke units: a review of comparative evidence and experience.

    PubMed

    Chan, Daniel K Y; Cordato, Dennis; O'Rourke, Fintan; Chan, Daniel L; Pollack, Michael; Middleton, Sandy; Levi, Chris

    2013-06-01

    Stroke unit care offers significant benefits in survival and dependency when compared to care on a general medical ward. Most stroke units are either acute or rehabilitation units, but the comprehensive (combined acute and rehabilitation) model, the comprehensive stroke unit, is less common. The aims were to examine the different levels of evidence for comprehensive stroke units compared to other forms of organized inpatient stroke care, and to share local experience with comprehensive stroke units. The Cochrane Library and Medline (1980 to December 2010) were searched for English-language articles comparing stroke units to alternative forms of stroke care delivery, comparing different types of stroke unit models, and examining differences in processes of care within different stroke unit models. Different levels of comparative evidence for comprehensive stroke units relative to other models of stroke units were collected. There are no randomized controlled trials directly comparing comprehensive stroke units to other stroke unit models (either acute or rehabilitation). In a meta-analysis, comprehensive stroke units were associated with reduced length of stay and the greatest reduction in combined death and dependency when compared to other stroke unit models. Comprehensive stroke units also showed shorter length of stay and better functional outcome than acute or rehabilitation stroke unit models in a cross-sectional study, and shorter length of stay in a 'before-and-after' comparative study. The components of stroke unit care that improve outcome are multifactorial and most probably include early mobilization. A comprehensive stroke unit model has been successfully implemented in metropolitan and rural hospital settings. Comprehensive stroke units are associated with reductions in length of stay and in combined death and dependency, and with improved functional outcomes, compared to other stroke unit models. A comprehensive stroke unit model is worth considering as the preferred model of stroke unit care in the planning and delivery of metropolitan and rural stroke services. © 2012 The Authors. International Journal of Stroke © 2012 World Stroke Organization.

  20. Population Pharmacokinetic and Pharmacodynamic Model-Based Comparability Assessment of a Recombinant Human Epoetin Alfa and the Biosimilar HX575

    PubMed Central

    Yan, Xiaoyu; Lowe, Philip J.; Fink, Martin; Berghout, Alexander; Balser, Sigrid; Krzyzanski, Wojciech

    2012-01-01

    The aim of this study was to develop an integrated pharmacokinetic and pharmacodynamic (PK/PD) model and to assess the comparability between epoetin alfa HEXAL/Binocrit (HX575) and a comparator epoetin alfa by a model-based approach. PK/PD data—including serum drug concentrations, reticulocyte counts, red blood cell counts, and hemoglobin levels—were obtained from 2 clinical studies, in which a total of 149 healthy men received multiple intravenous or subcutaneous doses of HX575 (100 IU/kg) and the comparator 3 times a week for 4 weeks. A population model based on pharmacodynamics-mediated drug disposition and cell maturation processes was used to characterize the PK/PD data for the 2 drugs. Simulations showed that, due to target amount changes, total clearance may increase up to 2.4-fold compared with baseline. Further simulations suggested that once-weekly and thrice-weekly subcutaneous dosing regimens would result in similar efficacy. The findings from the model-based analysis were consistent with previous results using the standard noncompartmental approach, demonstrating PK/PD comparability between HX575 and the comparator. However, due to the complexity of the PK/PD model, control of random effects was not straightforward. Whereas population PK/PD model-based analyses are well suited to studying complex biological systems, such models have statistical limitations, and their comparability results should be interpreted carefully. PMID:22162538
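
    Cell maturation in such PK/PD models is commonly represented by a chain of transit compartments, each delaying the signal by a mean transit time of 1/k_tr. The fragment below is a generic, hypothetical sketch of that idea (forward-Euler integration of a catenary chain), not the published HX575 model; all rate constants are invented.

```python
import numpy as np

def simulate_transit(k_in, k_tr, n_compartments=4, t_end=600.0, dt=0.1):
    """Forward-Euler integration of a catenary cell-maturation chain:
    a constant production rate k_in feeds n transit compartments that
    each empty with first-order rate constant k_tr. Returns the time
    course of the last (circulating) compartment."""
    n_steps = int(t_end / dt)
    a = np.zeros(n_compartments)
    history = np.zeros(n_steps)
    for i in range(n_steps):
        inflow = k_in
        for j in range(n_compartments):
            da = inflow - k_tr * a[j]     # balance using the pre-update amount
            inflow = k_tr * a[j]          # outflow feeds the next compartment
            a[j] += dt * da
        history[i] = a[-1]
    return history

resp = simulate_transit(k_in=1.0, k_tr=0.05)
print(round(resp[-1], 2))   # approaches the steady state k_in / k_tr
```

    The chain delays and smooths the response of the last compartment, which is how maturation of reticulocytes into red blood cells is typically captured.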

  1. Assessing intrinsic and specific vulnerability models ability to indicate groundwater vulnerability to groups of similar pesticides: A comparative study

    USGS Publications Warehouse

    Douglas, Steven; Dixon, Barnali; Griffin, Dale W.

    2018-01-01

    With continued population growth and increasing use of fresh groundwater resources, protection of this valuable resource is critical. A cost-effective means to assess the risk of groundwater contamination will provide a useful tool to protect these resources, and integrating geospatial methods offers a way to quantify that risk in cost-effective and spatially explicit ways. This research was designed to compare the ability of intrinsic (DRASTIC) and specific (Attenuation Factor; AF) vulnerability models to indicate vulnerable groundwater areas by comparing model results to the presence of pesticides in groundwater sample datasets. A logistic regression was used to assess the relationship between the environmental variables and the presence or absence of pesticides within regions of varying vulnerability. According to the DRASTIC model, more than 20% of the study area is very highly vulnerable; approximately 30% is very highly vulnerable according to the AF model. When groundwater concentrations of individual pesticides were compared to model predictions, the results were mixed. Model predictability improved when concentrations of the group of similar pesticides were compared to model results. Compared to the DRASTIC model, the AF model more accurately predicts the distribution of the number of contaminated wells within each vulnerability class.
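
    The intrinsic DRASTIC index referred to above is a weighted sum of seven rated hydrogeologic factors. The sketch below uses the standard published DRASTIC weights; the per-cell ratings are hypothetical values chosen only to illustrate the arithmetic.

```python
# Standard DRASTIC weights; each factor is rated 1-10 for a given map cell.
WEIGHTS = {
    "D": 5,  # Depth to water
    "R": 4,  # net Recharge
    "A": 3,  # Aquifer media
    "S": 2,  # Soil media
    "T": 1,  # Topography (slope)
    "I": 5,  # Impact of the vadose zone
    "C": 3,  # hydraulic Conductivity
}

def drastic_index(ratings):
    """Weighted sum of the seven DRASTIC factor ratings for one cell."""
    return sum(WEIGHTS[k] * ratings[k] for k in WEIGHTS)

# Hypothetical ratings for a single grid cell:
cell = {"D": 9, "R": 8, "A": 6, "S": 5, "T": 10, "I": 8, "C": 4}
print(drastic_index(cell))   # -> 167
```

    The index ranges from 23 (all ratings 1) to 230 (all ratings 10); cells are then binned into vulnerability classes such as the "very highly vulnerable" class discussed in the abstract.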

  2. Comparative Analysis of River Flow Modelling by Using Supervised Learning Technique

    NASA Astrophysics Data System (ADS)

    Ismail, Shuhaida; Mohamad Pandiahi, Siraj; Shabri, Ani; Mustapha, Aida

    2018-04-01

    The goal of this research is to investigate the efficiency of three supervised learning algorithms for forecasting the monthly river flow of the Indus River in Pakistan, spread over 550 square miles or 1800 square kilometres. The algorithms are the Least Square Support Vector Machine (LSSVM), Artificial Neural Network (ANN), and Wavelet Regression (WR). The monthly river flow was forecast with each of the three models individually, and the accuracy of the models was then compared. The results were compared and statistically analysed, and this analytical comparison showed that the LSSVM model is the most precise for monthly river flow forecasting: LSSVM achieved the highest r, with a value of 0.934, compared to the other models. This indicates that LSSVM is more accurate and efficient than the ANN and WR models.
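
    Ranking forecasting models by the correlation coefficient r, as done here, can be sketched in a few lines of numpy. The flow values and the "model" predictions below are invented purely for illustration.

```python
import numpy as np

def pearson_r(observed, predicted):
    """Pearson correlation coefficient between observed and predicted flows."""
    o = np.asarray(observed, dtype=float)
    p = np.asarray(predicted, dtype=float)
    return np.corrcoef(o, p)[0, 1]

# Hypothetical monthly river flows and one model's predictions:
obs = np.array([120.0, 340.0, 560.0, 410.0, 220.0])
pred = obs + np.array([5.0, -10.0, 20.0, -15.0, 8.0])   # closely tracking model
print(round(pearson_r(obs, pred), 3))
```

    In a study like this one, the same statistic would be computed for each candidate model (LSSVM, ANN, WR) against the observed series, and the model with the highest r preferred.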

  3. Comparing Internet Probing Methodologies Through an Analysis of Large Dynamic Graphs

    DTIC Science & Technology

    2014-06-01

comparable Internet topologies in less time. We compare these by modeling the union of traceroute outputs as graphs, and study the graphs using standard graph-theoretical measurements, including vertex and edge count and average vertex degree.

  4. Customizing national models for a medical center's population to rapidly identify patients at high risk of 30-day all-cause hospital readmission following a heart failure hospitalization.

    PubMed

    Cox, Zachary L; Lai, Pikki; Lewis, Connie M; Lindenfeld, JoAnn; Collins, Sean P; Lenihan, Daniel J

    2018-05-28

    Nationally derived models predicting 30-day readmissions following heart failure (HF) hospitalizations yield insufficient discrimination for institutional use. The objective was to develop a customized readmission risk model from Medicare-employed and institutionally customized risk factors and to compare its performance against national models in a medical center. Medicare patients aged ≥ 65 years hospitalized for HF (n = 1,454) were studied in a derivation cohort and in a separate validation cohort (n = 243). All 30-day hospital readmissions were documented. The primary outcome was risk discrimination (c-statistic) compared to national models. The customized model demonstrated improved discrimination (c-statistic 0.72; 95% CI 0.69 - 0.74) compared to national models (c-statistics of 0.60 and 0.61), with a c-statistic of 0.63 in the validation cohort. Compared to national models, the customized model demonstrated superior readmission risk profiling by distinguishing a high-risk (38.3%) from a low-risk (9.4%) quartile. A customized model improved readmission risk discrimination for HF hospitalizations compared to national models. Copyright © 2018 Elsevier Inc. All rights reserved.
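
    The c-statistic used as the primary outcome here is the probability that a randomly chosen patient who was readmitted received a higher predicted risk than a randomly chosen patient who was not (ties counted as half). A brute-force pairwise sketch follows; the risks and outcomes are hypothetical.

```python
def c_statistic(risks, outcomes):
    """Concordance (c-) statistic: the fraction of event/non-event pairs in
    which the event case received the higher predicted risk (ties count 0.5)."""
    events = [r for r, y in zip(risks, outcomes) if y == 1]
    non_events = [r for r, y in zip(risks, outcomes) if y == 0]
    concordant = 0.0
    for e in events:
        for n in non_events:
            if e > n:
                concordant += 1.0
            elif e == n:
                concordant += 0.5
    return concordant / (len(events) * len(non_events))

# Hypothetical predicted 30-day readmission risks and observed outcomes:
risks    = [0.10, 0.40, 0.45, 0.80, 0.20]
outcomes = [0,    1,    0,    1,    0]
print(c_statistic(risks, outcomes))   # 5 of 6 event/non-event pairs concordant
```

    A value of 0.5 corresponds to no discrimination and 1.0 to perfect discrimination; the 0.60-0.72 range reported in the abstract is typical for readmission models.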

  5. A Statistical Test for Comparing Nonnested Covariance Structure Models.

    ERIC Educational Resources Information Center

    Levy, Roy; Hancock, Gregory R.

    While statistical procedures are well known for comparing hierarchically related (nested) covariance structure models, statistical tests for comparing nonhierarchically related (nonnested) models have proven more elusive. While isolated attempts have been made, none exists within the commonly used maximum likelihood estimation framework, thereby…

  6. Experimental characterization of the AFIT neutron facility. Master's thesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lessard, O.J.

    1993-09-01

    AFIT's Neutron Facility was characterized for room-return neutrons using a (252)Cf source and a Bonner sphere spectrometer with three experimental models: the shadow shield model, the Eisenhauer, Schwartz, and Johnson (ESJ) model, and the polynomial model. The free-field fluences at one meter from the ESJ and polynomial models were compared to the equivalent value from the accepted experimental shadow shield model to determine the suitability of the models in the AFIT facility. The polynomial model behaved erratically, as expected, while the ESJ model agreed to within 4.8% of the shadow shield model results for the four-Bonner-sphere calibration. The ratio of total fluence to free-field fluence at one meter for the ESJ model was then compared to the equivalent ratio obtained with the Monte Carlo Neutron-Photon transport code (MCNP), an accepted computational model. The ESJ model agreed to within 6.2% of the MCNP results. AFIT's fluence ratios were compared to equivalent ratios reported by three other neutron facilities, which verified that AFIT's results fit previously published trends based on room volumes. The ESJ model appeared adequate for health physics applications and was chosen for calibration of the AFIT facility. Keywords: Neutron Detector, Bonner Sphere, Neutron Dosimetry, Room Characterization.

  7. Statistical Power of Alternative Structural Models for Comparative Effectiveness Research: Advantages of Modeling Unreliability.

    PubMed

    Coman, Emil N; Iordache, Eugen; Dierker, Lisa; Fifield, Judith; Schensul, Jean J; Suggs, Suzanne; Barbour, Russell

    2014-05-01

    The advantages of modeling the unreliability of outcomes when evaluating the comparative effectiveness of health interventions are illustrated. Adding an action-research intervention component to a regular summer job program for youth was expected to help in preventing risk behaviors. A series of simple two-group alternative structural equation models are compared to test the effect of the intervention on one key attitudinal outcome in terms of model fit and statistical power with Monte Carlo simulations. Some models presuming parameters equal across the intervention and comparison groups were underpowered to detect the intervention effect, yet modeling the unreliability of the outcome measure increased their statistical power and helped in the detection of the hypothesized effect. Comparative Effectiveness Research (CER) could benefit from flexible multi-group alternative structural models organized in decision trees, and modeling unreliability of measures can be of tremendous help for both the fit of statistical models to the data and their statistical power.
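
    Estimating statistical power by Monte Carlo simulation, as done in this study, can be illustrated in a much simpler setting than the authors' structural equation models: a two-group mean comparison tested with a large-sample z-test. Everything below is a hedged toy sketch, not the authors' models.

```python
import numpy as np

def mc_power(n_per_group, effect_size, n_sims=2000, crit_z=1.96, seed=0):
    """Monte Carlo power estimate for detecting a mean difference between two
    groups of unit-variance normal outcomes, using a two-sample z-test."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(effect_size, 1.0, n_per_group)
        se = np.sqrt(control.var(ddof=1) / n_per_group +
                     treated.var(ddof=1) / n_per_group)
        z = (treated.mean() - control.mean()) / se
        if abs(z) > crit_z:
            hits += 1
    return hits / n_sims   # fraction of simulated studies detecting the effect

print(mc_power(100, 0.5))   # medium effect, n = 100 per arm
```

    The same recipe generalizes to the SEM setting: simulate data under the hypothesized model, fit each candidate model, and record how often the intervention effect is detected.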

  8. Comparative evaluation of urban storm water quality models

    NASA Astrophysics Data System (ADS)

    Vaze, J.; Chiew, Francis H. S.

    2003-10-01

    The estimation of urban storm water pollutant loads is required for the development of mitigation and management strategies to minimize impacts to receiving environments. Event pollutant loads are typically estimated using either regression equations or "process-based" water quality models. The relative merit of using regression models compared to process-based models is not clear. A modeling study is carried out here to evaluate the comparative ability of the regression equations and process-based water quality models to estimate event diffuse pollutant loads from impervious surfaces. The results indicate that, once calibrated, both the regression equations and the process-based model can estimate event pollutant loads satisfactorily. In fact, the loads estimated using the regression equation as a function of rainfall intensity and runoff rate are better than the loads estimated using the process-based model. Therefore, if only estimates of event loads are required, regression models should be used because they are simpler and require less data compared to process-based models.
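
    A regression model of event pollutant load as a function of rainfall intensity and runoff rate, of the kind evaluated above, can be sketched with ordinary least squares in numpy. The storm events and loads below are invented for illustration only.

```python
import numpy as np

# Hypothetical storm events: [rainfall intensity (mm/h), runoff rate (L/s)]
X = np.array([[10.0,  50.0],
              [25.0, 120.0],
              [40.0, 200.0],
              [15.0,  70.0],
              [30.0, 150.0]])
loads = np.array([4.2, 10.8, 17.5, 6.1, 13.0])   # event pollutant load (kg)

# Fit load = b0 + b1 * intensity + b2 * runoff by ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, loads, rcond=None)
predicted = A @ coef
print(np.round(predicted, 2))
```

    Once calibrated on monitored events, such a regression needs only rainfall and runoff data to estimate loads for new storms, which is the practical advantage over process-based models noted in the abstract.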

  9. Comparative study of turbulence models in predicting hypersonic inlet flows

    NASA Technical Reports Server (NTRS)

    Kapoor, Kamlesh; Anderson, Bernhard H.; Shaw, Robert J.

    1992-01-01

    A numerical study was conducted to analyze the performance of different turbulence models when applied to the hypersonic NASA P8 inlet. Computational results from the PARC2D code, which solves the full two-dimensional Reynolds-averaged Navier-Stokes equation, were compared with experimental data. The zero-equation models considered for the study were the Baldwin-Lomax model, the Thomas model, and a combination of the Baldwin-Lomax and Thomas models; the two-equation models considered were the Chien model, the Speziale model (both low Reynolds number), and the Launder and Spalding model (high Reynolds number). The Thomas model performed best among the zero-equation models, and predicted good pressure distributions. The Chien and Speziale models compared very well with the experimental data, and performed better than the Thomas model near the walls.

  11. Comparing Supply-Side Specifications in Models of Global Agriculture and the Food System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, Sherman; van Meijl, Hans; Willenbockel, Dirk

    This paper compares the theoretical specification of production and technical change across the partial equilibrium (PE) and computable general equilibrium (CGE) models of the global agricultural and food system included in the AgMIP model comparison study. The two modeling approaches have different theoretical underpinnings concerning the scope of economic activity they capture and how they represent technology and the behavior of supply and demand in markets. This paper focuses on their different specifications of technology and supply behavior, comparing their theoretical and empirical treatments. While the models differ widely in their specifications of technology, both within and between the PE and CGE classes of models, we find that the theoretical responsiveness of supply to changes in prices can be similar, depending on parameter choices that define the behavior of supply functions over the domain of applicability defined by the common scenarios used in the AgMIP comparisons. In particular, we compare the theoretical specification of supply in CGE models with neoclassical production functions and PE models that focus on land and crop yields in agriculture. In practice, however, comparability of results given parameter choices is an empirical question, and the models differ in their sensitivity to variations in specification. To illustrate the issues, sensitivity analysis is done with one global CGE model, MAGNET, to indicate how the results vary with different specifications of technical change, and how they compare with the results from PE models.

  12. Two Models for Implementing Senior Mentor Programs in Academic Medical Settings

    ERIC Educational Resources Information Center

    Corwin, Sara J.; Bates, Tovah; Cohan, Mary; Bragg, Dawn S.; Roberts, Ellen

    2007-01-01

    This paper compares two models of undergraduate geriatric medical education utilizing senior mentoring programs. Descriptive, comparative multiple-case study was employed analyzing program documents, archival records, and focus group data. Themes were compared for similarities and differences between the two program models. Findings indicate that…

  13. How does a three-dimensional continuum muscle model affect the kinematics and muscle strains of a finite element neck model compared to a discrete muscle model in rear-end, frontal, and lateral impacts.

    PubMed

    Hedenstierna, Sofia; Halldin, Peter

    2008-04-15

    A finite element (FE) model of the human neck with incorporated continuum or discrete muscles was used to simulate experimental impacts in rear, frontal, and lateral directions. The aim of this study was to determine how a continuum muscle model influences the impact behavior of a FE human neck model compared with a discrete muscle model. Most FE neck models used for impact analysis today include a spring element musculature and are limited to discrete geometries and nodal output results. A solid-element muscle model was thought to improve the behavior of the model by adding properties such as tissue inertia and compressive stiffness and by improving the geometry. It would also predict the strain distribution within the continuum elements. A passive continuum muscle model with nonlinear viscoelastic materials was incorporated into the KTH neck model together with active spring muscles and used in impact simulations. The resulting head and vertebral kinematics was compared with the results from a discrete muscle model as well as volunteer corridors. The muscle strain prediction was compared between the 2 muscle models. The head and vertebral kinematics were within the volunteer corridors for both models when activated. The continuum model behaved more stiffly than the discrete model and needed less active force to fit the experimental results. The largest difference was seen in the rear impact. The strain predicted by the continuum model was lower than for the discrete model. The continuum muscle model stiffened the response of the KTH neck model compared with a discrete model, and the strain prediction in the muscles was improved.

  14. Parmodel: a web server for automated comparative modeling of proteins.

    PubMed

    Uchôa, Hugo Brandão; Jorge, Guilherme Eberhart; Freitas Da Silveira, Nelson José; Camera, João Carlos; Canduri, Fernanda; De Azevedo, Walter Filgueira

    2004-12-24

    Parmodel is a web server for automated comparative modeling and evaluation of protein structures. The aim of this tool is to help inexperienced users to perform modeling, assessment, visualization, and optimization of protein models, as well as to help crystallographers evaluate experimentally solved structures. It is subdivided into four modules: Parmodel Modeling, Parmodel Assessment, Parmodel Visualization, and Parmodel Optimization. The main module is Parmodel Modeling, which allows several models to be built for the same protein in a reduced time through the distribution of modeling processes on a Beowulf cluster. Parmodel automates and integrates the main software packages used in comparative modeling, such as MODELLER, Whatcheck, Procheck, Raster3D, Molscript, and Gromacs. This web server is freely accessible at .

  15. Comparative efficacy of oral meloxicam and phenylbutazone in 2 experimental pain models in the horse

    PubMed Central

    Banse, Heidi; Cribb, Alastair E.

    2017-01-01

    The efficacies of oral phenylbutazone [PBZ; 4.4 mg/kg body weight (BW), q12h], a non-selective non-steroidal anti-inflammatory drug (NSAID), and oral meloxicam (MXM; 0.6 mg/kg BW, q24h), a COX-2 selective NSAID, were evaluated in 2 experimental pain models in horses: the adjustable heart bar shoe (HBS) model, primarily representative of mechanical pain, and the lipopolysaccharide-induced synovitis (SYN) model, primarily representative of inflammatory pain. In the HBS model, PBZ reduced multiple indicators of pain compared with the placebo and MXM. Meloxicam did not reduce indicators of pain relative to the placebo. In the SYN model, MXM and PBZ reduced increases in carpal skin temperature compared to the placebo. Meloxicam reduced lameness scores and lameness-induced changes in head movement compared to the placebo and PBZ. Phenylbutazone reduced lameness-induced change in head movement compared to the placebo. Overall, PBZ was more effective than MXM at reducing pain in the HBS model, while MXM was more effective at reducing pain in the SYN model at the oral doses used. PMID:28216685

  16. Introduction to the IWA task group on biofilm modeling.

    PubMed

    Noguera, D R; Morgenroth, E

    2004-01-01

    An International Water Association (IWA) Task Group on Biofilm Modeling was created with the purpose of comparatively evaluating different biofilm modeling approaches. The task group developed three benchmark problems for this comparison, and used a diversity of modeling techniques that included analytical, pseudo-analytical, and numerical solutions to the biofilm problems. Models in one, two, and three dimensional domains were also compared. The first benchmark problem (BM1) described a monospecies biofilm growing in a completely mixed reactor environment and had the purpose of comparing the ability of the models to predict substrate fluxes and concentrations for a biofilm system of fixed total biomass and fixed biomass density. The second problem (BM2) represented a situation in which substrate mass transport by convection was influenced by the hydrodynamic conditions of the liquid in contact with the biofilm. The third problem (BM3) was designed to compare the ability of the models to simulate multispecies and multisubstrate biofilms. These three benchmark problems allowed identification of the specific advantages and disadvantages of each modeling approach. A detailed presentation of the comparative analyses for each problem is provided elsewhere in these proceedings.

  17. 3D-quantitative structure-activity relationship studies on benzothiadiazepine hydroxamates as inhibitors of tumor necrosis factor-alpha converting enzyme.

    PubMed

    Murumkar, Prashant R; Giridhar, Rajani; Yadav, Mange Ram

    2008-04-01

    A set of 29 benzothiadiazepine hydroxamates having selective tumor necrosis factor-alpha converting enzyme inhibitory activity was used to compare the quality and predictive power of 3D-quantitative structure-activity relationship, comparative molecular field analysis, and comparative molecular similarity indices models for atom-based, centroid/atom-based, database, and docked conformer-based alignments. Removal of two outliers from the initial training set of molecules improved the predictivity of the models. Among the 3D-quantitative structure-activity relationship models developed using the above four alignments, the database alignment provided the optimal predictive comparative molecular field analysis model for the training set, with cross-validated r(2) (q(2)) = 0.510, non-cross-validated r(2) = 0.972, standard error of estimate (s) = 0.098, and F = 215.44, and the optimal comparative molecular similarity indices model, with cross-validated r(2) (q(2)) = 0.556, non-cross-validated r(2) = 0.946, standard error of estimate (s) = 0.163, and F = 99.785. These models also showed the best test set prediction for six compounds, with predictive r(2) values of 0.460 and 0.535, respectively. The contour maps obtained from the 3D-quantitative structure-activity relationship studies were appraised for activity trends in the molecules analyzed. The comparative molecular similarity indices models exhibited good external predictivity compared with that of the comparative molecular field analysis models. The data generated in the present study helped us to further design and report some novel and potent tumor necrosis factor-alpha converting enzyme inhibitors.

  18. Telepsychiatry as an Economically Better Model for Reaching the Unreached: A Retrospective Report from South India

    PubMed Central

    Moirangthem, Sydney; Rao, Sabina; Kumar, Channaveerachari Naveen; Narayana, Manjunatha; Raviprakash, Neelaveni; Math, Suresh Bada

    2017-01-01

    Aim: In a resource-poor country such as India, telepsychiatry could be an economical method to expand health-care services. This study was planned to compare the costs and feasibility of three different service delivery models. The end user was a state-funded long-stay Rehabilitation Center (RC) for the homeless. Methodology: Model A comprised patients traveling to a tertiary care center for clinical care, Model B was a community outreach service, and Model C comprised telepsychiatry services. The costing included expenses incurred by the health system to complete a single consultation for a patient on an outpatient basis; it specifically excluded costs borne by the care-receiver. No patients were interviewed for the study. Results: The RC had 736 inmates, of whom 341 had mental illness of very long duration. On comparing the costs, Model A cost 6047.5 INR ($100) per consultation, Model B cost 577.1 INR ($9.1), and Model C cost 137.2 INR ($2.2). Model C was found to be fifty times more economical than Model A and four times more economical than Model B. Conclusion: Telepsychiatry services connecting a tertiary center and a primary health-care center have the potential to be an economical model of service delivery compared to other traditional ones. This resource needs to be tapped in a better fashion to reach the unreached. PMID:28615759

  19. Telepsychiatry as an Economically Better Model for Reaching the Unreached: A Retrospective Report from South India.

    PubMed

    Moirangthem, Sydney; Rao, Sabina; Kumar, Channaveerachari Naveen; Narayana, Manjunatha; Raviprakash, Neelaveni; Math, Suresh Bada

    2017-01-01

    In a resource-poor country such as India, telepsychiatry could be an economical method to expand health-care services. This study was planned to compare the costs and feasibility of three different service delivery models. The end user was a state-funded long-stay Rehabilitation Center (RC) for the homeless. Model A comprised patients traveling to a tertiary care center for clinical care, Model B was a community outreach service, and Model C comprised telepsychiatry services. The costing included expenses incurred by the health system to complete a single consultation for a patient on an outpatient basis; it specifically excluded costs borne by the care-receiver. No patients were interviewed for the study. The RC had 736 inmates, of whom 341 had mental illness of very long duration. On comparing the costs, Model A cost 6047.5 INR ($100) per consultation, Model B cost 577.1 INR ($9.1), and Model C cost 137.2 INR ($2.2). Model C was found to be fifty times more economical than Model A and four times more economical than Model B. Telepsychiatry services connecting a tertiary center and a primary health-care center have the potential to be an economical model of service delivery compared to other traditional ones. This resource needs to be tapped in a better fashion to reach the unreached.

  20. Modeling discourse management compared to other classroom management styles in university physics

    NASA Astrophysics Data System (ADS)

    Desbien, Dwain Michael

    2002-01-01

    A classroom management technique called modeling discourse management was developed to enhance the modeling theory of physics. Modeling discourse management is a student-centered management style that focuses on the epistemology of science. Modeling discourse is social constructivist in nature and was designed to encourage students to present classroom material to each other. In modeling discourse management, the instructor's primary role is that of questioner rather than provider of knowledge. Literature is presented that helps validate the components of modeling discourse. Modeling discourse management was compared to other classroom management styles using multiple measures. Both regular and honors university physics classes were investigated. This style of management was found to enhance student understanding of forces, problem-solving skills, and student views of science compared to traditional classroom management styles for both honors and regular students. Compared to other reformed physics classrooms, modeling discourse classes performed as well or better on student understanding of forces. Outside evaluators viewed modeling discourse classes as reformed, and it was determined that modeling discourse could be effectively disseminated.

  1. Comparative Study on the Prediction of Aerodynamic Characteristics of Aircraft with Turbulence Models

    NASA Astrophysics Data System (ADS)

    Jang, Yujin; Huh, Jinbum; Lee, Namhun; Lee, Seungsoo; Park, Youngmin

    2018-04-01

    The RANS equations are widely used to analyze complex flows over aircraft. The equations require a turbulence model for turbulent flow analyses, and a suitable turbulence model must be selected for accurate predictions of aircraft aerodynamic characteristics. In this study, numerical analyses of three-dimensional aircraft are performed to compare the results of various turbulence models for the prediction of aircraft aerodynamic characteristics. A 3-D RANS solver, MSAPv, is used for the aerodynamic analysis. The four turbulence models compared are the Spalart-Allmaras (SA) model, Coakley's q-ω model, Huang and Coakley's k-ɛ model, and Menter's k-ω SST model. Four aircraft are considered: the ARA-M100, the DLR-F6 wing-body and DLR-F6 wing-body-nacelle-pylon from the Second Drag Prediction Workshop, and a high-wing aircraft with nacelles. The CFD results are compared with experimental data and other published computational results. The details of separation patterns, shock positions, and Cp distributions are discussed to characterize the turbulence models.

  2. Biochemical methane potential prediction of plant biomasses: Comparing chemical composition versus near infrared methods and linear versus non-linear models.

    PubMed

    Godin, Bruno; Mayer, Frédéric; Agneessens, Richard; Gerin, Patrick; Dardenne, Pierre; Delfosse, Philippe; Delcarte, Jérôme

    2015-01-01

    The reliability of different models to predict the biochemical methane potential (BMP) of various plant biomasses was compared using a multispecies dataset. The most reliable BMP prediction models were those based on the near infrared (NIR) spectrum rather than on the chemical composition. The NIR predictions of local (specific regression and non-linear) models were able to estimate the BMP quantitatively, rapidly, cheaply and easily. Such a model could be further used for biomethanation plant management and optimization. The predictions of non-linear models were more reliable than those of linear models. The presentation form (green-dried, silage-dried and silage-wet) of biomasses to the NIR spectrometer did not influence the performance of the NIR prediction models. The accuracy of the BMP method itself should be improved to further enhance the BMP prediction models. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Comparative dynamics in a health investment model.

    PubMed

    Eisenring, C

    1999-10-01

    The method of comparative dynamics fully exploits the inter-temporal structure of optimal control models. I derive comparative dynamic results in a simplified demand for health model. The effect of a change in the depreciation rate on the optimal paths for health capital and investment in health is studied by use of a phase diagram.

  4. Estimating, Testing, and Comparing Specific Effects in Structural Equation Models: The Phantom Model Approach

    ERIC Educational Resources Information Center

    Macho, Siegfried; Ledermann, Thomas

    2011-01-01

    The phantom model approach for estimating, testing, and comparing specific effects within structural equation models (SEMs) is presented. The rationale underlying this novel method consists in representing the specific effect to be assessed as a total effect within a separate latent variable model, the phantom model that is added to the main…

  5. A Comparison of Graded Response and Rasch Partial Credit Models with Subjective Well-Being.

    ERIC Educational Resources Information Center

    Baker, John G.; Rounds, James B.; Zevon, Michael A.

    2000-01-01

    Compared two multiple category item response theory models using a data set of 52 mood terms with 713 undergraduate psychology students. Comparative model fit for the Samejima (F. Samejima, 1966) logistic model for graded responses and the Masters (G. Masters, 1982) partial credit model favored the former model for this data set. (SLD)
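
    Of the two item response theory models compared in this record, the Samejima graded response model has a particularly compact form: the probability of responding in category k is the difference between adjacent cumulative logistic curves. A minimal numeric sketch (the item parameters below are made up for illustration):

```python
import numpy as np

def grm_probs(theta, a, thresholds):
    """Samejima graded response model: category probabilities for one
    polytomous item with discrimination a and ordered thresholds b_k."""
    b = np.asarray(thresholds, dtype=float)
    p_ge = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # P(X >= k) for k = 1..K-1
    p_ge = np.concatenate(([1.0], p_ge, [0.0]))     # P(X >= 0) = 1, P(X >= K) = 0
    return p_ge[:-1] - p_ge[1:]                     # adjacent differences

# Illustrative 4-category item, respondent at theta = 0
probs = grm_probs(theta=0.0, a=1.5, thresholds=[-1.0, 0.0, 1.0])
```

With ordered thresholds, the adjacent differences are guaranteed non-negative and sum to one, which is why the cumulative ("graded") formulation is attractive for mood-rating data like this study's.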

  6. A Comparative of business process modelling techniques

    NASA Astrophysics Data System (ADS)

    Tangkawarow, I. R. H. T.; Waworuntu, J.

    2016-04-01

    In this era, there are many business process modelling techniques. This article researches the differences among business process modelling techniques. For each technique, the definition and structure are explained. This paper presents a comparative analysis of some popular business process modelling techniques. The comparative framework is based on two criteria: notation and how the technique works when implemented in Somerleyton Animal Park. Each technique is discussed with its advantages and disadvantages. The final conclusion recommends business process modelling techniques that are easy to use and serves as a basis for evaluating further modelling techniques.

  7. System and method of designing models in a feedback loop

    DOEpatents

    Gosink, Luke C.; Pulsipher, Trenton C.; Sego, Landon H.

    2017-02-14

    A method and system for designing models is disclosed. The method includes selecting a plurality of models for modeling a common event of interest. The method further includes aggregating the results of the models and analyzing each model compared to the aggregate result to obtain comparative information. The method also includes providing the information back to the plurality of models to design more accurate models through a feedback loop.
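
    The loop described in this patent abstract — aggregate the models' results, compare each model to the aggregate, feed the comparison back — can be sketched numerically. The inverse-deviation weighting below is an assumption chosen for illustration, not the patented method:

```python
# Hypothetical outputs of three models for one common event of interest.
predictions = {"m1": 10.2, "m2": 9.8, "m3": 14.0}

# Step 1: aggregate the results (simple mean as an illustrative choice).
aggregate = sum(predictions.values()) / len(predictions)

# Step 2: compare each model to the aggregate.
deviation = {k: abs(v - aggregate) for k, v in predictions.items()}

# Step 3: feed the comparison back as normalized weights; models closer
# to the consensus gain influence in the next design iteration.
weights = {k: 1.0 / (d + 1e-9) for k, d in deviation.items()}
total = sum(weights.values())
weights = {k: w / total for k, w in weights.items()}
```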

  8. Comparing Multiple-Group Multinomial Log-Linear Models for Multidimensional Skill Distributions in the General Diagnostic Model. Research Report. ETS RR-08-35

    ERIC Educational Resources Information Center

    Xu, Xueli; von Davier, Matthias

    2008-01-01

    The general diagnostic model (GDM) utilizes located latent classes for modeling a multidimensional proficiency variable. In this paper, the GDM is extended by employing a log-linear model for multiple populations that assumes constraints on parameters across multiple groups. This constrained model is compared to log-linear models that assume…

  9. Quantitative structure-activity relationship of organosulphur compounds as soybean 15-lipoxygenase inhibitors using CoMFA and CoMSIA.

    PubMed

    Caballero, Julio; Fernández, Michael; Coll, Deysma

    2010-12-01

    Three-dimensional quantitative structure-activity relationship studies were carried out on a series of 28 organosulphur compounds as 15-lipoxygenase inhibitors using comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA). Quantitative information on structure-activity relationships is provided for further rational development and direction of selective synthesis. All models were derived from a training set of 22 compounds. The best CoMFA model included only the steric field and had a good Q² = 0.789. The CoMSIA results surpassed those of CoMFA: the best CoMSIA model also included only the steric field and had a Q² = 0.894. In addition, this model adequately predicted the compounds in the test set. Furthermore, plots of the steric CoMSIA field allowed conclusions to be drawn for the choice of suitable inhibitors. In this sense, our model should prove useful in future 15-lipoxygenase inhibitor design studies. © 2010 John Wiley & Sons A/S.

  10. Exploration of freely available web-interfaces for comparative homology modelling of microbial proteins.

    PubMed

    Nema, Vijay; Pal, Sudhir Kumar

    2013-01-01

    This study was conducted to find the best suited freely available software for protein modelling, using a few sample proteins. The proteins ranged from small to large in size, with available crystal structures for benchmarking. Key players like Phyre2, Swiss-Model, CPHmodels-3.0, Homer, (PS)2, (PS)(2)-V(2), and Modweb were used for the comparison and model generation. Benchmarking was done with four proteins, Icl, InhA, and KatG of Mycobacterium tuberculosis and RpoB of Thermus thermophilus, to find the most suitable software. The parameters compared during analysis gave relatively better values for Phyre2 and Swiss-Model. This comparative study showed that Phyre2 and Swiss-Model produce good models of both small and large proteins compared to the other screened software. The other software packages were also good but often less efficient at providing full-length, properly folded structures.

  11. A comparative assessment of preclinical chemotherapeutic response of tumors using quantitative non-Gaussian diffusion MRI

    PubMed Central

    Xu, Junzhong; Li, Ke; Smith, R. Adam; Waterton, John C.; Zhao, Ping; Ding, Zhaohua; Does, Mark D.; Manning, H. Charles; Gore, John C.

    2016-01-01

    Background Diffusion-weighted MRI (DWI) signal attenuation is often not mono-exponential (i.e., diffusion is non-Gaussian) at stronger diffusion weightings. Several non-Gaussian diffusion models have been developed and may provide new information or higher sensitivity compared with the conventional apparent diffusion coefficient (ADC) method. However, the relative merits of these models for detecting tumor therapeutic response are not fully clear. Methods Conventional ADC and three widely used non-Gaussian models (bi-exponential, stretched exponential, and statistical) were implemented and compared for assessing SW620 human colon cancer xenografts responding to barasertib, an agent known to induce apoptosis via polyploidy. The Bayesian Information Criterion (BIC) was used for model selection among the three non-Gaussian models. Results Tumor volume, histology, conventional ADC, and all three non-Gaussian DWI models showed significant differences between control and treatment groups after four days of treatment. However, only the non-Gaussian models detected significant changes after two days of treatment. For every treatment or control group, over 65.7% of tumor voxels indicated that the bi-exponential model was strongly or very strongly preferred. Conclusion Non-Gaussian DWI model-derived biomarkers can detect chemotherapeutic response of tumors earlier than conventional ADC and tumor volume. The bi-exponential model provides better fitting than the statistical and stretched exponential models for the tumors and treatment used in the current work. PMID:27919785
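
    The signal equations behind two of the non-Gaussian models compared in this record, together with the BIC used for model selection, can be written down directly (the statistical model is omitted here; b-values and parameters are illustrative, not study values):

```python
import numpy as np

b = np.linspace(0, 3000, 7)  # diffusion weightings, s/mm^2 (illustrative)

def mono_exp(b, S0, adc):                  # conventional ADC model
    return S0 * np.exp(-b * adc)

def bi_exp(b, S0, f, d_fast, d_slow):      # two-compartment signal decay
    return S0 * (f * np.exp(-b * d_fast) + (1 - f) * np.exp(-b * d_slow))

def stretched_exp(b, S0, ddc, alpha):      # anomalous-diffusion form
    return S0 * np.exp(-(b * ddc) ** alpha)

def bic(y, y_fit, n_params):
    """Bayesian Information Criterion from the residual sum of squares."""
    n = len(y)
    rss = np.sum((y - y_fit) ** 2)
    return n * np.log(rss / n) + n_params * np.log(n)

# With a matched initial slope, the bi-exponential signal decays more
# slowly at strong diffusion weighting (the non-Gaussian "tail").
s_bi = bi_exp(b, 1.0, 0.7, 1.0e-3, 1.0e-4)
s_mono = mono_exp(b, 1.0, 0.7 * 1.0e-3 + 0.3 * 1.0e-4)
```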

  12. Does the Model Matter? Comparing Video Self-Modeling and Video Adult Modeling for Task Acquisition and Maintenance by Adolescents with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Cihak, David F.; Schrader, Linda

    2009-01-01

    The purpose of this study was to compare the effectiveness and efficiency of learning and maintaining vocational chain tasks using video self-modeling and video adult modeling instruction. Four adolescents with autism spectrum disorders were taught vocational and prevocational skills. Although both video modeling conditions were effective for…

  13. Comparing i-Tree modeled ozone deposition with field measurements in a periurban Mediterranean forest

    Treesearch

    A. Morani; D. Nowak; S. Hirabayashi; G. Guidolotti; M. Medori; V. Muzzini; S. Fares; G. Scarascia Mugnozza; C. Calfapietra

    2014-01-01

    Ozone flux estimates from the i-Tree model were compared with ozone flux measurements using the Eddy Covariance technique in a periurban Mediterranean forest near Rome (Castelporziano). For the first time i-Tree model outputs were compared with field measurements in relation to dry deposition estimates. Results showed generally a...

  14. EPA ALPHA Modeling of a Conventional Mid-Size Car with CVT and Comparable Powertrain Technologies (SAE 2016-01-1141)

    EPA Science Inventory

    This paper presents the testing and ALPHA modeling of a CVT-equipped 2013 Nissan Altima 2.5S using comparable powertrain technology inputs in the effort to model the current and future U.S. light-duty vehicle fleet approximated using components with comparable levels of performan...

  15. Cardiac arrest risk standardization using administrative data compared to registry data.

    PubMed

    Grossestreuer, Anne V; Gaieski, David F; Donnino, Michael W; Nelson, Joshua I M; Mutter, Eric L; Carr, Brendan G; Abella, Benjamin S; Wiebe, Douglas J

    2017-01-01

    Methods for comparing hospitals regarding cardiac arrest (CA) outcomes, vital for improving resuscitation performance, rely on data collected by cardiac arrest registries. However, most CA patients are treated at hospitals that do not participate in such registries. This study aimed to determine whether CA risk standardization modeling based on administrative data could perform as well as that based on registry data. Two risk standardization logistic regression models were developed using 2453 patients treated from 2000-2015 at three hospitals in an academic health system. Registry and administrative data were accessed for all patients. The outcome was death at hospital discharge. The registry model was considered the "gold standard" with which to compare the administrative model, using metrics including comparing areas under the curve, calibration curves, and Bland-Altman plots. The administrative risk standardization model had a c-statistic of 0.891 (95% CI: 0.876-0.905) compared to a registry c-statistic of 0.907 (95% CI: 0.895-0.919). When limited to only non-modifiable factors, the administrative model had a c-statistic of 0.818 (95% CI: 0.799-0.838) compared to a registry c-statistic of 0.810 (95% CI: 0.788-0.831). All models were well-calibrated. There was no significant difference between c-statistics of the models, providing evidence that valid risk standardization can be performed using administrative data. Risk standardization using administrative data performs comparably to standardization using registry data. This methodology represents a new tool that can enable opportunities to compare hospital performance in specific hospital systems or across the entire US in terms of survival after CA.
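
    The c-statistic this study uses to compare the administrative and registry models is the area under the ROC curve, which equals the probability that a randomly chosen positive case is scored above a randomly chosen negative one. It can be computed directly from that definition; the scores below are made-up placeholders, not study data:

```python
import numpy as np

def c_statistic(y_true, scores):
    """AUC via the Mann-Whitney relationship: probability that a
    positive case scores higher than a negative one (ties count half)."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical predicted mortality risks for 4 patients (outcome 0/1)
auc = c_statistic([0, 0, 1, 1], [0.10, 0.40, 0.35, 0.80])  # -> 0.75
```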

  16. Cardiac arrest risk standardization using administrative data compared to registry data

    PubMed Central

    Gaieski, David F.; Donnino, Michael W.; Nelson, Joshua I. M.; Mutter, Eric L.; Carr, Brendan G.; Abella, Benjamin S.; Wiebe, Douglas J.

    2017-01-01

    Background Methods for comparing hospitals regarding cardiac arrest (CA) outcomes, vital for improving resuscitation performance, rely on data collected by cardiac arrest registries. However, most CA patients are treated at hospitals that do not participate in such registries. This study aimed to determine whether CA risk standardization modeling based on administrative data could perform as well as that based on registry data. Methods and results Two risk standardization logistic regression models were developed using 2453 patients treated from 2000–2015 at three hospitals in an academic health system. Registry and administrative data were accessed for all patients. The outcome was death at hospital discharge. The registry model was considered the “gold standard” with which to compare the administrative model, using metrics including comparing areas under the curve, calibration curves, and Bland-Altman plots. The administrative risk standardization model had a c-statistic of 0.891 (95% CI: 0.876–0.905) compared to a registry c-statistic of 0.907 (95% CI: 0.895–0.919). When limited to only non-modifiable factors, the administrative model had a c-statistic of 0.818 (95% CI: 0.799–0.838) compared to a registry c-statistic of 0.810 (95% CI: 0.788–0.831). All models were well-calibrated. There was no significant difference between c-statistics of the models, providing evidence that valid risk standardization can be performed using administrative data. Conclusions Risk standardization using administrative data performs comparably to standardization using registry data. This methodology represents a new tool that can enable opportunities to compare hospital performance in specific hospital systems or across the entire US in terms of survival after CA. PMID:28783754

  17. A framework for understanding cancer comparative effectiveness research data needs.

    PubMed

    Carpenter, William R; Meyer, Anne-Marie; Abernethy, Amy P; Stürmer, Til; Kosorok, Michael R

    2012-11-01

    Randomized controlled trials remain the gold standard for evaluating cancer intervention efficacy. Randomized trials are not always feasible, practical, or timely and often don't adequately reflect patient heterogeneity and real-world clinical practice. Comparative effectiveness research can leverage secondary data to help fill knowledge gaps randomized trials leave unaddressed; however, comparative effectiveness research also faces shortcomings. The goal of this project was to develop a new model and inform an evolving framework articulating cancer comparative effectiveness research data needs. We examined prevalent models and conducted semi-structured discussions with 76 clinicians and comparative effectiveness research researchers affiliated with the Agency for Healthcare Research and Quality's cancer comparative effectiveness research programs. A new model was iteratively developed and presents cancer comparative effectiveness research and important measures in a patient-centered, longitudinal chronic care model better reflecting contemporary cancer care in the context of the cancer care continuum, rather than a single-episode, acute-care perspective. Immediately relevant for federally funded comparative effectiveness research programs, the model informs an evolving framework articulating cancer comparative effectiveness research data needs, including evolutionary enhancements to registries and epidemiologic research data systems. We discuss elements of contemporary clinical practice, methodology improvements, and related needs affecting comparative effectiveness research's ability to yield findings clinicians, policy makers, and stakeholders can confidently act on. Copyright © 2012 Elsevier Inc. All rights reserved.

  18. A framework for understanding cancer comparative effectiveness research data needs

    PubMed Central

    Carpenter, William R; Meyer, Anne-Marie; Abernethy, Amy P.; Stürmer, Til; Kosorok, Michael R.

    2012-01-01

    Objective Randomized controlled trials remain the gold standard for evaluating cancer intervention efficacy. Randomized trials are not always feasible, practical, or timely, and often don’t adequately reflect patient heterogeneity and real-world clinical practice. Comparative effectiveness research can leverage secondary data to help fill knowledge gaps randomized trials leave unaddressed; however, comparative effectiveness research also faces shortcomings. The goal of this project was to develop a new model and inform an evolving framework articulating cancer comparative effectiveness research data needs. Study Design and Setting We examined prevalent models and conducted semi-structured discussions with 76 clinicians and comparative effectiveness research researchers affiliated with the Agency for Healthcare Research and Quality’s cancer comparative effectiveness research programs. Results A new model was iteratively developed, and presents cancer comparative effectiveness research and important measures in a patient-centered, longitudinal chronic care model better-reflecting contemporary cancer care in the context of the cancer care continuum, rather than a single-episode, acute-care perspective. Conclusion Immediately relevant for federally-funded comparative effectiveness research programs, the model informs an evolving framework articulating cancer comparative effectiveness research data needs, including evolutionary enhancements to registries and epidemiologic research data systems. We discuss elements of contemporary clinical practice, methodology improvements, and related needs affecting comparative effectiveness research’s ability to yield findings clinicians, policymakers, and stakeholders can confidently act on. PMID:23017633

  19. Small-molecule ligand docking into comparative models with Rosetta

    PubMed Central

    Combs, Steven A; DeLuca, Samuel L; DeLuca, Stephanie H; Lemmon, Gordon H; Nannemann, David P; Nguyen, Elizabeth D; Willis, Jordan R; Sheehan, Jonathan H; Meiler, Jens

    2017-01-01

    Structure-based drug design is frequently used to accelerate the development of small-molecule therapeutics. Although substantial progress has been made in X-ray crystallography and nuclear magnetic resonance (NMR) spectroscopy, the availability of high-resolution structures is limited owing to the frequent inability to crystallize or obtain sufficient NMR restraints for large or flexible proteins. Computational methods can be used to both predict unknown protein structures and model ligand interactions when experimental data are unavailable. This paper describes a comprehensive and detailed protocol using the Rosetta modeling suite to dock small-molecule ligands into comparative models. In the protocol presented here, we review the comparative modeling process, including sequence alignment, threading and loop building. Next, we cover docking a small-molecule ligand into the protein comparative model. In addition, we discuss criteria that can improve ligand docking into comparative models. Finally, and importantly, we present a strategy for assessing model quality. The entire protocol is presented on a single example selected solely for didactic purposes. The results are therefore not representative and do not replace benchmarks published elsewhere. We also provide an additional tutorial so that the user can gain hands-on experience in using Rosetta. The protocol should take 5–7 h, with additional time allocated for computer generation of models. PMID:23744289

  20. Fold assessment for comparative protein structure modeling.

    PubMed

    Melo, Francisco; Sali, Andrej

    2007-11-01

    Accurate and automated assessment of both geometrical errors and incompleteness of comparative protein structure models is necessary for an adequate use of the models. Here, we describe a composite score for discriminating between models with the correct and incorrect fold. To find an accurate composite score, we designed and applied a genetic algorithm method that searched for a most informative subset of 21 input model features as well as their optimized nonlinear transformation into the composite score. The 21 input features included various statistical potential scores, stereochemistry quality descriptors, sequence alignment scores, geometrical descriptors, and measures of protein packing. The optimized composite score was found to depend on (1) a statistical potential z-score for residue accessibilities and distances, (2) model compactness, and (3) percentage sequence identity of the alignment used to build the model. The accuracy of the composite score was compared with the accuracy of assessment by single and combined features as well as by other commonly used assessment methods. The testing set was representative of models produced by automated comparative modeling on a genomic scale. The composite score performed better than any other tested score in terms of the maximum correct classification rate (i.e., 3.3% false positives and 2.5% false negatives) as well as the sensitivity and specificity across the whole range of thresholds. The composite score was implemented in our program MODELLER-8 and was used to assess models in the MODBASE database that contains comparative models for domains in approximately 1.3 million protein sequences.

  1. Dependability and performability analysis

    NASA Technical Reports Server (NTRS)

    Trivedi, Kishor S.; Ciardo, Gianfranco; Malhotra, Manish; Sahner, Robin A.

    1993-01-01

    Several practical issues regarding the specification and solution of dependability and performability models are discussed. Model types with and without rewards are compared: continuous-time Markov chains (CTMCs) are compared with (continuous-time) Markov reward models (MRMs), and generalized stochastic Petri nets (GSPNs) are compared with stochastic reward nets (SRNs). It is shown that reward-based models can lead to more concise model specifications and to the solution of a variety of new measures. With respect to the solution of dependability and performability models, three practical issues are identified (largeness, stiffness, and non-exponentiality), and a variety of approaches to deal with them are discussed, including some of the latest research efforts.
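
    As a concrete instance of the CTMC dependability models discussed above, the steady-state availability of a single repairable component follows from solving the global balance equations πQ = 0 with the normalization Σπ = 1 (the failure and repair rates below are illustrative):

```python
import numpy as np

# Two-state repairable system: up (state 0) fails at rate lam,
# down (state 1) is repaired at rate mu. Q is the CTMC generator.
lam, mu = 1e-3, 1e-1   # per hour (illustrative rates)
Q = np.array([[-lam,  lam],
              [  mu,  -mu]])

# Steady state: pi @ Q = 0 with sum(pi) = 1. Stack the normalization
# constraint onto the balance equations and solve by least squares.
A = np.vstack([Q.T, np.ones(2)])
rhs = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, rhs, rcond=None)

availability = pi[0]   # equals mu / (mu + lam) analytically
```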

  2. Calculating osmotic pressure of glucose solutions according to ASOG model and measuring it with air humidity osmometry.

    PubMed

    Wei, Guocui; Zhan, Tingting; Zhan, Xiancheng; Yu, Lan; Wang, Xiaolan; Tan, Xiaoying; Li, Chengrong

    2016-09-01

    The osmotic pressure of glucose solution at a wide concentration range was calculated using ASOG model and experimentally determined by our newly reported air humidity osmometry. The measurements from air humidity osmometry were compared with the well-established freezing point osmometry and ASOG model calculations at low concentrations and with only ASOG model calculations at high concentrations where no standard experimental method could serve as a reference for comparison. Results indicate that air humidity osmometry measurements are comparable to ASOG model calculations at a wide concentration range, while at low concentrations freezing point osmometry measurements provide better comparability with ASOG model calculations.
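
    The thermodynamic link underlying this study can be sketched: a group-contribution model such as ASOG supplies the water activity of the solution, and the osmotic pressure follows from π = -(RT/V̄w) ln aw. A minimal illustration, where the activity value is an assumed input rather than an actual ASOG calculation:

```python
import math

R = 8.314       # gas constant, J/(mol K)
T = 298.15      # temperature, K
V_W = 1.8e-5    # molar volume of liquid water, m^3/mol (approximate)

def osmotic_pressure(a_w):
    """Osmotic pressure in Pa from water activity: pi = -(R*T/Vw)*ln(a_w)."""
    return -(R * T) / V_W * math.log(a_w)

# e.g., an assumed water activity of 0.99 gives roughly 1.4 MPa
p = osmotic_pressure(0.99)
```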

  3. Comparing in Cylinder Pressure Modelling of a DI Diesel Engine Fuelled on Alternative Fuel Using Two Tabulated Chemistry Approaches.

    PubMed

    Ngayihi Abbe, Claude Valery; Nzengwa, Robert; Danwe, Raidandi

    2014-01-01

    The present work presents a comparative simulation of a diesel engine fuelled on diesel and biodiesel. Two models based on tabulated chemistry were implemented for the simulation, and results were compared with experimental data obtained from a single-cylinder diesel engine. The first is a single-zone model based on the Krieger and Borman combustion model, while the second is a two-zone model based on the Olikara and Borman combustion model. It was shown that both models predict the engine's in-cylinder pressure as well as its overall performance well. The second model was more accurate than the first, while the first was easier to implement and faster to compute. The first method is therefore better suited for real-time engine control and monitoring, while the second is better suited for engine design and emission prediction.
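
    A common building block of single-zone in-cylinder models of this kind is a prescribed heat-release profile, most often a Wiebe function. The sketch below is a generic illustration of that idea; it is not the tabulated-chemistry approach the paper actually compares, and all parameter values are assumptions:

```python
import math

def wiebe_mfb(theta, theta_soc, duration, a=5.0, m=2.0):
    """Cumulative mass fraction burned vs. crank angle theta (deg),
    for combustion starting at theta_soc over the given duration.
    With a = 5, about 99.3% of the charge is burned at end of burn."""
    if theta <= theta_soc:
        return 0.0
    x = (theta - theta_soc) / duration
    return 1.0 - math.exp(-a * x ** (m + 1))

# Illustrative burn: starts at -10 deg, lasts 50 deg of crank angle
mfb_end = wiebe_mfb(40.0, -10.0, 50.0)
```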

  4. Wildfire ignition-distribution modelling: a comparative study in the Huron-Manistee National Forest, Michigan, USA

    Treesearch

    Avi Bar Massada; Alexandra D. Syphard; Susan I. Stewart; Volker C. Radeloff

    2012-01-01

    Wildfire ignition distribution models are powerful tools for predicting the probability of ignitions across broad areas, and identifying their drivers. Several approaches have been used for ignition-distribution modelling, yet the performance of different model types has not been compared. This is unfortunate, given that conceptually similar species-distribution models...

  5. A Comparative Structural Equation Modeling Investigation of the Relationships among Teaching, Cognitive and Social Presence

    ERIC Educational Resources Information Center

    Kozan, Kadir

    2016-01-01

    The present study investigated the relationships among teaching, cognitive, and social presence through several structural equation models to see which model would better fit the data. To this end, the present study employed and compared several different structural equation models because different models could fit the data equally well. Among…

  6. Comparing models for growth and management of forest tracts

    Treesearch

    J.J. Colbert; Michael Schuckers; Desta Fekedulegn

    2003-01-01

    The Stand Damage Model (SDM) is a PC-based model that is easily installed, calibrated and initialized for use in exploring the future growth and management of forest stands or small wood lots. We compare the basic individual tree growth model incorporated in this model with alternative models that predict the basal area growth of trees. The SDM is a gap-type simulator...

  7. BRIDGE: A Simulation Model for Comparing the Costs of Expanding a Campus Using Distributed Instruction versus Classroom Instruction. Documentation and Instructions.

    ERIC Educational Resources Information Center

    Jewett, Frank

    These instructions describe the use of BRIDGE, a computer software simulation model that is designed to compare the costs of expanding a college campus using distributed instruction (television or asynchronous network courses) versus the costs of expanding using lecture/lab type instruction. The model compares the projected operating and capital…

  8. Jump Model / Comparability Ratio Model — Joinpoint Help System 4.4.0.0

    Cancer.gov

    The Jump Model / Comparability Ratio Model in the Joinpoint software provides a direct estimation of trend data (e.g. cancer rates) where there is a systematic scale change, which causes a “jump” in the rates, but is assumed not to affect the underlying trend.
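
    The adjustment idea can be illustrated numerically: when the later coding system is taken as the reference, rates observed under the old system are multiplied by the comparability ratio so both segments sit on one scale before the trend is fit. The numbers below are entirely hypothetical and do not reflect the Joinpoint software's actual estimation procedure, which fits the jump and the trend jointly:

```python
# Hypothetical rate series with a coding-system change after year 3.
rates = [50.0, 51.5, 53.0, 48.1, 49.5]
comparability_ratio = 0.91   # hypothetical new-to-old scale factor

# Rescale the pre-change years so the series is comparable throughout.
adjusted = [r * comparability_ratio for r in rates[:3]] + rates[3:]
```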

  9. Item Response Modeling of Paired Comparison and Ranking Data

    ERIC Educational Resources Information Center

    Maydeu-Olivares, Alberto; Brown, Anna

    2010-01-01

    The comparative format used in ranking and paired comparisons tasks can significantly reduce the impact of uniform response biases typically associated with rating scales. Thurstone's (1927, 1931) model provides a powerful framework for modeling comparative data such as paired comparisons and rankings. Although Thurstonian models are generally…

  10. The Soil Model Development and Intercomparison Panel (SoilMIP) of the International Soil Modeling Consortium (ISMC)

    NASA Astrophysics Data System (ADS)

    Vanderborght, Jan; Priesack, Eckart

    2017-04-01

    The Soil Model Development and Intercomparison Panel (SoilMIP) is an initiative of the International Soil Modeling Consortium. Its mission is to foster the further development of soil models that can predict soil functions and their changes (i) due to soil use and land management and (ii) due to external impacts of climate change and pollution. Since soil functions and soil threats are diverse but linked with each other, the overall aim is to develop holistic models that represent the key functions of the soil system and the links between them. These models should be scaled up and integrated in terrestrial system models that describe the feedbacks between processes in the soil and the other terrestrial compartments. We propose and illustrate a few steps that could be taken to achieve these goals. A first step is the development of scenarios that compare simulations by models that predict the same or different soil services. Scenarios can be considered at three different levels of comparisons: scenarios that compare the numerics (accuracy but also speed) of models, scenarios that compare the effect of differences in process descriptions, and scenarios that compare simulations with experimental data. A second step involves the derivation of metrics or summary statistics that effectively compare model simulations and disentangle parameterization from model concept differences. These metrics can be used to evaluate how more complex model simulations can be represented by simpler models using an appropriate parameterization. A third step relates to the parameterization of models. Application of simulation models implies that appropriate model parameters have to be defined for a range of environmental conditions and locations. Spatial modelling approaches are used to derive parameter distributions. 
Considering that soils and their properties emerge from the interaction between physical, chemical and biological processes, combining spatial models with process models would lead to consistent, correlated parameter distributions and could potentially represent self-organizing processes in soils and landscapes.

  11. Biomechanical effects of hybrid stabilization on the risk of proximal adjacent-segment degeneration following lumbar spinal fusion using an interspinous device or a pedicle screw-based dynamic fixator.

    PubMed

    Lee, Chang-Hyun; Kim, Young Eun; Lee, Hak Joong; Kim, Dong Gyu; Kim, Chi Heon

    2017-12-01

OBJECTIVE Pedicle screw-rod-based hybrid stabilization (PH) and interspinous device-based hybrid stabilization (IH) have been proposed to prevent adjacent-segment degeneration (ASD), and their effectiveness has been reported. However, a comparative study based on sound biomechanical proof has not yet been reported. The aim of this study was to compare the biomechanical effects of IH and PH on the transition and adjacent segments. METHODS A validated finite element model of the normal lumbosacral spine was used. Based on the normal model, a rigid fusion model was immobilized at the L4-5 level by a rigid fixator. The DIAM or NFlex model was added on the L3-4 segment of the fusion model to construct the IH and PH models, respectively. The developed models simulated 4 different loading directions using the hybrid loading protocol. RESULTS Compared with the intact case, fusion at L4-5 produced 18.8%, 9.3%, 11.7%, and 13.7% increments in motion at L3-4 under flexion, extension, lateral bending, and axial rotation, respectively. Additional instrumentation at L3-4 (the transition segment) in the hybrid models reduced motion changes at this level. The IH model showed changes in motion of 8.4%, -33.9%, 6.9%, and 2.0% at this segment, whereas the PH model showed -30.4%, -26.7%, -23.0%, and 12.9%. At L2-3 (the adjacent segment), the PH model showed motion increments of 14.3%, 3.4%, 15.0%, and 0.8% compared with the motion in the IH model. Both hybrid models showed decreased intradiscal pressure (IDP) at the transition segment compared with the fusion model, but the pressure at L2-3 (the adjacent segment) increased in all loading directions except under extension. CONCLUSIONS Both the IH and PH models limited excessive motion and IDP at the transition segment compared with the fusion model. At the segment adjacent to the transition level, PH induced higher stress than IH. Such differences may eventually influence the likelihood of ASD.

  12. Investigation of the flight mechanics simulation of a hovering helicopter

    NASA Technical Reports Server (NTRS)

    Chaimovich, M.; Rosen, A.; Rand, O.; Mansur, M. H.; Tischler, M. B.

    1992-01-01

The flight mechanics simulation of a hovering helicopter is investigated by comparing the results of two different numerical models with flight test data for a hovering AH-64 Apache. The two models are the U.S. Army BEMAP and the Technion model. These nonlinear models are linearized by applying a numerical linearization procedure. The results of the linear models are compared with identification results in terms of eigenvalues, stability and control derivatives, and frequency responses. Detailed time histories of the responses of the complete nonlinear models to various pilot inputs are compared with flight test results. In addition, the sensitivity of the models to various effects is also investigated. The results are discussed and problematic aspects of the simulation are identified.

  13. Comparing Families of Dynamic Causal Models

    PubMed Central

    Penny, Will D.; Stephan, Klaas E.; Daunizeau, Jean; Rosa, Maria J.; Friston, Karl J.; Schofield, Thomas M.; Leff, Alex P.

    2010-01-01

    Mathematical models of scientific data can be formally compared using Bayesian model evidence. Previous applications in the biological sciences have mainly focussed on model selection in which one first selects the model with the highest evidence and then makes inferences based on the parameters of that model. This “best model” approach is very useful but can become brittle if there are a large number of models to compare, and if different subjects use different models. To overcome this shortcoming we propose the combination of two further approaches: (i) family level inference and (ii) Bayesian model averaging within families. Family level inference removes uncertainty about aspects of model structure other than the characteristic of interest. For example: What are the inputs to the system? Is processing serial or parallel? Is it linear or nonlinear? Is it mediated by a single, crucial connection? We apply Bayesian model averaging within families to provide inferences about parameters that are independent of further assumptions about model structure. We illustrate the methods using Dynamic Causal Models of brain imaging data. PMID:20300649

  14. Comparative and Predictive Multimedia Assessments Using Monte Carlo Uncertainty Analyses

    NASA Astrophysics Data System (ADS)

    Whelan, G.

    2002-05-01

Multiple-pathway frameworks (sometimes referred to as multimedia models) provide a platform for combining medium-specific environmental models and databases so that they can be utilized in a more holistic assessment of contaminant fate and transport in the environment. These frameworks provide a relatively seamless transfer of information from one model to the next and from databases to models. Within these frameworks, multiple models are linked, resulting in models that consume information from upstream models and produce information to be consumed by downstream models. The Framework for Risk Analysis in Multimedia Environmental Systems (FRAMES) is an example, which allows users to link their models to other models and databases. FRAMES is an icon-driven, site-layout platform and an open-architecture, object-oriented system that interacts with environmental databases; helps the user construct a real-world-based Conceptual Site Model; allows the user to choose the most appropriate models to solve simulation requirements; solves the standard risk paradigm of release, transport, and fate, followed by exposure/risk assessments for people and ecology; and presents graphical packages for analyzing results. FRAMES is specifically designed to allow users to link their own models into a system that contains models developed by others. This paper will present the use of FRAMES to evaluate potential human health exposures using real site data and realistic assumptions from sources, through the vadose and saturated zones, to exposure and risk assessment at three real-world sites, using the Multimedia Environmental Pollutant Assessment System (MEPAS), a multimedia model contained within FRAMES. These real-world examples use predictive and comparative approaches coupled with a Monte Carlo analysis.
A predictive analysis is where models are calibrated to monitored site data, prior to the assessment, and a comparative analysis is where models are not calibrated but based solely on literature or judgement and is usually used to compare alternatives. In many cases, a combination is employed where the model is calibrated to a portion of the data (e.g., to determine hydrodynamics), then used to compare alternatives. Three subsurface-based multimedia examples are presented, increasing in complexity. The first presents the application of a predictive, deterministic assessment; the second presents a predictive and comparative, Monte Carlo analysis; and the third presents a comparative, multi-dimensional Monte Carlo analysis. Endpoints are typically presented in terms of concentration, hazard, risk, and dose, and because the vadose zone model typically represents a connection between a source and the aquifer, it does not generally represent the final medium in a multimedia risk assessment.

  15. Canis familiaris As a Model for Non-Invasive Comparative Neuroscience.

    PubMed

    Bunford, Nóra; Andics, Attila; Kis, Anna; Miklósi, Ádám; Gácsi, Márta

    2017-07-01

    There is an ongoing need to improve animal models for investigating human behavior and its biological underpinnings. The domestic dog (Canis familiaris) is a promising model in cognitive neuroscience. However, before it can contribute to advances in this field in a comparative, reliable, and valid manner, several methodological issues warrant attention. We review recent non-invasive canine neuroscience studies, primarily focusing on (i) variability among dogs and between dogs and humans in cranial characteristics, and (ii) generalizability across dog and dog-human studies. We argue not for methodological uniformity but for functional comparability between methods, experimental designs, and neural responses. We conclude that the dog may become an innovative and unique model in comparative neuroscience, complementing more traditional models. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. [Protective effect of Saccharomyces boulardii against intestinal mucosal barrier injury in rats with nonalcoholic fatty liver disease].

    PubMed

    Liu, Y T; Li, Y Q; Wang, Y Z

    2016-12-20

Objective: To investigate the protective effect of Saccharomyces boulardii against intestinal mucosal barrier injury in rats with nonalcoholic fatty liver disease (NAFLD). Methods: A total of 36 healthy male Sprague-Dawley rats with a mean body weight of 180±20 g were randomly divided into control group, model group, and treatment group, with 12 rats in each group, after adaptive feeding for 1 week. The rats in the control group were given basic feed, and those in the model group and treatment group were given high-fat feed. After 12 weeks of feeding, the treatment group was given Saccharomyces boulardii (75×10⁸ CFU/kg/d) by gavage, and those in the control group and model group were given isotonic saline by gavage. At the 20th week, blood samples were taken from the abdominal aorta to measure the levels of alanine aminotransferase (ALT), aspartate aminotransferase (AST), triglyceride (TG), intestinal fatty acid binding protein (IFABP), tumor necrosis factor-α (TNF-α), and endotoxins. The liver pathological changes, intestinal histopathological changes, and expression of occludin in the intestinal mucosa were observed. Fecal samples were collected to measure the changes in Escherichia coli and Bacteroides. A one-way analysis of variance and the SNK test were used for comparison between multiple groups, and the rank sum test was used as the non-parametric test. Results: Compared with the control group, the model group had significantly higher body weight, liver mass, and liver index ( P < 0.05), and compared with the model group, the treatment group had significant reductions in body weight, liver mass, and liver index ( P < 0.05). The model group had significant increases in TG, ALT, and AST compared with the control group ( P < 0.05), the treatment group had a significant reduction in AST compared with the model group ( P < 0.05), and the treatment group had slight reductions in TG and ALT compared with the model group ( P > 0.05). 
Compared with the control group, the model group had significant increases in the levels of endotoxin, TNF-α, and IFABP ( P < 0.05), and the treatment group had significant reductions in the levels of endotoxin, TNF-α, and IFABP ( P < 0.05). Liver tissue staining showed that the model group had significantly increased hepatocyte steatosis compared with the control group ( P < 0.05), and that the treatment group had significantly reduced hepatocyte steatosis compared with the model group ( P < 0.05). The intestinal villi in the control group had ordered arrangement and a complete structure; in the model group, the intestinal villi were shortened with local shedding and a lack of ordered arrangement; compared with the model group, the treatment group had mild edema and ordered arrangements of the intestinal villi. The model group had a significantly reduced level of occludin protein compared with the control group ( P < 0.05), and the treatment group had a slight increase compared with the model group. The model group had a significantly increased number of Escherichia coli and a significantly reduced number of Bacteroides compared with the control group ( P < 0.05), and the treatment group had a significantly reduced number of Escherichia coli and a significantly increased number of Bacteroides compared with the model group ( P < 0.05). Conclusion: High-fat diet can successfully induce NAFLD in rats, and intervention with Saccharomyces boulardii can reduce body weight and improve hepatocyte steatosis. Saccharomyces boulardii can reduce endotoxemia in NAFLD rats and thus alleviate inflammatory response. Saccharomyces boulardii can also adjust the proportion of Escherichia coli and Bacteroides in the intestine of NAFLD rats.

  17. Use of models in large-area forest surveys: comparing model-assisted, model-based and hybrid estimation

    Treesearch

    Goran Stahl; Svetlana Saarela; Sebastian Schnell; Soren Holm; Johannes Breidenbach; Sean P. Healey; Paul L. Patterson; Steen Magnussen; Erik Naesset; Ronald E. McRoberts; Timothy G. Gregoire

    2016-01-01

    This paper focuses on the use of models for increasing the precision of estimators in large-area forest surveys. It is motivated by the increasing availability of remotely sensed data, which facilitates the development of models predicting the variables of interest in forest surveys. We present, review and compare three different estimation frameworks where...

  18. Ability Estimation and Item Calibration Using the One and Three Parameter Logistic Models: A Comparative Study. Research Report 77-1.

    ERIC Educational Resources Information Center

    Reckase, Mark D.

    Latent trait model calibration procedures were used on data obtained from a group testing program. The one-parameter model of Wright and Panchapakesan and the three-parameter logistic model of Wingersky, Wood, and Lord were selected for comparison. These models and their corresponding estimation procedures were compared, using actual and simulated…

  19. A Simulation of Alternatives for Wholesale Inventory Replenishment

    DTIC Science & Technology

    2016-03-01

The last method is a mixed-integer, linear optimization model. Comparative Inventory Simulation, a discrete event simulation model, is designed to find the fill rates achieved for each National Item... Keywords: simulation; event graphs; reorder point; fill-rate; backorder; discrete event simulation; wholesale inventory optimization model.

  20. Accuracy of Digital vs. Conventional Implant Impressions

    PubMed Central

    Lee, Sang J.; Betensky, Rebecca A.; Gianneschi, Grace E.; Gallucci, German O.

    2015-01-01

The accuracy of digital impressions greatly influences the clinical viability of implant restorations. The aim of this study is to compare, by three-dimensional analysis, the accuracy of gypsum models acquired from conventional implant impressions to digitally milled models created from direct digitalization. Thirty gypsum models and 30 digitally milled models, each impressed directly from a reference model, were prepared. The models were scanned by a laboratory scanner and 30 STL datasets from each group were imported to an inspection software. The datasets were aligned to the reference dataset by a repeated best fit algorithm and 10 specified contact locations of interest were measured as mean volumetric deviations. The areas were pooled by cusps, fossae, interproximal contacts, and horizontal and vertical axes of implant position and angulation. The pooled areas were statistically analysed by comparing each group to the reference model, with mean volumetric deviations accounting for accuracy and standard deviations for precision. Milled models from digital impressions had comparable accuracy to gypsum models from conventional impressions. However, differences in fossae and vertical displacement of the implant position from the gypsum and digitally milled models compared to the reference model exhibited statistical significance (p<0.001 and p=0.020, respectively). PMID:24720423

  1. Confounder summary scores when comparing the effects of multiple drug exposures.

    PubMed

    Cadarette, Suzanne M; Gagne, Joshua J; Solomon, Daniel H; Katz, Jeffrey N; Stürmer, Til

    2010-01-01

Little information is available comparing methods to adjust for confounding when considering multiple drug exposures. We compared three analytic strategies to control for confounding based on measured variables: conventional multivariable, exposure propensity score (EPS), and disease risk score (DRS). Each method was applied to a dataset (2000-2006) recently used to examine the comparative effectiveness of four drugs. The relative effectiveness of risedronate, nasal calcitonin, and raloxifene in preventing non-vertebral fracture was in each case compared to alendronate. EPSs were derived both by using multinomial logistic regression (single model EPS) and by three separate logistic regression models (separate model EPS). DRSs were derived and event rates compared using Cox proportional hazard models. DRSs derived among the entire cohort (full cohort DRS) were compared to DRSs derived only among the referent alendronate users (unexposed cohort DRS). Less than 8% deviation from the base estimate (conventional multivariable) was observed applying single model EPS, separate model EPS, or full cohort DRS. Applying the unexposed cohort DRS when background risk for fracture differed between comparison drug exposure cohorts resulted in -7% to +13% deviation from our base estimate. With sufficient numbers of exposed subjects and outcomes, either conventional multivariable, EPS, or full cohort DRS may be used to adjust for confounding to compare the effects of multiple drug exposures. However, our data also suggest that unexposed cohort DRS may be problematic when background risks differ between referent and exposed groups. Further empirical and simulation studies will help to clarify the generalizability of our findings.

  2. Exploration of freely available web-interfaces for comparative homology modelling of microbial proteins

    PubMed Central

    Nema, Vijay; Pal, Sudhir Kumar

    2013-01-01

Aim: This study was conducted to find the best-suited freely available software for modelling of proteins by taking a few sample proteins. The proteins used ranged from small to large in size, with available crystal structures for the purpose of benchmarking. Key players like Phyre2, Swiss-Model, CPHmodels-3.0, Homer, (PS)2, (PS)2-V2, and Modweb were used for the comparison and model generation. Results: The benchmarking process was done for four proteins, Icl, InhA, and KatG of Mycobacterium tuberculosis and RpoB of Thermus thermophilus, to identify the most suitable software. Parameters compared during the analysis gave relatively better values for Phyre2 and Swiss-Model. Conclusion: This comparative study showed that Phyre2 and Swiss-Model produce good models of small and large proteins as compared to the other screened software. The other software was also good but often failed to provide full-length, properly folded structures. PMID:24023424

  3. Palm oil price forecasting model: An autoregressive distributed lag (ARDL) approach

    NASA Astrophysics Data System (ADS)

    Hamid, Mohd Fahmi Abdul; Shabri, Ani

    2017-05-01

Palm oil prices have fluctuated without any clear trend or cyclical pattern over the last few decades. The instability of food commodity prices causes them to change rapidly over time. This paper develops an Autoregressive Distributed Lag (ARDL) model for modeling and forecasting the price of palm oil. In order to use ARDL as a forecasting model, this paper modifies the data structure so that only lagged explanatory variables are used to explain the variation in palm oil price. We then compare the performance of this ARDL model with a benchmark model, ARIMA, in terms of forecasting accuracy. This paper also utilizes the ARDL bound testing approach to co-integration to examine the short-run and long-run relationships between palm oil price and its determinants: production, stock, the price of soybean (a substitute for palm oil), and the price of crude oil. The comparison suggests that the ARDL model has better forecasting accuracy than ARIMA.
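As a hedged illustration of the modified data structure described above (the notation is generic, not taken from the paper), an ARDL(p, q) forecasting specification that keeps only lagged explanatory variables can be written as:

```latex
% y_t: palm oil price; x_{t-j}: lagged determinants (production, stock,
% soybean price, crude oil price); \varepsilon_t: error term.
% Because j starts at 1, every right-hand-side term is known at forecast time.
y_t = \alpha + \sum_{i=1}^{p} \phi_i \, y_{t-i}
            + \sum_{j=1}^{q} \beta_j \, x_{t-j} + \varepsilon_t
```

Excluding contemporaneous regressors (j = 0) is what makes the specification usable for out-of-sample forecasting, since no unobserved current-period values are needed.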

  4. Comparing the Discrete and Continuous Logistic Models

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.

    2008-01-01

    The solutions of the discrete logistic growth model based on a difference equation and the continuous logistic growth model based on a differential equation are compared and contrasted. The investigation is conducted using a dynamic interactive spreadsheet. (Contains 5 figures.)
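The contrast described above can be reproduced in a few lines. This is a hedged sketch (the article itself uses a dynamic spreadsheet, and the function names here are illustrative): the discrete model iterates a difference equation, while the continuous model has the closed-form solution x(t) = x0 / (x0 + (1 - x0) e^(-rt)).

```python
import math

def discrete_logistic(x0, r, steps):
    """Iterate the difference equation x_{n+1} = x_n + r*x_n*(1 - x_n)."""
    x = x0
    traj = [x]
    for _ in range(steps):
        x = x + r * x * (1.0 - x)
        traj.append(x)
    return traj

def continuous_logistic(x0, r, t):
    """Closed-form solution of dx/dt = r*x*(1 - x), normalized capacity 1."""
    return x0 / (x0 + (1.0 - x0) * math.exp(-r * t))

r, x0 = 0.5, 0.1
disc = discrete_logistic(x0, r, 20)
cont = [continuous_logistic(x0, r, n) for n in range(21)]
# Largest pointwise discrepancy between the two trajectories:
gap = max(abs(d - c) for d, c in zip(disc, cont))
```

For small growth rates the two trajectories nearly coincide; as r grows, the discrete model can overshoot and oscillate around the carrying capacity while the continuous solution remains monotone, which is the central contrast the article explores.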

  5. Research, Training, and Practice: The Normative Model and Beyond.

    ERIC Educational Resources Information Center

    Evertson, Carolyn M.

    Four specific purposes were addressed in this study: (1) to identify models of classroom management and instructional management used by effective and less effective teachers; (2) to compare and contrast these models; (3) to compare and contrast a normative model of classroom management used in management training workshops with the models…

  6. Self-Pressurization and Spray Cooling Simulations of the Multipurpose Hydrogen Test Bed (MHTB) Ground-Based Experiment

    NASA Technical Reports Server (NTRS)

    Kartuzova, O.; Kassemi, M.; Agui, J.; Moder, J.

    2014-01-01

This paper presents a CFD (computational fluid dynamics) model for simulating the self-pressurization of a large scale liquid hydrogen storage tank. In this model, the kinetics-based Schrage equation is used to account for the evaporative and condensing interfacial mass flows. Laminar and turbulent approaches to modeling natural convection in the tank and heat and mass transfer at the interface are compared. The flow, temperature, and interfacial mass fluxes predicted by these two approaches during tank self-pressurization are compared against each other. The ullage pressure and vapor temperature evolutions are also compared against experimental data obtained from the MHTB (Multipurpose Hydrogen Test Bed) self-pressurization experiment. A CFD model for cooling cryogenic storage tanks by spraying cold liquid in the ullage is also presented. The Euler-Lagrange approach is utilized for tracking the spray droplets and for modeling interaction between the droplets and the continuous phase (ullage). The spray model is coupled with the VOF (volume of fluid) model by performing particle tracking in the ullage, removing particles from the ullage when they reach the interface, and then adding their contributions to the liquid. Droplet-ullage heat and mass transfer are modeled. The flow, temperature, and interfacial mass flux predicted by the model are presented. The ullage pressure is compared with experimental data obtained from the MHTB spray bar mixing experiment. The results of the models with only droplet/ullage heat transfer and with heat and mass transfer between the droplets and ullage are compared.
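For context, the kinetics-based Schrage relation mentioned above is commonly written as follows (this is the textbook form with standard symbols, not necessarily the exact variant used in the paper):

```latex
% Net interfacial mass flux (evaporation positive):
% \sigma: accommodation coefficient, M: molar mass, R: universal gas constant,
% P_v, T_v: ullage vapor pressure and temperature,
% P_{sat}(T_i), T_i: saturation pressure and temperature at the interface.
\dot{m}'' = \frac{2\sigma}{2-\sigma}
            \sqrt{\frac{M}{2\pi R}}
            \left( \frac{P_v}{\sqrt{T_v}} - \frac{P_{sat}(T_i)}{\sqrt{T_i}} \right)
```

The sign of the bracketed term determines whether the interface experiences net condensation or net evaporation, which is what drives the self-pressurization behavior the model simulates.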

  7. Comparison of the CEAS and Williams-type barley yield models for North Dakota and Minnesota

    NASA Technical Reports Server (NTRS)

    Leduc, S. (Principal Investigator)

    1982-01-01

The CEAS and Williams-type models were compared based on specified selection criteria, which include a ten-year bootstrap test (1970-1979). On this basis, the models were quite comparable; however, the CEAS model was slightly better overall. The Williams-type model seemed better for the 1974 estimates. Because spring wheat yield was particularly low that year, the Williams-type model should not be excluded from further consideration.

  8. The use of discrete-event simulation modeling to compare handwritten and electronic prescribing systems.

    PubMed

    Ghany, Ahmad; Vassanji, Karim; Kuziemsky, Craig; Keshavjee, Karim

    2013-01-01

    Electronic prescribing (e-prescribing) is expected to bring many benefits to Canadian healthcare, such as a reduction in errors and adverse drug reactions. As there currently is no functioning e-prescribing system in Canada that is completely electronic, we are unable to evaluate the performance of a live system. An alternative approach is to use simulation modeling for evaluation. We developed two discrete-event simulation models, one of the current handwritten prescribing system and one of a proposed e-prescribing system, to compare the performance of these two systems. We were able to compare the number of processes in each model, workflow efficiency, and the distribution of patients or prescriptions. Although we were able to compare these models to each other, using discrete-event simulation software was challenging. We were limited in the number of variables we could measure. We discovered non-linear processes and feedback loops in both models that could not be adequately represented using discrete-event simulation software. Finally, interactions between entities in both models could not be modeled using this type of software. We have come to the conclusion that a more appropriate approach to modeling both the handwritten and electronic prescribing systems would be to use a complex adaptive systems approach using agent-based modeling or systems-based modeling.

  9. Which is the better forecasting model? A comparison between HAR-RV and multifractality volatility

    NASA Astrophysics Data System (ADS)

    Ma, Feng; Wei, Yu; Huang, Dengshi; Chen, Yixiang

    2014-07-01

In this paper, taking the 5-min high-frequency data of the Shanghai Composite Index as an example, we compare the forecasting performance of HAR-RV with Multifractal volatility, Realized Volatility, Realized Bipower Variation, and their corresponding short-memory models, using a rolling-window forecasting method and the Model Confidence Set (MCS) test, which has been shown to be superior to the SPA test. The empirical results show that, for six loss functions, HAR-RV outperforms the other models. Moreover, to make the conclusions more precise and robust, we use the MCS test to compare the performance of the logarithmic forms of these models, and find that HAR-log(RV) performs better in predicting future volatility. Furthermore, by comparing HAR-RV and HAR-log(RV), we conclude that, in terms of forecasting performance, the HAR-log(RV) model is the best of the models discussed in this paper.
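A minimal sketch of the HAR-RV idea underlying the comparison above (pure Python, illustrative names, not the authors' code): next-period realized volatility is regressed on its daily value and its weekly and monthly averages, fitted here by ordinary least squares on a toy series.

```python
import random

def har_design(rv):
    """Build HAR-RV rows [1, RV_daily, RV_weekly, RV_monthly] and targets RV_{t+1}."""
    rows, targets = [], []
    for t in range(22, len(rv) - 1):
        rv_d = rv[t]
        rv_w = sum(rv[t - 4:t + 1]) / 5.0    # past 5 observations
        rv_m = sum(rv[t - 21:t + 1]) / 22.0  # past 22 observations
        rows.append([1.0, rv_d, rv_w, rv_m])
        targets.append(rv[t + 1])
    return rows, targets

def ols(X, y):
    """Solve the normal equations (X'X)b = X'y by Gaussian elimination."""
    k = len(X[0])
    A = [[sum(X[i][p] * X[i][q] for i in range(len(X))) for q in range(k)]
         for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(len(X))) for p in range(k)]
    for col in range(k):                      # forward elimination with pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k                          # back substitution
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

random.seed(0)
rv = [0.01]
for _ in range(300):  # toy persistent volatility series, not market data
    rv.append(max(1e-6, 0.6 * rv[-1] + 0.004 + random.gauss(0.0, 0.002)))
X, y = har_design(rv)
beta = ols(X, y)  # [intercept, b_daily, b_weekly, b_monthly]
```

The daily/weekly/monthly structure is what lets HAR capture the long-memory-like persistence of realized volatility with a simple linear regression.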

  10. Optimizing Tsunami Forecast Model Accuracy

    NASA Astrophysics Data System (ADS)

    Whitmore, P.; Nyland, D. L.; Huang, P. Y.

    2015-12-01

    Recent tsunamis provide a means to determine the accuracy that can be expected of real-time tsunami forecast models. Forecast accuracy using two different tsunami forecast models are compared for seven events since 2006 based on both real-time application and optimized, after-the-fact "forecasts". Lessons learned by comparing the forecast accuracy determined during an event to modified applications of the models after-the-fact provide improved methods for real-time forecasting for future events. Variables such as source definition, data assimilation, and model scaling factors are examined to optimize forecast accuracy. Forecast accuracy is also compared for direct forward modeling based on earthquake source parameters versus accuracy obtained by assimilating sea level data into the forecast model. Results show that including assimilated sea level data into the models increases accuracy by approximately 15% for the events examined.

  11. Perception of differences in naturalistic dynamic scenes, and a V1-based model.

    PubMed

    To, Michelle P S; Gilchrist, Iain D; Tolhurst, David J

    2015-01-16

We investigate whether a computational model of V1 can predict how observers rate perceptual differences between paired movie clips of natural scenes. Observers viewed 198 pairs of movie clips, rating how different the two clips appeared to them on a magnitude scale. Sixty-six of the movie pairs were naturalistic and those remaining were low-pass or high-pass spatially filtered versions of those originals. We examined three ways of comparing a movie pair. The Spatial Model compared corresponding frames between each movie pairwise, combining those differences using Minkowski summation. The Temporal Model compared successive frames within each movie, summed those differences for each movie, and then compared the overall differences between the paired movies. The Ordered-Temporal Model combined elements from both models, and yielded the single strongest predictions of observers' ratings. We modeled naturalistic sustained and transient impulse functions, and also compared frames directly with no temporal filtering. Overall, modeling naturalistic temporal filtering improved the models' performance; in particular, the predictions of the ratings for low-pass spatially filtered movies were much improved by employing a transient impulse function. The correlations between model predictions and observers' ratings rose from 0.507 without temporal filtering to 0.759 (p = 0.01%) when realistic impulses were included. The sustained impulse function and the Spatial Model carried more weight in ratings for normal and high-pass movies, whereas the transient impulse function with the Ordered-Temporal Model was most important for spatially low-pass movies. This is consistent with models in which high spatial frequency channels with sustained responses primarily code for spatial details in movies, while low spatial frequency channels with transient responses code for dynamic events. © 2015 ARVO.
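A hedged sketch of the Spatial Model's pooling step (an assumed form for illustration, not the authors' code): per-pixel frame differences are pooled with Minkowski summation, D = (Σ_i |d_i|^m)^(1/m), first within each frame pair and then across frames.

```python
def minkowski_pool(diffs, m=2.0):
    """Combine a set of differences into one value via Minkowski summation.
    m = 1 gives a plain sum; large m approaches the maximum difference."""
    return sum(abs(d) ** m for d in diffs) ** (1.0 / m)

def spatial_model(clip_a, clip_b, m=2.0):
    """Compare corresponding frames of two clips (each frame a flat list of
    pixel values), then pool the per-frame differences across the clip."""
    per_frame = [
        minkowski_pool((pa - pb for pa, pb in zip(fa, fb)), m)
        for fa, fb in zip(clip_a, clip_b)
    ]
    return minkowski_pool(per_frame, m)
```

The exponent m controls how much a few large local differences dominate the overall rating prediction, which is why Minkowski summation is a common pooling choice in vision models.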

  12. Realism of Indian Summer Monsoon Simulation in a Quarter Degree Global Climate Model

    NASA Astrophysics Data System (ADS)

    Salunke, P.; Mishra, S. K.; Sahany, S.; Gupta, K.

    2017-12-01

    This study assesses the fidelity of Indian Summer Monsoon (ISM) simulations using a global model at an ultra-high horizontal resolution (UHR) of 0.25°. The model used was the atmospheric component of the Community Earth System Model version 1.2.0 (CESM 1.2.0) developed at the National Center for Atmospheric Research (NCAR). Precipitation and temperature over the Indian region were analyzed for a wide range of space and time scales to evaluate the fidelity of the model under UHR, with special emphasis on the ISM simulations during the period of June-through-September (JJAS). Comparing the UHR simulations with observed data from the India Meteorological Department (IMD) over the Indian land, it was found that 0.25° resolution significantly improved spatial rainfall patterns over many regions, including the Western Ghats and the South-Eastern peninsula as compared to the standard model resolution. Convective and large-scale rainfall components were analyzed using the European Centre for Medium Range Weather Forecast (ECMWF) Re-Analysis (ERA)-Interim (ERA-I) data and it was found that at 0.25° resolution, there was an overall increase in the large-scale component and an associated decrease in the convective component of rainfall as compared to the standard model resolution. Analysis of the diurnal cycle of rainfall suggests a significant improvement in the phase characteristics simulated by the UHR model as compared to the standard model resolution. Analysis of the annual cycle of rainfall, however, failed to show any significant improvement in the UHR model as compared to the standard version. Surface temperature analysis showed small improvements in the UHR model simulations as compared to the standard version. Thus, one may conclude that there are some significant improvements in the ISM simulations using a 0.25° global model, although there is still plenty of scope for further improvement in certain aspects of the annual cycle of rainfall.

  13. The sensitivity of ecosystem service models to choices of input data and spatial resolution

    Treesearch

    Kenneth J. Bagstad; Erika Cohen; Zachary H. Ancona; Steven G. McNulty; Ge Sun

    2018-01-01

    Although ecosystem service (ES) modeling has progressed rapidly in the last 10–15 years, comparative studies on data and model selection effects have become more common only recently. Such studies have drawn mixed conclusions about whether different data and model choices yield divergent results. In this study, we compared the results of different models to address...

  14. Comparative Study Of Four Models Of Turbulence

    NASA Technical Reports Server (NTRS)

    Menter, Florian R.

    1996-01-01

    This report presents a comparative study of four popular eddy-viscosity models of turbulence. Computations are reported for three different adverse-pressure-gradient flowfields, and a detailed comparison of numerical results and experimental data is given. The models tested are the Baldwin-Lomax, Johnson-King, Baldwin-Barth, and Wilcox models.

  15. Source term model evaluations for the low-level waste facility performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yim, M.S.; Su, S.I.

    1995-12-31

    The estimation of release of radionuclides from various waste forms to the bottom boundary of the waste disposal facility (source term) is one of the most important aspects of LLW facility performance assessment. In this work, several currently used source term models are comparatively evaluated for the release of carbon-14 based on a test case problem. The models compared include PRESTO-EPA-CPG, IMPACTS, DUST and NEFTRAN-II. Major differences in assumptions and approaches between the models are described and key parameters are identified through sensitivity analysis. The source term results from different models are compared and other concerns or suggestions are discussed.

  16. Comparative Analysis of Models of the Earth's Gravity: 3. Accuracy of Predicting EAS Motion

    NASA Astrophysics Data System (ADS)

    Kuznetsov, E. D.; Berland, V. E.; Wiebe, Yu. S.; Glamazda, D. V.; Kajzer, G. T.; Kolesnikov, V. I.; Khremli, G. P.

    2002-05-01

    This paper continues a comparative analysis of modern satellite models of the Earth's gravity which we started in [6, 7]. In the cited works, the uniform norms of spherical functions were compared with their gradients for individual harmonics of the geopotential expansion [6] and the potential differences were compared with the gravitational accelerations obtained in various models of the Earth's gravity [7]. In practice, it is important to know how consistently the EAS motion is represented by various geopotential models. Unless otherwise stated, a model version in which the equations of motion are written using the classical Encke scheme and integrated together with the variation equations by the implicit one-step Everhart's algorithm [1] was used. When calculating coordinates and velocities on the integration step (at given instants of time), the approximate Everhart formula was employed.

  17. A comparison of methods of fitting several models to nutritional response data.

    PubMed

    Vedenov, D; Pesti, G M

    2008-02-01

    A variety of models have been proposed to fit nutritional input-output response data. The models are typically nonlinear; therefore, fitting them usually requires sophisticated statistical software and training in its use. An alternative tool for fitting nutritional response models was developed using the widely available and easier-to-use Microsoft Excel software. The tool, implemented as an Excel workbook (NRM.xls), allows simultaneous fitting and side-by-side comparison of several popular models. This study compared the results produced by the tool we developed with those from PROC NLIN of SAS. The models compared were the broken line (with ascending linear and quadratic segments), saturation kinetics, 4-parameter logistic, sigmoidal, and exponential models. The NRM.xls workbook provided results nearly identical to those of PROC NLIN. Furthermore, the workbook successfully fit several models that failed to converge in PROC NLIN. Two data sets were used as examples to compare the fits of the different models. The results suggest that no particular nonlinear model is necessarily best for all nutritional response data.
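The kind of side-by-side comparison described above can be sketched with SciPy's nonlinear least-squares fitter. This is only an illustration of the approach, not the NRM.xls workbook or PROC NLIN: the broken-line and exponential forms are standard textbook parameterizations, and the data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

# Broken-line (ascending linear segment + plateau) response model:
# y = plateau + slope * min(0, x - breakpoint)
def broken_line(x, plateau, slope, brk):
    return plateau + slope * np.minimum(0.0, x - brk)

# Simple exponential (asymptotic) response model: y = a - b * exp(-c * x)
def exponential(x, a, b, c):
    return a - b * np.exp(-c * x)

rng = np.random.default_rng(0)
dose = np.linspace(0.1, 1.2, 24)               # nutrient intake levels
truth = broken_line(dose, 10.0, 8.0, 0.7)      # true (broken-line) response
resp = truth + rng.normal(0, 0.15, dose.size)  # noisy observations

# Fit both models to the same data and compare residual sums of squares
p_bl, _ = curve_fit(broken_line, dose, resp, p0=[9.0, 5.0, 0.5])
p_ex, _ = curve_fit(exponential, dose, resp, p0=[10.0, 8.0, 3.0])
rss_bl = np.sum((resp - broken_line(dose, *p_bl)) ** 2)
rss_ex = np.sum((resp - exponential(dose, *p_ex)) ** 2)
print(f"broken line RSS = {rss_bl:.3f}, exponential RSS = {rss_ex:.3f}")
```

Since the synthetic data were generated from the broken-line form, that model should recover the breakpoint and fit with the smaller residual sum of squares, which mirrors the paper's point that the best model depends on the data at hand.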

  18. Comparing in Cylinder Pressure Modelling of a DI Diesel Engine Fuelled on Alternative Fuel Using Two Tabulated Chemistry Approaches

    PubMed Central

    Ngayihi Abbe, Claude Valery; Nzengwa, Robert; Danwe, Raidandi

    2014-01-01

    The present work presents a comparative simulation of a diesel engine fuelled on diesel fuel and on biodiesel fuel. Two models based on tabulated chemistry were implemented for the simulation, and results were compared with experimental data obtained from a single-cylinder diesel engine. The first model is a single-zone model based on the Krieger and Borman combustion model, while the second is a two-zone model based on the Olikara and Borman combustion model. It was shown that both models can predict the engine's in-cylinder pressure, as well as its overall performance, well. The second model showed better accuracy than the first, while the first model was easier to implement and faster to compute. It was found that the first method is better suited for real-time engine control and monitoring, while the second is better suited for engine design and emission prediction. PMID:27379306

  19. A systematic review of the quality of economic models comparing thrombosis inhibitors in patients with acute coronary syndrome undergoing percutaneous coronary intervention.

    PubMed

    Hatz, Maximilian H M; Leidl, Reiner; Yates, Nichola A; Stollenwerk, Björn

    2014-04-01

    Thrombosis inhibitors can be used to treat acute coronary syndromes (ACS). However, there are various alternative treatment strategies, of which some have been compared using health economic decision models. Our objectives were to assess the quality of health economic decision models comparing thrombosis inhibitors in patients with ACS undergoing percutaneous coronary intervention, and to identify areas for quality improvement. The literature databases MEDLINE, EMBASE, EconLit, the National Health Service Economic Evaluation Database (NHS EED), the Database of Abstracts of Reviews of Effects (DARE) and Health Technology Assessment (HTA) were searched. A review of the quality of health economic decision models was conducted by two independent reviewers using the Philips checklist. Twenty-one relevant studies were identified. Differences were apparent regarding the model type (six decision trees, four Markov models, eight combinations, three undefined models), the model structure (types of events, Markov states) and the incorporation of data (efficacy, cost and utility data). Critical issues were the absence of particular events (e.g. thrombocytopenia, stroke) and the questionable usage of utility values within some studies. As we restricted our search to health economic decision models comparing thrombosis inhibitors, interesting aspects related to the quality of studies in adjacent medical areas that compared stents or procedures could have been missed. This review identified areas where recommendations are indicated regarding the quality of future ACS decision models. For example, all critical events and relevant treatment options should be included. Models also need to allow for changing event probabilities, to correctly reflect ACS, and to incorporate appropriate age-specific utility values and decrements when conducting cost-utility analyses.

  20. Wellness Model of Supervision: A Comparative Analysis

    ERIC Educational Resources Information Center

    Lenz, A. Stephen; Sangganjanavanich, Varunee Faii; Balkin, Richard S.; Oliver, Marvarene; Smith, Robert L.

    2012-01-01

    This quasi-experimental study compared the effectiveness of the Wellness Model of Supervision (WELMS; Lenz & Smith, 2010) with alternative supervision models for developing wellness constructs, total personal wellness, and helping skills among counselors-in-training. Participants were 32 master's-level counseling students completing their…

  1. Comparing Regression Coefficients between Nested Linear Models for Clustered Data with Generalized Estimating Equations

    ERIC Educational Resources Information Center

    Yan, Jun; Aseltine, Robert H., Jr.; Harel, Ofer

    2013-01-01

    Comparing regression coefficients between models when one model is nested within another is of great practical interest when two explanations of a given phenomenon are specified as linear models. The statistical problem is whether the coefficients associated with a given set of covariates change significantly when other covariates are added into…

  2. Ultra-low-dose computed tomographic angiography with model-based iterative reconstruction compared with standard-dose imaging after endovascular aneurysm repair: a prospective pilot study.

    PubMed

    Naidu, Sailen G; Kriegshauser, J Scott; Paden, Robert G; He, Miao; Wu, Qing; Hara, Amy K

    2014-12-01

    An ultra-low-dose radiation protocol reconstructed with model-based iterative reconstruction was compared with our standard-dose protocol. This prospective study evaluated 20 men undergoing surveillance-enhanced computed tomography after endovascular aneurysm repair. All patients underwent standard-dose and ultra-low-dose venous phase imaging; images were compared after reconstruction with filtered back projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction. Objective measures of aortic contrast attenuation and image noise were averaged. Images were subjectively assessed (1 = worst, 5 = best) for diagnostic confidence, image noise, and vessel sharpness. Aneurysm sac diameter and endoleak detection were compared. Quantitative image noise was 26% less with ultra-low-dose model-based iterative reconstruction than with standard-dose adaptive statistical iterative reconstruction and 58% less than with ultra-low-dose adaptive statistical iterative reconstruction. Average subjective noise scores were not different between ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction (3.8 vs. 4.0, P = .25). Subjective scores for diagnostic confidence were better with standard-dose adaptive statistical iterative reconstruction than with ultra-low-dose model-based iterative reconstruction (4.4 vs. 4.0, P = .002). Vessel sharpness was decreased with ultra-low-dose model-based iterative reconstruction compared with standard-dose adaptive statistical iterative reconstruction (3.3 vs. 4.1, P < .0001). Ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction aneurysm sac diameters were not significantly different (4.9 vs. 4.9 cm); concordance for the presence of endoleak was 100% (P < .001). Compared with a standard-dose technique, an ultra-low-dose model-based iterative reconstruction protocol provides comparable image quality and diagnostic assessment at a 73% lower radiation dose.

  3. Runoff forecasting using a Takagi-Sugeno neuro-fuzzy model with online learning

    NASA Astrophysics Data System (ADS)

    Talei, Amin; Chua, Lloyd Hock Chye; Quek, Chai; Jansson, Per-Erik

    2013-04-01

    A study using a local-learning Neuro-Fuzzy System (NFS) was undertaken for a rainfall-runoff modeling application. The local-learning model was first tested on three different catchments: an outdoor experimental catchment measuring 25 m2 (Catchment 1), a small urban catchment 5.6 km2 in size (Catchment 2), and a large rural watershed with an area of 241.3 km2 (Catchment 3). The results obtained from the local-learning model were comparable to or better than those obtained from physically based models, i.e., the Kinematic Wave Model (KWM), the Storm Water Management Model (SWMM), and the Hydrologiska Byråns Vattenbalansavdelning (HBV) model. The local-learning algorithm also required a shorter training time than a global-learning NFS model. The local-learning model was next tested in real-time mode, where the model was continuously adapted when presented with current information in real time. The real-time implementation of the local-learning model gave better results, without the need for retraining, than a batch NFS model, which had to be retrained periodically in order to achieve similar results.
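The online-versus-batch distinction drawn above can be illustrated with a much simpler estimator than a neuro-fuzzy system. The sketch below uses recursive least squares on a synthetic linear problem (not the paper's catchment data or NFS model) to show how a per-sample online update reaches the same answer as a batch refit, without ever revisiting old data:

```python
import numpy as np

# Recursive least squares (RLS): update the parameter estimate one sample
# at a time, as a continuously adapting online model would, then compare
# the final estimate with a batch least-squares fit on the same data.
rng = np.random.default_rng(1)
n, d = 200, 3
X = rng.normal(size=(n, d))
w_true = np.array([0.5, -1.2, 2.0])
y = X @ w_true + rng.normal(0, 0.1, n)

P = np.eye(d) * 1e6       # large initial covariance (uninformative prior)
w = np.zeros(d)           # online estimate, updated per sample
for x_t, y_t in zip(X, y):
    k = P @ x_t / (1.0 + x_t @ P @ x_t)   # Kalman-style gain
    w = w + k * (y_t - x_t @ w)           # correct using the prediction error
    P = P - np.outer(k, x_t @ P)          # shrink the covariance

w_batch, *_ = np.linalg.lstsq(X, y, rcond=None)
print("online:", np.round(w, 3), "batch:", np.round(w_batch, 3))
```

The two estimates agree to numerical precision, which is the property the abstract exploits: an online learner needs no periodic retraining because each new observation is folded in as it arrives.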

  4. Cost Modeling for Space Telescope

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip

    2011-01-01

    Parametric cost models are an important tool for planning missions, comparing concepts, and justifying technology investments. This paper presents ongoing efforts to develop single-variable and multi-variable cost models for the space telescope optical telescope assembly (OTA). These models are based on data collected from historical space telescope missions. Standard statistical methods are used to derive cost estimating relationships (CERs) for OTA cost versus aperture diameter and mass. The results are compared with previously published models.
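A single-variable CER of the kind described is often a power law, cost = a * D^b, fitted by ordinary least squares in log-log space. The sketch below shows the technique only; the diameter and cost values are made up for illustration, not the historical mission data used in the paper:

```python
import numpy as np

# Hypothetical single-variable CER: fit cost = a * diameter^b by linear
# regression on log-transformed data (illustrative numbers, arbitrary units).
diam = np.array([0.4, 0.85, 1.0, 2.4, 3.0, 6.5])      # aperture diameter, m
cost = np.array([30., 90., 120., 600., 900., 3500.])  # cost, arbitrary units

b, log_a = np.polyfit(np.log(diam), np.log(cost), 1)  # slope b, intercept ln a
a = np.exp(log_a)
print(f"CER: cost ~ {a:.1f} * D^{b:.2f}")

def cer(d):
    """Predict cost (arbitrary units) for aperture diameter d in meters."""
    return a * d ** b
```

The fitted exponent b is the diameter sensitivity of cost, which is exactly the quantity such studies compare against previously published models.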

  5. A comparative study of theoretical graph models for characterizing structural networks of human brain.

    PubMed

    Li, Xiaojin; Hu, Xintao; Jin, Changfeng; Han, Junwei; Liu, Tianming; Guo, Lei; Hao, Wei; Li, Lingjiang

    2013-01-01

    Previous studies have investigated both structural and functional brain networks via graph-theoretical methods. However, there is an important issue that has not been adequately discussed: what is the optimal theoretical graph model for describing the structural networks of the human brain? In this paper, we perform a comparative study to address this problem. First, large-scale cortical regions of interest (ROIs) are localized by a recently developed and validated brain reference system named Dense Individualized Common Connectivity-based Cortical Landmarks (DICCCOL), to address the limitations in the identification of brain network ROIs in previous studies. Then, we construct structural brain networks based on diffusion tensor imaging (DTI) data. Afterwards, the global and local graph properties of the constructed structural brain networks are measured using state-of-the-art graph analysis algorithms and tools and are further compared with those of seven popular theoretical graph models. In addition, we compare the topological properties between the two graph models, namely the stickiness-index-based model (STICKY) and the scale-free gene duplication model (SF-GD), that have the highest similarity with the real structural brain networks in terms of global and local graph properties. Our experimental results suggest that, among the seven theoretical graph models compared in this study, the STICKY and SF-GD models have better performance in characterizing the structural human brain network.
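The property-matching style of comparison used above can be demonstrated on toy graphs. The sketch below computes one local graph property, the mean clustering coefficient, for a ring lattice and for an Erdős–Rényi random graph of the same size and density; ranking theoretical models against an empirical (here, DTI-derived) network follows the same pattern, just with more properties and real data:

```python
import numpy as np

def clustering_coefficient(A):
    """Mean local clustering coefficient of an undirected adjacency matrix."""
    n = A.shape[0]
    cc = []
    for i in range(n):
        nbrs = np.flatnonzero(A[i])
        k = nbrs.size
        if k < 2:
            cc.append(0.0)
            continue
        links = A[np.ix_(nbrs, nbrs)].sum() / 2   # edges among i's neighbours
        cc.append(2.0 * links / (k * (k - 1)))
    return float(np.mean(cc))

n, k = 100, 6
lattice = np.zeros((n, n), dtype=int)             # ring lattice, k neighbours each
for i in range(n):
    for d in range(1, k // 2 + 1):
        lattice[i, (i + d) % n] = lattice[(i + d) % n, i] = 1

rng = np.random.default_rng(5)
p = k / (n - 1)                                   # match the expected density
upper = np.triu(rng.random((n, n)) < p, 1)
random_g = (upper | upper.T).astype(int)

cc_lat = clustering_coefficient(lattice)
cc_rand = clustering_coefficient(random_g)
print(f"lattice CC = {cc_lat:.3f}, random CC = {cc_rand:.3f}")
```

The lattice is far more clustered than the density-matched random graph, so a clustering-based criterion would rank the lattice as the better model of a highly clustered empirical network. This is the sense in which STICKY and SF-GD "win" in the study: their measured properties sit closest to the brain networks'.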

  6. Cardiovascular risk prediction in HIV-infected patients: comparing the Framingham, atherosclerotic cardiovascular disease risk score (ASCVD), Systematic Coronary Risk Evaluation for the Netherlands (SCORE-NL) and Data Collection on Adverse Events of Anti-HIV Drugs (D:A:D) risk prediction models.

    PubMed

    Krikke, M; Hoogeveen, R C; Hoepelman, A I M; Visseren, F L J; Arends, J E

    2016-04-01

    The aim of the study was to compare the predictions of five popular cardiovascular disease (CVD) risk prediction models, namely the Data Collection on Adverse Events of Anti-HIV Drugs (D:A:D) model, the Framingham Heart Study (FHS) coronary heart disease (FHS-CHD) and general CVD (FHS-CVD) models, the American Heart Association (AHA) atherosclerotic cardiovascular disease risk score (ASCVD) model and the Systematic Coronary Risk Evaluation for the Netherlands (SCORE-NL) model. A cross-sectional design was used to compare the cumulative CVD risk predictions of the models. Furthermore, the predictions of the general CVD models were compared with those of the HIV-specific D:A:D model using three categories (< 10%, 10-20% and > 20%) to categorize the risk and to determine the degree to which patients were categorized similarly or in a higher/lower category. A total of 997 HIV-infected patients were included in the study: 81% were male and they had a median age of 46 [interquartile range (IQR) 40-52] years, a known duration of HIV infection of 6.8 (IQR 3.7-10.9) years, and a median time on ART of 6.4 (IQR 3.0-11.5) years. The D:A:D, ASCVD and SCORE-NL models gave a lower cumulative CVD risk, compared with that of the FHS-CVD and FHS-CHD models. Comparing the general CVD models with the D:A:D model, the FHS-CVD and FHS-CHD models only classified 65% and 79% of patients, respectively, in the same category as did the D:A:D model. However, for the ASCVD and SCORE-NL models, this percentage was 89% and 87%, respectively. Furthermore, FHS-CVD and FHS-CHD attributed a higher CVD risk to 33% and 16% of patients, respectively, while this percentage was < 6% for ASCVD and SCORE-NL. When using FHS-CVD and FHS-CHD, a higher overall CVD risk was attributed to the HIV-infected patients than when using the D:A:D, ASCVD and SCORE-NL models. This could have consequences regarding overtreatment, drug-related adverse events and drug-drug interactions. © 2015 British HIV Association.

  7. A comparative study of count models: application to pedestrian-vehicle crashes along Malaysia federal roads.

    PubMed

    Hosseinpour, Mehdi; Pour, Mehdi Hossein; Prasetijo, Joewono; Yahaya, Ahmad Shukri; Ghadiri, Seyed Mohammad Reza

    2013-01-01

    The objective of this study was to examine the effects of various roadway characteristics on the incidence of pedestrian-vehicle crashes by developing a set of crash prediction models on 543 km of Malaysia federal roads over a 4-year time span between 2007 and 2010. Four count models, including the Poisson, negative binomial (NB), hurdle Poisson (HP), and hurdle negative binomial (HNB) models, were developed and compared to model the number of pedestrian crashes. The results indicated the presence of overdispersion in the pedestrian crashes (PCs) and showed that it is due to excess zeros rather than variability in the crash data. To handle this issue, the hurdle Poisson model was found to be the best among the considered models in terms of comparative measures. Moreover, the variables average daily traffic, heavy vehicle traffic, speed limit, land use, and area type were significantly associated with PCs.
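The core of such a comparison is fitting competing count models by maximum likelihood and ranking them with a comparative measure such as AIC. The sketch below does this for the two simplest candidates, Poisson and negative binomial, on simulated overdispersed counts (intercept-only models, not the Malaysian crash data or the full hurdle specifications):

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

# Overdispersion check for crash-style count data: fit intercept-only
# Poisson and negative binomial models by maximum likelihood, compare AIC.
rng = np.random.default_rng(2)
counts = rng.negative_binomial(n=1.5, p=0.3, size=400)  # overdispersed counts

m = counts.mean()
print(f"mean = {m:.2f}, variance = {counts.var():.2f}")  # variance >> mean

# Poisson MLE of the rate is just the sample mean; 1 free parameter.
nll_pois = -stats.poisson.logpmf(counts, m).sum()
aic_pois = 2 * 1 + 2 * nll_pois

# Negative binomial: optimize size r and mean mu, with p = r / (r + mu);
# 2 free parameters, optimized on the log scale to stay positive.
def nb_nll(theta):
    r, mu = np.exp(theta)
    return -stats.nbinom.logpmf(counts, r, r / (r + mu)).sum()

res = minimize(nb_nll, x0=np.log([1.0, m]), method="Nelder-Mead")
aic_nb = 2 * 2 + 2 * res.fun
print(f"AIC Poisson = {aic_pois:.1f}, AIC NB = {aic_nb:.1f}")
```

Because the variance far exceeds the mean, the negative binomial achieves a much lower AIC despite its extra parameter; hurdle variants extend the same likelihood-comparison logic by modeling the zeros separately.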

  8. Accuracy of digital models generated by conventional impression/plaster-model methods and intraoral scanning.

    PubMed

    Tomita, Yuki; Uechi, Jun; Konno, Masahiro; Sasamoto, Saera; Iijima, Masahiro; Mizoguchi, Itaru

    2018-04-17

    We compared the accuracy of digital models generated by desktop scanning of conventional impression/plaster models versus intraoral scanning. Eight ceramic spheres were attached to the buccal molar regions of dental epoxy models, and reference linear-distance measurements were determined using a contact-type coordinate measuring instrument. Alginate (AI group) and silicone (SI group) impressions were taken and converted into cast models using dental stone; the models were scanned using a desktop scanner. As an alternative, intraoral scans were taken using an intraoral scanner, and digital models were generated from these scans (IOS group). Twelve linear-distance measurement combinations were calculated between different sphere centers for all digital models. There were no significant differences among the three groups using the total of six linear-distance measurements. When limited to five linear-distance measurements, the IOS group showed significantly higher accuracy compared to the AI and SI groups. Intraoral scans may be more accurate than scans of conventional impression/plaster models.

  9. A proposed model for economic evaluations of major depressive disorder.

    PubMed

    Haji Ali Afzali, Hossein; Karnon, Jonathan; Gray, Jodi

    2012-08-01

    In countries like the UK and Australia, the comparability of model-based analyses is an essential aspect of reimbursement decisions for new pharmaceuticals, medical services and technologies. Within disease areas, the use of models with alternative structures, types of modelling techniques and/or data sources for common parameters reduces the comparability of evaluations of alternative technologies for the same condition. The aim of this paper is to propose a decision-analytic model to evaluate the long-term costs and benefits of alternative management options in patients with depression. The structure of the proposed model is based on the natural history of depression and includes clinical events that are important from both clinical and economic perspectives. Given its greater flexibility in handling time, discrete event simulation (DES) is an appropriate simulation platform for modelling studies of depression. We argue that the proposed model can be used as a reference model in model-based studies of depression, improving the quality and comparability of such studies.

  10. Policy Research Challenges in Comparing Care Models for Dual-Eligible Beneficiaries.

    PubMed

    Van Cleave, Janet H; Egleston, Brian L; Brosch, Sarah; Wirth, Elizabeth; Lawson, Molly; Sullivan-Marx, Eileen M; Naylor, Mary D

    2017-05-01

    Providing affordable, high-quality care for the 10 million persons who are dual-eligible beneficiaries of Medicare and Medicaid is an ongoing health-care policy challenge in the United States. However, the workforce and the care provided to dual-eligible beneficiaries are understudied. The purpose of this article is to provide a narrative of the challenges and lessons learned from an exploratory study in the use of clinical and administrative data to compare the workforce of two care models that deliver home- and community-based services to dual-eligible beneficiaries. The research challenges that the study team encountered were as follows: (a) comparing different care models, (b) standardizing data across care models, and (c) comparing patterns of health-care utilization. The methods used to meet these challenges included expert opinion to classify data and summative content analysis to compare and count data. Using descriptive statistics, a summary comparison of the two care models suggested that the coordinated care model workforce provided significantly greater hours of care per recipient than the integrated care model workforce. This likely represented the coordinated care model's focus on providing in-home services for one recipient, whereas the integrated care model focused on providing services in a day center with group activities. The lesson learned from this exploratory study is the need for standardized quality measures across home- and community-based services agencies to determine the workforce that best meets the needs of dual-eligible beneficiaries.

  11. Biomechanical evaluation of implant-supported prosthesis with various tilting implant angles and bone types in atrophic maxilla: A finite element study.

    PubMed

    Gümrükçü, Zeynep; Korkmaz, Yavuz Tolga; Korkmaz, Fatih Mehmet

    2017-07-01

    The purpose of this study is to evaluate and compare the bone stress that occurs when using vertical implants with simultaneous sinus augmentation versus the bone stress generated by oblique implants without sinus augmentation in the atrophic maxilla. Six three-dimensional (3D) finite element (FE) models of the atrophic maxilla were generated with SolidWorks software. The maxilla models were varied for two different bone types: Models 2a, 2b and 2c represent maxilla models with D2 bone type, and Models 3a, 3b and 3c represent maxilla models with D3 bone type. Five implants were embedded in each model with different configurations: vertical implant insertion with sinus augmentation (Model 2a/Model 3a), 30° tilted insertion (Model 2b/Model 3b), and 45° tilted insertion (Model 2c/Model 3c). A 150 N load was applied obliquely on the hybrid prosthesis. The maximum von Mises stress values were comparatively evaluated using color scales. The von Mises stress values predicted by the FE models were higher for all D3 bone models in both cortical and cancellous bone. For the vertical implant models, lower stress values were found in cortical bone. Tilting the distal implants by 30° increased the stress in the cortical layer compared to the vertical implant models. Tilting the distal implant by 45° decreased the stress in the cortical bone compared to the 30° models, but higher stress values were detected in the 45° models than in the vertical implant models. Augmentation should be the first treatment option in the atrophic maxilla in terms of biomechanics. Tilted posterior implants can create higher stress values than vertical posterior implants. When planning tilted implants, a 45° tilted implant results in better biomechanical performance in peri-implant bone than a 30° tilted implant due to the decrease in cantilever length. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Simulation skill of APCC set of global climate models for Asian summer monsoon rainfall variability

    NASA Astrophysics Data System (ADS)

    Singh, U. K.; Singh, G. P.; Singh, Vikas

    2015-04-01

    The performance of 11 Asia-Pacific Economic Cooperation Climate Center (APCC) global climate models (both coupled and uncoupled) in simulating the seasonal summer (June-August) monsoon rainfall variability over Asia (especially over India and East Asia) has been evaluated in detail using hind-cast data (3 months in advance) generated from APCC, which provides regional climate information products based on multi-model ensemble dynamical seasonal prediction systems. The skill of each global climate model over Asia was tested separately in detail for a period of 21 years (1983-2003), and the simulated Asian summer monsoon rainfall (ASMR) was verified using various statistical measures for the Indian and East Asian land masses separately. The analysis found a large variation in spatial ASMR simulated by the uncoupled models compared to the coupled models (like the Predictive Ocean Atmosphere Model for Australia, National Centers for Environmental Prediction and Japan Meteorological Agency models). The simulated ASMR in the coupled models was closer to the Climate Prediction Centre Merged Analysis of Precipitation (CMAP) than in the uncoupled models, although the amount of ASMR was underestimated in both. The analysis also found a high spread in simulated ASMR among the ensemble members (suggesting that a model's performance is highly dependent on its initial conditions). The correlation analysis between sea surface temperature (SST) and ASMR shows that the coupled models are more strongly associated with ASMR than the uncoupled models (suggesting that air-sea interaction is well captured in the coupled models). The analysis of rainfall using various statistical measures suggests that the multi-model ensemble (MME) performed better than any individual model, and that analyzing the Indian and East Asian land masses separately is more useful than treating Asian monsoon rainfall as a whole. The results of the various statistical measures (skill of the multi-model ensemble, large spread among the ensemble members of individual models, strong teleconnection with SST, coefficient of variation, inter-annual variability, analysis of Taylor diagrams, etc.) suggest that there is a need to improve the coupled models rather than the uncoupled models for the development of a better dynamical seasonal forecast system.
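Why a multi-model ensemble mean tends to beat its individual members can be shown with a toy skill calculation. The sketch below is purely synthetic (11 fake "models" over 21 fake "years", not the APCC hind-casts): each member shares the observed signal but carries its own error, so averaging cancels much of the error and raises the correlation with observations.

```python
import numpy as np

# Toy multi-model ensemble (MME) skill check on a 21-"year" record.
rng = np.random.default_rng(3)
years, n_models = 21, 11
obs = rng.normal(size=years)                       # "observed" seasonal anomaly
# Each member = observed signal + its own independent error
models = obs + rng.normal(0, 1.2, size=(n_models, years))

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

skill = np.array([corr(m, obs) for m in models])   # per-member correlation
mme_skill = corr(models.mean(axis=0), obs)         # ensemble-mean correlation
print(f"member skill {skill.min():.2f}..{skill.max():.2f}, MME {mme_skill:.2f}")
```

Averaging shrinks the independent error variance by roughly the ensemble size, which is why the MME correlation exceeds the typical member's, echoing the paper's finding that the MME outperforms individual models.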

  13. Verification of a computational cardiovascular system model comparing the hemodynamics of a continuous flow to a synchronous valveless pulsatile flow left ventricular assist device.

    PubMed

    Gohean, Jeffrey R; George, Mitchell J; Pate, Thomas D; Kurusz, Mark; Longoria, Raul G; Smalling, Richard W

    2013-01-01

    The purpose of this investigation is to use a computational model to compare a synchronized valveless pulsatile left ventricular assist device with continuous flow left ventricular assist devices at the same level of device flow, and to verify the model with in vivo porcine data. A dynamic system model of the human cardiovascular system was developed to simulate the support of a healthy or failing native heart from a continuous flow left ventricular assist device or a synchronous pulsatile valveless dual-piston positive displacement pump. These results were compared with measurements made during in vivo porcine experiments. Results from the simulation model and from the in vivo counterpart show that the pulsatile pump provides higher cardiac output, left ventricular unloading, cardiac pulsatility, and aortic valve flow as compared with the continuous flow model at the same level of support. The dynamic system model developed for this investigation can effectively simulate human cardiovascular support by a synchronous pulsatile or continuous flow ventricular assist device.

  14. Verification of a computational cardiovascular system model comparing the hemodynamics of a continuous flow to a synchronous valveless pulsatile flow left ventricular assist device

    PubMed Central

    Gohean, Jeffrey R.; George, Mitchell J.; Pate, Thomas D.; Kurusz, Mark; Longoria, Raul G.; Smalling, Richard W.

    2012-01-01

    The purpose of this investigation is to utilize a computational model to compare a synchronized valveless pulsatile left ventricular assist device to continuous flow left ventricular assist devices at the same level of device flow, and to verify the model with in vivo porcine data. A dynamic system model of the human cardiovascular system was developed to simulate support of a healthy or failing native heart from a continuous flow left ventricular assist device or a synchronous, pulsatile, valveless, dual piston positive displacement pump. These results were compared to measurements made during in vivo porcine experiments. Results from the simulation model and from the in vivo counterpart show that the pulsatile pump provides higher cardiac output, left ventricular unloading, cardiac pulsatility, and aortic valve flow as compared to the continuous flow model at the same level of support. The dynamic system model developed for this investigation can effectively simulate human cardiovascular support by a synchronous pulsatile or continuous flow ventricular assist device. PMID:23438771

  15. Review and standardization of cell phone exposure calculations using the SAM phantom and anatomically correct head models.

    PubMed

    Beard, Brian B; Kainz, Wolfgang

    2004-10-13

    We reviewed articles using computational RF dosimetry to compare the Specific Anthropomorphic Mannequin (SAM) to anatomically correct models of the human head. Published conclusions based on such comparisons have varied widely. We looked for reasons that might cause apparently similar comparisons to produce dissimilar results. We also looked at the information needed to adequately compare the results of computational RF dosimetry studies. We concluded studies were not comparable because of differences in definitions, models, and methodology. Therefore we propose a protocol, developed by an IEEE standards group, as an initial step in alleviating this problem. The protocol calls for a benchmark validation study comparing the SAM phantom to two anatomically correct models of the human head. It also establishes common definitions and reporting requirements that will increase the comparability of all computational RF dosimetry studies of the human head.

  16. Review and standardization of cell phone exposure calculations using the SAM phantom and anatomically correct head models

    PubMed Central

    Beard, Brian B; Kainz, Wolfgang

    2004-01-01

    We reviewed articles using computational RF dosimetry to compare the Specific Anthropomorphic Mannequin (SAM) to anatomically correct models of the human head. Published conclusions based on such comparisons have varied widely. We looked for reasons that might cause apparently similar comparisons to produce dissimilar results. We also looked at the information needed to adequately compare the results of computational RF dosimetry studies. We concluded studies were not comparable because of differences in definitions, models, and methodology. Therefore we propose a protocol, developed by an IEEE standards group, as an initial step in alleviating this problem. The protocol calls for a benchmark validation study comparing the SAM phantom to two anatomically correct models of the human head. It also establishes common definitions and reporting requirements that will increase the comparability of all computational RF dosimetry studies of the human head. PMID:15482601

  17. Comparison between sparsely distributed memory and Hopfield-type neural network models

    NASA Technical Reports Server (NTRS)

    Keeler, James D.

    1986-01-01

    The Sparsely Distributed Memory (SDM) model (Kanerva, 1984) is compared to Hopfield-type neural-network models. A mathematical framework for comparing the two is developed, and the capacity of each model is investigated. The capacity of the SDM can be increased independently of the dimension of the stored vectors, whereas the Hopfield capacity is limited to a fraction of this dimension. However, the total number of stored bits per matrix element is the same in the two models, as well as for extended models with higher order interactions. The models are also compared in their ability to store sequences of patterns. The SDM is extended to include time delays so that contextual information can be used to recover sequences. Finally, it is shown how a generalization of the SDM allows storage of correlated input pattern vectors.
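    As a concrete illustration of the Hopfield side of this comparison, here is a minimal Hebbian-rule associative memory (a generic sketch, not code from the paper); its capacity is tied to the pattern dimension N (roughly 0.14N random patterns), which is the limitation the SDM avoids:

```python
import numpy as np

# Minimal Hopfield network with Hebbian storage. Patterns are +/-1 vectors;
# recall is one synchronous sign-threshold update.
def train(patterns):
    N = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / N
    np.fill_diagonal(W, 0)          # no self-connections
    return W

def recall(W, x):
    return np.sign(W @ x)           # one synchronous update step

p1 = np.array([1, 1, 1, 1, -1, -1, -1, -1])
p2 = np.array([1, -1, 1, -1, 1, -1, 1, -1])  # orthogonal to p1
W = train(np.array([p1, p2]))

noisy = p1.copy()
noisy[0] = -1                       # flip one bit
print(recall(W, noisy))             # recovers p1
```

    With two orthogonal patterns in eight dimensions the network is well below capacity, so a single corrupted bit is corrected in one update.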

  18. Modeling Macrosegregation in Directionally Solidified Aluminum Alloys under Gravitational and Microgravitational Conditions.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lauer, Mark A.; Poirier, David R.; Erdmann, Robert G.

    2014-09-01

    This report covers the modeling of seven directionally solidified samples, five under normal gravitational conditions and two in microgravity. A model is presented to predict macrosegregation during the melting phases of samples solidified under microgravitational conditions. The results of this model are compared against two samples processed in microgravity and good agreement is found. A second model is presented that captures thermosolutal convection during directional solidification. Results for this model are compared across several experiments and quantitative comparisons are made between the model and the experimentally obtained radial macrosegregation profiles with good agreement being found. Changes in cross section were present in some samples and micrographs of these are qualitatively compared with the results of the simulations. It is found that macrosegregation patterns can be affected by changing the mold material.

  19. Spread of large LNG pools on the sea.

    PubMed

    Fay, J A

    2007-02-20

    A review of the standard model of LNG pool spreading on water, comparing it with the model and experiments on oil pool spread from which the LNG model is extrapolated, raises questions about the validity of the former as applied to spills from marine tankers. These questions arise from the difference in fluid density ratios, in the multi-dimensional flow at the pool edge, in the effects of LNG pool boiling at the LNG-water interface, and in the model and experimental initial conditions compared with the inflow conditions from a marine tanker spill. An alternate supercritical flow model is proposed that avoids these difficulties; it predicts significant increase in the maximum pool radius compared with the standard model and is partially corroborated by tests of LNG pool fires on water. Wind driven ocean wave interaction has little effect on either spread model.

  20. Thermal Pollution Mathematical Model. Volume 4: Verification of Three-Dimensional Rigid-Lid Model at Lake Keowee. [environmental impact of thermal discharges from power plants]

    NASA Technical Reports Server (NTRS)

    Lee, S. S.; Sengupta, S.; Nwadike, E. V.; Sinha, S. K.

    1980-01-01

    The rigid-lid model was developed to predict three-dimensional temperature and velocity distributions in lakes. This model was verified at various sites (Lake Belews, Biscayne Bay, etc.), and the verification at Lake Keowee was the last in this series of verification runs. The verification at Lake Keowee included the following: (1) selecting the domain of interest and grid systems, and comparing the preliminary results with archival data; (2) obtaining actual ground truth and infrared scanner data for both summer and winter; and (3) using the model to predict the measured data for the above periods and comparing the predicted results with the actual data. The model results compared well with measured data. Thus, the model can be used as an effective predictive tool for future sites.

  1. A systematic review of model-based economic evaluations of diagnostic and therapeutic strategies for lower extremity artery disease.

    PubMed

    Vaidya, Anil; Joore, Manuela A; ten Cate-Hoek, Arina J; Kleinegris, Marie-Claire; ten Cate, Hugo; Severens, Johan L

    2014-01-01

    Lower extremity artery disease (LEAD) is a sign of widespread atherosclerosis that also affects the coronary, cerebral and renal arteries and is associated with increased risk of cardiovascular events. Many economic evaluations have been published for LEAD due to its clinical, social and economic importance. The aim of this systematic review was to assess the modelling methods used in published economic evaluations in the field of LEAD. Our review appraised and compared the general characteristics, model structure and methodological quality of published models. The electronic databases MEDLINE and EMBASE were searched until February 2013 via the OVID interface. The Cochrane Database of Systematic Reviews, the Health Technology Assessment database hosted by the National Institute for Health Research, and the National Health Service Economic Evaluation Database (NHS EED) were also searched. The methodological quality of the included studies was assessed by using the Philips checklist. Sixteen model-based economic evaluations were identified and included. Eleven models compared therapeutic health technologies; three models compared diagnostic tests and two models compared a combination of diagnostic and therapeutic options for LEAD. Results of this systematic review revealed an acceptable to low methodological quality of the included studies. Methodological diversity and insufficient information posed a challenge for valid comparison of the included studies. In conclusion, there is a need for transparent, methodologically comparable and scientifically credible model-based economic evaluations in the field of LEAD. Future modelling studies should include clinically and economically important cardiovascular outcomes to reflect the wider impact of LEAD on individual patients and on society.

  2. An accurate fatigue damage model for welded joints subjected to variable amplitude loading

    NASA Astrophysics Data System (ADS)

    Aeran, A.; Siriwardane, S. C.; Mikkelsen, O.; Langen, I.

    2017-12-01

    Researchers in the past have proposed several fatigue damage models to overcome the shortcomings of the commonly used Miner's rule. However, requirements for material parameters or S-N curve modifications restrict their practical application, and most of these models have not been applied under variable amplitude loading conditions. To overcome these restrictions, a new fatigue damage model is proposed in this paper. The proposed model can be applied by practicing engineers using only the S-N curve given in the standard codes of practice. The model is verified with experimentally derived damage evolution curves for C 45 and 16 Mn and gives better agreement compared to previous models. The fatigue lives predicted by the model also correlate better with experimental results than those of previous models, as shown in earlier published work by the authors. In this paper, the proposed model is applied to welded joints subjected to variable amplitude loadings. The model gives around 8% shorter fatigue lives compared to Miner's rule as given in Eurocode. This shows the importance of applying accurate fatigue damage models to welded joints.
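    For context, Miner's rule, the baseline the proposed model is compared against, is a simple linear damage sum over the load spectrum. A minimal sketch with an illustrative Basquin-type S-N curve (the constants C and m below are placeholders, not Eurocode detail-category values):

```python
# Basquin-type S-N curve: N(S) = C * S**(-m). C and m here are illustrative
# placeholders, not values from any design code.
C, m = 2.0e12, 3.0

def cycles_to_failure(stress_range):
    """Cycles to failure N_i at constant stress range S_i."""
    return C * stress_range ** (-m)

def miner_damage(stress_ranges, cycle_counts):
    """Miner's rule: damage D = sum(n_i / N_i); failure is predicted at D >= 1."""
    return sum(n / cycles_to_failure(s)
               for s, n in zip(stress_ranges, cycle_counts))

# Variable-amplitude spectrum: (stress range in MPa, applied cycles)
D = miner_damage([100.0, 60.0], [5.0e5, 2.0e6])   # D ~ 0.47, below failure
```

    Models like the one in the abstract replace this load-order-independent sum with a sequence-sensitive accumulation rule.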

  3. Comparing Models of Intelligence in Project TALENT: The VPR Model Fits Better than the CHC and Extended Gf-Gc Models

    ERIC Educational Resources Information Center

    Major, Jason T.; Johnson, Wendy; Deary, Ian J.

    2012-01-01

    Three prominent theories of intelligence, the Cattell-Horn-Carroll (CHC), extended fluid-crystallized (Gf-Gc) and verbal-perceptual-image rotation (VPR) theories, provide differing descriptions of the structure of intelligence (McGrew, 2009; Horn & Blankson, 2005; Johnson & Bouchard, 2005b). To compare these theories, models representing them were…

  4. Comparing Construct Definition in the Angoff and Objective Standard Setting Models: Playing in a House of Cards without a Full Deck

    ERIC Educational Resources Information Center

    Stone, Gregory Ethan; Koskey, Kristin L. K.; Sondergeld, Toni A.

    2011-01-01

    Typical validation studies on standard setting models, most notably the Angoff and modified Angoff models, have ignored construct development, a critical aspect associated with all conceptualizations of measurement processes. Stone compared the Angoff and objective standard setting (OSS) models and found that Angoff failed to define a legitimate…

  5. First-Grade Teachers' Response to Three Models of Professional Development in Reading

    ERIC Educational Resources Information Center

    Carlisle, Joanne F.; Cortina, Kai Schnabel; Katz, Lauren A.

    2011-01-01

    The purpose of this study was to compare 1st-grade teachers' responses to professional development (PD) programs in reading that differed in means and degree of support for teachers' learning and efforts to improve their reading instruction. We compared 3 models of PD: the 1st model provided only seminars for the teachers, the 2nd model provided…

  6. How to Construct More Accurate Student Models: Comparing and Optimizing Knowledge Tracing and Performance Factor Analysis

    ERIC Educational Resources Information Center

    Gong, Yue; Beck, Joseph E.; Heffernan, Neil T.

    2011-01-01

    Student modeling is a fundamental concept applicable to a variety of intelligent tutoring systems (ITS). However, there is not a lot of practical guidance on how to construct and train such models. This paper compares two approaches for student modeling, Knowledge Tracing (KT) and Performance Factors Analysis (PFA), by evaluating their predictive…

  7. Revisiting Fixed- and Random-Effects Models: Some Considerations for Policy-Relevant Education Research

    ERIC Educational Resources Information Center

    Clarke, Paul; Crawford, Claire; Steele, Fiona; Vignoles, Anna

    2015-01-01

    The use of fixed (FE) and random effects (RE) in two-level hierarchical linear regression is discussed in the context of education research. We compare the robustness of FE models with the modelling flexibility and potential efficiency of those from RE models. We argue that the two should be seen as complementary approaches. We then compare both…

  8. Comparative Reannotation of 21 Aspergillus Genomes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salamov, Asaf; Riley, Robert; Kuo, Alan

    2013-03-08

    We used comparative gene modeling to reannotate 21 Aspergillus genomes. Initial automatic annotation of individual genomes may contain errors of different natures, e.g. missing genes, incorrect exon-intron structures, and 'chimeras', which fuse 2 or more real genes or alternatively split some real genes into 2 or more models. The main premise behind the comparative modeling approach is that for closely related genomes most orthologous families have the same conserved gene structure. The algorithm maps all gene models predicted in each individual Aspergillus genome to the other genomes and, for each locus, selects from potentially many competing models the one which most closely resembles the orthologous genes from other genomes. This procedure is iterated until no further change in gene models is observed. For the Aspergillus genomes we predicted in total 4503 new gene models (~2% per genome), supported by comparative analysis, additionally correcting ~18% of old gene models. This resulted in a total of 4065 more genes with annotated PFAM domains (~3% increase per genome). Analysis of a few genomes with EST/transcriptomics data shows that the new annotation sets also have a higher number of EST-supported splice sites at exon-intron boundaries.
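    The iterative consensus step described above can be sketched in miniature. In this toy version (an illustrative reconstruction, not the authors' pipeline), competing gene models at one orthologous locus are reduced to tuples of exon lengths, and each genome repeatedly adopts the candidate matching the most models currently selected in the other genomes, until nothing changes:

```python
# Toy sketch of the iterative consensus idea: signatures are tuples of exon
# lengths; the real algorithm maps full gene structures across genomes.

def pick_consensus(candidates):
    # candidates: {genome: [signature, ...]} for one orthologous locus
    current = {g: models[0] for g, models in candidates.items()}  # initial annotation
    while True:
        changed = False
        for g, models in candidates.items():
            others = [sig for h, sig in current.items() if h != g]
            # keep the candidate that agrees with the most other genomes
            best = max(models, key=lambda sig: sum(sig == o for o in others))
            if best != current[g]:
                current[g], changed = best, True
        if not changed:
            return current

locus = {
    "A_niger":     [(120, 300), (120, 300, 45)],   # two competing models
    "A_nidulans":  [(120, 300)],
    "A_fumigatus": [(120, 300, 45), (120, 300)],   # starts with the 'wrong' pick
}
print(pick_consensus(locus))   # every genome converges to (120, 300)
```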

  9. Effects of stimulus order on discrimination processes in comparative and equality judgements: data and models.

    PubMed

    Dyjas, Oliver; Ulrich, Rolf

    2014-01-01

    In typical discrimination experiments, participants are presented with a constant standard and a variable comparison stimulus and their task is to judge which of these two stimuli is larger (comparative judgement). In these experiments, discrimination sensitivity depends on the temporal order of these stimuli (Type B effect) and is usually higher when the standard precedes rather than follows the comparison. Here, we outline how two models of stimulus discrimination can account for the Type B effect, namely the weighted difference model (or basic Sensation Weighting model) and the Internal Reference Model. For both models, the predicted psychometric functions for comparative judgements as well as for equality judgements, in which participants indicate whether they perceived the two stimuli to be equal or not equal, are derived and it is shown that the models also predict a Type B effect for equality judgements. In the empirical part, the models' predictions are evaluated. To this end, participants performed a duration discrimination task with comparative judgements and with equality judgements. In line with the models' predictions, a Type B effect was observed for both judgement types. In addition, a time-order error, as indicated by shifts of the psychometric functions, and differences in response times were observed only for the equality judgement. Since both models entail distinct additional predictions, it seems worthwhile for future research to unite the two models into one conceptual framework.

  10. Comparing Habitat Suitability and Connectivity Modeling Methods for Conserving Pronghorn Migrations

    PubMed Central

    Poor, Erin E.; Loucks, Colby; Jakes, Andrew; Urban, Dean L.

    2012-01-01

    Terrestrial long-distance migrations are declining globally: in North America, nearly 75% have been lost. Yet there has been limited research comparing habitat suitability and connectivity models to identify migration corridors across increasingly fragmented landscapes. Here we use pronghorn (Antilocapra americana) migrations in prairie habitat to compare two types of models that identify habitat suitability: maximum entropy (Maxent) and expert-based (Analytic Hierarchy Process). We used distance to wells, distance to water, NDVI, land cover, distance to roads, terrain shape and fence presence to parameterize the models. We then used the output of these models as cost surfaces to compare two common connectivity models, least-cost modeling (LCM) and circuit theory. Using pronghorn movement data from spring and fall migrations, we identified potential migration corridors by combining each habitat suitability model with each connectivity model. The best performing model combination was Maxent with LCM corridors across both seasons. Maxent out-performed expert-based habitat suitability models for both spring and fall migrations. However, expert-based corridors can perform relatively well and are a cost-effective alternative if species location data are unavailable. Corridors created using LCM out-performed circuit theory, as measured by the number of pronghorn GPS locations present within the corridors. We suggest the use of a tiered approach using different corridor widths for prioritizing conservation and mitigation actions, such as fence removal or conservation easements. PMID:23166656

  11. Comparing habitat suitability and connectivity modeling methods for conserving pronghorn migrations.

    PubMed

    Poor, Erin E; Loucks, Colby; Jakes, Andrew; Urban, Dean L

    2012-01-01

    Terrestrial long-distance migrations are declining globally: in North America, nearly 75% have been lost. Yet there has been limited research comparing habitat suitability and connectivity models to identify migration corridors across increasingly fragmented landscapes. Here we use pronghorn (Antilocapra americana) migrations in prairie habitat to compare two types of models that identify habitat suitability: maximum entropy (Maxent) and expert-based (Analytic Hierarchy Process). We used distance to wells, distance to water, NDVI, land cover, distance to roads, terrain shape and fence presence to parameterize the models. We then used the output of these models as cost surfaces to compare two common connectivity models, least-cost modeling (LCM) and circuit theory. Using pronghorn movement data from spring and fall migrations, we identified potential migration corridors by combining each habitat suitability model with each connectivity model. The best performing model combination was Maxent with LCM corridors across both seasons. Maxent out-performed expert-based habitat suitability models for both spring and fall migrations. However, expert-based corridors can perform relatively well and are a cost-effective alternative if species location data are unavailable. Corridors created using LCM out-performed circuit theory, as measured by the number of pronghorn GPS locations present within the corridors. We suggest the use of a tiered approach using different corridor widths for prioritizing conservation and mitigation actions, such as fence removal or conservation easements.
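    The least-cost modeling (LCM) step used in both pronghorn studies can be illustrated with a toy cost surface: the habitat suitability model supplies per-cell resistance values, and a shortest-path search over the grid yields the corridor backbone. A self-contained Dijkstra sketch (illustrative only, not the authors' GIS workflow):

```python
import heapq

# Least-cost path on a toy resistance grid; cell values are movement costs,
# moves are to the 4 neighbouring cells, and the start cell's cost is counted.
def least_cost_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    dist = {start: grid[start[0]][start[1]]}
    pq = [(dist[start], start)]
    prev = {}
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue          # stale queue entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + grid[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

grid = [[1, 1, 1],
        [9, 9, 1],    # high-resistance band (e.g. poor habitat, fences)
        [1, 1, 1]]
path, cost = least_cost_path(grid, (0, 0), (2, 0))   # detours around the band
```

    Circuit-theory connectivity, by contrast, spreads flow across all paths weighted by resistance rather than committing to a single cheapest route.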

  12. Model Ambiguities in Configurational Comparative Research

    ERIC Educational Resources Information Center

    Baumgartner, Michael; Thiem, Alrik

    2017-01-01

    For many years, sociologists, political scientists, and management scholars have readily relied on Qualitative Comparative Analysis (QCA) for the purpose of configurational causal modeling. However, this article reveals that a severe problem in the application of QCA has gone unnoticed so far: model ambiguities. These arise when multiple causal…

  13. SIMULATION OF AEROSOL DYNAMICS: A COMPARATIVE REVIEW OF ALGORITHMS USED IN AIR QUALITY MODELS

    EPA Science Inventory

    A comparative review of algorithms currently used in air quality models to simulate aerosol dynamics is presented. This review addresses coagulation, condensational growth, nucleation, and gas/particle mass transfer. Two major approaches are used in air quality models to repres...

  14. Source Term Model for Vortex Generator Vanes in a Navier-Stokes Computer Code

    NASA Technical Reports Server (NTRS)

    Waithe, Kenrick A.

    2004-01-01

    A source term model for an array of vortex generators was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the side force created by a vortex generator vane. The model is obtained by introducing a side force to the momentum and energy equations that can adjust its strength automatically based on the local flow. The model was tested and calibrated by comparing data from numerical simulations and experiments of a single low profile vortex generator vane on a flat plate. In addition, the model was compared to experimental data of an S-duct with 22 co-rotating, low profile vortex generators. The source term model allowed a grid reduction of about seventy percent when compared with the numerical simulations performed on a fully gridded vortex generator on a flat plate without adversely affecting the development and capture of the vortex created. The source term model was able to predict the shape and size of the stream-wise vorticity and velocity contours very well when compared with both numerical simulations and experimental data. The peak vorticity and its location were also predicted very well when compared to numerical simulations and experimental data. The circulation predicted by the source term model matches the prediction of the numerical simulation. The source term model predicted the engine fan face distortion and total pressure recovery of the S-duct with 22 co-rotating vortex generators very well. The source term model allows a researcher to quickly investigate different locations of individual or a row of vortex generators. The researcher is able to conduct a preliminary investigation with minimal grid generation and computational time.

  15. 75 FR 64909 - Fiduciary Requirements for Disclosure in Participant-Directed Individual Account Plans

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-20

    ...- related information in a form that encourages and facilitates a comparative review among a plan's... alternatives, and, specifically, how participants would react to the Model Comparative Chart for plan... regulation and Model Comparative Chart. Set forth below is an overview of the final regulations and a...

  16. Comparing estimates of genetic variance across different relationship models.

    PubMed

    Legarra, Andres

    2016-02-01

    Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities".
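    The Dk statistic described above is straightforward to compute from a relationship matrix. A minimal sketch (function names are illustrative, not from the paper):

```python
import numpy as np

def dk_statistic(K):
    """Dk = average self-relationship minus average (self- and across-)
    relationship, computed from a relationship matrix K."""
    return np.mean(np.diag(K)) - np.mean(K)

def rescaled_variance(sigma2_hat, K):
    """Expected genetic variance of the reference population: the estimated
    variance component times Dk."""
    return sigma2_hat * dk_statistic(K)

# Toy example: 4 unrelated, non-inbred individuals (K = identity).
K = np.eye(4)
print(dk_statistic(K))   # 1 - 1/4 = 0.75
```

    For typical pedigree relationships Dk is close to 1, so the correction matters mainly for the deep-pedigree, identity-by-state, and kernel cases the abstract highlights.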

  17. A utility-based design for randomized comparative trials with ordinal outcomes and prognostic subgroups.

    PubMed

    Murray, Thomas A; Yuan, Ying; Thall, Peter F; Elizondo, Joan H; Hofstetter, Wayne L

    2018-01-22

    A design is proposed for randomized comparative trials with ordinal outcomes and prognostic subgroups. The design accounts for patient heterogeneity by allowing possibly different comparative conclusions within subgroups. The comparative testing criterion is based on utilities for the levels of the ordinal outcome and a Bayesian probability model. Designs based on two alternative models that include treatment-subgroup interactions are considered, the proportional odds model and a non-proportional odds model with a hierarchical prior that shrinks toward the proportional odds model. A third design that assumes homogeneity and ignores possible treatment-subgroup interactions also is considered. The three approaches are applied to construct group sequential designs for a trial of nutritional prehabilitation versus standard of care for esophageal cancer patients undergoing chemoradiation and surgery, including both untreated patients and salvage patients whose disease has recurred following previous therapy. A simulation study is presented that compares the three designs, including evaluation of within-subgroup type I and II error probabilities under a variety of scenarios including different combinations of treatment-subgroup interactions.

  18. Glycogen storage disease type Ia in canines: a model for human metabolic and genetic liver disease.

    PubMed

    Specht, Andrew; Fiske, Laurie; Erger, Kirsten; Cossette, Travis; Verstegen, John; Campbell-Thompson, Martha; Struck, Maggie B; Lee, Young Mok; Chou, Janice Y; Byrne, Barry J; Correia, Catherine E; Mah, Cathryn S; Weinstein, David A; Conlon, Thomas J

    2011-01-01

    A canine model of Glycogen storage disease type Ia (GSDIa) is described. Affected dogs are homozygous for a previously described M121I mutation resulting in a deficiency of glucose-6-phosphatase-α. Metabolic, clinicopathologic, pathologic, and clinical manifestations of GSDIa observed in this model are described and compared to those observed in humans. The canine model shows more complete recapitulation of the clinical manifestations seen in humans including "lactic acidosis", larger size, and longer lifespan compared to other animal models. Use of this model in preclinical trials of gene therapy is described and briefly compared to the murine model. Although the canine model offers a number of advantages for evaluating potential therapies for GSDIa, there are also some significant challenges involved in its use. Despite these challenges, the canine model of GSDIa should continue to provide valuable information about the potential for generating curative therapies for GSDIa as well as other genetic hepatic diseases.

  19. Feasibility of quasi-random band model in evaluating atmospheric radiance

    NASA Technical Reports Server (NTRS)

    Tiwari, S. N.; Mirakhur, N.

    1980-01-01

    The use of the quasi-random band model in evaluating upwelling atmospheric radiation is investigated. The spectral transmittance and total band absorptance are evaluated for selected molecular bands by using the line-by-line model, quasi-random band model, exponential sum fit method, and empirical correlations, and these are compared with the available experimental results. The atmospheric transmittance and upwelling radiance were calculated by using the line-by-line and quasi-random band models and were compared with the results of an existing program called LOWTRAN. The results obtained by the exponential sum fit and empirical relations were not in good agreement with experimental results and their use cannot be justified for atmospheric studies. The line-by-line model was found to be the best model for atmospheric applications, but it is not practical because of high computational costs. The results of the quasi-random band model compare well with the line-by-line and experimental results. The use of the quasi-random band model is recommended for evaluation of the atmospheric radiation.

  20. COMPARING THE UTILITY OF MULTIMEDIA MODELS FOR HUMAN AND ECOLOGICAL EXPOSURE ANALYSIS: TWO CASES

    EPA Science Inventory

    A number of models are available for exposure assessment; however, few are used as tools for both human and ecosystem risks. This discussion will consider two modeling frameworks that have recently been used to support human and ecological decision making. The study will compare ...

  1. The Comparative Accuracy of Two Hydrologic Models in Simulating Warm-Season Runoff for Two Small, Hillslope Catchments

    EPA Science Inventory

    Runoff prediction is a cornerstone of water resources planning, and therefore modeling performance is a key issue. This paper investigates the comparative advantages of conceptual versus process- based models in predicting warm season runoff for upland, low-yield micro-catchments...

  2. SU-E-J-192: Comparative Effect of Different Respiratory Motion Management Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakajima, Y; Kadoya, N; Ito, K

    Purpose: Irregular breathing can influence the outcome of four-dimensional computed tomography imaging by causing artifacts. Audio-visual biofeedback systems associated with a patient-specific guiding waveform are known to reduce respiratory irregularities. In Japan, abdomen and chest motion self-control devices (Abches), representing a simpler visual coaching technique without a guiding waveform, are used instead; however, no studies have compared these two systems to date. Here, we evaluate the effectiveness of respiratory coaching in reducing respiratory irregularities by comparing the two respiratory management systems. Methods: We collected data from eleven healthy volunteers. Bar and wave models were used as audio-visual biofeedback systems. Abches consisted of a respiratory indicator indicating the end of each expiration and inspiration motion. Respiratory variations were quantified as the root mean squared error (RMSE) of the displacement and period of breathing cycles. Results: All coaching techniques improved respiratory variation compared to free breathing. Displacement RMSEs were 1.43 ± 0.84, 1.22 ± 1.13, 1.21 ± 0.86, and 0.98 ± 0.47 mm for free breathing, Abches, the bar model, and the wave model, respectively. Free breathing and the wave model differed significantly (p < 0.05). Period RMSEs were 0.48 ± 0.42, 0.33 ± 0.31, 0.23 ± 0.18, and 0.17 ± 0.05 s for free breathing, Abches, the bar model, and the wave model, respectively. Free breathing and all coaching techniques differed significantly (p < 0.05). For variation in both displacement and period, the wave model was superior to free breathing, the bar model, and Abches. The average reductions in displacement and period RMSE compared with the wave model were 27% and 47%, respectively. Conclusion: We evaluated the efficacy of audio-visual biofeedback in reducing respiratory irregularity compared with Abches. Our results showed that audio-visual biofeedback combined with a wave model can potentially provide clinical benefits in respiratory management, although all techniques could reduce respiratory irregularities.
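    The displacement-RMSE metric used in this study is simple to reproduce. In this sketch (synthetic data, not the study's measurements), an irregular "free breathing" trace deviates more from the guiding waveform than a coached trace:

```python
import numpy as np

# Displacement RMSE of a breathing trace against a guiding waveform, in mm.
def displacement_rmse(trace, guide):
    trace, guide = np.asarray(trace), np.asarray(guide)
    return np.sqrt(np.mean((trace - guide) ** 2))

t = np.linspace(0, 10, 500)                      # seconds
guide = 5 * np.sin(2 * np.pi * t / 4)            # 4-s guiding waveform, +/-5 mm
free = guide + 1.5 * np.sin(2 * np.pi * t / 7)   # slow drift: no coaching
coached = guide + 0.3 * np.random.default_rng(0).normal(size=t.size)

print(displacement_rmse(free, guide) > displacement_rmse(coached, guide))  # True
```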

  3. Bayesian evidences for dark energy models in light of current observational data

    NASA Astrophysics Data System (ADS)

    Lonappan, Anto. I.; Kumar, Sumit; Ruchika; Dinda, Bikash R.; Sen, Anjan A.

    2018-02-01

    We do a comprehensive study of the Bayesian evidences for a large number of dark energy models using a combination of the latest cosmological data from SNIa, CMB, BAO, strong lensing time delay, growth measurements, measurements of the Hubble parameter at different redshifts, and measurements of angular diameter distance by the Megamaser Cosmology Project. We consider a variety of scalar field models with different potentials as well as different parametrizations for the dark energy equation of state. Among the 21 models that we consider in our study, we do not find strong evidence in favor of any evolving dark energy model compared to ΛCDM. For the evolving dark energy models, we show that purely nonphantom models have much better evidences compared to those models that allow both phantom and nonphantom behaviors. The canonical scalar field with an exponential potential and the tachyon field with a square potential have the highest evidences among all the models considered in this work. We also show that a combination of low-redshift measurements decisively favors an accelerating ΛCDM model compared to a nonaccelerating power-law model.

  4. Analysis of random structure-acoustic interaction problems using coupled boundary element and finite element methods

    NASA Technical Reports Server (NTRS)

    Mei, Chuh; Pates, Carl S., III

    1994-01-01

    A coupled boundary element (BEM)-finite element (FEM) approach is presented to accurately model structure-acoustic interaction systems. The boundary element method is first applied to interior two- and three-dimensional acoustic domains with complex geometry configurations. Boundary element results are very accurate when compared with the limited exact solutions available. Structure-acoustic interaction problems are then analyzed with the coupled FEM-BEM method, where the finite element method models the structure and the boundary element method models the interior acoustic domain. The coupled analysis is compared with exact and experimental results for a simple model. Composite panels are analyzed and compared with isotropic results. The coupled method is then extended to random excitation. Random excitation results are compared with uncoupled results for isotropic and composite panels.

  5. Orthology for comparative genomics in the mouse genome database.

    PubMed

    Dolan, Mary E; Baldarelli, Richard M; Bello, Susan M; Ni, Li; McAndrews, Monica S; Bult, Carol J; Kadin, James A; Richardson, Joel E; Ringwald, Martin; Eppig, Janan T; Blake, Judith A

    2015-08-01

    The mouse genome database (MGD) is the model organism database component of the mouse genome informatics system at The Jackson Laboratory. MGD is the international data resource for the laboratory mouse and facilitates the use of mice in the study of human health and disease. Since its beginnings, MGD has included comparative genomics data with a particular focus on human-mouse orthology, an essential component of the use of mouse as a model organism. Over the past 25 years, novel algorithms and addition of orthologs from other model organisms have enriched comparative genomics in MGD data, extending the use of orthology data to support the laboratory mouse as a model of human biology. Here, we describe current comparative data in MGD and review the history and refinement of orthology representation in this resource.

  6. Adaptation to Climate Change: A Comparative Analysis of Modeling Methods for Heat-Related Mortality.

    PubMed

    Gosling, Simon N; Hondula, David M; Bunker, Aditi; Ibarreta, Dolores; Liu, Junguo; Zhang, Xinxin; Sauerborn, Rainer

    2017-08-16

    Multiple methods are employed for modeling adaptation when projecting the impact of climate change on heat-related mortality. The sensitivity of impacts to each is unknown because they have never been systematically compared. In addition, little is known about the relative sensitivity of impacts to "adaptation uncertainty" (i.e., the inclusion/exclusion of adaptation modeling) relative to using multiple climate models and emissions scenarios. This study had three aims: a) compare the range in projected impacts that arises from using different adaptation modeling methods; b) compare the range in impacts that arises from adaptation uncertainty with ranges from using multiple climate models and emissions scenarios; c) recommend modeling method(s) to use in future impact assessments. We estimated impacts for 2070-2099 for 14 European cities, applying six different methods for modeling adaptation; we also estimated impacts with five climate models run under two emissions scenarios to explore the relative effects of climate modeling and emissions uncertainty. The range of the difference (percent) in impacts between including and excluding adaptation, irrespective of climate modeling and emissions uncertainty, can be as low as 28% with one method and up to 103% with another (mean across 14 cities). In 13 of 14 cities, the ranges in projected impacts due to adaptation uncertainty are larger than those associated with climate modeling and emissions uncertainty. Researchers should carefully consider how to model adaptation because it is a source of uncertainty that can be greater than the uncertainty in emissions and climate modeling. We recommend absolute threshold shifts and reductions in slope. https://doi.org/10.1289/EHP634.

  7. A 2D flood inundation model based on cellular automata approach

    NASA Astrophysics Data System (ADS)

    Dottori, Francesco; Todini, Ezio

    2010-05-01

    In the past years, the cellular automata approach has been successfully applied to two-dimensional modelling of flood events. When used in experimental applications, models based on this approach have provided good results, comparable to those obtained with more complex 2D models; moreover, CA models have proven significantly faster and easier to apply than most existing models, and these features make them a valuable tool for flood analysis, especially when dealing with large areas. However, to date the real degree of accuracy of such models has not been demonstrated, since they have been mainly used in experimental applications, while very few comparisons with theoretical solutions have been made. Also, the use of an explicit solution scheme, which is inherent in cellular automata models, forces them to work only with small time steps, thus reducing model computation speed. The present work describes a cellular automata model based on the continuity and diffusive wave equations. Several model versions based on different solution schemes have been realized and tested in a number of numerical cases, both 1D and 2D, comparing the results with theoretical and numerical solutions. In all cases, the model performed well compared to the reference solutions, and proved to be both stable and accurate. Finally, the version providing the best results in terms of stability was tested on a real flood event and compared with different hydraulic models. Again, the cellular automata model provided very good results, both in terms of computational speed and reproduction of the simulated event.
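The kind of update such CA flood models perform, explicitly routing water between neighbouring cells according to the free-surface slope, can be sketched as follows. This is a hypothetical minimal scheme for illustration, not the paper's actual formulation:

```python
import numpy as np

def ca_flood_step(depth, ground, dx=1.0, dt=0.1, k=0.5):
    """One explicit cellular-automata update: move water toward each of the four
    von Neumann neighbours in proportion to the downhill free-surface slope."""
    surface = ground + depth
    new_depth = depth.copy()
    for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
        # Slope toward the neighbour in this direction; water only flows downhill.
        slope = (surface - np.roll(surface, shift, axis=axis)) / dx
        flux = k * np.clip(slope, 0.0, None) * depth * dt   # outflow per cell
        flux = np.minimum(flux, depth / 4.0)                # stability cap on outflow
        new_depth -= flux                                    # leaves this cell...
        new_depth += np.roll(flux, -shift, axis=axis)        # ...arrives at neighbour
    return new_depth

# Toy example: a mound of water on flat ground spreads outward.
ground = np.zeros((21, 21))
depth = np.zeros((21, 21))
depth[10, 10] = 1.0
for _ in range(50):
    depth = ca_flood_step(depth, ground)
print("total volume:", depth.sum())
```

Because every outflow is added back to a neighbour, mass is conserved exactly; the explicit scheme's small-time-step requirement mentioned in the abstract shows up here as the stability cap on per-step outflow.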

  8. A model of the human in a cognitive prediction task.

    NASA Technical Reports Server (NTRS)

    Rouse, W. B.

    1973-01-01

    The human decision maker's behavior when predicting future states of discrete linear dynamic systems driven by zero-mean Gaussian processes is modeled. The task is on a slow enough time scale that physiological constraints are insignificant compared with cognitive limitations. The model is basically a linear regression system identifier with a limited memory and noisy observations. Experimental data are presented and compared to the model.
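A limited-memory linear regression identifier of the kind described can be sketched as follows. This is a hypothetical reconstruction for illustration, not Rouse's actual model:

```python
import numpy as np

def limited_memory_predictor(obs, memory=12, order=2):
    """Predict the next value of a noisy series by fitting a linear autoregressive
    model y[t] ~ a1*y[t-1] + ... + ap*y[t-p] to only the last `memory` samples."""
    y = np.asarray(obs, dtype=float)[-memory:]
    # Each regression row holds the `order` most recent lagged values.
    rows = [y[t - order:t][::-1] for t in range(order, len(y))]  # [y[t-1], y[t-2], ...]
    coeffs, *_ = np.linalg.lstsq(np.array(rows), y[order:], rcond=None)
    return float(coeffs @ y[:-order - 1:-1])   # apply to the newest lags

# A noiseless linear ramp is predicted exactly: y[t] = 2*y[t-1] - y[t-2].
ramp = list(range(20))
print(round(limited_memory_predictor(ramp), 6))  # 2*19 - 1*18 = 20.0
```

Shrinking `memory` mimics the cognitive limitation in the model: the identifier tracks recent dynamics but forgets older evidence, and noisy observations degrade the fitted coefficients accordingly.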

  9. Comparing basal area growth models, consistency of parameters, and accuracy of prediction

    Treesearch

    J.J. Colbert; Michael Schuckers; Desta Fekedulegn

    2002-01-01

    We fit alternative sigmoid growth models to sample tree basal area historical data derived from increment cores and disks taken at breast height. We examine and compare the estimated parameters for these models across a range of sample sites. Models are rated on consistency of parameters and on their ability to fit growth data from four sites that are located across a...

  10. A Comparative Test of Work-Family Conflict Models and Critical Examination of Work-Family Linkages

    ERIC Educational Resources Information Center

    Michel, Jesse S.; Mitchelson, Jacqueline K.; Kotrba, Lindsey M.; LeBreton, James M.; Baltes, Boris B.

    2009-01-01

    This paper is a comprehensive meta-analysis of over 20 years of work-family conflict research. A series of path analyses were conducted to compare and contrast existing work-family conflict models, as well as a new model we developed which integrates and synthesizes current work-family theory and research. This new model accounted for 40% of the…

  11. Comparing Video Modeling and Graduated Guidance Together and Video Modeling Alone for Teaching Role Playing Skills to Children with Autism

    ERIC Educational Resources Information Center

    Akmanoglu, Nurgul; Yanardag, Mehmet; Batu, E. Sema

    2014-01-01

    Teaching play skills is important for children with autism. The purpose of the present study was to compare effectiveness and efficiency of providing video modeling and graduated guidance together and video modeling alone for teaching role playing skills to children with autism. The study was conducted with four students. The study was conducted…

  12. Properties of young massive clusters obtained with different massive-star evolutionary models

    NASA Astrophysics Data System (ADS)

    Wofford, Aida; Charlot, Stéphane

    We undertake a comprehensive comparative test of seven widely used spectral synthesis models using multi-band HST photometry of a sample of eight YMCs in two galaxies. We provide a first quantitative estimate of the accuracies and uncertainties of new models, show the good progress of models in fitting high-quality observations, and highlight the need for further comprehensive comparative tests.

  13. An Airborne Radar Model For Non-Uniformly Spaced Antenna Arrays

    DTIC Science & Technology

    2006-03-01

    Department of Defense, or the United States Government. AFIT-GE-ENG-06-58 An Airborne Radar Model For Non-Uniformly Spaced Antenna Arrays THESIS Presented...different circular arrays, one containing 24 elements and one containing 15 elements. The circular array performance is compared to that of a 6 × 6...model and compared to the radar model of [5, 6, 13]. The two models are mathematically equivalent when the uniformly spaced array is linear. The two

  14. Comparison of land-surface humidity between observations and CMIP5 models

    NASA Astrophysics Data System (ADS)

    Dunn, Robert; Willett, Kate; Ciavarella, Andrew; Stott, Peter; Jones, Gareth

    2017-04-01

    We compare the latest observational land-surface humidity dataset, HadISDH, with the CMIP5 model archive spatially and temporally over the period 1973-2015. None of the CMIP5 models or experiments capture the observed temporal behaviour of the globally averaged relative or specific humidity over the entire study period. When using an atmosphere-only model, driven by observed sea-surface temperatures and radiative forcing changes, the behaviour of regional average temperature and specific humidity is better captured, but there is little improvement in the relative humidity. Comparing the observed and historical model climatologies shows that the models are generally cooler everywhere, are drier and less saturated in the tropics and extratropics, and have comparable moisture levels but are more saturated in the high latitudes. The spatial patterns of linear trends are relatively similar between the models and HadISDH for temperature and specific humidity, but there are large differences for relative humidity, with less moistening shown in the models over the tropics, and very little at high latitudes. The observed temporal behaviour appears to be a robust climate feature rather than observational error. It has been previously documented and is theoretically consistent with faster warming rates over land compared to oceans. Thus, the poor replication in the models, especially in the atmosphere-only model, leads to questions over future projections of impacts related to changes in surface relative humidity.

  15. Comparative analysis of bleeding risk by the location and shape of arachnoid cysts: a finite element model analysis.

    PubMed

    Lee, Chang-Hyun; Han, In Seok; Lee, Ji Yeoun; Phi, Ji Hoon; Kim, Seung-Ki; Kim, Young-Eun; Wang, Kyu-Chang

    2017-01-01

    Although arachnoid cysts (ACs) are observed in various locations, mainly sylvian ACs are regarded as being associated with bleeding. The reason for this selective association of sylvian ACs with bleeding is not well understood. This study investigates the effect of the location and shape of ACs on the risk of bleeding. A previously developed finite element model of the head/brain was modified to create models of sylvian, suprasellar, and posterior fossa ACs. A spherical AC was placed at each location to compare the effect of AC location. Bowl-shaped and oval-shaped AC models were developed to compare the effect of shape. The shear force on the spot-weld elements (SFSW) was measured between the dura and the outer wall of the ACs, or the comparable arachnoid membrane in the normal model. All AC models revealed higher SFSW than comparable normal models. By location, the sylvian AC displayed the highest SFSW for frontal and lateral impacts. By shape, small-outer-wall AC models showed higher SFSW than large-wall models in the sylvian area, and lower SFSW than large ones in the posterior fossa. In regression analysis, the presence of an AC was the only independent risk factor for bleeding. The bleeding mechanism of ACs is very complex, and the risk quantification failed to show a significant role of the location and shape of ACs. The presence of an AC increases shear force under impact conditions and may be a risk factor for bleeding, while the sylvian location of an AC may not confer additional bleeding risk.

  16. A Comparative Study of Spectral Auroral Intensity Predictions From Multiple Electron Transport Models

    NASA Astrophysics Data System (ADS)

    Grubbs, Guy; Michell, Robert; Samara, Marilia; Hampton, Donald; Hecht, James; Solomon, Stanley; Jahn, Jorg-Micha

    2018-01-01

    It is important to routinely examine and update models used to predict auroral emissions resulting from precipitating electrons in Earth's magnetotail. These models are commonly used to invert spectral auroral ground-based images to infer characteristics about incident electron populations when in situ measurements are unavailable. In this work, we examine and compare auroral emission intensities predicted by three commonly used electron transport models using varying electron population characteristics. We then compare model predictions to same-volume in situ electron measurements and ground-based imaging to qualitatively examine modeling prediction error. Initial comparisons showed differences in predictions by the GLobal airglOW (GLOW) model and the other transport models examined. Chemical reaction rates and radiative rates in GLOW were updated using recent publications, and predictions showed better agreement with the other models and the same-volume data, stressing that these rates are important to consider when modeling auroral processes. Predictions by each model exhibit similar behavior for varying atmospheric constants, energies, and energy fluxes. Same-volume electron data and images are highly correlated with predictions by each model, showing that these models can be used to accurately derive electron characteristics and ionospheric parameters based solely on multispectral optical imaging data.

  17. Design-Tradeoff Model For Space Station

    NASA Technical Reports Server (NTRS)

    Chamberlain, Robert G.; Smith, Jeffrey L.; Borden, Chester S.; Deshpande, Govind K.; Fox, George; Duquette, William H.; Dilullo, Larry A.; Seeley, Larry; Shishko, Robert

    1990-01-01

    System Design Tradeoff Model (SDTM) computer program produces information which helps to enforce consistency of design objectives throughout system. Mathematical model of set of possible designs for Space Station Freedom. Program finds particular design enabling station to provide specified amounts of resources to users at lowest total (or life-cycle) cost. Compares alternative design concepts by changing set of possible designs, while holding specified services to users constant, and then comparing costs. Finally, both costs and services varied simultaneously when comparing different designs. Written in Turbo C 2.0.

  18. Emerging from the bottleneck: Benefits of the comparative approach to modern neuroscience

    PubMed Central

    Brenowitz, Eliot A.; Zakon, Harold H.

    2015-01-01

    Neuroscience historically exploited a wide diversity of animal taxa. Recently, however, research focused increasingly on a few model species. This trend accelerated with the genetic revolution, as genomic sequences and genetic tools became available for a few species, which formed a bottleneck. This coalescence on a small set of model species comes with several costs often not considered, especially in the current drive to use mice explicitly as models for human diseases. Comparative studies of strategically chosen non-model species can complement model species research and yield more rigorous studies. As genetic sequences and tools become available for many more species, we are poised to emerge from the bottleneck and once again exploit the rich biological diversity offered by comparative studies. PMID:25800324

  19. Modelling Thin Film Microbending: A Comparative Study of Three Different Approaches

    NASA Astrophysics Data System (ADS)

    Aifantis, Katerina E.; Nikitas, Nikos; Zaiser, Michael

    2011-09-01

    Constitutive models which describe crystal microplasticity in a continuum framework can be envisaged as average representations of the dynamics of dislocation systems. Thus, their performance needs to be assessed not only by their ability to correctly represent stress-strain characteristics on the specimen scale but also by their ability to correctly represent the evolution of internal stress and strain patterns. In the present comparative study we consider the bending of a free-standing thin film. We compare the results of 3D DDD simulations with those obtained from a simple 1D gradient plasticity model and a more complex dislocation-based continuum model. Both models correctly reproduce the nontrivial strain patterns predicted by DDD for the microbending problem.

  20. Comparing the line broadened quasilinear model to Vlasov code

    NASA Astrophysics Data System (ADS)

    Ghantous, K.; Berk, H. L.; Gorelenkov, N. N.

    2014-03-01

    The Line Broadened Quasilinear (LBQ) model is revisited to study its predicted saturation level as compared with predictions of the Vlasov solver BOT [Lilley et al., Phys. Rev. Lett. 102, 195003 (2009); M. Lilley, BOT Manual]. The parametric dependencies of the model are modified to achieve better accuracy against the results of the Vlasov solver, both with regard to a mode amplitude's time evolution to a saturated state and its final steady-state amplitude in the parameter space of the model's applicability. However, the regions of stability as predicted by the LBQ model and BOT are found to differ significantly from each other. The solutions of the BOT simulations are found to have a larger region of instability than the LBQ simulations.

  1. Differences Between the HUT Snow Emission Model and MEMLS and Their Effects on Brightness Temperature Simulation

    NASA Technical Reports Server (NTRS)

    Pan, Jinmei; Durand, Michael; Sandells, Melody; Lemmetyinen, Juha; Kim, Edward J.; Pulliainen, Jouni; Kontu, Anna; Derksen, Chris

    2015-01-01

    Microwave emission models are a critical component of snow water equivalent retrieval algorithms applied to passive microwave measurements. Several such emission models exist, but their differences need to be systematically compared. This paper compares the basic theories of two models: the multiple-layer HUT (Helsinki University of Technology) model and MEMLS (Microwave Emission Model of Layered Snowpacks). By comparing the mathematical formulations side by side, three major differences were identified: (1) by assuming the scattered intensity is mostly (96%) in the forward direction, the HUT model simplifies the radiative transfer (RT) equation to 1-flux, whereas MEMLS uses a 2-flux theory; (2) the HUT scattering coefficient is much larger than that of MEMLS; (3) MEMLS considers the radiation trapped inside snow due to internal reflection via a 6-flux model, which is not included in HUT. Simulation experiments indicate that the large scattering coefficient of the HUT model compensates for its large forward scattering ratio to some extent, but the effects of the 1-flux simplification and the trapped radiation still result in different T(sub B) simulations between the HUT model and MEMLS. The models were compared with observations of natural snow cover at Sodankylä, Finland; Churchill, Canada; and Colorado, USA. No optimization of the snow grain size was performed. The comparison shows that the HUT model tends to underestimate T(sub B) for deep snow. MEMLS with the physically-based improved Born approximation performed best among the models, with a bias of -1.4 K and an RMSE of 11.0 K.

  2. Using Parameter Constraints to Choose State Structures in Cost-Effectiveness Modelling.

    PubMed

    Thom, Howard; Jackson, Chris; Welton, Nicky; Sharples, Linda

    2017-09-01

    This article addresses the choice of state structure in a cost-effectiveness multi-state model. Key model outputs, such as treatment recommendations and prioritisation of future research, may be sensitive to state structure choice. For example, it may be uncertain whether to consider similar disease severities or similar clinical events as the same state or as separate states. Standard statistical methods for comparing models require a common reference dataset but merging states in a model aggregates the data, rendering these methods invalid. We propose a method that involves re-expressing a model with merged states as a model on the larger state space in which particular transition probabilities, costs and utilities are constrained to be equal between states. This produces a model that gives identical estimates of cost effectiveness to the model with merged states, while leaving the data unchanged. The comparison of state structures can be achieved by comparing maximised likelihoods or information criteria between constrained and unconstrained models. We can thus test whether the costs and/or health consequences for a patient in two states are the same, and hence if the states can be merged. We note that different structures can be used for rates, costs and utilities, as appropriate. We illustrate our method with applications to two recent models evaluating the cost effectiveness of prescribing anti-depressant medications by depression severity and the cost effectiveness of diagnostic tests for coronary artery disease. State structures in cost-effectiveness models can be compared using standard methods to compare constrained and unconstrained models.
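The proposed test, comparing a constrained (merged-state) model against an unconstrained one on the same data, can be illustrated with a toy binomial transition example. The counts below are hypothetical, and the sketch uses a simple likelihood-ratio/AIC comparison rather than the paper's full multi-state cost-effectiveness machinery:

```python
import math

def binom_loglik(k, n, p):
    """Binomial log-likelihood (combinatorial constant omitted; it is identical
    across the two models and cancels in the comparison)."""
    return k * math.log(p) + (n - k) * math.log(1.0 - p)

# Hypothetical transition counts: events out of two candidate states that we
# are deciding whether to merge.
k_a, n_a = 30, 200   # state A: 30 events in 200 patient-cycles
k_b, n_b = 55, 210   # state B: 55 events in 210 patient-cycles

# Unconstrained model: separate transition probabilities (2 parameters).
ll_free = binom_loglik(k_a, n_a, k_a / n_a) + binom_loglik(k_b, n_b, k_b / n_b)

# Constrained model: shared probability, equivalent to merging the states (1 parameter).
p_pool = (k_a + k_b) / (n_a + n_b)
ll_merged = binom_loglik(k_a, n_a, p_pool) + binom_loglik(k_b, n_b, p_pool)

aic_free = 2 * 2 - 2 * ll_free
aic_merged = 2 * 1 - 2 * ll_merged
lr_stat = 2 * (ll_free - ll_merged)   # ~ chi-squared with 1 df if merging is valid
print(f"LR statistic: {lr_stat:.2f}, AIC free: {aic_free:.1f}, AIC merged: {aic_merged:.1f}")
```

A large likelihood-ratio statistic (or a lower AIC for the unconstrained model) is evidence that the two states behave differently and should not be merged, which is exactly the decision the article's constrained-model approach formalises.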

  3. Mosaic anisotropy model for magnetic interactions in mesostructured crystals

    NASA Astrophysics Data System (ADS)

    Goldman, Abby R.; Asenath-Smith, Emily; Estroff, Lara A.

    2017-10-01

    We propose a new model for interpreting the magnetic interactions in crystals with mosaic texture called the mosaic anisotropy (MA) model. We test the MA model using hematite as a model system, comparing mosaic crystals to polycrystals, single crystal nanoparticles, and bulk single crystals. Vibrating sample magnetometry confirms the hypothesis of the MA model that mosaic crystals have larger remanence (Mr/Ms) and coercivity (Hc) compared to polycrystalline or bulk single crystals. By exploring the magnetic properties of mesostructured crystalline materials, we may be able to develop new routes to engineering harder magnets.

  4. A Wavelet Support Vector Machine Combination Model for Singapore Tourist Arrival to Malaysia

    NASA Astrophysics Data System (ADS)

    Rafidah, A.; Shabri, Ani; Nurulhuda, A.; Suhaila, Y.

    2017-08-01

    In this study, a wavelet support vector machine model (WSVM) is proposed and applied to monthly Singapore tourist time series prediction. The WSVM model is a combination of wavelet analysis and support vector machine (SVM). The study has two parts: in the first we compare kernel functions, and in the second we compare the developed model with the single SVM model. The results showed that the linear kernel performed better than the RBF kernel, and that WSVM outperformed the single SVM model in forecasting monthly Singapore tourist arrivals to Malaysia.

  5. Early bursts of body size and shape evolution are rare in comparative data.

    PubMed

    Harmon, Luke J; Losos, Jonathan B; Jonathan Davies, T; Gillespie, Rosemary G; Gittleman, John L; Bryan Jennings, W; Kozak, Kenneth H; McPeek, Mark A; Moreno-Roark, Franck; Near, Thomas J; Purvis, Andy; Ricklefs, Robert E; Schluter, Dolph; Schulte Ii, James A; Seehausen, Ole; Sidlauskas, Brian L; Torres-Carvajal, Omar; Weir, Jason T; Mooers, Arne Ø

    2010-08-01

    George Gaylord Simpson famously postulated that much of life's diversity originated as adaptive radiations-more or less simultaneous divergences of numerous lines from a single ancestral adaptive type. However, identifying adaptive radiations has proven difficult due to a lack of broad-scale comparative datasets. Here, we use phylogenetic comparative data on body size and shape in a diversity of animal clades to test a key model of adaptive radiation, in which initially rapid morphological evolution is followed by relative stasis. We compared the fit of this model to both single selective peak and random walk models. We found little support for the early-burst model of adaptive radiation, whereas both other models, particularly that of selective peaks, were commonly supported. In addition, we found that the net rate of morphological evolution varied inversely with clade age. The youngest clades appear to evolve most rapidly because long-term change typically does not attain the amount of divergence predicted from rates measured over short time scales. Across our entire analysis, the dominant pattern was one of constraints shaping evolution continually through time rather than rapid evolution followed by stasis. We suggest that the classical model of adaptive radiation, where morphological evolution is initially rapid and slows through time, may be rare in comparative data.

  6. Full Bayes Poisson gamma, Poisson lognormal, and zero inflated random effects models: Comparing the precision of crash frequency estimates.

    PubMed

    Aguero-Valverde, Jonathan

    2013-01-01

    In recent years, complex statistical modeling approaches have been proposed to handle the unobserved heterogeneity and the excess of zeros frequently found in crash data, including random effects and zero-inflated models. This research compares random effects, zero-inflated, and zero-inflated random effects models using a full Bayes hierarchical approach. The models are compared not just in terms of goodness-of-fit measures but also in terms of the precision of posterior crash frequency estimates, since the precision of these estimates is vital for ranking sites for engineering improvement. Fixed-over-time random effects models are also compared to independent-over-time random effects models. For the crash dataset being analyzed, it was found that once the random effects are included in the zero-inflated models, the probability of being in the zero state is drastically reduced, and the zero-inflated models degenerate to their non-zero-inflated counterparts. Also, by fixing the random effects over time, the fit of the models and the precision of the crash frequency estimates are significantly increased. It was found that the rankings of the fixed-over-time random effects models are very consistent among them. In addition, the results show that by fixing the random effects over time, the standard errors of the crash frequency estimates are significantly reduced for the majority of the segments at the top of the ranking. Copyright © 2012 Elsevier Ltd. All rights reserved.
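The core comparison here, whether a zero-inflated model earns its extra parameter on zero-heavy count data, can be sketched with a maximum-likelihood toy example. The crash counts are hypothetical and a crude grid search stands in for the paper's full Bayes hierarchical estimation:

```python
import math

def pois_logpmf(k, lam):
    """Poisson log-probability mass."""
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def zip_loglik(counts, pi, lam):
    """Zero-inflated Poisson log-likelihood: a zero arises either from the
    structural-zero state (probability pi) or from the Poisson count process."""
    ll = 0.0
    for k in counts:
        if k == 0:
            ll += math.log(pi + (1.0 - pi) * math.exp(-lam))
        else:
            ll += math.log(1.0 - pi) + pois_logpmf(k, lam)
    return ll

# Hypothetical road-segment crash counts with an excess of zeros.
counts = [0] * 60 + [1] * 15 + [2] * 12 + [3] * 8 + [4] * 5

# Plain Poisson: the MLE of lambda is the sample mean.
lam_hat = sum(counts) / len(counts)
ll_pois = sum(pois_logpmf(k, lam_hat) for k in counts)

# ZIP: crude grid search for the MLE (adequate for a sketch).
ll_zip, pi_hat, lam_zip = max(
    ((zip_loglik(counts, p, l), p, l)
     for p in (i / 50 for i in range(1, 50))       # pi in (0, 1)
     for l in (j / 20 for j in range(2, 80))),     # lambda in (0.1, 4)
    key=lambda t: t[0])

print(f"Poisson logL {ll_pois:.1f} vs ZIP logL {ll_zip:.1f} (pi≈{pi_hat}, lam≈{lam_zip})")
```

With this many structural zeros the ZIP likelihood is clearly higher; the paper's finding is the reverse phenomenon: once random effects soak up the heterogeneity, the estimated zero-state probability collapses and the ZIP model reduces to its plain counterpart.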

  7. Genomic-Enabled Prediction Kernel Models with Random Intercepts for Multi-environment Trials.

    PubMed

    Cuevas, Jaime; Granato, Italo; Fritsche-Neto, Roberto; Montesinos-Lopez, Osval A; Burgueño, Juan; Bandeira E Sousa, Massaine; Crossa, José

    2018-03-28

    In this study, we compared the prediction accuracy of the main genotypic effect model (MM) without G×E interactions, the multi-environment single variance G×E deviation model (MDs), and the multi-environment environment-specific variance G×E deviation model (MDe) where the random genetic effects of the lines are modeled with the markers (or pedigree). With the objective of further modeling the genetic residual of the lines, we incorporated the random intercepts of the lines ([Formula: see text]) and generated another three models. Each of these 6 models was fitted with a linear kernel method (Genomic Best Linear Unbiased Predictor, GB) and a Gaussian kernel (GK) method. We compared these 12 model-method combinations with another two multi-environment G×E interactions models with unstructured variance-covariances (MUC) using GB and GK kernels (4 model-method combinations). Thus, we compared the genomic-enabled prediction accuracy of a total of 16 model-method combinations on two maize data sets with positive phenotypic correlations among environments, and on two wheat data sets with complex G×E that includes some negative and close-to-zero phenotypic correlations among environments. The two models (MDs and MDe with the random intercept of the lines and the GK method) were computationally efficient and gave high prediction accuracy in the two maize data sets. Regarding the more complex G×E wheat data sets, the model-method combinations with G×E (MDs and MDe), including the random intercepts of the lines with the GK method, had important savings in computing time as compared with the G×E interaction multi-environment models with unstructured variance-covariances, but with lower genomic prediction accuracy. Copyright © 2018 Cuevas et al.

  8. Genomic-Enabled Prediction Kernel Models with Random Intercepts for Multi-environment Trials

    PubMed Central

    Cuevas, Jaime; Granato, Italo; Fritsche-Neto, Roberto; Montesinos-Lopez, Osval A.; Burgueño, Juan; Bandeira e Sousa, Massaine; Crossa, José

    2018-01-01

    In this study, we compared the prediction accuracy of the main genotypic effect model (MM) without G×E interactions, the multi-environment single variance G×E deviation model (MDs), and the multi-environment environment-specific variance G×E deviation model (MDe) where the random genetic effects of the lines are modeled with the markers (or pedigree). With the objective of further modeling the genetic residual of the lines, we incorporated the random intercepts of the lines (l) and generated another three models. Each of these 6 models was fitted with a linear kernel method (Genomic Best Linear Unbiased Predictor, GB) and a Gaussian kernel (GK) method. We compared these 12 model-method combinations with another two multi-environment G×E interactions models with unstructured variance-covariances (MUC) using GB and GK kernels (4 model-method combinations). Thus, we compared the genomic-enabled prediction accuracy of a total of 16 model-method combinations on two maize data sets with positive phenotypic correlations among environments, and on two wheat data sets with complex G×E that includes some negative and close-to-zero phenotypic correlations among environments. The two models (MDs and MDe with the random intercept of the lines and the GK method) were computationally efficient and gave high prediction accuracy in the two maize data sets. Regarding the more complex G×E wheat data sets, the model-method combinations with G×E (MDs and MDe), including the random intercepts of the lines with the GK method, had important savings in computing time as compared with the G×E interaction multi-environment models with unstructured variance-covariances, but with lower genomic prediction accuracy. PMID:29476023

  9. Comparing five modelling techniques for predicting forest characteristics

    Treesearch

    Gretchen G. Moisen; Tracey S. Frescino

    2002-01-01

    Broad-scale maps of forest characteristics are needed throughout the United States for a wide variety of forest land management applications. Inexpensive maps can be produced by modelling forest class and structure variables collected in nationwide forest inventories as functions of satellite-based information. But little work has been directed at comparing modelling...

  10. Comparing Mapped Plot Estimators

    Treesearch

    Paul C. Van Deusen

    2006-01-01

    Two alternative derivations of estimators for mean and variance from mapped plots are compared by considering the models that support the estimators and by simulation. It turns out that both models lead to the same estimator for the mean but lead to very different variance estimators. The variance estimators based on the least valid model assumptions are shown to...

  11. On Applications of Rasch Models in International Comparative Large-Scale Assessments: A Historical Review

    ERIC Educational Resources Information Center

    Wendt, Heike; Bos, Wilfried; Goy, Martin

    2011-01-01

    Several current international comparative large-scale assessments of educational achievement (ICLSA) make use of "Rasch models", to address functions essential for valid cross-cultural comparisons. From a historical perspective, ICLSA and Georg Rasch's "models for measurement" emerged at about the same time, half a century ago. However, the…

  12. Multilevel Modeling and Ordinary Least Squares Regression: How Comparable Are They?

    ERIC Educational Resources Information Center

    Huang, Francis L.

    2018-01-01

    Studies analyzing clustered data sets using both multilevel models (MLMs) and ordinary least squares (OLS) regression have generally concluded that resulting point estimates, but not the standard errors, are comparable with each other. However, the accuracy of the estimates of OLS models is important to consider, as several alternative techniques…

  13. Comparing Latent Structures of the Grade of Membership, Rasch, and Latent Class Models

    ERIC Educational Resources Information Center

    Erosheva, Elena A.

    2005-01-01

    This paper focuses on model interpretation issues and employs a geometric approach to compare the potential value of using the Grade of Membership (GoM) model in representing population heterogeneity. We consider population heterogeneity manifolds generated by letting subject specific parameters vary over their natural range, while keeping other…

  14. Comparative Effectiveness of Echoic and Modeling Procedures in Language Instruction With Culturally Disadvantaged Children.

    ERIC Educational Resources Information Center

    Stern, Carolyn; Keislar, Evan

    In an attempt to explore a systematic approach to language expansion and improved sentence structure, echoic and modeling procedures for language instruction were compared. Four hypotheses were formulated: (1) children who use modeling procedures will produce better structured sentences than children who use echoic prompting, (2) both echoic and…

  15. Psychological Implications of Motherhood and Fatherhood in Midlife: Evidence from Sibling Models

    ERIC Educational Resources Information Center

    Pudrovska, Tetyana

    2008-01-01

    Using data from 4,744 full, twin, half-, adopted, and stepsiblings in the Wisconsin Longitudinal Study, I examine psychological consequences of motherhood and fatherhood in midlife. My analysis includes between-family models that compare individuals across families and within-family models comparing siblings from the same family to account for…

  16. Modeling vs. Coaching of Argumentation in a Case-Based Learning Environment.

    ERIC Educational Resources Information Center

    Li, Tiancheng; And Others

    The major purposes of this study are: (1) to investigate and compare the effectiveness of two instructional strategies, modeling and coaching on helping students to articulate and support their decisions in a case-based learning environment; (2) to compare the effectiveness of modeling and coaching on helping students address essential criteria in…

  17. STRUCTURAL ESTIMATES OF TREATMENT EFFECTS ON OUTCOMES USING RETROSPECTIVE DATA: AN APPLICATION TO DUCTAL CARCINOMA IN SITU

    PubMed Central

    Gold, Heather Taffet; Sorbero, Melony E. S.; Griggs, Jennifer J.; Do, Huong T.; Dick, Andrew W.

    2013-01-01

    Analysis of observational cohort data is subject to bias from unobservable risk selection. We compared econometric models and treatment effectiveness estimates using the linked Surveillance, Epidemiology, and End Results (SEER)-Medicare claims data for women diagnosed with ductal carcinoma in situ. Treatment effectiveness estimates for mastectomy and breast conserving surgery (BCS) with or without radiotherapy were compared using three different models: a simultaneous-equations model, a discrete-time survival model with unobserved heterogeneity (frailty), and a proportional hazards model. Overall trends in disease-free survival (DFS), or time to first subsequent breast event, by treatment are similar regardless of the model, with mastectomy yielding the highest DFS over 8 years of follow-up, followed by BCS with radiotherapy, and then BCS alone. Absolute rates and direction of bias varied substantially by treatment strategy. DFS was underestimated by the single-equation and frailty models compared to the simultaneous-equations model and randomized controlled trial results for BCS with radiotherapy, and overestimated for BCS alone. PMID:21602195

  18. External validation of EPIWIN biodegradation models.

    PubMed

    Posthumus, R; Traas, T P; Peijnenburg, W J G M; Hulzebos, E M

    2005-01-01

    The BIOWIN biodegradation models were evaluated for their suitability for regulatory purposes. BIOWIN includes the linear and non-linear BIODEG and MITI models for estimating the probability of rapid aerobic biodegradation and an expert survey model for primary and ultimate biodegradation estimation. Experimental biodegradation data for 110 newly notified substances were compared with the estimations of the different models. The models were applied separately and in combinations to determine which model(s) showed the best performance. The results of this study were compared with the results of other validation studies and other biodegradation models. The BIOWIN models predict not-readily biodegradable substances with high accuracy, but are less accurate for readily biodegradable ones. In view of the high environmental concern over persistent chemicals, and in view of the large number of not-readily biodegradable chemicals compared to readily biodegradable ones, a model is preferred that gives a minimum of false positives without a correspondingly high percentage of false negatives. A combination of the BIOWIN models (BIOWIN2 or BIOWIN6) showed the highest predictive value for identifying not-readily biodegradable substances. However, the highest score for overall predictivity with the lowest percentage of false predictions was achieved by applying BIOWIN3 (pass level 2.75) and BIOWIN6.
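    The combination strategy evaluated above can be sketched as follows (illustrative probabilities and a 0.5 cutoff, not the study's data): a substance is called readily biodegradable only when both models agree, which keeps false positives low:

```python
def classify(p_model_a, p_model_b, cutoff=0.5):
    """Predict 'readily biodegradable' only when both probabilities exceed the cutoff."""
    return p_model_a > cutoff and p_model_b > cutoff

# (probability from model A, probability from model B, observed ready?)
substances = [
    (0.9, 0.8, True),   # correctly predicted ready
    (0.7, 0.3, False),  # models disagree -> predicted not ready
    (0.2, 0.1, False),  # correctly predicted not ready
    (0.6, 0.9, False),  # false positive: predicted ready, observed not ready
]

false_pos = sum(1 for a, b, obs in substances if classify(a, b) and not obs)
false_neg = sum(1 for a, b, obs in substances if not classify(a, b) and obs)
```

    Requiring agreement between two models trades a few extra false negatives for far fewer false positives, which matches the regulatory preference stated above.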

  19. Updated Model of the Solar Energetic Proton Environment in Space

    NASA Astrophysics Data System (ADS)

    Jiggens, Piers; Heynderickx, Daniel; Sandberg, Ingmar; Truscott, Pete; Raukunen, Osku; Vainio, Rami

    2018-05-01

    The Solar Accumulated and Peak Proton and Heavy Ion Radiation Environment (SAPPHIRE) model provides environment specification outputs for all aspects of the Solar Energetic Particle (SEP) environment. The model is based upon a thoroughly cleaned and carefully processed data set. Herein the evolution of the solar proton model is discussed with comparisons to other models and data. This paper discusses the construction of the underlying data set, the modelling methodology, optimisation of fitted flux distributions and extrapolation of model outputs to cover a range of proton energies from 0.1 MeV to 1 GeV. The model provides outputs in terms of mission cumulative fluence, maximum event fluence and peak flux for both solar maximum and solar minimum periods. A new method for describing maximum event fluence and peak flux outputs in terms of 1-in-x-year SPEs is also described. SAPPHIRE proton model outputs are compared with previous models including CREME96, ESP-PSYCHIC and the JPL model. Low energy outputs are compared to SEP data from ACE/EPAM whilst high energy outputs are compared to a new model based on GLEs detected by Neutron Monitors (NMs).

  20. Reducing Cascading Failure Risk by Increasing Infrastructure Network Interdependence.

    PubMed

    Korkali, Mert; Veneman, Jason G; Tivnan, Brian F; Bagrow, James P; Hines, Paul D H

    2017-03-20

    Increased interconnection between critical infrastructure networks, such as electric power and communications systems, has important implications for infrastructure reliability and security. Others have shown that increased coupling between networks that are vulnerable to internetwork cascading failures can increase vulnerability. However, the mechanisms of cascading in these models differ from those in real systems and such models disregard new functions enabled by coupling, such as intelligent control during a cascade. This paper compares the robustness of simple topological network models to models that more accurately reflect the dynamics of cascading in a particular case of coupled infrastructures. First, we compare a topological contagion model to a power grid model. Second, we compare a percolation model of internetwork cascading to three models of interdependent power-communication systems. In both comparisons, the more detailed models suggest substantially different conclusions, relative to the simpler topological models. In all but the most extreme case, our model of a "smart" power network coupled to a communication system suggests that increased power-communication coupling decreases vulnerability, in contrast to the percolation model. Together, these results suggest that robustness can be enhanced by interconnecting networks with complementary capabilities if modes of internetwork failure propagation are constrained.
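    A minimal sketch of the kind of topological contagion model the paper compares against detailed grid dynamics (the graph and thresholds are illustrative): two small networks joined by one interdependency link, where a lower failure threshold lets the cascade cross the coupling:

```python
def cascade(adj, seeds, phi=0.5):
    """Watts-style threshold contagion: a node fails once at least a
    fraction phi of its neighbours have failed."""
    failed = set(seeds)
    changed = True
    while changed:
        changed = False
        for node, nbs in adj.items():
            if node not in failed and sum(nb in failed for nb in nbs) / len(nbs) >= phi:
                failed.add(node)
                changed = True
    return failed

# Two triangles coupled by a single interdependency link (2 - 3).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}

contained = cascade(adj, {0}, phi=0.5)  # cascade stops at the coupling node
spread = cascade(adj, {0}, phi=0.3)     # lower threshold: failure crosses over
```

    In this purely topological picture, coupling only adds a propagation path; the paper's point is that more realistic models, in which the coupled network can also provide intelligent control, can reverse that conclusion.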

  1. Reducing Cascading Failure Risk by Increasing Infrastructure Network Interdependence

    NASA Astrophysics Data System (ADS)

    Korkali, Mert; Veneman, Jason G.; Tivnan, Brian F.; Bagrow, James P.; Hines, Paul D. H.

    2017-03-01

    Increased interconnection between critical infrastructure networks, such as electric power and communications systems, has important implications for infrastructure reliability and security. Others have shown that increased coupling between networks that are vulnerable to internetwork cascading failures can increase vulnerability. However, the mechanisms of cascading in these models differ from those in real systems and such models disregard new functions enabled by coupling, such as intelligent control during a cascade. This paper compares the robustness of simple topological network models to models that more accurately reflect the dynamics of cascading in a particular case of coupled infrastructures. First, we compare a topological contagion model to a power grid model. Second, we compare a percolation model of internetwork cascading to three models of interdependent power-communication systems. In both comparisons, the more detailed models suggest substantially different conclusions, relative to the simpler topological models. In all but the most extreme case, our model of a “smart” power network coupled to a communication system suggests that increased power-communication coupling decreases vulnerability, in contrast to the percolation model. Together, these results suggest that robustness can be enhanced by interconnecting networks with complementary capabilities if modes of internetwork failure propagation are constrained.

  2. Reducing Cascading Failure Risk by Increasing Infrastructure Network Interdependence

    PubMed Central

    Korkali, Mert; Veneman, Jason G.; Tivnan, Brian F.; Bagrow, James P.; Hines, Paul D. H.

    2017-01-01

    Increased interconnection between critical infrastructure networks, such as electric power and communications systems, has important implications for infrastructure reliability and security. Others have shown that increased coupling between networks that are vulnerable to internetwork cascading failures can increase vulnerability. However, the mechanisms of cascading in these models differ from those in real systems and such models disregard new functions enabled by coupling, such as intelligent control during a cascade. This paper compares the robustness of simple topological network models to models that more accurately reflect the dynamics of cascading in a particular case of coupled infrastructures. First, we compare a topological contagion model to a power grid model. Second, we compare a percolation model of internetwork cascading to three models of interdependent power-communication systems. In both comparisons, the more detailed models suggest substantially different conclusions, relative to the simpler topological models. In all but the most extreme case, our model of a “smart” power network coupled to a communication system suggests that increased power-communication coupling decreases vulnerability, in contrast to the percolation model. Together, these results suggest that robustness can be enhanced by interconnecting networks with complementary capabilities if modes of internetwork failure propagation are constrained. PMID:28317835

  3. Statistical Methodology for the Analysis of Repeated Duration Data in Behavioral Studies.

    PubMed

    Letué, Frédérique; Martinez, Marie-José; Samson, Adeline; Vilain, Anne; Vilain, Coriandre

    2018-03-15

    Repeated duration data are frequently used in behavioral studies. Classical linear or log-linear mixed models are often inadequate to analyze such data, because they usually consist of nonnegative and skew-distributed variables. Therefore, we recommend use of a statistical methodology specific to duration data. We propose a methodology based on Cox mixed models, implemented in the R language. This semiparametric model is flexible enough to fit duration data. To compare log-linear and Cox mixed models in terms of goodness-of-fit on real data sets, we also provide a procedure based on simulations and quantile-quantile plots. We present two examples from a data set of speech and gesture interactions, which illustrate the limitations of linear and log-linear mixed models, as compared to Cox models. The linear models are not validated on our data, whereas Cox models are. Moreover, in the second example, the Cox model exhibits a significant effect that the linear model does not. We provide methods to select the best-fitting models for repeated duration data and to compare statistical methodologies. In this study, we show that Cox models are best suited to the analysis of our data set.

  4. The Prediction of Consumer Buying Intentions: A Comparative Study of the Predictive Efficacy of Two Attitudinal Models. Faculty Working Paper No. 234.

    ERIC Educational Resources Information Center

    Bhagat, Rabi S.; And Others

    The role of attitudes in the conduct of buyer behavior is examined in the context of two competitive models of attitude structure and attitude-behavior relationship. Specifically, the objectives of the study were to compare the Fishbein and Sheth models on the criteria of predictive as well as cross validities. Data on both the models were…

  5. Comparing Models of Helper Behavior to Actual Practice in Telephone Crisis Intervention: A Silent Monitoring Study of Calls to the U.S. 1-800-SUICIDE Network

    ERIC Educational Resources Information Center

    Mishara, Brian L.; Chagnon, Francois; Daigle, Marc; Balan, Bogdan; Raymond, Sylvaine; Marcoux, Isabelle; Bardon, Cecile; Campbell, Julie K.; Berman, Alan

    2007-01-01

    Models of telephone crisis intervention in suicide prevention and best practices were developed from a literature review and surveys of crisis centers. We monitored 2,611 calls to 14 centers using reliable behavioral ratings to compare actual interventions with the models. Active listening and collaborative problem-solving models describe help…

  6. An assessment of some theoretical models used for the calculation of the refractive index of InxGa1-xAs

    NASA Astrophysics Data System (ADS)

    Engelbrecht, J. A. A.

    2018-04-01

    Theoretical models used for the determination of the refractive index of InxGa1-xAs are reviewed and compared. Attention is drawn to some problems experienced with some of the models. The models were also extended to the mid-infrared region of the electromagnetic spectrum, and theoretical results in this region are compared to previously published experimental results.

  7. A Comparative Analysis of Financial Reporting Models for Private and Public Sector Organizations.

    DTIC Science & Technology

    1995-12-01

    The objective of this thesis was to describe and compare different existing and evolving financial reporting models used in both the public and private sectors. To accomplish the objective, this thesis identified the existing financial reporting models for private sector business organizations, private sector nonprofit organizations, and state and local governments, as well as the evolving financial reporting model for the federal government.

  8. Modeling willingness to pay for land conservation easements: treatment of zero and protest bids and application and policy implications

    Treesearch

    Seong-Hoon Cho; Steven T. Yen; J. Michael Bowker; David H. Newman

    2008-01-01

    This study compares an ordered probit model and a Tobit model with selection to take into account both true zero and protest zero bids while estimating the willingness to pay (WTP) for conservation easements in Macon County, NC. By comparing the two models, the ordered/unordered selection issue of the protest responses is analyzed to demonstrate how the treatment of...

  9. DIDEM - An integrated model for comparative health damage costs calculation of air pollution

    NASA Astrophysics Data System (ADS)

    Ravina, Marco; Panepinto, Deborah; Zanetti, Maria Chiara

    2018-01-01

    Air pollution represents a continuous hazard to human health. Administrations, companies and the public need efficient indicators of the possible effects of a change in decisions, strategy or habits. The monetary quantification of the health effects of air pollution through the definition of external costs is increasingly recognized as a useful indicator to support decisions and information at all levels. The development of modelling tools for the calculation of external costs can provide support to analysts in the development of consistent and comparable assessments. In this paper, the DIATI Dispersion and Externalities Model (DIDEM) is presented. The DIDEM model calculates the delta-external costs of air pollution by comparing two alternative emission scenarios. This tool integrates CALPUFF's advanced dispersion modelling with the latest WHO recommendations on concentration-response functions. The model is based on the impact pathway method. It was designed to work with a fine spatial resolution and a local or national geographic scope. The modular structure allows users to input their own data sets. The DIDEM model was tested on a real case study, represented by a comparative analysis of the district heating system in Turin, Italy. Additional advantages and drawbacks of the tool are discussed in the paper. A comparison with other existing models worldwide is reported.
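    The impact-pathway chain that DIDEM automates can be sketched with illustrative numbers (not the model's data): a change in pollutant concentration is turned into attributable health cases via a concentration-response function (CRF) and then monetised:

```python
def external_cost(delta_conc_ug_m3, population, crf_per_ug_m3, unit_cost_eur):
    """Attributable cases = delta concentration * CRF slope * exposed population;
    external cost = cases * monetary value per case."""
    cases = delta_conc_ug_m3 * crf_per_ug_m3 * population
    return cases * unit_cost_eur

# Hypothetical comparison of two emission scenarios for a district-heating change.
cost_a = external_cost(2.0, 800_000, 1.0e-5, 50_000)  # baseline scenario
cost_b = external_cost(1.2, 800_000, 1.0e-5, 50_000)  # cleaner scenario
delta_cost = cost_a - cost_b  # delta-external cost between scenarios
```

    The delta-external cost is the decision-relevant quantity: it prices the health benefit of moving from the baseline to the cleaner scenario.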

  10. A comparative study of approaches to compute the field distribution of deep brain stimulation in the Hemiparkinson rat model.

    PubMed

    Bohme, Andrea; van Rienen, Ursula

    2016-08-01

    Computational modeling of the stimulating field distribution during Deep Brain Stimulation provides an opportunity to advance our knowledge of this neurosurgical therapy for Parkinson's disease. Several approaches exist to model the target region for Deep Brain Stimulation in Hemiparkinson rats with volume conductor models. We describe and compare the normalized mapping approach as well as modeling with three-dimensional structures, which incorporates curvilinear coordinates to ensure an anatomically realistic conductivity tensor orientation.

  11. 3D CFD Quantification of the Performance of a Multi-Megawatt Wind Turbine

    NASA Astrophysics Data System (ADS)

    Laursen, J.; Enevoldsen, P.; Hjort, S.

    2007-07-01

    This paper presents the results of 3D CFD rotor computations of a Siemens SWT-2.3-93 variable speed wind turbine with 45m blades. In the paper CFD is applied to a rotor at stationary wind conditions without wind shear, using the commercial multi-purpose CFD-solvers ANSYS CFX 10.0 and 11.0. When comparing modelled mechanical effects with findings from other models and measurements, good agreement is obtained. Similarly the computed force distributions compare very well, whereas some discrepancies are found when comparing with an in-house BEM model. By applying the reduced axial velocity method the local angle of attack has been derived from the CFD solutions, and from this knowledge and the computed force distributions, local airfoil profile coefficients have been computed and compared to BEM airfoil coefficients. Finally, the transition model of Langtry and Menter is tested on the rotor, and the results are compared with the results from the fully turbulent setup.

  12. The role of empathy and emotional intelligence in nurses' communication attitudes using regression models and fuzzy-set qualitative comparative analysis models.

    PubMed

    Giménez-Espert, María Del Carmen; Prado-Gascó, Vicente Javier

    2018-03-01

    To analyse the link between empathy and emotional intelligence as predictors of nurses' attitudes towards communication, while comparing the contribution of emotional aspects and attitudinal elements on potential behaviour. Nurses' attitudes towards communication, empathy and emotional intelligence are key skills for nurses involved in patient care. There are currently no studies analysing this link, and its investigation is needed because attitudes may influence communication behaviours. Correlational study. To attain this goal, self-reported instruments (attitudes towards communication of nurses, the Trait Emotional Meta-Mood Scale for trait emotional intelligence, and the Jefferson Scale of Nursing Empathy) were collected from 460 nurses between September 2015 and February 2016. Two different analytical methodologies were used: traditional regression models and fuzzy-set qualitative comparative analysis models. The results of the regression model suggest that the cognitive dimension of attitude is a significant and positive predictor of the behavioural dimension. The perspective-taking dimension of empathy and the emotional-clarity dimension of emotional intelligence were significant positive predictors of the dimensions of attitudes towards communication, except for the affective dimension (for which the association was negative). The results of the fuzzy-set qualitative comparative analysis models confirm that the combination of high levels of the cognitive dimension of attitudes, perspective-taking and emotional clarity explained high levels of the behavioural dimension of attitude. Empathy and emotional intelligence are predictors of nurses' attitudes towards communication, and the cognitive dimension of attitude is a good predictor of the behavioural dimension of attitudes towards communication of nurses in both regression models and fuzzy-set qualitative comparative analysis. In general, the fuzzy-set qualitative comparative analysis models appear to be better predictors than the regression models. These findings can be used to evaluate current practices, establish intervention strategies and evaluate their effectiveness. The evaluation of these variables and their relationships is important in creating a satisfied and sustainable workforce and improving quality of care and patient health. © 2018 John Wiley & Sons Ltd.
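    The core sufficiency measure behind fuzzy-set qualitative comparative analysis can be sketched as follows (hypothetical membership scores, not the study's data): the fuzzy AND of the conditions is the elementwise minimum, and consistency is sum(min(x, y)) / sum(x):

```python
def consistency(condition, outcome):
    """fsQCA sufficiency consistency = sum(min(x, y)) / sum(x)."""
    return sum(min(x, y) for x, y in zip(condition, outcome)) / sum(condition)

# Hypothetical fuzzy membership scores for four nurses.
cognitive   = [0.9, 0.8, 0.2, 0.7]  # cognitive dimension of attitude
perspective = [0.8, 0.9, 0.3, 0.6]  # perspective-taking (empathy)
clarity     = [0.7, 0.8, 0.4, 0.9]  # emotional clarity (emotional intelligence)
behaviour   = [0.9, 0.8, 0.3, 0.5]  # behavioural dimension (outcome)

# Fuzzy AND of the three conditions = elementwise minimum.
combo = [min(a, b, c) for a, b, c in zip(cognitive, perspective, clarity)]
score = consistency(combo, behaviour)
```

    A consistency score near 1 means that high membership in the combined condition is (almost) always accompanied by high membership in the outcome, which is the sense in which the combination "explains" the behavioural dimension.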

  13. A study of two subgrid-scale models and their effects on wake breakdown behind a wind turbine in uniform inflow

    NASA Astrophysics Data System (ADS)

    Martinez, Luis; Meneveau, Charles

    2014-11-01

    Large Eddy Simulations (LES) of the flow past a single wind turbine with uniform inflow have been performed. A goal of the simulations is to compare two turbulence subgrid-scale models and their effects in predicting the initial breakdown, transition and evolution of the wake behind the turbine. Prior works have often observed negligible sensitivities to subgrid-scale models. The flow is modeled using an in-house LES with pseudo-spectral discretization in horizontal planes and centered finite differencing in the vertical direction. Turbines are represented using the actuator line model. We compare the standard constant-coefficient Smagorinsky subgrid-scale model with the Lagrangian Scale Dependent Dynamic model (LSDM). The LSDM predicts faster transition to turbulence in the wake, whereas the standard Smagorinsky model predicts significantly delayed transition. The specified Smagorinsky coefficient is larger than the dynamic one on average, increasing diffusion and thus delaying transition. A second goal is to compare the resulting near-blade properties, such as local aerodynamic forces, from the LES with Blade Element Momentum Theory. Results will also be compared with those of the SOWFA package, the wind energy CFD framework from NREL. This work is supported by NSF (IGERT and IIA-1243482), uses XSEDE computing resources, and has benefitted from interactions with Dr. M. Churchfield of NREL.
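    The sensitivity to the Smagorinsky coefficient noted above can be illustrated with a back-of-the-envelope sketch (all values hypothetical): subgrid eddy viscosity scales with the square of the coefficient, so a fixed coefficient twice the dynamic average gives four times the subgrid diffusion:

```python
def eddy_viscosity(cs, delta, strain_rate_mag):
    """Smagorinsky closure: nu_t = (Cs * delta)^2 * |S|."""
    return (cs * delta) ** 2 * strain_rate_mag

delta = 0.1   # grid filter width (m), hypothetical
strain = 5.0  # resolved strain-rate magnitude |S| (1/s), hypothetical

nu_standard = eddy_viscosity(0.16, delta, strain)  # typical fixed coefficient
nu_dynamic = eddy_viscosity(0.08, delta, strain)   # smaller dynamic average
```

    The quadratic dependence on Cs is why an overlarge fixed coefficient adds enough diffusion to delay wake breakdown relative to the dynamic model.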

  14. Comparative modeling without implicit sequence alignments.

    PubMed

    Kolinski, Andrzej; Gront, Dominik

    2007-10-01

    The number of known protein sequences is about a thousand times larger than the number of experimentally solved 3D structures. For more than half of the protein sequences a close or distant structural analog can be identified. The key starting point in classical comparative modeling is to generate the best possible sequence alignment with a template or templates. With decreasing sequence similarity, the number of errors in the alignments increases, and these errors are the main causes of the decreasing accuracy of the molecular models generated. Here we propose a new approach to comparative modeling, which does not require the implicit alignment: the model-building phase explores geometric, evolutionary and physical properties of a template (or templates). The proposed method requires prior identification of a template, although the initial sequence alignment is ignored. The model is built using a very efficient reduced-representation search engine, CABS, to find the best possible superposition of the query protein onto the template represented as a 3D multi-featured scaffold. The criteria used include sequence similarity, predicted secondary structure consistency, local geometric features and hydrophobicity profile. For more difficult cases, the new method qualitatively outperforms existing schemes of comparative modeling. The algorithm unifies de novo modeling, 3D threading and sequence-based methods. The main idea is general and could easily be combined with other efficient modeling tools such as Rosetta, UNRES and others.

  15. "Let's get physical": advantages of a physical model over 3D computer models and textbooks in learning imaging anatomy.

    PubMed

    Preece, Daniel; Williams, Sarah B; Lam, Richard; Weller, Renate

    2013-01-01

    Three-dimensional (3D) information plays an important part in medical and veterinary education. Appreciating complex 3D spatial relationships requires a strong foundational understanding of anatomy and mental 3D visualization skills. Novel learning resources have been introduced to anatomy training to achieve this. Objective evaluation of their comparative efficacies remains scarce in the literature. This study developed and evaluated the use of a physical model in demonstrating the complex spatial relationships of the equine foot. It was hypothesized that the newly developed physical model would be more effective for students to learn magnetic resonance imaging (MRI) anatomy of the foot than textbooks or computer-based 3D models. Third year veterinary medicine students were randomly assigned to one of three teaching aid groups (physical model; textbooks; 3D computer model). The comparative efficacies of the three teaching aids were assessed through students' abilities to identify anatomical structures on MR images. Overall mean MRI assessment scores were significantly higher in students utilizing the physical model (86.39%) compared with students using textbooks (62.61%) and the 3D computer model (63.68%) (P < 0.001), with no significant difference between the textbook and 3D computer model groups (P = 0.685). Student feedback was also more positive in the physical model group compared with both the textbook and 3D computer model groups. Our results suggest that physical models may hold a significant advantage over alternative learning resources in enhancing visuospatial and 3D understanding of complex anatomical architecture, and that 3D computer models have significant limitations with regards to 3D learning. © 2013 American Association of Anatomists.

  16. Adaptation to Climate Change: A Comparative Analysis of Modeling Methods for Heat-Related Mortality

    PubMed Central

    Hondula, David M.; Bunker, Aditi; Ibarreta, Dolores; Liu, Junguo; Zhang, Xinxin; Sauerborn, Rainer

    2017-01-01

    Background: Multiple methods are employed for modeling adaptation when projecting the impact of climate change on heat-related mortality. The sensitivity of impacts to each is unknown because they have never been systematically compared. In addition, little is known about the relative sensitivity of impacts to “adaptation uncertainty” (i.e., the inclusion/exclusion of adaptation modeling) relative to using multiple climate models and emissions scenarios. Objectives: This study had three aims: a) Compare the range in projected impacts that arises from using different adaptation modeling methods; b) compare the range in impacts that arises from adaptation uncertainty with ranges from using multiple climate models and emissions scenarios; c) recommend modeling method(s) to use in future impact assessments. Methods: We estimated impacts for 2070–2099 for 14 European cities, applying six different methods for modeling adaptation; we also estimated impacts with five climate models run under two emissions scenarios to explore the relative effects of climate modeling and emissions uncertainty. Results: The range of the difference (percent) in impacts between including and excluding adaptation, irrespective of climate modeling and emissions uncertainty, can be as low as 28% with one method and up to 103% with another (mean across 14 cities). In 13 of 14 cities, the ranges in projected impacts due to adaptation uncertainty are larger than those associated with climate modeling and emissions uncertainty. Conclusions: Researchers should carefully consider how to model adaptation because it is a source of uncertainty that can be greater than the uncertainty in emissions and climate modeling. We recommend absolute threshold shifts and reductions in slope. https://doi.org/10.1289/EHP634 PMID:28885979
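    Two of the adaptation methods the authors recommend, absolute threshold shifts and slope reductions, can be sketched with a simple linear exposure-response model (all numbers illustrative):

```python
def excess_deaths(temps, threshold, slope):
    """Linear exposure-response: excess deaths accrue only above a heat threshold."""
    return sum(slope * (t - threshold) for t in temps if t > threshold)

summer = [24, 27, 30, 33, 36]  # daily mean temperatures (deg C), hypothetical

no_adapt = excess_deaths(summer, threshold=28, slope=2.0)
shifted = excess_deaths(summer, threshold=30, slope=2.0)   # +2 C threshold shift
flattened = excess_deaths(summer, threshold=28, slope=1.0) # halved slope
```

    Each adaptation assumption lowers the projected impact in a different way, which is why the choice of method can dominate the uncertainty from climate models and emissions scenarios.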

  17. Comparing spatial diversification and meta-population models in the Indo-Australian Archipelago

    PubMed Central

    Chalmandrier, Loïc; Albouy, Camille; Descombes, Patrice; Sandel, Brody; Faurby, Soren; Svenning, Jens-Christian; Zimmermann, Niklaus E.

    2018-01-01

    Reconstructing the processes that have shaped the emergence of biodiversity gradients is critical to understand the dynamics of diversification of life on Earth. Islands have traditionally been used as model systems to unravel the processes shaping biological diversity. MacArthur and Wilson's island biogeographic model predicts diversity to be based on dynamic interactions between colonization and extinction rates, while treating islands themselves as geologically static entities. The current spatial configuration of islands should influence meta-population dynamics, but long-term geological changes within archipelagos are also expected to have shaped island biodiversity, in part by driving diversification. Here, we compare two mechanistic models providing inferences on species richness at a biogeographic scale: a mechanistic spatial-temporal model of species diversification and a spatial meta-population model. While the meta-population model operates over a static landscape, the diversification model is driven by changes in the size and spatial configuration of islands through time. We compare the inferences of both models to floristic diversity patterns among land patches of the Indo-Australian Archipelago. Simulation results from the diversification model better matched observed diversity than a meta-population model constrained only by the contemporary landscape. The diversification model suggests that the dynamic re-positioning of islands promoting land disconnection and reconnection induced an accumulation of particularly high species diversity on Borneo, which is central within the island network. By contrast, the meta-population model predicts a higher diversity on the mainlands, which is less compatible with empirical data. Our analyses highlight that, by comparing models with contrasting assumptions, we can pinpoint the processes that are most compatible with extant biodiversity patterns. PMID:29657753
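    The static MacArthur-Wilson balance that the meta-population model builds on can be sketched as follows (rates are illustrative): richness equilibrates where immigration from a mainland pool balances island extinction:

```python
def equilibrium_richness(pool, c, e):
    """dS/dt = c*(pool - S) - e*S = 0  =>  S* = c*pool / (c + e)."""
    return c * pool / (c + e)

def simulate(pool, c, e, steps=1000, dt=0.01):
    """Forward-Euler integration of the island richness dynamics."""
    s = 0.0
    for _ in range(steps):
        s += (c * (pool - s) - e * s) * dt
    return s

S_star = equilibrium_richness(pool=100, c=0.2, e=0.3)
S_sim = simulate(pool=100, c=0.2, e=0.3)
```

    The diversification model discussed above goes beyond this static balance by letting the islands themselves move, merge and split through time, so that diversity also reflects past connectivity rather than the contemporary landscape alone.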

  18. Advancing bioluminescence imaging technology for the evaluation of anticancer agents in the MDA-MB-435-HAL-Luc mammary fat pad and subrenal capsule tumor models.

    PubMed

    Zhang, Cathy; Yan, Zhengming; Arango, Maria E; Painter, Cory L; Anderes, Kenna

    2009-01-01

    Tumors grafted s.c. or under the mammary fat pad (MFP) rarely develop efficient metastasis. By applying bioluminescence imaging (BLI) technology, the MDA-MB-435-HAL-Luc subrenal capsule (SRC) model was compared with the MFP model for disease progression, metastatic potential, and response to therapy. The luciferase-expressing MDA-MB-435-HAL-Luc cell line was used in both MFP and SRC models. BLI technology allowed longitudinal assessment of disease progression and the therapeutic response to PD-0332991, Avastin, and docetaxel. Immunohistochemical analysis of Ki67 and CD31 staining in the primary tumors was compared in these models. Caliper measurement was used in the MFP model to validate the BLI quantification of primary tumors. The primary tumors in MDA-MB-435-HAL-Luc MFP and SRC models displayed comparable growth rates and vascularity. However, tumor-bearing mice in the SRC model developed lung metastases much earlier (4 weeks) than in the MFP model (>7 weeks), and the metastatic progression contributed significantly to the survival time. In the MFP model, BLI and caliper measurements were comparable for quantifying palpable tumors, but BLI offered an advantage for detecting the primary tumors that fell below a palpable threshold and for visualizing metastases. In the SRC model, BLI allowed longitudinal assessment of the antitumor and antimetastatic effects of PD-0332991, Avastin, and docetaxel, and the results correlated with the survival benefits of these agents. The MDA-MB-435-HAL-Luc SRC model and the MFP model displayed differences in disease progression. BLI is an innovative approach for developing animal models and creates opportunities for improving preclinical evaluations of anticancer agents.

  19. Comparing spatial diversification and meta-population models in the Indo-Australian Archipelago.

    PubMed

    Chalmandrier, Loïc; Albouy, Camille; Descombes, Patrice; Sandel, Brody; Faurby, Soren; Svenning, Jens-Christian; Zimmermann, Niklaus E; Pellissier, Loïc

    2018-03-01

    Reconstructing the processes that have shaped the emergence of biodiversity gradients is critical to understand the dynamics of diversification of life on Earth. Islands have traditionally been used as model systems to unravel the processes shaping biological diversity. MacArthur and Wilson's island biogeographic model predicts diversity to be based on dynamic interactions between colonization and extinction rates, while treating islands themselves as geologically static entities. The current spatial configuration of islands should influence meta-population dynamics, but long-term geological changes within archipelagos are also expected to have shaped island biodiversity, in part by driving diversification. Here, we compare two mechanistic models providing inferences on species richness at a biogeographic scale: a mechanistic spatial-temporal model of species diversification and a spatial meta-population model. While the meta-population model operates over a static landscape, the diversification model is driven by changes in the size and spatial configuration of islands through time. We compare the inferences of both models to floristic diversity patterns among land patches of the Indo-Australian Archipelago. Simulation results from the diversification model better matched observed diversity than a meta-population model constrained only by the contemporary landscape. The diversification model suggests that the dynamic re-positioning of islands promoting land disconnection and reconnection induced an accumulation of particularly high species diversity on Borneo, which is central within the island network. By contrast, the meta-population model predicts a higher diversity on the mainlands, which is less compatible with empirical data. Our analyses highlight that, by comparing models with contrasting assumptions, we can pinpoint the processes that are most compatible with extant biodiversity patterns.

  20. Developing and testing a global-scale regression model to quantify mean annual streamflow

    NASA Astrophysics Data System (ADS)

    Barbarossa, Valerio; Huijbregts, Mark A. J.; Hendriks, A. Jan; Beusen, Arthur H. W.; Clavreul, Julie; King, Henry; Schipper, Aafke M.

    2017-01-01

    Quantifying mean annual flow of rivers (MAF) at ungauged sites is essential for assessments of global water supply, ecosystem integrity and water footprints. MAF can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict MAF based on climate and catchment characteristics. Yet, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. In this study, we developed a global-scale regression model for MAF based on a dataset unprecedented in size, using observations of discharge and catchment characteristics from 1885 catchments worldwide, ranging from 2 to 10^6 km^2 in area. In addition, we compared the performance of the regression model with the predictive ability of the spatially explicit global hydrological model PCR-GLOBWB by comparing results from both models to independent measurements. We obtained a regression model explaining 89% of the variance in MAF based on catchment area and catchment averaged mean annual precipitation and air temperature, slope and elevation. The regression model performed better than PCR-GLOBWB for the prediction of MAF, as root-mean-square error (RMSE) values were lower (0.29-0.38 compared to 0.49-0.57) and the modified index of agreement (d) was higher (0.80-0.83 compared to 0.72-0.75). Our regression model can be applied globally to estimate MAF at any point of the river network, thus providing a feasible alternative to spatially explicit process-based global hydrological models.
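
    The workflow described — a multiple linear regression for MAF plus the two reported goodness-of-fit measures, RMSE and the modified index of agreement (d) — can be sketched as follows. The data are synthetic stand-ins, not the paper's 1885-catchment dataset, and the predictor set is only what the abstract names.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the catchment dataset (hypothetical values):
# columns = log area, log precipitation, temperature, slope, elevation
n = 200
X = np.column_stack([
    rng.uniform(0.3, 6.0, n),   # log10 catchment area [km^2]
    rng.uniform(2.0, 3.5, n),   # log10 mean annual precipitation [mm]
    rng.uniform(-5, 25, n),     # mean annual air temperature [degC]
    rng.uniform(0, 30, n),      # mean slope [deg]
    rng.uniform(0, 3000, n),    # mean elevation [m]
])
true_beta = np.array([1.0, 1.2, -0.02, 0.01, -0.0001])
log_maf = X @ true_beta - 2.0 + rng.normal(0, 0.3, n)

# Ordinary least squares fit in log space
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, log_maf, rcond=None)
pred = A @ beta

# Goodness of fit: RMSE and Willmott's modified index of agreement (d1)
rmse = np.sqrt(np.mean((pred - log_maf) ** 2))
obar = log_maf.mean()
d1 = 1 - np.sum(np.abs(pred - log_maf)) / np.sum(
    np.abs(pred - obar) + np.abs(log_maf - obar))
print(round(rmse, 3), round(d1, 3))
```

    The log transform of area, precipitation and flow is an assumption commonly made for this kind of model, not a detail stated in the abstract.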

  1. Validation of Community Models: 2. Development of a Baseline, Using the Wang-Sheeley-Arge Model

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter

    2009-01-01

    This paper is the second in a series providing independent validation of community models of the outer corona and inner heliosphere. Here I present a comprehensive validation of the Wang-Sheeley-Arge (WSA) model. These results will serve as a baseline against which to compare the next generation of comparable forecasting models. The WSA model is used by a number of agencies to predict solar wind conditions at Earth up to 4 days into the future. Given its importance to both the research and forecasting communities, it is essential that its performance be measured systematically and independently. I offer just such an independent and systematic validation. I report skill scores for the model's predictions of wind speed and interplanetary magnetic field (IMF) polarity for a large set of Carrington rotations. The model was run in all its routinely used configurations. It ingests synoptic line-of-sight magnetograms. For this study I generated model results for monthly magnetograms from multiple observatories, spanning the Carrington rotation range from 1650 to 2074. I compare the influence of the different magnetogram sources and performance at quiet and active times. I also consider the ability of the WSA model to forecast both sharp transitions in wind speed from slow to fast wind and reversals in the polarity of the radial component of the IMF. These results will serve as a baseline against which to compare future versions of the model as well as the current and future generation of magnetohydrodynamic models under development for forecasting use.

  2. Application of thin-plate spline transformations to finite element models, or, how to turn a bog turtle into a spotted turtle to analyze both.

    PubMed

    Stayton, C Tristan

    2009-05-01

    Finite element (FE) models are popular tools that allow biologists to analyze the biomechanical behavior of complex anatomical structures. However, the expense and time required to create models from specimens has prevented comparative studies from involving large numbers of species. A new method is presented for transforming existing FE models using geometric morphometric methods. Homologous landmark coordinates are digitized on the FE model and on a target specimen into which the FE model is being transformed. These coordinates are used to create a thin-plate spline function and coefficients, which are then applied to every node in the FE model. This function smoothly interpolates the location of points between landmarks, transforming the geometry of the original model to match the target. This new FE model is then used as input in FE analyses. This procedure is demonstrated with turtle shells: a Glyptemys muhlenbergii model is transformed into Clemmys guttata and Actinemys marmorata models. Models are loaded and the resulting stresses are compared. The validity of the models is tested by crushing actual turtle shells in a materials testing machine and comparing those results to predictions from FE models. General guidelines, cautions, and possibilities for this procedure are also presented.
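
    The node-warping step described above — fit a thin-plate spline to homologous landmarks, then apply the same mapping to every FE node — can be sketched with SciPy's RBF interpolator. The landmark and node coordinates below are randomly generated placeholders, not real turtle-shell data.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)

# Hypothetical homologous landmarks digitized on the source FE model
# and on the target specimen (3-D coordinates, arbitrary units)
src_landmarks = rng.uniform(0, 10, (12, 3))
tgt_landmarks = src_landmarks + rng.normal(0, 0.5, (12, 3))

# Thin-plate spline: exact at the landmarks, smooth in between;
# one interpolator maps 3-D points to their 3-D target positions
tps = RBFInterpolator(src_landmarks, tgt_landmarks,
                      kernel="thin_plate_spline", degree=1)

# Apply the mapping to every node of the FE mesh
fe_nodes = rng.uniform(0, 10, (5000, 3))
warped_nodes = tps(fe_nodes)

# The landmarks themselves are reproduced up to numerical error
err = np.abs(tps(src_landmarks) - tgt_landmarks).max()
print(err < 1e-6)
```

    The warped node coordinates would then replace the originals in the FE input deck while keeping the element connectivity unchanged, which is what makes the transformed model immediately analyzable.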

  3. MODBASE, a database of annotated comparative protein structure models

    PubMed Central

    Pieper, Ursula; Eswar, Narayanan; Stuart, Ashley C.; Ilyin, Valentin A.; Sali, Andrej

    2002-01-01

    MODBASE (http://guitar.rockefeller.edu/modbase) is a relational database of annotated comparative protein structure models for all available protein sequences matched to at least one known protein structure. The models are calculated by MODPIPE, an automated modeling pipeline that relies on PSI-BLAST, IMPALA and MODELLER. MODBASE uses the MySQL relational database management system for flexible and efficient querying, and the MODVIEW Netscape plugin for viewing and manipulating multiple sequences and structures. It is updated regularly to reflect the growth of the protein sequence and structure databases, as well as improvements in the software for calculating the models. For ease of access, MODBASE is organized into different datasets. The largest dataset contains models for domains in 304 517 out of 539 171 unique protein sequences in the complete TrEMBL database (23 March 2001); only models based on significant alignments (PSI-BLAST E-value < 10^-4) and models assessed to have the correct fold are included. Other datasets include models for target selection and structure-based annotation by the New York Structural Genomics Research Consortium, models for prediction of genes in the Drosophila melanogaster genome, models for structure determination of several ribosomal particles and models calculated by the MODWEB comparative modeling web server. PMID:11752309

  4. COMPARING A NEW ALGORITHM WITH THE CLASSIC METHODS FOR ESTIMATING THE NUMBER OF FACTORS. (R826238)

    EPA Science Inventory

    This paper presents and compares a new algorithm for finding the number of factors in a data analytic model. After we describe the new method, called NUMFACT, we compare it with standard methods for finding the number of factors to use in a model. The standard methods that we ...

  5. Assessing Ecosystem Model Performance in Semiarid Systems

    NASA Astrophysics Data System (ADS)

    Thomas, A.; Dietze, M.; Scott, R. L.; Biederman, J. A.

    2017-12-01

    In ecosystem process modelling, comparing outputs to benchmark datasets observed in the field is an important way to validate models, allowing the modelling community to track model performance over time and compare models at specific sites. Multi-model comparison projects as well as models themselves have largely been focused on temperate forests and similar biomes. Semiarid regions, on the other hand, are underrepresented in land surface and ecosystem modelling efforts, and yet will be disproportionately impacted by disturbances such as climate change due to their sensitivity to changes in the water balance. Benchmarking models at semiarid sites is an important step in assessing and improving models' suitability for predicting the impact of disturbance on semiarid ecosystems. In this study, several ecosystem models were compared at a semiarid grassland in southwestern Arizona using PEcAn, or the Predictive Ecosystem Analyzer, an open-source eco-informatics toolbox ideal for creating the repeatable model workflows necessary for benchmarking. Models included SIPNET, DALEC, JULES, ED2, GDAY, LPJ-GUESS, MAESPA, CLM, CABLE, and FATES. Comparison between model output and benchmarks such as net ecosystem exchange (NEE) tended to produce high root mean square error and low correlation coefficients, reflecting poor simulation of seasonality and a tendency for models to simulate much larger carbon sources than observed. These results indicate that ecosystem models do not currently adequately represent semiarid ecosystem processes.
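
    The scoring step described — model output compared against an observed NEE benchmark using RMSE and a correlation coefficient — reduces to a few lines. The "observations" and "model runs" below are synthetic illustrations, not PEcAn output.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic daily NEE "observations" with a seasonal cycle, and three
# hypothetical model runs of varying quality (illustrative only)
t = np.arange(365)
obs = -2.0 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 0.5, 365)
sims = {
    "model_A": -1.8 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 0.6, 365),
    "model_B": -0.5 * np.sin(2 * np.pi * t / 365) + 1.0,  # weak seasonality, source bias
    "model_C": rng.normal(1.0, 1.0, 365),                 # no seasonality at all
}

# Score each model against the benchmark: RMSE and Pearson correlation
results = {}
for name, sim in sims.items():
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    r = np.corrcoef(sim, obs)[0, 1]
    results[name] = (rmse, r)
    print(f"{name}: RMSE={rmse:.2f} r={r:.2f}")
```

    A model with missing seasonality and a positive (source) bias, like model_C here, shows exactly the signature the abstract reports: high RMSE and near-zero correlation.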

  6. Matching experimental and three dimensional numerical models for structural vibration problems with uncertainties

    NASA Astrophysics Data System (ADS)

    Langer, P.; Sepahvand, K.; Guist, C.; Bär, J.; Peplow, A.; Marburg, S.

    2018-03-01

    The simulation model which examines the dynamic behavior of real structures needs to address the impact of uncertainty in both geometry and material parameters. This article investigates three-dimensional finite element models for structural dynamics problems with respect to both model and parameter uncertainties. The parameter uncertainties are determined via laboratory measurements on several beam-like samples. The parameters are then considered as random variables to the finite element model for exploring the uncertainty effects on the quality of the model outputs, i.e. natural frequencies. The accuracy of the output predictions from the model is compared with the experimental results. To this end, the non-contact experimental modal analysis is conducted to identify the natural frequency of the samples. The results show a good agreement compared with experimental data. Furthermore, it is demonstrated that geometrical uncertainties have more influence on the natural frequencies compared to material parameters and material uncertainties are about two times higher than geometrical uncertainties. This gives valuable insights for improving the finite element model due to various parameter ranges required in a modeling process involving uncertainty.
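
    Propagating measured parameter scatter through a model, as described above, can be illustrated with a Monte Carlo sketch on an analytical stand-in (an Euler-Bernoulli cantilever) rather than a 3-D FE model; all nominal values and scatter levels below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000

# Analytical first bending frequency of a cantilever beam
# (Euler-Bernoulli): f1 = (lambda1^2 / (2*pi*L^2)) * sqrt(E*I / (rho*A)),
# with lambda1 = 1.875 for the first mode
def f1(E, rho, L, b, h):
    I = b * h**3 / 12.0   # second moment of area
    A = b * h             # cross-section area
    return (1.875**2 / (2 * np.pi * L**2)) * np.sqrt(E * I / (rho * A))

# Nominal steel beam (hypothetical values) with random scatter
E   = rng.normal(210e9, 2e9, n)      # Young's modulus [Pa]
rho = rng.normal(7850, 50, n)        # density [kg/m^3]
L   = rng.normal(0.30, 0.001, n)     # length [m]
b   = rng.normal(0.02, 0.0002, n)    # width [m]
h   = rng.normal(0.005, 0.00005, n)  # height [m]

freq = f1(E, rho, L, b, h)
print(freq.mean(), freq.std())
```

    Repeating the sampling with only the geometric (or only the material) parameters varied isolates their respective contributions to the output scatter, which is how a statement like "geometrical uncertainties dominate" can be checked.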

  7. Application of the PJ and NPS evaporation duct models over the South China Sea (SCS) in winter

    PubMed Central

    Yang, Shaobo; Li, Xingfei; Wu, Chao; He, Xin; Zhong, Ying

    2017-01-01

    The detection of duct height has a significant effect on marine radar and wireless apparatus applications. The paper presents two models to assess the suitability of evaporation duct models for the SCS in winter. A meteorological gradient instrument used to measure evaporation ducts was fabricated using hydrological and meteorological sensors at different heights. An experiment on the adaptive characteristics of evaporation duct models was carried out over the SCS. The heights of the evaporation ducts were estimated by means of log-linear fit, Paulus-Jeske (PJ) and Naval Postgraduate School (NPS) models. The results showed that the NPS model offered significant advantages in stability compared with the PJ model. According to the collected data processed by the NPS model, the mean deviation (MD) from the true value was -1.7 m, and the standard deviation (STD) of the MD was 0.8 m. The NPS model may be more suitable for estimating the evaporation duct height in the SCS in winter due to its simpler system characteristics compared with meteorological gradient instruments. PMID:28273113

  8. Differential Topic Models.

    PubMed

    Chen, Changyou; Buntine, Wray; Ding, Nan; Xie, Lexing; Du, Lan

    2015-02-01

    In applications we may want to compare different document collections: they could have shared content but also different and unique aspects in particular collections. This task has been called comparative text mining or cross-collection modeling. We present a differential topic model for this application that models both topic differences and similarities. For this we use hierarchical Bayesian nonparametric models. Moreover, we found it was important to properly model power-law phenomena in topic-word distributions and thus we used the full Pitman-Yor process rather than just a Dirichlet process. Furthermore, we propose the transformed Pitman-Yor process (TPYP) to incorporate prior knowledge such as vocabulary variations in different collections into the model. To deal with the non-conjugate issue between model prior and likelihood in the TPYP, we thus propose an efficient sampling algorithm using a data augmentation technique based on the multinomial theorem. Experimental results show the model discovers interesting aspects of different collections. We also show the proposed MCMC based algorithm achieves a dramatically reduced test perplexity compared to some existing topic models. Finally, we show our model outperforms the state-of-the-art for document classification/ideology prediction on a number of text collections.

  9. Assessment of Required Accuracy of Digital Elevation Data for Hydrologic Modeling

    NASA Technical Reports Server (NTRS)

    Kenward, T.; Lettenmaier, D. P.

    1997-01-01

    The effect of vertical accuracy of Digital Elevation Models (DEMs) on hydrologic models is evaluated by comparing three DEMs and resulting hydrologic model predictions applied to a 7.2 sq km USDA-ARS watershed at Mahantango Creek, PA. The high-resolution (5 m) DEM was resampled to a 30 m resolution using a method that constrained the spatial structure of the elevations to be comparable with the USGS and SIR-C DEMs. This resulting 30 m DEM was used as the reference product for subsequent comparisons. Spatial fields of directly derived quantities, such as elevation differences, slope, and contributing area, were compared to the reference product, as were hydrologic model output fields derived using each of the three DEMs at the common 30 m spatial resolution.

  10. Cost-effectiveness analysis of initial treatment strategies for mild-to-moderate Clostridium difficile infection in hospitalized patients.

    PubMed

    Ford, Diana C; Schroeder, Mary C; Ince, Dilek; Ernst, Erika J

    2018-06-14

    The cost-effectiveness of initial treatment strategies for mild-to-moderate Clostridium difficile infection (CDI) in hospitalized patients was evaluated. Decision-analytic models were constructed to compare initial treatment with metronidazole, vancomycin, and fidaxomicin. The primary model included 1 recurrence, and the secondary model included up to 3 recurrences. Model variables were extracted from published literature with costs based on a healthcare system perspective. The primary outcome was the incremental cost-effectiveness ratio (ICER) between initial treatment strategies. In the primary model, the overall percentage of patients cured was 94.23%, 95.19%, and 96.53% with metronidazole, vancomycin, and fidaxomicin, respectively. Expected costs per case were $1,553.01, $1,306.62, and $5,095.70, respectively. In both models, vancomycin was more effective and less costly than metronidazole, resulting in negative ICERs. The ICERs for fidaxomicin compared with those for metronidazole and vancomycin in the primary model were $1,540.23 and $2,828.69 per 1% gain in cure, respectively. Using these models, a hospital currently treating initial episodes of mild-to-moderate CDI with metronidazole could expect to save $246.39-$388.37 per case treated by using vancomycin for initial therapy. A decision-analytic model revealed vancomycin to be cost-effective, compared with metronidazole, for treatment of initial episodes of mild-to-moderate CDI in adult inpatients. From the hospital perspective, initial treatment with vancomycin resulted in a higher probability of cure and a lower probability of colectomy, recurrence, persistent recurrence, and cost per case treated, compared with metronidazole. Use of fidaxomicin was associated with an increased probability of cure compared with metronidazole and vancomycin, but at a substantially increased cost. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
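
    The ICER arithmetic can be reproduced directly from the cure rates and expected costs quoted in the abstract; small differences from the published figures arise because the inputs here are the rounded values reported above, not the full model outputs.

```python
# Strategy outcomes from the abstract: (cure probability %, expected cost $)
strategies = {
    "metronidazole": (94.23, 1553.01),
    "vancomycin":    (95.19, 1306.62),
    "fidaxomicin":   (96.53, 5095.70),
}

def icer(a, b):
    """Incremental cost per 1% gain in cure, strategy a vs. comparator b."""
    cure_a, cost_a = strategies[a]
    cure_b, cost_b = strategies[b]
    return (cost_a - cost_b) / (cure_a - cure_b)

# Vancomycin vs. metronidazole: cheaper AND more effective, so the
# ICER is negative (vancomycin "dominates")
print(round(icer("vancomycin", "metronidazole"), 2))
print(round(icer("fidaxomicin", "metronidazole"), 2))
print(round(icer("fidaxomicin", "vancomycin"), 2))
```
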

  11. East meets West: the influence of racial, ethnic and cultural risk factors on cardiac surgical risk model performance.

    PubMed

    Soo-Hoo, Sarah; Nemeth, Samantha; Baser, Onur; Argenziano, Michael; Kurlansky, Paul

    2018-01-01

    To explore the impact of racial and ethnic diversity on the performance of cardiac surgical risk models, the Chinese SinoSCORE was compared with the Society of Thoracic Surgeons (STS) risk model in a diverse American population. The SinoSCORE risk model was applied to 13 969 consecutive coronary artery bypass surgery patients from twelve American institutions. SinoSCORE risk factors were entered into a logistic regression to create a 'derived' SinoSCORE whose performance was compared with that of the STS risk model. Observed mortality was 1.51% (66% of that predicted by the STS model). The SinoSCORE 'low-risk' group had a mortality of 0.15%±0.04%, while the medium-risk and high-risk groups had mortalities of 0.35%±0.06% and 2.13%±0.14%, respectively. The derived SinoSCORE model had relatively good discrimination (area under the curve (AUC)=0.785) compared with that of the STS risk score (AUC=0.811; P=0.18 comparing the two). However, specific factors that were significant in the original SinoSCORE but that lacked significance in our derived model included body mass index, preoperative atrial fibrillation and chronic obstructive pulmonary disease. SinoSCORE demonstrated limited discrimination when applied to an American population. The derived SinoSCORE had a discrimination comparable with that of the STS, suggesting underlying similarities of the physiological substrate undergoing surgery. However, the differential influence of various risk factors suggests that there may be varying degrees of importance and interactions between risk factors. Clinicians should exercise caution when applying risk models across varying populations due to potential differences that racial, ethnic and geographic factors may play in cardiac disease and surgical outcomes.
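
    The "derived" model step — refit the published risk factors by logistic regression on the local population, then judge discrimination by AUC — looks like this in outline. The cohort is simulated and the factor effects are invented for illustration; this is not the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)

# Simulated cohort (hypothetical): five binary risk factors and an
# outcome whose log-odds depend on them
n = 5000
X = rng.integers(0, 2, (n, 5)).astype(float)
logit = -4.0 + X @ np.array([0.8, 0.6, 0.5, 0.3, 0.9])
y = rng.random(n) < 1 / (1 + np.exp(-logit))

# Refit the risk factors to the local population ("derived" score),
# then measure discrimination with the area under the ROC curve
model = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(round(auc, 3))
```

    Comparing the refitted coefficients against the originally published ones is what reveals factors that lose significance in the new population, as the abstract reports for BMI, atrial fibrillation and COPD.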

  12. Multivariate modelling of prostate cancer combining magnetic resonance derived T2, diffusion, dynamic contrast-enhanced and spectroscopic parameters.

    PubMed

    Riches, S F; Payne, G S; Morgan, V A; Dearnaley, D; Morgan, S; Partridge, M; Livni, N; Ogden, C; deSouza, N M

    2015-05-01

    The objectives were to determine the optimal combination of MR parameters for discriminating tumour within the prostate using linear discriminant analysis (LDA) and to compare model accuracy with that of an experienced radiologist. Multiparametric MRI data were acquired in 24 patients before prostatectomy. Tumour outlines from whole-mount histology, T2-defined peripheral zone (PZ), and central gland (CG) were superimposed onto slice-matched parametric maps. T2, Apparent Diffusion Coefficient, initial area under the gadolinium curve, vascular parameters (K(trans),Kep,Ve), and (choline+polyamines+creatine)/citrate were compared between tumour and non-tumour tissues. Receiver operating characteristic (ROC) curves determined sensitivity and specificity at spectroscopic voxel resolution and per lesion, and LDA determined the optimal multiparametric model for identifying tumours. Accuracy was compared with an expert observer. Tumours were significantly different from PZ and CG for all parameters (all p < 0.001). Area under the ROC curve for discriminating tumour from non-tumour was significantly greater (p < 0.001) for the multiparametric model than for individual parameters; at 90 % specificity, sensitivity was 41 % (MRSI voxel resolution) and 59 % per lesion. At this specificity, an expert observer achieved 28 % and 49 % sensitivity, respectively. The model was more accurate when parameters from all techniques were included and performed better than an expert observer evaluating these data. • The combined model increases diagnostic accuracy in prostate cancer compared with individual parameters • The optimal combined model includes parameters from diffusion, spectroscopy, perfusion, and anatomical MRI • The computed model improves tumour detection compared to an expert viewing parametric maps.
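
    A minimal sketch of the LDA step, assuming per-voxel parameter vectors with a tumour/non-tumour label and reading off sensitivity at a fixed 90 % specificity from the ROC curve. The parameter means and spreads below are invented, not the study's values.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_curve

rng = np.random.default_rng(5)

# Hypothetical per-voxel MR parameters: tumour voxels shifted relative
# to non-tumour (e.g. lower ADC, higher Ktrans); values illustrative only
n = 1000
tumour = rng.normal([0.9, 0.35, 1.2], 0.25, (n, 3))
normal = rng.normal([1.4, 0.15, 0.8], 0.25, (n, 3))
X = np.vstack([tumour, normal])
y = np.r_[np.ones(n), np.zeros(n)]

# Linear discriminant combining the parameters into one score per voxel
lda = LinearDiscriminantAnalysis().fit(X, y)
scores = lda.decision_function(X)

# Sensitivity at a fixed 90% specificity (false-positive rate <= 10%)
fpr, tpr, _ = roc_curve(y, scores)
sens_at_90spec = tpr[fpr <= 0.10].max()
print(round(sens_at_90spec, 2))
```
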

  13. Comparative bacterial degradation and detoxification of model and kraft lignin from pulp paper wastewater and its metabolites

    NASA Astrophysics Data System (ADS)

    Abhishek, Amar; Dwivedi, Ashish; Tandan, Neeraj; Kumar, Urwashi

    2017-05-01

    Continuous discharge of lignin-containing colored wastewater from pulp paper mills into the environment has resulted in a build-up to high levels in various aquatic systems. In this study, the chemical texture of kraft lignin in terms of pollution parameters (COD, TOC, BOD, etc.) was quite different and approximately twofold higher as compared to model lignin at the same optical density (OD 3.7 at 465 nm) and lignin content (2000 mg/L). For comparative bacterial degradation and detoxification of model and kraft lignin, two bacteria, Citrobacter freundii and Serratia marcescens, were isolated, screened and applied in axenic and mixed conditions. The mixed bacterial culture was found to decolorize 87 and 70 % of model and kraft lignin (2000 mg/L), respectively, whereas the axenic cultures of Citrobacter freundii and Serratia marcescens decolorized 64 and 60 % of model lignin and 50 and 55 % of kraft lignin, respectively, under optimized conditions (34 °C, pH 8.2, 140 rpm). In addition, the mixed bacterial culture also showed the removal of 76, 61 % TOC; 80, 67 % COD and 87, 65 % lignin from model and kraft lignin, respectively. High pollution parameters (like TOC, COD, BOD, sulphate) and toxic chemicals slow down the degradation of kraft lignin as compared to model lignin. The comparative GC-MS analysis suggested interspecies collaboration, i.e., each bacterial strain in the culture medium has a cumulative enhancing effect on growth and lignin degradation rather than an inhibitory one. Furthermore, toxicity evaluation on a human keratinocyte cell line after bacterial treatment supported the degradation and detoxification of model and kraft lignin.

  14. Spatial variability of intake fractions for Canadian emission scenarios: a comparison between three resolution scales.

    PubMed

    Manneh, Rima; Margni, Manuele; Deschênes, Louise

    2010-06-01

    Spatially differentiated intake fractions (iFs) linked to Canadian emissions of toxic organic chemicals were developed using the multimedia and multipathways fate and exposure model IMPACT 2002. The fate and exposure of chemicals released to the Canadian environment were modeled with a single regional mass-balance model and three models that provided multiple mass-balance regions within Canada. These three models were based on the Canadian subwatersheds (172 zones), ecozones (15 zones), and provinces (13 zones). Releases of 32 organic chemicals into water and air were considered. This was done in order to (i) assess and compare the spatial variability of iFs within and across the three levels of regionalization and (ii) compare the spatial iFs to nonspatial ones. Results showed that iFs calculated using the subwatershed resolution presented a higher spatial variability (up to 10 orders of magnitude for emissions into water) than the ones based on the ecozones and provinces, implying that higher spatial resolution could potentially reduce uncertainty in iFs and, therefore, increase the discriminating power when assessing and comparing toxic releases for known emission locations. Results also indicated that, for an unknown emission location, a model with high spatial resolution such as the subwatershed model could significantly improve the accuracy of a generic iF. Population weighted iFs span up to 3 orders of magnitude compared to nonspatial iFs calculated by the one-box model. Less significant differences were observed when comparing spatial versus nonspatial iFs from the ecozones and provinces, respectively.

  15. Eating disorders among professional fashion models.

    PubMed

    Preti, Antonio; Usai, Ambra; Miotto, Paola; Petretto, Donatella Rita; Masala, Carmelo

    2008-05-30

    Fashion models are thought to be at an elevated risk for eating disorders, but few methodologically rigorous studies have explored this assumption. We have investigated the prevalence of eating disorders in a group of 55 fashion models born in Sardinia, Italy, comparing them with a group of 110 girls of the same age and of comparable social and cultural backgrounds. The study was based on questionnaires and face-to-face interviews, to reduce the bias due to symptom under-reporting and to social desirability responding. When compared on three well-validated self-report questionnaires (the EAT, BITE, BAT), the models and controls did not differ significantly. However, in a detailed interview (the Eating Disorder Examination), models reported significantly more symptoms of eating disorders than controls, and a higher prevalence of partial syndromes of eating disorders was found in models than in controls. A body mass index below 18 was found for 34 models (54.5%) as compared with 14 controls (12.7%). Three models (5%) and no controls reported an earlier clinical diagnosis of anorexia nervosa. Further studies will be necessary to establish whether the slight excess of partial syndromes of eating disorders among fashion models was a consequence of the requirement in the profession to maintain a slim figure or if the fashion modeling profession is preferably chosen by girls already oriented towards symptoms of eating disorders, since the pressure to be thin imposed by this profession can be more easily accepted by people predisposed to eating disorders.

  16. Comparing fire spread algorithms using equivalence testing and neutral landscape models

    Treesearch

    Brian R. Miranda; Brian R. Sturtevant; Jian Yang; Eric J. Gustafson

    2009-01-01

    We demonstrate a method to evaluate the degree to which a meta-model approximates spatial disturbance processes represented by a more detailed model across a range of landscape conditions, using neutral landscapes and equivalence testing. We illustrate this approach by comparing burn patterns produced by a relatively simple fire spread algorithm with those generated by...

  17. Multi-criteria comparative evaluation of spallation reaction models

    NASA Astrophysics Data System (ADS)

    Andrianov, Andrey; Andrianova, Olga; Konobeev, Alexandr; Korovin, Yury; Kuptsov, Ilya

    2017-09-01

    This paper presents an approach to a comparative evaluation of the predictive ability of spallation reaction models based on widely used, well-proven multiple-criteria decision analysis methods (MAVT/MAUT, AHP, TOPSIS, PROMETHEE), and the results of such a comparison for 17 spallation reaction models describing the interaction of high-energy protons with natPb.
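
    Of the listed methods, TOPSIS is the most compact to sketch: alternatives are ranked by their relative closeness to an ideal point in weighted, normalized criterion space. The decision matrix and weights below are toy values, not the paper's 17-model evaluation.

```python
import numpy as np

# Toy decision matrix: rows = candidate models, columns = criteria
# scores (hypothetical benefit-type criteria, higher is better)
X = np.array([
    [0.82, 0.70, 0.65],
    [0.75, 0.80, 0.70],
    [0.90, 0.60, 0.55],
], dtype=float)
w = np.array([0.5, 0.3, 0.2])  # criteria weights, summing to 1

# TOPSIS: weighted vector-normalized matrix, then distance to the
# ideal and anti-ideal solutions
V = w * X / np.linalg.norm(X, axis=0)
ideal, anti = V.max(axis=0), V.min(axis=0)   # all criteria are benefits here
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)          # 1 = ideal, 0 = anti-ideal
print(closeness.argsort()[::-1])             # best model first
```

    Cost-type criteria (lower is better) would swap the roles of max and min when forming the ideal and anti-ideal points.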

  18. Neural networks and MIMD-multiprocessors

    NASA Technical Reports Server (NTRS)

    Vanhala, Jukka; Kaski, Kimmo

    1990-01-01

    Two artificial neural network models are compared. They are the Hopfield Neural Network Model and the Sparse Distributed Memory model. Distributed algorithms for both of them are designed and implemented. The run time characteristics of the algorithms are analyzed theoretically and tested in practice. The storage capacities of the networks are compared. Implementations are done using a distributed multiprocessor system.
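
    A serial Hopfield recall loop — the baseline that such distributed implementations parallelize — fits in a few lines; the patterns and network size below are toy values.

```python
import numpy as np

# Minimal Hopfield network: Hebbian one-shot storage, sequential recall
patterns = np.array([
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1, 1, 1, 1, -1, -1, -1, -1],
])
n = patterns.shape[1]

# Hebbian weight matrix (symmetric, zero diagonal)
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def recall(state, sweeps=5):
    """Deterministic sequential updates toward a stored fixed point."""
    s = state.copy()
    for _ in range(sweeps):
        for i in range(n):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Recover a stored pattern from a one-bit-corrupted cue
noisy = patterns[0].copy()
noisy[0] = -noisy[0]
print(np.array_equal(recall(noisy), patterns[0]))  # → True
```

    A distributed version partitions the neurons (rows of W) across processors and exchanges updated states each sweep, which is the kind of MIMD mapping the abstract analyzes.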

  19. Comparing Indirect Effects in SEM: A Sequential Model Fitting Method Using Covariance-Equivalent Specifications

    ERIC Educational Resources Information Center

    Chan, Wai

    2007-01-01

    In social science research, an indirect effect occurs when the influence of an antecedent variable on the effect variable is mediated by an intervening variable. To compare indirect effects within a sample or across different samples, structural equation modeling (SEM) can be used if the computer program supports model fitting with nonlinear…

  20. Analysis of a virtual memory model for maintaining database views

    NASA Technical Reports Server (NTRS)

    Kinsley, Kathryn C.; Hughes, Charles E.

    1992-01-01

    This paper presents an analytical model for predicting the performance of a new support strategy for database views. This strategy, called the virtual method, is compared with traditional methods for supporting views. The analytical model's predictions of improved performance by the virtual method are then validated by comparing these results with those achieved in an experimental implementation.

  1. Comparative Analysis of Smart Meters Deployment Business Models on the Example of the Russian Federation Markets

    NASA Astrophysics Data System (ADS)

    Daminov, Ildar; Tarasova, Ekaterina; Andreeva, Tatyana; Avazov, Artur

    2016-02-01

    This paper compares smart meter deployment business models to determine the most suitable option for smart meter rollout. The authors consider three main business models: the distribution grid company, the energy supplier (energosbyt), and the metering company. The goal of the article is to compare the business models of power companies for a massive smart metering rollout in the power system of the Russian Federation.

  2. Understanding the relationship between audiomagnetotelluric data and models, and borehole data in a hydrological environment

    USGS Publications Warehouse

    McPhee, D.K.; Pellerin, L.

    2008-01-01

    Audiomagnetotelluric (AMT) data and resulting models are analyzed with respect to geophysical and geological borehole logs in order to clarify the relationship between the two methodologies of investigation of a hydrological environment. Several profiles of AMT data collected in basins in the southwestern United States are being used for groundwater exploration and hydrogeological framework studies. In a systematic manner, the AMT data and models are compared to borehole data by computing the equivalent one-dimensional AMT model and comparing it with the two-dimensional (2-D) inverse AMT model. A spatial separation criterion is used to determine whether a well is near enough to the AMT profile to quantify the relationship between the two datasets, and to determine the required resolution of the AMT data and models. The significance of the quality of the borehole data when compared to the AMT data is also examined.

  3. 2D- and 3D-quantitative structure-activity relationship studies for a series of phenazine N,N'-dioxide as antitumour agents.

    PubMed

    Cunha, Jonathan Da; Lavaggi, María Laura; Abasolo, María Inés; Cerecetto, Hugo; González, Mercedes

    2011-12-01

    Hypoxic regions of tumours are associated with increased resistance to radiation and chemotherapy. Nevertheless, hypoxia has been used as a tool for specific activation of some antitumour prodrugs, named bioreductive agents. Phenazine dioxides are an example of such bioreductive prodrugs. Our 2D-quantitative structure-activity relationship studies established that the electronic and lipophilic descriptors of phenazine dioxides are related to the survival fraction in oxia or in hypoxia. Additionally, statistically significant models, derived by partial least squares, were obtained between the survival fraction in oxia and the comparative molecular field analysis standard model (r² = 0.755, q² = 0.505 and F = 26.70) or comparative molecular similarity indices analysis with combined steric and electrostatic fields (r² = 0.757, q² = 0.527 and F = 14.93), and between the survival fraction in hypoxia and the comparative molecular field analysis standard model (r² = 0.736, q² = 0.521 and F = 18.63) or comparative molecular similarity indices analysis with a hydrogen bond acceptor field (r² = 0.858, q² = 0.737 and F = 27.19). Categorical classification was used for the biological parameter of selective cytotoxicity, also yielding good models, derived by soft independent modelling of class analogy, with both the comparative molecular field analysis standard model (96% overall classification accuracy) and comparative molecular similarity indices analysis with a steric field (92% overall classification accuracy). The 2D- and 3D-quantitative structure-activity relationship models provided important insights into the chemical and structural basis of the molecular recognition process of these phenazines as bioreductive agents and should be useful for the design of new structurally related analogues with improved potency. © 2011 John Wiley & Sons A/S.

  4. A Penalized Likelihood Framework For High-Dimensional Phylogenetic Comparative Methods And An Application To New-World Monkeys Brain Evolution.

    PubMed

    Julien, Clavel; Leandro, Aristide; Hélène, Morlon

    2018-06-19

    Working with high-dimensional phylogenetic comparative datasets is challenging because likelihood-based multivariate methods suffer from low statistical performance as the number of traits p approaches the number of species n, and because computational complications occur when p exceeds n. Alternative phylogenetic comparative methods have recently been proposed to deal with the large-p, small-n scenario, but their use and performance remain limited. Here we develop a penalized likelihood framework to deal with high-dimensional comparative datasets. We propose various penalizations and methods for selecting the intensity of the penalties. We apply this general framework to the estimation of parameters (the evolutionary trait covariance matrix and parameters of the evolutionary model) and to model comparison for the high-dimensional multivariate Brownian (BM), Early-burst (EB), Ornstein-Uhlenbeck (OU) and Pagel's lambda models. We show using simulations that our penalized likelihood approach dramatically improves the estimation of evolutionary trait covariance matrices and model parameters when p approaches n, and allows for their accurate estimation when p equals or exceeds n. In addition, we show that penalized likelihood models can be efficiently compared using the Generalized Information Criterion (GIC). We implement these methods, as well as the related estimation of ancestral states and the computation of phylogenetic PCA, in the R packages RPANDA and mvMORPH. Finally, we illustrate the utility of the proposed framework by evaluating evolutionary model fit, analyzing integration patterns, and reconstructing evolutionary trajectories for a high-dimensional 3-D dataset of brain shape in the New World monkeys. We find clear support for an Early-burst model, suggesting an early diversification of brain morphology during the ecological radiation of the clade. Penalized likelihood offers an efficient way to deal with high-dimensional multivariate comparative data.
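
The core difficulty, and why penalization helps, can be shown in a toy sketch: when traits outnumber species (p > n) the sample covariance matrix is singular, so the multivariate likelihood is undefined, while a linear shrinkage penalty restores invertibility. The diagonal target and the γ value below are illustrative, not the authors' specific penalizations.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 20, 50                        # fewer "species" than "traits": p > n
X = rng.normal(size=(n, p))
S = np.cov(X, rowvar=False)          # sample covariance, rank <= n-1: singular

def shrink(S, gamma):
    """Linear shrinkage toward a diagonal target, one common penalization."""
    target = np.diag(np.diag(S))
    return (1 - gamma) * S + gamma * target

print(np.linalg.matrix_rank(S) < p)        # True: S is rank-deficient
S_pen = shrink(S, gamma=0.5)
# the penalized estimate is positive definite, so a Gaussian
# log-likelihood (which needs log det and an inverse) is well defined
sign, logdet = np.linalg.slogdet(S_pen)
print(sign == 1 and np.isfinite(logdet))   # True
```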

  5. [Fabrication and accuracy research on 3D printing dental model based on cone beam computed tomography digital modeling].

    PubMed

    Zhang, Hui-Rong; Yin, Le-Feng; Liu, Yan-Li; Yan, Li-Yi; Wang, Ning; Liu, Gang; An, Xiao-Li; Liu, Bin

    2018-04-01

    The aim of this study is to build a digital dental model with cone beam computed tomography (CBCT), to fabricate a virtual model via 3D printing, and to determine the accuracy of the 3D-printed dental model by comparing it with a traditional dental cast. CBCT of orthodontic patients was obtained to build a digital dental model by using Mimics 10.01 and Geomagic Studio software. The 3D virtual models were fabricated via the fused deposition modeling (FDM) technique. The 3D virtual models were compared with the traditional cast models by using a Vernier caliper. The measurements used for comparison included the width of each tooth, the length and width of the maxillary and mandibular arches, and the length of the posterior dental crest. The 3D-printed models showed high accuracy compared with the traditional cast models. The results of the paired t-test of all data showed that no statistically significant difference was observed between the two groups (P>0.05). Dental digital models built with CBCT realize the digital storage of patients' dental condition. The virtual dental model fabricated via 3D printing avoids traditional impressions and simplifies the clinical examination process. The 3D-printed dental models produced via FDM show a high degree of accuracy. Thus, these models are appropriate for clinical practice.
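
The paired comparison can be sketched as follows; the caliper measurements here are hypothetical stand-ins, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# hypothetical paired caliper readings (mm): same features measured on
# the traditional cast and the corresponding FDM-printed model
cast = rng.normal(8.5, 1.2, size=28)
printed = cast + rng.normal(0, 0.08, size=28)   # small random fabrication error

# paired (related-samples) t-test on the per-feature differences
t_stat, p_value = stats.ttest_rel(cast, printed)
print(round(t_stat, 3), round(p_value, 3))   # typically p > 0.05: no significant difference
```

A paired test is the right choice because each printed measurement has a natural partner on the cast; testing the differences removes between-tooth variation from the comparison.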

  6. Regional climate model downscaling may improve the prediction of alien plant species distributions

    NASA Astrophysics Data System (ADS)

    Liu, Shuyan; Liang, Xin-Zhong; Gao, Wei; Stohlgren, Thomas J.

    2014-12-01

    Distributions of invasive species are commonly predicted with species distribution models that build upon the statistical relationships between observed species presence data and climate data. We used field observations, climate station data, and Maximum Entropy species distribution models for 13 invasive plant species in the United States, and then compared the models with inputs from a General Circulation Model (hereafter GCM-based models) and a downscaled Regional Climate Model (hereafter, RCM-based models). We also compared species distributions based on either GCM-based or RCM-based models for the present (1990-1999) to the future (2046-2055). RCM-based species distribution models replicated observed distributions remarkably better than GCM-based models for all invasive species under the current climate. This was shown for the presence locations of the species, and by using four common statistical metrics to compare modeled distributions. For two widespread invasive taxa (Bromus tectorum or cheatgrass, and Tamarix spp. or tamarisk), GCM-based models failed miserably to reproduce observed species distributions. In contrast, RCM-based species distribution models closely matched observations. Future species distributions may be significantly affected by using GCM-based inputs. Because invasive plant species often show high resilience and low rates of local extinction, RCM-based species distribution models may perform better than GCM-based species distribution models for planning containment programs for invasive species.

  7. Quality control of the RMS US flood model

    NASA Astrophysics Data System (ADS)

    Jankowfsky, Sonja; Hilberts, Arno; Mortgat, Chris; Li, Shuangcai; Rafique, Farhat; Rajesh, Edida; Xu, Na; Mei, Yi; Tillmanns, Stephan; Yang, Yang; Tian, Ye; Mathur, Prince; Kulkarni, Anand; Kumaresh, Bharadwaj Anna; Chaudhuri, Chiranjib; Saini, Vishal

    2016-04-01

    The RMS US flood model predicts the flood risk in the US with a 30 m resolution for different return periods. The model is designed for the insurance industry to estimate the cost of flood risk for a given location. Different statistical, hydrological and hydraulic models are combined to develop the flood maps for different return periods. A rainfall-runoff and routing model, calibrated with observed discharge data, is run with 10 000 years of stochastic simulated precipitation to create time series of discharge and surface runoff. The 100, 250 and 500 year events are extracted from these time series as forcing for a two-dimensional pluvial and fluvial inundation model. The coupling of all the different models, which are run over the large area of the US, implies a certain amount of uncertainty. Therefore, special attention is paid to the final quality control of the flood maps. First of all, a thorough quality analysis of the Digital Terrain Model (DTM) and the river network was done, as the final quality of the flood maps depends heavily on the DTM quality. Secondly, the simulated 100 year discharge in the major river network (600 000 km) is compared to the 100 year discharge derived using extreme value distributions of all USGS gauges with more than 20 years of peak values (around 11 000 gauges). Thirdly, for each gauge the modelled flood depth is compared to the depth derived from the USGS rating curves. Fourthly, the modelled flood depth is compared to the base flood elevation given in the FEMA flood maps. Fifthly, the flood extent is compared to the FEMA flood extent. Then, for historic events we compare flood extents and flood depths at given locations. Finally, all the data and spatial layers are uploaded to GeoServer to facilitate the manual investigation of outliers. The feedback from the quality control is used to improve the model and estimate its uncertainty.
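
The gauge-based check, deriving a 100-year discharge from annual peak records via an extreme-value fit, can be sketched as below. A Gumbel distribution and synthetic peaks are assumed here for illustration; the abstract does not specify which extreme value distribution was used.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# synthetic annual peak discharges (m^3/s) for one gauge, 40 years of record
peaks = stats.gumbel_r.rvs(loc=800.0, scale=150.0, size=40, random_state=rng)

# fit the extreme-value distribution to the annual maxima
loc, scale = stats.gumbel_r.fit(peaks)

# the T-year event is the (1 - 1/T) quantile of the fitted distribution
q100 = stats.gumbel_r.ppf(1 - 1.0 / 100.0, loc=loc, scale=scale)
print(q100 > peaks.mean())   # True: the 100-year event exceeds the typical annual peak
```

The modelled 100-year discharge at each gauge can then be compared against this observation-derived estimate, as in the paper's second quality-control step.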

  8. Mesoscale Simulation Data for Initializing Fast-Time Wake Transport and Decay Models

    NASA Technical Reports Server (NTRS)

    Ahmad, Nashat N.; Proctor, Fred H.; Vanvalkenburg, Randal L.; Pruis, Mathew J.; LimonDuparcmeur, Fanny M.

    2012-01-01

    The fast-time wake transport and decay models require vertical profiles of crosswinds, potential temperature and the eddy dissipation rate as initial conditions. These inputs are normally obtained from various field sensors. In case of data-denied scenarios or operational use, these initial conditions can be provided by mesoscale model simulations. In this study, the vertical profiles of potential temperature from a mesoscale model were used as initial conditions for the fast-time wake models. The mesoscale model simulations were compared against available observations and the wake model predictions were compared with the Lidar measurements from three wake vortex field experiments.

  9. Comparison Between Surf and Multi-Shock Forest Fire High Explosive Burn Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenfield, Nicholas Alexander

    PAGOSA has several different burn models used to model high explosive detonation. Two of these, Multi-Shock Forest Fire and Surf, are capable of modeling shock initiation. Accurately calculating shock initiation of a high explosive is important because it is a mechanism for detonation in many accident scenarios (i.e. fragment impact). Comparing the models to pop-plot data gives confidence that the models are accurately calculating detonation or lack thereof. To compare the performance of these models, pop-plots were created from simulations where one 2 cm block of PBX 9502 collides with another block of PBX 9502.
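
A pop plot relates input shock pressure to run distance to detonation and is conventionally a straight line in log-log space. The points below are made up for illustration and are not PBX 9502 data.

```python
import numpy as np

# illustrative pop-plot points: (input pressure in GPa, run distance in mm)
pressure = np.array([5.0, 7.0, 9.0, 12.0, 16.0])
run_dist = np.array([25.0, 12.0, 7.0, 3.5, 1.8])

# fit the conventional power law: log10(x*) = a + b * log10(P)
b, a = np.polyfit(np.log10(pressure), np.log10(run_dist), 1)
predict = lambda P: 10 ** (a + b * np.log10(P))

print(b < 0)   # True: higher input pressure -> shorter run to detonation
```

Simulated burn-model results that fall on (or near) the experimentally measured line indicate the model reproduces shock-initiation behavior correctly.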

  10. A comparative study of velocity increment generation between the rigid body and flexible models of MMET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ismail, Norilmi Amilia, E-mail: aenorilmi@usm.my

    The motorized momentum exchange tether (MMET) is capable of generating useful velocity increments through spin–orbit coupling. This study presents a comparative study of the velocity increments between the rigid body and flexible models of MMET. The equations of motion of both models in the time domain are transformed into a function of true anomaly. The equations of motion are integrated, and the responses in terms of the velocity increment of the rigid body and flexible models are compared and analysed. Results show that the initial conditions, eccentricity, and flexibility of the tether have significant effects on the velocity increments of the tether.

  11. Performance specifications and six sigma theory: Clinical chemistry and industry compared.

    PubMed

    Oosterhuis, W P; Severens, M J M J

    2018-04-11

    Analytical performance specifications are crucial in test development and quality control. Although consensus has been reached on the use of biological variation to derive these specifications, no consensus has been reached which model should be preferred. The Six Sigma concept is widely applied in industry for quality specifications of products and can well be compared with Six Sigma models in clinical chemistry. However, the models for measurement specifications differ considerably between both fields: where the sigma metric is used in clinical chemistry, in industry the Number of Distinct Categories is used instead. In this study the models in both fields are compared and discussed. Copyright © 2018. Published by Elsevier Inc.
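
The sigma metric mentioned above is conventionally computed from the allowable total error, the bias, and the imprecision of the method; a minimal sketch with illustrative figures (not values from the paper):

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma metric as commonly used in clinical chemistry QC:
    (allowable total error - |bias|) / imprecision, all as percentages."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# illustrative example: an assay with an allowable total error (TEa) of 10 %,
# a bias of 1 %, and an analytical CV of 1.5 %
sigma = sigma_metric(10.0, 1.0, 1.5)
print(sigma)   # 6.0 -> often described as "world class" on the Six Sigma scale
```

The metric expresses how many standard deviations of analytical noise fit between the method's bias and the error limit, which is what makes it directly comparable with industrial Six Sigma quality levels.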

  12. Can We Use Regression Modeling to Quantify Mean Annual Streamflow at a Global-Scale?

    NASA Astrophysics Data System (ADS)

    Barbarossa, V.; Huijbregts, M. A. J.; Hendriks, J. A.; Beusen, A.; Clavreul, J.; King, H.; Schipper, A.

    2016-12-01

    Quantifying mean annual flow of rivers (MAF) at ungauged sites is essential for a number of applications, including assessments of global water supply, ecosystem integrity and water footprints. MAF can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict MAF based on climate and catchment characteristics. Yet, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. In this study, we developed a global-scale regression model for MAF using observations of discharge and catchment characteristics from 1,885 catchments worldwide, ranging in size from 2 km² to 10⁶ km². In addition, we compared the performance of the regression model with the predictive ability of the spatially explicit global hydrological model PCR-GLOBWB [van Beek et al., 2011] by comparing results from both models to independent measurements. We obtained a regression model explaining 89% of the variance in MAF based on catchment area, mean annual precipitation and air temperature, average slope and elevation. The regression model performed better than PCR-GLOBWB for the prediction of MAF, as root-mean-square error values were lower (0.29 - 0.38 compared to 0.49 - 0.57) and the modified index of agreement was higher (0.80 - 0.83 compared to 0.72 - 0.75). Our regression model can be applied globally at any point of the river network, provided that the input parameters are within the range of values employed in the calibration of the model. The performance is reduced for water-scarce regions, and further research should focus on improving this aspect of regression-based global hydrological models.
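
The fitting and evaluation steps can be sketched on synthetic data. The predictors, coefficients, and noise level below are illustrative, and the modified index of agreement is taken to be Willmott's absolute-value form d1, a common choice that the abstract does not specify.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
# synthetic, standardised catchment predictors:
# area, mean annual precipitation, air temperature, slope, elevation
X = rng.normal(size=(n, 5))
true_beta = np.array([0.9, 0.7, -0.3, 0.2, -0.1])
log_maf = X @ true_beta + rng.normal(0, 0.3, n)   # log-transformed MAF

# ordinary least squares with an intercept column
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, log_maf, rcond=None)
pred = A @ beta

rmse = np.sqrt(np.mean((log_maf - pred) ** 2))

def modified_index_of_agreement(obs, pred):
    """Willmott's modified index of agreement d1 (absolute-value form)."""
    obar = obs.mean()
    return 1 - np.abs(obs - pred).sum() / (np.abs(pred - obar) + np.abs(obs - obar)).sum()

d1 = modified_index_of_agreement(log_maf, pred)
print(rmse < 0.35 and d1 > 0.7)   # True for this synthetic fit
```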

  13. Cluster-based upper body marker models for three-dimensional kinematic analysis: Comparison with an anatomical model and reliability analysis.

    PubMed

    Boser, Quinn A; Valevicius, Aïda M; Lavoie, Ewen B; Chapman, Craig S; Pilarski, Patrick M; Hebert, Jacqueline S; Vette, Albert H

    2018-04-27

    Quantifying angular joint kinematics of the upper body is a useful method for assessing upper limb function. Joint angles are commonly obtained via motion capture, tracking markers placed on anatomical landmarks. This method is associated with limitations including administrative burden, soft tissue artifacts, and intra- and inter-tester variability. An alternative method involves the tracking of rigid marker clusters affixed to body segments, calibrated relative to anatomical landmarks or known joint angles. The accuracy and reliability of applying this cluster method to the upper body has, however, not been comprehensively explored. Our objective was to compare three different upper body cluster models with an anatomical model, with respect to joint angles and reliability. Non-disabled participants performed two standardized functional upper limb tasks with anatomical and cluster markers applied concurrently. Joint angle curves obtained via the marker clusters with three different calibration methods were compared to those from an anatomical model, and between-session reliability was assessed for all models. The cluster models produced joint angle curves which were comparable to and highly correlated with those from the anatomical model, but exhibited notable offsets and differences in sensitivity for some degrees of freedom. Between-session reliability was comparable between all models, and good for most degrees of freedom. Overall, the cluster models produced reliable joint angles that, however, cannot be used interchangeably with anatomical model outputs to calculate kinematic metrics. Cluster models appear to be an adequate, and possibly advantageous alternative to anatomical models when the objective is to assess trends in movement behavior. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. INFLUENCE OF MATERIAL MODELS ON PREDICTING THE FIRE BEHAVIOR OF STEEL COLUMNS.

    PubMed

    Choe, Lisa; Zhang, Chao; Luecke, William E; Gross, John L; Varma, Amit H

    2017-01-01

    Finite-element (FE) analysis was used to compare the high-temperature responses of steel columns with two different stress-strain models: the Eurocode 3 model and the model proposed by the National Institute of Standards and Technology (NIST). The comparisons were made in three different phases. The first phase compared the critical buckling temperatures predicted using forty-seven column data sets from five different laboratories. The slenderness ratios varied from 34 to 137, and the applied axial load was 20-60 % of the room-temperature capacity. The results showed that the NIST model predicted the buckling temperature as or more accurately than the Eurocode 3 model for four of the five data sets. In the second phase, thirty unique FE models were developed to analyze the W8×35 and W14×53 column specimens with slenderness ratios of about 70. The column specimens were tested under steady-heating conditions with a target temperature in the range of 300-600 °C. The models were developed by combining the material model, temperature distributions in the specimens, and a numerical scheme for non-linear analyses. Overall, the models with the NIST material properties and the measured temperature variations showed results comparable to the test data. The deviations in the results from two different numerical approaches (modified Newton-Raphson vs. arc-length) were negligible. The Eurocode 3 model made conservative predictions of the behavior of the column specimens since its retained elastic moduli are smaller than those of the NIST model at elevated temperatures. In the third phase, the column curves calibrated using the NIST model were compared with those prescribed in the ANSI/AISC-360 Appendix 4. The calibrated curve significantly deviated from the current design equation with increasing temperature, especially for slenderness ratios from 50 to 100.

  15. Performance of a Deep-Learning Neural Network Model in Assessing Skeletal Maturity on Pediatric Hand Radiographs.

    PubMed

    Larson, David B; Chen, Matthew C; Lungren, Matthew P; Halabi, Safwan S; Stence, Nicholas V; Langlotz, Curtis P

    2018-04-01

    Purpose To compare the performance of a deep-learning bone age assessment model based on hand radiographs with that of expert radiologists and that of existing automated models. Materials and Methods The institutional review board approved the study. A total of 14 036 clinical hand radiographs and corresponding reports were obtained from two children's hospitals to train and validate the model. For the first test set, composed of 200 examinations, the mean of bone age estimates from the clinical report and three additional human reviewers was used as the reference standard. Overall model performance was assessed by comparing the root mean square (RMS) and mean absolute difference (MAD) between the model estimates and the reference standard bone ages. Ninety-five percent limits of agreement were calculated in a pairwise fashion for all reviewers and the model. The RMS of a second test set composed of 913 examinations from the publicly available Digital Hand Atlas was compared with published reports of an existing automated model. Results The mean difference between bone age estimates of the model and of the reviewers was 0 years, with a mean RMS and MAD of 0.63 and 0.50 years, respectively. The estimates of the model, the clinical report, and the three reviewers were within the 95% limits of agreement. RMS for the Digital Hand Atlas data set was 0.73 years, compared with 0.61 years of a previously reported model. Conclusion A deep-learning convolutional neural network model can estimate skeletal maturity with accuracy similar to that of an expert radiologist and to that of existing automated models. © RSNA, 2017 An earlier incorrect version of this article appeared online. This article was corrected on January 19, 2018.
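
The two summary statistics used for overall model performance are straightforward; a sketch on hypothetical bone-age estimates (not the study's data):

```python
import numpy as np

# hypothetical bone-age estimates in years: model output vs reference standard
reference = np.array([10.0, 7.5, 13.2, 5.8, 15.1, 9.4])
model = np.array([10.4, 7.1, 13.9, 5.5, 15.0, 10.1])

diff = model - reference
rms = np.sqrt(np.mean(diff ** 2))   # root mean square difference
mad = np.mean(np.abs(diff))         # mean absolute difference
print(round(rms, 3), round(mad, 3)) # -> 0.483 0.433
```

RMS weights large disagreements more heavily than MAD, so reporting both (as the study does) separates typical error from the influence of outlier estimates.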

  16. Compartmental and Data-Based Modeling of Cerebral Hemodynamics: Linear Analysis.

    PubMed

    Henley, B C; Shin, D C; Zhang, R; Marmarelis, V Z

    Compartmental and data-based modeling of cerebral hemodynamics are alternative approaches that utilize distinct model forms and have been employed in the quantitative study of cerebral hemodynamics. This paper examines the relation between a compartmental equivalent-circuit and a data-based input-output model of dynamic cerebral autoregulation (DCA) and CO2-vasomotor reactivity (DVR). The compartmental model is constructed as an equivalent-circuit utilizing putative first principles and previously proposed hypothesis-based models. The linear input-output dynamics of this compartmental model are compared with data-based estimates of the DCA-DVR process. This comparative study indicates that there are some qualitative similarities between the two-input compartmental model and experimental results.

  17. Trapped Radiation Model Uncertainties: Model-Data and Model-Model Comparisons

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.

    2000-01-01

    The standard AP8 and AE8 models for predicting trapped proton and electron environments have been compared with several sets of flight data to evaluate model uncertainties. Model comparisons are made with flux and dose measurements made on various U.S. low-Earth orbit satellites (APEX, CRRES, DMSP, LDEF, NOAA) and Space Shuttle flights, on Russian satellites (Photon-8, Cosmos-1887, Cosmos-2044), and on the Russian Mir Space Station. This report gives the details of the model-data comparisons; summary results, in terms of empirical model uncertainty factors that can be applied for spacecraft design applications, are given in a companion report. The results of model-model comparisons are also presented from standard AP8 and AE8 model predictions compared with the European Space Agency versions of AP8 and AE8 and with Russian trapped-radiation models.

  19. Analysis of terahertz dielectric properties of pork tissue

    NASA Astrophysics Data System (ADS)

    Huang, Yuqing; Xie, Qiaoling; Sun, Ping

    2017-10-01

    Since about 70% of fresh biological tissue is water, many researchers use water models to describe the dielectric properties of biological tissues. The classical water dielectric models are the Debye model, the Double Debye model and the Cole-Cole model. This work aims to determine a suitable model by comparing the three models above with experimental data. The models are applied to fresh pork tissue. By means of the least-squares method, the parameters of the different models are fitted to the experimental data. Comparing the fitted dielectric functions, the Cole-Cole model is found to describe the pork-tissue measurements best. The correction factor α of the Cole-Cole model is an important modification for biological tissues, so the Cole-Cole model should be the preferred choice for describing the dielectric properties of biological tissues in the terahertz range.
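
The Cole-Cole permittivity is ε(ω) = ε∞ + Δε / (1 + (iωτ)^(1−α)), which reduces to the Debye model at α = 0; the broadening exponent α is the correction factor discussed above. A sketch with illustrative, water-like parameters (not the fitted pork-tissue values):

```python
import numpy as np

def cole_cole(omega, eps_inf, d_eps, tau, alpha):
    """Single-pole Cole-Cole complex permittivity; alpha = 0 recovers Debye."""
    return eps_inf + d_eps / (1 + (1j * omega * tau) ** (1 - alpha))

def debye(omega, eps_inf, d_eps, tau):
    return eps_inf + d_eps / (1 + 1j * omega * tau)

# angular-frequency grid spanning 0.1-1.5 THz, in rad/s
omega = 2 * np.pi * np.linspace(0.1e12, 1.5e12, 50)
params = (2.5, 70.0, 8.3e-12)   # eps_inf, delta_eps, tau: illustrative values

# with alpha = 0 the Cole-Cole expression reduces exactly to the Debye model
print(np.allclose(cole_cole(omega, *params, 0.0), debye(omega, *params)))  # True
```

Fitting then amounts to minimizing the misfit between this complex-valued model and the measured spectrum, e.g. by least squares over the real and imaginary parts jointly.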

  20. Sea-Salt Aerosol Forecasts Compared with Wave and Sea-Salt Measurements in the Open Mediterranean Sea

    NASA Astrophysics Data System (ADS)

    Kishcha, P.; Starobinets, B.; Bozzano, R.; Pensieri, S.; Canepa, E.; Nickovic, S.; di Sarra, A.; Udisti, R.; Becagli, S.; Alpert, P.

    2012-03-01

    Sea-salt aerosol (SSA) could influence the Earth's climate by acting as cloud condensation nuclei. However, there have been no regular measurements of SSA in the open sea. At Tel-Aviv University, the DREAM-Salt prediction system has been producing daily forecasts of the 3-D distribution of sea-salt aerosol concentrations over the Mediterranean Sea (http://wind.tau.ac.il/saltina/salt.html). In order to evaluate the model performance in the open sea, daily modeled concentrations were compared directly with SSA measurements taken at the tiny island of Lampedusa, in the Central Mediterranean. In order to further test the robustness of the model, the model performance over the open sea was indirectly verified by comparing modeled SSA concentrations with wave height measurements collected by the ODAS Italia 1 buoy and the Llobregat buoy. Model-vs.-measurement comparisons show that the model is capable of producing realistic SSA concentrations and their day-to-day variations over the open sea, in accordance with observed wave height and wind speed.

  1. An interim prosthesis program for lower limb amputees: comparison of public and private models of service.

    PubMed

    Gordon, Robert; Magee, Christopher; Frazer, Anna; Evans, Craig; McCosker, Kathryn

    2010-06-01

    This study compared the outcomes of an interim mechanical prosthesis program for lower limb amputees operated under a public and private model of service. Over a two-year period, 60 transtibial amputees were fitted with an interim prosthesis as part of their early amputee care. Thirty-four patients received early amputee care under a public model of service, whereby a prosthetist was employed to provide the interim mechanical prosthesis service. The remaining 26 patients received early amputee care under a private model of service, where an external company was contracted to provide the interim mechanical prosthesis service. The results suggested comparable clinical outcomes between the two patient groups. However, the public model appeared to be less expensive with the average labour cost per patient being 29.0% lower compared with the private model. The results suggest that a public model of service may provide a more comprehensive and less expensive interim prosthesis program for lower limb amputees.

  2. Use of a vision model to quantify the significance of factors affecting target conspicuity

    NASA Astrophysics Data System (ADS)

    Gilmore, M. A.; Jones, C. K.; Haynes, A. W.; Tolhurst, D. J.; To, M.; Troscianko, T.; Lovell, P. G.; Parraga, C. A.; Pickavance, K.

    2006-05-01

    When designing camouflage it is important to understand how the human visual system processes the information to discriminate the target from the background scene. A vision model has been developed to compare two images and detect differences in local contrast in each spatial frequency channel. Observer experiments are being undertaken to validate this vision model so that the model can be used to quantify the relative significance of different factors affecting target conspicuity. Synthetic imagery can be used to design improved camouflage systems. The vision model is being used to compare different synthetic images to understand what features in the image are important to reproduce accurately and to identify the optimum way to render synthetic imagery for camouflage effectiveness assessment. This paper will describe the vision model and summarise the results obtained from the initial validation tests. The paper will also show how the model is being used to compare different synthetic images and discuss future work plans.

  3. Predicting ICU mortality: a comparison of stationary and nonstationary temporal models.

    PubMed Central

    Kayaalp, M.; Cooper, G. F.; Clermont, G.

    2000-01-01

    OBJECTIVE: This study evaluates the effectiveness of the stationarity assumption in predicting the mortality of intensive care unit (ICU) patients at ICU discharge. DESIGN: This is a comparative study. A stationary temporal Bayesian network learned from data was compared to a set of 33 nonstationary temporal Bayesian networks, also learned from data. A process observed as a sequence of events is stationary if its stochastic properties stay the same when the sequence is shifted forward or backward by a constant time parameter. The temporal Bayesian networks forecast the mortality of patients, where each patient has one record per day. The predictive performance of the stationary model is compared with that of the nonstationary models using the area under the receiver operating characteristic (ROC) curve. RESULTS: The stationary model usually performed best. However, one nonstationary model using large data sets performed significantly better than the stationary model. CONCLUSION: The results suggest that a combination of stationary and nonstationary models may predict better than either alone. PMID:11079917
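
    The area under the ROC curve used here to compare models has a rank interpretation: it is the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case, with ties counted as one half. A minimal sketch in plain Python (the scores and labels are illustrative, not data from the study):

```python
def roc_auc(scores, labels):
    # AUC as the Mann-Whitney rank statistic: the fraction of
    # positive/negative pairs in which the positive case is scored
    # higher, counting ties as 0.5.
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Comparing two models then amounts to computing this statistic for each model's predicted mortality risks against the observed outcomes.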

  4. Comparative analysis of zonal systems for macro-level crash modeling.

    PubMed

    Cai, Qing; Abdel-Aty, Mohamed; Lee, Jaeyoung; Eluru, Naveen

    2017-06-01

    Macro-level traffic safety analysis has been undertaken at different spatial configurations. However, clear guidelines for the appropriate zonal system selection for safety analysis are unavailable. In this study, a comparative analysis was conducted to determine the optimal zonal system for macroscopic crash modeling considering census tracts (CTs), state-wide traffic analysis zones (STAZs), and a newly developed traffic-related zone system labeled traffic analysis districts (TADs). Poisson lognormal models for three crash types (i.e., total, severe, and non-motorized mode crashes) are developed based on the three zonal systems without and with consideration of spatial autocorrelation. The study proposes a method to compare the modeling performance of the three types of geographic units at different spatial configurations through a grid-based framework. Specifically, the study region is partitioned into grids of various sizes and the model prediction accuracy of the various macro models is assessed within these grids. The model comparison results for all crash types indicated that the models based on TADs consistently perform better than the others. In addition, the models considering spatial autocorrelation outperform the ones that do not consider it. Based on the modeling results and the motivation for developing the different zonal systems, it is recommended to use CTs for socio-demographic data collection, TAZs for transportation demand forecasting, and TADs for transportation safety planning. The findings from this study can help practitioners select appropriate zonal systems for traffic crash modeling, supporting more effective policies to enhance transportation safety. Copyright © 2017 Elsevier Ltd and National Safety Council. All rights reserved.

  5. Catchment virtual observatory for sharing flow and transport models outputs: using residence time distribution to compare contrasting catchments

    NASA Astrophysics Data System (ADS)

    Thomas, Zahra; Rousseau-Gueutin, Pauline; Kolbe, Tamara; Abbott, Ben; Marcais, Jean; Peiffer, Stefan; Frei, Sven; Bishop, Kevin; Le Henaff, Geneviève; Squividant, Hervé; Pichelin, Pascal; Pinay, Gilles; de Dreuzy, Jean-Raynald

    2017-04-01

    The distribution of groundwater residence time in a catchment provides synoptic information about catchment functioning (e.g. nutrient retention and removal, hydrograph flashiness). In contrast with interpreted model results, which are often not directly comparable between studies, residence time distribution is a general output that could be used to compare catchment behaviors and test hypotheses about landscape controls on catchment functioning. To this end, we created a virtual observatory platform called Catchment Virtual Observatory for Sharing Flow and Transport Model Outputs (COnSOrT). The main goal of COnSOrT is to collect outputs from calibrated groundwater models from a wide range of environments. By comparing a wide variety of catchments from different climatic, topographic and hydrogeological contexts, we expect to enhance understanding of catchment connectivity, resilience to anthropogenic disturbance, and overall functioning. The web-based observatory will also provide software tools to analyze model outputs. The observatory will enable modelers to test their models in a wide range of catchment environments to evaluate the generality of their findings and the robustness of their post-processing methods. Researchers with calibrated numerical models can benefit from the observatory by using its post-processing methods to implement new approaches to analyzing their data. Field scientists interested in contributing data could invite modelers associated with the observatory to test their models against observed catchment behavior. COnSOrT will allow meta-analyses with community contributions to generate new understanding and identify promising pathways to move beyond single-catchment ecohydrology. Keywords: Residence time distribution, Model outputs, Catchment hydrology, Inter-catchment comparison

  6. Using a knowledge-based planning solution to select patients for proton therapy.

    PubMed

    Delaney, Alexander R; Dahele, Max; Tol, Jim P; Kuijper, Ingrid T; Slotman, Ben J; Verbakel, Wilko F A R

    2017-08-01

    Patient selection for proton therapy by comparing proton/photon treatment plans is time-consuming and prone to bias. RapidPlan™, a knowledge-based planning solution, uses plan libraries to model and predict organ-at-risk (OAR) dose-volume histograms (DVHs). We investigated whether RapidPlan, utilizing an algorithm based only on photon beam characteristics, could generate proton DVH predictions and whether these could correctly identify patients for proton therapy. Model PROT and Model PHOT comprised 30 head-and-neck cancer proton and photon plans, respectively. Proton and photon knowledge-based plans (KBPs) were made for ten evaluation patients. DVH-prediction accuracy was analyzed by comparing predicted versus achieved mean OAR doses. KBPs and manual plans were compared using salivary gland and swallowing muscle mean doses. For illustration, patients were selected for protons if the predicted Model PHOT mean dose minus the predicted Model PROT mean dose (ΔPrediction) for combined OARs was ≥6 Gy, and benchmarked using achieved KBP doses. Achieved and predicted Model PROT/Model PHOT mean dose R² was 0.95/0.98. Generally, achieved mean dose for Model PHOT/Model PROT KBPs was respectively lower/higher than predicted. Comparing Model PROT/Model PHOT KBPs with manual plans, salivary and swallowing mean doses increased/decreased by <2 Gy, on average. ΔPrediction ≥6 Gy correctly selected 4 of 5 patients for protons. Knowledge-based DVH predictions can provide efficient, patient-specific selection for protons. A proton-specific RapidPlan solution could improve results.
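
    The illustrative selection rule described in the abstract (refer a patient for protons when the predicted photon-minus-proton mean OAR dose reaches a threshold) reduces to a one-line check; a minimal sketch, with the function and parameter names invented here for illustration:

```python
def select_for_protons(photon_pred_mean_gy, proton_pred_mean_gy,
                       threshold_gy=6.0):
    # Refer the patient for proton therapy when the predicted dose
    # sparing (photon minus proton mean dose over combined OARs)
    # meets the chosen threshold.
    return (photon_pred_mean_gy - proton_pred_mean_gy) >= threshold_gy
```

In practice the two inputs would come from the photon- and proton-plan DVH prediction models, and the threshold is a clinical policy choice rather than a fixed constant.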

  7. Comparison of different synthetic 5-min rainfall time series regarding their suitability for urban drainage modelling

    NASA Astrophysics Data System (ADS)

    van der Heijden, Sven; Callau Poduje, Ana; Müller, Hannes; Shehu, Bora; Haberlandt, Uwe; Lorenz, Manuel; Wagner, Sven; Kunstmann, Harald; Müller, Thomas; Mosthaf, Tobias; Bárdossy, András

    2015-04-01

    For the design and operation of urban drainage systems with numerical simulation models, long, continuous precipitation time series with high temporal resolution are necessary. Suitable observed time series are rare. As a result, intelligent design concepts often use uncertain or unsuitable precipitation data, which renders them uneconomic or unsustainable. An expedient alternative to observed data is the use of long, synthetic rainfall time series as input for the simulation models. Within the project SYNOPSE, several different methods to generate synthetic precipitation data for urban drainage modelling are advanced, tested, and compared. The presented study compares four different approaches of precipitation models regarding their ability to reproduce rainfall and runoff characteristics. These include one parametric stochastic model (alternating renewal approach), one non-parametric stochastic model (resampling approach), one downscaling approach from a regional climate model, and one disaggregation approach based on daily precipitation measurements. All four models produce long precipitation time series with a temporal resolution of five minutes. The synthetic time series are first compared to observed rainfall reference time series. Comparison criteria include event-based statistics like mean dry spell and wet spell duration, wet spell amount and intensity, long-term means of precipitation sum and number of events, and extreme value distributions for different durations. Then they are compared regarding simulated discharge characteristics using an urban hydrological model on a fictitious sewer network. First results indicate that all rainfall models are suitable in principle, but with different strengths and weaknesses regarding the rainfall and runoff characteristics considered.
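
    Event-based statistics such as mean dry-spell and wet-spell duration can be computed from a rainfall series by measuring maximal runs of time steps above and below a wet threshold; a minimal sketch (threshold value and data illustrative, not from the study):

```python
from itertools import groupby

def spell_statistics(series, wet_threshold=0.1):
    # A spell is a maximal run of consecutive wet (or dry) time
    # steps; return the mean wet- and dry-spell durations in steps.
    flags = [v > wet_threshold for v in series]
    wet, dry = [], []
    for is_wet, run in groupby(flags):
        (wet if is_wet else dry).append(len(list(run)))
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return mean(wet), mean(dry)
```

Applied to both a synthetic and an observed 5-min series, such statistics give a direct basis for the event-based part of the comparison.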

  8. A Fractional Cartesian Composition Model for Semi-Spatial Comparative Visualization Design.

    PubMed

    Kolesar, Ivan; Bruckner, Stefan; Viola, Ivan; Hauser, Helwig

    2017-01-01

    The study of spatial data ensembles leads to substantial visualization challenges in a variety of applications. In this paper, we present a model for comparative visualization that supports the design of corresponding ensemble visualization solutions through partial automation. We focus on applications where the user is interested in preserving selected spatial characteristics of the data as much as possible, even when many ensemble members should be jointly studied using comparative visualization. In our model, we separate the design challenge into a minimal set of user-specified parameters and an optimization component for the automatic configuration of the remaining design variables. We provide an illustrated formal description of our model and exemplify our approach in the context of several application examples from different domains in order to demonstrate its generality within the class of comparative visualization problems for spatial data ensembles.

  9. Case Studies Comparing System Advisor Model (SAM) Results to Real Performance Data: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blair, N.; Dobos, A.; Sather, N.

    2012-06-01

    NREL has completed a series of detailed case studies comparing the simulations of the System Advisor Model (SAM) and measured performance data or published performance expectations. These case studies compare PV measured performance data with simulated performance data using appropriate weather data. The measured data sets were primarily taken from NREL onsite PV systems and weather monitoring stations.

  10. COMPARING A NEW ALGORITHM WITH THE CLASSIC METHODS FOR ESTIMATING THE NUMBER OF FACTORS. (R825173)

    EPA Science Inventory

    Abstract

    This paper presents and compares a new algorithm for finding the number of factors in a data analytic model. After we describe the new method, called NUMFACT, we compare it with standard methods for finding the number of factors to use in a model. The standard...

  11. Quantifying and comparing dynamic predictive accuracy of joint models for longitudinal marker and time-to-event in presence of censoring and competing risks.

    PubMed

    Blanche, Paul; Proust-Lima, Cécile; Loubère, Lucie; Berr, Claudine; Dartigues, Jean-François; Jacqmin-Gadda, Hélène

    2015-03-01

    Thanks to the growing interest in personalized medicine, joint modeling of longitudinal marker and time-to-event data has recently started to be used to derive dynamic individual risk predictions. Individual predictions are called dynamic because they are updated when information on the subject's health profile grows with time. We focus in this work on statistical methods for quantifying and comparing dynamic predictive accuracy of this kind of prognostic models, accounting for right censoring and possibly competing events. Dynamic area under the ROC curve (AUC) and Brier Score (BS) are used to quantify predictive accuracy. Nonparametric inverse probability of censoring weighting is used to estimate dynamic curves of AUC and BS as functions of the time at which predictions are made. Asymptotic results are established and both pointwise confidence intervals and simultaneous confidence bands are derived. Tests are also proposed to compare the dynamic prediction accuracy curves of two prognostic models. The finite sample behavior of the inference procedures is assessed via simulations. We apply the proposed methodology to compare various prediction models using repeated measures of two psychometric tests to predict dementia in the elderly, accounting for the competing risk of death. Models are estimated on the French Paquid cohort and predictive accuracies are evaluated and compared on the French Three-City cohort. © 2014, The International Biometric Society.
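
    The Brier score used above quantifies accuracy as the mean squared difference between the predicted event probability and the observed 0/1 outcome; the inverse-probability-of-censoring weighting described in the abstract extends it to censored data, which this minimal uncensored sketch omits:

```python
def brier_score(pred_probs, outcomes):
    # Mean squared error between predicted event probabilities and
    # observed binary outcomes; 0 is perfect, 0.25 matches an
    # uninformative constant prediction of 0.5.
    n = len(pred_probs)
    return sum((p - y) ** 2 for p, y in zip(pred_probs, outcomes)) / n
```

Evaluating this at a sequence of prediction times, with predictions updated as the longitudinal marker accumulates, yields the dynamic accuracy curves the paper compares between prognostic models.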

  12. Evaluation of black carbon estimations in global aerosol models

    NASA Astrophysics Data System (ADS)

    Koch, D.; Schulz, M.; Kinne, S.; McNaughton, C.; Spackman, J. R.; Balkanski, Y.; Bauer, S.; Berntsen, T.; Bond, T. C.; Boucher, O.; Chin, M.; Clarke, A.; de Luca, N.; Dentener, F.; Diehl, T.; Dubovik, O.; Easter, R.; Fahey, D. W.; Feichter, J.; Fillmore, D.; Freitag, S.; Ghan, S.; Ginoux, P.; Gong, S.; Horowitz, L.; Iversen, T.; Kirkevåg, A.; Klimont, Z.; Kondo, Y.; Krol, M.; Liu, X.; Miller, R.; Montanaro, V.; Moteki, N.; Myhre, G.; Penner, J. E.; Perlwitz, J.; Pitari, G.; Reddy, S.; Sahu, L.; Sakamoto, H.; Schuster, G.; Schwarz, J. P.; Seland, Ø.; Stier, P.; Takegawa, N.; Takemura, T.; Textor, C.; van Aardenne, J. A.; Zhao, Y.

    2009-11-01

    We evaluate black carbon (BC) model predictions from the AeroCom model intercomparison project by considering the diversity among year 2000 model simulations and comparing model predictions with available measurements. These model-measurement intercomparisons include BC surface and aircraft concentrations, aerosol absorption optical depth (AAOD) retrievals from AERONET and the Ozone Monitoring Instrument (OMI), and BC column estimations based on AERONET. In regions other than Asia, most models are biased high compared to surface concentration measurements. However, compared with (column) AAOD or BC burden retrievals, the models are generally biased low. The average ratio of model to retrieved AAOD is less than 0.7 in South American and 0.6 in African biomass burning regions; both of these regions lack surface concentration measurements. In Asia the average model to observed ratio is 0.7 for AAOD and 0.5 for BC surface concentrations. Compared with aircraft measurements over the Americas at latitudes between 0 and 50° N, the average model is a factor of 8 larger than observed, and most models exceed the measured BC standard deviation in the mid to upper troposphere. At higher latitudes the average model to aircraft BC ratio is 0.4 and models underestimate the observed BC loading in the lower and middle troposphere associated with springtime Arctic haze. Low model bias for AAOD but overestimation of surface and upper atmospheric BC concentrations at lower latitudes suggests that most models are underestimating BC absorption and should improve estimates for refractive index, particle size, and optical effects of BC coating. Retrieval uncertainties and/or differences with model diagnostic treatment may also contribute to the model-measurement disparity. The largest AeroCom model diversity occurred in northern Eurasia and the remote Arctic, regions influenced by anthropogenic sources.
Changing emissions, aging, removal, or optical properties within a single model generated a smaller change in model predictions than the range represented by the full set of AeroCom models. Upper tropospheric concentrations of BC mass from the aircraft measurements are suggested to provide a unique new benchmark to test scavenging and vertical dispersion of BC in global models.

  13. A Modified Mechanical Threshold Stress Constitutive Model for Austenitic Stainless Steels

    NASA Astrophysics Data System (ADS)

    Prasad, K. Sajun; Gupta, Amit Kumar; Singh, Yashjeet; Singh, Swadesh Kumar

    2016-12-01

    This paper presents a modified mechanical threshold stress (m-MTS) constitutive model. The m-MTS model incorporates variable athermal and dynamic strain aging (DSA) components to accurately predict the flow stress behavior of austenitic stainless steels (ASS) 316 and 304. Uniaxial tensile tests were conducted at strain rates between 0.0001 and 0.01 s⁻¹ and temperatures ranging from 50 to 650 °C to evaluate the material constants of the constitutive models. The test results revealed the high dependence of flow stress on strain, strain rate and temperature. In addition, it was observed that DSA occurred at elevated temperatures and very low strain rates, causing an increase in flow stress. While the original MTS model is capable of predicting the flow stress behavior for ASS, statistical parameters show it to be less accurate than other models such as the Johnson-Cook model, the modified Zerilli-Armstrong (m-ZA) model, and modified Arrhenius-type equations (m-Arr). Therefore, in order to accurately model both the DSA and non-DSA regimes, the original MTS model was modified by incorporating variable athermal and DSA components. The suitability of the m-MTS model was assessed by comparing the statistical parameters. It was observed that the m-MTS model was highly accurate for the DSA regime when compared to the existing models. However, models like m-ZA and m-Arr showed better results for the non-DSA regime.

  14. Validation of individual and aggregate global flood hazard models for two major floods in Africa.

    NASA Astrophysics Data System (ADS)

    Trigg, M.; Bernhofen, M.; Whyman, C.

    2017-12-01

    A recent intercomparison of global flood hazard models undertaken by the Global Flood Partnership shows that there is an urgent requirement to undertake more validation of the models against flood observations. As part of the intercomparison, the aggregated model dataset resulting from the project was provided as open access data. We compare the individual and aggregated flood extent output from the six global models and test these against two major floods on the African continent within the last decade, namely severe flooding on the Niger River in Nigeria in 2012, and on the Zambezi River in Mozambique in 2007. We test whether aggregating different numbers and combinations of models improves fit to the observations compared with the individual model outputs. We present results that illustrate some of the challenges of comparing imperfect models with imperfect observations, and also of defining the probability of a real event in order to test standard model output probabilities. Finally, we propose a collective set of open access validation flood events, with associated observational data and descriptions, that provide a standard set of tests across different climates and hydraulic conditions.

  15. Validation of numerical models for flow simulation in labyrinth seals

    NASA Astrophysics Data System (ADS)

    Frączek, D.; Wróblewski, W.

    2016-10-01

    CFD results were compared with the results of experiments for the flow through a labyrinth seal. RANS turbulence models (k-epsilon, k-omega, SST and SST-SAS) were selected for the study. Steady and transient results were analyzed. ANSYS CFX was used for the numerical computation. The analysis covered flow through a sealing section with a honeycomb land. Leakage flows and velocity profiles in the seal were compared. In addition to comparing the computational models, the divergence between modeled and experimental results was quantified. Guidelines for modeling such flows were formulated.

  16. Comment on 'Parametrization of Stillinger-Weber potential based on a valence force field model: application to single-layer MoS2 and black phosphorus'.

    PubMed

    Midtvedt, Daniel; Croy, Alexander

    2016-06-10

    We compare the simplified valence-force model for single-layer black phosphorus with the original model and recent ab initio results. Using an analytic approach and numerical calculations we find that the simplified model yields Young's moduli that are smaller compared to the original model and are almost a factor of two smaller than ab initio results. Moreover, the Poisson ratios are an order of magnitude smaller than values found in the literature.

  17. Modeling Atmospheric Aerosols in WRF/Chem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yang; Hu, X.-M.; Howell, G.

    2005-06-01

    In this study, three aerosol modules are tested and compared. The first module is the Modal Aerosol Dynamics Model for Europe (MADE) with the secondary organic aerosol model (SORGAM) (referred to as MADE/SORGAM). The second module is the Model for Simulating Aerosol Interactions and Chemistry (MOSAIC). The third module is the Model of Aerosol Dynamics, Reaction, Ionization and Dissolution (MADRID). The three modules differ in terms of size representation used, chemical species treated, assumptions and numerical algorithms used. Table 1 compares the major processes among the three aerosol modules.

  18. Bayesian techniques for analyzing group differences in the Iowa Gambling Task: A case study of intuitive and deliberate decision-makers.

    PubMed

    Steingroever, Helen; Pachur, Thorsten; Šmíra, Martin; Lee, Michael D

    2018-06-01

    The Iowa Gambling Task (IGT) is one of the most popular experimental paradigms for comparing complex decision-making across groups. Most commonly, IGT behavior is analyzed using frequentist tests to compare performance across groups, and to compare inferred parameters of cognitive models developed for the IGT. Here, we present a Bayesian alternative based on Bayesian repeated-measures ANOVA for comparing performance, and a suite of three complementary model-based methods for assessing the cognitive processes underlying IGT performance. The three model-based methods involve Bayesian hierarchical parameter estimation, Bayes factor model comparison, and Bayesian latent-mixture modeling. We illustrate these Bayesian methods by applying them to test the extent to which differences in intuitive versus deliberate decision style are associated with differences in IGT performance. The results show that intuitive and deliberate decision-makers behave similarly on the IGT, and the modeling analyses consistently suggest that both groups of decision-makers rely on similar cognitive processes. Our results challenge the notion that individual differences in intuitive and deliberate decision styles have a broad impact on decision-making. They also highlight the advantages of Bayesian methods, especially their ability to quantify evidence in favor of the null hypothesis, and that they allow model-based analyses to incorporate hierarchical and latent-mixture structures.

  19. Egg production forecasting: Determining efficient modeling approaches.

    PubMed

    Ahmad, H A

    2011-12-01

    Several statistical and artificial intelligence models were developed to compare egg production forecasts in commercial layers. Initial data for these models were collected from a comparative layer trial on commercial strains conducted at the Poultry Research Farms, Auburn University. Simulated data were produced to represent new scenarios by using means and SD of egg production of the 22 commercial strains. From the simulated data, random examples were generated for neural network training and testing for the weekly egg production prediction from wk 22 to 36. Three neural network architectures (back-propagation-3, Ward-5, and the general regression neural network) were compared for their efficiency in forecasting egg production, along with other traditional models. The general regression neural network gave the best-fitting line, which almost overlapped with the commercial egg production data, with an R² of 0.71. The general regression neural network-predicted curve was compared with the original egg production data, the average curves of white-shelled and brown-shelled strains, linear regression predictions, and the Gompertz nonlinear model. The general regression neural network was superior in all these comparisons and may be the model of choice if the initial overprediction is managed efficiently. In general, neural network models are efficient, are easy to use, require fewer data, and are practical under farm management conditions to forecast egg production.
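
    The general regression neural network mentioned above is, in essence, Gaussian-kernel-weighted averaging of the training targets (equivalent to Nadaraya-Watson kernel regression); a minimal one-dimensional sketch, with illustrative names and data not taken from the study:

```python
import math

def grnn_predict(x_train, y_train, x, sigma=1.0):
    # Each training point votes for its target value with a weight
    # that decays with its Gaussian distance from the query x; the
    # prediction is the weighted average of the targets.
    weights = [math.exp(-((x - xi) ** 2) / (2.0 * sigma ** 2))
               for xi in x_train]
    total = sum(weights)
    return sum(w * yi for w, yi in zip(weights, y_train)) / total
```

For egg-production forecasting, x would be the week of lay and y the observed production; the single smoothing parameter sigma is what makes the method quick to fit compared with iteratively trained networks.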

  20. General quantitative genetic methods for comparative biology: phylogenies, taxonomies and multi-trait models for continuous and categorical characters.

    PubMed

    Hadfield, J D; Nakagawa, S

    2010-03-01

    Although many of the statistical techniques used in comparative biology were originally developed in quantitative genetics, subsequent development of comparative techniques has progressed in relative isolation. Consequently, many of the new and planned developments in comparative analysis already have well-tested solutions in quantitative genetics. In this paper, we take three recent publications that develop phylogenetic meta-analysis, either implicitly or explicitly, and show how they can be considered as quantitative genetic models. We highlight some of the difficulties with the proposed solutions, and demonstrate that standard quantitative genetic theory and software offer solutions. We also show how results from Bayesian quantitative genetics can be used to create efficient Markov chain Monte Carlo algorithms for phylogenetic mixed models, thereby extending their generality to non-Gaussian data. Of particular utility is the development of multinomial models for analysing the evolution of discrete traits, and the development of multi-trait models in which traits can follow different distributions. Meta-analyses often include a nonrandom collection of species for which the full phylogenetic tree has only been partly resolved. Using missing data theory, we show how the presented models can be used to correct for nonrandom sampling and show how taxonomies and phylogenies can be combined to give a flexible framework with which to model dependence.

  1. Comparing approaches to spatially explicit ecosystem service modeling: a case study from the San Pedro River, Arizona

    USGS Publications Warehouse

    Bagstad, Kenneth J.; Semmens, Darius J.; Winthrop, Robert

    2013-01-01

    Although the number of ecosystem service modeling tools has grown in recent years, quantitative comparative studies of these tools have been lacking. In this study, we applied two leading open-source, spatially explicit ecosystem services modeling tools – Artificial Intelligence for Ecosystem Services (ARIES) and Integrated Valuation of Ecosystem Services and Tradeoffs (InVEST) – to the San Pedro River watershed in southeast Arizona, USA, and northern Sonora, Mexico. We modeled locally important services that both modeling systems could address – carbon, water, and scenic viewsheds. We then applied managerially relevant scenarios for urban growth and mesquite management to quantify ecosystem service changes. InVEST and ARIES use different modeling approaches and ecosystem services metrics; for carbon, metrics were more similar and results were more easily comparable than for viewsheds or water. However, findings demonstrate similar gains and losses of ecosystem services and conclusions when comparing effects across our scenarios. Results were more closely aligned for landscape-scale urban-growth scenarios and more divergent for a site-scale mesquite-management scenario. Follow-up studies, including testing in different geographic contexts, can improve our understanding of the strengths and weaknesses of these and other ecosystem services modeling tools as they move closer to readiness for supporting day-to-day resource management.

  2. Development of a Cadaveric Model for Arthrocentesis.

    PubMed

    MacIver, Melissa A; Johnson, Matthew

    2015-01-01

    This article reports the development of a novel cadaveric model for future use in teaching arthrocentesis. In the clinical setting, animal safety is essential and practice is thus limited. The objectives of the study were to develop the model and compare it with an unmodified cadaver, injecting one of two types of fluid to increase yield. The two fluids injected, mineral oil (MO) and hypertonic saline (HS), were compared to determine any difference in yield. Lastly, aspirations performed immediately after (T1) or three hours after (T2) injection were compared to determine any effect on diagnostic yield. Joints used included the stifle, elbow, and carpus in eight medium dog cadavers. Arthrocentesis was performed before injection (control) and yield measured. Test joints were injected with MO or HS and yield measured after range of motion (T1) and three hours post injection to simulate lab preparation (T2). Both models had statistically significantly higher yield compared with the unmodified cadaver in all joints at T1 and T2 (p<.05), with the exception of the HST2 carpus. T2 aspiration had a statistically significant lower yield when compared to the T1HS carpus, T1HS elbow, and T1MO carpus. Overall, irrespective of fluid volume or type, percent yield was lower at T2 compared to T1. No statistically significant difference was seen between HS and MO in most joints, with the exception of the MOT1 stifle and HST2 elbow. Within the time frame assessed, both models were acceptable. However, the HS arthrocentesis model proved more appropriate for student trials due to the difficulty of aspiration with MO.

  3. Comparing Multidimensional and Continuum Models of Vocabulary Acquisition: An Empirical Examination of the Vocabulary Knowledge Scale

    ERIC Educational Resources Information Center

    Stewart, Jeffrey; Batty, Aaron Olaf; Bovee, Nicholas

    2012-01-01

    Second language vocabulary acquisition has been modeled both as multidimensional in nature and as a continuum wherein the learner's knowledge of a word develops along a cline from recognition through production. In order to empirically examine and compare these models, the authors assess the degree to which the Vocabulary Knowledge Scale (VKS;…

  4. Comparing Cognitive Models of Domain Mastery and Task Performance in Algebra: Validity Evidence for a State Assessment

    ERIC Educational Resources Information Center

    Warner, Zachary B.

    2013-01-01

    This study compared an expert-based cognitive model of domain mastery with student-based cognitive models of task performance for Integrated Algebra. Interpretations of student test results are limited by experts' hypotheses of how students interact with the items. In reality, the cognitive processes that students use to solve each item may be…

  5. Comparing the Effects of Echoic Prompts and Echoic Prompts Plus Modeled Prompts on Intraverbal Behavior

    ERIC Educational Resources Information Center

    Valentino, Amber L.; Shillingsburg, M. Alice; Call, Nathan A.

    2012-01-01

    We compared strategies to teach vocal intraverbal responses to an adolescent diagnosed with autism and Down syndrome. One strategy involved echoic prompts only. The second strategy involved an echoic prompt paired with a modeled prompt in the form of sign language. Presenting the modeled prompt with the echoic prompt resulted in faster acquisition…

  6. Online or on Campus: A Student Tertiary Education Cost Model Comparing the Two, with a Quality Proviso

    ERIC Educational Resources Information Center

    Ioakimidis, Marilou

    2007-01-01

    This paper presents the development and validation of a two-level hierarchical cost model for tertiary education, which enables prospective students to compare the total cost of attending a traditional Baccalaureate degree education with that of the same programme taken through distance e-learning. The model was validated by a sample of Greek…

  7. Comparing effects of fire modeling methods on simulated fire patterns and succession: a case study in the Missouri Ozarks

    Treesearch

    Jian Yang; Hong S. He; Brian R. Sturtevant; Brian R. Miranda; Eric J. Gustafson

    2008-01-01

    We compared four fire spread simulation methods (completely random, dynamic percolation, size-based minimum travel time algorithm, and duration-based minimum travel time algorithm) and two fire occurrence simulation methods (Poisson fire frequency model and hierarchical fire frequency model) using a two-way factorial design. We examined these treatment effects on...
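A Poisson fire frequency model of the kind named above draws the number of fire events per time step from a Poisson distribution. A minimal stand-alone sketch (the mean rate, time step, and seed below are hypothetical illustrations, not values from the study):

```python
import math
import random

def poisson_draw(lam, rng):
    """Draw one Poisson-distributed count via Knuth's inversion method."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def simulate_fire_occurrence(mean_fires_per_decade, decades, seed=1):
    """Poisson fire-frequency sketch: number of fire events per decade."""
    rng = random.Random(seed)
    return [poisson_draw(mean_fires_per_decade, rng) for _ in range(decades)]

print(simulate_fire_occurrence(2.5, decades=10))
```

A hierarchical variant would additionally let the mean rate itself vary by landscape stratum.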

  8. Application of Bayesian methods to habitat selection modeling of the northern spotted owl in California: new statistical methods for wildlife research

    Treesearch

    Howard B. Stauffer; Cynthia J. Zabel; Jeffrey R. Dunk

    2005-01-01

    We compared a set of competing logistic regression habitat selection models for Northern Spotted Owls (Strix occidentalis caurina) in California. The habitat selection models were estimated, compared, evaluated, and tested using multiple sample datasets collected on federal forestlands in northern California. We used Bayesian methods in interpreting...

  9. Teaching Social-Communication Skills to Preschoolers with Autism: Efficacy of Video versus in Vivo Modeling in the Classroom

    ERIC Educational Resources Information Center

    Wilson, Kaitlyn P.

    2013-01-01

    Video modeling is a time- and cost-efficient intervention that has been proven effective for children with autism spectrum disorder (ASD); however, the comparative efficacy of this intervention has not been examined in the classroom setting. The present study examines the relative efficacy of video modeling as compared to the more widely-used…

  10. Teaching Daily Living Skills to Seven Individuals with Severe Intellectual Disabilities: A Comparison of Video Prompting to Video Modeling

    ERIC Educational Resources Information Center

    Cannella-Malone, Helen I.; Fleming, Courtney; Chung, Yi-Cheih; Wheeler, Geoffrey M.; Basbagill, Abby R.; Singh, Angella H.

    2011-01-01

    We conducted a systematic replication of Cannella-Malone et al. by comparing the effects of video prompting to video modeling for teaching seven students with severe disabilities to do laundry and wash dishes. The video prompting and video modeling procedures were counterbalanced across tasks and participants and compared in an alternating…

  11. Lease vs. Purchase Analysis of Alternative Fuel Vehicles in the United States Marine Corps

    DTIC Science & Technology

    2009-12-01

    data (2004 to 2009) for the largest populations of AFVs in the light-duty category and then apply a model that will compare the two alternatives based on their relative net present values.

  12. Lease VS Purchase Analysis of Alternative Fuel Vehicles in the United States Marine Corps

    DTIC Science & Technology

    2009-10-30

    the light-duty category and then apply a model that will compare the two alternatives based on their relative net present values. An aggregated view of several different light-duty AFV...

  13. Simulation Study on Fit Indexes in CFA Based on Data with Slightly Distorted Simple Structure

    ERIC Educational Resources Information Center

    Beauducel, Andre; Wittmann, Werner W.

    2005-01-01

    Fit indexes were compared with respect to a specific type of model misspecification. Simple structure was violated by secondary loadings that were present in the true models but not specified in the estimated models. The χ2 test, Comparative Fit Index, Goodness-of-Fit Index, Incremental Fit Index, Nonnormed Fit Index, root mean…
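The incremental fit indices compared in such simulations are simple functions of the model and baseline (independence-model) chi-square statistics. A minimal sketch using the standard formulas (the input chi-square values, degrees of freedom, and sample size are hypothetical):

```python
from math import sqrt

def fit_indices(chi2_m, df_m, chi2_b, df_b, n):
    """Common CFA fit indices from model (m) and baseline (b) chi-squares."""
    # Comparative Fit Index
    num = max(chi2_m - df_m, 0.0)
    den = max(chi2_m - df_m, chi2_b - df_b, 0.0)
    cfi = 1.0 - (num / den if den > 0 else 0.0)
    # Nonnormed Fit Index (Tucker-Lewis Index)
    nnfi = ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1.0)
    # Root mean square error of approximation
    rmsea = sqrt(max(chi2_m - df_m, 0.0) / (df_m * (n - 1)))
    return {"CFI": round(cfi, 3), "NNFI": round(nnfi, 3), "RMSEA": round(rmsea, 3)}

print(fit_indices(chi2_m=85.2, df_m=48, chi2_b=920.4, df_b=66, n=300))
```

In a misspecification study, these indices would be computed for each replicated dataset and compared against conventional cutoffs.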

  14. Multiple-Use Site Demand Analysis: An Application to the Boundary Waters Canoe Area Wilderness.

    ERIC Educational Resources Information Center

    Peterson, George L.; And Others

    1982-01-01

    A single-site, multiple-use model for analyzing trip demand is derived from a multiple site regional model based on utility maximizing choice theory. The model is used to analyze and compare trips to the Boundary Waters Canoe Area Wilderness for several types of use. Travel cost elasticities of demand are compared and discussed. (Authors/JN)

  15. CoopEUS Case Study: Tsunami Modelling and Early Warning Systems for Near Source Areas (Mediterranean, Juan de Fuca).

    NASA Astrophysics Data System (ADS)

    Beranzoli, Laura; Best, Mairi; Chierici, Francesco; Embriaco, Davide; Galbraith, Nan; Heeseman, Martin; Kelley, Deborah; Pirenne, Benoit; Scofield, Oscar; Weller, Robert

    2015-04-01

    There is a need for tsunami modeling and early warning systems for near-source areas; tsunamis are a common public safety threat in the Mediterranean and on the Juan de Fuca/NE Pacific coast of North America, regions covered by the EMSO, OOI, and ONC ocean observatories. Through the CoopEUS international cooperation project, a number of environmental research infrastructures have come together to coordinate efforts on environmental challenges; this tsunami case study tackles one such challenge. There is a mutual need for tsunami event field data and modeling to deepen our experience in testing methodology and developing real-time data processing. Tsunami field data are already available for past events; part of this use case compares these for compatibility, gap analysis, and model groundtruthing. It also reviews the sensors needed and harmonizes instrument settings. Sensor metadata and registries are compared, harmonized, and aligned. Data policies and access are also compared and assessed for gap analysis. Modelling algorithms are compared and tested against archived and real-time data. This case study will then be extended to other related tsunami data and model sources globally with similar geographic and seismic scenarios.

  16. Solar irradiance reduction to counteract radiative forcing from a quadrupling of CO2: climate responses simulated by four earth system models

    NASA Astrophysics Data System (ADS)

    Schmidt, H.; Alterskjær, K.; Karam, D. Bou; Boucher, O.; Jones, A.; Kristjánsson, J. E.; Niemeier, U.; Schulz, M.; Aaheim, A.; Benduhn, F.; Lawrence, M.; Timmreck, C.

    2012-06-01

    In this study we compare the response of four state-of-the-art Earth system models to climate engineering under scenario G1 of two model intercomparison projects: GeoMIP (Geoengineering Model Intercomparison Project) and IMPLICC (EU project "Implications and risks of engineering solar radiation to limit climate change"). In G1, the radiative forcing from an instantaneous quadrupling of the CO2 concentration, starting from the preindustrial level, is balanced by a reduction of the solar constant. Model responses to the two counteracting forcings in G1 are compared to the preindustrial climate in terms of global means and regional patterns and their robustness. While the global mean surface air temperature in G1 remains almost unchanged compared to the control simulation, the meridional temperature gradient is reduced in all models. Another robust response is the global reduction of precipitation with strong effects in particular over North and South America and northern Eurasia. In comparison to the climate response to a quadrupling of CO2 alone, the temperature responses are small in experiment G1. Precipitation responses are, however, in many regions of comparable magnitude but globally of opposite sign.

  17. The Threshold Bias Model: A Mathematical Model for the Nomothetic Approach of Suicide

    PubMed Central

    Folly, Walter Sydney Dutra

    2011-01-01

    Background Comparative and predictive analyses of suicide data from different countries are difficult to perform due to varying approaches and the lack of comparative parameters. Methodology/Principal Findings A simple model (the Threshold Bias Model) was tested for comparative and predictive analyses of suicide rates by age. The model comprises a six-parameter distribution that was applied to the USA suicide rates by age for the years 2001 and 2002. Subsequently, linear extrapolations of the parameter values obtained for these years were performed in order to estimate the values corresponding to the year 2003. The calculated distributions agreed reasonably well with the aggregate data. The model was also used to determine the age above which suicide rates become statistically observable in the USA, Brazil and Sri Lanka. Conclusions/Significance The Threshold Bias Model has considerable potential applications in demographic studies of suicide. Moreover, since the model can be used to predict the evolution of suicide rates based on information extracted from past data, it will be of great interest to suicidologists and other researchers in the field of mental health. PMID:21909431

  18. The threshold bias model: a mathematical model for the nomothetic approach of suicide.

    PubMed

    Folly, Walter Sydney Dutra

    2011-01-01

    Comparative and predictive analyses of suicide data from different countries are difficult to perform due to varying approaches and the lack of comparative parameters. A simple model (the Threshold Bias Model) was tested for comparative and predictive analyses of suicide rates by age. The model comprises a six-parameter distribution that was applied to the USA suicide rates by age for the years 2001 and 2002. Subsequently, linear extrapolations of the parameter values obtained for these years were performed in order to estimate the values corresponding to the year 2003. The calculated distributions agreed reasonably well with the aggregate data. The model was also used to determine the age above which suicide rates become statistically observable in the USA, Brazil and Sri Lanka. The Threshold Bias Model has considerable potential applications in demographic studies of suicide. Moreover, since the model can be used to predict the evolution of suicide rates based on information extracted from past data, it will be of great interest to suicidologists and other researchers in the field of mental health.
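The year-ahead step described here, fitting the parameters for 2001 and 2002 and extrapolating to 2003, is an ordinary linear extrapolation applied to each of the six parameters. A minimal sketch; the parameter vectors below are hypothetical placeholders, not the study's fitted values:

```python
def extrapolate_one_year(params_prev, params_curr):
    """Linear extrapolation of each parameter one year ahead:
    p_next = p_curr + (p_curr - p_prev) = 2 * p_curr - p_prev."""
    return [2 * b - a for a, b in zip(params_prev, params_curr)]

# hypothetical six-parameter vectors for two consecutive years
p_2001 = [10.2, 0.80, 45.0, 3.1, 0.050, 12.0]
p_2002 = [10.6, 0.78, 44.2, 3.3, 0.055, 12.4]
print(extrapolate_one_year(p_2001, p_2002))
```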

  19. Proposed biokinetic model for phosphorus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leggett, Richard Wayne

    2014-06-04

    This paper reviews data related to the biokinetics of phosphorus in the human body and proposes a biokinetic model for systemic phosphorus for use in updated International Commission on Radiological Protection (ICRP) guidance on occupational intake of radionuclides. Compared with the ICRP's current occupational model for phosphorus (Publication 68, 1994), the proposed model provides a more realistic description of the paths of movement of phosphorus in the body and improved consistency with experimental, medical, and environmental data on the time-dependent distribution and retention of phosphorus following uptake to blood. For acute uptake of 32P to blood, the proposed model yields roughly a 50% decrease in dose estimates for bone surface and red marrow and a 6-fold increase in estimates for liver and kidney compared with the biokinetic model of Publication 68 (applying Publication 68 dosimetric models in both sets of calculations). For acute uptake of 33P to blood, the proposed model yields roughly a 50% increase in dose estimates for bone surface and red marrow and a 7-fold increase in estimates for liver and kidney compared with the model of Publication 68.

  20. Comparison analysis of data mining models applied to clinical research in traditional Chinese medicine.

    PubMed

    Zhao, Yufeng; Xie, Qi; He, Liyun; Liu, Baoyan; Li, Kun; Zhang, Xiang; Bai, Wenjing; Luo, Lin; Jing, Xianghong; Huo, Ruili

    2014-10-01

    To help researchers select appropriate data mining models that provide better evidence for the clinical practice of Traditional Chinese Medicine (TCM) diagnosis and therapy. Clinical issues based on data mining models were comprehensively summarized from four significant elements of the clinical studies: symptoms, symptom patterns, herbs, and efficacy. Existing problems were further generalized to determine the relevant factors in the performance of data mining models, e.g. data type, samples, parameters, and variable labels. Combining these relevant factors, the TCM clinical data features were compared with regard to statistical characteristics and informatics properties. Data mining models were compared simultaneously from the view of applicable conditions and suitable scopes. The main application problems were inconsistent data types and small samples for the data mining models used, which caused inappropriate or even mistaken results. These features, i.e. advantages, disadvantages, suitable data types, tasks of data mining, and the TCM issues, were summarized and compared. By considering the special features of different data mining models, clinical doctors can select suitable data mining models to resolve TCM problems.

  1. Comparing SEBAL and METRIC: Evapotranspiration Models Applied to Paramount Farms Almond Orchards

    NASA Astrophysics Data System (ADS)

    Furey, B. J.; Kefauver, S. C.

    2011-12-01

    Two evapotranspiration models were applied to almond and pistachio orchards in California. The SEBAL model, developed by W.G.M. Bastiaanssen, was programmed in MatLab for direct comparison to the METRIC model, developed by R.G. Allen and the IDWR. Remote sensing data from the NASA SARP 2011 Airborne Research Program were used in the application of these models. An evaluation of the models showed that they both followed the same pattern in evapotranspiration (ET) rates for different types of ground cover. The models exhibited slightly different ranges of values and appeared to be non-linearly related. Both models underestimated the actual ET at the CIMIS weather station. However, SEBAL overestimated the ET of the almond orchards by 0.16 mm/hr when applying its crop coefficient to the reference ET, whereas METRIC underestimated the ET of the almond orchards by only 0.10 mm/hr. Other types of ground cover were similarly compared. Temporal variability in ET rates between the morning and afternoon was also observed.

  2. Modeling of Turbulent Boundary Layer Surface Pressure Fluctuation Auto and Cross Spectra - Verification and Adjustments Based on TU-144LL Data

    NASA Technical Reports Server (NTRS)

    Rackl, Robert; Weston, Adam

    2005-01-01

    The literature on turbulent boundary layer pressure fluctuations provides several empirical models which were compared to the measured TU-144 data. The Efimtsov model showed the best agreement. Adjustments were made to improve its agreement further, consisting of the addition of a broad band peak in the mid frequencies, and a minor modification to the high frequency rolloff. The adjusted Efimtsov predicted and measured results are compared for both subsonic and supersonic flight conditions. Measurements in the forward and middle portions of the fuselage have better agreement with the model than those from the aft portion. For High Speed Civil Transport supersonic cruise, interior levels predicted by use of this model are expected to increase by 1-3 dB due to the adjustments to the Efimtsov model. The space-time cross-correlations and cross-spectra of the fluctuating surface pressure were also investigated. This analysis is an important ingredient in structural acoustic models of aircraft interior noise. Once again the measured data were compared to the predicted levels from the Efimtsov model.

  3. Comparison of type 2 diabetes prevalence estimates in Saudi Arabia from a validated Markov model against the International Diabetes Federation and other modelling studies

    PubMed Central

    Al-Quwaidhi, Abdulkareem J.; Pearce, Mark S.; Sobngwi, Eugene; Critchley, Julia A.; O’Flaherty, Martin

    2014-01-01

    Aims To compare the estimates and projections of type 2 diabetes mellitus (T2DM) prevalence in Saudi Arabia from a validated Markov model against other modelling estimates, such as those produced by the International Diabetes Federation (IDF) Diabetes Atlas and the Global Burden of Disease (GBD) project. Methods A discrete-state Markov model was developed and validated that integrates data on population, obesity and smoking prevalence trends in adult Saudis aged ≥25 years to estimate the trends in T2DM prevalence (annually from 1992 to 2022). The model was validated by comparing the age- and sex-specific prevalence estimates against a national survey conducted in 2005. Results Prevalence estimates from this new Markov model were consistent with the 2005 national survey and very similar to the GBD study estimates. Prevalence in men and women in 2000 was estimated by the GBD model respectively at 17.5% and 17.7%, compared to 17.7% and 16.4% in this study. The IDF estimates of the total diabetes prevalence were considerably lower at 16.7% in 2011 and 20.8% in 2030, compared with 29.2% in 2011 and 44.1% in 2022 in this study. Conclusion In contrast to other modelling studies, both the Saudi IMPACT Diabetes Forecast Model and the GBD model directly incorporated the trends in obesity prevalence and/or body mass index (BMI) to inform T2DM prevalence estimates. It appears that such a direct incorporation of obesity trends in modelling studies results in higher estimates of the future prevalence of T2DM, at least in countries where obesity has been rapidly increasing. PMID:24447810
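A discrete-state Markov cohort of the kind described can be sketched in a few lines. The sketch below uses a simplified two-state (healthy/diabetic) structure with state-specific mortality; all transition probabilities and the baseline prevalence are hypothetical placeholders, not the study's calibrated inputs:

```python
def project_prevalence(p_inc, p_die_healthy, p_die_dm, years, prev0):
    """Annual Markov cohort: healthy people develop T2DM at rate p_inc,
    each state has its own mortality; returns prevalence among survivors."""
    healthy, dm = 1.0 - prev0, prev0
    for _ in range(years):
        new_cases = healthy * p_inc
        healthy *= 1.0 - p_inc - p_die_healthy
        dm = dm * (1.0 - p_die_dm) + new_cases
    return dm / (healthy + dm)

# hypothetical inputs: 1.5% annual incidence, mortality 1% vs 2.5%
print(round(project_prevalence(0.015, 0.01, 0.025, years=20, prev0=0.10), 3))
```

The full model would additionally let incidence depend on the obesity and smoking trends mentioned in the abstract.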

  4. Hierarchical multi-scale approach to validation and uncertainty quantification of hyper-spectral image modeling

    NASA Astrophysics Data System (ADS)

    Engel, Dave W.; Reichardt, Thomas A.; Kulp, Thomas J.; Graff, David L.; Thompson, Sandra E.

    2016-05-01

    Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.

  5. Comparison of Response Surface and Kriging Models for Multidisciplinary Design Optimization

    NASA Technical Reports Server (NTRS)

    Simpson, Timothy W.; Korte, John J.; Mauery, Timothy M.; Mistree, Farrokh

    1998-01-01

    In this paper, we compare and contrast the use of second-order response surface models and kriging models for approximating non-random, deterministic computer analyses. After reviewing the response surface method for constructing polynomial approximations, kriging is presented as an alternative approximation method for the design and analysis of computer experiments. Both methods are applied to the multidisciplinary design of an aerospike nozzle which consists of a computational fluid dynamics model and a finite-element model. Error analysis of the response surface and kriging models is performed along with a graphical comparison of the approximations, and four optimization problems are formulated and solved using both sets of approximation models. The second-order response surface models and kriging models (using a constant underlying global model and a Gaussian correlation function) yield comparable results.
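The kriging variant named in the abstract reduces, in its simplest zero-mean form, to solving a small correlation system and taking a correlation-weighted sum of the samples. A stdlib-only 1-D sketch with made-up sample data; a full implementation would also estimate the constant trend and the correlation parameter theta:

```python
from math import exp

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def kriging_predict(xs, ys, x0, theta=1.0):
    """Zero-mean simple kriging with Gaussian correlation
    R_ij = exp(-theta * (x_i - x_j)**2); predicts r(x0)^T R^{-1} y."""
    R = [[exp(-theta * (a - b) ** 2) for b in xs] for a in xs]
    w = solve(R, ys)
    return sum(wi * exp(-theta * (x0 - xi) ** 2) for wi, xi in zip(w, xs))

xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 4.0]  # samples of y = x**2
print(kriging_predict(xs, ys, 1.0))  # interpolates sample points exactly
```

Unlike a second-order response surface fit, this predictor passes exactly through every sample point, which is the property that makes kriging attractive for deterministic computer experiments.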

  6. Simulation of streamflow and sediment transport in two surface-coal-mined basins in Fayette County, Pennsylvania

    USGS Publications Warehouse

    Sams, J. I.; Witt, E. C.

    1995-01-01

    The Hydrological Simulation Program - Fortran (HSPF) was used to simulate streamflow and sediment transport in two surface-mined basins of Fayette County, Pa. Hydrologic data from the Stony Fork Basin (0.93 square miles) was used to calibrate HSPF parameters. The calibrated parameters were applied to an HSPF model of the Poplar Run Basin (8.83 square miles) to evaluate the transfer value of model parameters. The results of this investigation provide information to the Pennsylvania Department of Environmental Resources, Bureau of Mining and Reclamation, regarding the value of the simulated hydrologic data for use in cumulative hydrologic-impact assessments of surface-mined basins. The calibration period was October 1, 1985, through September 30, 1988 (water years 1986-88). The simulated data were representative of the observed data from the Stony Fork Basin. Mean simulated streamflow was 1.64 cubic feet per second compared to measured streamflow of 1.58 cubic feet per second for the 3-year period. The difference between the observed and simulated peak stormflow ranged from 4.0 to 59.7 percent for 12 storms. The simulated sediment load for the 1987 water year was 127.14 tons (0.21 ton per acre), which compares to a measured sediment load of 147.09 tons (0.25 ton per acre). The total simulated suspended-sediment load for the 3-year period was 538.2 tons (0.30 ton per acre per year), which compares to a measured sediment load of 467.61 tons (0.26 ton per acre per year). The model was verified by comparing observed and simulated data from October 1, 1988, through September 30, 1989. The results obtained were comparable to those from the calibration period. The simulated mean daily discharge was representative of the range of data observed from the basin and of the frequency with which specific discharges were equalled or exceeded. The calibrated and verified parameters from the Stony Fork model were applied to an HSPF model of the Poplar Run Basin. 
The two basins are in a similar physical setting. Data from October 1, 1987, through September 30, 1989, were used to evaluate the Poplar Run model. In general, the results from the Poplar Run model were comparable to those obtained from the Stony Fork model. The difference between observed and simulated total streamflow was 1.1 percent for the 2-year period. The mean annual streamflow simulated by the Poplar Run model was 18.3 cubic feet per second. This compares to an observed streamflow of 18.15 cubic feet per second. For the 2-year period, the simulated sediment load was 2,754 tons (0.24 ton per acre per year), which compares to a measured sediment load of 3,051.2 tons (0.27 ton per acre per year) for the Poplar Run Basin. Cumulative frequency-distribution curves of the observed and simulated streamflow compared well. The comparison between observed and simulated data improved as the time span increased. Simulated annual means and totals were more representative of the observed data than hourly data used in comparing storm events. The structure and organization of the HSPF model facilitated the simulation of a wide range of hydrologic processes. The simulation results from this investigation indicate that model parameters may be transferred to ungaged basins to generate representative hydrologic data through modeling techniques.

  7. Emerging from the bottleneck: benefits of the comparative approach to modern neuroscience.

    PubMed

    Brenowitz, Eliot A; Zakon, Harold H

    2015-05-01

    Neuroscience has historically exploited a wide diversity of animal taxa. Recently, however, research has focused increasingly on a few model species. This trend has accelerated with the genetic revolution, as genomic sequences and genetic tools became available for a few species, which formed a bottleneck. This coalescence on a small set of model species comes with several costs that are often not considered, especially in the current drive to use mice explicitly as models for human diseases. Comparative studies of strategically chosen non-model species can complement model species research and yield more rigorous studies. As genetic sequences and tools become available for many more species, we are poised to emerge from the bottleneck and once again exploit the rich biological diversity offered by comparative studies. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Editorial: Cognitive Architectures, Model Comparison and AGI

    NASA Astrophysics Data System (ADS)

    Lebiere, Christian; Gonzalez, Cleotilde; Warwick, Walter

    2010-12-01

    Cognitive Science and Artificial Intelligence share compatible goals of understanding and possibly generating broadly intelligent behavior. In order to determine if progress is made, it is essential to be able to evaluate the behavior of complex computational models, especially those built on general cognitive architectures, and compare it to benchmarks of intelligent behavior such as human performance. Significant methodological challenges arise, however, when trying to extend approaches used to compare model and human performance from tightly controlled laboratory tasks to complex tasks involving more open-ended behavior. This paper describes a model comparison challenge built around a dynamic control task, the Dynamic Stocks and Flows. We present and discuss distinct approaches to evaluating performance and comparing models. Lessons drawn from this challenge are discussed in light of the challenge of using cognitive architectures to achieve Artificial General Intelligence.

  9. A comparison of administrative and physiologic predictive models in determining risk adjusted mortality rates in critically ill patients.

    PubMed

    Enfield, Kyle B; Schafer, Katherine; Zlupko, Mike; Herasevich, Vitaly; Novicoff, Wendy M; Gajic, Ognjen; Hoke, Tracey R; Truwit, Jonathon D

    2012-01-01

    Hospitals are increasingly compared based on clinical outcomes adjusted for severity of illness. Multiple methods exist to adjust for differences between patients. The challenge for consumers of this information, both the public and healthcare providers, is interpreting differences in risk adjustment models, particularly when models differ in their use of administrative and physiologic data. We set out to examine how administrative and physiologic models compare to each other when applied to critically ill patients. We prospectively abstracted variables for a physiologic and an administrative model of mortality from two intensive care units in the United States. Predicted mortality was compared through Pearson's product-moment coefficient and Bland-Altman analysis. A subgroup of patients admitted directly from the emergency department was analyzed to remove potential confounding from changes in condition prior to ICU admission. We included 556 patients from two academic medical centers in this analysis. The administrative and physiologic models' predicted mortalities for the combined cohort were 15.3% (95% CI 13.7%, 16.8%) and 24.6% (95% CI 22.7%, 26.5%), respectively (t-test p-value<0.001). The r(2) for these models was 0.297. The Bland-Altman plot suggests that at low predicted mortality there was good agreement; however, as mortality increased the models diverged. Similar results were found when analyzing a subgroup of patients admitted directly from the emergency department. When comparing the two hospitals, there was a statistical difference when using the administrative model but not the physiologic model. Unexplained mortality, defined as patients who died but had a predicted mortality of less than 10%, was a rare event by either model. 
In conclusion, while it has been shown that administrative models provide estimates of mortality similar to physiologic models in non-critically ill patients with pneumonia, our results suggest this finding cannot be applied globally to patients admitted to intensive care units. As patients and providers increasingly use publicly reported information in making health care decisions and referrals, it is critical that the provided information be understood. Our results suggest that severity of illness may influence the mortality index in administrative models. We suggest that when interpreting "report cards" or metrics, health care providers determine how the risk adjustment was made and how it compares to other risk adjustment models.
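The Bland-Altman analysis used in this study summarizes agreement between two predictors by the mean difference (bias) and its 95% limits of agreement. A minimal sketch; the paired predicted mortalities below are hypothetical, not the study's data:

```python
from statistics import mean, stdev

def bland_altman(pred_a, pred_b):
    """Bland-Altman agreement statistics for paired predictions:
    mean difference (bias) and 95% limits of agreement."""
    diffs = [a - b for a, b in zip(pred_a, pred_b)]
    bias = mean(diffs)
    s = stdev(diffs)
    return bias, (bias - 1.96 * s, bias + 1.96 * s)

# hypothetical predicted mortalities (%) from two models
admin = [5.0, 10.0, 20.0, 35.0, 50.0]
physio = [6.0, 12.0, 27.0, 45.0, 68.0]
bias, (lo, hi) = bland_altman(admin, physio)
print(round(bias, 2), round(lo, 2), round(hi, 2))
```

In the full analysis each difference would also be plotted against the pair's mean, which is what reveals the divergence at high predicted mortality described above.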

  10. Source Term Model for Steady Micro Jets in a Navier-Stokes Computer Code

    NASA Technical Reports Server (NTRS)

    Waithe, Kenrick A.

    2005-01-01

    A source term model for steady micro jets was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the mass flow and momentum created by a steady blowing micro jet. The model is obtained by adding the momentum and mass flow created by the jet to the Navier-Stokes equations. The model was tested by comparison with data from numerical simulations of a single, steady micro jet on a flat plate in two and three dimensions. The source term model predicted the velocity distribution well compared to the two-dimensional flat-plate simulation, which used a steady mass flow boundary condition to simulate a steady micro jet. The model was also compared to two three-dimensional flat plate cases using a steady mass flow boundary condition to simulate a steady micro jet. The three-dimensional comparison included a case with a grid generated to capture the circular shape of the jet and a case without a grid generated for the micro jet. The case without the jet grid mimics the application of the source term. The source term model compared well with both of the three-dimensional cases. Comparisons of velocity distribution were made before and after the jet, and Mach and vorticity contours were examined. The source term model allows a researcher to quickly investigate different locations of individual or several steady micro jets. The researcher is able to conduct a preliminary investigation with minimal grid generation and computational time.

  11. Using High-Resolution Satellite Observations for Evaluation of Cloud and Precipitation Statistics from Cloud-Resolving Model Simulations. Part I: South China Sea Monsoon Experiment

    NASA Astrophysics Data System (ADS)

    Zhou, Y.; Hou, A.; Lau, W. K.; Shie, C.; Tao, W.; Lin, X.; Chou, M.; Olson, W. S.; Grecu, M.

    2006-05-01

    The cloud and precipitation statistics simulated by the 3D Goddard Cumulus Ensemble (GCE) model during the South China Sea Monsoon Experiment (SCSMEX) are compared with Tropical Rainfall Measuring Mission (TRMM) TMI and PR rainfall measurements and with Clouds and the Earth's Radiant Energy System (CERES) single scanner footprint (SSF) radiation and cloud retrievals. It is found that the GCE model is capable of simulating major convective system development and reproducing the total surface rainfall amount as compared with rainfall estimated from the soundings. Mesoscale organization is adequately simulated except when environmental wind shear is very weak. The partitions between convective and stratiform rain are also close to the TMI and PR classifications. However, the model-simulated rain spectrum is quite different from either TMI or PR measurements. The model produces more heavy rain and more light rain (less than 0.1 mm/hr) than the observations. The model also produces heavier vertical hydrometeor profiles of rain and graupel when compared with TMI retrievals and PR radar reflectivity. Comparison of GCE-simulated OLR and cloud properties with CERES measurements shows that the model has a much larger domain-averaged OLR, due to a smaller total cloud fraction, and a much more skewed distribution of OLR and cloud top than the CERES observations, indicating that the model's cloud field is not widespread, consistent with the model's precipitation activity. These results will be used as guidance for improving the model's microphysics.

  12. A hybrid double-observer sightability model for aerial surveys

    USGS Publications Warehouse

    Griffin, Paul C.; Lubow, Bruce C.; Jenkins, Kurt J.; Vales, David J.; Moeller, Barbara J.; Reid, Mason; Happe, Patricia J.; Mccorquodale, Scott M.; Tirhi, Michelle J.; Schaberi, Jim P.; Beirne, Katherine

    2013-01-01

    Raw counts from aerial surveys make no correction for undetected animals and provide no estimate of precision with which to judge the utility of the counts. Sightability modeling and double-observer (DO) modeling are 2 commonly used approaches to account for detection bias and to estimate precision in aerial surveys. We developed a hybrid DO sightability model (model MH) that uses the strength of each approach to overcome the weakness in the other, for aerial surveys of elk (Cervus elaphus). The hybrid approach uses detection patterns of 2 independent observer pairs in a helicopter and telemetry-based detections of collared elk groups. Candidate MH models reflected hypotheses about effects of recorded covariates and unmodeled heterogeneity on the separate front-seat observer pair and back-seat observer pair detection probabilities. Group size and concealing vegetation cover strongly influenced detection probabilities. The pilot's previous experience participating in aerial surveys influenced detection by the front pair of observers if the elk group was on the pilot's side of the helicopter flight path. In 9 surveys in Mount Rainier National Park, the raw number of elk counted was approximately 80–93% of the abundance estimated by model MH. Uncorrected ratios of bulls per 100 cows generally were low compared to estimates adjusted for detection bias, but ratios of calves per 100 cows were comparable whether based on raw survey counts or adjusted estimates. The hybrid method was an improvement over commonly used alternatives, with improved precision compared to sightability modeling and reduced bias compared to DO modeling.
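
The core arithmetic of a double-observer design can be sketched as follows: each observer pair's detection probability is modeled as a logistic function of covariates such as group size and vegetation cover, and a group is missed only if both independent pairs miss it. The covariate values and coefficients below are hypothetical, not estimates from the paper.

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def pair_detection(beta0, beta_size, beta_cover, group_size, cover):
    """Detection probability of one observer pair (logistic in covariates)."""
    return logistic(beta0 + beta_size * group_size + beta_cover * cover)

# Hypothetical coefficients: larger groups are easier to see, cover hides them.
p_front = pair_detection(-0.5, 0.15, -2.0, group_size=12, cover=0.3)
p_back  = pair_detection(-0.8, 0.15, -2.0, group_size=12, cover=0.3)

# With two independent pairs, a group is missed only when both miss it:
p_either = 1.0 - (1.0 - p_front) * (1.0 - p_back)
print(f"front={p_front:.3f} back={p_back:.3f} combined={p_either:.3f}")
```

Dividing raw counts by such estimated detection probabilities is what corrects the bias that uncorrected counts (here, roughly 80-93% of estimated abundance) leave in place.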

  13. [Effect of Cordyceps sinensis powder on renal oxidative stress and mitochondria functions in 5/6 nephrectomized rats].

    PubMed

    Zhang, Ming-hui; Pan, Ming-ming; Ni, Hai-feng; Chen, Jun-feng; Xu, Mn; Gong, Yu-xiang; Chen, Ping-sheng; Liu, Bi-cheng

    2015-04-01

    To observe the effect of Cordyceps sinensis (CS) powder on renal oxidative stress and mitochondrial function in 5/6 nephrectomized rats, and to preliminarily explore its possible mechanisms. Totally 30 male Sprague-Dawley rats were divided into the sham-operation group, the model group, and the treatment group by random digit table, 10 in each group. A chronic kidney disease (CKD) rat model was prepared by one-step 5/6 nephrectomy. Rats in the treatment group were intragastrically administered CS powder solution at a daily dose of 2 g/kg, once per day. An equal volume of double distilled water was intragastrically administered to rats in the sham-operation group and the model group. All medication lasted for 12 weeks. The general condition of the rats, their body weight, blood pressure, 24 h proteinuria, urinary N-acetyl-β-D-glucosaminidase (NAG), serum creatinine (SCr), and blood urea nitrogen (BUN) were assessed before surgery and at weeks 2, 4, 6, 8, 10, and 12 after surgery. Pathological changes of renal tissues were observed under light microscopy. Morphological changes of mitochondria in renal tubular epithelial cells were observed under transmission electron microscopy. Activities of the antioxidant enzymes glutathione peroxidase (GSH-Px) and manganese superoxide dismutase (MnSOD), and the content of malondialdehyde (MDA), were detected in fresh renal tissue homogenate. Mitochondria of renal tissues were extracted to detect mitochondrial membrane potential and changes in reactive oxygen species (ROS). Expressions of cytochrome C (Cyto-C) and prohibitin in both mitochondria and cytoplasm of the renal cortex were also measured by Western blot. (1) Compared with the sham-operation group, body weight was significantly decreased at week 2 (P < 0.01) and blood pressure increased at week 4 (P < 0.05) in the model group. Compared with the model group, body weight was significantly increased at week 12 (P < 0.01) and blood pressure decreased at week 8 (P < 0.01) in the treatment group. (2) Compared with the sham-operation group, 24 h proteinuria, urinary NAG, blood SCr, and BUN significantly increased in the model group (all P < 0.01). Compared with the model group, blood and urinary biochemical indices all significantly decreased in the treatment group (all P < 0.01). (3) Results of pathological renal scoring: the glomerular sclerosis index, tubulointerstitial fibrosis score, and degree of tubulointerstitial inflammatory infiltration were all obviously higher in the model group than in the sham-operation group (all P < 0.01). All the aforesaid indices were obviously improved in the treatment group compared with the model group (all P < 0.01). (4) Compared with the sham-operation group, activities of MnSOD and GSH-Px were significantly reduced, while MDA content obviously increased, in the renal cortex of the model group (all P < 0.01). Compared with the model group, activities of MnSOD and GSH-Px obviously increased (P < 0.05, P < 0.01) and MDA content obviously decreased in the renal cortex of the treatment group (P < 0.01). (5) Compared with the sham-operation group, the mitochondrial membrane potential significantly decreased and ROS levels significantly increased in the model group (all P < 0.01). Compared with the model group, the mitochondrial transmembrane potential increased in the treatment group, and the tendency toward increased ROS production was inhibited (both P < 0.01). (6) Western blot showed that, compared with the sham-operation group, expression levels of mitochondrial Cyto-C and prohibitin were significantly reduced in the renal cortex (P < 0.01) but significantly elevated in the cytoplasm of the model group (P < 0.01). Compared with the model group, each index was obviously improved in the treatment group with statistical significance (P < 0.05, P < 0.01). CS powder had a renal protective effect, and its mechanism might partially depend on inhibition of oxidative stress and protection of mitochondria.

  14. An improved approach to infer protein-protein interaction based on a hierarchical vector space model.

    PubMed

    Zhang, Jiongmin; Jia, Ke; Jia, Jinmeng; Qian, Ying

    2018-04-27

    Comparing and classifying the functions of gene products is important in today's biomedical research. Semantic similarity derived from Gene Ontology (GO) annotation has been regarded as one of the most widely used indicators of protein interaction. Among the various approaches proposed, those based on the vector space model are relatively simple, but their effectiveness is far from satisfactory. We propose a Hierarchical Vector Space Model (HVSM) for computing semantic similarity between different genes or their products, which enhances the basic vector space model by introducing the relations between GO terms. Besides the directly annotated terms, HVSM also takes their ancestors and descendants related by "is_a" and "part_of" relations into account. Moreover, HVSM introduces the concept of a Certainty Factor to calibrate the semantic similarity based on the number of terms annotated to genes. To assess the performance of our method, we applied HVSM to Homo sapiens and Saccharomyces cerevisiae protein-protein interaction datasets. Compared with TCSS, Resnik, and other classic similarity measures, HVSM achieved significant improvement in distinguishing positive from negative protein interactions. We also tested its correlation with sequence, EC, and Pfam similarity using the online tool CESSM. By AUC score, HVSM showed an improvement of up to 4% over TCSS, 8% over IntelliGO, 12% over the basic VSM, 6% over Resnik, 8% over Lin, 11% over Jiang, 8% over Schlicker, and 11% over SimGIC. The CESSM test showed HVSM was comparable to SimGIC and superior to TCSS and all other similarity measures in CESSM. Supplementary information and the software are available at https://github.com/kejia1215/HVSM.
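
A loose sketch of the hierarchical idea: under a plain 0/1 vector space model, two genes annotated to sibling GO terms share no dimensions and score zero, but once ancestor terms are folded into each gene's vector (as HVSM does via "is_a"/"part_of"), the shared parent produces a nonzero cosine similarity. The tiny ontology and gene annotations here are invented for illustration; this omits HVSM's descendant terms and Certainty Factor.

```python
import math

# Toy "is_a" relation: term -> parent (invented for illustration).
parent = {"GO:3": "GO:1", "GO:4": "GO:1", "GO:5": "GO:2"}

def with_ancestors(terms):
    """Expand a set of directly annotated terms with all of their ancestors."""
    out = set()
    for t in terms:
        while t is not None:
            out.add(t)
            t = parent.get(t)
    return out

def cosine(a, b):
    """Cosine similarity of two term sets under a 0/1 vector encoding."""
    return len(a & b) / math.sqrt(len(a) * len(b)) if a and b else 0.0

gene_a = {"GO:3"}   # annotated to one child of GO:1
gene_b = {"GO:4"}   # annotated to a sibling term under GO:1

plain = cosine(gene_a, gene_b)                         # flat vectors: 0.0
hier = cosine(with_ancestors(gene_a), with_ancestors(gene_b))
print(plain, hier)  # the hierarchy reveals similarity the flat vectors miss
```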

  15. Modelling high data rate communication network access protocol

    NASA Technical Reports Server (NTRS)

    Khanna, S.; Foudriat, E. C.; Paterra, Frank; Maly, Kurt J.; Overstreet, C. Michael

    1990-01-01

    Modeling of high data rate communication systems differs from that of low data rate systems. Three simulations were built during the development phase of Carrier Sensed Multiple Access/Ring Network (CSMA/RN) modeling. The first was a model in SIMSCRIPT based upon the determination and processing of each event at each node. The second simulation was developed in C based upon isolating the distinct objects that can be identified: the ring, the message, the node, and the set of critical events. The third model further distilled the basic network functionality by creating a single object, the node, which includes the set of critical events that occur at the node; the ring structure is implicit in the node structure. This model was also built in C. Each model is discussed and their features compared. It should be noted that the languages were selected mainly because of the developers' past familiarity with them. Further, the models were not built with the intent of comparing structure or language; rather, because the problem was complex and initial results contained obvious errors, alternative models were built to isolate, determine, and correct programming and modeling errors. The CSMA/RN protocol is discussed in sufficient detail to understand the modeling complexities. Each model is described along with its features and problems. The models are compared, and concluding observations and remarks are presented.

  16. Effect of heating rate and kinetic model selection on activation energy of nonisothermal crystallization of amorphous felodipine.

    PubMed

    Chattoraj, Sayantan; Bhugra, Chandan; Li, Zheng Jane; Sun, Changquan Calvin

    2014-12-01

    The nonisothermal crystallization kinetics of amorphous materials is routinely analyzed by statistically fitting crystallization data to kinetic models. In this work, we systematically evaluate how model-dependent crystallization kinetics is impacted by variations in the heating rate and by the selection of the kinetic model, two key factors that can lead to significant differences in the crystallization activation energy (Ea) of an amorphous material. Using amorphous felodipine, we show that Ea decreases with increasing heating rate, irrespective of the kinetic model evaluated in this work. The model that best describes the crystallization phenomenon cannot be identified readily through the statistical fitting approach because several kinetic models yield comparable R^2. Here, we propose an alternate paired model-fitting model-free (PMFMF) approach for identifying the most suitable kinetic model, in which Ea obtained from model-dependent kinetics is compared with that obtained from model-free kinetics. The most suitable kinetic model is identified as the one that yields Ea values comparable with the model-free kinetics. Through this PMFMF approach, nucleation and growth is identified as the main mechanism controlling the crystallization kinetics of felodipine. Using this PMFMF approach, we further demonstrate that the crystallization mechanism from the amorphous phase varies with heating rate. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
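
The model-free side of such a pairing can be sketched with a Kissinger-type analysis: across several heating rates β, ln(β/Tp²) is linear in 1/Tp with slope -Ea/R, so Ea falls out of a linear regression without choosing any kinetic model. The data below are synthetic, generated from an assumed Ea of 100 kJ/mol; they are not felodipine measurements.

```python
import numpy as np

R = 8.314        # gas constant, J/(mol K)
Ea_true = 100e3  # assumed activation energy, J/mol
C = 25.0         # arbitrary Kissinger intercept

# Peak temperatures (K) and the heating rates consistent with Ea_true:
Tp = np.array([380.0, 390.0, 400.0, 410.0])
beta = Tp**2 * np.exp(C - Ea_true / (R * Tp))

# Kissinger plot: regress ln(beta/Tp^2) against 1/Tp; slope = -Ea/R
slope, intercept = np.polyfit(1.0 / Tp, np.log(beta / Tp**2), 1)
Ea_est = -slope * R
print(f"estimated Ea = {Ea_est / 1e3:.1f} kJ/mol")  # recovers ~100 kJ/mol
```

In the PMFMF spirit, an Ea from a fitted kinetic model would then be checked against this model-free value.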

  17. A comparative study of generalized linear mixed modelling and artificial neural network approach for the joint modelling of survival and incidence of Dengue patients in Sri Lanka

    NASA Astrophysics Data System (ADS)

    Hapugoda, J. C.; Sooriyarachchi, M. R.

    2017-09-01

    The survival time of patients with a disease and the incidence of that disease (a count) are frequently observed in medical studies with data of a clustered nature. In many cases, the survival times and the count are correlated: for example, diseases that occur rarely could have shorter survival times, or vice versa. Because of this, jointly modelling these two variables can provide better results than modelling them separately. The authors previously proposed a methodology using Generalized Linear Mixed Models (GLMM), joining the Discrete Time Hazard model with the Poisson Regression model to jointly model survival and count. As the Artificial Neural Network (ANN) has become a powerful computational tool for modelling complex non-linear systems, it was proposed to develop a new joint model of survival and count of Dengue patients in Sri Lanka using that approach. Thus, the objective of this study is to develop a model using the ANN approach and compare the results with the previously developed GLMM model. As the response variables are continuous in nature, the Generalized Regression Neural Network (GRNN) approach was adopted to model the data. To compare model fit, measures such as root mean square error (RMSE), absolute mean error (AME), and correlation coefficient (R) were used. These measures indicate that the GRNN model fits the data better than the GLMM model.
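
A GRNN is essentially a Gaussian-kernel-weighted average of training targets (the Nadaraya-Watson form), which is why it needs no iterative training. A minimal sketch on synthetic data, with a hand-picked smoothing bandwidth sigma (the Dengue data and the paper's model structure are not reproduced here):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """Predict each query point as a Gaussian-kernel-weighted mean of targets."""
    preds = []
    for x in X_query:
        d2 = np.sum((X_train - x) ** 2, axis=1)      # squared distances
        w = np.exp(-d2 / (2.0 * sigma**2))           # pattern-layer weights
        preds.append(np.dot(w, y_train) / np.sum(w)) # summation/output layer
    return np.array(preds)

rng = np.random.default_rng(0)
X = rng.uniform(0, 2 * np.pi, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, size=200)   # noisy nonlinear signal

X_test = np.linspace(0.5, 5.5, 20).reshape(-1, 1)
y_hat = grnn_predict(X, y, X_test, sigma=0.3)

rmse = np.sqrt(np.mean((y_hat - np.sin(X_test[:, 0])) ** 2))
print(f"RMSE vs true curve: {rmse:.3f}")
```

RMSE computed this way is one of the fit measures the study uses to compare GRNN against GLMM.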

  18. The Bruton Tyrosine Kinase (BTK) Inhibitor Acalabrutinib Demonstrates Potent On-Target Effects and Efficacy in Two Mouse Models of Chronic Lymphocytic Leukemia.

    PubMed

    Herman, Sarah E M; Montraveta, Arnau; Niemann, Carsten U; Mora-Jensen, Helena; Gulrajani, Michael; Krantz, Fanny; Mantel, Rose; Smith, Lisa L; McClanahan, Fabienne; Harrington, Bonnie K; Colomer, Dolors; Covey, Todd; Byrd, John C; Izumi, Raquel; Kaptein, Allard; Ulrich, Roger; Johnson, Amy J; Lannutti, Brian J; Wiestner, Adrian; Woyach, Jennifer A

    2017-06-01

    Purpose: Acalabrutinib (ACP-196) is a novel, potent, and highly selective Bruton tyrosine kinase (BTK) inhibitor, which binds covalently to Cys481 in the ATP-binding pocket of BTK. We sought to evaluate the antitumor effects of acalabrutinib treatment in two established mouse models of chronic lymphocytic leukemia (CLL). Experimental Design: Two distinct mouse models were used, the TCL1 adoptive transfer model where leukemic cells from Eμ-TCL1 transgenic mice are transplanted into C57BL/6 mice, and the human NSG primary CLL xenograft model. Mice received either vehicle or acalabrutinib formulated into the drinking water. Results: Utilizing biochemical assays, we demonstrate that acalabrutinib is a highly selective BTK inhibitor as compared with ibrutinib. In the human CLL NSG xenograft model, treatment with acalabrutinib demonstrated on-target effects, including decreased phosphorylation of PLCγ2, ERK, and significant inhibition of CLL cell proliferation. Furthermore, tumor burden in the spleen of the mice treated with acalabrutinib was significantly decreased compared with vehicle-treated mice. Similarly, in the TCL1 adoptive transfer model, decreased phosphorylation of BTK, PLCγ2, and S6 was observed. Most notably, treatment with acalabrutinib resulted in a significant increase in survival compared with mice receiving vehicle. Conclusions: Treatment with acalabrutinib potently inhibits BTK in vivo, leading to on-target decreases in the activation of key signaling molecules (including BTK, PLCγ2, S6, and ERK). In two complementary mouse models of CLL, acalabrutinib significantly reduced tumor burden and increased survival compared with vehicle treatment. Overall, acalabrutinib showed increased BTK selectivity compared with ibrutinib while demonstrating significant antitumor efficacy in vivo on par with ibrutinib. Clin Cancer Res; 23(11); 2831-41. ©2016 American Association for Cancer Research (AACR).

  19. The Bruton’s tyrosine kinase (BTK) inhibitor acalabrutinib demonstrates potent on-target effects and efficacy in two mouse models of chronic lymphocytic leukemia

    PubMed Central

    Herman, Sarah E. M.; Montraveta, Arnau; Niemann, Carsten U.; Mora-Jensen, Helena; Gulrajani, Michael; Krantz, Fanny; Mantel, Rose; Smith, Lisa L.; McClanahan, Fabienne; Harrington, Bonnie K.; Colomer, Dolors; Covey, Todd; Byrd, John C.; Izumi, Raquel; Kaptein, Allard; Ulrich, Roger; Johnson, Amy J.; Lannutti, Brian J.; Wiestner, Adrian; Woyach, Jennifer A.

    2017-01-01

    Purpose Acalabrutinib (ACP-196) is a novel, potent, and highly selective BTK inhibitor, which binds covalently to Cys481 in the ATP-binding pocket of BTK. We sought to evaluate the anti-tumor effects of acalabrutinib treatment in two established mouse models of chronic lymphocytic leukemia (CLL). Experimental Design Two distinct mouse models were used, the TCL1 adoptive transfer model where leukemic cells from Eμ-TCL1 transgenic mice are transplanted into C57BL/6 mice, and the human NSG primary CLL xenograft model. Mice received either vehicle or acalabrutinib formulated into the drinking water. Results Utilizing biochemical assays we demonstrate that acalabrutinib is a highly selective BTK inhibitor as compared to ibrutinib. In the human CLL NSG xenograft model, treatment with acalabrutinib demonstrated on-target effects including decreased phosphorylation of PLCγ2, ERK and significant inhibition of CLL cell proliferation. Further, tumor burden in the spleen of the mice treated with acalabrutinib was significantly decreased compared to vehicle treated mice. Similarly, in the TCL1 adoptive transfer model, decreased phosphorylation of BTK, PLCγ2 and S6 was observed. Most notably, treatment with acalabrutinib resulted in a significant increase in survival compared to mice receiving vehicle. Conclusions Treatment with acalabrutinib potently inhibits BTK in vivo, leading to on-target decreases in the activation of key signaling molecules (including BTK, PLCγ2, S6 and ERK). In two complementary mouse models of CLL acalabrutinib significantly reduced tumor burden and increased survival compared to vehicle treatment. Overall, acalabrutinib showed increased BTK selectivity compared to ibrutinib while demonstrating significant anti-tumor efficacy in vivo on par with ibrutinib. PMID:27903679

  20. MultiMetEval: Comparative and Multi-Objective Analysis of Genome-Scale Metabolic Models

    PubMed Central

    Gevorgyan, Albert; Kierzek, Andrzej M.; Breitling, Rainer; Takano, Eriko

    2012-01-01

    Comparative metabolic modelling is emerging as a novel field, supported by the development of reliable and standardized approaches for constructing genome-scale metabolic models in high throughput. New software solutions are needed to allow efficient comparative analysis of multiple models in the context of multiple cellular objectives. Here, we present the user-friendly software framework Multi-Metabolic Evaluator (MultiMetEval), built upon SurreyFBA, which allows the user to compose collections of metabolic models that together can be subjected to flux balance analysis. Additionally, MultiMetEval implements functionalities for multi-objective analysis by calculating the Pareto front between two cellular objectives. Using a previously generated dataset of 38 actinobacterial genome-scale metabolic models, we show how these approaches can lead to exciting novel insights. Firstly, after incorporating several pathways for the biosynthesis of natural products into each of these models, comparative flux balance analysis predicted that species like Streptomyces that harbour the highest diversity of secondary metabolite biosynthetic gene clusters in their genomes do not necessarily have the metabolic network topology most suitable for compound overproduction. Secondly, multi-objective analysis of biomass production and natural product biosynthesis in these actinobacteria shows that the well-studied occurrence of discrete metabolic switches during the change of cellular objectives is inherent to their metabolic network architecture. Comparative and multi-objective modelling can lead to insights that could not be obtained by normal flux balance analyses. MultiMetEval provides a powerful platform that makes these analyses straightforward for biologists. Sources and binaries of MultiMetEval are freely available from https://github.com/PiotrZakrzewski/MetEval/downloads. PMID:23272111
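
The flux balance analysis that MultiMetEval runs over many models at once reduces, per model and objective, to a linear program: maximize an objective flux subject to steady-state mass balance S·v = 0 and flux bounds. A toy three-reaction sketch, with an invented network and scipy's `linprog` standing in for a dedicated FBA solver:

```python
from scipy.optimize import linprog

# Reactions: v1 = uptake of A, v2: A -> B, v3: B -> biomass.
# Steady state S v = 0 for metabolites A and B:
A_eq = [[1, -1, 0],   # A: produced by v1, consumed by v2
        [0, 1, -1]]   # B: produced by v2, consumed by v3
b_eq = [0, 0]

bounds = [(0, 10), (0, None), (0, None)]  # uptake capped at 10 units

# Maximize biomass flux v3 (linprog minimizes, so negate the objective):
res = linprog(c=[0, 0, -1], A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x)  # all flux is pushed through to biomass, limited by uptake
```

A multi-objective (Pareto) analysis of the kind described above would repeat such solves while trading the biomass objective off against a product-synthesis flux.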

  1. Development and validation of a two-dimensional fast-response flood estimation model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Judi, David R; Mcpherson, Timothy N; Burian, Steven J

    2009-01-01

    A finite difference formulation of the shallow water equations using an upwind differencing method was developed, maintaining computational efficiency and accuracy such that it can be used as a fast-response flood estimation tool. The model was validated using both laboratory controlled experiments and an actual dam breach. Through the laboratory experiments, the model was shown to give good estimates of depth and velocity when compared to the measured data, as well as when compared to a more complex two-dimensional model. Additionally, the model was compared to high water mark data obtained from the failure of the Taum Sauk dam. The simulated inundation extent agreed well with the observed extent, with the most notable differences resulting from the inability to model sediment transport. The results of these validation studies show that a relatively simple numerical scheme used to solve the complete shallow water equations can accurately estimate flood inundation. Future work will focus on further reducing the computation time needed to provide flood inundation estimates for fast-response analyses. This will be accomplished through the efficient use of multi-core, multi-processor computers coupled with an efficient domain-tracking algorithm, as well as an understanding of the impacts of grid resolution on model results.
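
The upwind differencing idea at the core of such a scheme can be sketched on scalar linear advection (u_t + a·u_x = 0) rather than the full shallow water system: each cell is updated using the spatial difference taken from the side the flow comes from, which keeps the explicit scheme stable under the CFL condition.

```python
import numpy as np

a = 1.0                    # wave speed (positive => upwind side is the left)
dx, dt = 0.01, 0.005       # CFL number a*dt/dx = 0.5 (stable)
x = np.arange(0.0, 1.0, dx)
u = np.where((x >= 0.1) & (x < 0.3), 1.0, 0.0)  # square pulse
mass0 = np.sum(u) * dx

for _ in range(80):        # advance to t = 0.4
    # first-order upwind update: backward difference for a > 0
    u[1:] = u[1:] - a * dt / dx * (u[1:] - u[:-1])
    u[0] = 0.0             # inflow boundary

# The pulse has advected right by a*t = 0.4 (with some numerical smearing),
# and mass is conserved while the pulse stays inside the domain.
print(f"pulse peak near x = {x[np.argmax(u)]:.2f}")
```

The flood model applies the same upwinding to the coupled shallow water equations in two dimensions; this scalar case only illustrates the differencing choice.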

  2. Evaluating vaccination strategies to control foot-and-mouth disease: a model comparison study.

    PubMed

    Roche, S E; Garner, M G; Sanson, R L; Cook, C; Birch, C; Backer, J A; Dube, C; Patyk, K A; Stevenson, M A; Yu, Z D; Rawdon, T G; Gauntlett, F

    2015-04-01

    Simulation models can offer valuable insights into the effectiveness of different control strategies and act as important decision support tools when comparing and evaluating outbreak scenarios and control strategies. An international modelling study was performed to compare a range of vaccination strategies in the control of foot-and-mouth disease (FMD). Modelling groups from five countries (Australia, New Zealand, USA, UK, The Netherlands) participated in the study. Vaccination is increasingly being recognized as a potentially important tool in the control of FMD, although there is considerable uncertainty as to how and when it should be used. We sought to compare model outputs and assess the effectiveness of different vaccination strategies in the control of FMD. Using a standardized outbreak scenario based on data from an FMD exercise in the UK in 2010, the study showed general agreement between respective models in terms of the effectiveness of vaccination. Under the scenario assumptions, all models demonstrated that vaccination with 'stamping-out' of infected premises led to a significant reduction in predicted epidemic size and duration compared to the 'stamping-out' strategy alone. For all models there were advantages in vaccinating cattle-only rather than all species, using 3-km vaccination rings immediately around infected premises, and starting vaccination earlier in the control programme. This study has shown that certain vaccination strategies are robust even to substantial differences in model configurations. This result should increase end-user confidence in conclusions drawn from model outputs. These results can be used to support and develop effective policies for FMD control.

  3. TESTING STELLAR POPULATION SYNTHESIS MODELS WITH SLOAN DIGITAL SKY SURVEY COLORS OF M31's GLOBULAR CLUSTERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peacock, Mark B.; Zepf, Stephen E.; Maccarone, Thomas J.

    2011-08-10

    Accurate stellar population synthesis models are vital in understanding the properties and formation histories of galaxies. In order to calibrate and test the reliability of these models, they are often compared with observations of star clusters. However, relatively little work has compared these models in the ugriz filters, despite the recent widespread use of this filter set. In this paper, we compare the integrated colors of globular clusters in the Sloan Digital Sky Survey (SDSS) with those predicted from commonly used simple stellar population (SSP) models. The colors are based on SDSS observations of M31's clusters and provide the largest population of star clusters with accurate photometry available from the survey. As such, it is a unique sample with which to compare SSP models with SDSS observations. From this work, we identify a significant offset between the SSP models and the clusters' g - r colors, with the models predicting colors which are too red by g - r ≈ 0.1. This finding is consistent with previous observations of luminous red galaxies in the SDSS, which show a similar discrepancy. The identification of this offset in globular clusters suggests that it is very unlikely to be due to a minority population of young stars. The recently updated SSP model of Maraston and Strömbäck better represents the observed g - r colors. This model is based on the empirical MILES stellar library, rather than theoretical libraries, suggesting an explanation for the g - r discrepancy.

  4. Response of automated tow placed laminates to stress concentrations

    NASA Technical Reports Server (NTRS)

    Cairns, Douglas S.; Ilcewicz, Larry B.; Walker, Tom

    1993-01-01

    In this study, the response of laminates with stress concentrations is explored. Automated Tow Placed (ATP, also known as Fiber Placement) laminates are compared to conventional tape layup manufacturing. Previous tensile fracture tests on fiber placed laminates show an improvement of over 20 percent in the tensile fracture of large notches compared to tape layup laminates. A hierarchical modeling scheme is presented. In this scheme, a global model is developed for laminates with notches. A local model is developed to study the influence of inhomogeneities at the notch tip, which are a consequence of the fiber placement manufacturing technique. In addition, a stacked membrane model was developed to study delaminations and splitting on a ply-by-ply basis. The results indicate that some benefit with respect to tensile fracture (up to 11 percent) can be gained from inhomogeneity alone, but that the most improvement is obtained with splitting and delaminations, which are more severe in the case of fiber placement compared to tape layup. Improvements of up to 36 percent were found from the model for fiber placed laminates with damage at the notch tip compared to conventional tape layup.

  5. Implicit Value Updating Explains Transitive Inference Performance: The Betasort Model

    PubMed Central

    Jensen, Greg; Muñoz, Fabian; Alkan, Yelda; Ferrera, Vincent P.; Terrace, Herbert S.

    2015-01-01

    Transitive inference (the ability to infer that B > D given that B > C and C > D) is a widespread characteristic of serial learning, observed in dozens of species. Despite these robust behavioral effects, reinforcement learning models reliant on reward prediction error or associative strength routinely fail to perform these inferences. We propose an algorithm called betasort, inspired by cognitive processes, which performs transitive inference at low computational cost. This is accomplished by (1) representing stimulus positions along a unit span using beta distributions, (2) treating positive and negative feedback asymmetrically, and (3) updating the position of every stimulus during every trial, whether that stimulus was visible or not. Performance was compared for rhesus macaques, humans, and the betasort algorithm, as well as Q-learning, an established reward-prediction error (RPE) model. Of these, only Q-learning failed to respond above chance during critical test trials. Betasort’s success (when compared to RPE models) and its computational efficiency (when compared to full Markov decision process implementations) suggest that the study of reinforcement learning in organisms will be best served by a feature-driven approach to comparing formal models. PMID:26407227
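
A loose sketch of ingredients (1) and (3) above: each stimulus carries beta-distribution parameters whose mean is its inferred position, and stimuli not shown on a trial still shift if their current estimate places them clearly above or below the trained pair. This simplification is not the authors' exact betasort algorithm (it assumes correct responding on adjacent pairs and omits the asymmetric-feedback and relaxation machinery), but it illustrates how implicit updating yields transitive inference.

```python
stimuli = ["A", "B", "C", "D", "E"]      # true order A > B > C > D > E
U = {s: 1.0 for s in stimuli}            # "upper" beta parameter
L = {s: 1.0 for s in stimuli}            # "lower" beta parameter

def estimate(s):
    """Mean of Beta(U, L): the stimulus's inferred position on the unit span."""
    return U[s] / (U[s] + L[s])

pairs = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")]  # adjacent training

for _ in range(20):                      # 20 rounds over the adjacent pairs
    for hi, lo in pairs:
        est = {s: estimate(s) for s in stimuli}  # snapshot before updating
        U[hi] += 1.0                     # winner shifts up
        L[lo] += 1.0                     # loser shifts down
        # implicit update of the stimuli that were NOT shown this trial:
        for s in stimuli:
            if s in (hi, lo):
                continue
            if est[s] > max(est[hi], est[lo]):
                U[s] += 1.0              # consolidated above the pair
            elif est[s] < min(est[hi], est[lo]):
                L[s] += 1.0              # consolidated below the pair

# B and D were never presented together, yet their estimates now differ:
print({s: round(estimate(s), 2) for s in stimuli})
```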

  6. Implicit Value Updating Explains Transitive Inference Performance: The Betasort Model.

    PubMed

    Jensen, Greg; Muñoz, Fabian; Alkan, Yelda; Ferrera, Vincent P; Terrace, Herbert S

    2015-01-01

    Transitive inference (the ability to infer that B > D given that B > C and C > D) is a widespread characteristic of serial learning, observed in dozens of species. Despite these robust behavioral effects, reinforcement learning models reliant on reward prediction error or associative strength routinely fail to perform these inferences. We propose an algorithm called betasort, inspired by cognitive processes, which performs transitive inference at low computational cost. This is accomplished by (1) representing stimulus positions along a unit span using beta distributions, (2) treating positive and negative feedback asymmetrically, and (3) updating the position of every stimulus during every trial, whether that stimulus was visible or not. Performance was compared for rhesus macaques, humans, and the betasort algorithm, as well as Q-learning, an established reward-prediction error (RPE) model. Of these, only Q-learning failed to respond above chance during critical test trials. Betasort's success (when compared to RPE models) and its computational efficiency (when compared to full Markov decision process implementations) suggest that the study of reinforcement learning in organisms will be best served by a feature-driven approach to comparing formal models.

  7. [Prediction of schistosomiasis infection rates of population based on ARIMA-NARNN model].

    PubMed

    Ke-Wei, Wang; Yu, Wu; Jin-Ping, Li; Yu-Yu, Jiang

    2016-07-12

    To explore the effectiveness of the autoregressive integrated moving average model-nonlinear auto-regressive neural network (ARIMA-NARNN) model in predicting schistosomiasis infection rates in a population. The ARIMA model, NARNN model, and ARIMA-NARNN model were established based on monthly schistosomiasis infection rates from January 2005 to February 2015 in Jiangsu Province, China. The fitting and prediction performances of the three models were compared. Compared to the ARIMA model and the NARNN model, the mean square error (MSE), mean absolute error (MAE), and mean absolute percentage error (MAPE) of the ARIMA-NARNN model were the lowest, with values of 0.0111, 0.0900, and 0.2824, respectively. The ARIMA-NARNN model can effectively fit and predict schistosomiasis infection rates in a population, which might have great application value for the prevention and control of schistosomiasis.
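
The three fit measures used to rank the models are straightforward to compute; a minimal sketch on an invented forecast (the numbers are illustrative, not the schistosomiasis series):

```python
import numpy as np

actual = np.array([1.2, 1.0, 0.8, 1.1, 0.9])
predicted = np.array([1.1, 1.05, 0.85, 1.0, 0.95])

err = predicted - actual
mse = np.mean(err**2)                        # mean square error
mae = np.mean(np.abs(err))                   # mean absolute error
mape = np.mean(np.abs(err / actual)) * 100   # mean absolute percentage error

print(f"MSE={mse:.4f} MAE={mae:.4f} MAPE={mape:.2f}%")
```

The hybrid's advantage in the study is simply that all three of these values are lowest for ARIMA-NARNN.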

  8. A parimutuel gambling perspective to compare probabilistic seismicity forecasts

    NASA Astrophysics Data System (ADS)

    Zechar, J. Douglas; Zhuang, Jiancang

    2014-10-01

    Using analogies to gaming, we consider the problem of comparing multiple probabilistic seismicity forecasts. To measure relative model performance, we suggest a parimutuel gambling perspective which addresses shortcomings of other methods such as likelihood ratio, information gain and Molchan diagrams. We describe two variants of the parimutuel approach for a set of forecasts: head-to-head, in which forecasts are compared in pairs, and round table, in which all forecasts are compared simultaneously. For illustration, we compare the 5-yr forecasts of the Regional Earthquake Likelihood Models experiment for M4.95+ seismicity in California.
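
One simple zero-sum reading of the head-to-head variant can be sketched as follows: in each space-time bin, both forecasts stake their probability on "event" and its complement on "no event", and the pot on the realized side is shared in proportion to the stakes. The payoff details here are an assumption for illustration, not the paper's exact scoring rules.

```python
def head_to_head(p_a, p_b, outcomes):
    """Total parimutuel gain of forecast A against B (zero-sum by construction)."""
    gain = 0.0
    for pa, pb, occurred in zip(p_a, p_b, outcomes):
        if occurred:
            # pot of 2 units split among stakes placed on "event"
            gain += 2.0 * pa / (pa + pb) - 1.0
        else:
            gain += 2.0 * (1 - pa) / ((1 - pa) + (1 - pb)) - 1.0
    return gain

# Forecast A is sharper than B on these three (invented) bins:
p_a = [0.8, 0.2, 0.7]
p_b = [0.4, 0.6, 0.5]
occurred = [True, False, True]

g_a = head_to_head(p_a, p_b, occurred)
g_b = head_to_head(p_b, p_a, occurred)
print(f"gain A: {g_a:+.3f}, gain B: {g_b:+.3f}")  # zero-sum: g_a == -g_b
```

The round-table variant generalizes the same pot-splitting to all forecasts simultaneously.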

  9. Material Models for the Human Torso Finite Element Model

    DTIC Science & Technology

    2018-04-04

    material characterizations drawn from current literature. Biofidelity of the ARL torso was determined by comparing peak force, force-displacement, peak...Flesh simulation. The soft tissue mesh in the upper neck was highly distorted at 21.2 ms (right) compared to the original mesh (left...a realistic response with results comparable to physical experiments to support future efforts to evaluate BABT. 2. Methods 2.1 Review of

  10. Comparison of Computational-Model and Experimental-Example Trained Neural Networks for Processing Speckled Fringe Patterns

    NASA Technical Reports Server (NTRS)

    Decker, A. J.; Fite, E. B.; Thorp, S. A.; Mehmed, O.

    1998-01-01

    The responses of artificial neural networks to experimental and model-generated inputs are compared for detection of damage in twisted fan blades using electronic holography. The training-set inputs, for this work, are experimentally generated characteristic patterns of the vibrating blades. The outputs are damage-flag indicators or second derivatives of the sensitivity-vector-projected displacement vectors from a finite element model. Artificial neural networks have been trained in the past with computational-model-generated training sets. This approach avoids the difficult inverse calculations traditionally used to compare interference fringes with the models. But the high modeling standards are hard to achieve, even with fan-blade finite-element models.

  12. A comparison of peer video modeling and self video modeling to teach textual responses in children with autism.

    PubMed

    Marcus, Alonna; Wilder, David A

    2009-01-01

    Peer video modeling was compared to self video modeling to teach 3 children with autism to respond appropriately to (i.e., identify or label) novel letters. A combination multiple baseline and multielement design was used to compare the two procedures. Results showed that all 3 participants met the mastery criterion in the self-modeling condition, whereas only 1 of the participants met the mastery criterion in the peer-modeling condition. In addition, the participant who met the mastery criterion in both conditions reached the criterion more quickly in the self-modeling condition. Results are discussed in terms of their implications for teaching new skills to children with autism.

  13. Tests of Si(111)-7 × 7 structural models by comparison with transmission electron diffraction patterns

    NASA Astrophysics Data System (ADS)

    McRae, E. G.; Petroff, P. M.

    1984-11-01

    Several structural models of the Si(111)-7 × 7 surface are tested by comparing calculated and observed transmission electron diffraction (TED) patterns. The models comprise "adatom" models where the unit mesh contains 12 adatoms or atom clusters in a locally (2 × 2) arrangement, and "triangle-dimer" models where the unit mesh contains 9 dimers or pairs of dimers bordering a triangular subunit of the unit mesh. The distribution of diffraction intensity among fractional-order spots is calculated kinematically and compared with TED patterns observed by Petroff and Wilson and others. No agreement is found for adatom models. Good but not perfect agreement is found for one triangle-dimer model.

  14. Determination of Semivariogram Models to Krige Hourly and Daily Solar Irradiance in Western Nebraska.

    NASA Astrophysics Data System (ADS)

    Merino, G. G.; Jones, D.; Stooksbury, D. E.; Hubbard, K. G.

    2001-06-01

    In this paper, linear and spherical semivariogram models were determined for use in kriging hourly and daily solar irradiation for every season of the year. The data used to generate the models were from 18 weather stations in western Nebraska. The models generated were tested using cross validation. The performances of the spherical and linear semivariogram models were compared with each other and also with the semivariogram models based on the best fit to the sample semivariogram of a particular day or hour. There were no significant differences in the performance of the three models. This result and the comparable errors produced by the models in kriging indicated that the linear and spherical models could be used to perform kriging at any hour and day of the year without deriving an individual semivariogram model for that day or hour. The seasonal mean absolute errors associated with kriging, within the network, when using the spherical or the linear semivariogram models were between 10% and 13% of the mean irradiation for daily irradiation and between 12% and 20% for hourly irradiation. These errors represent an improvement of 1%-2% when compared with replacing data at a given site with the data of the nearest weather station.
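
    For illustration, a spherical semivariogram and an ordinary-kriging estimate can be sketched as follows; the parameter values and station layout are hypothetical, not taken from the study.

```python
import numpy as np

def spherical(h, nugget, sill, a):
    """Spherical semivariogram: rises from the nugget to the sill at range a.
    (A linear model would simply be nugget + slope * h.)"""
    h = np.asarray(h, float)
    g = nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h == 0, 0.0, np.where(h >= a, sill, g))

def ordinary_krige(xy, z, x0, gamma):
    """Ordinary kriging: solve the semivariogram system for weights that sum
    to 1 (unbiasedness) and return the weighted estimate at location x0."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)       # semivariances between sample points
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(xy - x0, axis=1))  # semivariances to x0
    w = np.linalg.solve(A, b)[:n]
    return float(w @ z)
```

    Swapping `spherical` for a linear semivariogram changes only the `gamma` callable, which is the sense in which the two models are interchangeable in the kriging step.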

  15. Differences between the MEMLS and the multiple-layer HUT model and their comparisons with in-situ snowpack observations

    NASA Astrophysics Data System (ADS)

    Pan, J.; Durand, M. T.; Sandells, M. J.; Lemmetyinen, J.; Kim, E. J.

    2013-12-01

    Application of passive microwave (PM) brightness temperature to snow water equivalent retrieval requires a deep understanding of snow emission models, not only in terms of their ability to reproduce in-situ PM observations, but also in terms of how their theoretical differences approximate radiative transfer theory. In this paper, differences between the multiple-layer HUT (or TKK) model and the Microwave Emission Model of Layered Snowpacks (MEMLS) were listed, and the two models were compared with ground-based PM observations of snow at Steamboat Springs, Colorado, USA; Churchill, Canada; and Sodankyla, Finland. The two models were chosen because their multiple-layer schemes are close to actual layer-by-layer snow measurements. Both models are semi-empirical; whereas the HUT model uses the mean snow grain size, MEMLS uses the correlation length to relate the snow microstructure to the scattering coefficients. The two parameters are related according to previous studies. The Specific Surface Area (SSA) was measured at the three test sites to derive the correlation length, while mean snow grain sizes were available at Steamboat Springs and Sodankyla. It was shown that, despite the different apparent forms of their radiative transfer equations, the different parts of the two models have a one-to-one correspondence, and the intermediate parameters are comparable. Regarding the multiple-layer structure of the models, it was found that the HUT model considers the internal reflectivity of each snow layer to be zero. The two-flux radiative transfer equations of the two models were compared, and the counterpart of the semi-empirical parameter q in the HUT model was identified in MEMLS. The effect of the six-flux approximation in MEMLS, which accounts for transverse radiation scattered into the direction under consideration, was also compared. Based on these model comparisons, we analyzed the differences in TB predictions at the three test sites.

  16. Development of multivariate NTCP models for radiation-induced hypothyroidism: a comparative analysis.

    PubMed

    Cella, Laura; Liuzzi, Raffaele; Conson, Manuel; D'Avino, Vittoria; Salvatore, Marco; Pacelli, Roberto

    2012-12-27

    Hypothyroidism is a frequent late side effect of radiation therapy of the cervical region. The purpose of this work is to develop multivariate normal tissue complication probability (NTCP) models for radiation-induced hypothyroidism (RHT) and to compare them with already existing NTCP models for RHT. Fifty-three patients treated with sequential chemo-radiotherapy for Hodgkin's lymphoma (HL) were retrospectively reviewed for RHT events. Clinical information along with thyroid gland dose distribution parameters were collected and their correlation to RHT was analyzed by Spearman's rank correlation coefficient (Rs). A multivariate logistic regression method using resampling (bootstrapping) was applied to select the model order and parameters for NTCP modeling. Model performance was evaluated through the area under the receiver operating characteristic curve (AUC). Models were tested against external published data on RHT and compared with other published NTCP models. If the thyroid volume exceeding X Gy is expressed as a percentage (Vx(%)), a two-variable NTCP model including V30(%) and gender proved to be the optimal predictive model for RHT (Rs = 0.615, p < 0.001, AUC = 0.87). Conversely, if the absolute thyroid volume exceeding X Gy (Vx(cc)) was analyzed, an NTCP model based on 3 variables including V30(cc), thyroid gland volume and gender was selected as the most predictive model (Rs = 0.630, p < 0.001, AUC = 0.85). The three-variable model performs better when tested on an external cohort characterized by large inter-individual variation in thyroid volumes (AUC = 0.914, 95% CI 0.760-0.984). A comparable performance was found between our model and that proposed in the literature based on thyroid gland mean dose and volume (p = 0.264). 
The absolute volume of thyroid gland exceeding 30 Gy in combination with thyroid gland volume and gender provide an NTCP model for RHT with improved prediction capability not only within our patient population but also in an external cohort.

  17. A comparison of the performances of an artificial neural network and a regression model for GFR estimation.

    PubMed

    Liu, Xun; Li, Ning-shan; Lv, Lin-sheng; Huang, Jian-hua; Tang, Hua; Chen, Jin-xia; Ma, Hui-juan; Wu, Xiao-ming; Lou, Tan-qi

    2013-12-01

    Accurate estimation of glomerular filtration rate (GFR) is important in clinical practice. Current models derived from regression are limited by the imprecision of GFR estimates. We hypothesized that an artificial neural network (ANN) might improve the precision of GFR estimates. This was a study of diagnostic test accuracy. 1,230 patients with chronic kidney disease were enrolled, including the development cohort (n=581), internal validation cohort (n=278), and external validation cohort (n=371). GFR was estimated (eGFR) using a new ANN model and a new regression model, both based on age, sex, and standardized serum creatinine level and derived in the development and internal validation cohorts, as well as the CKD-EPI (Chronic Kidney Disease Epidemiology Collaboration) 2009 creatinine equation. The reference standard was measured GFR (mGFR), obtained using a diethylenetriaminepentaacetic acid renal dynamic imaging method. Serum creatinine was measured with an enzymatic method traceable to isotope-dilution mass spectrometry. In the external validation cohort, mean mGFR was 49±27 (SD) mL/min/1.73 m2 and biases (median difference between mGFR and eGFR) for the CKD-EPI, new regression, and new ANN models were 0.4, 1.5, and -0.5 mL/min/1.73 m2, respectively (P<0.001 and P=0.02 compared to CKD-EPI and P<0.001 comparing the new regression and ANN models). Precisions (IQRs for the difference) were 22.6, 14.9, and 15.6 mL/min/1.73 m2, respectively (P<0.001 for both compared to CKD-EPI and P<0.001 comparing the new ANN and new regression models). Accuracies (proportions of eGFRs not deviating >30% from mGFR) were 50.9%, 77.4%, and 78.7%, respectively (P<0.001 for both compared to CKD-EPI and P=0.5 comparing the new ANN and new regression models). Different methods for measuring GFR were a source of systematic bias in comparisons of new models to CKD-EPI, and both the derivation and validation cohorts consisted of a group of patients who were referred to the same institution. 
An ANN model using 3 variables did not perform better than a new regression model. Whether ANN can improve GFR estimation using more variables requires further investigation.
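
    The three performance measures used in this abstract (bias as the median difference, precision as the IQR of the differences, and accuracy as the share of estimates within 30% of measured GFR) are straightforward to compute; a small numpy sketch with hypothetical input values:

```python
import numpy as np

def gfr_metrics(mgfr, egfr):
    """Bias, precision, and accuracy as defined in the abstract:
    bias = median(mGFR - eGFR), precision = IQR of the differences,
    P30 = percentage of eGFRs not deviating more than 30% from mGFR."""
    mgfr = np.asarray(mgfr, float)
    egfr = np.asarray(egfr, float)
    diff = mgfr - egfr
    bias = float(np.median(diff))
    q25, q75 = np.percentile(diff, [25, 75])
    p30 = float(np.mean(np.abs(diff) <= 0.30 * mgfr) * 100.0)
    return bias, float(q75 - q25), p30
```
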

  18. Comparing live and remote models in eating conformity research.

    PubMed

    Feeney, Justin R; Polivy, Janet; Pliner, Patricia; Sullivan, Margot D

    2011-01-01

    Research demonstrates that people conform to how much other people eat. This conformity occurs in the presence of other people (live model) and when people view information about how much food prior participants ate (remote models). The assumption in the literature has been that remote models produce a similar effect to live models, but this has never been tested. To investigate this issue, we randomly paired participants with a live or remote model and compared their eating to those who ate alone. We found that participants exposed to both types of model differed significantly from those in the control group, but there was no significant difference between the two modeling procedures.

  19. Comparative modeling of coevolution in communities of unicellular organisms: adaptability and biodiversity.

    PubMed

    Lashin, Sergey A; Suslov, Valentin V; Matushkin, Yuri G

    2010-06-01

    We propose an original program, "Evolutionary constructor", that is capable of computationally efficient modeling of both population-genetic and ecological problems, combining these directions in one model of the required detail level. We also present results of comparative modeling of stability, adaptability and biodiversity dynamics in populations of unicellular haploid organisms which form symbiotic ecosystems. The advantages and disadvantages of two evolutionary strategies of biota formation, one based on a few generalist taxa and one based on biodiversity, are discussed.

  20. 3-way Networks: Application of Hypergraphs for Modelling Increased Complexity in Comparative Genomics

    DOE PAGES

    Weighill, Deborah A.; Jacobson, Daniel A.

    2015-03-27

    Herein we present and develop the theory of 3-way networks, a type of hypergraph in which each edge models relationships between triplets of objects as opposed to pairs of objects as done by standard network models. We explore approaches of how to prune these 3-way networks, illustrate their utility in comparative genomics and demonstrate how they find relationships which would be missed by standard 2-way network models using a phylogenomic dataset of 211 bacterial genomes.
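
    The 3-way network idea can be sketched as a weighted set of node triplets; the class interface and the threshold pruning rule below are illustrative assumptions, not the authors' implementation.

```python
from itertools import combinations

class ThreeWayNetwork:
    """Sketch of a 3-way network: each hyperedge links a triplet of nodes
    with a weight (e.g., a relationship score over three genomes)."""

    def __init__(self):
        self.edges = {}  # frozenset of 3 nodes -> weight

    def add(self, a, b, c, weight):
        self.edges[frozenset((a, b, c))] = weight

    def prune(self, threshold):
        # Drop hyperedges weaker than the threshold (one possible pruning rule).
        self.edges = {e: w for e, w in self.edges.items() if w >= threshold}

    def projected_pairs(self):
        # The 2-way projection: every pair co-occurring in a surviving triplet.
        # Relationships that exist only at the triplet level are lost here,
        # which is the kind of information a standard 2-way network misses.
        return {frozenset(p) for e in self.edges for p in combinations(e, 2)}
```
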

  1. Differential Cross Sections for Proton-Proton Elastic Scattering

    NASA Technical Reports Server (NTRS)

    Norman, Ryan B.; Dick, Frank; Norbury, John W.; Blattnig, Steve R.

    2009-01-01

    Proton-proton elastic scattering is investigated within the framework of the one pion exchange model in an attempt to model nucleon-nucleon interactions spanning the large range of energies important to cosmic ray shielding. A quantum field theoretic calculation is used to compute both differential and total cross sections. A scalar theory is then presented and compared to the one pion exchange model. The theoretical cross sections are compared to proton-proton scattering data to determine the validity of the models.

  2. Modeling Pediatric Brain Trauma: Piglet Model of Controlled Cortical Impact.

    PubMed

    Pareja, Jennifer C Munoz; Keeley, Kristen; Duhaime, Ann-Christine; Dodge, Carter P

    2016-01-01

    The brain has different responses to traumatic injury as a function of its developmental stage. As a model of injury to the immature brain, the piglet shares numerous similarities with humans in morphology and neurodevelopmental sequence. This chapter describes a piglet scaled focal contusion model of traumatic brain injury that accounts for the changes in mass and morphology of the brain as it matures, facilitating the study of age-dependent differences in response to a comparable mechanical trauma.

  3. 3-way Networks: Application of Hypergraphs for Modelling Increased Complexity in Comparative Genomics

    PubMed Central

    Weighill, Deborah A; Jacobson, Daniel A

    2015-01-01

    We present and develop the theory of 3-way networks, a type of hypergraph in which each edge models relationships between triplets of objects as opposed to pairs of objects as done by standard network models. We explore approaches of how to prune these 3-way networks, illustrate their utility in comparative genomics and demonstrate how they find relationships which would be missed by standard 2-way network models using a phylogenomic dataset of 211 bacterial genomes. PMID:25815802

  4. Using Multivariate Adaptive Regression Spline and Artificial Neural Network to Simulate Urbanization in Mumbai, India

    NASA Astrophysics Data System (ADS)

    Ahmadlou, M.; Delavar, M. R.; Tayyebi, A.; Shafizadeh-Moghadam, H.

    2015-12-01

    Land use change (LUC) models used for modelling urban growth differ in structure and performance. Local models divide the data into separate subsets and fit distinct models on each of the subsets. Non-parametric models are data driven and usually do not have a fixed model structure, or the model structure is unknown before the modelling process. On the other hand, global models perform modelling using all the available data. In addition, parametric models have a fixed structure before the modelling process and they are model driven. Since few studies have compared local non-parametric models with global parametric models, this study compares a local non-parametric model called multivariate adaptive regression spline (MARS) and a global parametric model called artificial neural network (ANN) to simulate urbanization in Mumbai, India. Both models determine the relationship between a dependent variable and multiple independent variables. We used the receiver operating characteristic (ROC) to compare the power of both models for simulating urbanization. Landsat images of 1991 (TM) and 2010 (ETM+) were used for modelling the urbanization process. The drivers considered for urbanization in this area were distance to urban areas, urban density, distance to roads, distance to water, distance to forest, distance to railway, distance to the central business district, number of agricultural cells in a 7 by 7 neighbourhood, and slope in 1991. The results showed that the area under the ROC curve for MARS and ANN was 94.77% and 95.36%, respectively. Thus, ANN performed slightly better than MARS in simulating urban areas in Mumbai, India.
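
    The area under the ROC curve used to compare the two models can be computed without any fitting library via the rank-sum (Mann-Whitney) identity: the AUC is the probability that a randomly chosen positive cell outranks a randomly chosen negative one. A small numpy sketch:

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC via the rank-sum identity. labels: 0/1 ground truth (e.g., urban
    vs. non-urban cells); scores: predicted probabilities of the positive class."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, float)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    for s in np.unique(scores):          # average ranks over ties
        m = scores == s
        ranks[m] = ranks[m].mean()
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```
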

  5. An assessment of the trophic structure of the Bay of Biscay continental shelf food web: Comparing estimates derived from an ecosystem model and isotopic data

    NASA Astrophysics Data System (ADS)

    Lassalle, G.; Chouvelon, T.; Bustamante, P.; Niquil, N.

    2014-01-01

    Comparing outputs of ecosystem models with estimates derived from experimental and observational approaches is important in creating valuable feedback for model construction, analyses and validation. Stable isotopes and mass-balanced trophic models are well-known and widely used as approximations to describe the structure of food webs, but their consistency has not been properly established, as attempts to compare these methods remain scarce. Model construction is a data-consuming step, meaning independent sets for validation are rare. Trophic linkages in the food web of the French continental shelf of the Bay of Biscay were recently investigated using both methodologies. Trophic levels for mono-specific compartments representing small pelagic fish and marine mammals and multi-species functional groups corresponding to demersal fish and cephalopods, derived from modelling, were compared with trophic levels calculated from independent carbon and nitrogen isotope ratios. Estimates of the trophic niche width of those species, or groups of species, were compared between these two approaches as well. A significant and close-to-one positive correlation (Spearman r² = 0.72, n = 16, p < 0.0001) was found between trophic levels estimated by Ecopath modelling and those derived from isotopic signatures. Differences between estimates were particularly low for mono-specific compartments. No clear relationship existed between indices of trophic niche width derived from both methods. Given the wide recognition of trophic levels as a useful concept in ecosystem-based fisheries management, propositions were made to further combine these two approaches.

  6. An Analytical Quantum Model to Calculate Fluorescence Enhancement of a Molecule in Vicinity of a Sub-10 nm Metal Nanoparticle.

    PubMed

    Bagheri, Zahra; Massudi, Reza

    2017-05-01

    An analytical quantum model is used to calculate the electrical permittivity of a metal nanoparticle in the vicinity of a molecule. Different parameters, such as radiative and non-radiative decay rates, quantum yield, electrical field enhancement factor, and fluorescence enhancement, are calculated with this model and compared with those obtained using the classical Drude model. It is observed that the analytical quantum model yields a higher enhancement factor, by up to 30%, compared to the classical model for nanoparticles smaller than 10 nm. Furthermore, the results are in better agreement with those realized experimentally.

  7. Comparison of linear and square superposition hardening models for the surface nanoindentation of ion-irradiated materials

    NASA Astrophysics Data System (ADS)

    Xiao, Xiazi; Yu, Long

    2018-05-01

    Linear and square superposition hardening models are compared for the surface nanoindentation of ion-irradiated materials. Hardening mechanisms of both dislocations and defects within the plasticity affected region (PAR) are considered. Four sets of experimental data for ion-irradiated materials are adopted to compare with theoretical results of the two hardening models. It is indicated that both models describe experimental data equally well when the PAR is within the irradiated layer; whereas, when the PAR is beyond the irradiated region, the square superposition hardening model performs better. Therefore, the square superposition model is recommended to characterize the hardening behavior of ion-irradiated materials.
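
    The two superposition rules being compared can be written down directly. The symbols below, hardening increments from dislocations and from irradiation defects, are generic stand-ins for whatever contributions are summed within the plasticity affected region:

```python
import math

def linear_superposition(dh_dislocation, dh_defect):
    # Linear rule: hardening contributions add directly.
    return dh_dislocation + dh_defect

def square_superposition(dh_dislocation, dh_defect):
    # Square rule: contributions superpose as a root sum of squares,
    # so the combined hardening is always <= the linear prediction.
    return math.sqrt(dh_dislocation ** 2 + dh_defect ** 2)
```

    The difference between the rules grows as the two contributions become comparable in size, which is consistent with the abstract's finding that the choice matters most when the plasticity affected region extends beyond the irradiated layer.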

  8. Grain Floatation During Equiaxed Solidification of an Al-Cu Alloy in a Side-Cooled Cavity: Part II—Numerical Studies

    NASA Astrophysics Data System (ADS)

    Kumar, Arvind; Walker, Mike J.; Sundarraj, Suresh; Dutta, Pradip

    2011-08-01

    In this article, a single-phase, one-domain macroscopic model is developed for studying binary alloy solidification with moving equiaxed solid phase, along with the associated transport phenomena. In this model, issues such as thermosolutal convection, motion of solid phase relative to liquid and viscosity variations of the solid-liquid mixture with solid fraction in the mobile zone are taken into account. Using the model, the associated transport phenomena during solidification of Al-Cu alloys in a rectangular cavity are predicted. The results for temperature variation, segregation patterns, and eutectic fraction distribution are compared with data from in-house experiments. The model predictions compare well with the experimental results. To highlight the influence of solid phase movement on convection and final macrosegregation, the results of the current model are also compared with those obtained from the conventional solidification model with stationary solid phase. By including the independent movement of the solid phase into the fluid transport model, better predictions of macrosegregation, microstructure, and even shrinkage locations were obtained. Mechanical property prediction models based on microstructure will benefit from the improved accuracy of this model.

  9. Pore-scale modeling of capillary trapping in water-wet porous media: A new cooperative pore-body filling model

    NASA Astrophysics Data System (ADS)

    Ruspini, L. C.; Farokhpoor, R.; Øren, P. E.

    2017-10-01

    We present a pore-network model study of capillary trapping in water-wet porous media. The amount and distribution of trapped non-wetting phase is determined by the competition between two trapping mechanisms - snap-off and cooperative pore-body filling. We develop a new model to describe the pore-body filling mechanism in geologically realistic pore-networks. The model accounts for the geometrical characteristics of the pore, the spatial location of the connecting throats and the local fluid topology at the time of the displacement. We validate the model by comparing computed capillary trapping curves with published data for four different water-wet rocks. Computations are performed on pore-networks extracted from micro-CT images and process-based reconstructions of the actual rocks used in the experiments. Compared with commonly used stochastic models, the new model describes more accurately the experimental measurements, especially for well connected porous systems where trapping is controlled by subtleties of the pore structure. The new model successfully predicts relative permeabilities and residual saturation for Bentheimer sandstone using in-situ measured contact angles as input to the simulations. The simulated trapped cluster size distributions are compared with predictions from percolation theory.

  10. Enhancing prediction power of chemometric models through manipulation of the fed spectrophotometric data: A comparative study

    NASA Astrophysics Data System (ADS)

    Saad, Ahmed S.; Hamdy, Abdallah M.; Salama, Fathy M.; Abdelkawy, Mohamed

    2016-10-01

    The effect of data manipulation in the preprocessing step preceding the construction of chemometric models was assessed. The same set of UV spectral data was used for the construction of PLS and PCR models, both directly and after mathematical manipulation according to the well-known first and second derivatives of the absorption spectra, ratio spectra, and first and second derivatives of the ratio spectra spectrophotometric methods; the optimal working wavelength ranges were carefully selected for each model before the models were constructed. Unexpectedly, the number of latent variables used for the models' construction varied among the different methods. The prediction power of the different models was compared using a validation set of 8 mixtures prepared as per the multilevel multifactor design, and the results were statistically compared using a two-way ANOVA test. The root mean square error of prediction (RMSEP) was used for further comparison of the predictability of the different constructed models. Although no significant difference was found between results obtained using the Partial Least Squares (PLS) and Principal Component Regression (PCR) models, the discrepancies among results were found to be attributable to the variation in the discrimination power of the adopted spectrophotometric methods on the spectral data.

  11. Towards a physics-based multiscale modelling of the electro-mechanical coupling in electro-active polymers.

    PubMed

    Cohen, Noy; Menzel, Andreas; deBotton, Gal

    2016-02-01

    Owing to the increasing number of industrial applications of electro-active polymers (EAPs), there is a growing need for electromechanical models which accurately capture their behaviour. To this end, we compare the predicted behaviour of EAPs undergoing homogeneous deformations according to three electromechanical models. The first model is a phenomenological continuum-based model composed of the mechanical Gent model and a linear relationship between the electric field and the polarization. The electrical and the mechanical responses according to the second model are based on the physical structure of the polymer chain network. The third model incorporates a neo-Hookean mechanical response and a physically motivated microstructurally based long-chains model for the electrical behaviour. In the microstructural-motivated models, the integration from the microscopic to the macroscopic levels is accomplished by the micro-sphere technique. Four types of homogeneous boundary conditions are considered and the behaviours determined according to the three models are compared. For the microstructurally motivated models, these analyses are performed and compared with the widely used phenomenological model for the first time. Some of the aspects revealed in this investigation, such as the dependence of the intensity of the polarization field on the deformation, highlight the need for an in-depth investigation of the relationships between the structure and the behaviours of the EAPs at the microscopic level and their overall macroscopic response.

  12. Evaluation of load flow and grid expansion in a unit-commitment and expansion optimization model (SciGRID International Conference on Power Grid Modelling)

    NASA Astrophysics Data System (ADS)

    Senkpiel, Charlotte; Biener, Wolfgang; Shammugam, Shivenes; Längle, Sven

    2018-02-01

    Energy system models serve as a basis for long-term system planning. Joint optimization of electricity generating technologies, storage systems and the electricity grid leads to lower total system cost compared to an approach in which grid expansion follows a given technology portfolio and its distribution. Modelers often face the problem of finding a good tradeoff between computational time and the level of detail that can be modeled. This paper analyses the differences between a transport model and a DC load flow model to evaluate the validity, in terms of system reliability, of using a simple but faster transport model within the system optimization model. The main findings in this paper are that a higher regional resolution of a system leads to better results compared to an approach in which regions are clustered, as more overloads can be detected. An aggregation of lines between two model regions, compared to a line-sharp representation, has little influence on grid expansion within a system optimizer. In a DC load flow model, overloads can be detected in the line-sharp case, which is therefore preferred. Overall, the regions that need to reinforce the grid are identified within the system optimizer. Finally, the paper recommends the usage of a load-flow model to test the validity of the model results.
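
    A DC load flow of the kind compared against the transport model reduces to a single linear solve for bus voltage angles, from which line flows (and hence overloads) follow. A minimal numpy sketch on a hypothetical 3-bus network:

```python
import numpy as np

def dc_load_flow(lines, injections, slack=0):
    """lines: list of (bus_i, bus_j, reactance); injections: net power per bus
    (generation minus load, summing to ~0). Returns per-line flows from the
    linearized DC power-flow approximation."""
    n = len(injections)
    B = np.zeros((n, n))                     # nodal susceptance matrix
    for i, j, x in lines:
        b = 1.0 / x
        B[i, i] += b
        B[j, j] += b
        B[i, j] -= b
        B[j, i] -= b
    keep = [k for k in range(n) if k != slack]  # fix the slack bus angle at 0
    theta = np.zeros(n)
    theta[keep] = np.linalg.solve(B[np.ix_(keep, keep)],
                                  np.asarray(injections, float)[keep])
    return [(theta[i] - theta[j]) / x for i, j, x in lines]
```

    A transport model, by contrast, only enforces nodal balance and line capacity, so it can route power along paths that the angle-driven physics of the DC solve would never produce; that gap is exactly why the two detect different overloads.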

  13. Building a better methane generation model: Validating models with methane recovery rates from 35 Canadian landfills.

    PubMed

    Thompson, Shirley; Sawyer, Jennifer; Bonam, Rathan; Valdivia, J E

    2009-07-01

    The German EPER, TNO, Belgium, LandGEM, and Scholl Canyon models for estimating methane production were compared to methane recovery rates for 35 Canadian landfills, assuming that 20% of emissions were not recovered. Two different fractions of degradable organic carbon (DOC(f)) were applied in all models. Most models performed better when the DOC(f) was 0.5 compared to 0.77. The Belgium, Scholl Canyon, and LandGEM version 2.01 models produced the best results of the existing models, with respective mean absolute errors relative to methane generation rates (recovery rates + 20%) of 91%, 71%, and 89% at 0.50 DOC(f), and 171%, 115%, and 81% at 0.77 DOC(f). The Scholl Canyon model typically overestimated methane recovery rates, and the LandGEM version 2.01 model, which modifies the Scholl Canyon model by dividing waste by 10, consistently underestimated methane recovery rates; this comparison suggested that setting the divisor for waste in the Scholl Canyon model between one and ten could improve its accuracy. At 0.50 DOC(f), the modified model had the lowest mean absolute error with a divisor of 1.5 (63 +/- 45%); at 0.77 DOC(f), with a divisor of 2.3 (57 +/- 47%). These modified models reduced error and variability substantially, and both have a strong correlation of r = 0.92.
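
    The Scholl Canyon family of models is a first-order decay scheme, and the waste divisor discussed above enters as a single parameter. A small sketch of that structure; the rate constant and methane generation potential values are illustrative, not the study's calibrated values:

```python
import math

def first_order_methane(waste_by_year, t, k=0.05, l0=100.0, divisor=1.0):
    """Scholl Canyon-style first-order decay: each year's deposited waste W_i
    contributes k * L0 * (W_i / divisor) * exp(-k * (t - i)) to generation in
    year t. The divisor mirrors the modification discussed in the abstract:
    1 = Scholl Canyon, 10 = LandGEM v2.01, ~1.5-2.3 as proposed above."""
    return sum(k * l0 * (w / divisor) * math.exp(-k * (t - year))
               for year, w in waste_by_year.items() if year <= t)
```

    Because the divisor scales every term identically, tuning it between 1 and 10 simply rescales the generation curve, which is why it can split the difference between the overestimating and underestimating variants.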

  14. Comparison of the Effects of Video Models with and without Verbal Cueing on Task Completion by Young Adults with Moderate Intellectual Disability

    ERIC Educational Resources Information Center

    Mechling, Linda C.; Collins, Terri S.

    2012-01-01

    This study compared the effects of video models with and without verbal cuing (voice over) on the completion of fine motor cooking related tasks by four young adults with moderate intellectual disability. The effects of the two modeling conditions were compared using an adapted alternating treatments design with an extended baseline, comparison,…

  15. Comparing field- and model-based standing dead tree carbon stock estimates across forests of the US

    Treesearch

    Chistopher W. Woodall; Grant M. Domke; David W. MacFarlane; Christopher M. Oswalt

    2012-01-01

    As signatories to the United Nations Framework Convention on Climate Change, the US has been estimating standing dead tree (SDT) carbon (C) stocks using a model based on live tree attributes. The USDA Forest Service began sampling SDTs nationwide in 1999. With comprehensive field data now available, the objective of this study was to compare field- and model-based...

  16. A study of frontal dynamics with application to the Australian summertime 'cool change'

    NASA Technical Reports Server (NTRS)

    Reeder, Michael J.; Smith, Roger K.

    1987-01-01

    The dynamics of frontal evolution is examined in terms of the Australian summertime cool change using a two-dimensional numerical model. The model is synthesized from observational data on surface cold fronts obtained during the Australian Cold Fronts Research Program, and the model develops a quasi-steady surface cold front during the 24 hours of integration. The characteristics of this model are compared with those of a kinematic model; it is observed that the features of the two models correspond. The two-dimensional and kinematic models are also compared with a 24-hour prediction of the cold front of February 1983 using the three-dimensional nested-grid model of the Australian Numerical Meteorology Research Center, developed by Gauntlett et al. (1984). Good correlation between these models is detected.

  17. Characteristics of Tropical Cyclones in High-Resolution Models of the Present Climate

    NASA Technical Reports Server (NTRS)

    Shaevitz, Daniel A.; Camargo, Suzana J.; Sobel, Adam H.; Jonas, Jeffery A.; Kim, Daeyhun; Kumar, Arun; LaRow, Timothy E.; Lim, Young-Kwon; Murakami, Hiroyuki; Roberts, Malcolm J.; et al.

    2014-01-01

    The global characteristics of tropical cyclones (TCs) simulated by several climate models are analyzed and compared with observations. The global climate models were forced by the same sea surface temperature (SST) fields in two types of experiments, using a climatological SST and interannually varying SST. TC tracks and intensities are derived from each model's output fields by the group who ran that model, using their own preferred tracking scheme; the study considers the combination of model and tracking scheme as a single modeling system, and compares the properties derived from the different systems. Overall, the observed geographic distribution of global TC frequency was reasonably well reproduced. As expected, with the exception of one model, intensities of the simulated TCs were lower than in observations, to a degree that varies considerably across models.

  18. Verification and transfer of thermal pollution model. Volume 4: User's manual for three-dimensional rigid-lid model

    NASA Technical Reports Server (NTRS)

    Lee, S. S.; Nwadike, E. V.; Sinha, S. E.

    1982-01-01

    The theory of a three-dimensional (3-D) mathematical thermal discharge model and a related one-dimensional (1-D) model are described. Model verification at two sites and a separate user's manual for each model are included. The 3-D model has two forms: free surface and rigid lid. The former allows a free air/water interface and is suited for cases where surface wave heights are significant compared to the mean water depth, such as estuaries and coastal regions. The latter is suited for small surface wave heights compared to depth, because surface elevation is removed as a parameter. These models allow computation of time-dependent velocity and temperature fields for given initial conditions and time-varying boundary conditions. The free surface model also provides surface height variations with time.

  19. Characteristics of Tropical Cyclones in High-resolution Models in the Present Climate

    NASA Technical Reports Server (NTRS)

    Shaevitz, Daniel A.; Camargo, Suzana J.; Sobel, Adam H.; Jonas, Jeffrey A.; Kim, Daehyun; Kumar, Arun; LaRow, Timothy E.; Lim, Young-Kwon; Murakami, Hiroyuki; Reed, Kevin; et al.

    2014-01-01

    The global characteristics of tropical cyclones (TCs) simulated by several climate models are analyzed and compared with observations. The global climate models were forced by the same sea surface temperature (SST) fields in two types of experiments, using climatological SST and interannually varying SST. TC tracks and intensities are derived from each model's output fields by the group who ran that model, using their own preferred tracking scheme; the study considers the combination of model and tracking scheme as a single modeling system, and compares the properties derived from the different systems. Overall, the observed geographic distribution of global TC frequency was reasonably well reproduced. As expected, with the exception of one model, intensities of the simulated TCs were lower than in observations, to a degree that varies considerably across models.

  20. Dynamic analysis of rotor flex-structure based on nonlinear anisotropic shell models

    NASA Astrophysics Data System (ADS)

    Bauchau, Olivier A.; Chiang, Wuying

    1991-05-01

    In this paper an anisotropic shallow shell model is developed that accommodates transverse shearing deformations and arbitrarily large displacements and rotations, while strains are assumed to remain small. Two kinematic models are developed: the first uses two degrees of freedom to locate the direction of the normal to the shell's midplane, the second uses three. The latter model allows for automatic compatibility of the shell model with beam models. The shell model is validated by comparing its predictions with several benchmark problems. In actual helicopter rotor blade problems, the shell model of the flex structure is shown to give very different results compared to beam models. The lead-lag and torsion modes in particular are strongly affected, whereas flapping modes seem to be less affected.

  1. Time Series Analysis for Forecasting Hospital Census: Application to the Neonatal Intensive Care Unit

    PubMed Central

    Hoover, Stephen; Jackson, Eric V.; Paul, David; Locke, Robert

    2016-01-01

    Background: Accurate prediction of future patient census in hospital units is essential for patient safety, health outcomes, and resource planning. Forecasting census in the Neonatal Intensive Care Unit (NICU) is particularly challenging due to limited ability to control the census and clinical trajectories. The fixed average census approach, using average census from the previous year, is a forecasting alternative used in clinical practice, but has limitations due to census variations. Objectives: Our objectives are to: (i) analyze the daily NICU census at a single health care facility and develop census forecasting models, (ii) explore models with and without patient data characteristics obtained at the time of admission, and (iii) evaluate accuracy of the models compared with the fixed average census approach. Methods: We used five years of retrospective daily NICU census data for model development (January 2008 – December 2012, N=1827 observations) and one year of data for validation (January – December 2013, N=365 observations). Best-fitting models of ARIMA and linear regression were applied to various 7-day prediction periods and compared using error statistics. Results: The census showed a slightly increasing linear trend. Best-fitting models included a non-seasonal model, ARIMA(1,0,0); seasonal ARIMA models, ARIMA(1,0,0)x(1,1,2)7 and ARIMA(2,1,4)x(1,1,2)14; as well as a seasonal linear regression model. Proposed forecasting models resulted on average in 36.49% improvement in forecasting accuracy compared with the fixed average census approach. Conclusions: Time series models provide higher prediction accuracy under different census conditions compared with the fixed average census approach. The presented methodology is easily applicable in clinical practice, can be generalized to other care settings, supports short- and long-term census forecasting, and informs staff resource planning. PMID:27437040

  2. Time Series Analysis for Forecasting Hospital Census: Application to the Neonatal Intensive Care Unit.

    PubMed

    Capan, Muge; Hoover, Stephen; Jackson, Eric V; Paul, David; Locke, Robert

    2016-01-01

    Accurate prediction of future patient census in hospital units is essential for patient safety, health outcomes, and resource planning. Forecasting census in the Neonatal Intensive Care Unit (NICU) is particularly challenging due to limited ability to control the census and clinical trajectories. The fixed average census approach, using average census from previous year, is a forecasting alternative used in clinical practice, but has limitations due to census variations. Our objectives are to: (i) analyze the daily NICU census at a single health care facility and develop census forecasting models, (ii) explore models with and without patient data characteristics obtained at the time of admission, and (iii) evaluate accuracy of the models compared with the fixed average census approach. We used five years of retrospective daily NICU census data for model development (January 2008 - December 2012, N=1827 observations) and one year of data for validation (January - December 2013, N=365 observations). Best-fitting models of ARIMA and linear regression were applied to various 7-day prediction periods and compared using error statistics. The census showed a slightly increasing linear trend. Best fitting models included a non-seasonal model, ARIMA(1,0,0), seasonal ARIMA models, ARIMA(1,0,0)x(1,1,2)7 and ARIMA(2,1,4)x(1,1,2)14, as well as a seasonal linear regression model. Proposed forecasting models resulted on average in 36.49% improvement in forecasting accuracy compared with the fixed average census approach. Time series models provide higher prediction accuracy under different census conditions compared with the fixed average census approach. Presented methodology is easily applicable in clinical practice, can be generalized to other care settings, support short- and long-term census forecasting, and inform staff resource planning.
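
    The core comparison, a fitted time-series model against the fixed-average baseline, can be sketched in a few lines. Here an AR(1) model is fitted by least squares to a synthetic census series; the data and the AR(1) choice are illustrative only (the study's best models were richer seasonal ARIMA and regression models):

```python
import math

def fit_ar1(series):
    # Least-squares fit of x[t] = c + phi * x[t-1].
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    phi = (sum((a - mx) * (b - my) for a, b in zip(x, y))
           / sum((a - mx) ** 2 for a in x))
    return my - phi * mx, phi

# Synthetic daily census: slight upward trend plus a weekly cycle.
census = [20.0 + 0.05 * t + 2.0 * math.sin(2 * math.pi * t / 7)
          for t in range(130)]
train, valid = census[:100], census[99:]  # overlap one point for the lag

c, phi = fit_ar1(train)
# One-step-ahead forecasts: each day predicted from the previous actual.
ar_forecasts = [c + phi * prev for prev in valid[:-1]]
baseline = sum(train) / len(train)  # fixed average census approach

mae_ar = sum(abs(f - a) for f, a in zip(ar_forecasts, valid[1:])) / len(ar_forecasts)
mae_fixed = sum(abs(baseline - a) for a in valid[1:]) / len(valid[1:])
improvement = 100.0 * (1.0 - mae_ar / mae_fixed)
```

On a series with even a mild trend, the one-step AR forecasts track the current level while the fixed average lags behind it, which is the source of the reported accuracy improvement.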

  3. Comparing and combining process-based crop models and statistical models with some implications for climate change

    NASA Astrophysics Data System (ADS)

    Roberts, Michael J.; Braun, Noah O.; Sinclair, Thomas R.; Lobell, David B.; Schlenker, Wolfram

    2017-09-01

    We compare predictions of a simple process-based crop model (Soltani and Sinclair 2012), a simple statistical model (Schlenker and Roberts 2009), and a combination of both models to actual maize yields on a large, representative sample of farmer-managed fields in the Corn Belt region of the United States. After statistical post-model calibration, the process model (Simple Simulation Model, or SSM) predicts actual outcomes slightly better than the statistical model, but the combined model performs significantly better than either model. The SSM, statistical model and combined model all show similar relationships with precipitation, while the SSM better accounts for temporal patterns of precipitation, vapor pressure deficit and solar radiation. The statistical and combined models show a more negative impact associated with extreme heat for which the process model does not account. Due to the extreme heat effect, predicted impacts under uniform climate change scenarios are considerably more severe for the statistical and combined models than for the process-based model.
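
    One way to combine a process-based and a statistical prediction (not necessarily the authors' exact method) is to regress observed yields on the two model outputs. This sketch fits least-squares combination weights on invented data:

```python
def combine_predictions(p1, p2, y):
    """Least-squares weights (w1, w2) minimizing sum((w1*p1 + w2*p2 - y)^2)."""
    a = sum(v * v for v in p1)
    b = sum(u * v for u, v in zip(p1, p2))
    d = sum(v * v for v in p2)
    e = sum(u * v for u, v in zip(p1, y))
    f = sum(u * v for u, v in zip(p2, y))
    det = a * d - b * b          # normal-equations determinant
    return (e * d - f * b) / det, (a * f - b * e) / det

def mse(pred, y):
    return sum((p - v) ** 2 for p, v in zip(pred, y)) / len(y)

# Hypothetical yields and two imperfect model predictions.
y  = [150.0, 160.0, 140.0, 170.0, 155.0, 165.0]
p1 = [148.0, 158.0, 145.0, 166.0, 150.0, 168.0]   # "process model"
p2 = [155.0, 157.0, 138.0, 175.0, 158.0, 160.0]   # "statistical model"

w1, w2 = combine_predictions(p1, p2, y)
combined = [w1 * u + w2 * v for u, v in zip(p1, p2)]
```

By construction, the in-sample error of the combined predictor is never worse than either input alone, since the weight pairs (1, 0) and (0, 1) are always available to the fit.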

  4. Development and Validation of Linear Alternator Models for the Advanced Stirling Convertor

    NASA Technical Reports Server (NTRS)

    Metscher, Jonathan F.; Lewandowski, Edward

    2014-01-01

    Two models of the linear alternator of the Advanced Stirling Convertor (ASC) have been developed using the Sage 1-D modeling software package. The first model relates the piston motion to electric current by means of a motor constant. The second uses electromagnetic model components to model the magnetic circuit of the alternator. The models are tuned and validated using test data and compared against each other. Results show both models can be tuned to achieve results within 7% of ASC test data under normal operating conditions. Using Sage enables the creation of a complete ASC model to be developed and simulations completed quickly compared to more complex multi-dimensional models. These models allow for better insight into overall Stirling convertor performance, aid with Stirling power system modeling, and in the future support NASA mission planning for Stirling-based power systems.

  5. Development and Validation of Linear Alternator Models for the Advanced Stirling Convertor

    NASA Technical Reports Server (NTRS)

    Metscher, Jonathan F.; Lewandowski, Edward J.

    2015-01-01

    Two models of the linear alternator of the Advanced Stirling Convertor (ASC) have been developed using the Sage 1-D modeling software package. The first model relates the piston motion to electric current by means of a motor constant. The second uses electromagnetic model components to model the magnetic circuit of the alternator. The models are tuned and validated using test data and also compared against each other. Results show both models can be tuned to achieve results within 7% of ASC test data under normal operating conditions. Using Sage enables the creation of a complete ASC model to be developed and simulations completed quickly compared to more complex multi-dimensional models. These models allow for better insight into overall Stirling convertor performance, aid with Stirling power system modeling, and in the future support NASA mission planning for Stirling-based power systems.
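
    The motor-constant model described above reduces the alternator to a single coefficient K, with back-EMF = K × piston velocity and force = K × current. A sketch of estimating such a constant from test data (the signal values are invented; this is not the Sage model itself):

```python
def estimate_motor_constant(velocity, emf):
    """Least-squares slope through the origin: emf ≈ K * velocity."""
    num = sum(v * e for v, e in zip(velocity, emf))
    den = sum(v * v for v in velocity)
    return num / den

# Synthetic piston velocity (m/s) and measured back-EMF (V) samples,
# generated around a nominal constant of 50 V per m/s with noise.
velocity = [0.1, 0.2, 0.3, 0.4, 0.5]
emf = [5.02, 9.98, 15.05, 19.96, 25.01]

K = estimate_motor_constant(velocity, emf)
# The same constant maps current to force: F = K * I (here for 0.5 A, 1 A).
predicted_force = [K * i for i in [0.5, 1.0]]
```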

  6. Effective-Medium Models for Marine Gas Hydrates, Mallik Revisited

    NASA Astrophysics Data System (ADS)

    Terry, D. A.; Knapp, C. C.; Knapp, J. H.

    2011-12-01

    Hertz-Mindlin type effective-medium dry-rock elastic models have been commonly used for more than three decades in rock physics analysis, and recently have been applied to the assessment of marine gas hydrate resources. Comparisons of several effective-medium models with derivative well-log data from the Mackenzie River Valley, Northwest Territories, Canada (i.e., Mallik 2L-38 and 5L-38) were made several years ago as part of a marine gas hydrate joint industry project in the Gulf of Mexico. The matrix/grain-supporting model (one of the five models compared) was clearly a better representation of the Mallik data than the other four models (two cemented-sand models, a pore-filling model, and an inclusion model). Even so, reservations were noted that the compressional velocity of the model was higher than the compressional velocity measured via the sonic logs, and that the shear velocities showed an even greater discrepancy. Over more than thirty years, variations of Hertz-Mindlin type effective-medium models have evolved for unconsolidated sediments, and here we briefly review their development. In the past few years, the perfectly smooth grain version of the Hertz-Mindlin type effective-medium model has been favored over the infinitely rough grain version compared in the Gulf of Mexico study. We revisit the data from the Mallik wells to review assertions that effective-medium models with perfectly smooth grains are a better predictor than models with infinitely rough grains. We briefly review three Hertz-Mindlin type effective-medium models and standardize nomenclature and notation. To calibrate the extended effective-medium model in gas hydrates, we use a well-accepted framework for unconsolidated sediments through Hashin-Shtrikman bounds. We implement the previously discussed effective-medium models for saturated sediments with gas hydrates and compute theoretical curves of seismic velocities versus gas hydrate saturation to compare with well-log data available from the Canadian gas hydrates research site. By directly comparing the infinitely rough and perfectly smooth grain versions of the Hertz-Mindlin type effective-medium model, we provide additional insight into the discrepancies noted in the Gulf of Mexico study.
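
    For readers unfamiliar with the two grain-contact assumptions, the standard Hertz-Mindlin dry-rock moduli at critical porosity can be sketched as follows, with the smooth-grain branch taken as the frictionless (Walton smooth) limit. The formulas follow common rock-physics references, and the quartz-like inputs are illustrative only:

```python
import math

def hertz_mindlin(phi, C, mu, nu, P, smooth=False):
    """Dry-rock bulk and shear moduli (Pa) for a random sphere pack.

    phi: porosity, C: coordination number, mu: grain shear modulus (Pa),
    nu: grain Poisson ratio, P: effective pressure (Pa).
    smooth=True uses the frictionless (Walton smooth) shear limit.
    """
    A = (C ** 2 * (1 - phi) ** 2 * mu ** 2 * P) / (math.pi ** 2 * (1 - nu) ** 2)
    K = (A / 18.0) ** (1.0 / 3.0)
    if smooth:
        G = 0.6 * K  # frictionless grains carry no tangential contact load
    else:
        G = (5 - 4 * nu) / (5 * (2 - nu)) * (1.5 * A) ** (1.0 / 3.0)
    return K, G

# Quartz-like grains at 10 MPa effective pressure (illustrative values).
K_r, G_r = hertz_mindlin(0.36, 8.5, 45e9, 0.06, 10e6, smooth=False)
K_s, G_s = hertz_mindlin(0.36, 8.5, 45e9, 0.06, 10e6, smooth=True)
```

Smooth grains leave the bulk modulus unchanged but lower the shear modulus, hence lower predicted shear velocities; this is the direction of the shear-velocity discrepancy noted at Mallik.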

  7. An evaluation of the efficacy of video displays for use with chimpanzees (Pan troglodytes).

    PubMed

    Hopper, Lydia M; Lambeth, Susan P; Schapiro, Steven J

    2012-05-01

    Video displays for behavioral research lend themselves particularly well to studies with chimpanzees (Pan troglodytes), as their vision is comparable to humans', yet there has been no formal test of the efficacy of video displays as a form of social information for chimpanzees. To address this, we compared the learning success of chimpanzees shown video footage of a conspecific compared to chimpanzees shown a live conspecific performing the same novel task. Footage of an unfamiliar chimpanzee operating a bidirectional apparatus was presented to 24 chimpanzees (12 males, 12 females), and their responses were compared to those of a further 12 chimpanzees given the same task but with no form of information. Secondly, we also compared the responses of the chimpanzees in the video display condition to responses of eight chimpanzees from a previously published study of ours, in which chimpanzees observed live models. Chimpanzees shown a video display were more successful than those in the control condition and showed comparable success to those that saw a live model. Regarding fine-grained copying (i.e. the direction that the door was pushed), only chimpanzees that observed a live model showed significant matching to the model's methods with their first response. Yet, when all the responses made by the chimpanzees were considered, comparable levels of matching were shown by chimpanzees in both the live and video conditions. © 2012 Wiley Periodicals, Inc.

  8. The comparative hydrodynamics of rapid rotation by predatory appendages.

    PubMed

    McHenry, M J; Anderson, P S L; Van Wassenbergh, S; Matthews, D G; Summers, A P; Patek, S N

    2016-11-01

    Countless aquatic animals rotate appendages through the water, yet fluid forces are typically modeled with translational motion. To elucidate the hydrodynamics of rotation, we analyzed the raptorial appendages of mantis shrimp (Stomatopoda) using a combination of flume experiments, mathematical modeling and phylogenetic comparative analyses. We found that computationally efficient blade-element models offered an accurate first-order approximation of drag, when compared with a more elaborate computational fluid-dynamic model. Taking advantage of this efficiency, we compared the hydrodynamics of the raptorial appendage in different species, including a newly measured spearing species, Coronis scolopendra. The ultrafast appendages of a smasher species (Odontodactylus scyllarus) were an order of magnitude smaller, yet experienced values of drag-induced torque similar to those of a spearing species (Lysiosquillina maculata). The dactyl, a stabbing segment that can be opened at the distal end of the appendage, generated substantial additional drag in the smasher, but not in the spearer, which uses the segment to capture evasive prey. Phylogenetic comparative analyses revealed that larger mantis shrimp species strike more slowly, regardless of whether they smash or spear their prey. In summary, drag was minimally affected by shape, whereas size, speed and dactyl orientation dominated and differentiated the hydrodynamic forces across species and sizes. This study demonstrates the utility of simple mathematical modeling for comparative analyses and illustrates the multi-faceted consequences of drag during the evolutionary diversification of rotating appendages. © 2016. Published by The Company of Biologists Ltd.
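
    The blade-element idea is to slice the rotating appendage into strips, each seeing local speed ωr, and integrate the resulting drag torque along the length. A minimal sketch with a uniform chord and drag coefficient (simplifying assumptions for illustration, not the study's measured geometry):

```python
def blade_element_torque(omega, length, chord, Cd=1.0, rho=1000.0, n=1000):
    """Drag-induced torque (N*m) on a flat appendage rotating at omega (rad/s).

    Each strip at radius r sees speed omega*r, contributing drag
    dF = 0.5*rho*Cd*chord*(omega*r)^2 * dr and torque r*dF.
    Integrated numerically by the midpoint rule over n strips.
    """
    dr = length / n
    torque = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        torque += 0.5 * rho * Cd * chord * (omega * r) ** 2 * r * dr
    return torque

# Illustrative appendage: 2 cm long, 5 mm chord, rotating at 10 rad/s in water.
t_slow = blade_element_torque(10.0, 0.02, 0.005)
t_fast = blade_element_torque(20.0, 0.02, 0.005)
```

The ω² scaling of torque falls directly out of the quadratic drag law, consistent with the finding that size and speed, rather than shape, dominate the hydrodynamic differences across species.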

  9. Planetary Boundary Layer Simulation Using TASS

    NASA Technical Reports Server (NTRS)

    Schowalter, David G.; DeCroix, David S.; Lin, Yuh-Lang; Arya, S. Pal; Kaplan, Michael

    1996-01-01

    Boundary conditions to an existing large-eddy simulation model have been changed in order to simulate turbulence in the atmospheric boundary layer. Several options are now available, including the use of a surface energy balance. In addition, we compare convective boundary layer simulations with the Wangara and Minnesota field experiments as well as with other model results. We find excellent agreement of modelled mean profiles of wind and temperature with observations and good agreement for velocity variances. Neutral boundary layer simulation results are compared with theory and with previously used models. Agreement with theory is reasonable, while agreement with previous models is excellent.

  10. Evaluation of the satellite derived snow cover area - Runoff forecasting models for the inaccessible basins of western Himalayas

    NASA Technical Reports Server (NTRS)

    Dey, B.

    1985-01-01

    In this study, the existing seasonal snow cover area runoff forecasting models of the Indus, Kabul, Sutlej and Chenab basins were evaluated against a concurrent flow correlation model for the period 1975-79. In all the basins under study, the concurrent flow correlation model explained the variability in flow better than the snow cover area runoff models did; in fact, it explained more than 90 percent of the variability in the flow of these rivers, whereas the snow cover area runoff models explained less. In the Himalayan river basins under study, and at least for the period under observation, the concurrent flow correlation model provided a set of results against which to compare the estimates from the snow cover area runoff models.

  11. Hierarchical Multi-Scale Approach To Validation and Uncertainty Quantification of Hyper-Spectral Image Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engel, David W.; Reichardt, Thomas A.; Kulp, Thomas J.

    Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.

  12. Collaborative Modeling: Experience of the U.S. Preventive Services Task Force.

    PubMed

    Petitti, Diana B; Lin, Jennifer S; Owens, Douglas K; Croswell, Jennifer M; Feuer, Eric J

    2018-01-01

    Models can be valuable tools to address uncertainty, trade-offs, and preferences when trying to understand the effects of interventions. Availability of results from two or more independently developed models that examine the same question (comparative modeling) allows systematic exploration of differences between models and the effect of these differences on model findings. Guideline groups sometimes commission comparative modeling to support their recommendation process. In this commissioned collaborative modeling, modelers work with the people who are developing a recommendation or policy not only to define the questions to be addressed but also, ideally, to work side-by-side with each other and with systematic reviewers to standardize selected inputs and incorporate selected common assumptions. This paper describes the use of commissioned collaborative modeling by the U.S. Preventive Services Task Force (USPSTF), highlighting the general challenges and opportunities encountered and specific challenges for some topics. It delineates other approaches to using modeling to support evidence-based recommendations and the many strengths of collaborative modeling compared with other approaches. Unlike systematic reviews prepared for the USPSTF, the commissioned collaborative modeling reports used by the USPSTF in making recommendations about screening have not been required to follow a common format, sometimes making it challenging to understand key model features. This paper presents a checklist developed to critically appraise commissioned collaborative modeling reports about cancer screening topics prepared for the USPSTF. Copyright © 2017 American Journal of Preventive Medicine. All rights reserved.

  13. A Comparative Evaluation of Mixed Dentition Analysis on Reliability of Cone Beam Computed Tomography Image Compared to Plaster Model.

    PubMed

    Gowd, Snigdha; Shankar, T; Dash, Samarendra; Sahoo, Nivedita; Chatterjee, Suravi; Mohanty, Pritam

    2017-01-01

    The aim of the study was to evaluate the reliability of cone beam computed tomography (CBCT) derived images compared with plaster models for the assessment of mixed dentition analysis. Thirty CBCT-derived images and thirty plaster models were retrieved from the dental archives, and Moyer's and Tanaka-Johnston analyses were performed. The data obtained were interpreted and analyzed statistically using SPSS 10.0/PC (SPSS Inc., Chicago, IL, USA). Descriptive and analytical analysis along with Student's t-test was performed to qualitatively evaluate the data, and P < 0.05 was considered statistically significant. Statistically significant results were obtained on comparison between CBCT-derived images and plaster models; the mean for Moyer's analysis in the left and right lower arch was 21.2 mm and 21.1 mm for CBCT versus 22.5 mm and 22.5 mm for the plaster model, respectively. CBCT-derived images were less reliable than data obtained directly from plaster models for mixed dentition analysis.
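
    The Tanaka-Johnston analysis referenced above is a simple linear rule: half the summed mesiodistal widths of the four mandibular incisors plus a constant (10.5 mm for the mandibular arch, 11.0 mm for the maxillary). A sketch:

```python
def tanaka_johnston(incisor_sum_mm, arch="mandibular"):
    """Predicted combined width (mm) of the canine and premolars per quadrant.

    incisor_sum_mm: summed mesiodistal widths of the four mandibular incisors.
    arch: "mandibular" (constant 10.5 mm) or "maxillary" (constant 11.0 mm).
    """
    constant = 10.5 if arch == "mandibular" else 11.0
    return incisor_sum_mm / 2.0 + constant
```

In a comparison like the one above, the analysis is simply run twice per arch, once on widths measured from the CBCT image and once on widths measured from the plaster model, and the resulting predictions are compared.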

  14. Lightweight ZERODUR: Validation of Mirror Performance and Mirror Modeling Predictions

    NASA Technical Reports Server (NTRS)

    Hull, Tony; Stahl, H. Philip; Westerhoff, Thomas; Valente, Martin; Brooks, Thomas; Eng, Ron

    2017-01-01

    Upcoming spaceborne missions, both moderate and large in scale, require extreme dimensional stability while relying both upon established lightweight mirror materials, and also upon accurate modeling methods to predict performance under varying boundary conditions. We describe tests, recently performed at NASA's XRCF chambers and laboratories in Huntsville, Alabama, during which a 1.2 m diameter, f/1.29, 88% lightweighted SCHOTT ZERODUR® mirror was tested for thermal stability under static loads in steps down to 230K. Test results are compared to model predictions, based upon recently published data on ZERODUR®. In addition to monitoring the mirror surface for thermal perturbations in XRCF thermal vacuum tests, static load gravity deformations have been measured and compared to model predictions. Also the modal response (dynamic disturbance) was measured and compared to model. We will discuss the fabrication approach and optomechanical design of the ZERODUR® mirror substrate by SCHOTT, its optical preparation for test by Arizona Optical Systems (AOS), and summarize the outcome of NASA's XRCF tests and model validations.

  15. Epidemiological characteristics of reported sporadic and outbreak cases of E. coli O157 in people from Alberta, Canada (2000-2002): methodological challenges of comparing clustered to unclustered data.

    PubMed

    Pearl, D L; Louie, M; Chui, L; Doré, K; Grimsrud, K M; Martin, S W; Michel, P; Svenson, L W; McEwen, S A

    2008-04-01

    Using multivariable models, we compared whether there were significant differences between reported outbreak and sporadic cases in terms of their sex, age, and mode and site of disease transmission. We also determined the potential role of administrative, temporal, and spatial factors within these models. We compared a variety of approaches to account for clustering of cases in outbreaks including weighted logistic regression, random effects models, general estimating equations, robust variance estimates, and the random selection of one case from each outbreak. Age and mode of transmission were the only epidemiologically and statistically significant covariates in our final models using the above approaches. Weighing observations in a logistic regression model by the inverse of their outbreak size appeared to be a relatively robust and valid means for modelling these data. Some analytical techniques, designed to account for clustering, had difficulty converging or producing realistic measures of association.
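
    The inverse-outbreak-size weighting that the authors found robust makes each outbreak, rather than each case, contribute one unit of weight to the model. A sketch of the weight construction on invented data:

```python
from collections import Counter

def inverse_cluster_weights(outbreak_ids):
    """Weight each case by 1 / (size of its outbreak).

    Sporadic cases get their own id and thus weight 1.0, so every cluster
    (including singletons) contributes total weight 1 to the regression.
    """
    sizes = Counter(outbreak_ids)
    return [1.0 / sizes[oid] for oid in outbreak_ids]

# Hypothetical data: outbreak A has 3 cases, B has 2, plus two sporadic cases.
ids = ["A", "A", "A", "B", "B", "s1", "s2"]
weights = inverse_cluster_weights(ids)
```

These weights can then be passed to any weighted logistic regression routine; the weighted log-likelihood simply multiplies each case's contribution by its weight.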

  16. Lightweight ZERODUR®: Validation of mirror performance and mirror modeling predictions

    NASA Astrophysics Data System (ADS)

    Hull, Anthony B.; Stahl, H. Philip; Westerhoff, Thomas; Valente, Martin; Brooks, Thomas; Eng, Ron

    2017-01-01

    Upcoming spaceborne missions, both moderate and large in scale, require extreme dimensional stability while relying both upon established lightweight mirror materials, and also upon accurate modeling methods to predict performance under varying boundary conditions. We describe tests, recently performed at NASA’s XRCF chambers and laboratories in Huntsville Alabama, during which a 1.2m diameter, f/1.29 88% lightweighted SCHOTT lightweighted ZERODUR® mirror was tested for thermal stability under static loads in steps down to 230K. Test results are compared to model predictions, based upon recently published data on ZERODUR®. In addition to monitoring the mirror surface for thermal perturbations in XRCF Thermal Vacuum tests, static load gravity deformations have been measured and compared to model predictions. Also the Modal Response (dynamic disturbance) was measured and compared to model. We will discuss the fabrication approach and optomechanical design of the ZERODUR® mirror substrate by SCHOTT, its optical preparation for test by Arizona Optical Systems (AOS), and summarize the outcome of NASA’s XRCF tests and model validations.

  17. Predicting Time to Hospital Discharge for Extremely Preterm Infants

    PubMed Central

    Hintz, Susan R.; Bann, Carla M.; Ambalavanan, Namasivayam; Cotten, C. Michael; Das, Abhik; Higgins, Rosemary D.

    2010-01-01

As extremely preterm infant mortality rates have decreased, concerns regarding resource utilization have intensified. Accurate models to predict time to hospital discharge could aid in resource planning, family counseling, and perhaps stimulate quality improvement initiatives. Objectives: For infants <27 weeks estimated gestational age (EGA), to develop, validate and compare several models to predict time to hospital discharge based on time-dependent covariates, and based on the presence of 5 key risk factors as predictors. Patients and Methods: This was a retrospective analysis of infants <27 weeks EGA, born 7/2002-12/2005 and surviving to discharge from a NICHD Neonatal Research Network site. Time to discharge was modeled as a continuous variable (postmenstrual age at discharge, PMAD) and as categorical variables ("Early" and "Late" discharge). Three linear and logistic regression models with time-dependent covariate inclusion were developed (perinatal factors only, perinatal + early neonatal factors, perinatal + early + later factors). Models for Early and Late discharge using the cumulative presence of 5 key risk factors as predictors were also evaluated. Predictive capabilities were compared using the coefficient of determination (R2) for linear models and the AUC of the ROC curve for logistic models. Results: Data from 2254 infants were included. Prediction of PMAD was poor, with only 38% of variation explained by linear models. However, models incorporating later clinical characteristics were more accurate in predicting "Early" or "Late" discharge (full models: AUC 0.76-0.83 vs. perinatal factor models: AUC 0.56-0.69). In simplified key risk factor models, predicted probabilities for Early and Late discharge compared favorably with observed rates. Furthermore, the AUCs (0.75-0.77) were similar to those of models including the full factor set.
Conclusions: Prediction of Early or Late discharge is poor if only perinatal factors are considered, but improves substantially with knowledge of later-occurring morbidities. Prediction using a few key risk factors is comparable to full models, and may offer a clinically applicable strategy. PMID:20008430
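The discrimination figures quoted above (AUC 0.56-0.83) are areas under the ROC curve. As a minimal illustration of how such a value is computed, the rank-based (Mann-Whitney) form of the AUC can be sketched in a few lines; the labels and probabilities below are hypothetical, not the study's data.

```python
def auc(labels, scores):
    """Rank-based AUC: fraction of (positive, negative) pairs
    in which the positive case receives the higher score,
    counting ties as half.  Equivalent to the area under the
    ROC curve (Mann-Whitney U / (n_pos * n_neg))."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 1 = "Late" discharge, scores = predicted probabilities.
y = [0, 0, 1, 1]
p = [0.1, 0.4, 0.35, 0.8]
print(auc(y, p))  # 0.75
```

An AUC of 0.5 corresponds to chance-level discrimination, which is why the perinatal-only models (AUC as low as 0.56) are described as poor.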

  18. A computational approach to compare regression modelling strategies in prediction research.

    PubMed

    Pajouheshnia, Romin; Pestman, Wiebe R; Teerenstra, Steven; Groenwold, Rolf H H

    2016-08-25

    It is often unclear which approach to fit, assess and adjust a model will yield the most accurate prediction model. We present an extension of an approach for comparing modelling strategies in linear regression to the setting of logistic regression and demonstrate its application in clinical prediction research. A framework for comparing logistic regression modelling strategies by their likelihoods was formulated using a wrapper approach. Five different strategies for modelling, including simple shrinkage methods, were compared in four empirical data sets to illustrate the concept of a priori strategy comparison. Simulations were performed in both randomly generated data and empirical data to investigate the influence of data characteristics on strategy performance. We applied the comparison framework in a case study setting. Optimal strategies were selected based on the results of a priori comparisons in a clinical data set and the performance of models built according to each strategy was assessed using the Brier score and calibration plots. The performance of modelling strategies was highly dependent on the characteristics of the development data in both linear and logistic regression settings. A priori comparisons in four empirical data sets found that no strategy consistently outperformed the others. The percentage of times that a model adjustment strategy outperformed a logistic model ranged from 3.9 to 94.9 %, depending on the strategy and data set. However, in our case study setting the a priori selection of optimal methods did not result in detectable improvement in model performance when assessed in an external data set. The performance of prediction modelling strategies is a data-dependent process and can be highly variable between data sets within the same clinical domain. A priori strategy comparison can be used to determine an optimal logistic regression modelling strategy for a given data set before selecting a final modelling approach.
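The case study assesses the final models with the Brier score, which has a compact definition: the mean squared difference between predicted probability and observed binary outcome. A minimal sketch (the outcome and probability values are hypothetical, not the paper's data):

```python
def brier_score(y_true, y_prob):
    """Mean squared difference between predicted probabilities
    and observed 0/1 outcomes; lower is better, and a constant
    0.5 prediction scores 0.25 regardless of the outcomes."""
    return sum((p - y) ** 2 for y, p in zip(y_true, y_prob)) / len(y_true)

# Hypothetical predictions for four patients:
print(brier_score([1, 0, 1, 0], [0.9, 0.1, 0.8, 0.3]))  # ~0.0375
```

Because the Brier score mixes calibration and discrimination, the paper pairs it with calibration plots rather than relying on it alone.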

  19. Modeling of Nitrogen Oxides Emissions from CFB Combustion

    NASA Astrophysics Data System (ADS)

    Kallio, S.; Keinonen, M.

    In this work, a simplified description of combustion and nitrogen oxides chemistry was implemented in a 1.5D model framework with the aim to compare the results with ones earlier obtained with a detailed reaction scheme. The simplified chemistry was written using 12 chemical components. Heterogeneous chemistry is given by the same models as in the earlier work but the homogeneous and catalytic reactions have been altered. The models have been taken from the literature. The paper describes the numerical model with emphasis on the chemistry submodels. A simulation of combustion of bituminous coal in the Chalmers 12 MW boiler is conducted and the results are compared with the results obtained earlier with the detailed chemistry description. The results are also compared with measured O2, CO, NO and N2O profiles. The simplified reaction scheme produces equally good results as earlier obtained with the more elaborate chemistry description.

  20. Comparison of the Utility of Two Assessments for Explaining and Predicting Productivity Change: Well-Being Versus an HRA.

    PubMed

    Gandy, William M; Coberley, Carter; Pope, James E; Rula, Elizabeth Y

    2016-01-01

To compare the utility of employee well-being with a health risk assessment (HRA) as predictors of productivity change. Panel data from 2189 employees who completed surveys 2 years apart were used in hierarchical models comparing the influence of well-being and health risk on longitudinal changes in presenteeism and job performance. Absenteeism change was evaluated in a nonexempt subsample. Change in well-being was the most significant independent predictor of productivity change across all three measures. Comparing hierarchical models, well-being models performed significantly better than HRA models. The HRA added no incremental explanatory power over well-being in combined models. Alone, the non-physical-health components of well-being outperformed the HRA for all productivity measures. Well-being offers a more comprehensive measure of factors that influence productivity and can be considered preferable to an HRA in understanding and addressing suboptimal productivity.

  1. Assessment of State-of-the-Art Dust Emission Scheme in GEOS

    NASA Technical Reports Server (NTRS)

    Darmenov, Anton; Liu, Xiaohong; Prigent, Catherine

    2017-01-01

The GEOS modeling system has been extended with a state-of-the-art parameterization of dust emissions based on the vertical flux formulation described in Kok et al. 2014. The new dust scheme was coupled with the GOCART and MAM aerosol models. In the present study we compare dust emissions, aerosol optical depth (AOD) and radiative fluxes from GEOS experiments with the standard and new dust emissions. AOD from the model experiments is also compared with AERONET and satellite-based data. Based on this comparative analysis we concluded that the new parameterization improves the GEOS capability to model dust aerosols originating from African sources; however, it leads to an overestimation of dust emissions from Asian and Arabian sources. Further regional tuning of key parameters controlling the threshold friction velocity may be required in order to achieve a more definitive and uniform improvement in dust modeling skill.

  2. Cafeteria diet is a robust model of human metabolic syndrome with liver and adipose inflammation: comparison to high-fat diet.

    PubMed

    Sampey, Brante P; Vanhoose, Amanda M; Winfield, Helena M; Freemerman, Alex J; Muehlbauer, Michael J; Fueger, Patrick T; Newgard, Christopher B; Makowski, Liza

    2011-06-01

    Obesity has reached epidemic proportions worldwide and reports estimate that American children consume up to 25% of calories from snacks. Several animal models of obesity exist, but studies are lacking that compare high-fat diets (HFD) traditionally used in rodent models of diet-induced obesity (DIO) to diets consisting of food regularly consumed by humans, including high-salt, high-fat, low-fiber, energy dense foods such as cookies, chips, and processed meats. To investigate the obesogenic and inflammatory consequences of a cafeteria diet (CAF) compared to a lard-based 45% HFD in rodent models, male Wistar rats were fed HFD, CAF or chow control diets for 15 weeks. Body weight increased dramatically and remained significantly elevated in CAF-fed rats compared to all other diets. Glucose- and insulin-tolerance tests revealed that hyperinsulinemia, hyperglycemia, and glucose intolerance were exaggerated in the CAF-fed rats compared to controls and HFD-fed rats. It is well-established that macrophages infiltrate metabolic tissues at the onset of weight gain and directly contribute to inflammation, insulin resistance, and obesity. Although both high fat diets resulted in increased adiposity and hepatosteatosis, CAF-fed rats displayed remarkable inflammation in white fat, brown fat and liver compared to HFD and controls. In sum, the CAF provided a robust model of human metabolic syndrome compared to traditional lard-based HFD, creating a phenotype of exaggerated obesity with glucose intolerance and inflammation. This model provides a unique platform to study the biochemical, genomic and physiological mechanisms of obesity and obesity-related disease states that are pandemic in western civilization today.

  3. Photochemical grid model performance with varying horizontal grid resolution and sub-grid plume treatment for the Martins Creek near-field SO2 study

    NASA Astrophysics Data System (ADS)

    Baker, Kirk R.; Hawkins, Andy; Kelly, James T.

    2014-12-01

    Near source modeling is needed to assess primary and secondary pollutant impacts from single sources and single source complexes. Source-receptor relationships need to be resolved from tens of meters to tens of kilometers. Dispersion models are typically applied for near-source primary pollutant impacts but lack complex photochemistry. Photochemical models provide a realistic chemical environment but are typically applied using grid cell sizes that may be larger than the distance between sources and receptors. It is important to understand the impacts of grid resolution and sub-grid plume treatments on photochemical modeling of near-source primary pollution gradients. Here, the CAMx photochemical grid model is applied using multiple grid resolutions and sub-grid plume treatment for SO2 and compared with a receptor mesonet largely impacted by nearby sources approximately 3-17 km away in a complex terrain environment. Measurements are compared with model estimates of SO2 at 4- and 1-km resolution, both with and without sub-grid plume treatment and inclusion of finer two-way grid nests. Annual average estimated SO2 mixing ratios are highest nearest the sources and decrease as distance from the sources increase. In general, CAMx estimates of SO2 do not compare well with the near-source observations when paired in space and time. Given the proximity of these sources and receptors, accuracy in wind vector estimation is critical for applications that pair pollutant predictions and observations in time and space. In typical permit applications, predictions and observations are not paired in time and space and the entire distributions of each are directly compared. Using this approach, model estimates using 1-km grid resolution best match the distribution of observations and are most comparable to similar studies that used dispersion and Lagrangian modeling systems. Model-estimated SO2 increases as grid cell size decreases from 4 km to 250 m. 
However, it is notable that the 1-km model estimates using 1-km meteorological model input are higher than the 1-km model simulation that used interpolated 4-km meteorology. The inclusion of sub-grid plume treatment did not improve model skill in predicting SO2 in time and space and generally acts to keep emitted mass aloft.

  4. A comparison of walk-in counselling and the wait list model for delivering counselling services.

    PubMed

    Stalker, Carol A; Riemer, Manuel; Cait, Cheryl-Anne; Horton, Susan; Booton, Jocelyn; Josling, Leslie; Bedggood, Joanna; Zaczek, Margaret

    2016-10-01

    Walk-in counselling has been used to reduce wait times but there are few controlled studies to compare outcomes between walk-in and the traditional model of service delivery. To compare change in psychological distress by clients receiving services from two models of service delivery, a walk-in counselling model and a traditional counselling model involving a wait list. Mixed-methods sequential explanatory design including quantitative comparison of groups with one pre-test and two follow-ups, and qualitative analysis of interviews with a sub-sample. Five-hundred and twenty-four participants ≥16 years were recruited from two Family Counselling Agencies; the General Health Questionnaire-12 assessed change in psychological distress. Hierarchical linear modelling revealed clients of the walk-in model improved faster and were less distressed at the four-week follow-up compared to the traditional service delivery model. Ten weeks later, both groups had improved and were similar. Participants receiving instrumental services prior to baseline improved more slowly. The qualitative data confirmed participants highly valued the accessibility of the walk-in model, and were frustrated by the lengthy waits associated with the traditional model. This study improves methodologically on previous studies of walk-in counselling, an approach to service delivery not conducive to randomized controlled trials.

  5. Quantitative assessment of AOD from 17 CMIP5 models based on satellite-derived AOD over India

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Misra, Amit; Kanawade, Vijay P.; Tripathi, Sachchida Nand

Aerosol optical depth (AOD) values from 17 CMIP5 models are compared with Moderate Resolution Imaging Spectroradiometer (MODIS) and Multiangle Imaging Spectroradiometer (MISR) derived AODs over India. The objective is to identify the cases of successful AOD simulation by CMIP5 models, considering satellite-derived AOD as a benchmark. Six years of AOD data (2000–2005) from MISR and MODIS are processed to create quality-assured gridded AOD maps over India, which are compared with corresponding maps of 17 CMIP5 models at the same grid resolution. Intercomparison of model and satellite data shows that model-AOD is better correlated with MISR-derived AOD than MODIS. The correlation between model-AOD and MISR-AOD is used to segregate the models into three categories identifying their performance in simulating the AOD over India. Maps of correlation between model-AOD and MISR-/MODIS-AOD are generated to provide quantitative information about the intercomparison. The two sets of data are examined for different seasons and years to examine the seasonal and interannual variation in the correlation coefficients. In conclusion, latitudinal and longitudinal variations in AOD as simulated by models are also examined and compared with corresponding variations observed by satellites.

  6. Quantitative assessment of AOD from 17 CMIP5 models based on satellite-derived AOD over India

    DOE PAGES

    Misra, Amit; Kanawade, Vijay P.; Tripathi, Sachchida Nand

    2016-08-03

Aerosol optical depth (AOD) values from 17 CMIP5 models are compared with Moderate Resolution Imaging Spectroradiometer (MODIS) and Multiangle Imaging Spectroradiometer (MISR) derived AODs over India. The objective is to identify the cases of successful AOD simulation by CMIP5 models, considering satellite-derived AOD as a benchmark. Six years of AOD data (2000–2005) from MISR and MODIS are processed to create quality-assured gridded AOD maps over India, which are compared with corresponding maps of 17 CMIP5 models at the same grid resolution. Intercomparison of model and satellite data shows that model-AOD is better correlated with MISR-derived AOD than MODIS. The correlation between model-AOD and MISR-AOD is used to segregate the models into three categories identifying their performance in simulating the AOD over India. Maps of correlation between model-AOD and MISR-/MODIS-AOD are generated to provide quantitative information about the intercomparison. The two sets of data are examined for different seasons and years to examine the seasonal and interannual variation in the correlation coefficients. In conclusion, latitudinal and longitudinal variations in AOD as simulated by models are also examined and compared with corresponding variations observed by satellites.
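The segregation of models into performance categories rests on correlating model AOD with satellite AOD over matched grid cells. As a hedged sketch of that core step, a plain Pearson correlation over two flattened, co-located grids (the AOD values below are invented for illustration, not MISR or CMIP5 data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two co-located AOD grids,
    flattened to 1-D with missing cells already removed."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical seasonal-mean AOD at five grid cells:
model_aod = [0.20, 0.35, 0.30, 0.45, 0.50]
misr_aod  = [0.22, 0.30, 0.33, 0.40, 0.55]
print(round(pearson_r(model_aod, misr_aod), 3))  # strong positive correlation
```

Repeating this per season and per year, as the study does, is what exposes the seasonal and interannual variation in the correlation coefficients.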

  7. Beyond the scope of Free-Wilson analysis: building interpretable QSAR models with machine learning algorithms.

    PubMed

    Chen, Hongming; Carlsson, Lars; Eriksson, Mats; Varkonyi, Peter; Norinder, Ulf; Nilsson, Ingemar

    2013-06-24

    A novel methodology was developed to build Free-Wilson like local QSAR models by combining R-group signatures and the SVM algorithm. Unlike Free-Wilson analysis this method is able to make predictions for compounds with R-groups not present in a training set. Eleven public data sets were chosen as test cases for comparing the performance of our new method with several other traditional modeling strategies, including Free-Wilson analysis. Our results show that the R-group signature SVM models achieve better prediction accuracy compared with Free-Wilson analysis in general. Moreover, the predictions of R-group signature models are also comparable to the models using ECFP6 fingerprints and signatures for the whole compound. Most importantly, R-group contributions to the SVM model can be obtained by calculating the gradient for R-group signatures. For most of the studied data sets, a significant correlation with that of a corresponding Free-Wilson analysis is shown. These results suggest that the R-group contribution can be used to interpret bioactivity data and highlight that the R-group signature based SVM modeling method is as interpretable as Free-Wilson analysis. Hence the signature SVM model can be a useful modeling tool for any drug discovery project.

  8. Large Eddy Simulation of Wall-Bounded Turbulent Flows with the Lattice Boltzmann Method: Effect of Collision Model, SGS Model and Grid Resolution

    NASA Astrophysics Data System (ADS)

    Pradhan, Aniruddhe; Akhavan, Rayhaneh

    2017-11-01

Effect of collision model, subgrid-scale model and grid resolution in Large Eddy Simulation (LES) of wall-bounded turbulent flows with the Lattice Boltzmann Method (LBM) is investigated in turbulent channel flow. The Single Relaxation Time (SRT) collision model is found to be more accurate than the Multi-Relaxation Time (MRT) collision model in well-resolved LES. Accurate LES requires grid resolutions of Δ+ <= 4 in the near-wall region, which is comparable to the Δ+ <= 2 required in DNS. At coarser grid resolutions SRT becomes unstable, while MRT remains stable but gives unacceptably large errors. LES with no model gave errors comparable to the Dynamic Smagorinsky Model (DSM) and the Wall-Adapting Local Eddy-viscosity (WALE) model. The resulting errors in the prediction of the friction coefficient in turbulent channel flow at a bulk Reynolds number of 7860 (Reτ = 442) with Δ+ = 4 and no model, DSM and WALE were 1.7%, 2.6%, 3.1% with SRT, and 8.3%, 7.5%, 8.7% with MRT, respectively. These results suggest that LES of wall-bounded turbulent flows with LBM requires either grid-embedding in the near-wall region, with grid resolutions comparable to DNS, or a wall model. Results of LES with grid-embedding and wall models will be discussed.

  9. Accuracy of Bolton analysis measured in laser scanned digital models compared with plaster models (gold standard) and cone-beam computer tomography images.

    PubMed

    Kim, Jooseong; Lagravére, Manuel O

    2016-01-01

    The aim of this study was to compare the accuracy of Bolton analysis obtained from digital models scanned with the Ortho Insight three-dimensional (3D) laser scanner system to those obtained from cone-beam computed tomography (CBCT) images and traditional plaster models. CBCT scans and plaster models were obtained from 50 patients. Plaster models were scanned using the Ortho Insight 3D laser scanner; Bolton ratios were calculated with its software. CBCT scans were imported and analyzed using AVIZO software. Plaster models were measured with a digital caliper. Data were analyzed with descriptive statistics and the intraclass correlation coefficient (ICC). Anterior and overall Bolton ratios obtained by the three different modalities exhibited excellent agreement (> 0.970). The mean differences between the scanned digital models and physical models and between the CBCT images and scanned digital models for overall Bolton ratios were 0.41 ± 0.305% and 0.45 ± 0.456%, respectively; for anterior Bolton ratios, 0.59 ± 0.520% and 1.01 ± 0.780%, respectively. ICC results showed that intraexaminer error reliability was generally excellent (> 0.858 for all three diagnostic modalities), with < 1.45% discrepancy in the Bolton analysis. Laser scanned digital models are highly accurate compared to physical models and CBCT scans for assessing the spatial relationships of dental arches for orthodontic diagnosis.
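For reference, the Bolton ratios being compared across the three modalities are simple sums of mesiodistal tooth widths. A sketch of the standard overall (12-tooth) and anterior (6-tooth) ratios follows; the widths used are illustrative, not measurements from the study.

```python
def bolton_ratios(maxillary_widths, mandibular_widths):
    """Overall (12-tooth) and anterior (6-tooth) Bolton ratios.

    Inputs are mesiodistal widths in mm, first molar to first molar,
    ordered so the first six entries are the anterior teeth (canine
    to canine).  Classic Bolton norms: overall ~91.3%, anterior ~77.2%.
    """
    overall = 100.0 * sum(mandibular_widths) / sum(maxillary_widths)
    anterior = 100.0 * sum(mandibular_widths[:6]) / sum(maxillary_widths[:6])
    return overall, anterior

# Illustrative widths (mm):
maxilla  = [8.5, 6.5, 7.8, 7.8, 8.7, 8.7, 7.0, 7.0, 10.0, 10.0, 9.0, 9.0]
mandible = [5.5, 6.0, 6.0, 6.5, 6.5, 6.5, 7.0, 7.0, 11.0, 11.0, 9.15, 9.15]
overall, anterior = bolton_ratios(maxilla, mandible)
print(round(overall, 1), round(anterior, 1))  # 91.3 77.1
```

Sub-percent mean differences between modalities, as reported above, therefore correspond to sub-millimetre disagreement in the summed widths.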

  10. New segregation analysis of panic disorder

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vieland, V.J.; Fyer, A.J.; Chapman, T.

    1996-04-09

We performed simple segregation analyses of panic disorder using 126 families of probands with DSM-III-R panic disorder who were ascertained for a family study of anxiety disorders at an anxiety disorders research clinic. We present parameter estimates for dominant, recessive, and arbitrary single major locus models without sex effects, as well as for a nongenetic transmission model, and compare these models to each other and to models obtained by other investigators. We rejected the nongenetic transmission model when comparing it to the recessive model. Consistent with some previous reports, we find comparable support for dominant and recessive models, and in both cases estimate nonzero phenocopy rates. The effect of restricting the analysis to families of probands without any lifetime history of comorbid major depression (MDD) was also examined. No notable differences in parameter estimates were found in that subsample, although the power of that analysis was low. Consistency between the findings in our sample and in another independently collected sample suggests the possibility of pooling such samples in the future in order to achieve the necessary power for more complex analyses. 32 refs., 4 tabs.

  11. Thermal injury models for optical treatment of biological tissues: a comparative study.

    PubMed

    Fanjul-Velez, Felix; Ortega-Quijano, Noe; Salas-Garcia, Irene; Arce-Diego, Jose L

    2010-01-01

The interaction of optical radiation with biological tissues causes an increase in temperature that, depending on its magnitude, can provoke a thermal injury process in the tissue. Establishing pathological limits for laser irradiation is an essential task, since it makes it possible to fix and delimit a range of parameters that ensures a safe treatment in laser therapies. These limits can be appropriately described by kinetic models of the damage processes. In this work, we present and compare several models for the study of thermal injury in biological tissues under optical illumination, particularly the Arrhenius thermal damage model and the thermal dosimetry model based on CEM43 (cumulative equivalent minutes at 43°C). The basic concepts that link temperature and exposure time with tissue injury or cellular death are presented, and it is shown that they make it possible to establish predictive models for thermal damage in laser therapies. The results obtained by both models are compared and discussed, highlighting the main advantages of each one and proposing the most adequate one for optical treatment of biological tissues.
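The two damage models being compared have compact standard forms: the Arrhenius model accumulates a damage integral Ω = ∫ A·exp(−Ea/RT) dt, with Ω = 1 the usual threshold for irreversible injury, while CEM43 converts a temperature history into equivalent minutes at 43°C. A sketch using commonly cited Henriques-type skin coefficients (representative textbook values, not parameters from this paper):

```python
import math

R_GAS = 8.314  # J mol^-1 K^-1

def arrhenius_damage(temps_c, dt_s, A=3.1e98, Ea=6.28e5):
    """Discretized Arrhenius damage integral:
    Omega = sum of A * exp(-Ea / (R * T)) * dt over the history.
    A (1/s) and Ea (J/mol) here are Henriques-type skin values."""
    return sum(A * math.exp(-Ea / (R_GAS * (t + 273.15))) * dt_s
               for t in temps_c)

def cem43(temps_c, dt_min):
    """Cumulative equivalent minutes at 43 C, using the usual
    convention R = 0.5 above the 43 C breakpoint, 0.25 below it."""
    total = 0.0
    for t in temps_c:
        r = 0.5 if t >= 43.0 else 0.25
        total += dt_min * r ** (43.0 - t)
    return total

# Ten minutes held exactly at 43 C is, by construction, 10 CEM43 minutes:
print(cem43([43.0] * 10, 1.0))
```

The contrast in form is the point of the comparison: Arrhenius expresses damage as a rate process with activation energy, whereas CEM43 normalizes any exposure to a single reference temperature.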

  12. A Comparative Study: Completion of Fine Motor Office Related Tasks by High School Students with Autism Using Video Models on Large and Small Screen Sizes

    ERIC Educational Resources Information Center

    Mechling, Linda C.; Ayres, Kevin M.

    2012-01-01

    The purpose of this investigation was to compare fine motor task completion when using video models presented on a smaller screen size (Personal Digital Assistant) compared to a larger laptop screen size. The investigation included four high school students with autism spectrum disorders and mild to moderate intellectual disabilities and used an…

  13. The organisation of critical care for burn patients in the UK: epidemiology and comparison of mortality prediction models.

    PubMed

    Toft-Petersen, A P; Ferrando-Vivas, P; Harrison, D A; Dunn, K; Rowan, K M

    2018-05-15

    In the UK, a network of specialist centres has been set up to provide critical care for burn patients. However, some burn patients are admitted to general intensive care units. Little is known about the casemix of these patients and how it compares with patients in specialist burn centres. It is not known whether burn-specific or generic risk prediction models perform better when applied to patients managed in intensive care units. We examined admissions for burns in the Case Mix Programme Database from April 2010 to March 2016. The casemix, activity and outcome in general and specialist burn intensive care units were compared and the fit of two burn-specific risk prediction models (revised Baux and Belgian Outcome in Burn Injury models) and one generic model (Intensive Care National Audit and Research Centre model) were compared. Patients in burn intensive care units had more extensive injuries compared with patients in general intensive care units (median (IQR [range]) burn surface area 16 (7-32 [0-98])% vs. 8 (1-18 [0-100])%, respectively) but in-hospital mortality was similar (22.8% vs. 19.0%, respectively). The discrimination and calibration of the generic Intensive Care National Audit and Research Centre model was superior to the revised Baux and Belgian Outcome in Burn Injury burn-specific models for patients managed on both specialist burn and general intensive care units. © 2018 The Association of Anaesthetists of Great Britain and Ireland.
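Of the risk models compared above, the revised Baux score has the simplest closed form: age plus percent total body surface area burned, plus 17 points if inhalation injury is present. A sketch (the patient values are hypothetical; converting the score to a mortality probability requires the published logistic coefficients, which are not reproduced here):

```python
def revised_baux(age_years, tbsa_pct, inhalation_injury):
    """Revised Baux score: age + %TBSA burned, plus a fixed
    17-point penalty for inhalation injury."""
    return age_years + tbsa_pct + (17 if inhalation_injury else 0)

# A 45-year-old with a 16% TBSA burn and no inhalation injury:
print(revised_baux(45, 16, False))  # 61
```

The study's finding that a generic ICU model outperformed this burn-specific score is notable precisely because the Baux score uses so few, burn-specific inputs.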

  14. Ensemble-based evaluation for protein structure models.

    PubMed

    Jamroz, Michal; Kolinski, Andrzej; Kihara, Daisuke

    2016-06-15

Comparing protein tertiary structures is a fundamental procedure in structural biology and protein bioinformatics. Structure comparison is important particularly for evaluating computational protein structure models. Most model structure evaluation methods perform rigid-body superimposition of a structure model onto its crystal structure and measure the difference of the corresponding residue or atom positions between them. However, these methods neglect the intrinsic flexibility of proteins by treating the native structure as a rigid molecule. Because different parts of proteins have different levels of flexibility, for example, exposed loop regions are usually more flexible than the core region of a protein structure, disagreement of a model with the native needs to be evaluated differently depending on the flexibility of residues in a protein. We propose a score named FlexScore for comparing protein structures that considers the flexibility of each residue in the native state of proteins. Flexibility information may be extracted from experiments such as NMR or molecular dynamics simulation. FlexScore considers an ensemble of conformations of a protein described as a multivariate Gaussian distribution of atomic displacements and compares a query computational model with the ensemble. We compare FlexScore with other commonly used structure similarity scores over various examples. FlexScore agrees with experts' intuitive assessment of computational models and provides information on the practical usefulness of models. Availability and implementation: https://bitbucket.org/mjamroz/flexscore Contact: dkihara@purdue.edu Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
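The core idea, weighting each residue's deviation by its native-state flexibility, can be sketched with a diagonal-covariance stand-in for the paper's multivariate Gaussian. This is an illustration of the concept only, not the published FlexScore implementation; the function name and per-residue variances are invented for the sketch.

```python
import math

def flexibility_weighted_score(model_xyz, native_xyz, variances):
    """Conceptual sketch: each residue's squared Ca deviation from
    the native position is scaled by that residue's positional
    variance, so deviations in flexible regions (large variance,
    e.g. exposed loops) are penalised less than in the rigid core."""
    total = 0.0
    for (mx, my, mz), (nx, ny, nz), var in zip(model_xyz, native_xyz, variances):
        d2 = (mx - nx) ** 2 + (my - ny) ** 2 + (mz - nz) ** 2
        total += d2 / var
    return math.sqrt(total / len(variances))

# A 1-Angstrom error costs more at a rigid residue (variance 0.25)
# than at a flexible one (variance 4.0):
print(flexibility_weighted_score([(1.0, 0.0, 0.0)], [(0.0, 0.0, 0.0)], [0.25]))
print(flexibility_weighted_score([(1.0, 0.0, 0.0)], [(0.0, 0.0, 0.0)], [4.0]))
```

A rigid-body RMSD, by contrast, would score both cases identically, which is exactly the shortcoming the abstract describes.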

  15. Ensemble-based evaluation for protein structure models

    PubMed Central

    Jamroz, Michal; Kolinski, Andrzej; Kihara, Daisuke

    2016-01-01

    Motivation: Comparing protein tertiary structures is a fundamental procedure in structural biology and protein bioinformatics. Structure comparison is important particularly for evaluating computational protein structure models. Most of the model structure evaluation methods perform rigid body superimposition of a structure model to its crystal structure and measure the difference of the corresponding residue or atom positions between them. However, these methods neglect intrinsic flexibility of proteins by treating the native structure as a rigid molecule. Because different parts of proteins have different levels of flexibility, for example, exposed loop regions are usually more flexible than the core region of a protein structure, disagreement of a model to the native needs to be evaluated differently depending on the flexibility of residues in a protein. Results: We propose a score named FlexScore for comparing protein structures that consider flexibility of each residue in the native state of proteins. Flexibility information may be extracted from experiments such as NMR or molecular dynamics simulation. FlexScore considers an ensemble of conformations of a protein described as a multivariate Gaussian distribution of atomic displacements and compares a query computational model with the ensemble. We compare FlexScore with other commonly used structure similarity scores over various examples. FlexScore agrees with experts’ intuitive assessment of computational models and provides information of practical usefulness of models. Availability and implementation: https://bitbucket.org/mjamroz/flexscore Contact: dkihara@purdue.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307633

  16. Comparative analysis of stress in a new proposal of dental implants.

    PubMed

    Valente, Mariana Lima da Costa; de Castro, Denise Tornavoi; Macedo, Ana Paula; Shimano, Antonio Carlos; Dos Reis, Andréa Cândido

    2017-08-01

The purpose of this study was to compare, through photoelastic analysis, the stress distribution around conventional and modified external hexagon (EH) and morse taper (MT) dental implant connections. Four photoelastic models were prepared (n=1): Model 1 - conventional EH cylindrical implant (Ø 4.0mm×11mm - Neodent®), Model 2 - modified EH cylindrical implant, Model 3 - conventional MT conical implant (Ø 4.3mm×10mm - Neodent®) and Model 4 - modified MT conical implant. 100 and 150N axial and oblique loads (30° tilt) were applied to the devices coupled to the implants. A plane transmission polariscope was used in the analysis of fringes and each position of interest was recorded by a digital camera. The Tardy method was used to quantify the fringe order (n), from which the maximum shear stress (τ) value at each selected point was calculated. The results showed lower stress concentration in the modified cylindrical implant (EH) compared to the conventional model, with application of 150N axial and 100N oblique loads. Lower stress was observed for the modified conical (MT) implant with the application of 100 and 150N oblique loads, which was not observed for the conventional implant model. The comparative analysis of the models showed that the new design proposal generates good stress distribution, especially in the cervical third, suggesting the preservation of bone tissue in the bone crest region. Copyright © 2017 Elsevier B.V. All rights reserved.
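The step from fringe order to maximum shear stress uses the stress-optic law, σ1 − σ2 = N·fσ/h, with τmax = (σ1 − σ2)/2. A sketch with hypothetical values (the material fringe value and thickness below are illustrative, not the study's):

```python
def max_shear_stress(fringe_order, fringe_value, thickness_mm):
    """Maximum in-plane shear stress from photoelasticity.

    Stress-optic law: sigma1 - sigma2 = N * f_sigma / h, so
    tau_max = N * f_sigma / (2 * h).  With f_sigma in N/mm per
    fringe and h in mm, tau comes out in N/mm^2 (MPa)."""
    return fringe_order * fringe_value / (2.0 * thickness_mm)

# Fractional fringe order 1.5 (the Tardy method resolves fractional
# orders), hypothetical f_sigma = 10 N/mm, 10 mm thick model:
print(max_shear_stress(1.5, 10.0, 10.0))  # 0.75 MPa
```

The Tardy compensation itself only refines N to a fractional value; the stress calculation above is the same either way.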

  17. High resolution (1 km) positive degree-day modelling of Greenland ice sheet surface mass balance, 1870–2012 using reanalysis data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilton, David J.; Jowett, Amy; Hanna, Edward

Here, we show results from a positive degree-day (PDD) model of Greenland ice sheet (GrIS) surface mass balance (SMB), 1870–2012, forced with reanalysis data. The model includes an improved daily temperature parameterization as compared with a previous version and is run at 1 km rather than 5 km resolution. The improvements lead overall to higher SMB with the same forcing data. We also compare our model with results from two regional climate models (RCMs). While there is good qualitative agreement between our PDD model and the RCMs, it usually results in lower precipitation and lower runoff but approximately equivalent SMB: mean 1979–2012 SMB (± standard deviation), in Gt a⁻¹, is 382 ± 78 in the PDD model, compared with 379 ± 101 and 425 ± 90 for the RCMs. Comparison with in situ SMB observations suggests that the RCMs may be more accurate than PDD at local level, in some areas, although the latter generally compares well. Dividing the GrIS into seven drainage basins we show that SMB has decreased sharply in all regions since 2000. Finally we show correlation between runoff close to two calving glaciers and either calving front retreat or calving flux, this being most noticeable from the mid-1990s.
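The melt component of a PDD model is compact: ablation is taken proportional to the sum of daily mean temperatures above 0°C. A minimal sketch (the degree-day factors quoted in the comment are typical literature values, not this study's calibration):

```python
def pdd_melt(daily_temps_c, ddf_mm_per_degday):
    """Surface melt from a positive degree-day model: melt is
    proportional to the sum of positive daily mean temperatures.
    Typical degree-day factors are ~3 mm w.e. per degC-day for
    snow and ~8 for bare ice (site-dependent; illustrative here)."""
    pdd = sum(t for t in daily_temps_c if t > 0.0)
    return ddf_mm_per_degday * pdd

# A week at +2 C followed by a week at -1 C, with a snow DDF of 3:
print(pdd_melt([2.0] * 7 + [-1.0] * 7, 3.0))  # 42.0 mm w.e.
```

Because melt depends only on the temperature history, the daily temperature parameterization mentioned above directly controls the modelled runoff, which is why improving it changes the SMB even with identical forcing.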

  18. A Stochastic Point Cloud Sampling Method for Multi-Template Protein Comparative Modeling.

    PubMed

    Li, Jilong; Cheng, Jianlin

    2016-05-10

    Generating tertiary structural models for a target protein from the known structure of its homologous template proteins and their pairwise sequence alignment is a key step in protein comparative modeling. Here, we developed a new stochastic point cloud sampling method, called MTMG, for multi-template protein model generation. The method first superposes the backbones of template structures, and the Cα atoms of the superposed templates form a point cloud for each position of a target protein, which are represented by a three-dimensional multivariate normal distribution. MTMG stochastically resamples the positions for Cα atoms of the residues whose positions are uncertain from the distribution, and accepts or rejects each new position according to a simulated annealing protocol, which effectively removes atomic clashes commonly encountered in multi-template comparative modeling. We benchmarked MTMG on 1,033 sequence alignments generated for CASP9, CASP10 and CASP11 targets. Using multiple templates with MTMG improves the GDT-TS score and TM-score of structural models by 2.96-6.37% and 2.42-5.19% on the three datasets over using single templates. MTMG's performance was comparable to Modeller in terms of GDT-TS score, TM-score, and GDT-HA score, while the average RMSD was improved by the new sampling approach. The MTMG software is freely available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/mtmg.html.
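
    The accept/reject step described here is a standard Metropolis rule under a cooling schedule. A toy one-dimensional sketch, with a quadratic function standing in for the clash energy; the function and parameter names are ours, not MTMG's:

```python
import math
import random

def anneal_position(x0, mu, sigma, energy, steps=200, t0=1.0, seed=1):
    """Resample one coordinate by simulated annealing: propose from the
    template-derived normal distribution N(mu, sigma), always accept an
    improvement, accept a worse position with probability exp(-dE/T)."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-6      # linear cooling schedule
        cand = rng.gauss(mu, sigma)          # sample from the point-cloud normal
        de = energy(cand) - e
        if de <= 0 or rng.random() < math.exp(-de / t):
            x, e = cand, energy(cand)        # accept the proposal
    return x
```

    In MTMG the proposal distribution is three-dimensional (one multivariate normal per residue position) and the energy term scores atomic clashes.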

  1. Comparison of the Mortality Probability Admission Model III, National Quality Forum, and Acute Physiology and Chronic Health Evaluation IV hospital mortality models: implications for national benchmarking*.

    PubMed

    Kramer, Andrew A; Higgins, Thomas L; Zimmerman, Jack E

    2014-03-01

    To examine the accuracy of the original Mortality Probability Admission Model III, ICU Outcomes Model/National Quality Forum modification of Mortality Probability Admission Model III, and Acute Physiology and Chronic Health Evaluation IVa models for comparing observed and risk-adjusted hospital mortality predictions. Retrospective paired analyses of day 1 hospital mortality predictions using three prognostic models. Fifty-five ICUs at 38 U.S. hospitals from January 2008 to December 2012. Among 174,001 intensive care admissions, 109,926 met model inclusion criteria and 55,304 had data for mortality prediction using all three models. None. We compared patient exclusions and the discrimination, calibration, and accuracy for each model. Acute Physiology and Chronic Health Evaluation IVa excluded 10.7% of all patients, ICU Outcomes Model/National Quality Forum 20.1%, and Mortality Probability Admission Model III 24.1%. Discrimination of Acute Physiology and Chronic Health Evaluation IVa was superior with area under receiver operating curve (0.88) compared with Mortality Probability Admission Model III (0.81) and ICU Outcomes Model/National Quality Forum (0.80). Acute Physiology and Chronic Health Evaluation IVa was better calibrated (lowest Hosmer-Lemeshow statistic). The accuracy of Acute Physiology and Chronic Health Evaluation IVa was superior (adjusted Brier score = 31.0%) to that for Mortality Probability Admission Model III (16.1%) and ICU Outcomes Model/National Quality Forum (17.8%). Compared with observed mortality, Acute Physiology and Chronic Health Evaluation IVa overpredicted mortality by 1.5% and Mortality Probability Admission Model III by 3.1%; ICU Outcomes Model/National Quality Forum underpredicted mortality by 1.2%. Calibration curves showed that Acute Physiology and Chronic Health Evaluation performed well over the entire risk range, unlike the Mortality Probability Admission Model and ICU Outcomes Model/National Quality Forum models. 
Acute Physiology and Chronic Health Evaluation IVa had better accuracy within patient subgroups and for specific admission diagnoses. Acute Physiology and Chronic Health Evaluation IVa offered the best discrimination and calibration on a large common dataset and excluded fewer patients than Mortality Probability Admission Model III or ICU Outcomes Model/National Quality Forum. The choice of ICU performance benchmarks should be based on a comparison of model accuracy using data for identical patients.
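
    Two of the comparison metrics used above are compact enough to state in code. A hedged sketch (function names are ours, and the paper's "adjusted" Brier score involves additional normalization not shown here):

```python
def brier_score(probs, outcomes):
    """Mean squared difference between predicted mortality probability
    and the observed 0/1 outcome; lower is better."""
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

def observed_to_predicted_ratio(probs, outcomes):
    """Observed deaths / predicted deaths: ~1.0 means well calibrated,
    below 1.0 the model overpredicts mortality, above 1.0 it underpredicts."""
    return sum(outcomes) / sum(probs)
```

    Discrimination (area under the ROC curve) and the Hosmer-Lemeshow statistic complete the trio of comparisons reported in the study.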

  2. The HEXACO Model of Personality and Risky Driving Behavior.

    PubMed

    Burtăverde, Vlad; Chraif, Mihaela; Aniţei, Mihai; Dumitru, Daniela

    2017-04-01

    This research tested the association between the HEXACO personality model and risky driving behavior, as well as the predictive power of the HEXACO model in explaining risky driving behavior compared with the Big Five model. In Sample 1, 227 undergraduate students completed measures of the HEXACO personality model, the Big Five model, and driving aggression. In Sample 2, 244 community respondents completed measures of the HEXACO personality model, the Big Five model, and driving styles. Results showed that the Honesty-Humility factor is an important addition to personality models that aim to explain risky driving behavior: it was related to all forms of driving aggression and to both maladaptive and adaptive driving styles, and it showed incremental validity in predicting verbally aggressive expression, risky driving, high-velocity driving, and careful driving. Moreover, compared with the Big Five model, the HEXACO model had better predictive power for aggressive driving.

  3. Evaluation of Veterinary Student Surgical Skills Preparation for Ovariohysterectomy Using Simulators: A Pilot Study.

    PubMed

    Read, Emma K; Vallevand, Andrea; Farrell, Robin M

    2016-01-01

    This paper describes the development and evaluation of training intended to enhance students' performance on their first live-animal ovariohysterectomy (OVH). Cognitive task analysis informed a seven-page lab manual, 30-minute video, and 46-item OVH checklist (categorized into nine surgery components and three phases of surgery). We compared two spay simulator models (higher-fidelity silicone versus lower-fidelity cloth and foam). Third-year veterinary students were randomly assigned to a training intervention: lab manual and video only; lab manual, video, and $675 silicone-based model; lab manual, video, and $64 cloth and foam model. We then assessed transfer of training to a live-animal OVH. Chi-square analyses determined statistically significant differences between the interventions on four of nine surgery components, all three phases of surgery, and overall score. Odds ratio analyses indicated that training with a spay model improved the odds of attaining an excellent or good rating on 25 of 46 checklist items, six of nine surgery components, all three phases of surgery, and the overall score. Odds ratio analyses comparing the spay models indicated an advantage for the $675 silicone-based model on only 6 of 46 checklist items, three of nine surgery components, and one phase of surgery. Training with a spay model improved performance when compared to training with a manual and video only. Results suggested that training with a lower-fidelity/cost model might be as effective when compared to a higher-fidelity/cost model. Further research is required to investigate simulator fidelity and costs on transfer of training to the operational environment.

  4. On the Formulation of Anisotropic-Polyaxial Failure Criteria: A Comparative Study

    NASA Astrophysics Data System (ADS)

    Parisio, Francesco; Laloui, Lyesse

    2018-02-01

    The correct representation of the failure of geomaterials that feature strength anisotropy and polyaxiality is crucial for many applications. In this contribution, we propose and evaluate through a comparative study a generalized framework that covers both features. Polyaxiality of strength is modeled with a modified Van Eekelen approach, while the anisotropy is modeled using a fabric tensor approach of the Pietruszczak and Mroz type. Both approaches share the same philosophy as they can be applied to simpler failure surfaces, allowing great flexibility in model formulation. The new failure surface is tested against experimental data and its performance compared against classical failure criteria commonly used in geomechanics. Our study finds that the global error between predictions and data is generally smaller for the proposed framework compared to other classical approaches.

  5. The cost of simplifying air travel when modeling disease spread.

    PubMed

    Lessler, Justin; Kaufman, James H; Ford, Daniel A; Douglas, Judith V

    2009-01-01

    Air travel plays a key role in the spread of many pathogens. Modeling the long-distance spread of infectious disease in these cases requires an air travel model. Highly detailed air transportation models can be overdetermined and computationally problematic. We compared the predictions of a simplified air transport model with those of a model of all routes and assessed the impact of differences on models of infectious disease. Using U.S. ticket data from 2007, we compared a simplified "pipe" model, in which individuals flow in and out of the air transport system based on the number of arrivals and departures from a given airport, to a fully saturated model where all routes are modeled individually. We also compared the pipe model to a "gravity" model where the probability of travel is scaled by physical distance; the gravity model did not differ significantly from the pipe model. The pipe model roughly approximated actual air travel, but tended to overestimate the number of trips between small airports and underestimate travel between major east and west coast airports. For most routes, the maximum number of false (or missed) introductions of disease is small (<1 per day) but for a few routes this rate is greatly underestimated by the pipe model. If our interest is in large-scale regional and national effects of disease, the simplified pipe model may be adequate. If we are interested in specific effects of interventions on particular air routes or the time for the disease to reach a particular location, a more complex point-to-point model will be more accurate. For many problems a hybrid model that independently models some frequently traveled routes may be the best choice. Regardless of the model used, the effect of simplifications and sensitivity to errors in parameter estimation should be analyzed.
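
    As a rough illustration of the "pipe" idea, the sketch below distributes travelers departing an airport across destinations in proportion to each destination's share of total arrivals, using airport totals only instead of route-level ticket data. This is our simplification of the concept, not the paper's implementation:

```python
def pipe_route_flow(departures, arrivals):
    """Approximate route-level flows from airport totals: travelers
    leaving airport i are split over destinations j != i in proportion
    to j's share of arrivals (self-loops excluded)."""
    flows = {}
    for i, dep in departures.items():
        denom = sum(a for j, a in arrivals.items() if j != i)
        for j, arr in arrivals.items():
            if j != i:
                flows[(i, j)] = dep * arr / denom
    return flows
```

    A fully saturated model would instead carry one observed flow per route, which is exactly the detail the pipe model trades away.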

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    M. A. Wasiolek

    The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), the TSPA-LA. The ERMYN model provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs), the reference biosphere, the human receptor, and assumptions (Section 6.2 and Section 6.3); (3) Building a mathematical model using the biosphere conceptual model and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN model compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN model by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); and (8) Validating the ERMYN model by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).

  7. Reynolds-Averaged Navier-Stokes Simulation of a 2D Circulation Control Wind Tunnel Experiment

    NASA Technical Reports Server (NTRS)

    Allan, Brian G.; Jones, Greg; Lin, John C.

    2011-01-01

    Numerical simulations are performed using a Reynolds-averaged Navier-Stokes (RANS) flow solver for a circulation control airfoil. 2D and 3D simulation results are compared to a circulation control wind tunnel test conducted at the NASA Langley Basic Aerodynamics Research Tunnel (BART). The RANS simulations are compared to a low blowing case with a jet momentum coefficient, C(sub u), of 0.047 and a higher blowing case of 0.115. Three-dimensional simulations of the model and tunnel walls show wall effects on the lift and airfoil surface pressures. These wall effects include a 4% decrease of the midspan sectional lift for the C(sub u) 0.115 blowing condition. Simulations comparing the performance of the Spalart Allmaras (SA) and Shear Stress Transport (SST) turbulence models are also made, showing the SST model compares best to the experimental data. A Rotational/Curvature Correction (RCC) to the turbulence model is also evaluated demonstrating an improvement in the CFD predictions.

  8. Kinetic Modeling of Corn Fermentation with S. cerevisiae Using a Variable Temperature Strategy.

    PubMed

    Souza, Augusto C M; Mousaviraad, Mohammad; Mapoka, Kenneth O M; Rosentrater, Kurt A

    2018-04-24

    While fermentation is usually done at a fixed temperature, in this study, the effect of having a controlled variable temperature was analyzed. A nonlinear system was used to model batch ethanol fermentation, using corn as substrate and the yeast Saccharomyces cerevisiae, at five different fixed and controlled variable temperatures. The lower temperatures presented higher ethanol yields but took a longer time to reach equilibrium. Higher temperatures had higher initial growth rates, but the decay of yeast cells was faster compared to the lower temperatures. However, in a controlled variable temperature model, the temperature decreased with time with the initial value of 40 °C. When analyzing a time window of 60 h, the ethanol production increased 20% compared to the batch with the highest temperature; however, the yield was still 12% lower compared to the 20 °C batch. When the 24-h simulation was analyzed, the controlled model had a higher ethanol concentration compared to both fixed temperature batches.

  9. Multi-indication Pharmacotherapeutic Multicriteria Decision Analytic Model for the Comparative Formulary Inclusion of Proton Pump Inhibitors in Qatar.

    PubMed

    Al-Badriyeh, Daoud; Alabbadi, Ibrahim; Fahey, Michael; Al-Khal, Abdullatif; Zaidan, Manal

    2016-05-01

    The formulary inclusion of proton pump inhibitors (PPIs) in the government hospital health services in Qatar is not comparative or restricted. Requests to include a PPI in the formulary are typically accepted if evidence of efficacy and tolerability is presented. There are no literature reports of a PPI scoring model that is based on comparatively weighted multiple indications and no reports of PPI selection in Qatar or the Middle East. This study aims to compare first-line use of the PPIs that exist in Qatar. The economic effect of the study recommendations was also quantified. A comparative, evidence-based multicriteria decision analysis (MCDA) model was constructed to follow the multiple indications and pharmacotherapeutic criteria of PPIs. Literature and an expert panel informed the selection criteria of PPIs. Input from the relevant local clinician population steered the relative weighting of selection criteria. Comparatively scored PPIs, exceeding a defined score threshold, were recommended for selection. Weighted model scores were successfully developed, with 95% CI and 5% margin of error. The model comprised 7 main criteria and 38 subcriteria. Main criteria are indication, dosage frequency, treatment duration, best published evidence, available formulations, drug interactions, and pharmacokinetic and pharmacodynamic properties. Most weight was achieved for the indications selection criteria. Esomeprazole and rabeprazole were suggested as formulary options, followed by lansoprazole for nonformulary use. The estimated effect of the study recommendations was up to a 15.3% reduction in the annual PPI expenditure. Robustness of study conclusions against variabilities in study inputs was confirmed via sensitivity analyses. The implementation of a locally developed PPI-specific comparative MCDA scoring model, which is multiweighted indication and criteria based, into the Qatari formulary selection practices is a successful evidence-based cost-cutting exercise. 
Esomeprazole and rabeprazole should be the first-line choice from among the PPIs available at the Qatari government hospital health services. Copyright © 2016 Elsevier HS Journals, Inc. All rights reserved.

  10. An enhanced PM2.5 air quality forecast model based on nonlinear regression and back-trajectory concentrations

    NASA Astrophysics Data System (ADS)

    Cobourn, W. Geoffrey

    2010-08-01

    An enhanced PM2.5 air quality forecast model based on nonlinear regression (NLR) and back-trajectory concentrations has been developed for use in the Louisville, Kentucky metropolitan area. The PM2.5 air quality forecast model is designed for use in the warm season, from May through September, when PM2.5 air quality is more likely to be critical for human health. The enhanced PM2.5 model consists of a basic NLR model, developed for use with an automated air quality forecast system, and an additional parameter based on upwind PM2.5 concentration, called PM24. The PM24 parameter is designed to be determined manually, by synthesizing backward air trajectory and regional air quality information to compute 24-h back-trajectory concentrations. The PM24 parameter may be used by air quality forecasters to adjust the forecast provided by the automated forecast system. In this study of the 2007 and 2008 forecast seasons, the enhanced model performed well using forecasted meteorological data and PM24 as input. The enhanced PM2.5 model was compared with three alternative models, including the basic NLR model, the basic NLR model with a persistence parameter added, and the NLR model with persistence and PM24. The two models that included PM24 were of comparable accuracy. The two models incorporating back-trajectory concentrations had lower mean absolute errors and higher rates of detecting unhealthy PM2.5 concentrations compared to the other models.

  11. Investigation of Intravenous Hydroxocobalamin Compared to Hextend for Resuscitation in a Swine Model of Uncontrolled Hemorrhagic Shock: A Preliminary Report

    DTIC Science & Technology

    2017-08-27

    Investigation of intravenous hydroxocobalamin compared to Hextend® for resuscitation in a swine model of uncontrolled hemorrhagic shock (poster). …effective as IV Hextend® in improving systolic blood pressure (SBP) in a controlled hemorrhagic shock model. We aimed to compare IV hydroxocobalamin (HOC

  12. Numerical Modeling of Pulsed Electrical Discharges for High-Speed Flow Control

    DTIC Science & Technology

    2012-02-01

    …two dimensions, and later on more complex problems. Subsequent work compared different physical models for pulsed discharges: one-moment (drift-diffusion with… The state of a particle can be specified by its position and velocity. In principle, the motion of a large group of particles can be predicted from

  13. Large-Scale Partial-Duplicate Image Retrieval and Its Applications

    DTIC Science & Technology

    2016-04-23

    The explosive growth of Internet Media (partial-duplicate/similar images, 3D objects, 3D models, etc.) sheds bright light on many promising applications in forensics, surveillance, 3D animation, mobile visual search, and 3D model/object search. Compared with the… and stable spatial configuration. Compared with the general 2D objects, 3D models/objects consist of 3D data information (typically a list of

  14. Comparative Statistical Analysis of Auroral Models

    DTIC Science & Technology

    2012-03-22

    …models have been extensively used for estimating GPS and other communication satellite disturbances (Newell et al., 2010a). The auroral oval… models predict changes in the auroral oval in response to various geomagnetic conditions. In 2010, Newell et al. conducted a comparative study of

  15. Least-Squares Regression and Spectral Residual Augmented Classical Least-Squares Chemometric Models for Stability-Indicating Analysis of Agomelatine and Its Degradation Products: A Comparative Study.

    PubMed

    Naguib, Ibrahim A; Abdelrahman, Maha M; El Ghobashy, Mohamed R; Ali, Nesma A

    2016-01-01

    Two accurate, sensitive, and selective stability-indicating methods are developed and validated for simultaneous quantitative determination of agomelatine (AGM) and its forced degradation products (Deg I and Deg II), whether in pure forms or in pharmaceutical formulations. Partial least-squares regression (PLSR) and spectral residual augmented classical least-squares (SRACLS) are two chemometric models subjected to a comparative study using UV spectral data in the range 215-350 nm. For proper analysis, a three-factor, four-level experimental design was established, resulting in a training set consisting of 16 mixtures containing different ratios of interfering species. An independent test set consisting of eight mixtures was used to validate the prediction ability of the suggested models. The results presented indicate the ability of the mentioned multivariate calibration models to analyze AGM, Deg I, and Deg II with high selectivity and accuracy. The analysis results of the pharmaceutical formulations were statistically compared to the reference HPLC method, with no significant differences observed regarding accuracy and precision. The SRACLS model gives comparable results to the PLSR model; however, it keeps the qualitative spectral information of the classical least-squares algorithm for analyzed components.

  16. A COMPARISON OF PEER VIDEO MODELING AND SELF VIDEO MODELING TO TEACH TEXTUAL RESPONSES IN CHILDREN WITH AUTISM

    PubMed Central

    Marcus, Alonna; Wilder, David A

    2009-01-01

    Peer video modeling was compared to self video modeling to teach 3 children with autism to respond appropriately to (i.e., identify or label) novel letters. A combination multiple baseline and multielement design was used to compare the two procedures. Results showed that all 3 participants met the mastery criterion in the self-modeling condition, whereas only 1 of the participants met the mastery criterion in the peer-modeling condition. In addition, the participant who met the mastery criterion in both conditions reached the criterion more quickly in the self-modeling condition. Results are discussed in terms of their implications for teaching new skills to children with autism. PMID:19949521

  17. Forecasting of primary energy consumption data in the United States: A comparison between ARIMA and Holt-Winters models

    NASA Astrophysics Data System (ADS)

    Rahman, A.; Ahmar, A. S.

    2017-09-01

    This research compares the ARIMA model and the Holt-Winters model, based on the MAE, RSS, MSE, and RMS criteria, in predicting Primary Energy Consumption Total data in the US. The data range from January 1973 to December 2016 and were processed using R software. Based on the analysis, the additive Holt-Winters model (MSE: 258350.1) is the most appropriate model for predicting Primary Energy Consumption Total data in the US. This model is more appropriate than the multiplicative Holt-Winters model (MSE: 262260.4) and the seasonal ARIMA model (MSE: 723502.2).
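
    The additive Holt-Winters recursions behind such a comparison can be sketched in plain Python. The initialization and smoothing constants below are illustrative, not the values fitted in the study:

```python
def holt_winters_additive(series, m, alpha=0.3, beta=0.1, gamma=0.1):
    """One-pass additive Holt-Winters smoothing with seasonal period m.
    Returns the one-step-ahead in-sample forecasts."""
    level = series[0]
    trend = series[m] - series[0]                 # crude initial trend
    season = [series[i] - level for i in range(m)]  # crude initial seasonals
    forecasts = []
    for t, y in enumerate(series):
        s = season[t % m]
        forecasts.append(level + trend + s)
        new_level = alpha * (y - s) + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        season[t % m] = gamma * (y - new_level) + (1 - gamma) * s
        level = new_level
    return forecasts

def mse(actual, predicted):
    """Mean squared error, the selection criterion quoted in the abstract."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
```

    Model selection then amounts to computing `mse` for each candidate (additive, multiplicative, seasonal ARIMA) and keeping the smallest.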

  18. Computer discrimination procedures applicable to aerial and ERTS multispectral data

    NASA Technical Reports Server (NTRS)

    Richardson, A. J.; Torline, R. J.; Allen, W. A.

    1970-01-01

    Two statistical models are compared in the classification of crops recorded on color aerial photographs. A theory of error ellipses is applied to the pattern recognition problem. An elliptical boundary condition classification model (EBC), useful for recognition of candidate patterns, evolves out of error ellipse theory. The EBC model is compared with the minimum distance to the mean (MDM) classification model in terms of pattern recognition ability. The pattern recognition results of both models are interpreted graphically using scatter diagrams to represent measurement space. Measurement space, for this report, is determined by optical density measurements collected from Kodak Ektachrome Infrared Aero Film 8443 (EIR). The EBC model is shown to be a significant improvement over the MDM model.
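
    The minimum-distance-to-mean rule that the EBC model is compared against is simple to state. A sketch with hypothetical class means in a two-band optical-density space (the names and values are ours):

```python
def mdm_classify(pattern, class_means):
    """Minimum distance to the mean: assign the pattern to the class
    whose mean vector is nearest in (squared) Euclidean distance."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(class_means, key=lambda c: dist2(pattern, class_means[c]))
```

    The EBC model instead draws an error ellipse around each class mean from the class covariance, so an ambiguous pattern can fall outside every ellipse and be flagged rather than forced into the nearest class.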

  19. Verification and transfer of thermal pollution model. Volume 6: User's manual for 1-dimensional numerical model

    NASA Technical Reports Server (NTRS)

    Lee, S. S.; Sengupta, S.; Nwadike, E. V.

    1982-01-01

    The six-volume report describes the theory of a three-dimensional (3-D) mathematical thermal discharge model and a related one-dimensional (1-D) model, includes model verification at two sites, and provides a separate user's manual for each model. The 3-D model has two forms: free surface and rigid lid. The former, verified at Anclote Anchorage (FL), allows a free air/water interface and is suited for significant surface wave heights compared to mean water depth; e.g., estuaries and coastal regions. The latter, verified at Lake Keowee (SC), is suited for small surface wave heights compared to depth (e.g., natural or man-made inland lakes) because surface elevation has been removed as a parameter.

  20. Customization of a generic 3D model of the distal femur using diagnostic radiographs.

    PubMed

    Schmutz, B; Reynolds, K J; Slavotinek, J P

    2008-01-01

    A method for the customization of a generic 3D model of the distal femur is presented. The customization method involves two steps: acquisition of calibrated orthogonal planar radiographs; and linear scaling of the generic model based on the width of a subject's femoral condyles as measured on the planar radiographs. Planar radiographs of seven intact lower cadaver limbs were obtained. The customized generic models were validated by comparing their surface geometry with that of CT-reconstructed reference models. The overall mean error was 1.2 mm. The results demonstrate that uniform scaling as a first step in the customization process produced a base model of accuracy comparable to other models reported in the literature.
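
    The uniform linear scaling step described above can be written in a few lines; the vertex format and widths below are illustrative, not the study's data:

```python
def scale_generic_model(vertices, generic_width_mm, subject_width_mm):
    """Uniformly scale a generic surface mesh by the ratio of the
    subject's condylar width (measured on calibrated radiographs) to
    the generic model's width. Vertices are (x, y, z) tuples in mm."""
    s = subject_width_mm / generic_width_mm
    return [(x * s, y * s, z * s) for x, y, z in vertices]
```

    The scaled mesh then serves as the base model whose surface error was checked against CT-reconstructed references (overall mean error 1.2 mm in the study).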

  1. BSM2 Plant-Wide Model construction and comparative analysis with other methodologies for integrated modelling.

    PubMed

    Grau, P; Vanrolleghem, P; Ayesa, E

    2007-01-01

    In this paper, a new methodology for integrated modelling of the WWTP has been used for the construction of the Benchmark Simulation Model No. 2 (BSM2). The transformations-approach proposed in this methodology does not require the development of specific transformers to interface unit process models and allows the construction of tailored models for a particular WWTP, guaranteeing mass and charge continuity for the whole model. The BSM2 PWM, constructed as a case study, is evaluated by means of simulations under different scenarios, and its validity in reproducing the water and sludge lines in a WWTP is demonstrated. Furthermore, the advantages that this methodology presents compared to other approaches for integrated modelling are verified in terms of flexibility and coherence.

  2. An Exact Model-Based Method for Near-Field Sources Localization with Bistatic MIMO System.

    PubMed

    Singh, Parth Raj; Wang, Yide; Chargé, Pascal

    2017-03-30

In this paper, we propose an exact model-based method for near-field source localization with a bistatic multiple-input, multiple-output (MIMO) radar system and compare it with an approximated model-based method. The aim of this paper is to propose an efficient way to use the exact model of the received signals of near-field sources in order to eliminate the systematic error introduced by the use of an approximated model in most existing near-field source localization techniques. The proposed method uses parallel factor (PARAFAC) decomposition to deal with the exact model. Thanks to the exact model, the proposed method has better precision and resolution than the compared approximated model-based method. The simulation results show the performance of the proposed method.

  3. Verification and transfer of thermal pollution model. Volume 2: User's manual for 3-dimensional free-surface model

    NASA Technical Reports Server (NTRS)

    Lee, S. S.; Sengupta, S.; Tuann, S. Y.; Lee, C. R.

    1982-01-01

The six-volume report describes the theory of a three-dimensional (3-D) mathematical thermal discharge model and a related one-dimensional (1-D) model, includes model verification at two sites, and provides a separate user's manual for each model. The 3-D model has two forms: free surface and rigid lid. The former, verified at Anclote Anchorage (FL), allows a free air/water interface and is suited for cases where surface wave heights are significant compared to mean water depth, e.g., estuaries and coastal regions. The latter, verified at Lake Keowee (SC), is suited for cases where surface wave heights are small compared to depth. These models allow computation of time-dependent velocity and temperature fields for given initial conditions and time-varying boundary conditions.

  4. Comparing the Effects of Particulate Matter on the Ocular Surfaces of Normal Eyes and a Dry Eye Rat Model.

    PubMed

    Han, Ji Yun; Kang, Boram; Eom, Youngsub; Kim, Hyo Myung; Song, Jong Suk

    2017-05-01

    To compare the effect of exposure to particulate matter on the ocular surface of normal and experimental dry eye (EDE) rat models. Titanium dioxide (TiO2) nanoparticles were used as the particulate matter. Rats were divided into 4 groups: normal control group, TiO2 challenge group of the normal model, EDE control group, and TiO2 challenge group of the EDE model. After 24 hours, corneal clarity was compared and tear samples were collected for quantification of lactate dehydrogenase, MUC5AC, and tumor necrosis factor-α concentrations. The periorbital tissues were used to evaluate the inflammatory cell infiltration and detect apoptotic cells. The corneal clarity score was greater in the EDE model than in the normal model. The score increased after TiO2 challenge in each group compared with each control group (normal control vs. TiO2 challenge group, 0.0 ± 0.0 vs. 0.8 ± 0.6, P = 0.024; EDE control vs. TiO2 challenge group, 2.2 ± 0.6 vs. 3.8 ± 0.4, P = 0.026). The tear lactate dehydrogenase level and inflammatory cell infiltration on the ocular surface were higher in the EDE model than in the normal model. These measurements increased significantly in both normal and EDE models after TiO2 challenge. The tumor necrosis factor-α levels and terminal deoxynucleotidyl transferase-mediated dUTP nick end labeling-positive cells were also higher in the EDE model than in the normal model. TiO2 nanoparticle exposure on the ocular surface had a more prominent effect in the EDE model than it did in the normal model. The ocular surface of dry eyes seems to be more vulnerable to fine dust of air pollution than that of normal eyes.

  5. Modeling daily discharge responses of a large karstic aquifer using soft computing methods: Artificial neural network and neuro-fuzzy

    NASA Astrophysics Data System (ADS)

    Kurtulus, Bedri; Razack, Moumtaz

    2010-02-01

This paper compares two methods for modeling karst aquifers, which are heterogeneous, highly non-linear, hierarchical systems. There is a clear need to model these systems given the crucial role they play in water supply in many countries. In recent years, the main components of soft computing, fuzzy logic (FL) and Artificial Neural Networks (ANNs), have come to prevail in the modeling of complex non-linear systems in different scientific and technological disciplines. In this study, Artificial Neural Network (ANN) and Adaptive Neuro-Fuzzy Inference System (ANFIS) methods were used for the prediction of daily discharge of karstic aquifers, and their capabilities were compared. The approach was applied to 7 years of daily data from the La Rochefoucauld karst system in south-western France. In order to predict the karst daily discharges, single-input (rainfall or piezometric level) and multiple-input (rainfall and piezometric level) series were used. In addition to these inputs, all models used measured or simulated discharges from the previous days with a specified delay. The models were designed in a Matlab™ environment. An automatic procedure was used to select the best calibrated models. Daily discharge predictions were then performed using the calibrated models. Comparing predicted and observed hydrographs indicates that both models (ANN and ANFIS) provide close predictions of the karst daily discharges. The summary statistics of both series (observed and predicted daily discharges) are comparable. The performance of both models improves when the number of inputs is increased from one to two. The root mean square error between the observed and predicted series reaches a minimum for two-input models. However, the ANFIS model demonstrates better performance than the ANN model in predicting peak flow, along with better generalization capability and slightly higher overall performance.
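Both data-driven models are fed the same kind of inputs: lagged driver series plus past discharges with a specified delay. The sketch below illustrates only that input construction, not an ANN or ANFIS itself; the series are synthetic and all names are hypothetical.

```python
import numpy as np

def lagged_matrix(series_dict, lags):
    """Build an input matrix from lagged copies of each driver series, as
    used to feed rainfall, piezometric level, and past discharge into a
    data-driven model. Row i corresponds to target time t = lags + i."""
    n = min(len(s) for s in series_dict.values())
    cols, names = [], []
    for name, s in series_dict.items():
        s = np.asarray(s, dtype=float)[:n]
        for k in range(1, lags + 1):
            cols.append(s[lags - k : n - k])   # value at time t - k
            names.append(f"{name}_t-{k}")
    return np.column_stack(cols), names

rng = np.random.default_rng(0)
rain = rng.gamma(2.0, 1.0, 100)        # synthetic daily rainfall
discharge = np.cumsum(rain) * 0.01     # synthetic discharge proxy
X, names = lagged_matrix({"rain": rain, "discharge": discharge}, lags=2)
```

Going from a single-input to a multiple-input model is then just a matter of how many series are passed into the dictionary.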

  6. Comparative studies on structures, mechanical properties, sensitivity, stabilities and detonation performance of CL-20/TNT cocrystal and composite explosives by molecular dynamics simulation.

    PubMed

    Hang, Gui-Yun; Yu, Wen-Li; Wang, Tao; Wang, Jin-Tao; Li, Zhen

    2017-09-19

To investigate and compare the structures and properties of CL-20/TNT cocrystal and composite explosives, CL-20/TNT cocrystal and composite models were established. Molecular dynamics simulations were performed to investigate the structures, mechanical properties, sensitivity, stabilities and detonation performance of the cocrystal and composite models with the COMPASS force field in the NPT ensemble. The lattice parameters, mechanical properties, binding energies, interaction energy of the trigger bond, cohesive energy density and detonation parameters were determined and compared. The results show that, compared with pure CL-20, the rigidity and stiffness of the cocrystal and composite models decreased while plastic properties and ductility increased, so mechanical properties can be effectively improved by adding TNT to CL-20, and the cocrystal model has the better mechanical properties. The interaction energy of the trigger bond and the cohesive energy density are in the order CL-20/TNT cocrystal > CL-20/TNT composite > pure CL-20; i.e., the cocrystal model is less sensitive than CL-20 and the composite model, and has the best safety parameters. Binding energies show that the cocrystal model has higher intermolecular interaction energy than the composite model, illustrating the better stability of the cocrystal model. Detonation parameters vary as CL-20 > cocrystal > composite; that is, the energy density and power of the cocrystal and composite models are weakened, although the CL-20/TNT cocrystal explosive still has desirable energy density and detonation performance. The results presented in this paper help to better understand the mechanism of CL-20/TNT cocrystal explosives and provide theoretical assistance for cocrystal explosive design.

  7. Comparison of the occlusal contact area of virtual models and actual models: a comparative in vitro study on Class I and Class II malocclusion models.

    PubMed

    Lee, Hyemin; Cha, Jooly; Chun, Youn-Sic; Kim, Minji

    2018-06-19

The occlusal registration of virtual models taken by intraoral scanners sometimes shows patterns that seem much different from the patient's occlusion. Therefore, this study aims to evaluate the accuracy of virtual occlusion by comparing virtual occlusal contact area with actual occlusal contact area using plaster models in vitro. Plaster dental models, 24 sets of Class I models and 20 sets of Class II models, were divided into molar, premolar, and anterior groups. The occlusal contact areas calculated by the Prescale method and by the virtual occlusion of the scanning method were compared, and the ratios of the molar and incisor areas were compared in order to find any particular tendencies. There was no significant difference between the Prescale results and the scanner results in either the molar or the premolar group (p = 0.083 and 0.053, respectively). On the other hand, there was a significant difference between the Prescale and scanner results in the anterior group, with the scanner results overestimating the occlusal contact points (p < 0.05). In the molar group, regression analysis shows that the two variables are linearly correlated, with a slope of 0.917 and R² of 0.930. The premolar and anterior groups had a weak linear relationship and greater dispersion. The difference between the actual and virtual occlusion appeared in the anterior portion, where overestimation was observed in the virtual model obtained from the scanning method. Nevertheless, the molar and premolar areas showed relatively accurate occlusal contact areas in the virtual model.
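The reported slope and R² correspond to an ordinary least-squares fit of scanner-measured area against Prescale-measured area. A minimal sketch with hypothetical paired measurements (not the study's data):

```python
import numpy as np

# Hypothetical paired occlusal contact areas (mm^2): Prescale vs. scanner.
prescale = np.array([18.0, 22.5, 30.1, 25.4, 27.8, 20.3])
scanner = 0.917 * prescale + np.array([0.4, -0.3, 0.2, -0.5, 0.1, 0.3])

# least-squares line and coefficient of determination
slope, intercept = np.polyfit(prescale, scanner, 1)
pred = slope * prescale + intercept
ss_res = np.sum((scanner - pred) ** 2)
ss_tot = np.sum((scanner - scanner.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
```

A slope near 1 with high R² indicates the scanner reproduces the Prescale areas almost proportionally; a weak relationship, as in the anterior group, shows up as a low R² and large scatter.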

  8. Numerical modeling of carrier gas flow in atomic layer deposition vacuum reactor: A comparative study of lattice Boltzmann models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Dongqing; Chien Jen, Tien; Li, Tao

    2014-01-15

This paper characterizes the carrier gas flow in an atomic layer deposition (ALD) vacuum reactor by introducing the Lattice Boltzmann Method (LBM) to ALD simulation through a comparative study of two LBM models. Numerical models of gas flow are constructed and implemented in two-dimensional geometry based on the lattice Bhatnagar-Gross-Krook (LBGK) D2Q9 model and the two-relaxation-time (TRT) model. Both incompressible and compressible scenarios are simulated, and the two models are compared in terms of flow features, stability, and efficiency. Our simulation outcome reveals that, for our specific ALD vacuum reactor, the TRT model generates better steady laminar flow features over the whole domain, with better stability and reliability than the LBGK-D2Q9 model, especially when considering the compressible effects of the gas flow. The LBM-TRT is verified indirectly by comparing the numerical results with conventional continuum-based computational fluid dynamics solvers, and it shows very good agreement with these conventional methods. Finally, the velocity field of the carrier gas flow through the ALD vacuum reactor was characterized with the LBM-TRT model. The flow in ALD is in a laminar steady state, with velocity concentrated at the corners and around the wafer. The effects of flow fields on precursor distributions, surface absorptions, and surface reactions are discussed in detail. A steady and evenly distributed velocity field contributes to higher precursor concentration near the wafer, and relatively lower particle velocities help to achieve better surface adsorption and deposition. The ALD reactor geometry needs to be considered carefully if a steady, laminar flow field around the wafer and better surface deposition are desired.
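A single-relaxation-time LBGK-D2Q9 update, the simpler of the two compared models, can be sketched compactly. This is a generic textbook-style sketch on a periodic grid, not the reactor model from the paper; a TRT scheme would differ by relaxing the symmetric and antisymmetric parts of the distributions with two separate rates.

```python
import numpy as np

# D2Q9 lattice: 9 discrete velocities and their weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, ux, uy):
    """Second-order equilibrium distribution for each lattice direction."""
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return rho * w[:, None, None] * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * usq)

def lbgk_step(f, tau):
    """One BGK collision + streaming update on a periodic grid."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f = f - (f - equilibrium(rho, ux, uy)) / tau     # collision
    for i, (cx, cy) in enumerate(c):                 # streaming
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
    return f

nx = ny = 16
f = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny)), np.zeros((nx, ny)))
f[1] += 0.01 * np.random.default_rng(1).random((nx, ny))  # small disturbance
mass0 = f.sum()
for _ in range(20):
    f = lbgk_step(f, tau=0.8)
```

Mass is conserved exactly by both collision and periodic streaming, which is a useful sanity check on any LBM implementation.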

  9. A comparative evaluation of models to predict human intestinal metabolism from nonclinical data

    PubMed Central

    Yau, Estelle; Petersson, Carl; Dolgos, Hugues

    2017-01-01

Extensive gut metabolism is often associated with the risk of low and variable bioavailability. The prediction of the fraction of drug escaping gut wall metabolism as well as transporter-mediated secretion (Fg) has been challenged by the lack of appropriate preclinical models. The purpose of this study is to compare the performance of models that are widely employed in the pharmaceutical industry today to estimate Fg and, based on the outcome, to provide recommendations for the prediction of human Fg during drug discovery and early drug development. The use of in vitro intrinsic clearance from human liver microsomes (HLM) in three mechanistic models - the ADAM, Qgut and Competing Rates - was evaluated for drugs whose metabolism is dominated by CYP450s, assuming that the effect of transporters is negligible. The utility of rat as a model for human Fg was also explored. The ADAM, Qgut and Competing Rates models had comparable prediction success (70%, 74%, 69%, respectively) and bias (AFE = 1.26, 0.74 and 0.81, respectively). However, the ADAM model showed better accuracy compared with the Qgut and Competing Rates models (RMSE = 0.20 vs. 0.30 and 0.25, respectively). Rat is not a good model (prediction success = 32%, RMSE = 0.48 and AFE = 0.44), as it seems systematically to under-predict human Fg. Hence, we would recommend the use of rat to identify the need for Fg assessment, followed by the use of HLM in simple models to predict human Fg. © 2017 Merck KGaA. Biopharmaceutics & Drug Disposition Published by John Wiley & Sons, Ltd. PMID:28152562

  10. A comparative evaluation of models to predict human intestinal metabolism from nonclinical data.

    PubMed

    Yau, Estelle; Petersson, Carl; Dolgos, Hugues; Peters, Sheila Annie

    2017-04-01

Extensive gut metabolism is often associated with the risk of low and variable bioavailability. The prediction of the fraction of drug escaping gut wall metabolism as well as transporter-mediated secretion (Fg) has been challenged by the lack of appropriate preclinical models. The purpose of this study is to compare the performance of models that are widely employed in the pharmaceutical industry today to estimate Fg and, based on the outcome, to provide recommendations for the prediction of human Fg during drug discovery and early drug development. The use of in vitro intrinsic clearance from human liver microsomes (HLM) in three mechanistic models - the ADAM, Qgut and Competing Rates - was evaluated for drugs whose metabolism is dominated by CYP450s, assuming that the effect of transporters is negligible. The utility of rat as a model for human Fg was also explored. The ADAM, Qgut and Competing Rates models had comparable prediction success (70%, 74%, 69%, respectively) and bias (AFE = 1.26, 0.74 and 0.81, respectively). However, the ADAM model showed better accuracy compared with the Qgut and Competing Rates models (RMSE = 0.20 vs. 0.30 and 0.25, respectively). Rat is not a good model (prediction success = 32%, RMSE = 0.48 and AFE = 0.44), as it seems systematically to under-predict human Fg. Hence, we would recommend the use of rat to identify the need for Fg assessment, followed by the use of HLM in simple models to predict human Fg. © 2017 Merck KGaA. Biopharmaceutics & Drug Disposition Published by John Wiley & Sons, Ltd.
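Of the compared mechanistic models, the Qgut model has the simplest closed form, Fg = Qgut / (Qgut + fu,gut · CLu,int): a hybrid flow term competing against unbound intrinsic gut clearance. The sketch below uses hypothetical parameter values purely to illustrate the formula's behavior; it is not a substitute for the full ADAM or Competing Rates implementations.

```python
def f_g_qgut(q_gut, fu_gut, cl_u_int):
    """Qgut model: fraction of drug escaping gut-wall metabolism.
    Fg = Qgut / (Qgut + fu_gut * CLu,int), all clearances in L/h."""
    return q_gut / (q_gut + fu_gut * cl_u_int)

# Hypothetical inputs: nominal Qgut of 18 L/h, fully unbound in the
# enterocyte, scaled intrinsic clearance equal to Qgut
fg = f_g_qgut(q_gut=18.0, fu_gut=1.0, cl_u_int=18.0)
```

When intrinsic clearance equals Qgut, exactly half the absorbed dose escapes; raising CLu,int drives Fg toward zero, which is the low-bioavailability risk the abstract describes.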

  11. Brain Injury Differences in Frontal Impact Crash Using Different Simulation Strategies

    PubMed Central

    Ma, Chunsheng; Shen, Ming; Li, Peiyu; Zhang, Jinhuan

    2015-01-01

In real-world crashes, brain injury is one of the leading causes of death. Using an isolated human head finite element (FE) model to study brain injury patterns and metrics has been a widely adopted simplification, since it requires significantly fewer computational resources than a whole human body model. However, the degree of precision of this simplification remains questionable. This study compared the two methods, (1) using a whole human body model carried on a sled model and (2) using an isolated head model with prescribed head motions, to study brain injury. The distributions of von Mises stress (VMS), maximum principal strain (MPS), and cumulative strain damage measure (CSDM) were used to compare the two methods. The results showed that the VMS of the brain mainly concentrated at the lower cerebrum and the occipitotemporal region close to the cerebellum. The isolated head modelling strategy predicted higher levels of MPS and CSDM 5%, while the difference was small in the CSDM 10% comparison. This suggests that an isolated head model may not equivalently reflect strain levels below 10% compared to the whole human body model. PMID:26495029

  12. Assessment of prediction skill in equatorial Pacific Ocean in high resolution model of CFS

    NASA Astrophysics Data System (ADS)

    Arora, Anika; Rao, Suryachandra A.; Pillai, Prasanth; Dhakate, Ashish; Salunke, Kiran; Srivastava, Ankur

    2018-01-01

The effect of increasing atmospheric resolution on the prediction skill for the El Niño-Southern Oscillation phenomenon in the Climate Forecast System model is explored in this paper. Improvement in prediction skill for sea surface temperature (SST) and winds at all leads, compared to the low-resolution model, is observed in the tropical Indo-Pacific basin. The high-resolution model is able to capture extreme events reasonably well; as a result, the signal-to-noise ratio is improved. However, the spring predictability barrier (SPB) for summer months in the Nino 3 and Nino 3.4 regions is stronger in the high-resolution model, in spite of the improvement in overall prediction skill and dynamics everywhere else. The anomaly correlation coefficient of SST with observations in the Nino 3.4 region, targeting boreal summer months at lead times of 3-8 months, decreased in the high-resolution model compared with its lower-resolution counterpart. It is noted that a higher-than-observed variance of winds predicted in the spring season over the central equatorial Pacific produces a stronger than normal response in the subsurface ocean, and hence increases the SPB for boreal summer months in the high-resolution model.

  13. Modeling and Control for Microgrids

    NASA Astrophysics Data System (ADS)

    Steenis, Joel

Traditional approaches to modeling microgrids capture the behavior of each inverter operating in a particular network configuration and at a particular operating point. Such models quickly become computationally intensive for large systems. Similarly, traditional approaches to control do not use advanced methodologies and suffer from poor performance and limited operating range. In this document a linear model is derived for an inverter connected to the Thevenin equivalent of a microgrid. This model is then compared to a nonlinear simulation model and analyzed using the open- and closed-loop systems in both the time and frequency domains. The modeling error is quantified, with emphasis on its use for controller design purposes. Control design examples are given using a Glover-McFarlane controller, a gain-scheduled Glover-McFarlane controller, and a bumpless transfer controller, which are compared to the standard droop control approach. These examples serve as a guide to illustrate the use of multi-variable modeling techniques in the context of robust controller design and show that gain-scheduled MIMO control techniques can extend the operating range of a microgrid. A hardware implementation is used to compare constant-gain droop controllers with Glover-McFarlane controllers and shows a clear advantage of the Glover-McFarlane approach.

  14. Mortality Predicted Accuracy for Hepatocellular Carcinoma Patients with Hepatic Resection Using Artificial Neural Network

    PubMed Central

    Chiu, Herng-Chia; Ho, Te-Wei; Lee, King-Teh; Chen, Hong-Yaw; Ho, Wen-Hsien

    2013-01-01

    The aim of this present study is firstly to compare significant predictors of mortality for hepatocellular carcinoma (HCC) patients undergoing resection between artificial neural network (ANN) and logistic regression (LR) models and secondly to evaluate the predictive accuracy of ANN and LR in different survival year estimation models. We constructed a prognostic model for 434 patients with 21 potential input variables by Cox regression model. Model performance was measured by numbers of significant predictors and predictive accuracy. The results indicated that ANN had double to triple numbers of significant predictors at 1-, 3-, and 5-year survival models as compared with LR models. Scores of accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC) of 1-, 3-, and 5-year survival estimation models using ANN were superior to those of LR in all the training sets and most of the validation sets. The study demonstrated that ANN not only had a great number of predictors of mortality variables but also provided accurate prediction, as compared with conventional methods. It is suggested that physicians consider using data mining methods as supplemental tools for clinical decision-making and prognostic evaluation. PMID:23737707
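AUROC, one of the reported performance scores, can be computed without any modeling library via the rank-sum (Mann-Whitney) identity: it is the probability that a randomly chosen positive case outranks a randomly chosen negative one. The patient scores below are hypothetical and untied (the sketch omits tie handling).

```python
import numpy as np

def auroc(scores, labels):
    """AUROC via the Mann-Whitney U statistic (no tie handling)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    order = scores.argsort()
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)   # ranks start at 1
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    u = ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

# Hypothetical mortality-risk scores from two models for six patients
labels = [0, 0, 1, 0, 1, 1]
model_a = [0.1, 0.2, 0.8, 0.3, 0.7, 0.9]   # separates classes perfectly
model_b = [0.1, 0.7, 0.4, 0.3, 0.6, 0.9]   # one positive ranked low
```

Comparing `auroc(model_a, labels)` with `auroc(model_b, labels)` is exactly the kind of head-to-head AUROC comparison the study performed between ANN and LR.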

  15. On the Space-Time Structure of Sheared Turbulence

    NASA Astrophysics Data System (ADS)

    de Maré, Martin; Mann, Jakob

    2016-09-01

    We develop a model that predicts all two-point correlations in high Reynolds number turbulent flow, in both space and time. This is accomplished by combining the design philosophies behind two existing models, the Mann spectral velocity tensor, in which isotropic turbulence is distorted according to rapid distortion theory, and Kristensen's longitudinal coherence model, in which eddies are simultaneously advected by larger eddies as well as decaying. The model is compared with data from both observations and large-eddy simulations and is found to predict spatial correlations comparable to the Mann spectral tensor and temporal coherence better than any known model. Within the developed framework, Lagrangian two-point correlations in space and time are also predicted, and the predictions are compared with measurements of isotropic turbulence. The required input to the models, which are formulated as spectral velocity tensors, can be estimated from measured spectra or be derived from the rate of dissipation of turbulent kinetic energy, the friction velocity and the mean shear of the flow. The developed models can, for example, be used in wind-turbine engineering, in applications such as lidar-assisted feed forward control and wind-turbine wake modelling.

  16. A stochastic model for tumor geometry evolution during radiation therapy in cervical cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yifang; Lee, Chi-Guhn; Chan, Timothy C. Y., E-mail: tcychan@mie.utoronto.ca

    2014-02-15

Purpose: To develop mathematical models to predict the evolution of tumor geometry in cervical cancer undergoing radiation therapy. Methods: The authors develop two mathematical models to estimate tumor geometry change: a Markov model and an isomorphic shrinkage model. The Markov model describes tumor evolution by investigating the change in state (either tumor or nontumor) of voxels on the tumor surface. It assumes that the evolution follows a Markov process. Transition probabilities are obtained using maximum likelihood estimation and depend on the states of neighboring voxels. The isomorphic shrinkage model describes tumor shrinkage or growth in terms of layers of voxels on the tumor surface, instead of modeling individual voxels. The two proposed models were applied to data from 29 cervical cancer patients treated at Princess Margaret Cancer Centre and then compared to a constant volume approach. Model performance was measured using sensitivity and specificity. Results: The Markov model outperformed both the isomorphic shrinkage and constant volume models in terms of the trade-off between sensitivity (target coverage) and specificity (normal tissue sparing). Generally, the Markov model achieved a few percentage points of improvement in either sensitivity or specificity compared to the other models. The isomorphic shrinkage model was comparable to the Markov approach under certain parameter settings. Convex tumor shapes were easier to predict. Conclusions: By modeling tumor geometry change at the voxel level using a probabilistic model, improvements in target coverage and normal tissue sparing are possible. Our Markov model is flexible and has tunable parameters to adjust model performance to meet a range of criteria. Such a model may support the development of an adaptive paradigm for radiation therapy of cervical cancer.
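The maximum-likelihood estimation of transition probabilities reduces to transition counting followed by row normalization. This simplified sketch ignores the neighboring-voxel dependence described in the abstract, and the per-voxel state histories are hypothetical.

```python
import numpy as np

def transition_matrix(sequences, n_states=2):
    """Maximum-likelihood transition probabilities from observed state
    sequences (0 = non-tumor, 1 = tumor for a surface voxel)."""
    counts = np.zeros((n_states, n_states))
    for seq in sequences:
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
    # rows with no observations fall back to uniform probabilities
    counts[counts.sum(axis=1) == 0] = 1
    return counts / counts.sum(axis=1, keepdims=True)

# Hypothetical per-voxel states over five treatment fractions
voxel_histories = [
    [1, 1, 1, 0, 0],   # voxel shrinks out of the tumor mid-treatment
    [1, 1, 0, 0, 0],
    [1, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],   # voxel outside the tumor throughout
]
P = transition_matrix(voxel_histories)
```

The full model would condition these counts on neighbor configurations, giving one such matrix per local context instead of a single global one.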

  17. An Operational Model for the Prediction of Jet Blast

    DOT National Transportation Integrated Search

    2012-01-09

This paper presents an operational model for the prediction of jet blast. The model was developed based upon three modules: a jet exhaust model, a jet centerline decay model, and an aircraft motion model. The final analysis was compared with d...

  18. A comparison of simple global kinetic models for coal devolatilization with the CPD model

    DOE PAGES

    Richards, Andrew P.; Fletcher, Thomas H.

    2016-08-01

Simulations of coal combustors and gasifiers generally cannot incorporate the complexities of advanced pyrolysis models, and hence there is interest in evaluating simpler models over ranges of temperature and heating rate that are applicable to the furnace of interest. In this paper, six different simple model forms are compared to predictions made by the Chemical Percolation Devolatilization (CPD) model. The model forms included three modified one-step models, a simple two-step model, and two new modified two-step models. These simple model forms were compared over a wide range of heating rates (5 × 10^3 to 10^6 K/s) at final temperatures up to 1600 K. Comparisons were made of total volatiles yield as a function of temperature, as well as the ultimate volatiles yield. Advantages and disadvantages for each simple model form are discussed. In conclusion, a modified two-step model with distributed activation energies seems to give the best agreement with CPD model predictions (with the fewest tunable parameters).
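A single first-order one-step model, the simplest of the compared forms, integrates dV/dt = A·exp(−E/RT)·(V∞ − V) along the heating ramp. The kinetic parameters below are hypothetical, chosen only to produce a plausible yield curve, and the integration is a plain explicit Euler sketch rather than the paper's fitting procedure.

```python
import math

def one_step_yield(A, E, v_inf, heating_rate, t_end, dt=1e-5, T0=300.0):
    """Single first-order (one-step) devolatilization model:
    dV/dt = A * exp(-E / (R*T)) * (V_inf - V), with T = T0 + h*t.
    A in 1/s, E in J/mol, heating_rate in K/s."""
    R = 8.314  # J/(mol K)
    v, t = 0.0, 0.0
    while t < t_end:
        T = T0 + heating_rate * t
        v += dt * A * math.exp(-E / (R * T)) * (v_inf - v)
        t += dt
    return v

# Hypothetical kinetics; heating at 1e5 K/s from 300 K to ~1300 K
v = one_step_yield(A=1e8, E=1.2e5, v_inf=0.55, heating_rate=1e5, t_end=0.01)
```

A distributed-activation-energy variant would average this integral over a spread of E values, which is what lets the modified two-step form track CPD predictions across heating rates.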

  19. Comparison of Einstein-Boltzmann solvers for testing general relativity

    NASA Astrophysics Data System (ADS)

    Bellini, E.; Barreira, A.; Frusciante, N.; Hu, B.; Peirone, S.; Raveri, M.; Zumalacárregui, M.; Avilez-Lopez, A.; Ballardini, M.; Battye, R. A.; Bolliet, B.; Calabrese, E.; Dirian, Y.; Ferreira, P. G.; Finelli, F.; Huang, Z.; Ivanov, M. M.; Lesgourgues, J.; Li, B.; Lima, N. A.; Pace, F.; Paoletti, D.; Sawicki, I.; Silvestri, A.; Skordis, C.; Umiltà, C.; Vernizzi, F.

    2018-01-01

    We compare Einstein-Boltzmann solvers that include modifications to general relativity and find that, for a wide range of models and parameters, they agree to a high level of precision. We look at three general purpose codes that primarily model general scalar-tensor theories, three codes that model Jordan-Brans-Dicke (JBD) gravity, a code that models f (R ) gravity, a code that models covariant Galileons, a code that models Hořava-Lifschitz gravity, and two codes that model nonlocal models of gravity. Comparing predictions of the angular power spectrum of the cosmic microwave background and the power spectrum of dark matter for a suite of different models, we find agreement at the subpercent level. This means that this suite of Einstein-Boltzmann solvers is now sufficiently accurate for precision constraints on cosmological and gravitational parameters.

  20. Characteristics of tropical cyclones in high-resolution models in the present climate

    DOE PAGES

    Shaevitz, Daniel A.; Camargo, Suzana J.; Sobel, Adam H.; ...

    2014-12-05

The global characteristics of tropical cyclones (TCs) simulated by several climate models are analyzed and compared with observations. The global climate models were forced by the same sea surface temperature (SST) fields in two types of experiments, using climatological SST and interannually varying SST. TC tracks and intensities are derived from each model's output fields by the group who ran that model, using their own preferred tracking scheme; the study considers the combination of model and tracking scheme as a single modeling system, and compares the properties derived from the different systems. Overall, the observed geographic distribution of global TC frequency was reasonably well reproduced. As expected, with the exception of one model, intensities of the simulated TCs were lower than in observations, to a degree that varies considerably across models.

  1. Filament winding cylinders. II - Validation of the process model

    NASA Technical Reports Server (NTRS)

    Calius, Emilio P.; Lee, Soo-Yong; Springer, George S.

    1990-01-01

    Analytical and experimental studies were performed to validate the model developed by Lee and Springer for simulating the manufacturing process of filament wound composite cylinders. First, results calculated by the Lee-Springer model were compared to results of the Calius-Springer thin cylinder model. Second, temperatures and strains calculated by the Lee-Springer model were compared to data. The data used in these comparisons were generated during the course of this investigation with cylinders made of Hercules IM-6G/HBRF-55 and Fiberite T-300/976 graphite-epoxy tows. Good agreement was found between the calculated and measured stresses and strains, indicating that the model is a useful representation of the winding and curing processes.

  2. Fatigue crack growth with single overload - Measurement and modeling

    NASA Technical Reports Server (NTRS)

    Davidson, D. L.; Hudak, S. J., Jr.; Dexter, R. J.

    1987-01-01

    This paper compares experiments with an analytical model of fatigue crack growth under variable amplitude. The stereoimaging technique was used to measure displacements near the tips of fatigue cracks undergoing simple variations in load amplitude-single overloads and overload/underload combinations. Measured displacements were used to compute strains, and stresses were determined from the strains. Local values of crack driving force (Delta-K effective) were determined using both locally measured opening loads and crack tip opening displacements. Experimental results were compared with simulations made for the same load variation conditions using Newman's FAST-2 model. Residual stresses caused by overloads, crack opening loads, and growth retardation periods were compared.
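The retardation mechanism described here is commonly expressed through Elber's effective stress-intensity range driving a Paris-type growth law. The sketch below is that generic formulation, not Newman's FAST-2 model, and the constants are hypothetical.

```python
def growth_rate(k_max, k_op, C, m):
    """Paris-type growth law driven by the effective stress-intensity
    range (Elber): dK_eff = K_max - K_op, da/dN = C * dK_eff**m."""
    dk_eff = max(k_max - k_op, 0.0)
    return C * dk_eff ** m

# Hypothetical constants: after an overload the crack opening load K_op
# rises, shrinking dK_eff and retarding growth until it relaxes again
baseline = growth_rate(k_max=20.0, k_op=6.0, C=1e-11, m=3.0)
retarded = growth_rate(k_max=20.0, k_op=12.0, C=1e-11, m=3.0)
```

Because the exponent m amplifies changes in dK_eff, even a modest rise in the opening load produces the pronounced retardation periods measured in the experiments.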

  3. Orion Active Thermal Control System Dynamic Modeling Using Simulink/MATLAB

    NASA Technical Reports Server (NTRS)

    Wang, Xiao-Yen J.; Yuko, James

    2010-01-01

This paper presents dynamic modeling of the crew exploration vehicle (Orion) active thermal control system (ATCS) using Simulink (developed by The MathWorks). The model includes major components of the ATCS, such as heat exchangers and radiator panels. The mathematical models of the heat exchanger and radiator are described first. Four different orbits were used to validate the radiator model. The current model results were compared with results from an independent Thermal Desktop (TD) model (a PC/CAD-based thermal model builder developed by Cullimore & Ring (C&R) Technologies) and showed good agreement for all orbits. In addition, the Orion ATCS performance was presented for three orbits, and the current model results were compared with three sets of solutions: FloCAD (a PC/CAD-based thermal/fluid model builder developed by C&R Technologies) model results, SINDA/FLUINT (a generalized thermal/fluid network-style solver) model results, and independent Simulink model results. For each case, the fluid temperatures at every component on both the crew module and service module sides were plotted and compared. The overall agreement is reasonable for all orbits, with similar behavior and trends for the system. Some discrepancies exist because the control algorithm might vary from model to model. Finally, the ATCS performance for a 45-hr nominal mission timeline was simulated to demonstrate the capability of the model. The results show that the ATCS performs as expected and that approximately 2.3 lb of water was consumed in the sublimator within the 45-hr timeline before Orion docked at the International Space Station.

  4. A Markov Environment-dependent Hurricane Intensity Model and Its Comparison with Multiple Dynamic Models

    NASA Astrophysics Data System (ADS)

    Jing, R.; Lin, N.; Emanuel, K.; Vecchi, G. A.; Knutson, T. R.

    2017-12-01

    A Markov environment-dependent hurricane intensity model (MeHiM) is developed to simulate the climatology of hurricane intensity given the surrounding large-scale environment. The model considers three unobserved discrete states representing, respectively, the storm's slow, moderate, and rapid intensification (and deintensification). Each state is associated with a probability distribution of intensity change. The storm's movement from one state to another, regarded as a Markov chain, is described by a transition probability matrix. The initial state is estimated with a Bayesian approach. All three model components (initial intensity, state transition, and intensity change) depend on environmental variables including potential intensity, vertical wind shear, midlevel relative humidity, and ocean mixing characteristics. This environment-dependent Markov model of hurricane intensity shows a significant improvement over previous statistical models (e.g., linear, nonlinear, and finite mixture models) in estimating the distributions of 6-h and 24-h intensity change, lifetime maximum intensity, and landfall intensity. Here we compare MeHiM with various dynamical models, including a global climate model [the High-Resolution Forecast-Oriented Low Ocean Resolution model (HiFLOR)], a regional hurricane model [the Geophysical Fluid Dynamics Laboratory (GFDL) hurricane model], and a simplified hurricane dynamic model [the Coupled Hurricane Intensity Prediction System (CHIPS)] and its newly developed fast simulator. The MeHiM, developed from reanalysis data, is applied to estimate the intensity of simulated storms for comparison with the dynamical-model predictions under the current climate. The dependence of hurricanes on the environment under current and future projected climates in the various models will also be compared statistically.
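The state-transition structure described above can be sketched as a small simulation. The states, transition matrix, and per-state intensity-change distributions below are hypothetical placeholders for illustration, not the fitted MeHiM parameters (which additionally condition on environmental covariates):

```python
import random

# Illustrative sketch of the MeHiM state structure. All numbers are
# hypothetical placeholders, not fitted values from the paper.
STATES = ["slow", "moderate", "rapid"]
TRANSITION = {                 # rows: P(next state | current state)
    "slow":     [0.80, 0.15, 0.05],
    "moderate": [0.20, 0.60, 0.20],
    "rapid":    [0.10, 0.30, 0.60],
}
CHANGE = {                     # (mean, std dev) of 6-h intensity change, kt
    "slow":     (0.0, 2.0),
    "moderate": (4.0, 3.0),
    "rapid":    (10.0, 4.0),
}

def simulate_intensity(v0, steps, state="slow", rng=None):
    """Simulate a storm intensity track (kt) through the hidden Markov states."""
    rng = rng or random.Random(0)
    track, v = [v0], v0
    for _ in range(steps):
        mu, sigma = CHANGE[state]
        v = max(0.0, v + rng.gauss(mu, sigma))
        track.append(v)
        # Draw the next hidden state from the current state's transition row.
        state = rng.choices(STATES, weights=TRANSITION[state])[0]
    return track

track = simulate_intensity(v0=35.0, steps=20)
```

In the full model each transition row and each change distribution would be a function of the environment (potential intensity, shear, humidity, ocean mixing) rather than a fixed constant.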

  5. Nutrient intakes among children and adolescents eating usual pizza products in school lunch compared with pizza meeting HealthierUS School Challenge criteria.

    PubMed

    Hur, In Young; Marquart, Len; Reicks, Marla

    2014-05-01

    Pizza is a popular food that can contribute to high intakes of saturated fat and sodium among children and adolescents. The objective of this study was to compare daily nutrient intakes when a pizza product meeting the US Department of Agriculture's criteria for competitive food entrées under the HealthierUS School Challenge (HUSSC) was substituted for usual pizza products consumed during foodservice-prepared school lunch. The study used National Health and Nutrition Examination Survey (2005-2008) dietary recall data from a cross-sectional sample of US children and adolescents (age 5 to 18 years, n=337) who ate pizza during school lunch on 1 day of dietary recall. Daily nutrient intakes based on the consumption of usual pizza products for school lunch (pre-modeled) were compared with intakes modeled by substituting nutrient values from an HUSSC whole-grain pizza product (post-modeled). Paired t tests were used to make the comparison. Post-modeled intakes were lower in daily energy, carbohydrate, total fat, saturated fat, cholesterol, and sodium compared with pre-modeled intakes among children and adolescents (P<0.01). Protein, dietary fiber, vitamin A, and potassium intakes were higher in the post-modeled intake condition compared with the pre-modeled condition (P<0.01). Substituting the healthier pizza product for usual pizza products may significantly improve dietary quality of children and adolescents eating pizza for school lunch, indicating that it could be an effective approach to improve the nutritional quality of school lunch programs. Copyright © 2014 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.
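The pre- vs post-modeled comparison above relies on paired t tests over matched intake records. A minimal sketch of that test, using hypothetical intake values rather than the NHANES data:

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t statistic and degrees of freedom for matched samples."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
    return t, n - 1

# Hypothetical daily sodium intakes (mg) for five children,
# pre-modeled vs post-modeled (healthier pizza substituted).
pre = [2900, 3100, 2750, 3300, 2980]
post = [2600, 2850, 2500, 3050, 2700]
t_stat, df = paired_t(pre, post)   # t_stat < 0: post-modeled intakes are lower
```

The pairing matters: each child serves as their own control, so the test is on within-child differences rather than on two independent groups.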

  6. Assessment of orthodontic treatment need: a comparison of study models and facial photographs.

    PubMed

    Sherlock, Joseph M; Cobourne, Martyn T; McDonald, Fraser

    2008-02-01

    The current study aims to examine how orthodontic treatment need is prioritized depending upon whether dental study models or facial photographs are used as the means of assessment. A group of three orthodontists and three postgraduate orthodontic students assessed: (i) dental attractiveness; and (ii) need for orthodontic treatment in 40 subjects (19 males, 21 females). The 40 subjects displayed a range of malocclusions. Separate assessments were made from study models and facial photographs. There was a bias towards higher scores for dental attractiveness from facial photographs compared with assessment of study casts for all examiners; this was statistically significant for five of the six examiners (P = 0.001-0.101). Overall, the need for orthodontic treatment was rated 18.9% higher from study models compared with facial photographs (P < 0.001). Reproducibility analyses showed considerable variation in intra- and inter-examiner agreement. This study shows that a group of three orthodontists and three postgraduate students in orthodontics: (i) rated orthodontic treatment need higher from study models compared with facial photographs; and (ii) rated dental attractiveness higher from facial photographs compared with study models. It is suggested that the variable intra-examiner agreement may result from the assessment of orthodontic treatment need and dental attractiveness in the absence of any specific assessment criteria. The poor reproducibility of these assessments in the absence of strict criteria suggests the need to use an appropriate index.

  7. A model of clutter for complex, multivariate geospatial displays.

    PubMed

    Lohrenz, Maura C; Trafton, J Gregory; Beck, R Melissa; Gendron, Marlin L

    2009-02-01

    A novel model for measuring clutter in complex geospatial displays was compared with human ratings of subjective clutter as a measure of convergent validity. The new model is called the color-clustering clutter (C3) model. Clutter is a known problem in displays of complex data and has been shown to affect target search performance. Previous clutter models are discussed and compared with the C3 model. Two experiments were performed. In Experiment 1, participants performed subjective clutter ratings on six classes of information visualizations; the empirical results were used to set two free parameters in the model. In Experiment 2, participants performed subjective clutter ratings on aeronautical charts. Both experiments compared and correlated empirical data with model predictions. The first experiment resulted in a .76 correlation between ratings and C3; the second resulted in a .86 correlation, significantly better than results from a model developed by Rosenholtz et al. Outliers to our correlation suggest further improvements to C3. We suggest that (a) the C3 model is a good predictor of subjective impressions of clutter in geospatial displays, (b) geospatial clutter is a function of color density and saliency (the primary C3 components), and (c) pattern analysis techniques could further improve C3. The C3 model could be used to improve the design of electronic geospatial displays by suggesting when a display will be too cluttered for its intended audience.

  8. Model based design introduction: modeling game controllers to microprocessor architectures

    NASA Astrophysics Data System (ADS)

    Jungwirth, Patrick; Badawy, Abdel-Hameed

    2017-04-01

    We present an introduction to model based design. Model based design uses a visual representation, generally a block diagram, to model and incrementally develop a complex system. It is a commonly used design methodology for digital signal processing, control systems, and embedded systems. The philosophy of model based design is to solve a problem one step at a time; the approach can be compared to a series of steps that converge to a solution. A block diagram simulation tool allows a design to be simulated with real world measurement data. For example, if an analog control system is being upgraded to a digital control system, the analog sensor input signals can be recorded. The digital control algorithm can then be simulated with the real world sensor data, and the output from the simulated digital control system can be compared to the old analog based control system. Model based design can be compared to Agile software development. The Agile goal is to develop working software in incremental steps, with progress measured in completed and tested code units; in model based design, progress is measured in completed and tested blocks. We present a concept for a video game controller and then use model based design to iterate the design towards a working system. We also describe a model based design effort to develop an OS Friendly Microprocessor Architecture based on RISC-V.

  9. Sensitivity of Chemical Shift-Encoded Fat Quantification to Calibration of Fat MR Spectrum

    PubMed Central

    Wang, Xiaoke; Hernando, Diego; Reeder, Scott B.

    2015-01-01

    Purpose To evaluate the impact of different fat spectral models on proton density fat-fraction (PDFF) quantification using chemical shift-encoded (CSE) MRI. Material and Methods Simulations and in vivo imaging were performed. In a simulation study, spectral models of fat were compared pairwise. Comparison of magnitude fitting and mixed fitting was performed over a range of echo times and fat fractions. In vivo acquisitions from 41 patients were reconstructed using 7 published spectral models of fat. T2-corrected STEAM-MRS was used as reference. Results Simulations demonstrate that imperfectly calibrated spectral models of fat result in biases that depend on echo times and fat fraction. Mixed fitting is more robust against this bias than magnitude fitting. Multi-peak spectral models showed much smaller differences among themselves than when compared to the single-peak spectral model. In vivo studies show all multi-peak models agree better (for mixed fitting, slope ranged from 0.967–1.045 using linear regression) with reference standard than the single-peak model (for mixed fitting, slope=0.76). Conclusion It is essential to use a multi-peak fat model for accurate quantification of fat with CSE-MRI. Further, fat quantification techniques using multi-peak fat models are comparable and no specific choice of spectral model is shown to be superior to the rest. PMID:25845713

  10. Using ‘particle in a box’ models to calculate energy levels in semiconductor quantum well structures

    NASA Astrophysics Data System (ADS)

    Ebbens, A. T.

    2018-07-01

    Although infinite potential ‘particle in a box’ models are widely used to introduce quantised energy levels, their predictions cannot be quantitatively compared with atomic emission spectra. Here, this problem is overcome by describing how both infinite and finite potential well models can be used to calculate the confined energy levels of semiconductor quantum wells. This is done using physics and mathematics concepts that are accessible to pre-university students. The results of the models are compared with experimental data and their accuracy is discussed.
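For reference, the infinite-well calculation reduces to E_n = n^2 h^2 / (8 m* L^2), with the free-electron mass replaced by the semiconductor effective mass m*. A short sketch, using an assumed 10 nm well width and the textbook GaAs effective mass 0.067 m_e (neither value is taken from the paper):

```python
import math

H = 6.62607015e-34      # Planck constant, J s
M_E = 9.1093837015e-31  # electron rest mass, kg
EV = 1.602176634e-19    # J per eV

def infinite_well_levels(width_m, m_eff, n_max=3):
    """Confined energies E_n = n^2 h^2 / (8 m* L^2) of an infinite square well, in eV."""
    return [n ** 2 * H ** 2 / (8 * m_eff * width_m ** 2) / EV
            for n in range(1, n_max + 1)]

# A 10 nm well with the GaAs effective mass 0.067*m_e (illustrative values).
levels = infinite_well_levels(10e-9, 0.067 * M_E)   # roughly 0.056, 0.22, 0.51 eV
```

A finite well lowers these levels and supports only a finite number of bound states, which is why the paper treats both cases before comparing with experiment.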

  11. Zero-inflated Conway-Maxwell Poisson Distribution to Analyze Discrete Data.

    PubMed

    Sim, Shin Zhu; Gupta, Ramesh C; Ong, Seng Huat

    2018-01-09

    In this paper, we study the zero-inflated Conway-Maxwell Poisson (ZICMP) distribution and develop a regression model. Score and likelihood ratio tests are also implemented for testing the inflation/deflation parameter. Simulation studies are carried out to examine the performance of these tests. A data example is presented to illustrate the concepts. In this example, the proposed model is compared to the well-known zero-inflated Poisson (ZIP) and zero-inflated generalized Poisson (ZIGP) regression models. It is shown that the fit by ZICMP is comparable to or better than these models.
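The ZICMP distribution mixes a point mass at zero with a Conway-Maxwell Poisson pmf. A minimal sketch of the pmf, computed in log space for numerical stability; the parameter values are illustrative only:

```python
import math

def zicmp_pmf(y, lam, nu, pi, max_terms=100):
    """Zero-inflated Conway-Maxwell Poisson pmf:
    P(0) = pi + (1 - pi) / Z,  P(y) = (1 - pi) * lam**y / ((y!)**nu * Z),
    with normalizing constant Z = sum_j lam**j / (j!)**nu (truncated)."""
    # Work in log space so large factorials do not overflow.
    log_terms = [j * math.log(lam) - nu * math.lgamma(j + 1)
                 for j in range(max_terms)]
    m = max(log_terms)
    z_scaled = sum(math.exp(t - m) for t in log_terms)   # Z * exp(-m)
    base = math.exp(y * math.log(lam) - nu * math.lgamma(y + 1) - m) / z_scaled
    return pi + (1 - pi) * base if y == 0 else (1 - pi) * base

# With nu = 1 the CMP component reduces to Poisson; pi adds extra mass at zero.
p0 = zicmp_pmf(0, lam=2.0, nu=1.0, pi=0.3)
```

The dispersion parameter nu is what distinguishes ZICMP from ZIP: nu > 1 gives under-dispersion and nu < 1 over-dispersion relative to the Poisson component.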

  12. A cross-sectional study of 329 farms in England to identify risk factors for ovine clinical mastitis

    PubMed Central

    Cooper, S.; Huntley, S.J.; Crump, R.; Lovatt, F.; Green, L.E.

    2016-01-01

    The aims of this study were to estimate the incidence rate of clinical mastitis (IRCM) and identify risk factors for clinical mastitis in suckler ewes to generate hypotheses for future study. A postal questionnaire was sent to 999 randomly selected English sheep farmers in 2010 to gather data on farmer-reported IRCM and flock management practices for the calendar year 2009; 329 provided usable information. The mean IRCM per flock was 1.2/100 ewes/year (CI: 1.10-1.35). The IRCM was 2.0, 0.9 and 1.3/100 ewes/year for flocks that lambed indoors, outdoors and a combination of both, respectively. Farmers used a variety of management practices before, during and after lambing that were not comparable within one model; therefore, six mixed effects over-dispersed Poisson regression models were developed. Factors significantly associated with increased IRCM were an increasing percentage of the flock with poor udder conformation, an increasing mean number of lambs reared/ewe, and when some or all ewes lambed in barns compared with outdoors (Model 1). For ewes housed in barns before lambing (Model 2), concrete, earth and other materials were associated with an increase in IRCM compared with hardcore floors (an aggregate of broken bricks and stones). For ewes in barns during lambing (Model 3), an increase in IRCM was associated with concrete compared with hardcore flooring and where bedding was stored covered outdoors or in a building compared with bedding stored outdoors uncovered. For ewes in barns after lambing (Model 4), increased IRCM was associated with earth compared with hardcore floors, and when fresh bedding was added once per week compared with every ≤2 days or twice per week. The IRCM was lower for flocks where some or all ewes remained in the same fields before, during and after lambing compared with flocks that did not (Model 5).
Where ewes and lambs were turned outdoors after lambing (Model 6), the IRCM increased as the age of the oldest lambs at turnout increased. We conclude that the reported IRCM is low but highly variable and that the complexity of management of sheep around lambing limits the insight gained from generating flock-level hypotheses for clinical mastitis risk across the whole industry. Whilst indoor production was generally associated with an increased IRCM, indoor lambing was protective for ewes with large litter sizes; we hypothesise that this is possibly because of better nutrition or reduced exposure to poor weather and factors associated with hygiene. PMID:26809634

  13. Contact analysis and experimental investigation of a linear ultrasonic motor.

    PubMed

    Lv, Qibao; Yao, Zhiyuan; Li, Xiang

    2017-11-01

    The effects of surface roughness are not considered in the traditional motor model, which therefore fails to reflect the actual contact mechanism between the stator and slider. An analytical model for calculating the tangential force of a linear ultrasonic motor is proposed in this article. The presented model differs from the previous spring contact model in that the asperities in contact between the stator and slider are considered. The influences of preload and exciting voltage on the tangential force in the moving direction are analyzed. An experiment is performed to verify the feasibility of the proposed model by comparing the simulation results with measured data. Moreover, the proposed model and the spring model are compared; the results reveal that the proposed model is more accurate than the spring model. The discussion is helpful for the design and modeling of linear ultrasonic motors. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Efficient finite element modelling for the investigation of the dynamic behaviour of a structure with bolted joints

    NASA Astrophysics Data System (ADS)

    Omar, R.; Rani, M. N. Abdul; Yunus, M. A.; Mirza, W. I. I. Wan Iskandar; Zin, M. S. Mohd

    2018-04-01

    A simple structure with bolted joints consists of the structural components, bolts and nuts. There are several methods to model structures with bolted joints; however, there is no reliable, efficient and economic modelling method that can accurately predict their dynamic behaviour. Explained in this paper is an investigation that was conducted to obtain an appropriate modelling method for bolted joints. This was carried out by evaluating four different finite element (FE) models of the assembled plates and bolts, namely the solid plates-bolts model, the plates without bolt model, the hybrid plates-bolts model and the simplified plates-bolts model. FE modal analysis was conducted for all four initial FE models of the bolted joints. Results of the FE modal analysis were compared with experimental modal analysis (EMA) results. EMA was performed to extract the natural frequencies and mode shapes of the physical test structure with bolted joints. The evaluation was made by comparing the number of nodes, number of elements, elapsed computer processing unit (CPU) time, and the total percentage of errors of each initial FE model relative to the EMA result. The evaluation showed that the simplified plates-bolts model could most accurately predict the dynamic behaviour of the structure with bolted joints. This study proved that reliable, efficient and economic modelling of bolted joints, mainly the representation of the bolting, plays a crucial role in ensuring the accuracy of the dynamic behaviour prediction.

  15. A comparative modeling study of a dual tracer experiment in a large lysimeter under atmospheric conditions

    NASA Astrophysics Data System (ADS)

    Stumpp, C.; Nützmann, G.; Maciejewski, S.; Maloszewski, P.

    2009-09-01

    In this paper, five model approaches with different physical and mathematical concepts, varying in their complexity and requirements, were applied to identify the transport processes in the unsaturated zone. The applicability of these model approaches was compared and evaluated by investigating two tracer breakthrough curves (bromide, deuterium) in a cropped, free-draining lysimeter experiment under natural atmospheric boundary conditions. The data set consisted of time series of water balance, depth-resolved water contents, pressure heads and resident concentrations measured during 800 days. The tracer transport parameters were determined using a simple stochastic approach (stream tube model), three lumped parameter approaches (constant water content model, multi-flow dispersion model, variable flow dispersion model) and a transient model approach. All of them were able to fit the tracer breakthrough curves. The identified transport parameters of each model approach were compared. Despite the differing physical and mathematical concepts, the resulting parameters (mean water contents, mean water flux, dispersivities) of the five model approaches were all in the same range. The results indicate that the flow processes can also be described assuming steady state conditions. Homogeneous matrix flow is dominant, and a small pore volume with enhanced flow velocities near saturation was identified with the variable saturation flow and transport approach. The multi-flow dispersion model also identified preferential flow and additionally suggested a third, less mobile flow component. Due to the high fitting accuracy and parameter similarity, all model approaches gave reliable results.

  16. Reranking candidate gene models with cross-species comparison for improved gene prediction

    PubMed Central

    Liu, Qian; Crammer, Koby; Pereira, Fernando CN; Roos, David S

    2008-01-01

    Background Most gene finders score candidate gene models with state-based methods, typically HMMs, by combining local properties (coding potential, splice donor and acceptor patterns, etc). Competing models with similar state-based scores may be distinguishable with additional information. In particular, functional and comparative genomics datasets may help to select among competing models of comparable probability by exploiting features likely to be associated with the correct gene models, such as conserved exon/intron structure or protein sequence features. Results We have investigated the utility of a simple post-processing step for selecting among a set of alternative gene models, using global scoring rules to rerank competing models for more accurate prediction. For each gene locus, we first generate the K best candidate gene models using the gene finder Evigan, and then rerank these models using comparisons with putative orthologous genes from closely-related species. Candidate gene models with lower scores in the original gene finder may be selected if they exhibit strong similarity to probable orthologs in coding sequence, splice site location, or signal peptide occurrence. Experiments on Drosophila melanogaster demonstrate that reranking based on cross-species comparison outperforms the best gene models identified by Evigan alone, and also outperforms the comparative gene finders GeneWise and Augustus+. Conclusion Reranking gene models with cross-species comparison improves gene prediction accuracy. This straightforward method can be readily adapted to incorporate additional lines of evidence, as it requires only a ranked source of candidate gene models. PMID:18854050

  17. A Comparison of Two Balance Calibration Model Building Methods

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard; Ulbrich, Norbert

    2007-01-01

    Simulated strain-gage balance calibration data is used to compare the accuracy of two balance calibration model building methods for different noise environments and calibration experiment designs. The first building method obtains a math model for the analysis of balance calibration data after applying a candidate math model search algorithm to the calibration data set. The second building method uses stepwise regression analysis in order to construct a model for the analysis. Four balance calibration data sets were simulated in order to compare the accuracy of the two math model building methods. The simulated data sets were prepared using the traditional One Factor At a Time (OFAT) technique and the Modern Design of Experiments (MDOE) approach. Random and systematic errors were introduced in the simulated calibration data sets in order to study their influence on the math model building methods. Residuals of the fitted calibration responses and other statistical metrics were compared in order to evaluate the calibration models developed with different combinations of noise environment, experiment design, and model building method. Overall, predicted math models and residuals of both math model building methods show very good agreement. Significant differences in model quality were attributable to noise environment, experiment design, and their interaction. Generally, the addition of systematic error significantly degraded the quality of calibration models developed from OFAT data by either method, but MDOE experiment designs were more robust with respect to the introduction of a systematic component of the unexplained variance.

  18. Reconciling paleodistribution models and comparative phylogeography in the Wet Tropics rainforest land snail Gnarosophia bellendenkerensis (Brazier 1875).

    PubMed

    Hugall, Andrew; Moritz, Craig; Moussalli, Adnan; Stanisic, John

    2002-04-30

    Comparative phylogeography has proved useful for investigating biological responses to past climate change and is strongest when combined with extrinsic hypotheses derived from the fossil record or geology. However, the rarity of species with sufficient, spatially explicit fossil evidence restricts the application of this method. Here, we develop an alternative approach in which spatial models of predicted species distributions under serial paleoclimates are compared with a molecular phylogeography, in this case for a snail endemic to the rainforests of North Queensland, Australia. We also compare the phylogeography of the snail to those from several endemic vertebrates and use consilience across all of these approaches to enhance biogeographical inference for this rainforest fauna. The snail mtDNA phylogeography is consistent with predictions from paleoclimate modeling in relation to the location and size of climatic refugia through the late Pleistocene-Holocene and broad patterns of extinction and recolonization. There is general agreement between quantitative estimates of population expansion from sequence data (using likelihood and coalescent methods) vs. distributional modeling. The snail phylogeography represents a composite of both common and idiosyncratic patterns seen among vertebrates, reflecting the geographically finer scale of persistence and subdivision in the snail. In general, this multifaceted approach, combining spatially explicit paleoclimatological models and comparative phylogeography, provides a powerful approach to locating historical refugia and understanding species' responses to them.

  19. Reconciling paleodistribution models and comparative phylogeography in the Wet Tropics rainforest land snail Gnarosophia bellendenkerensis (Brazier 1875)

    PubMed Central

    Hugall, Andrew; Moritz, Craig; Moussalli, Adnan; Stanisic, John

    2002-01-01

    Comparative phylogeography has proved useful for investigating biological responses to past climate change and is strongest when combined with extrinsic hypotheses derived from the fossil record or geology. However, the rarity of species with sufficient, spatially explicit fossil evidence restricts the application of this method. Here, we develop an alternative approach in which spatial models of predicted species distributions under serial paleoclimates are compared with a molecular phylogeography, in this case for a snail endemic to the rainforests of North Queensland, Australia. We also compare the phylogeography of the snail to those from several endemic vertebrates and use consilience across all of these approaches to enhance biogeographical inference for this rainforest fauna. The snail mtDNA phylogeography is consistent with predictions from paleoclimate modeling in relation to the location and size of climatic refugia through the late Pleistocene-Holocene and broad patterns of extinction and recolonization. There is general agreement between quantitative estimates of population expansion from sequence data (using likelihood and coalescent methods) vs. distributional modeling. The snail phylogeography represents a composite of both common and idiosyncratic patterns seen among vertebrates, reflecting the geographically finer scale of persistence and subdivision in the snail. In general, this multifaceted approach, combining spatially explicit paleoclimatological models and comparative phylogeography, provides a powerful approach to locating historical refugia and understanding species' responses to them. PMID:11972064

  20. Translating landfill methane generation parameters among first-order decay models.

    PubMed

    Krause, Max J; Chickering, Giles W; Townsend, Timothy G

    2016-11-01

    Landfill gas (LFG) generation is predicted by a first-order decay (FOD) equation that incorporates two parameters: a methane generation potential (L0) and a methane generation rate (k). Because non-hazardous waste landfills may accept many types of waste streams, multiphase models have been developed in an attempt to more accurately predict methane generation from heterogeneous waste streams. The ability of a single-phase FOD model to predict methane generation using weighted-average methane generation parameters and tonnages translated from multiphase models was assessed in two exercises. In the first exercise, waste composition from four Danish landfills represented by low-biodegradable waste streams was modeled in the Afvalzorg Multiphase Model and methane generation was compared to the single-phase Intergovernmental Panel on Climate Change (IPCC) Waste Model and LandGEM. In the second exercise, waste composition represented by IPCC waste components was modeled in the multiphase IPCC model and compared to the single-phase LandGEM and Australia's Solid Waste Calculator (SWC). In both cases, weight-averaging of methane generation parameters from waste composition data in single-phase models was effective in predicting cumulative methane generation to within -7% to +6% of the multiphase models. The results underscore the understanding that multiphase models will not necessarily improve LFG generation prediction, because the uncertainty of the method rests largely within the input parameters. A unique method of calculating the methane generation rate constant by mass of anaerobically degradable carbon (kc) was presented and compared to existing methods, providing a better fit in 3 of 8 scenarios. Generally, single-phase models with weighted-average inputs can accurately predict methane generation from multiple waste streams with varied characteristics; weighted averages should therefore be used instead of regional default values when comparing models.
Translating multiphase first-order decay model input parameters by weighted average shows that single-phase models can predict cumulative methane generation within the level of uncertainty of many of the input parameters as defined by the Intergovernmental Panel on Climate Change (IPCC), which indicates that decreasing the uncertainty of the input parameters will make the model more accurate rather than adding multiple phases or input parameters.
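The single-phase vs multiphase translation described above can be illustrated with the cumulative form of first-order decay, G(t) = L0 * M * (1 - e^(-k*t)), comparing a per-stream sum against a single run with mass-weighted-average parameters. All stream masses and parameters below are hypothetical, not values from the study:

```python
import math

def cumulative_ch4(mass_mg, l0, k, years):
    """Cumulative methane (m^3) from first-order decay: L0 * M * (1 - e^(-k*t))."""
    return l0 * mass_mg * (1.0 - math.exp(-k * years))

# Hypothetical waste streams: (mass in Mg, L0 in m^3 CH4/Mg, k in 1/yr).
streams = [(10000, 120, 0.10),   # readily degradable fraction
           (5000,   60, 0.02)]   # slowly degradable fraction

# Multiphase prediction: model each stream separately and sum.
multi = sum(cumulative_ch4(m, l0, k, 30) for m, l0, k in streams)

# Single-phase prediction with mass-weighted-average parameters.
# Mass-weighting L0 preserves the total generation potential exactly;
# mass-weighting k is the approximation being tested.
total_mass = sum(m for m, _, _ in streams)
l0_avg = sum(m * l0 for m, l0, _ in streams) / total_mass
k_avg = sum(m * k for m, _, k in streams) / total_mass
single = cumulative_ch4(total_mass, l0_avg, k_avg, 30)
```

For these illustrative inputs the two predictions land within a few percent of each other, the same order of discrepancy (-7% to +6%) the study reports for real waste compositions.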

  1. Creep Damage Analysis of a Lattice Truss Panel Structure

    NASA Astrophysics Data System (ADS)

    Jiang, Wenchun; Li, Shaohua; Luo, Yun; Xu, Shugen

    2017-01-01

    The creep failure of a lattice truss sandwich panel structure has been predicted by the finite element method (FEM). The creep damage is calculated from three kinds of stresses: the as-brazed residual stress, the operating thermal stress and the mechanical load. The creep damage under tensile and compressive loads has been calculated and compared, and the creep rates calculated by FEM and by the Gibson-Ashby and Hodge-Dunand models have been compared. The results show that the creep failure is located at the fillet under both tensile and compressive loads. The damage rate at the fillet under tensile load is 50 times that under compressive load. The lattice truss panel structure has better creep resistance under compressive load than under tensile load, because the creep and the stress triaxiality at the fillet are decreased under compressive load. The maximum creep strain at the fillet and the equivalent creep strain of the panel structure increase with increasing applied load. Of the Gibson-Ashby and Hodge-Dunand models, the modified Gibson-Ashby model gives predictions in good agreement with the FEM results. However, a more accurate model considering the size effect of the structure still needs to be developed.

  2. Comparing an Atomic Model or Structure to a Corresponding Cryo-electron Microscopy Image at the Central Axis of a Helix.

    PubMed

    Zeil, Stephanie; Kovacs, Julio; Wriggers, Willy; He, Jing

    2017-01-01

    Three-dimensional density maps of biological specimens from cryo-electron microscopy (cryo-EM) can be interpreted in the form of atomic models that are modeled into the density, or they can be compared to known atomic structures. When the central axis of a helix is detectable in a cryo-EM density map, it is possible to quantify the agreement between this central axis and a central axis calculated from the atomic model or structure. We propose a novel arc-length association method to compare the two axes reliably. This method was applied to 79 helices in simulated density maps and six case studies using cryo-EM maps at 6.4-7.7 Å resolution. The arc-length association method is then compared to three existing measures that evaluate the separation of two helical axes: a two-way distance between point sets, the length difference between two axes, and the individual amino acid detection accuracy. The results show that our proposed method sensitively distinguishes lateral and longitudinal discrepancies between the two axes, which makes the method particularly suitable for the systematic investigation of cryo-EM map-model pairs.
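One of the comparison measures mentioned above is a two-way distance between point sets. A plausible sketch of such a measure (a symmetric average nearest-neighbor distance; the paper's exact definition may differ), applied to two hypothetical sampled axes:

```python
import math

def _avg_nearest(a, b):
    """Average distance from each point in a to its nearest point in b."""
    return sum(min(math.dist(p, q) for q in b) for p in a) / len(a)

def two_way_distance(axis1, axis2):
    """Symmetric two-way distance between two axes sampled as 3-D points."""
    return 0.5 * (_avg_nearest(axis1, axis2) + _avg_nearest(axis2, axis1))

# Two hypothetical axes: a straight segment along z and a copy shifted 1 unit in x.
axis_a = [(0.0, 0.0, float(z)) for z in range(10)]
axis_b = [(1.0, 0.0, float(z)) for z in range(10)]
d = two_way_distance(axis_a, axis_b)   # 1.0 for this pure lateral shift
```

A pointwise measure like this reports overall separation but cannot say whether the discrepancy is lateral or longitudinal, which is the limitation the arc-length association method is designed to address.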

  3. Comparing an Atomic Model or Structure to a Corresponding Cryo-electron Microscopy Image at the Central Axis of a Helix

    PubMed Central

    Zeil, Stephanie; Kovacs, Julio; Wriggers, Willy

    2017-01-01

    Three-dimensional density maps of biological specimens from cryo-electron microscopy (cryo-EM) can be interpreted in the form of atomic models that are modeled into the density, or they can be compared to known atomic structures. When the central axis of a helix is detectable in a cryo-EM density map, it is possible to quantify the agreement between this central axis and a central axis calculated from the atomic model or structure. We propose a novel arc-length association method to compare the two axes reliably. This method was applied to 79 helices in simulated density maps and six case studies using cryo-EM maps at 6.4–7.7 Å resolution. The arc-length association method is then compared to three existing measures that evaluate the separation of two helical axes: a two-way distance between point sets, the length difference between two axes, and the individual amino acid detection accuracy. The results show that our proposed method sensitively distinguishes lateral and longitudinal discrepancies between the two axes, which makes the method particularly suitable for the systematic investigation of cryo-EM map–model pairs. PMID:27936925

  4. Development, Verification and Use of Gust Modeling in the NASA Computational Fluid Dynamics Code FUN3D

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    2012-01-01

    This paper presents the implementation of a gust modeling capability in the CFD code FUN3D. The gust capability is verified by computing the response of an airfoil to a sharp-edged gust and comparing the result with theory; the present simulations are also compared with other CFD gust simulations. This paper additionally serves as a user's manual for FUN3D gust analyses using a variety of gust profiles. Finally, the development of an Auto-Regressive Moving-Average (ARMA) reduced order gust model using a gust with a Gaussian profile in the FUN3D code is presented. ARMA-simulated results for a sequence of one-minus-cosine gusts are shown to compare well with the same gust profile computed with FUN3D. Proper Orthogonal Decomposition (POD) is combined with the ARMA modeling technique to predict the time-varying pressure coefficient increment distribution due to a novel gust profile. The aeroelastic response of a pitch/plunge airfoil to a gust environment is computed with a reduced order model and compared with a direct simulation of the system in the FUN3D code; the two results agree very well.
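The ARMA reduced-order idea can be illustrated with a minimal discrete difference equation driven by a one-minus-cosine gust. This is a hedged toy sketch, not the FUN3D implementation: the coefficients `a` and `b` below are hypothetical, whereas in practice they would be identified from CFD training responses.

```python
import math

def one_minus_cosine_gust(n, length, amplitude=1.0):
    """Discrete one-minus-cosine gust profile of given length (samples)."""
    if 0 <= n < length:
        return 0.5 * amplitude * (1.0 - math.cos(2.0 * math.pi * n / length))
    return 0.0

def simulate_arma(a, b, u, steps):
    """y[n] = sum_i a[i]*y[n-1-i] + sum_j b[j]*u[n-j], zero initial state."""
    y = [0.0] * steps
    for n in range(steps):
        ar = sum(a[i] * y[n - 1 - i] for i in range(len(a)) if n - 1 - i >= 0)
        ma = sum(b[j] * u[n - j] for j in range(len(b)) if n - j >= 0)
        y[n] = ar + ma
    return y

# Response to a 20-sample gust: rises with the gust, then decays geometrically.
u = [one_minus_cosine_gust(n, 20) for n in range(100)]
y = simulate_arma([0.8], [0.2], u, 100)
```

Once the coefficients are identified, evaluating such a recurrence costs almost nothing compared with a full CFD run, which is the point of the reduced order model.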

  5. Circulation and rainfall climatology of a 10-year (1979 - 1988) integration with the Goddard Laboratory for atmospheres general circulation model

    NASA Technical Reports Server (NTRS)

    Kim, J.-H.; Sud, Y. C.

    1993-01-01

    A 10-year (1979-1988) integration of the Goddard Laboratory for Atmospheres (GLA) general circulation model (GCM) under the Atmospheric Model Intercomparison Project (AMIP) is analyzed and compared with observations. The first-moment fields of the circulation variables, as well as hydrological variables including precipitation, evaporation, and soil moisture, are presented. Our goals are (1) to produce a benchmark documentation of the GLA GCM for future model improvements; (2) to examine systematic errors between the simulated and the observed circulation, precipitation, and hydrologic cycle; (3) to examine the interannual variability of the simulated atmosphere and compare it with observations; and (4) to examine the ability of the model to capture major climate anomalies in response to events such as El Nino and La Nina. The 10-year mean seasonal and annual simulated circulation is quite reasonable compared to the analyzed circulation, except in the polar regions and areas of high orography. Precipitation over the tropics is quite well simulated, and the signal of El Nino/La Nina episodes can be easily identified. The time series of evaporation and soil moisture in the 12 biomes of the biosphere also show reasonable patterns compared to the estimated evaporation and soil moisture.

  6. Estimating comparable English healthcare costs for multiple diseases and unrelated future costs for use in health and public health economic modelling.

    PubMed

    Briggs, Adam D M; Scarborough, Peter; Wolstenholme, Jane

    2018-01-01

    Healthcare interventions, and particularly those in public health, may affect multiple diseases and significantly prolong life. No consensus currently exists on how to estimate comparable healthcare costs across multiple diseases for use in health and public health cost-effectiveness models. We aim to describe a method for estimating comparable disease-specific English healthcare costs, as well as future healthcare costs from diseases unrelated to those modelled. We use routine national datasets, including programme budgeting data and cost curves from NHS England, to estimate annual per-person costs for diseases included in the PRIMEtime model, as well as age- and sex-specific costs due to unrelated diseases. The 2013/14 annual cost to NHS England per prevalent case varied from £3,074 for pancreatic cancer to £314 for liver disease. Costs due to unrelated diseases increase with age, except for a secondary peak at 30-34 years for women reflecting maternity resource use. The methodology described allows health and public health economic modellers to estimate comparable English healthcare costs for multiple diseases, facilitating the direct comparison of different health and public health interventions and enabling better decision making.
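The per-case arithmetic behind such estimates is simple and can be sketched as follows. The figures and age bands below are hypothetical placeholders for illustration, not the paper's data; the real method additionally apportions programme budgets across care settings.

```python
def cost_per_prevalent_case(programme_spend, prevalent_cases):
    """Annual disease-specific cost per prevalent case."""
    return programme_spend / prevalent_cases

# Hypothetical age/sex lookup table for annual costs from unrelated diseases.
UNRELATED_COST = {
    ("F", "30-34"): 1200.0,  # secondary peak reflecting maternity resource use
    ("F", "70-74"): 2900.0,
    ("M", "30-34"): 600.0,
    ("M", "70-74"): 3100.0,
}

def unrelated_future_cost(sex, age_band):
    """Annual cost from diseases unrelated to those explicitly modelled."""
    return UNRELATED_COST[(sex, age_band)]
```

Keeping related and unrelated costs separate lets a model credit an intervention with avoided disease costs while still charging the healthcare costs of added life-years.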

  7. A data-driven multi-model methodology with deep feature selection for short-term wind forecasting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Cong; Cui, Mingjian; Hodge, Bri-Mathias

    With growing wind penetration into power systems worldwide, improving wind power forecasting accuracy is becoming increasingly important to ensure continued economic and reliable power system operations. In this paper, a data-driven multi-model wind forecasting methodology is developed with a two-layer ensemble machine learning technique. The first layer is composed of multiple machine learning models that generate individual forecasts. A deep feature selection framework is developed to determine the most suitable inputs to the first-layer machine learning models. Then, a blending algorithm is applied in the second layer to create an ensemble of the forecasts produced by first-layer models and to generate both deterministic and probabilistic forecasts. This two-layer model seeks to exploit the statistically different characteristics of each machine learning algorithm. A number of machine learning algorithms are selected and compared in both layers. The developed multi-model wind forecasting methodology is compared to several benchmarks and evaluated on 1-hour-ahead wind speed forecasting at seven locations of the Surface Radiation network. Numerical results show that, compared to single-algorithm models, the developed multi-model framework with the deep feature selection procedure improves forecasting accuracy by up to 30%.
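A toy version of the two-layer idea might look like the sketch below: two trivial first-layer forecasters (persistence and a moving average) and a second-layer blend weight fit by grid search on training data. This is illustrative only; the paper's first layer uses full machine learning models and its blender produces probabilistic as well as deterministic forecasts.

```python
def persistence(history):
    """First-layer model 1: next value equals the last observed value."""
    return history[-1]

def moving_average(history, k=3):
    """First-layer model 2: next value equals the mean of the last k values."""
    return sum(history[-k:]) / k

def fit_blend_weight(series):
    """Second layer: grid-search the weight w in [0, 1] minimizing the
    squared 1-step-ahead error of w*persistence + (1-w)*moving_average."""
    best_w, best_err = 0.0, float("inf")
    for i in range(101):
        w = i / 100
        err = 0.0
        for t in range(3, len(series)):
            hist = series[:t]
            pred = w * persistence(hist) + (1 - w) * moving_average(hist)
            err += (pred - series[t]) ** 2
        if err < best_err:
            best_w, best_err = w, err
    return best_w

w = fit_blend_weight(list(range(20)))  # trending data favors persistence
```

The blend weight adapts to whichever first-layer model's error characteristics suit the data, which is the statistical intuition behind stacking-style ensembles.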

  8. Numeric, Agent-based or System Dynamics Model? Which Modeling Approach is the Best for Vast Population Simulation?

    PubMed

    Cimler, Richard; Tomaskova, Hana; Kuhnova, Jitka; Dolezal, Ondrej; Pscheidl, Pavel; Kuca, Kamil

    2018-01-01

    Alzheimer's disease is one of the most common mental illnesses: it is posited that more than 25% of the population is affected by some mental disease during their lifetime. Treatment of each patient draws resources from the economy concerned, so it is important to quantify the potential economic impact. Agent-based, system dynamics, and numerical approaches to dynamic modeling of the population of the European Union and its patients with Alzheimer's disease are presented in this article. Simulations, their characteristics, and the results from the different modeling tools are compared, and the results of these approaches are checked against EU population growth predictions from Eurostat, the statistical office of the EU. The methodology for creating the models is described, all three modeling approaches are compared, and the suitability of each approach for population modeling is discussed. In this case study, all three approaches gave results consistent with the EU population prediction. Moreover, we were able to predict the number of patients with AD and, depending on the modeling method, to monitor different characteristics of the population. Copyright © Bentham Science Publishers.

  9. Systematic comparison of the behaviors produced by computational models of epileptic neocortex.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warlaumont, A. S.; Lee, H. C.; Benayoun, M.

    2010-12-01

    Two existing models of brain dynamics in epilepsy, one detailed (i.e., realistic) and one abstract (i.e., simplified), are compared in terms of behavioral range and match to in vitro mouse recordings. A new method is introduced for comparing computational models that may have very different forms. First, high-level metrics were extracted from model and in vitro output time series. A principal components analysis was then performed over these metrics to obtain a reduced set of derived features. These features define a low-dimensional behavior space in which quantitative measures of behavioral range and degree of match to real data can be obtained. The detailed and abstract models and the mouse recordings overlapped considerably in behavior space. Both the range of behaviors and the similarity to mouse data were similar between the detailed and abstract models. When no high-level metrics were used and principal components analysis was computed over raw time series, the models overlapped minimally with the mouse recordings. The method introduced here is suitable for comparing different kinds of model data with each other and with real brain recordings. It appears that, despite differences in form and computational expense, detailed and abstract models do not necessarily differ in their behaviors.
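The first stage of this comparison, extracting high-level metrics from each time series and standardizing them before dimensionality reduction, can be sketched as follows. The metric choices here are stand-ins for the paper's metrics, and the PCA step is omitted for brevity; the point is that very different models become comparable once reduced to a common metric vector.

```python
def series_metrics(ts):
    """A few high-level descriptors of a time series: mean, variance,
    and mean-crossing rate (illustrative stand-ins for the paper's metrics)."""
    m = sum(ts) / len(ts)
    var = sum((x - m) ** 2 for x in ts) / len(ts)
    crossings = sum(1 for a, b in zip(ts, ts[1:]) if (a - m) * (b - m) < 0)
    return [m, var, crossings / (len(ts) - 1)]

def zscore_rows(rows):
    """Standardize each metric (column) to zero mean and unit variance,
    so no single metric dominates the subsequent PCA."""
    cols = list(zip(*rows))
    means = [sum(c) / len(c) for c in cols]
    sds = [max((sum((x - mu) ** 2 for x in c) / len(c)) ** 0.5, 1e-12)
           for c, mu in zip(cols, means)]
    return [[(x - mu) / sd for x, mu, sd in zip(row, means, sds)]
            for row in rows]
```

Each model run and each recording contributes one standardized row; PCA over those rows then yields the low-dimensional behavior space in which overlap is measured.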

  10. Role of meteorology in simulating methane seasonal cycle and growth rate

    NASA Astrophysics Data System (ADS)

    Ghosh, A.; Patra, P. K.; Ishijima, K.; Morimoto, S.; Aoki, S.; Nakazawa, T.

    2012-12-01

    Methane (CH4) is the second most important anthropogenically produced greenhouse gas, whose radiative effect since preindustrial times is comparable to that of carbon dioxide. Methane also contributes to the formation of tropospheric ozone and of water vapor in the stratosphere, further increasing its importance to the Earth's radiative balance. In the present study, model simulations of CH4 for three different emission scenarios have been conducted using the CCSR/NIES/FRCGC Atmospheric General Circulation Model (AGCM) based Chemistry Transport Model (ACTM), with and without nudging of meteorological parameters, for the period 1981-2011. The model simulations are compared with measurements at monthly timescale at surface monitoring stations. We show that the overall trends in CH4 growth rate and seasonal cycle at most measurement sites can be modeled fairly successfully using existing knowledge of CH4 flux trends and seasonality. Detailed analysis reveals that the simulation without nudging has a greater seasonal-cycle amplitude than both the observations and the nudged simulation, and its growth rate is slightly overestimated. For better representation of regional/global flux distribution patterns and strengths in the future, we are exploring various dynamical and chemical aspects of the forward model with and without nudging.

  11. Evaluation of sticky traps for adult Aedes mosquitoes in Malaysia: a potential monitoring and surveillance tool for the efficacy of control strategies.

    PubMed

    Roslan, Muhammad Aidil; Ngui, Romano; Vythilingam, Indra; Sulaiman, Wan Yusoff Wan

    2017-12-01

    The present study compared the performance of sticky traps in order to identify the most effective and practical trap for capturing Aedes aegypti and Aedes albopictus mosquitoes. The study was conducted in three phases. Phase 1 evaluated five sticky trap prototypes (Models A, B, C, D, and E) in release-and-recapture experiments using two mosquito release numbers (five and 50) per replicate. Similarly, Phase 2 compared the performance of Model E against a classical ovitrap modified into a sticky ovitrap, again using release numbers of five and 50 mosquitoes. Further assessment of both traps was carried out in Phase 3, in which both traps were installed in nine sampling grids. Results from Phase 1 showed that Model E recaptured higher numbers of mosquitoes than Models A, B, C, and D. Further assessment between Model E and the modified sticky ovitrap (known as Model F) found that Model F outperformed Model E in both Phases 2 and 3. Thus, Model F was selected as the most effective and practical sticky trap, and could serve as an alternative tool for monitoring and controlling dengue vectors in Malaysia. © 2017 The Society for Vector Ecology.

  12. A warm-season comparison of WRF coupled to the CLM4.0, Noah-MP, and Bucket hydrology land surface schemes over the central USA

    NASA Astrophysics Data System (ADS)

    Van Den Broeke, Matthew S.; Kalin, Andrew; Alavez, Jose Abraham Torres; Oglesby, Robert; Hu, Qi

    2017-11-01

    In climate modeling studies there is a need to choose a suitable land surface model (LSM) while adhering to available resources. In this study, the viability of three LSM options (the Community Land Model version 4.0 [CLM4.0], Noah-MP, and the five-layer thermal diffusion [Bucket] scheme) in the Weather Research and Forecasting model version 3.6 (WRF3.6) was examined for the warm season in a domain centered on the central USA. Model output was compared to Parameter-elevation Relationships on Independent Slopes Model (PRISM) data, a gridded observational dataset including mean monthly temperature and total monthly precipitation. Model output of temperature, precipitation, latent heat (LH) flux, sensible heat (SH) flux, and soil water content (SWC) was compared to observations from sites in the Central and Southern Great Plains region. An overall warm bias was found in CLM4.0 and Noah-MP, with a cool bias of larger magnitude in the Bucket model. The three LSMs produced similar patterns of wet and dry biases. Model outputs of SWC and LH/SH fluxes did not show a consistent bias relative to observations. Both sophisticated LSMs appear to be viable options for simulating the effects of land use change in the central USA.

  13. LDR vs. HDR brachytherapy for localized prostate cancer: the view from radiobiological models.

    PubMed

    King, Christopher R

    2002-01-01

    Permanent LDR brachytherapy and temporary HDR brachytherapy are competing techniques for clinically localized prostate radiotherapy. Although a randomized trial comparing these two forms of brachytherapy will likely never be conducted, a comparative radiobiological modeling analysis proves useful in understanding some of their intrinsic differences, several of which could be exploited to improve outcomes. Radiobiological models based upon the linear-quadratic equations are presented for fractionated external beam, fractionated (192)Ir HDR brachytherapy, and (125)I and (103)Pd LDR brachytherapy. These models incorporate the dose heterogeneities present in brachytherapy, based upon patient-derived dose volume histograms (DVH), as well as tumor doubling times and repair kinetics. Radiobiological parameters are normalized to correspond to three accepted clinical risk factors based upon T-stage, PSA, and Gleason score, to allow comparison of the models with clinical series. Tumor control probabilities (TCP) for LDR and HDR brachytherapy (as monotherapy or combined with external beam) are compared with clinical bNED survival rates, and predictions are made for dose escalation with HDR brachytherapy regimens. Model predictions for dose escalation with external beam agree with clinical data and validate the models and their underlying assumptions. Both LDR and HDR brachytherapy achieve superior tumor control compared with external beam at conventional doses (<70 Gy), but results are similar to those of dose-escalation series. LDR brachytherapy achieves superior tumor control as a boost compared with its use as monotherapy. Stage for stage, both LDR and current HDR regimens achieve similar tumor control rates, in agreement with current clinical data. HDR monotherapy with large-dose fraction sizes might achieve superior tumor control compared with LDR, especially if prostate cancer possesses a high sensitivity to dose fractionation (i.e., if the alpha/beta ratio is low).
Radiobiological models support the current clinical evidence for equivalent outcomes in localized prostate cancer with either LDR or HDR brachytherapy using current dose regimens. However, HDR brachytherapy dose escalation regimens might be able to achieve higher biologically effective doses of irradiation in comparison to LDR, and hence improved outcomes. This advantage over LDR would be amplified should prostate cancer possess a high sensitivity to dose fractionation (i.e., a low alpha/beta ratio) as the current evidence suggests.
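The fractionation-sensitivity argument can be made concrete with the standard linear-quadratic biologically effective dose (BED). The schedules below are illustrative, not the paper's exact regimens: with a low alpha/beta ratio, a few large HDR fractions yield a higher BED than many conventional 2-Gy fractions, and the ordering reverses at a high alpha/beta ratio.

```python
def bed(n_fractions, dose_per_fraction, alpha_beta):
    """Biologically effective dose: BED = n * d * (1 + d / (alpha/beta))."""
    return n_fractions * dose_per_fraction * (1.0 + dose_per_fraction / alpha_beta)

# Hypothetical schedules: 4 x 9.5 Gy HDR vs. 38 x 2 Gy external beam.
hdr_low_ab = bed(4, 9.5, 1.5)    # low alpha/beta: large fractions dominate
ebrt_low_ab = bed(38, 2.0, 1.5)
hdr_high_ab = bed(4, 9.5, 10.0)  # high alpha/beta: fractionation matters less
ebrt_high_ab = bed(38, 2.0, 10.0)
```

The crossover with alpha/beta is exactly why the article's conclusion hinges on whether prostate cancer's alpha/beta ratio is in fact low.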

  14. Assessment of an Explicit Algebraic Reynolds Stress Model

    NASA Technical Reports Server (NTRS)

    Carlson, Jan-Renee

    2005-01-01

    This study assesses an explicit algebraic Reynolds stress turbulence model in the three-dimensional Reynolds-averaged Navier-Stokes (RANS) solver ISAAC (Integrated Solution Algorithm for Arbitrary Configurations). Additionally, it compares solutions for two select configurations between ISAAC and the RANS solver PAB3D. Computed results are compared with direct numerical simulation data, experimental data, or empirical models for several different geometries with compressible, separated, and high Reynolds number flows. In general, the turbulence model matched the data or followed experimental trends well, and for the selected configurations, the computational results of ISAAC closely matched those of PAB3D using the same turbulence model.

  15. A comparison of multiprocessor scheduling methods for iterative data flow architectures

    NASA Technical Reports Server (NTRS)

    Storch, Matthew

    1993-01-01

    A comparative study is made between the Algorithm to Architecture Mapping Model (ATAMM) and three other related multiprocessing models from the published literature. The primary focus of all four models is the non-preemptive scheduling of large-grain iterative data flow graphs as required in real-time systems, control applications, signal processing, and pipelined computations. Important characteristics of the models such as injection control, dynamic assignment, multiple node instantiations, static optimum unfolding, range-chart guided scheduling, and mathematical optimization are identified. The models from the literature are compared with the ATAMM for performance, scheduling methods, memory requirements, and complexity of scheduling and design procedures.

  16. Transport of Solar Wind Fluctuations: A Two-Component Model

    NASA Technical Reports Server (NTRS)

    Oughton, S.; Matthaeus, W. H.; Smith, C. W.; Breech, B.; Isenberg, P. A.

    2011-01-01

    We present a new model for the transport of solar wind fluctuations which treats them as two interacting incompressible components: quasi-two-dimensional turbulence and a wave-like piece. Quantities solved for include the energy, cross helicity, and characteristic transverse length scale of each component, plus the proton temperature. The development of the model is outlined and numerical solutions are compared with spacecraft observations. Compared to previous single-component models, this new model incorporates a more physically realistic treatment of fluctuations induced by pickup ions and yields improved agreement with observed values of the correlation length, while maintaining good observational accord with the energy, cross helicity, and temperature.

  17. Comparison of three GIS-based models for predicting rockfall runout zones at a regional scale

    NASA Astrophysics Data System (ADS)

    Dorren, Luuk K. A.; Seijmonsbergen, Arie C.

    2003-11-01

    Site-specific information about the level of protection that mountain forests provide is often not available for large regions; information regarding rockfalls is especially scarce. The most efficient way to obtain information about rockfall activity and the efficacy of protection forests at a regional scale is to use a simulation model. At present, it is still unknown which forest parameters could best be incorporated in such models. Therefore, the purpose of this study was to test and evaluate a model for rockfall assessment at a regional scale that incorporates simple forest stand parameters, such as the number of trees per hectare and the diameter at breast height. To this end, a newly developed Geographical Information System (GIS)-based distributed model is compared with two existing rockfall models. The developed model is the only one that calculates the rockfall velocity on the basis of energy loss due to collisions with trees and with the soil surface. The two existing models calculate energy loss over the distance between two cell centres, while the newly developed model is able to calculate multiple bounces within a pixel. The patterns of rockfall runout zones produced by the three models are compared with patterns of rockfall deposits derived from geomorphological field maps, and the rockfall velocities modelled by the three models are compared. The models produced rockfall runout zone maps with rather similar accuracies. However, the developed model performs best on forested hillslopes, and it also produces velocities that match best with field estimates on both forested and nonforested hillslopes, irrespective of the slope gradient.
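The per-impact energy bookkeeping that distinguishes the developed model can be sketched as follows. This is a hedged illustration: the loss fractions below are hypothetical, whereas in a real model they would depend on stem diameter, impact geometry, and soil properties.

```python
import math

def speed_after_impact(speed, energy_loss_frac):
    """Post-impact speed after losing a fraction of kinetic energy
    (kinetic energy scales with speed squared)."""
    return speed * math.sqrt(1.0 - energy_loss_frac)

def speed_through_pixel(entry_speed, loss_fracs):
    """Apply several collisions, e.g. multiple bounces and tree impacts
    occurring within a single raster pixel."""
    v = entry_speed
    for f in loss_fracs:
        v = speed_after_impact(v, f)
    return v
```

Resolving several such collisions inside one pixel, rather than averaging losses between cell centres, is what lets the model credit dense stands with stronger deceleration.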

  18. Support vector machine in crash prediction at the level of traffic analysis zones: Assessing the spatial proximity effects.

    PubMed

    Dong, Ni; Huang, Helai; Zheng, Liang

    2015-09-01

    In zone-level crash prediction, accounting for spatial dependence has become an extensively studied topic. This study proposes a Support Vector Machine (SVM) model to address complex, large, and multi-dimensional spatial data in crash prediction. A Correlation-based Feature Selector (CFS) was applied to evaluate candidate factors possibly related to zonal crash frequency when handling high-dimensional spatial data. To demonstrate the proposed approaches and to compare them with the Bayesian spatial model with conditional autoregressive prior (i.e., CAR), a dataset from Hillsborough County, Florida was employed. The results showed that SVM models accounting for spatial proximity outperform the non-spatial model in terms of model fitting and predictive performance, which indicates the reasonableness of considering cross-zonal spatial correlations. The best predictive capability was associated with the model considering proximity by centroid distance, using the RBF kernel and setting 10% of the whole dataset aside as testing data, which further exhibits SVM models' capacity for addressing comparatively complex spatial data in regional crash prediction modeling. Moreover, SVM models exhibit better goodness-of-fit than CAR models when the whole dataset is used as the sample. A sensitivity analysis of the centroid-distance-based spatial SVM models was conducted to capture the impacts of explanatory variables on the mean predicted probabilities of crash occurrence. The results conform to the coefficient estimates in the CAR models, which supports the employment of the SVM model as an alternative in regional safety modeling. Copyright © 2015 Elsevier Ltd.
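The role of the RBF kernel in handling nonlinear spatial data can be illustrated with a minimal dual-form kernel classifier. This is a hedged sketch, not the study's SVM (which presumably used a standard solver with margin maximization): a kernel perceptron shares the same kernel-expansion form of the decision function, and the XOR-style toy data below show the RBF kernel separating a pattern no linear model can.

```python
import math

def rbf(x, z, gamma=0.5):
    """Radial basis function kernel between feature vectors x and z."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def kernel_perceptron(X, y, gamma=0.5, epochs=20):
    """Dual-form perceptron with an RBF kernel; labels y are +1/-1."""
    alpha = [0.0] * len(X)
    for _ in range(epochs):
        for i, (xi, yi) in enumerate(zip(X, y)):
            s = sum(a * yj * rbf(xj, xi, gamma)
                    for a, yj, xj in zip(alpha, y, X))
            if yi * s <= 0:          # misclassified: strengthen this example
                alpha[i] += 1.0
    def predict(x):
        s = sum(a * yj * rbf(xj, x, gamma)
                for a, yj, xj in zip(alpha, y, X))
        return 1 if s >= 0 else -1
    return predict

X = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0), (1.0, 0.0)]
y = [1, 1, -1, -1]  # XOR-style labels: not linearly separable
predict = kernel_perceptron(X, y)
```

In the zonal-crash setting, the feature vectors would include the CFS-selected factors plus centroid-distance proximity terms.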

  19. Comparing ethics education in medicine and law: combining the best of both worlds.

    PubMed

    Egan, Erin A; Parsi, Kayhan; Ramirez, Cynthia

    2004-01-01

    This article compares various models of ethics education and how these models are employed by both medical schools and law schools. The authors suggest ways in which each profession can enhance its ethics teaching and argue that ethics education in both medicine and law should combine the best elements of each educational model, thereby producing graduates who are more knowledgeable about, and appreciative of, ethical issues in practice.

  20. Comparing models of Red Knot population dynamics

    USGS Publications Warehouse

    McGowan, Conor P.

    2015-01-01

    Predictive population modeling contributes to our basic scientific understanding of population dynamics, but can also inform management decisions by evaluating alternative actions in virtual environments. Quantitative models mathematically reflect scientific hypotheses about how a system functions. In Delaware Bay, mid-Atlantic Coast, USA, to more effectively manage horseshoe crab (Limulus polyphemus) harvests and protect Red Knot (Calidris canutus rufa) populations, models are used to compare harvest actions and predict the impacts on crab and knot populations. Management has been chiefly driven by the core hypothesis that horseshoe crab egg abundance governs the survival and reproduction of migrating Red Knots that stopover in the Bay during spring migration. However, recently, hypotheses proposing that knot dynamics are governed by cyclical lemming dynamics garnered some support in data analyses. In this paper, I present alternative models of Red Knot population dynamics to reflect alternative hypotheses. Using 2 models with different lemming population cycle lengths and 2 models with different horseshoe crab effects, I project the knot population into the future under environmental stochasticity and parametric uncertainty with each model. I then compare each model's predictions to 10 yr of population monitoring from Delaware Bay. Using Bayes' theorem and model weight updating, models can accrue weight or support for one or another hypothesis of population dynamics. With 4 models of Red Knot population dynamics and only 10 yr of data, no hypothesis clearly predicted population count data better than another. The collapsed lemming cycle model performed best, accruing ~35% of the model weight, followed closely by the horseshoe crab egg abundance model, which accrued ~30% of the weight. The models that predicted no decline or stable populations (i.e. the 4-yr lemming cycle model and the weak horseshoe crab effect model) were the most weakly supported.
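The model-weight updating step described here follows directly from Bayes' theorem and can be written in a few lines. The prior weights and likelihood values below are illustrative placeholders, not the study's numbers.

```python
def update_weights(weights, likelihoods):
    """Posterior model weights via Bayes' theorem: w_i' proportional to
    w_i * L_i, where L_i is model i's likelihood for the new data."""
    post = [w * L for w, L in zip(weights, likelihoods)]
    total = sum(post)
    return [p / total for p in post]

# Four hypotheses start with equal prior weight; one year of count data
# (hypothetical likelihoods) shifts support toward the first model.
posterior = update_weights([0.25, 0.25, 0.25, 0.25], [2.0, 1.0, 1.0, 0.0])
```

Repeating the update annually as monitoring data arrive is how weight accrues to one hypothesis of Red Knot dynamics over another.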

  1. Determining Reduced Order Models for Optimal Stochastic Reduced Order Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonney, Matthew S.; Brake, Matthew R.W.

    2015-08-01

    The use of parameterized reduced order models (PROMs) within the stochastic reduced order model (SROM) framework is a logical progression for both methods. In this report, five different parameterized reduced order models are selected and critiqued against one another and against the truth model, for the example of the Brake-Reuss beam. The models are: a Taylor series using finite differences, a proper orthogonal decomposition of the output, a Craig-Bampton representation of the model, a method that uses hyper-dual numbers to determine the sensitivities, and a meta-model method that uses the hyper-dual results and constructs a polynomial curve to better represent the output data. The methods are compared against a parameter sweep and a distribution propagation, with the first four statistical moments used as the comparison. Each method produces very accurate results, with the Craig-Bampton reduction being the least accurate. The models are also compared on the time required to evaluate each model, where the meta-model requires the least computation time by a significant amount. Each of the five models provided accurate results in a reasonable time frame; the choice of model depends on the availability of the high-fidelity model and how many evaluations can be performed. The output distribution is examined using a large Monte Carlo simulation alongside reduced simulations using Latin hypercube sampling and the stochastic reduced order model sampling technique. Both techniques produced accurate results, with the stochastic reduced order modeling technique producing less error than an exhaustive sampling for the majority of the methods.
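The sensitivity idea behind the hyper-dual approach can be illustrated with first-order dual numbers, a simplified cousin of the hyper-dual numbers used in the report (hyper-duals additionally carry exact second derivatives). Propagating a dual number through a function yields the first derivative exactly, with none of the step-size error of finite differences.

```python
class Dual:
    """Number of the form re + du*eps with eps**2 = 0; the du part
    carries the exact first derivative through arithmetic."""
    def __init__(self, re, du=0.0):
        self.re, self.du = re, du
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.re + o.re, self.du + o.du)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule encoded in the dual part
        return Dual(self.re * o.re, self.re * o.du + self.du * o.re)
    __rmul__ = __mul__

def derivative(f, x):
    """Exact df/dx at x for polynomial f built from + and *."""
    return f(Dual(x, 1.0)).du
```

Seeding one parameter's dual part with 1.0 per evaluation gives the sensitivities a PROM needs without differencing two noisy model runs.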

  2. Comparing Amazon Basin CO2 fluxes from an atmospheric inversion with TRENDY biosphere models

    NASA Astrophysics Data System (ADS)

    Diffenbaugh, N. S.; Alden, C. B.; Harper, A. B.; Ahlström, A.; Touma, D. E.; Miller, J. B.; Gatti, L. V.; Gloor, M.

    2015-12-01

    Net exchange of carbon dioxide (CO2) between the atmosphere and the terrestrial biosphere is sensitive to environmental conditions, including extreme heat and drought. Of particular importance for local and global carbon balance and climate are the expansive tracts of tropical rainforest located in the Amazon Basin. Because of the Basin's size and ecological heterogeneity, net biosphere CO2 exchange with the atmosphere remains largely unconstrained. In particular, the responses of net CO2 exchange to changes in environmental conditions such as temperature and precipitation are not yet well known, yet proper representation of these relationships in biosphere models is a necessary constraint for accurately modeling future climate and climate-carbon cycle feedbacks. To compare biosphere responses to climate across different biosphere models, the TRENDY model intercomparison project coordinated the simulation of CO2 fluxes between the biosphere and atmosphere, in response to historical climate forcing, by 9 different Dynamic Global Vegetation Models. We examine the TRENDY model results in the Amazon Basin, and compare this "bottom-up" method with fluxes derived from a "top-down" approach to estimating net CO2 fluxes, obtained through atmospheric inverse modeling using CO2 measurements sampled by aircraft above the basin. We compare the "bottom-up" and "top-down" fluxes in 5 sub-regions of the Amazon Basin on a monthly basis for 2010-2012. Our results show important periods of agreement between some models in the TRENDY suite and the atmospheric inverse model results, notably the simulation of increased biosphere CO2 loss during wet-season heat in the Central Amazon. During the dry season, however, model ability to simulate the observed response of net CO2 exchange to drought varied, with few models able to reproduce the "top-down" inversion flux signals.
Our results highlight the value of atmospheric trace gas observations for helping to narrow the possibilities of future carbon-climate interactions, especially in historically under-observed regions like the Amazon.

  3. Comparison of new generation low-complexity flood inundation mapping tools with a hydrodynamic model

    NASA Astrophysics Data System (ADS)

    Afshari, Shahab; Tavakoly, Ahmad A.; Rajib, Mohammad Adnan; Zheng, Xing; Follum, Michael L.; Omranian, Ehsan; Fekete, Balázs M.

    2018-01-01

    The objective of this study is to compare two new-generation low-complexity tools, AutoRoute and Height Above the Nearest Drainage (HAND), with a two-dimensional hydrodynamic model (Hydrologic Engineering Center-River Analysis System, HEC-RAS 2D). The assessment was conducted on two hydrologically different and geographically distant test cases in the United States: the 16,900 km2 Cedar River (CR) watershed in Iowa and a 62 km2 domain along the Black Warrior River (BWR) in Alabama. For BWR, twelve configurations were set up for each model, spanning four terrain setups (e.g., with and without channel bathymetry and a levee) and three flooding conditions representing moderate to extreme hazards at 10-, 100-, and 500-year return periods. For the CR watershed, the models were compared with a simplistic terrain setup (without bathymetry or any form of hydraulic control) and one flooding condition (100-year return period). Input streamflow forcing data representing these hypothetical events were constructed by applying a new fusion approach to National Water Model outputs. Simulated inundation extent and depth from AutoRoute, HAND, and HEC-RAS 2D were compared with one another and with the corresponding FEMA reference estimates. Irrespective of configuration, the low-complexity models produced inundation extents similar to HEC-RAS 2D, with AutoRoute showing slightly higher accuracy than HAND. Among the four terrain setups, the one including both the levee and channel bathymetry showed the lowest fitness score for spatial agreement of inundation extent, reflecting the weaker physical representation of the low-complexity models relative to a hydrodynamic model. For inundation depth, the low-complexity models tended to overestimate, especially in the deeper segments of the channel. Given these reasonably good prediction skills, low-complexity flood models can be considered a suitable alternative for fast predictions in large-scale hyper-resolution operational frameworks, without completely supplanting hydrodynamic models.

  4. Comparison of the predictive validity of diagnosis-based risk adjusters for clinical outcomes.

    PubMed

    Petersen, Laura A; Pietz, Kenneth; Woodard, LeChauncy D; Byrne, Margaret

    2005-01-01

    Many possible methods of risk adjustment exist, but there is a dearth of comparative data on their performance. We compared the predictive validity of 2 widely used methods (Diagnostic Cost Groups [DCGs] and Adjusted Clinical Groups [ACGs]) for 2 clinical outcomes using a large national sample of patients. We studied all patients who used Veterans Health Administration (VA) medical services in fiscal year (FY) 2001 (n = 3,069,168) and assigned both a DCG and an ACG to each. We used logistic regression analyses to compare predictive ability for death or long-term care (LTC) hospitalization for age/gender models, DCG models, and ACG models. We also assessed the effect of adding age to the DCG and ACG models. Patients in the highest DCG categories, indicating higher severity of illness, were more likely to die or to require LTC hospitalization. Surprisingly, the age/gender model predicted death slightly more accurately than the ACG model (c-statistic of 0.710 versus 0.700, respectively). The addition of age to the ACG model improved the c-statistic to 0.768. The highest c-statistic for prediction of death was obtained with a DCG/age model (0.830). The lowest c-statistics were obtained for age/gender models for LTC hospitalization (c-statistic 0.593). The c-statistic for use of ACGs to predict LTC hospitalization was 0.783, and improved to 0.792 with the addition of age. The c-statistics for use of DCGs and DCG/age to predict LTC hospitalization were 0.885 and 0.890, respectively, indicating the best prediction. We found that risk adjusters based upon diagnoses predicted an increased likelihood of death or LTC hospitalization, exhibiting good predictive validity. In this comparative analysis using VA data, DCG models were generally superior to ACG models in predicting clinical outcomes, although ACG model performance was enhanced by the addition of age.
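
    The c-statistic used to compare these risk-adjustment models is the area under the ROC curve: the probability that a randomly chosen patient who experienced the outcome received a higher predicted probability than one who did not. A minimal pure-Python sketch of this rank-based definition (an illustration, not the study's software):

```python
from itertools import product

def c_statistic(probs, outcomes):
    """C-statistic (AUC): fraction of event/non-event pairs in which the
    event case received the higher predicted probability; ties count 0.5."""
    events = [p for p, y in zip(probs, outcomes) if y == 1]
    nonevents = [p for p, y in zip(probs, outcomes) if y == 0]
    if not events or not nonevents:
        raise ValueError("need at least one event and one non-event")
    concordant = sum(
        1.0 if e > n else 0.5 if e == n else 0.0
        for e, n in product(events, nonevents)
    )
    return concordant / (len(events) * len(nonevents))
```

    A model that ranks every event above every non-event scores 1.0 and a model no better than chance scores 0.5, which is why the DCG/age model's 0.830 for death indicates substantially better discrimination than the age/gender model's 0.710.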

  5. Improved capacity to evaluate changes in intestinal mucosal surface area using mathematical modeling.

    PubMed

    Greig, Chasen J; Cowles, Robert A

    2017-07-01

    Quantification of intestinal mucosal growth typically relies on morphometric parameters, commonly villus height, as a surrogate for presumed changes in mucosal surface area (MSA). We hypothesized that mathematical modeling based on multiple unique measurements would improve discrimination of the effects of interventions on MSA compared to standard measures. To determine the ability of mathematical modeling to resolve differences in MSA, a mouse model with enhanced serotonin (5-HT) signaling, known to stimulate mucosal growth, was used. 5-HT signaling is potentiated by targeting the serotonin reuptake transporter (SERT) molecule. Selective serotonin reuptake inhibitor-treated wild-type (WT-SSRI), SERT-knockout (SERTKO), and wild-type C57Bl/6 (WT) mice were used. Distal ileal sections were H&E-stained. Villus height (VH), villus width (VW), crypt width (CW), and bowel diameter were used to calculate the surface area enlargement factor (SEF) and MSA. VH alone for SERTKO and WT-SSRI was significantly increased compared to WT, without a difference between SERTKO and WT-SSRI. VW and CW were significantly decreased for both SERTKO and WT-SSRI compared to WT, and VW for WT-SSRI was also decreased compared to SERTKO. These changes increased SEF and MSA for SERTKO and WT-SSRI compared to WT. Additionally, SEF and MSA were significantly increased for WT-SSRI compared to SERTKO. Mathematical modeling provides a valuable tool for differentiating changes in intestinal MSA. This more comprehensive assessment of surface area does not appear to correlate linearly with standard morphometric measures and represents a better method for discriminating between therapies aimed at increasing functional intestinal mucosa. © 2017 Wiley Periodicals, Inc.

  6. Conceptualizations of Creativity: Comparing Theories and Models of Giftedness

    ERIC Educational Resources Information Center

    Miller, Angie L.

    2012-01-01

    This article reviews seven different theories of giftedness that include creativity as a component, comparing and contrasting how each one conceptualizes creativity as a part of giftedness. The functions of creativity vary across the models, suggesting that while the field of gifted education often cites the importance of creativity, the…

  7. IRT Equating of the MCAT. MCAT Monograph.

    ERIC Educational Resources Information Center

    Hendrickson, Amy B.; Kolen, Michael J.

    This study compared various equating models and procedures for a sample of data from the Medical College Admission Test(MCAT), considering how item response theory (IRT) equating results compare with classical equipercentile results and how the results based on use of various IRT models, observed score versus true score, direct versus linked…

  8. Comparing the High School English Curriculum in Turkey through Multi-Analysis

    ERIC Educational Resources Information Center

    Batdi, Veli

    2017-01-01

    This study aimed to compare the High School English Curriculum (HSEC) in accordance with Stufflebeam's context, input, process and product (CIPP) model through multi-analysis. The research includes both quantitative and qualitative aspects. A descriptive analysis was operated through Rasch Measurement Model; SPSS program for the quantitative…

  9. Developing a Drosophila Model of Schwannomatosis

    DTIC Science & Technology

    2012-08-01

    the entire Drosophila melanogaster genome and compared...et al., 2009; Hanahan and Weinberg, 2011). Over the last decade, the fruit fly Drosophila melanogaster has become an important model system for cancer...studies. Reduced redundancy in the Drosophila genome compared with that of humans, coupled with the ability to conduct large-scale genetic screens

  10. A comparative study between nonlinear regression and nonparametric approaches for modelling Phalaris paradoxa seedling emergence

    USDA-ARS?s Scientific Manuscript database

    Parametric non-linear regression (PNR) techniques commonly are used to develop weed seedling emergence models. Such techniques, however, require statistical assumptions that are difficult to meet. To examine and overcome these limitations, we compared PNR with a nonparametric estimation technique. F...

  11. Flip This Classroom: A Comparative Study

    ERIC Educational Resources Information Center

    Unruh, Tiffany; Peters, Michelle L.; Willis, Jana

    2016-01-01

    The purpose of this research was to compare the beliefs and attitudes of teachers using the flipped versus the traditional class model. Survey and interview data were collected from a matched sample of in-service teachers representing both models from a large suburban southeastern Texas school district. The Attitude Towards Technology Scale, the…

  12. Comparative Transcriptomes and EVO-DEVO Studies Depending on Next Generation Sequencing.

    PubMed

    Liu, Tiancheng; Yu, Lin; Liu, Lei; Li, Hong; Li, Yixue

    2015-01-01

    High-throughput technology has driven progress in omics studies, including genomics and transcriptomics. We review improvements in comparative omics studies that are attributable to high-throughput measurement by next-generation sequencing. Comparative genomics has been successfully applied to evolutionary analysis, while comparative transcriptomics compares the expression profiles of two subjects through differential expression or differential coexpression, which enables its application in evolutionary developmental biology (EVO-DEVO) studies. EVO-DEVO studies focus on the evolutionary pressures shaping morphogenesis during development, and previous work has sought to identify the most conserved stages of embryonic development. Earlier studies assessed similarity morphologically, at the macro level; new technology enables detection of similarity in molecular mechanisms, at the micro level. Evolutionary models of embryo development, including the "funnel-like" model and the "hourglass" model, have been evaluated by combining these new comparative transcriptomic methods with prior comparative genomic information. Although the technology has brought EVO-DEVO studies into a new era, technological and material limitations remain, and further investigations will require more careful study design and procedures.

  13. Refining comparative proteomics by spectral counting to account for shared peptides and multiple search engines

    PubMed Central

    Chen, Yao-Yi; Dasari, Surendra; Ma, Ze-Qiang; Vega-Montoto, Lorenzo J.; Li, Ming

    2013-01-01

    Spectral counting has become a widely used approach for measuring and comparing protein abundance in label-free shotgun proteomics. However, when analyzing complex samples, the ambiguity of matching between peptides and proteins greatly affects the assessment of peptide and protein inventories, differentiation, and quantification. Meanwhile, the configuration of database searching algorithms that assign peptides to MS/MS spectra may produce different results in comparative proteomic analysis. Here, we present three strategies to improve comparative proteomics through spectral counting. We show that comparing spectral counts for peptide groups rather than for protein groups forestalls problems introduced by shared peptides. We demonstrate the advantage and flexibility of this new method in two datasets. We present four models to combine four popular search engines that lead to significant gains in spectral counting differentiation. Among these models, we demonstrate a powerful vote counting model that scales well for multiple search engines. We also show that semi-tryptic searching outperforms tryptic searching for comparative proteomics. Overall, these techniques considerably improve protein differentiation on the basis of spectral count tables. PMID:22552787

  14. Refining comparative proteomics by spectral counting to account for shared peptides and multiple search engines.

    PubMed

    Chen, Yao-Yi; Dasari, Surendra; Ma, Ze-Qiang; Vega-Montoto, Lorenzo J; Li, Ming; Tabb, David L

    2012-09-01

    Spectral counting has become a widely used approach for measuring and comparing protein abundance in label-free shotgun proteomics. However, when analyzing complex samples, the ambiguity of matching between peptides and proteins greatly affects the assessment of peptide and protein inventories, differentiation, and quantification. Meanwhile, the configuration of database searching algorithms that assign peptides to MS/MS spectra may produce different results in comparative proteomic analysis. Here, we present three strategies to improve comparative proteomics through spectral counting. We show that comparing spectral counts for peptide groups rather than for protein groups forestalls problems introduced by shared peptides. We demonstrate the advantage and flexibility of this new method in two datasets. We present four models to combine four popular search engines that lead to significant gains in spectral counting differentiation. Among these models, we demonstrate a powerful vote counting model that scales well for multiple search engines. We also show that semi-tryptic searching outperforms tryptic searching for comparative proteomics. Overall, these techniques considerably improve protein differentiation on the basis of spectral count tables.
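
    The peptide-group strategy described above can be sketched by aggregating spectral counts over the set of proteins each peptide maps to, so that shared peptides form their own groups instead of being arbitrarily assigned to a single protein. The input format and function name below are illustrative assumptions, not the authors' implementation:

```python
from collections import defaultdict

def peptide_group_counts(psms):
    """Sum spectral counts per peptide group, where a group is identified
    by the exact set of proteins its peptides map to.
    psms: iterable of (peptide, protein_set, spectral_count) tuples."""
    groups = defaultdict(int)
    for peptide, proteins, count in psms:
        groups[frozenset(proteins)] += count
    return dict(groups)
```

    Counts for a peptide shared by proteins A and B land in the group {A, B}, so downstream comparisons between samples operate on unambiguous count tables.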

  15. Hypoxia-Targeting Drug Evofosfamide (TH-302) Enhances Sunitinib Activity in Neuroblastoma Xenograft Models.

    PubMed

    Kumar, Sushil; Sun, Jessica D; Zhang, Libo; Mokhtari, Reza Bayat; Wu, Bing; Meng, Fanying; Liu, Qian; Bhupathi, Deepthi; Wang, Yan; Yeger, Herman; Hart, Charles; Baruchel, Sylvain

    2018-05-23

    Antiangiogenic therapy has shown promising results in preclinical and clinical trials. However, tumor cells acquire resistance to this therapy by gaining the ability to survive and proliferate under the hypoxia it induces. Combining antiangiogenic therapy with hypoxia-activated prodrugs can overcome this limitation. Here, we tested the antiangiogenic drug sunitinib in combination with the hypoxia-activated prodrug evofosfamide in neuroblastoma. In vitro, the neuroblastoma cell line SK-N-BE(2) was 40-fold more sensitive to evofosfamide under hypoxia than under normoxia. In an IV metastatic model, evofosfamide significantly increased mouse survival compared to the vehicle (P=.02). In the SK-N-BE(2) subcutaneous xenograft model, we tested two treatment regimens using 30 mg/kg sunitinib and 50 mg/kg evofosfamide. Sunitinib therapy started together with evofosfamide treatment showed higher efficacy than either single agent, whereas sunitinib started 7 days after evofosfamide treatment offered no advantage over either single agent. Immunofluorescence of tumor sections revealed more apoptotic cells and hypoxic areas than with either single agent when both treatments were started together. Treatment with 80 mg/kg sunitinib plus 50 mg/kg evofosfamide was significantly superior to the single agents in both the xenograft and metastatic models. This study confirms the preclinical efficacy of sunitinib and evofosfamide in murine models of aggressive neuroblastoma. Sunitinib enhances the efficacy of evofosfamide by increasing hypoxic areas, and evofosfamide targets hypoxic tumor cells; consequently, each drug enhances the activity of the other. Copyright © 2018. Published by Elsevier Inc.

  16. Predicting Air Permeability of Handloom Fabrics: A Comparative Analysis of Regression and Artificial Neural Network Models

    NASA Astrophysics Data System (ADS)

    Mitra, Ashis; Majumdar, Prabal Kumar; Bannerjee, Debamalya

    2013-03-01

    This paper presents a comparative analysis of two modeling methodologies for the prediction of air permeability of plain woven handloom cotton fabrics. Four basic fabric constructional parameters, namely ends per inch, picks per inch, warp count, and weft count, have been used as inputs for artificial neural network (ANN) and regression models. Of the four regression models tried, the interaction model showed very good prediction performance with a meager mean absolute error of 2.017%. However, ANN models demonstrated superiority over the regression models in terms of both correlation coefficient and mean absolute error. The ANN model with 10 nodes in its single hidden layer showed very good correlation coefficients of 0.982 and 0.929 and mean absolute errors of only 0.923% and 2.043% for the training and testing data, respectively.
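
    The error metric above is reported in percent, which suggests a mean absolute error expressed as a percentage of the measured air permeability values; a minimal sketch under that assumption (the function name is illustrative):

```python
def mean_absolute_error_pct(actual, predicted):
    """Mean absolute error as a percentage of the actual values."""
    if len(actual) != len(predicted) or not actual:
        raise ValueError("need paired, non-empty observations")
    return 100.0 * sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)
```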

  17. Comparison of a vertically-averaged and a vertically-resolved model for hyporheic flow beneath a pool-riffle bedform

    NASA Astrophysics Data System (ADS)

    Ibrahim, Ahmad; Steffler, Peter; She, Yuntong

    2018-02-01

    The interaction between surface water and groundwater through the hyporheic zone is recognized to be important because it affects water quantity and quality in both flow systems. Three-dimensional (3D) modeling is the most complete representation of a real-world hyporheic zone. However, 3D modeling requires extensive computational power and effort, and its sophistication is often undermined by the difficulty of obtaining the required input data accurately; simplifications are therefore often needed. The objective of this study was to assess the accuracy of a vertically-averaged approximation against a more complete vertically-resolved model of the hyporheic zone. The groundwater flow was modeled by either a simple one-dimensional (1D) Dupuit approach or a two-dimensional (2D) horizontal/vertical model in boundary-fitted coordinates, with the latter serving as the reference model. Both groundwater models were coupled with a 1D surface water model via the surface water depth. Applying the two models to an idealized pool-riffle sequence showed that the 1D Dupuit approximation gave results comparable to the reference model in determining the characteristics of the hyporheic zone when the stratum thickness is not very large compared to the surface water depth. Conditions under which the 1D model can provide reliable estimates of the seepage discharge, upwelling/downwelling discharges and locations, the hyporheic flow, and the residence time were determined.

  18. Accuracy of Bolton analysis measured in laser scanned digital models compared with plaster models (gold standard) and cone-beam computer tomography images

    PubMed Central

    Kim, Jooseong

    2016-01-01

    Objective The aim of this study was to compare the accuracy of Bolton analysis obtained from digital models scanned with the Ortho Insight three-dimensional (3D) laser scanner system to those obtained from cone-beam computed tomography (CBCT) images and traditional plaster models. Methods CBCT scans and plaster models were obtained from 50 patients. Plaster models were scanned using the Ortho Insight 3D laser scanner; Bolton ratios were calculated with its software. CBCT scans were imported and analyzed using AVIZO software. Plaster models were measured with a digital caliper. Data were analyzed with descriptive statistics and the intraclass correlation coefficient (ICC). Results Anterior and overall Bolton ratios obtained by the three different modalities exhibited excellent agreement (> 0.970). The mean differences between the scanned digital models and physical models and between the CBCT images and scanned digital models for overall Bolton ratios were 0.41 ± 0.305% and 0.45 ± 0.456%, respectively; for anterior Bolton ratios, 0.59 ± 0.520% and 1.01 ± 0.780%, respectively. ICC results showed that intraexaminer error reliability was generally excellent (> 0.858 for all three diagnostic modalities), with < 1.45% discrepancy in the Bolton analysis. Conclusions Laser scanned digital models are highly accurate compared to physical models and CBCT scans for assessing the spatial relationships of dental arches for orthodontic diagnosis. PMID:26877978
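
    The Bolton analysis compared across the three modalities reduces to the same arithmetic regardless of how the teeth are measured: the summed mesiodistal widths of the mandibular teeth divided by those of the maxillary teeth, times 100 (12 teeth per arch for the overall ratio, the 6 anterior teeth per arch for the anterior ratio). A minimal sketch:

```python
def bolton_ratio(mandibular_widths, maxillary_widths):
    """Bolton ratio: mandibular over maxillary summed mesiodistal widths,
    as a percentage.  Pass 12 widths per arch for the overall ratio or the
    6 anterior widths per arch for the anterior ratio."""
    if len(mandibular_widths) != len(maxillary_widths):
        raise ValueError("arch measurements must be paired")
    return 100.0 * sum(mandibular_widths) / sum(maxillary_widths)
```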

  19. Support vector methods for survival analysis: a comparison between ranking and regression approaches.

    PubMed

    Van Belle, Vanya; Pelckmans, Kristiaan; Van Huffel, Sabine; Suykens, Johan A K

    2011-10-01

    To compare and evaluate ranking, regression, and combined machine learning approaches for the analysis of survival data. The literature describes two approaches based on support vector machines to deal with censored observations. In the first approach, the key idea is to rephrase the task as a ranking problem via the concordance index, a problem which can be solved efficiently with structural risk minimization and convex optimization techniques. The second approach uses regression, dealing with censoring by means of inequality constraints. The goal of this paper is then twofold: (i) to introduce a new model combining the ranking and regression strategies, which retains the link with existing survival models such as the proportional hazards model via transformation models; and (ii) to compare the three techniques on 6 clinical and 3 high-dimensional datasets and discuss their relevance relative to classical approaches for survival data. We compare SVM-based survival models based on ranking constraints, on regression constraints, and on both ranking and regression constraints. The performance of the models is compared by means of three measures: (i) the concordance index, measuring the model's discriminating ability; (ii) the logrank test statistic, indicating whether patients with a prognostic index below the median have significantly different survival from patients with a prognostic index above the median; and (iii) the hazard ratio after normalization to restrict the prognostic index between 0 and 1. Our results indicate significantly better performance for models including regression constraints over models based only on ranking constraints. This work gives empirical evidence that SVM-based models using regression constraints perform significantly better than SVM-based models based on ranking constraints.
Our experiments show comparable performance on clinical data for methods including only regression constraints and those including both regression and ranking constraints; on high-dimensional data, the former performs better. However, the regression-only approach does not have a theoretical link with standard statistical models for survival data; this link can be made by means of transformation models when ranking constraints are included. Copyright © 2011 Elsevier B.V. All rights reserved.
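
    The concordance index used as the first performance measure can be computed directly from observed times, event indicators, and the model's prognostic index. A minimal sketch handling right-censoring in the usual way (a pair is comparable only when the subject with the earlier observed time experienced the event):

```python
def concordance_index(times, events, risk_scores):
    """Concordance index for right-censored survival data: among comparable
    pairs, the fraction in which the subject who failed earlier has the
    higher risk score; ties in risk score count 0.5."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # comparable pair: i had an event strictly before j's observed time
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    if comparable == 0:
        raise ValueError("no comparable pairs")
    return concordant / comparable
```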

  20. Do bioclimate variables improve performance of climate envelope models?

    USGS Publications Warehouse

    Watling, James I.; Romañach, Stephanie S.; Bucklin, David N.; Speroterra, Carolina; Brandt, Laura A.; Pearlstine, Leonard G.; Mazzotti, Frank J.

    2012-01-01

    Climate envelope models are widely used to forecast potential effects of climate change on species distributions. A key issue in climate envelope modeling is the selection of predictor variables that most directly influence species. To determine whether model performance and spatial predictions were related to the selection of predictor variables, we compared models using bioclimate variables with models constructed from monthly climate data for twelve terrestrial vertebrate species in the southeastern USA using two different algorithms (random forests or generalized linear models), and two model selection techniques (using uncorrelated predictors or a subset of user-defined biologically relevant predictor variables). There were no differences in performance between models created with bioclimate or monthly variables, but one metric of model performance was significantly greater using the random forest algorithm compared with generalized linear models. Spatial predictions between maps using bioclimate and monthly variables were very consistent using the random forest algorithm with uncorrelated predictors, whereas we observed greater variability in predictions using generalized linear models.

  1. Accuracy of short‐term sea ice drift forecasts using a coupled ice‐ocean model

    PubMed Central

    Zhang, Jinlun

    2015-01-01

    Arctic sea ice drift forecasts of 6 h–9 days for the summer of 2014 are generated using the Marginal Ice Zone Modeling and Assimilation System (MIZMAS); the model is driven by 6 h atmospheric forecasts from the Climate Forecast System (CFSv2). Forecast ice drift speed is compared to drifting buoys and other observational platforms. Forecast positions are compared with actual positions 24 h–8 days since forecast. Forecast results are further compared to those from forecasts generated using an ice velocity climatology driven by multiyear integrations of the same model. The results are presented in the context of scheduling the acquisition of high-resolution images that need to follow buoys or scientific research platforms. RMS errors for ice speed are on the order of 5 km/d for 24–48 h since forecast using the sea ice model, compared with 9 km/d using climatology. Predicted buoy position RMS errors are 6.3 km for 24 h and 14 km for 72 h since forecast. Model biases in ice speed and direction can be reduced by adjusting the air drag coefficient and water turning angle, but the adjustments do not affect verification statistics. This suggests that improved atmospheric forecast forcing may further reduce the forecast errors. The model remains skillful for 8 days. Using the forecast model increases the probability of tracking a target drifting in sea ice with a 10 km × 10 km image from 60% to 95% for a 24 h forecast and from 27% to 73% for a 48 h forecast. PMID:27818852
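
    The RMS errors quoted for ice speed and buoy position follow the standard definition; a minimal sketch, assuming paired forecast and observed values:

```python
import math

def rms_error(forecast, observed):
    """Root-mean-square error over paired forecast/observed values."""
    if len(forecast) != len(observed) or not forecast:
        raise ValueError("need paired, non-empty series")
    return math.sqrt(sum((f - o) ** 2 for f, o in zip(forecast, observed)) / len(forecast))
```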

  2. The economic case for digital interventions for eating disorders among United States college students.

    PubMed

    Kass, Andrea E; Balantekin, Katherine N; Fitzsimmons-Craft, Ellen E; Jacobi, Corinna; Wilfley, Denise E; Taylor, C Barr

    2017-03-01

    Eating disorders (EDs) are serious health problems affecting college students. This article aimed to estimate the costs, in United States (US) dollars, of a stepped care model for online prevention and treatment among US college students to inform meaningful decisions regarding resource allocation and adoption of efficient care delivery models for EDs on college campuses. Using a payer perspective, we estimated the costs of (1) delivering an online guided self-help (GSH) intervention to individuals with EDs, including the costs of "stepping up" the proportion expected to "fail"; (2) delivering an online preventive intervention compared to a "wait and treat" approach to individuals at ED risk; and (3) applying the stepped care model across a population of 1,000 students, compared to standard care. Combining results for online GSH and preventive interventions, we estimated a stepped care model would cost less and result in fewer individuals needing in-person psychotherapy (after receiving less-intensive intervention) compared to standard care, assuming everyone in need received intervention. A stepped care model was estimated to achieve modest cost savings compared to standard care, but these estimates need to be tested with sensitivity analyses. Model assumptions highlight the complexities of cost calculations to inform resource allocation, and considerations for a disseminable delivery model are presented. Efforts are needed to systematically measure the costs and benefits of a stepped care model for EDs on college campuses, improve the precision and efficacy of ED interventions, and apply these calculations to non-US care systems with different cost structures. © 2017 Wiley Periodicals, Inc.

  3. [Comparison between administrative and clinical databases in the evaluation of cardiac surgery performance].

    PubMed

    Rosato, Stefano; D'Errigo, Paola; Badoni, Gabriella; Fusco, Danilo; Perucci, Carlo A; Seccareccia, Fulvia

    2008-08-01

    The availability of two contemporary sources of information on coronary artery bypass graft (CABG) interventions made it possible 1) to verify the feasibility of performing outcome evaluation studies using administrative data sources, and 2) to compare the hospital performance obtained using the CABG Project clinical database with that derived from current administrative data. Interventions recorded in the CABG Project were linked to the hospital discharge record (HDR) administrative database. Only the linked records were considered in subsequent analyses (46% of the total CABG Project). A new selected "clinical card-HDR" population was then defined. Two independent risk-adjustment models were applied, each using information derived from one of the two sources. HDR information was then supplemented with some patient preoperative conditions from the CABG clinical database. The two models were compared in terms of their fit to the data, and hospital performances identified by the two models as significantly different from the mean were compared. In only 4 of the 13 hospitals considered for analysis did the results obtained using the HDR model not completely overlap with those obtained with the CABG model. When comparing the statistical parameters of the HDR model with those of the HDR model plus patient preoperative conditions, the latter showed the better fit to the data. In this "clinical card-HDR" population, hospital performance assessment obtained using information from the clinical database is similar to that derived from current administrative data. However, when risk-adjustment models built on administrative databases are supplemented with a few clinical variables, their statistical parameters improve and hospital performance assessment becomes more accurate.

  4. Identifying the most appropriate age threshold for TNM stage grouping of well-differentiated thyroid cancer.

    PubMed

    Hendrickson-Rebizant, J; Sigvaldason, H; Nason, R W; Pathak, K A

    2015-08-01

    Age is integrated in most risk stratification systems for well-differentiated thyroid cancer (WDTC), but the most appropriate age threshold for stage grouping of WDTC is debatable. The objective of this study was to identify the best age threshold for stage grouping by comparing multivariable models designed to evaluate the independent impact of various prognostic factors, including age-based stage grouping, on the disease-specific survival (DSS) of our population-based cohort. Data from a population-based thyroid cancer cohort of 2125 consecutive WDTC cases, diagnosed during 1970-2010, with a median follow-up of 11.5 years, were used to calculate DSS with the Kaplan-Meier method. Multivariable analysis with the Cox proportional hazards model was used to assess the independent impact of different prognostic factors on DSS. The Akaike information criterion (AIC), a measure of statistical model fit, was used to identify the most appropriate age-threshold model; delta AIC, Akaike weights, and evidence ratios were calculated to compare the relative strength of the different models. The mean age of the patients was 47.3 years. DSS of the cohort was 95.6% and 92.8% at 10 and 20 years, respectively. A threshold of 55 years, with the lowest AIC, was identified as the best model. The Akaike weight indicated an 85% chance that this age threshold is the best among the compared models, and it is 16.8 times more likely to be the best model than a threshold of 45 years. The age threshold of 55 years was thus found to be the best for TNM stage grouping. Copyright © 2015 Elsevier Ltd. All rights reserved.
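
    The delta AIC, Akaike weights, and evidence ratios used to rank the candidate age thresholds follow directly from the AIC values; a minimal sketch:

```python
import math

def akaike_weights(aics):
    """Akaike weights: each model's relative likelihood, computed from its
    AIC difference (delta AIC) to the best (lowest-AIC) model."""
    best = min(aics)
    rel = [math.exp(-(a - best) / 2.0) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]
```

    The evidence ratio between two models is the ratio of their weights, exp(delta AIC / 2); the reported ratio of 16.8 in favor of the 55-year threshold over the 45-year threshold corresponds to a delta AIC of roughly 5.6.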

  5. Reduced Protection for Belted Occupants in Rear Seats Relative to Front Seats of New Model Year Vehicles

    PubMed Central

    Sahraei, Elham; Digges, Kennerly; Marzougui, Dhafer

    2010-01-01

    Effectiveness of the rear seat in protecting occupants of different age groups in frontal crashes was estimated for 2000–2009 model year (MY) vehicles and compared to 1990–1999 model year vehicles. The objective was to determine the effectiveness of the rear seat compared to the front seat for various age groups in newer model year vehicles. The double paired comparison method was used to estimate relative effectiveness. For belted adults in the 25–49 age group, the fatality reduction effectiveness of the rear seat compared to the right front seat was 25% (CI 11% to 36%) in the 1990–1999 model year vehicles; for the same population it was −31% (CI −63% to −5%) in the 2000–2009 model year vehicles. For restrained children 0–8 years old, the relative effectiveness was 55% (CI 48% to 61%) in vehicles of the 1990–1999 period, falling to 25% (CI −4% to 46%) in the 2000–2009 MYs. Results for other age groups of belted occupants followed a similar trend. All belted adult occupants aged 25 and older were significantly less protected in rear seats than in right front seats in the 2000–2009 model years of vehicles. For unbelted occupants, however, rear seats were still a safer position than front seats, even in the 2000–2009 model years of vehicles. PMID:21050599

  6. Evaluation of terrestrial carbon cycle models for their response to climate variability and to CO2 trends.

    PubMed

    Piao, Shilong; Sitch, Stephen; Ciais, Philippe; Friedlingstein, Pierre; Peylin, Philippe; Wang, Xuhui; Ahlström, Anders; Anav, Alessandro; Canadell, Josep G; Cong, Nan; Huntingford, Chris; Jung, Martin; Levis, Sam; Levy, Peter E; Li, Junsheng; Lin, Xin; Lomas, Mark R; Lu, Meng; Luo, Yiqi; Ma, Yuecun; Myneni, Ranga B; Poulter, Ben; Sun, Zhenzhong; Wang, Tao; Viovy, Nicolas; Zaehle, Soenke; Zeng, Ning

    2013-07-01

    The purpose of this study was to evaluate 10 process-based terrestrial biosphere models that were used for the IPCC Fifth Assessment Report. The simulated gross primary productivity (GPP) is compared with flux-tower-based estimates by Jung et al. [Journal of Geophysical Research 116 (2011) G00J07] (JU11). The apparent sensitivity of net primary productivity (NPP) to climate variability and atmospheric CO2 trends is diagnosed from each model's output using statistical functions. The temperature sensitivity is compared against results from ecosystem field warming experiments. The CO2 sensitivity of NPP is compared to the results of four Free-Air CO2 Enrichment (FACE) experiments. The simulated global net biome productivity (NBP) is compared with the residual land sink (RLS) of the global carbon budget from Friedlingstein et al. [Nature Geoscience 3 (2010) 811] (FR10). We found that the models produce a higher GPP (133 ± 15 Pg C yr⁻¹) than JU11 (118 ± 6 Pg C yr⁻¹). In response to rising atmospheric CO2 concentration, modeled NPP increases on average by 16% (5-20%) per 100 ppm, a slightly larger apparent sensitivity of NPP to CO2 than that measured at the FACE experiment locations (13% per 100 ppm). Global NBP differs markedly among individual models, although the mean value of 2.0 ± 0.8 Pg C yr⁻¹ is remarkably close to the mean value of RLS (2.1 ± 1.2 Pg C yr⁻¹). The interannual variability in modeled NBP is significantly correlated with that of RLS for the period 1980-2009. Both model-to-model and interannual variation in model GPP are larger than those in model NBP, because ecosystem respiration is strongly coupled to, and therefore positively correlated with, GPP in the models. The average linear regression slope of global NBP vs. temperature across the 10 models is -3.0 ± 1.5 Pg C yr⁻¹ °C⁻¹, within the uncertainty of that derived from RLS (-3.9 ± 1.1 Pg C yr⁻¹ °C⁻¹). However, 9 of the 10 models overestimate the regression slope of NBP vs. precipitation compared with the slope of the observed RLS vs. precipitation. Because most models lack processes that control GPP and NBP in addition to CO2 and climate, the agreement between modeled and observation-based GPP and NBP may be fortuitous. Carbon-nitrogen interactions (separable in only one model) significantly influence the simulated response of the carbon cycle to temperature and atmospheric CO2 concentration, suggesting that nutrient limitations should be included in the next generation of terrestrial biosphere models. © 2013 Blackwell Publishing Ltd.
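
    The temperature-sensitivity diagnostic above is an ordinary least-squares slope of global NBP against temperature anomalies. A minimal sketch with invented, illustrative values (not the study's data):

```python
# Pure-Python OLS slope, as used conceptually for the NBP-vs-temperature
# sensitivity diagnostic. The data points below are invented so that the
# slope comes out near the paper's multi-model mean of about -3 Pg C/yr/degC.

def ols_slope(x, y):
    """Least-squares regression slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

temp_anom = [-0.2, -0.1, 0.0, 0.1, 0.3]   # global temperature anomaly (deg C)
nbp = [2.6, 2.3, 2.0, 1.7, 1.1]           # global NBP (Pg C / yr, illustrative)
print(ols_slope(temp_anom, nbp))          # -3.0 Pg C / yr / deg C
```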

  7. 3D Modeling Techniques for Print and Digital Media

    NASA Astrophysics Data System (ADS)

    Stephens, Megan Ashley

    In developing my thesis, I sought to gain skills in using ZBrush to create 3D models, in 3D scanning, and in 3D printing. The models created compared the hearts of several vertebrates and were intended for students taking Comparative Vertebrate Anatomy. I used several resources to create a model of the human heart and was able to work from life while creating heart models of other vertebrates. I successfully learned ZBrush and 3D scanning, and successfully printed 3D heart models. ZBrush allowed me to create several intricate models for use in both animation and print media. The 3D scanning technique did not fit my needs for this project but may be of use for later projects. I was also able to 3D print using two different techniques.

  8. PconsD: ultra rapid, accurate model quality assessment for protein structure prediction.

    PubMed

    Skwark, Marcin J; Elofsson, Arne

    2013-07-15

    Clustering methods are often needed for accurately assessing the quality of modeled protein structures. A recent blind evaluation of quality assessment methods in CASP10 showed little difference between many methods in ranking models and selecting the best model. When comparing many models, however, the computational cost of the model comparison can become significant. Here, we present PconsD, a fast, stream-computing method for distance-driven model quality assessment that runs on consumer hardware. PconsD is at least one order of magnitude faster than other methods of comparable accuracy. The source code for PconsD is freely available at http://d.pcons.net/. Supplementary benchmarking data are also available there. arne@bioinfo.se Supplementary data are available at Bioinformatics online.
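
    Distance-driven consensus assessment scores each model by how well its internal distances agree with those of the other models in the ensemble. The sketch below illustrates the idea only; it is not PconsD's implementation, and the S-score-like agreement function and the tiny 2x2 "distance matrices" are assumptions for illustration:

```python
# Hedged sketch of distance-driven consensus quality assessment in the
# spirit of PconsD: a model is scored by the mean agreement between its
# residue-residue distances and those of every other model.

def s_score(d1, d2, d0=5.0):
    """Agreement between two distances: 1 when identical, -> 0 when far apart."""
    return 1.0 / (1.0 + ((d1 - d2) / d0) ** 2)

def consensus_quality(models):
    """Mean pairwise distance agreement of each model with all others."""
    scores = []
    for i, mi in enumerate(models):
        total, count = 0.0, 0
        for j, mj in enumerate(models):
            if i == j:
                continue
            for row_a, row_b in zip(mi, mj):
                for d1, d2 in zip(row_a, row_b):
                    total += s_score(d1, d2)
                    count += 1
        scores.append(total / count)
    return scores

# Three toy "models", each a 2x2 matrix of CA-CA distances; the outlier
# (the last one) should receive the lowest consensus score.
models = [[[0.0, 6.0], [6.0, 0.0]],
          [[0.0, 6.2], [6.2, 0.0]],
          [[0.0, 12.0], [12.0, 0.0]]]
quality = consensus_quality(models)
print(quality.index(min(quality)))  # 2: the outlier ranks worst
```

    The quadratic cost of the all-against-all loop is exactly what motivates a fast streaming implementation when thousands of models are compared.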

  9. Getting ahead: forward models and their place in cognitive architecture.

    PubMed

    Pickering, Martin J; Clark, Andy

    2014-09-01

    The use of forward models (mechanisms that predict the future state of a system) is well established in cognitive and computational neuroscience. We compare and contrast two recent, but interestingly divergent, accounts of the place of forward models in the human cognitive architecture. On the Auxiliary Forward Model (AFM) account, forward models are special-purpose prediction mechanisms implemented by additional circuitry distinct from core mechanisms of perception and action. On the Integral Forward Model (IFM) account, forward models lie at the heart of all forms of perception and action. We compare these neighbouring but importantly different visions and consider their implications for the cognitive sciences. We end by asking what kinds of empirical research might offer evidence favouring one or the other of these approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Comparison of Damage Models for Predicting the Non-Linear Response of Laminates Under Matrix Dominated Loading Conditions

    NASA Technical Reports Server (NTRS)

    Schuecker, Clara; Davila, Carlos G.; Rose, Cheryl A.

    2010-01-01

    Five models for matrix damage in fiber reinforced laminates are evaluated for matrix-dominated loading conditions under plane stress and are compared both qualitatively and quantitatively. The emphasis of this study is on a comparison of the response of embedded plies subjected to a homogeneous stress state. Three of the models are specifically designed for modeling the non-linear response due to distributed matrix cracking under homogeneous loading, and also account for non-linear (shear) behavior prior to the onset of cracking. The remaining two models are localized damage models intended for predicting local failure at stress concentrations. The modeling approaches of distributed vs. localized cracking as well as the different formulations of damage initiation and damage progression are compared and discussed.

  11. Analysis of model output and science data in the Virtual Model Repository (VMR).

    NASA Astrophysics Data System (ADS)

    De Zeeuw, D.; Ridley, A. J.

    2014-12-01

    Big scientific data include not only large repositories of measurements from platforms such as satellites and ground observatories, but also the vast output of numerical models. The Virtual Model Repository (VMR) provides scientific analysis and visualization tools for many numerical models of the Earth-Sun system. Individual runs can be analyzed in the VMR and compared to relevant observational data through associated metadata, and larger collections of runs can now also be studied, with statistics generated on the accuracy and tendencies of model output. The vast model repository at the CCMC, with over 1000 simulations of the Earth's magnetosphere, was used to examine overall trends in accuracy when compared to satellites such as GOES, Geotail, and Cluster. The methodology for this analysis as well as case studies will be presented.

  12. Modelling the effect of structural QSAR parameters on skin penetration using genetic programming

    NASA Astrophysics Data System (ADS)

    Chung, K. K.; Do, D. Q.

    2010-09-01

    In order to model relationships between chemical structures and biological effects in quantitative structure-activity relationship (QSAR) data, an alternative artificial-intelligence technique, genetic programming (GP), was investigated and compared to traditional statistical methods. GP, whose primary advantage is that it generates explicit mathematical equations, was employed to model QSAR data and to identify the most important molecular descriptors in the data. The models predicted by GP agreed with the statistical results, and the most predictive GP models were significantly improved over the statistical models when compared using ANOVA. Artificial intelligence techniques have recently been applied widely to analyse QSAR data; with its capability of generating mathematical equations, GP can be considered an effective and efficient method for modelling such data.
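
    The core of GP is evolving expression trees scored against the data. The sketch below is a greatly simplified, mutation-only hill climb over expression trees (real GP evolves a population with crossover and selection); the toy "QSAR" relationship, descriptor, operators, and all constants are invented for illustration:

```python
# Simplified GP-style symbolic regression: candidate equations are
# expression trees, scored by mean squared error against toy QSAR data,
# and improved by random point mutation.
import random

random.seed(0)

# Toy "QSAR" data: activity = 2*descriptor + 0.5 (invented relationship).
data = [(x, 2.0 * x + 0.5) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]

OPS = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

def evaluate(tree, x):
    """Evaluate an expression tree at descriptor value x."""
    if tree == "x":
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def mse(tree):
    """Fitness: mean squared error against the toy QSAR data."""
    return sum((evaluate(tree, x) - y) ** 2 for x, y in data) / len(data)

def rand_leaf():
    return random.choice(["x", round(random.uniform(0.0, 3.0), 1)])

def mutate(tree):
    """Replace a random node with a random leaf or shallow subtree."""
    if not isinstance(tree, tuple) or random.random() < 0.3:
        if random.random() < 0.5:
            return rand_leaf()
        return (random.choice(list(OPS)), rand_leaf(), rand_leaf())
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

# Mutation-only hill climb; never accepts a worse equation.
best = ("add", ("mul", 1.0, "x"), 0.0)
for _ in range(2000):
    cand = mutate(best)
    if mse(cand) <= mse(best):
        best = cand
print("final mean squared error:", round(mse(best), 3))
```

    The evolved tree remains a readable equation, which is the interpretability advantage the abstract attributes to GP over black-box techniques.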

  13. A comparison of the Full Outline of UnResponsiveness (FOUR) score and Glasgow Coma Score (GCS) in predictive modelling in traumatic brain injury.

    PubMed

    Kasprowicz, Magdalena; Burzynska, Malgorzata; Melcer, Tomasz; Kübler, Andrzej

    2016-01-01

    To compare the performance of multivariate predictive models incorporating either the Full Outline of UnResponsiveness (FOUR) score or the Glasgow Coma Score (GCS), in order to test whether substituting GCS with the FOUR score in predictive models for outcome in patients after TBI is beneficial. A total of 162 TBI patients were prospectively enrolled in the study. Stepwise logistic regression analysis was conducted to compare the prediction of (1) in-ICU mortality and (2) unfavourable outcome at 3 months post-injury, using as predictors either the FOUR score or GCS along with other factors that may affect patient outcome. The areas under the ROC curves (AUCs) were used to compare the discriminant ability and predictive power of the models. Internal validation was performed with the bootstrap technique and expressed as an accuracy rate (AcR). The FOUR score, age, the CT Rotterdam score, systolic ABP and being placed on a ventilator within day one (model 1: AUC: 0.906 ± 0.024; AcR: 80.3 ± 4.8%) predicted in-ICU mortality as well as the combination of GCS with the same set of predictors plus pupil reactivity (model 2: AUC: 0.913 ± 0.022; AcR: 81.1 ± 4.8%). The CT Rotterdam score, age and either the FOUR score (model 3) or GCS (model 4) predicted unfavourable outcome at 3 months post-injury equally well (AUC: 0.852 ± 0.037 vs. 0.866 ± 0.034; AcR: 72.3 ± 6.6% vs. 71.9 ± 6.6%, respectively). Adding the FOUR score or GCS at discharge from ICU to the predictive models for unfavourable outcome significantly increased their performance (AUC: 0.895 ± 0.029, p = 0.05; AcR: 76.1 ± 6.5%, p < 0.004 when compared with model 3; and AUC: 0.918 ± 0.025, p < 0.05; AcR: 79.6 ± 7.2%, p < 0.009 when compared with model 4), but there was no benefit from substituting GCS with the FOUR score. The results showed that the FOUR score and GCS perform equally well in multivariate predictive modelling in TBI.
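
    The AUC comparison at the heart of this study can be computed from the Mann-Whitney U statistic: the AUC is the probability that a randomly chosen positive case receives a higher predicted risk than a randomly chosen negative case. A minimal sketch with hypothetical scores (not the study's data):

```python
# Rank-based AUC (Mann-Whitney formulation) for comparing the
# discriminative ability of two predictive models. All scores below
# are invented for illustration.

def auc(scores, labels):
    """AUC = P(positive scored above negative); ties count as 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical predicted risks for 3 deceased (1) and 3 surviving (0) patients.
labels = [1, 1, 1, 0, 0, 0]
model_a = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2]   # stand-in for one model
model_b = [0.9, 0.7, 0.6, 0.5, 0.3, 0.2]   # stand-in for the other
print(round(auc(model_a, labels), 3), round(auc(model_b, labels), 3))  # 0.889 1.0
```

    In the study, such AUCs (with bootstrap confidence bounds) were the basis for concluding that the FOUR-based and GCS-based models discriminate equally well.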

  14. Modeling pedestrian shopping behavior using principles of bounded rationality: model comparison and validation

    NASA Astrophysics Data System (ADS)

    Zhu, Wei; Timmermans, Harry

    2011-06-01

    Models of geographical choice behavior have been predominantly based on rational choice models, which assume that decision makers are utility-maximizers. Rational choice models may be less appropriate as behavioral models when modeling decisions in complex environments, in which decision makers may simplify the decision problem using heuristics. Pedestrian behavior in shopping streets is an example. We therefore propose a modeling framework for pedestrian shopping behavior that incorporates principles of bounded rationality. We extend three classical heuristic rules (the conjunctive, disjunctive and lexicographic rules) by introducing threshold heterogeneity. The proposed models are implemented using data on pedestrian behavior in Wang Fujing Street, in the city center of Beijing, China. The models are estimated and compared with multinomial logit models and mixed logit models. Results show that the heuristic models perform best for all the decisions modeled. Validation tests are carried out through multi-agent simulation by comparing simulated spatio-temporal agent behavior with observed pedestrian behavior. The predictions of the heuristic models are slightly better than those of the multinomial logit models.
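
    The three classical non-compensatory heuristics the paper extends can be sketched directly; the attribute names, scales, and thresholds below are illustrative assumptions, and the paper's contribution (threshold heterogeneity across decision makers) is not shown:

```python
# Sketch of the three classical heuristic screening rules for a choice
# alternative described by attribute scores (illustrative 0-9 scales).

def conjunctive(attrs, thresholds):
    """Accept an alternative only if every attribute clears its threshold."""
    return all(a >= t for a, t in zip(attrs, thresholds))

def disjunctive(attrs, thresholds):
    """Accept if at least one attribute clears its threshold."""
    return any(a >= t for a, t in zip(attrs, thresholds))

def lexicographic(alternatives, order):
    """Choose by the most important attribute, breaking ties with the next."""
    return max(range(len(alternatives)),
               key=lambda i: tuple(alternatives[i][k] for k in order))

# A store rated (attractiveness, proximity): fails the conjunctive screen
# but passes the disjunctive one.
store = [7, 3]
print(conjunctive(store, [5, 5]), disjunctive(store, [5, 5]))  # False True

# Lexicographic choice among three stores, attractiveness considered first.
stores = [[7, 3], [7, 5], [4, 9]]
print(lexicographic(stores, order=[0, 1]))  # 1
```

    Unlike a logit model, none of these rules trades a deficit on one attribute against a surplus on another, which is what makes them non-compensatory.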

  15. Predicting wetland plant community responses to proposed water-level-regulation plans for Lake Ontario: GIS-based modeling

    USGS Publications Warehouse

    Wilcox, D.A.; Xie, Y.

    2007-01-01

    Integrated, GIS-based, wetland predictive models were constructed to assist in predicting the responses of wetland plant communities to proposed new water-level regulation plans for Lake Ontario. The modeling exercise consisted of four major components: 1) building individual site wetland geometric models; 2) constructing generalized wetland geometric models representing specific types of wetlands (rectangle model for drowned river mouth wetlands, half ring model for open embayment wetlands, half ellipse model for protected embayment wetlands, and ellipse model for barrier beach wetlands); 3) assigning wetland plant profiles to the generalized wetland geometric models that identify associations between past flooding/dewatering events and the regulated water-level changes of a proposed water-level-regulation plan; and 4) predicting relevant proportions of wetland plant communities and the time durations during which they would be affected under proposed regulation plans. Based on this conceptual foundation, the predictive models were constructed using bathymetric and topographic wetland models and technical procedures operating on the platform of ArcGIS. An example of the model processes and outputs for the drowned river mouth wetland model under a test regulation plan illustrated the four components and, when compared against other test regulation plans, produced results that met ecological expectations. The model results were also compared to independent data collected by photointerpretation. Although the data collections were not directly comparable, the predicted extent of meadow marsh in years in which photographs were taken was significantly correlated with the extent of mapped meadow marsh in all but barrier beach wetlands. The predictive model for wetland plant communities provided valuable input into International Joint Commission deliberations on new regulation plans and was also incorporated into faunal predictive models used for that purpose.

  16. Antibiotic Resistances in Livestock: A Comparative Approach to Identify an Appropriate Regression Model for Count Data

    PubMed Central

    Hüls, Anke; Frömke, Cornelia; Ickstadt, Katja; Hille, Katja; Hering, Johanna; von Münchhausen, Christiane; Hartmann, Maria; Kreienbrock, Lothar

    2017-01-01

    Antimicrobial resistance in livestock is a matter of general concern. To develop hygiene measures and methods for resistance prevention and control, epidemiological studies on a population level are needed to detect factors associated with antimicrobial resistance in livestock holdings. In general, regression models are used to describe these relationships between environmental factors and resistance outcome. Besides the study design, the correlation structures of the different antibiotic-resistance outcomes, as well as structural zeros on both the outcome and the exposure side, pose challenges for the epidemiological model-building process. The use of appropriate regression models that acknowledge these complexities is essential to assure valid epidemiological interpretations. The aims of this paper are (i) to explain the model building process comparing several competing models for count data (negative binomial model, quasi-Poisson model, zero-inflated model, and hurdle model) and (ii) to compare these models using data from a cross-sectional study on antibiotic resistance in animal husbandry. These goals are essential to evaluate which model is most suitable to identify potential prevention measures. The dataset used as an example in our analyses was initially generated to study the prevalence and associated factors for the appearance of cefotaxime-resistant Escherichia coli in 48 German fattening pig farms. For each farm, the outcome was the count of samples with resistant bacteria. There was almost no overdispersion and only moderate evidence of excess zeros in the data. Our analyses show that it is essential to evaluate regression models in studies analyzing the relationship between environmental factors and antibiotic resistances in livestock. After model comparison based on evaluation of model predictions, Akaike information criterion, and Pearson residuals, the hurdle model was judged to be the most appropriate model here.
PMID:28620609
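
    The logic of a hurdle model, and of the AIC comparison used above, can be sketched without any statistics library: a Bernoulli part decides whether a count is zero, and a zero-truncated Poisson generates the positive counts. The toy per-farm counts below are invented, and the study's richer candidates (negative binomial, quasi-Poisson, zero-inflated) are not shown:

```python
# Hedged sketch: fit a plain Poisson and a Poisson hurdle model to toy
# counts of resistant samples per farm, then compare them by AIC.
import math

def poisson_ll(counts, lam):
    """Poisson log-likelihood at rate lam."""
    return sum(-lam + y * math.log(lam) - math.lgamma(y + 1) for y in counts)

def fit_poisson(counts):
    lam = sum(counts) / len(counts)          # MLE for the Poisson rate
    return poisson_ll(counts, lam), 1        # (log-likelihood, n parameters)

def fit_hurdle(counts):
    """Bernoulli zero hurdle + zero-truncated Poisson for positives."""
    zeros = sum(1 for y in counts if y == 0)
    pi0 = zeros / len(counts)                # MLE for P(count = 0)
    pos = [y for y in counts if y > 0]
    mpos = sum(pos) / len(pos)
    lam = mpos
    for _ in range(50):                      # fixed point: truncated mean
        lam = mpos * (1 - math.exp(-lam))
    ll = zeros * math.log(pi0) + sum(
        math.log(1 - pi0) - lam + y * math.log(lam)
        - math.lgamma(y + 1) - math.log(1 - math.exp(-lam)) for y in pos)
    return ll, 2

def aic(ll, k):
    return 2 * k - 2 * ll

# Toy data with clear excess zeros: 30 clean farms, 10 with positives.
counts = [0] * 30 + [1, 1, 2, 2, 2, 3, 3, 4, 5, 6]
for name, (ll, k) in [("poisson", fit_poisson(counts)),
                      ("hurdle", fit_hurdle(counts))]:
    print(name, round(aic(ll, k), 1))
```

    On zero-heavy data like this, the hurdle model's lower AIC reflects exactly the trade-off the paper evaluates: one extra parameter buys a separate model for the zero process.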

  17. Evaluation of GOCE-based Global Geoid Models in Finnish Territory

    NASA Astrophysics Data System (ADS)

    Saari, Timo; Bilker-Koivula, Mirjam

    2015-04-01

    The gravity satellite mission GOCE made its final observations in the fall of 2013, by which time it had exceeded its expected lifespan of one year by more than three years. The mission thus collected more data on the Earth's gravitational field than expected, and increasingly comprehensive global geoid models have been derived since. ESA's GOCE High-level Processing Facility (HPF) has published GOCE global gravity field models annually. We compared all 12 HPF models, as well as 3 additional GOCE, 11 GRACE and 6 combined GOCE+GRACE models, with GPS-levelling data and gravity observations in Finland. The most accurate models were compared against the high-resolution global geoid models EGM96 and EGM2008. The models were evaluated up to three different degrees and orders: 150 (the common maximum for the GRACE models), 240 (the common maximum for the GOCE models) and each model's maximum. When coefficients up to degree and order 150 are used, the results of the GOCE models are comparable with those of the latest GRACE models. Generally, all of the latest GOCE and GOCE+GRACE models give standard deviations of the height anomaly differences of around 15 cm and of the gravity anomaly differences of around 10 mGal over Finland. The best solutions were not always achieved with the highest maximum degree and order of the satellite gravity field models, since the highest coefficients (above 240) may be less accurately determined. Over Finland, the latest GOCE and GOCE+GRACE models give similar results to the high-resolution models EGM96 and EGM2008 when coefficients up to degree and order 240 are used. This is mainly due to the high-resolution terrestrial data available in the area of Finland, which were used in the high-resolution models.
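
    The GPS-levelling evaluation summarised above compares the model's height anomaly at each benchmark with the geometric value (ellipsoidal height minus normal height) and reports the standard deviation of the differences. A minimal sketch with invented benchmark values:

```python
# Sketch of the GPS-levelling fit statistic: sample standard deviation
# of (geometric minus model) height anomaly differences. The four
# benchmark values are invented for illustration.

def std_dev(values):
    """Sample standard deviation (n - 1 in the denominator)."""
    n = len(values)
    m = sum(values) / n
    return (sum((v - m) ** 2 for v in values) / (n - 1)) ** 0.5

gps_lev = [20.10, 21.40, 19.80, 22.00]   # geometric height anomalies (m)
model = [20.25, 21.30, 19.95, 21.85]     # model-derived anomalies (m)
diffs = [g - m for g, m in zip(gps_lev, model)]
print(round(std_dev(diffs), 3))          # about 0.16 m for this toy set
```

    Subtracting the mean difference first (as std_dev does) removes any constant offset between the levelling datum and the global model, so only the scatter is scored.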

  18. Multi-scale modeling of tsunami flows and tsunami-induced forces

    NASA Astrophysics Data System (ADS)

    Qin, X.; Motley, M. R.; LeVeque, R. J.; Gonzalez, F. I.

    2016-12-01

    The modeling of tsunami flows and tsunami-induced forces in coastal communities that incorporates the constructed environment is challenging for many numerical modelers because of the scale and complexity of the physical problem. A two-dimensional (2D) depth-averaged model can be efficient for modeling waves offshore but may not be accurate enough to predict the complex flow, with transient variations in the vertical direction, around constructed environments on land. On the other hand, a more complex three-dimensional model is much more computationally expensive and can become impractical due to the size of the problem and the meshing requirements near the built environment. In this study, a 2D depth-integrated model and a 3D Reynolds-Averaged Navier-Stokes (RANS) model are built to model a 1:50 model-scale, idealized community, representative of Seaside, OR, USA, for which existing experimental data are available for comparison. Numerical results from the two models are compared with each other as well as with experimental measurements. Both models predict the flow parameters (water level, velocity, and momentum flux in the vicinity of the buildings) accurately in general, except near the initial impact, where the depth-averaged model can fail to capture the complexities of the flow. Forces predicted by direct integration of the predicted pressure on structural surfaces from the 3D model and by using momentum flux from the 2D model with the constructed environment are compared, which indicates that the force prediction from the 2D model is not always reliable in such a complicated case. Force predictions from integration of the pressure are also compared with forces predicted from bare-earth momentum flux calculations to reveal the importance of incorporating the constructed environment in force prediction models.
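
    The 2D force estimate referred to above rests on a drag-type closure on the depth-averaged momentum flux: the product h·u² from the 2D model is converted to a load via a drag coefficient. The sketch below illustrates that closure only; the drag coefficient and all numbers are assumptions, not values from the study:

```python
# Hedged sketch of a momentum-flux-based force estimate from 2D
# depth-averaged output: F = 0.5 * rho * Cd * b * (h * u**2), where
# h * u**2 is the momentum flux per unit width. Cd = 2.0 and the flow
# values are illustrative assumptions.

RHO = 1000.0  # water density, kg/m^3

def drag_force(h, u, width, cd=2.0):
    """Hydrodynamic load (N) on a wall of given width (m) from flow
    depth h (m) and depth-averaged velocity u (m/s)."""
    return 0.5 * RHO * cd * width * h * u ** 2

print(drag_force(h=2.0, u=5.0, width=4.0))  # 200000.0 N for this example
```

    The 3D alternative, integrating predicted pressure over the structural surface, needs no such closure, which is one reason the two approaches can disagree near the initial impact.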

  19. SPITFIRE within the MPI Earth system model: Model development and evaluation

    NASA Astrophysics Data System (ADS)

    Lasslop, Gitta; Thonicke, Kirsten; Kloster, Silvia

    2014-09-01

    Quantification of the role of fire within the Earth system requires an adequate representation of fire as a climate-controlled process within an Earth system model. To be able to address questions on the interaction between fire and the Earth system, we implemented the mechanistic fire model SPITFIRE in JSBACH, the land surface model of the MPI Earth system model. Here, we document the model implementation as well as model modifications. We evaluate our model results by comparing the simulation to the GFED version 3 satellite-based data set. In addition, we assess the sensitivity of the model to the meteorological forcing and to the spatial variability of a number of fire-relevant model parameters. A first comparison of model results with burned area observations showed a strong correlation of the residuals with wind speed. Further analysis revealed that the response of fire spread to wind speed was too strong for application at the global scale. Therefore, we developed an improved parametrization to account for this effect. The evaluation of the improved model shows that it is able to capture the global gradients and the seasonality of burned area. Some areas of model-data mismatch can be explained by differences in vegetation cover compared to observations. We achieve benchmarking scores comparable to other state-of-the-art fire models. The global total burned area is sensitive to the meteorological forcing; adjustment of parameters leads to similar model results for both forcing data sets with respect to spatial and seasonal patterns. This article was corrected on 29 SEP 2014. See the end of the full text for details.

  20. Analysis of a DNA simulation model through hairpin melting experiments.

    PubMed

    Linak, Margaret C; Dorfman, Kevin D

    2010-09-28

    We compare the predictions of a two-bead Brownian dynamics simulation model to melting experiments of DNA hairpins with complementary AT or GC stems and noninteracting loops in buffer A. This system emphasizes the role of stacking and hydrogen bonding energies, which are characteristics of DNA, rather than backbone bending, stiffness, and excluded volume interactions, which are generic characteristics of semiflexible polymers. By comparing high throughput data on the open-close transition of various DNA hairpins to the corresponding simulation data, we (1) establish a suitable metric to compare the simulations to experiments, (2) find a conversion between the simulation and experimental temperatures, and (3) point out several limitations of the model, including the lack of G-quartets and cross stacking effects. Our approach and experimental data can be used to validate similar coarse-grained simulation models.
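
    The open-close transition of a hairpin is commonly analysed with a two-state van 't Hoff model, which also provides a natural way to map simulation temperature onto experimental temperature via the melting midpoint. The sketch below is this standard two-state curve, not the paper's simulation model; the enthalpy value is an illustrative assumption:

```python
# Two-state van 't Hoff melting curve: fraction of open hairpins as a
# function of temperature, with Tm the midpoint where half are open.
# dH (unfolding enthalpy, J/mol) is an illustrative assumption.
import math

R = 8.314  # gas constant, J/(mol K)

def fraction_open(T, Tm, dH):
    """Fraction of hairpins in the open state at temperature T (K)."""
    K = math.exp(-dH / R * (1.0 / T - 1.0 / Tm))  # open/closed equilibrium
    return K / (1.0 + K)

# Exactly half the population is open at the melting temperature.
print(fraction_open(T=330.0, Tm=330.0, dH=2e5))  # 0.5
```

    Matching simulated and measured curves of this shape is one way to establish the temperature conversion the authors describe.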
