Boisson, Sylvain; Le Stradic, Soizig; Collignon, Julien; Séleck, Maxime; Malaisse, François; Ngoy Shutcha, Mylor; Faucon, Michel-Pierre; Mahy, Grégory
2016-07-01
Phytostabilisation (i.e. using plants to immobilise contaminants) represents a well-known technology to hamper heavy metal spread across landscapes. In southeastern D.R. Congo, Microchloa altera, a tolerant grass from the copper hills, was recently identified as a candidate species to stabilise copper in the soil. More than 50 grasses compose this flora, which may be studied to implement phytostabilisation strategies. However, little is known about their phenology, tolerance, reproductive strategy or demography. The present study aims to characterise the Poaceae that may be used for phytostabilisation purposes based on the following criteria: their ecological distribution, seed production at two time points, abundance, soil coverage and the germination percentage of their seeds. We selected seven perennial Poaceae that occur on the copper hills. Their ecological distributions (i.e. species response curves) were modelled along copper or cobalt gradients with generalised additive models using a logit link, based on 172 presence-absence samples from three sites. For the other variables, a total of 69 quadrats (1 m(2)) were randomly placed across three sites and habitats. For each species, we compared the number of inflorescence-bearing stems (IBS) per plot, the percentage of cover, the number of seeds per IBS and the estimated number of seeds per plot between sites and habitats. Three species (Andropogon schirensis, Eragrostis racemosa and Loudetia simplex) were particularly promising for phytostabilisation programmes. They produced a large quantity of seeds and had the highest percentage of cover. However, A. schirensis and L. simplex showed significant variation in the number of seeds and the percentage of cover according to site.
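As a point of reference for the modelling step described above, the following sketch fits a presence-absence species response curve along a single copper gradient using a generalised additive model with a logit link. It assumes the Python pygam package and purely hypothetical data; it is not the authors' code, and in the study copper and cobalt gradients were modelled from the actual field samples.

```python
# Sketch: logistic GAM for a species response curve along a soil Cu gradient.
# Assumes the pygam package; the data below are randomly generated placeholders.
import numpy as np
from pygam import LogisticGAM, s

rng = np.random.default_rng(0)
cu = rng.lognormal(mean=6.0, sigma=1.0, size=172)   # soil Cu, mg kg-1 (hypothetical)
present = rng.binomial(1, 0.4, size=172)            # presence/absence of one grass species

# One smooth term over the Cu gradient, binomial response with logit link
gam = LogisticGAM(s(0)).fit(cu.reshape(-1, 1), present)

# The fitted probability of occurrence along the gradient is the response curve
grid = np.linspace(cu.min(), cu.max(), 100).reshape(-1, 1)
response_curve = gam.predict_proba(grid)
```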
Marjana Regvar; Matevz Likar; Andrej Piltaver; Nives Kugonic; Jane E. Smith
2010-01-01
Goat willow (Salix caprea L.) was selected in a previous vegetation screening study as a potential candidate for the later-stage phytostabilisation efforts at a heavily metal polluted site in Slovenia. The aims of this study were to identify the fungi colonising roots of S. caprea along the gradient of vegetation succession and...
Lopareva-Pohu, Alena; Verdin, Anthony; Garçon, Guillaume; Lounès-Hadj Sahraoui, Anissa; Pourrut, Bertrand; Debiane, Djouher; Waterlot, Christophe; Laruelle, Frédéric; Bidar, Géraldine; Douay, Francis; Shirali, Pirouz
2011-06-01
Due to anthropogenic activities, large extents of soil are highly contaminated by metal trace elements (MTE). Aided phytostabilisation aims to establish a vegetation cover in order to promote the in situ immobilisation of trace elements by combining the use of metal-tolerant plants and inexpensive mineral or organic soil amendments. Eight years after coal fly ash (CFA) soil amendment, MTE bioavailability and uptake by two plants, Lolium perenne and Trifolium repens, were evaluated, as were some biological markers reflecting physiological stress. Results showed that the two plant species under study were suitable for reducing the mobility and availability of these elements. Moreover, plant growth was better on CFA-amended MTE-contaminated soils, and plant sensitivity to MTE-induced physiological stress, as studied through photosynthetic pigment contents and oxidative damage, was lower or similar. In conclusion, these results support the usefulness of aided phytostabilisation of highly MTE-contaminated soils. Copyright © 2011 Elsevier Ltd. All rights reserved.
Pourrut, Bertrand; Lopareva-Pohu, Alena; Pruvot, Christelle; Garçon, Guillaume; Verdin, Anthony; Waterlot, Christophe; Bidar, Géraldine; Shirali, Pirouz; Douay, Francis
2011-10-01
Aided phytostabilisation is a cost-efficient technique to manage metal-contaminated areas, particularly in the presence of extensive pollution. Plant establishment and survival in highly metal-contaminated soils are crucial for phytostabilisation success, as metal toxicity to plants is widely reported. A relevant phytostabilisation solution must limit metal transfer through the food chain. Therefore, this study aimed at evaluating the long-term efficiency of aided phytostabilisation on former agricultural soils highly contaminated by cadmium, lead and zinc. The influence of afforestation and fly ash amendments on reducing metal phytoavailability was investigated, as were their effects on plant development. Before being planted with a tree mix, the site was divided into three plots: a reference plot with no amendment, a plot amended with silico-aluminous fly ash and one with sulfo-calcic fly ash. Unlike Salix alba and Quercus robur, Alnus glutinosa, Acer pseudoplatanus and Robinia pseudoacacia grew well on the site and accumulated, overall, quite low concentrations of metals in their leaves and young twigs. This suggests that these three species have an excluder phenotype for Cd, Zn and Pb. After 8 years, metal availability to A. glutinosa, A. pseudoplatanus and R. pseudoacacia, and translocation to their above-ground parts, had strongly decreased in fly ash-amended soils. These decreases fit well with the depletion of CaCl(2)-extractable metals in amended soils. Although both fly ashes were effective in decreasing Cd, Pb and Zn concentrations in the above-ground parts of trees, the sulfo-calcic ash was more efficient. Copyright © 2011 Elsevier B.V. All rights reserved.
Padmavathiamma, P. K.; Li, L. Y.
2009-04-01
This research addressed the phytoremediation of roadside soils subjected to multi-component metal solutions. A typical right of way for roads in Canada is around 30 m, and at least 33% of that land is unpaved and can support animal life. Thus, land associated with 12,000 km of roads in the province of British Columbia and millions of kilometres around the world represents a substantial quantity of wildlife habitat where metal contamination needs to be remediated. Phytostabilisation requires the least maintenance among the different phytoremediation techniques, and it could be a feasible and practical method of remediating roadside soils along highways and improving highway runoff drainage. The suitability of five plant species was studied for phytoextraction and phytostabilisation in a region with the temperate maritime climate of coastal British Columbia, Canada. Pot experiments were conducted using Lolium perenne L. (perennial rye grass), Festuca rubra L. (creeping red fescue), Helianthus annuus L. (sunflower), Poa pratensis L. (Kentucky bluegrass) and Brassica napus L. (rape) in soils treated with three different metal (Cu, Pb, Mn and Zn) concentrations. The biometric characters of plants in soils with multiple-metal contamination, their metal accumulation characteristics, translocation properties and metal removal were assessed at two stages of plant growth, 90 and 120 DAS (days after sowing). Lolium was found to be suitable for the phytostabilisation of Cu and Pb, Festuca for Mn and Poa for Zn. Metal removal was higher at 120 than at 90 days after sowing, and metals concentrated more in the underground tissues with less translocation to the above-ground parts. Bioconcentration factors indicate that Festuca had the highest accumulation for Cu, Helianthus for Pb and Zn and Poa for Mn.
Mohanty, M; Dhal, N K; Patra, P; Das, B; Reddy, P S R
2012-01-01
The present pot culture study was carried out to assess the potential phytostabilisation of iron ore tailings using lemon grass (Cymbopogon flexuosus), a drought-tolerant, perennial, aromatic grass. Experiments were conducted by varying the proportion of garden soil (control) mixed with iron ore tailings. Various parameters, viz. plant growth, number of tillers, biomass and oil content of lemon grass, were evaluated. The studies indicated that the growth parameters of lemon grass in a 1:1 mixture of garden soil and iron ore tailings were significantly higher (~5% increase) compared with plants grown in control soil. However, the oil content of lemon grass remained more or less the same in both cases. The results also show that at higher proportions of tailings the biomass yield decreases. The studies indicate that lemon grass, with its fibrous root system, is an efficient soil binder that helps prevent soil erosion.
Touceda-González, M; Álvarez-López, V; Prieto-Fernández, Á; Rodríguez-Garrido, B; Trasar-Cepeda, C; Mench, M; Puschenreiter, M; Quintela-Sabarís, C; Macías-García, F; Kidd, P S
2017-01-15
(Aided) phytostabilisation has been proposed as a suitable technique to decrease the environmental risks associated with metal(loid)-enriched mine tailings. Field-scale evaluations are needed to demonstrate its effectiveness in the medium to long term. A field trial was implemented in spring 2011 in Cu-rich mine tailings in the NW of Spain. The tailings were amended with composted municipal solid wastes and planted with Salix spp., Populus nigra L. or Agrostis capillaris L. cv. Highland. Plant growth, nutritive status and metal accumulation, and soil physico-chemical and biochemical properties, were monitored over three years (four years for plant growth). The total bacterial community, α- and β-Proteobacteria, Actinobacteria and Streptomycetaceae were studied by DGGE of 16S rDNA fragments. Compost amendment improved soil properties such as pH, CEC and fertility, and decreased soil Cu availability, leading to the establishment of a healthy vegetation cover. Both the compost amendment and plant root activity stimulated soil enzyme activities and induced important shifts in the bacterial community structure over time. The woody plant S. viminalis and the grassy species A. capillaris showed the best results in terms of plant growth and biomass production. The beneficial effects of the phytostabilisation process were maintained at least three years after treatment. Copyright © 2016 Elsevier Ltd. All rights reserved.
Role of phyto-stabilised silver nanoparticles in suppressing adjuvant induced arthritis in rats.
Mani, Aparna; Vasanthi, C; Gopal, V; Chellathai, Darling
2016-12-01
The present study aimed to evaluate the anti-arthritic effects of silver nanoparticles synthesised using Piper nigrum extract and to further establish their mechanism of action in a rat model of adjuvant-induced arthritis (AA). Adjuvant arthritis was induced by injecting complete Freund's adjuvant (0.1 mL) into the left hind paw of 36 albino Wistar rats (n=6). Silver nanoparticles stabilised with Piper nigrum extract (25 and 50 mg/kg), commercial silver nanoparticles (50 mg/kg) and methotrexate (0.1 mg/kg) were administered by the intraperitoneal route from day 11 to day 22 on alternate days. It was found that treatment with silver nanoparticles stabilised with Piper nigrum (S-AgNPs) significantly reduced paw edema and alleviated the histopathological changes of cell infiltration, synovial hyperplasia, and bone and cartilage destruction. Furthermore, the phytostabilised silver nanoparticles (S-AgNPs) inhibited the protein expression of NF-κB p65 and TNF-α, as evidenced by immunohistochemistry analysis. Our current findings suggest that silver nanoparticles stabilised with Piper nigrum extract (S-AgNPs) have potent anti-arthritic activity, which is mediated by inhibition of TNF-α and suppression of the pro-inflammatory cytokines secreted in response to activated NF-κB transcription factors. Copyright © 2016 Elsevier B.V. All rights reserved.
Lambrechts, Thomas; Gustot, Quentin; Couder, Eléonore; Houben, David; Iserentant, Anne; Lutts, Stanley
2011-11-01
Phytoremediation is a promising and cost-effective strategy to manage heavy metal polluted sites. In this experiment, we simultaneously compared phytoextraction and phytostabilisation techniques on a Cd- and Zn-contaminated soil by monitoring plant accumulation and leaching. Lolium perenne plants were cultivated for 2 months under controlled environmental conditions in a 27.6 dm(3) pot experiment allowing the collection of leachates. Heavy metal phytoextraction was promoted by adding Na-EDTA (0.5 g kg(-1) of soil) to the watering solution. Phytostabilisation was assessed by mixing the soil with steel shots (1%) before L. perenne sowing. The presence of plants exacerbated heavy metal leaching by improving soil hydraulic conductivity. The use of EDTA for phytoextraction led to higher concentrations of heavy metals in shoots. However, this higher extraction was insufficient to satisfactorily reduce the heavy metal content in the soil, and led to substantial EDTA-induced heavy metal leaching. On the other hand, the addition of steel shots efficiently decreased both Cd and Zn mobility, according to 0.01 M CaCl(2) extraction, and leaching. However, the improvement of growth conditions by steel shots led to a higher heavy metal mass in shoot tissues. Therefore, soil heavy metal mobility and plant metal uptake are not systematically positively correlated. Copyright © 2011 Elsevier Ltd. All rights reserved.
Pardo, T; Bernal, M P; Clemente, R
2017-07-01
Phytostabilisation strategies have proven to be an efficient remediation option for mine tailings, but the plant species and amendments have to be carefully selected. A remediation experiment was carried out at the semi-field level in tailings (pH 3.2; ≈1100, 4700 and 5000 mg kg(-1) of As, Pb and Zn, respectively) from the mining district of La Unión-Cartagena (SE Spain). A red mud derivative (Fe/Al oxides), its combination with compost, and hydrated lime (Ca hydroxide) were applied in field plots of 0.25 m(2). After four months of field stabilisation, the tailings were transferred unaltered to a plant growth facility, and Atriplex halimus and Zygophyllum fabago (halophytes) were sown. Three months later, trace element (TE) solubility, plant accumulation and chemical speciation in the tailings pore water were studied. In unamended tailings, soluble TE concentrations were very high (e.g., 40 mg Zn l(-1)), the dominant species being free ions and SO4(2-) complexes (>70%). The addition of amendments increased tailings pH (6.7-7), reduced TE solubility and extractability (>80-99%) and changed the dominant species of soluble Al, Cu, Pb and Zn to hydroxides and/or organo-metallic complexes, but slightly increased the extractable As and soluble Tl concentrations. Plants were able to grow only in amended tailings, and both species presented low levels of Al, As, Cd and Zn. Therefore, the combined use of the red mud derivative, compost and halophytes was shown to be a good phytostabilisation strategy, although the dose applied must be carefully chosen in order to avoid possible solubilisation of As and Tl. Copyright © 2017 Elsevier Ltd. All rights reserved.
Phytoremediation of metal-contaminated soil in temperate humid regions of British Columbia, Canada.
Padmavathiamma, Prabha K; Li, Loretta Y
2009-08-01
The suitability of five plant species was studied for phytoextraction and phytostabilisation in a region with the temperate maritime climate of coastal British Columbia, Canada. Pot experiments were conducted using Lolium perenne L. (perennial rye grass), Festuca rubra L. (creeping red fescue), Helianthus annuus L. (sunflower), Poa pratensis L. (Kentucky bluegrass) and Brassica napus L. (rape) in soils treated with three different metal (Cu, Pb, Mn and Zn) concentrations. The biometric characters of plants in soils with multiple-metal contamination, their metal accumulation characteristics, translocation properties and metal removal were assessed at two stages of plant growth, 90 and 120 DAS (days after sowing). Lolium was found to be suitable for the phytostabilisation of Cu and Pb, Festuca for Mn and Poa for Zn. Metal removal was higher at 120 than at 90 days after sowing, and metals concentrated more in the underground tissues with less translocation to the above-ground parts. Bioconcentration factors indicate that Festuca had the highest accumulation for Cu, Helianthus for Pb and Zn and Poa for Mn.
Shutcha, Mylor Ngoy; Mubemba, Michel Mpundu; Faucon, Michel-Pierre; Luhembwe, Michel Ngongo; Visser, Marjolein; Colinet, Gilles; Meerts, Pierre
2010-08-01
This study evaluates the feasibility of using the grass species Rendlia altera, Monocymbium ceresiiforme and Cynodon dactylon, together with amendments (compost and lime), for the phytostabilisation of soils contaminated by Cu in the province of Katanga (Democratic Republic of Congo). Species were grown on control and Cu-contaminated plots (artificially contaminated with 2,500 mg kg(-1) Cu), either unamended (NA), amended with 4.5 kg compost m(-2), or amended with 0.2 kg lime m(-2). R. altera was also grown on contaminated plots amended with 22.5 kg compost m(-2) or 1 kg lime m(-2). Plant survival, growth and reproduction were monitored for two years. Cu concentrations in leaves of R. altera and M. ceresiiforme were analysed. Soil pH and extractable Cu (0.01 M CaCl2) were analysed in April 2007 and 2008. Results showed that R. altera seems to be the best candidate because it had the highest survival on NA plots, followed by M. ceresiiforme, while liming was necessary to ensure the survival of C. dactylon. Lime increased plant reproduction and reduced Cu accumulation in leaves compared to compost. However, the higher survival and number of spikes of R. altera obtained in experiment 2 with 22.5 kg compost m(-2) suggest that lime × compost interactions should be investigated in further studies.
Poplar response to cadmium and lead soil contamination.
Radojčić Redovniković, Ivana; De Marco, Alessandra; Proietti, Chiara; Hanousek, Karla; Sedak, Marija; Bilandžić, Nina; Jakovljević, Tamara
2017-10-01
An outdoor pot experiment was designed to study the potential of poplar (Populus nigra 'Italica') in the phytoremediation of cadmium (Cd) and lead (Pb). Poplar was treated with combinations of different concentrations of Cd (w = 10, 25, 50 mg kg(-1) soil) and Pb (400, 800, 1200 mg kg(-1) soil), and several physiological and biochemical parameters were monitored, including the accumulation and distribution of the metals in different plant parts (leaf, stem, root). Simultaneously, changes in the antioxidant system in roots and leaves were monitored in order to follow the synergistic effects of both heavy metals. Moreover, a statistical analysis based on Random Forests Analysis (RFA) was performed in order to determine the most important predictors affecting the growth and antioxidative machinery of poplar under heavy metal stress. The study demonstrated that the tested poplar could be a good candidate for phytoextraction of Cd in moderately contaminated soils, while in heavily contaminated soil it could only be considered a phytostabiliser. For Pb remediation, only phytostabilisation could be considered. Using RFA, we showed that it is important to conduct such experiments outdoors and to include environmental conditions in order to study more realistic changes in growth parameters and in the accumulation and distribution of heavy metals. Also, to better understand the interactions among the aforementioned parameters, it is important to conduct experiments over prolonged exposure times. This is especially important for long-lived woody species. Copyright © 2017. Published by Elsevier Inc.
Phytoremediation potential of wild plants growing on soil contaminated with heavy metals.
Čudić, Vladica; Stojiljković, Dragoslava; Jovović, Aleksandar
2016-09-01
Phytoremediation is an emerging technology that employs higher plants to clean up contaminated environments, including metal-polluted soils. Because it produces a biomass rich in extracted toxic metals, further treatment of this biomass is necessary. The aim of our study was to assess the five-year potential of the following native wild plants to produce biomass and remove heavy metals from a polluted site: poplar (Populus ssp.), ailanthus (Ailanthus glandulosa L.), false acacia (Robinia pseudoacacia L.), ragweed (Ambrosia artemisiifolia L.) and mullein (Verbascum thapsus L.). Average soil contamination with Pb, Cd, Zn, Cu, Ni, Cr and As in the root zone was 22,948.6 mg kg-1, 865.4 mg kg-1, 85,301.7 mg kg-1, 3,193.3 mg kg-1, 50.7 mg kg-1, 41.7 mg kg-1 and 617.9 mg kg-1, respectively. We measured moisture and ash content, concentrations of Pb, Cd, Zn, Cu, Ni, Cr and As in the above-ground parts of the plants and in the ash produced by their combustion, plus gross calorific values. The plants' phytoextraction and phytostabilisation potential was evaluated based on their bioconcentration factor (BCF) and translocation factor (TF). Mullein was identified as a hyperaccumulator of Cd. It also showed a higher gross calorific value (19,735 kJ kg-1) than ragweed (16,469 kJ kg-1). The results of this study suggest that mullein has great potential for phytoextraction and biomass generation, and that ragweed could be an effective tool for phytostabilisation.
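For readers unfamiliar with the two indices used above, the sketch below shows how a bioconcentration factor (plant concentration relative to soil) and a translocation factor (shoot concentration relative to root) are typically computed. Definitions vary slightly between studies, and the plant concentrations used here are assumed values for illustration, not results from this paper; only the soil Cd value comes from the abstract.

```python
# Illustrative BCF/TF calculation for a single element in a single plant.
# Definitions follow a common convention; some studies use the root rather
# than the shoot concentration in the BCF numerator.
def bioconcentration_factor(plant_mg_kg: float, soil_mg_kg: float) -> float:
    """Plant (above-ground) concentration divided by soil concentration."""
    return plant_mg_kg / soil_mg_kg

def translocation_factor(shoot_mg_kg: float, root_mg_kg: float) -> float:
    """Above-ground concentration divided by root concentration."""
    return shoot_mg_kg / root_mg_kg

# Hypothetical mullein Cd data: the soil value (865.4 mg kg-1) is from the abstract,
# the root and shoot values are assumptions made for the sake of the example.
cd_soil, cd_root, cd_shoot = 865.4, 1200.0, 950.0
print(f"BCF = {bioconcentration_factor(cd_shoot, cd_soil):.2f}")
print(f"TF  = {translocation_factor(cd_shoot, cd_root):.2f}")
```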
Biochar as possible long-term soil amendment for phytostabilisation of TE-contaminated soils.
Bopp, Charlotte; Christl, Iso; Schulin, Rainer; Evangelou, Michael W H
2016-09-01
Soils contaminated by trace elements (TEs) pose a high risk to their surrounding areas as TEs can spread by wind and water erosion or leaching. A possible option to reduce TE transfer from these sites is phytostabilisation. It is a long-term and cost-effective rehabilitation strategy which aims at immobilising TEs within the soil by vegetation cover and amendment application. One possible amendment is biochar. It is charred organic matter which has been shown to immobilise metals due to its high surface area and alkaline pH. Doubts have been expressed about the longevity of this immobilising effect as it could dissipate once the carbonates in the biochar have dissolved. Therefore, in a pot experiment, we determined plant metal uptake by ryegrass (Lolium perenne) from three TE-contaminated soils treated with two biochars, which differed only in their pH (acidic, 2.80; alkaline, 9.33) and carbonate (0.17 and 7.3 %) content. Root biomass was increased by the application of the alkaline biochar due to the decrease in TE toxicity. Zinc and Cu bioavailability and plant uptake were equally reduced by both biochars, showing that surface area plays an important role in metal immobilisation. Biochar could serve as a long-term amendment for TE immobilisation even after its alkalinity effect has dissipated.
Moreno-Jiménez, Eduardo; Esteban, Elvira; Carpena-Ruiz, Ramón O; Lobo, María Carmen; Peñalosa, Jesús M
2012-01-30
Phytoremediation can be a suitable option to manage derelict mine soils. A pot experiment was carried out under semi-controlled conditions with a mine-impacted soil. A further contamination event was mimicked by applying 5% pyritic sludge. Four species were planted in pots (Myrtus communis, Retama sphaerocarpa, Rosmarinus officinalis and Tamarix gallica), and some pots remained unplanted as a control. The substrates were moderately to highly contaminated, mainly with arsenic and zinc. The strong acidification induced by the pyritic sludge was buffered with lime, and plants survived in all the pots. Liming provoked an effective immobilisation of the metals and arsenic. Plant establishment decreased labile As in the substrate by 50%, mainly under M. communis, although the levels of extractable metals were not affected by the plants. R. sphaerocarpa and M. communis increased the levels of C and N in the soil by 23% and 34%, respectively, and in some cases doubled enzymatic activities and microbial respiration. The low transfer of trace elements to shoots limited the phytoextraction rate. Our results support the use of phytostabilisation in Mediterranean mine soils and show how plants of R. sphaerocarpa and M. communis may increase soil health and quality during revegetation. Copyright © 2011 Elsevier B.V. All rights reserved.
Selective chemical binding enhances cesium tolerance in plants through inhibition of cesium uptake.
Adams, Eri; Chaban, Vitaly; Khandelia, Himanshu; Shin, Ryoung
2015-03-05
High concentrations of cesium (Cs(+)) inhibit plant growth, but the detailed mechanisms of Cs(+) uptake, transport and response in plants are not well known. In order to identify small molecules with a capacity to enhance plant tolerance to Cs(+), chemical library screening was performed using Arabidopsis. Of 10,000 chemicals tested, five compounds were confirmed as Cs(+) tolerance enhancers. Further investigation and quantum mechanical modelling revealed that one of these compounds reduced Cs(+) concentrations in plants and that the imidazole moiety of this compound bound specifically to Cs(+). Analysis of analogous compounds indicated that the structure of the identified compound is important for the effect to be conferred. Taken together, the Cs(+) tolerance enhancer isolated here renders plants tolerant to Cs(+) by inhibiting Cs(+) entry into roots via specific binding to the ion, thus providing, for instance, a basis for the phytostabilisation of radiocesium-contaminated farmland.
Bidar, Géraldine; Waterlot, Christophe; Verdin, Anthony; Proix, Nicolas; Courcot, Dominique; Détriché, Sébastien; Fourrier, Hervé; Richard, Antoine; Douay, Francis
2016-04-15
Aided phytostabilisation using trees and fly ashes is a promising technique which has shown its effectiveness in the management of highly metal-contaminated soils. However, this success is generally established on the basis of topsoil physicochemical analysis and short-term experiments. This paper focuses on the long-term effects of the afforestation and of two fly ashes (silico-aluminous and sulfo-calcic, called FA1 and FA2, respectively) by assessing the integrity of the fly ashes 10 years after their incorporation into the soil, as well as the vertical distribution of physicochemical parameters and trace elements (TEs) in the amended soils (F1 and F2) in comparison with a non-amended soil (R). Ten years after the soil treatment, particle size distribution analysis of the fly ashes and their corresponding masses (fly ash + soil particles) showed a loss or an agglomeration of finer particles. This evolution matches the appearance of gypsum (CaSO4·2H2O) in FA2m instead of anhydrite (CaSO4), which is the major compound of FA2. This finding corresponds well with the dissolution and lixiviation of the Ca, S and P contained in FA2 along the F2 soil profile, generating an accumulation of these elements at 30 cm depth. However, no variation in TE contamination was found between 0 and 25 cm depth in the F2 soil except for Cd. Conversely, Cd, Pb, Zn and Hg enrichment was observed at 25 cm depth in the F1 soil, whereas no enrichment was observed for As. The fly ashes studied, and notably FA2, were able to reduce Cd, Pb and Zn availability in the soil, and this capacity persists over time despite their structural and chemical changes. Copyright © 2016 Elsevier Ltd. All rights reserved.
González-Alcaraz, María Nazaret; Conesa, Héctor Miguel; Tercero, María del Carmen; Schulin, Rainer; Alvarez-Rogel, José; Egea, Consuelo
2011-02-15
The aim of this study was to evaluate the combined effects of liming and the behaviour of Sarcocornia fruticosa as a strategy for the phytomanagement of metal-polluted salt marsh soils. Soils were taken from two polluted salt marshes (one with fine texture and pH ∼6.4 and the other with sandy texture and pH ∼3.1). A lime amendment derived from the marble industry was added to each soil at a rate of 20 g kg(-1), giving four treatments: neutral soil with/without liming and acidic soil with/without liming. Cuttings of S. fruticosa were planted in pots filled with these substrates and grown for 10 months. The pots were irrigated with eutrophicated water. As expected, the lime amendment decreased soluble metal concentrations. In both soils, liming favoured the growth of S. fruticosa and enhanced the capacity of the plants to phytostabilise metals in their roots. Copyright © 2010 Elsevier B.V. All rights reserved.
Bleeker, P M; Teiga, P M; Santos, M H; de Koe, T; Verkleij, J A C
2003-01-01
Phytostabilisation of bare, heavily contaminated substrates, such as abandoned mine sites, is considered a very appropriate technology to diminish erosion and the dispersion of contaminants into the surroundings. In this short-term pot study, application of industrial sugar residue (ISR), a waste product of the sugar industry, proved to ameliorate spoil conditions for plant performance by elevating pH and immobilising several metals. Although arsenate concentrations were positively correlated with spoil pH and spoil treatment with ISR mobilised As, the growth of both Phaseolus vulgaris and Holcus lanatus improved significantly after application of 3.75 g ISR kg(-1) dry spoil. Nutrient uptake from the substrate, with the exception of potassium, was elevated by ISR. As a remediation technique, ISR application could be effective, although in As-contaminated sites its application might be restricted to areas where leaching to (ground)water does not pose a risk.
Phytoremediation trials on metal- and arsenic-contaminated pyrite wastes (Torviscosa, Italy).
Vamerali, Teofilo; Bandiera, Marianna; Coletto, Lucia; Zanetti, Federica; Dickinson, Nicholas M; Mosca, Giuliano
2009-03-01
At a site in Udine, Italy, a 0.7 m layer of As-, Co-, Cu-, Pb- and Zn-contaminated wastes derived from mineral roasting for sulphur extraction had been covered with an unpolluted 0.15 m layer of gravelly soil. This study investigates whether woody biomass phytoremediation is a realistic management option. Comparing ploughing and subsoiling (0.35 m depth), the growth of Populus and Salix and trace element uptake were investigated in both pot and field trials. Species differences were marginal and species selection was not critical. Impaired above-ground productivity and low translocation of trace elements showed that bioavailable contaminant stripping was not feasible. The most significant finding was the proliferation of coarse and fine roots in surface layers, which provided a significant sink for trace elements. We conclude that phytostabilisation and effective immobilisation of metals and As could be achieved at the site by soil amelioration combined with woody species establishment. Confidence in achieving long-term and sustainable remediation requires a more complete quantification of root dynamics and a better understanding of rhizosphere processes.
Patra, Deepak Kumar; Pradhan, Chinmay; Patra, Hemanta Kumar
2018-02-01
Chromium (Cr) contamination in soil is a growing concern for sustainable agricultural production and food safety. Remediation of Cr from contaminated soils is a challenging task which may not only help in sustaining agriculture but also in minimising adverse environmental impacts. Pot culture experiments were performed with the application of varied concentrations of Cr(VI) to assess the chromium accumulation potential of lemongrass and to study the impact of toxic concentrations of Cr(VI) on morphological, physiological and biochemical parameters of the plant. The results showed an increasing trend in chromium accumulation with increasing chromium concentrations in both the root and shoot of 60-day-old lemongrass plants, while the protein and chlorophyll contents decreased. Similarly, Cr accumulation increased the levels of proline and antioxidant enzymes, indicating enhanced damage-control activity. The present investigation explored the potential of the plant to accumulate and stabilise Cr in Cr-contaminated soil through a phytoremediation process. Copyright © 2017 Elsevier Ltd. All rights reserved.
Wilson, Susan C; Leech, Calvin D; Butler, Leo; Lisle, Leanne; Ashley, Paul M; Lockwood, Peter V
2013-10-15
The effects of nutrient and lime additions on antimony (Sb) and arsenic (As) accumulation by native Australian and naturalised plants growing in two contaminated mine site soils (2,735 mg kg(-1) and 4,517 mg kg(-1) Sb; 826 mg kg(-1) and 1,606 mg kg(-1) As) were investigated using a glasshouse pot experiment. The results indicated an increase in soil solution concentrations with nutrient addition in both soils, and also with nutrient + lime addition for Sb in one soil. Metalloid concentrations in plant roots were significantly greater than concentrations in above-ground plant parts. The metalloid transfer to above-ground plant parts from the roots and from the soil was, however, low (ratio of leaf concentration/soil concentration ≪ 1) for all species studied. Eucalyptus michaeliana was the most successful at colonisation, with the lowest metalloid transfer to above-ground plant parts. The addition of nutrients and nutrients + lime to soils, in general, increased plant metalloid accumulation. Relative As accumulation was greater than that of Sb. All the plant species studied are suitable for consideration in mine soil phytostabilisation strategies, but lime additions should be limited and longer-term trials are also recommended. Copyright © 2013 Elsevier B.V. All rights reserved.
The Chemophytostabilisation Process of Heavy Metal Polluted Soil.
Grobelak, Anna; Napora, Anna
2015-01-01
Industrial areas are characterised by soil degradation processes that are related primarily to the deposition of heavy metals. Areas contaminated with metals are a serious source of risk due to secondary pollutant emissions and metal leaching and migration in the soil profile and into the groundwater. Consequently, the optimal solution for these areas is to apply remediation methods that create conditions for the restoration of plant cover and ensure the protection of groundwater against pollution. Remediation activities applied to large-scale areas contaminated with heavy metals should mainly focus on decreasing the degree of metal mobility in the soil profile and metal bioavailability to levels that are not phytotoxic. Chemophytostabilisation is a process in which soil amendments and plants are used to immobilise metals. The main objective of this research was to investigate the effects of different doses of organic amendments (sewage sludge after aerobic digestion in the food industry) and inorganic amendments (lime, superphosphate and potassium phosphate) on changes in the metal fractions in soils contaminated with Cd, Pb and Zn during phytostabilisation. In this study, the contaminated soil was amended with sewage sludge and inorganic amendments and seeded with grass (tall fescue) to increase the degree of immobilisation of the studied metals. The contaminated soil was collected from the area surrounding a zinc smelter in the Silesia region of Poland (pH 5.5, Cd 12 mg kg-1, Pb 1100 mg kg-1, Zn 700 mg kg-1). A plant growth experiment was conducted in a growth chamber for 5 months. Before and after plant growth, soil subsamples were subjected to chemical and physical analyses. To determine the fractions of the elements, a sequential extraction method was used according to Zeien and Brümmer. The research confirmed that the greatest impact on the Zn, Cd and Pb fractions came from the combined application of sewage sludge from the food industry with the addition of lime and potassium phosphate. Certain doses of inorganic additives decreased the easily exchangeable fraction from 50% to 1%. The addition of sewage sludge caused a decrease in fraction I for Cd and Pb; in combination with the inorganic additives, the mobile fraction was not detected and the easily mobilisable fraction was reduced by half. For certain combinations, the metals were detected in these fractions only at levels of up to a few percent. The application of sewage sludge resulted in a slight decrease in the mobile (water-soluble and easily exchangeable) fraction of Zn, but when inorganic additives were also applied this fraction was not detected. The highest degree of immobilisation of the tested heavy metals relative to the control was achieved when using both sewage sludge and inorganic additives at an experimentally determined dose, as confirmed by the sequential extraction results. In addition, the results support the use of the phytostabilisation process on contaminated soils.
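As an illustration of the arithmetic behind statements such as "the easily exchangeable fraction decreased from 50% to 1%", the sketch below converts the metal recovered in each sequential-extraction step into a percentage share of the summed total. The fraction labels and concentrations are generic placeholders, not values from the study; the Zeien and Brümmer scheme itself distinguishes seven operational fractions.

```python
# Generic conversion of sequential-extraction step results (mg kg-1) into
# percentage shares of the total extractable metal. Values are placeholders.
fractions_mg_kg = {
    "F1 mobile (water-soluble / easily exchangeable)": 6.0,
    "F2 easily mobilisable":                           2.5,
    "F3-F6 more strongly bound forms":                 3.0,
    "F7 residual":                                     0.5,
}

total = sum(fractions_mg_kg.values())
for name, conc in fractions_mg_kg.items():
    share = 100.0 * conc / total
    print(f"{name}: {share:.1f} % of total")
```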
Selamat, S Norleela; Abdullah, S Rozaimah Sheikh; Idris, M
2014-01-01
This study was conducted to investigate the uptake of lead (Pb) and arsenic (As) from contaminated soil by Melastoma malabathricum L. The cultivated plants were exposed to As and Pb in separate soils over an observation period of 70 days. The results of the analysis showed that M. malabathricum accumulated a relatively high range of As concentrations in its roots, up to a maximum of 2,800 mg/kg. The highest accumulation of As in stems and leaves was 570 mg/kg. For the Pb treatment, the highest concentration (13,800 mg/kg) was accumulated in the roots. The maximum accumulation in stems was 880 mg/kg, while the maximum accumulation in leaves was 2,200 mg/kg. Only small amounts of Pb were translocated from roots to above-ground plant parts (TF < 1). However, a wider range of TF values (0.01-23) for As-treated plants showed that the translocation of As from roots to above-ground parts was greater. The high capacity of the roots to take up Pb and As (BF > 1) indicates that this plant is a good bioaccumulator of these metals. Therefore, phytostabilisation is the mechanism at work in M. malabathricum's uptake of Pb, while phytoextraction is the dominant mechanism for As.
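The classification implied by the final sentences above (TF and BF compared against a threshold of 1) can be written out explicitly. The helper below is an illustrative sketch of that decision logic, not part of the original study, and the example values are hypothetical patterns in the spirit of the reported Pb and As results.

```python
# Sketch of the mechanism classification implied by bioaccumulation (BF) and
# translocation (TF) factors; the threshold of 1 follows the abstract's convention.
def dominant_mechanism(bf: float, tf: float) -> str:
    if bf > 1 and tf > 1:
        return "phytoextraction"       # accumulates and moves metal to shoots
    if bf > 1 and tf <= 1:
        return "phytostabilisation"    # accumulates but retains metal in roots
    return "low uptake (neither)"

# Hypothetical values echoing the reported patterns
print(dominant_mechanism(bf=3.5, tf=0.16))  # Pb-like pattern -> phytostabilisation
print(dominant_mechanism(bf=4.0, tf=5.0))   # As-like pattern -> phytoextraction
```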
Zribi, Kais; Nouairi, Issam; Slama, Ines; Talbi-Zribi, Ons; Mhadhbi, Haythem
2015-01-01
In this study we investigated the effects of Zn supply on the germination, growth, inorganic solute (Zn, Ca, Fe and Mg) partitioning and nodulation of Medicago sativa. This plant was cultivated with and without Zn (2 mM). Treatments were control plants, plants inoculated with a Zn-tolerant strain (S532) or a Zn-intolerant strain (S112), and plants given 2 mM urea nitrogen fertilisation. Results showed that M. sativa germinates at a rate of 50% at 2 mM Zn. For plants given nitrogen fertilisation, Zn increased plant biomass production. When grown with symbionts, Zn supply had no effect on nodulation. Moreover, plants with S112 showed a decrease in shoot and root biomass, whereas in symbiosis with S532 an increase in root biomass was observed. Plants in symbiosis with S. meliloti accumulated more Zn in their roots than nitrogen-fertilised plants. Zn supply resulted in an increase of the Ca concentration in the roots of nitrogen-fertilised plants. However, under Zn supply, the Fe concentration decreased in the roots and increased in the nodules of plants with S112. Zn supply showed contrasting effects on Mg concentrations for plants with nitrogen fertilisation (increase) and plants with S112 (decrease). The capacity of M. sativa to accumulate Zn in its nodulated roots encourages its use in phytostabilisation processes.
Andráš, Peter; Matos, João Xavier; Turisová, Ingrid; Batista, Maria João; Kanianska, Radoslava; Kharbish, Sherif
2018-05-11
São Domingos is among the most important historic Cu mines of the Iberian Pyrite Belt. Its anthrosoil is contaminated by very high contents of heavy metals and metalloids. The study focused on evaluating the interaction of selected chemical elements (Ca, Mg, Fe, Mn, Cu, Pb, Zn, Ag, Cd, Ni, Co, As, Sb) between the soil and five dominant autochthonous plant species: Pinus pinaster Aiton, Quercus rotundifolia Lam., Agrostis sp., Juncus conglomeratus L. and Juncus effusus L. The plants are heavily contaminated by Cu, Pb, As and Zn. The bioconcentration factor showed that they exhibit the features of metal-tolerant excluders. The trees are accumulators of Ag, whereas the graminoids are hyper-accumulators of Ag and Juncus effusus of Co. The translocation factor confirmed that the selected elements are immobilised in the roots, except for Mn and Zn in Pinus pinaster and Mn in Quercus rotundifolia and Juncus conglomeratus. The bioaccumulation of Mn, Zn and Cu increases at low pH. An increased content of Ca and Mg in the soil inhibits the uptake of some metals and metalloids by the plants. Although the studied plants are not suitable for phytoextraction (except for Co and Ag), given their fitness and vitality at the contaminated sites they can be used for phytostabilisation of the mining habitats.
Schwitzguébel, Jean-Paul; Comino, Elena; Plata, Nadia; Khalvati, Mohammadali
2011-07-01
Phytoremediation exploits natural plant physiological processes and can be used to decontaminate agricultural soils, industrial sites, brownfields, sediments and water containing inorganic and organic pollutants, or to improve food chain safety by the phytostabilisation of toxic elements. It is a low-cost and environmentally friendly technology targeting the removal, degradation or immobilisation of contaminants. The aim of the present review is to highlight some recent advances in phytoremediation in the Alpine context. Case studies are presented where phytoremediation has been or can be successfully applied in Alpine areas to: (1) clean up industrial wastewater containing sulphonated aromatic xenobiotics released by the dye and textile industries; (2) remediate agricultural soils polluted by petroleum hydrocarbons; (3) improve food chain safety in soils contaminated with toxic trace elements (As, Co, Cr and Pb); and (4) treat soils impacted by modern agricultural activities, with a special emphasis on phosphate fertilisation. Worldwide, including in Alpine areas, the controlled use of appropriate plants is destined to play a major role in the remediation and restoration of polluted and degraded ecosystems, monitoring and assessment of environmental quality, prevention of landscape degradation and immobilisation of trace elements. Phytotechnologies already offer promising approaches towards environmental remediation, human health, food safety and sustainable development for the 21st century, in Alpine areas and elsewhere around the world.
Bioextracts of Cistus ladanifer L. growing in São Domingos mine as source of valuable compounds
Santos, Erika; Balseiro-Romero, Maria; Abreu, Maria Manuela; Macías, Felipe
2016-04-01
The rehabilitation of abandoned mines is essential and a priority because these areas are sources of contamination and of environmental and health risk. The rehabilitation of mining areas by phytostabilisation brings several ecological improvements, but nowadays economic considerations are also essential. Some autochthonous plant species with reported aromatic and medicinal properties are able to naturally colonise contaminated soils in mining areas, contributing to their natural rehabilitation. A study was carried out in order to characterise and valorise autochthonous species, which are used in rehabilitation processes of mining areas, as new sources of bioactive substances. The main aims of this study were to: i) characterise the phytochemical profile of bioextracts from shoots of Cistus ladanifer growing in soils from the São Domingos mining area and a control area; and ii) evaluate the influence of potentially hazardous elements (PHEs) accumulated in the shoots on the quality of the bioextracts. Composite samples of soils, developed on mine wastes and/or host rocks, as well as C. ladanifer shoots were collected in the São Domingos mine (Iberian Pyrite Belt, SE Portugal) and in a reference area with non-contaminated soils and the same climatic conditions (Corte do Pinto). Classical soil characterisation and total concentrations of PHEs (Al, As, Cr, Cu, Mn, Sb and Zn) in soils and plant shoots were determined. The bioextracts from C. ladanifer shoots were obtained with an accelerated solvent extractor, using hexane, and the compounds were analysed by GC-MS; the major components were quantified. The total concentrations of As, Cu, Pb, Sb and Zn were higher in soils from São Domingos than in those from Corte do Pinto, while the opposite was found for Al, Cr and Mn. However, the soils from São Domingos can only be considered contaminated with As, Cu, Pb and Sb. The concentrations of PHEs in plant shoots from São Domingos showed intrapopulation variability, these concentrations being, in general, higher than in plant shoots from the non-contaminated area (except for Cr). Cistus ladanifer bioextracts from the two populations exhibited similar profiles with some variation in their qualitative composition. The major component of the bioextracts obtained from C. ladanifer shoots of the two populations was viridiflorol (8.9-12.8%). Other compounds were identified in all bioextracts (<2.5%, independently of the plant sample) with great odoriferous interest (amber-like scent: ambrox and caryophyllene oxide; fresh-camphoraceous: bornyl acetate, borneol and myrtenol) and antimicrobial effect (α-pinene, β-pinene, fenchone and camphor). Slight variability was observed in the concentrations of the major components (mg/kg - α-pinene: 107.8-163.7; camphene: 20.8-53.3; camphor: 14.5-70.2; fenchone: 4.2-20.7; verbenone: 131.8-232.0, depending on the sample), but no relationship was found between these components and the concentrations of PHEs in the shoots. Therefore, the contamination of the São Domingos soils and the PHEs accumulated in C. ladanifer shoots did not affect the quality of the bioextracts. The bioextracts obtained from C. ladanifer growing in the São Domingos mining area contained valuable compounds. Phytostabilisation of mining areas of the IPB with this species can therefore provide an economic return through the exploitation of this plant-based product for the fragrance and pharmaceutical industries.
Klink, Agnieszka
2017-02-01
The aims of the present investigation were to reveal the trace metal accumulation abilities of two common helophytes, Typha latifolia and Phragmites australis, and to investigate their potential use in the phytoremediation of environmental metal pollution. The concentrations of Fe, Mn, Zn, Cu, Cd, Pb and Ni were determined in the roots, rhizomes, stems and leaves of both species, as well as in the corresponding water and bottom sediments from 19 sites selected within seven lakes in western Poland (Leszczyńskie Lakeland). Principal component and classification analysis showed that P. australis leaves were correlated with the highest Mn, Fe and Cd concentrations, and T. latifolia leaves with the highest Pb, Zn and Cu concentrations. Roots of P. australis were correlated with the highest Mn, Fe and Cu concentrations, while T. latifolia roots had the highest Pb, Zn and Cd concentrations. Despite the differences in trace metal accumulation ability between the two species, Fe, Cu, Zn, Pb and Ni concentrations in P. australis and T. latifolia followed the accumulation scheme roots > rhizomes > leaves > stems, while Mn decreased in the order root > leaf > rhizome > stem. The high bioaccumulation factors and low translocation factors for Zn, Mn, Pb and Cu indicate the potential application of T. latifolia and P. australis in the phytostabilisation of contaminated aquatic ecosystems. Due to the high biomass of the above-ground organs of both species, the amount of trace metals stored in these organs during the vegetation period was considerable, despite the low rate of trace metal transport.
Santos, Erika S.; Balseiro-Romero, Maria; Abreu, Maria Manuela; Macías, Felipe
2017-04-01
The rehabilitation of mining areas with sulfide materials, in both abandoned and active mines, is a priority because these areas are sources of acid mine drainage and multielemental contamination and, consequently, of environmental and health risk. The combined use of Technosols and phytostabilisation accelerates the recovery of such areas and ensures the long-term sustainability of the physical, chemical and biological processes involved in the rehabilitation, owing to the functional complementarity of the components. Nowadays, the rehabilitation strategy for contaminated areas must be based on the circular economy, environmental improvements and economic considerations. Cistus ladanifer L. is an autochthonous and spontaneous species that contributes to the natural rehabilitation of contaminated soils in mining areas. Moreover, bioextracts obtained from C. ladanifer growing in the São Domingos mining area (Iberian Pyrite Belt) contain several valuable compounds, which can provide an economic return through their use in fragrance and pharmaceutical applications. This study aimed to evaluate, under controlled conditions, the efficiency of an integrated system for the rehabilitation of sulfide-rich and gossan tailings, combining the application of Technosols and phytostabilisation with the exploitation of added-value compounds from C. ladanifer bioextracts. The rehabilitation system comprised a surface layer of Technosol and a barrier of alkaline residues (biomass ashes and limestone wastes) that covered sulfide-rich wastes. Two Technosols composed of gossan wastes and different mixtures of agro-industrial wastes (from distilleries and greenhouse agriculture, currently without any valorisation), applied at 150 Mg/ha, were tested. C. ladanifer was sown in the Technosols. After three years of plant growth, shoot biomass was quantified and used to obtain bioextracts (extraction with n-hexane). The organic composition of the bioextracts was determined and some compounds with added value (α-pinene, camphene, camphor, fenchone and verbenone) were quantified. During the assay, the Technosols presented better structure, pH (5.7-6.0) and concentrations of organic C (9.0-26.2 g/kg) and NPK (204-490 mg Ntotal/kg, 163-329 mg Pextractable/kg, 80-308 mg Kextractable/kg) compared with the control (gossan wastes only; pH: 3.7-4.0; [Corg]: 2.2-5.2 g/kg; [Ntotal]: 126-341 mg/kg; [Pext]: 0.2-0.9 mg/kg; [Kext]: ≈20 mg/kg). In the Technosols, concentrations of nutrients (Ca, Fe, K, Mg, Mn and Zn) in the available fraction (Rhizo extraction) were also higher (>50-fold compared with the control), while Cu and Pb concentrations decreased. The improvement of these characteristics in the Technosols stimulated germination (control: 1%; Technosols: 5-11%) and plant growth. After 40 days, seedlings from the control died, whereas the Technosols supported vegetative growth in the long term. The shoot biomass obtained was between 67.9 and 76.4 g of fresh weight, corresponding to 5.8 and 6.7 t of dry weight/ha. The increase in evapotranspiration due to C. ladanifer growth, together with the alkalinising barrier, decreased the oxidation of the sulfide-rich wastes and, consequently, the generation of acid drainage and the dispersion of potentially hazardous elements in leachates. Several compounds with economic interest were quantified, with benzenepropanoic acid being the major compound (15-42%). Verbenone also showed significant concentrations in the bioextracts (≈7 mg/kg).
The integrated rehabilitation system was adequate and sustainable, contributing to the recovery of unproductive and contaminated areas, which can thereby be economically exploited.
Vamerali, Teofilo; Bandiera, Marianna; Mosca, Giuliano
2011-05-01
Sunflower, alfalfa, fodder radish and Italian ryegrass were cultivated in severely As-Cd-Co-Cu-Pb-Zn-contaminated pyrite waste, discharged in the past and capped with 0.15 m of unpolluted soil, at Torviscosa (Italy). Plant growth and trace element uptake were compared under ploughing and subsoiling tillage (0.3 m depth), the former yielding higher contamination (∼30%) in the topsoil. Tillage choice was not critical for phytoextraction, but subsoiling enhanced above-ground productivity, whereas ploughing increased trace element concentrations in plants. Fodder radish and sunflower had the greatest aerial biomass, and fodder radish the best trace element uptake, perhaps due to the lower sensitivity of its roots to pollution. Above-ground removals were generally poor (maximum of 33 mg m(-2) of various trace elements), with Zn (62%) and Cu (18%) as the main harvested contaminants. The most significant finding was the proliferation of fine roots in shallow layers, which represented a huge sink for trace element phytostabilisation. It is concluded that phytoextraction is generally far from being an efficient management option in pyrite waste. Sustainable remediation requires significant improvements of the vegetation cover to stabilise the site mechanically and chemically, and a precise quantification of root turnover. Copyright © 2011 Elsevier Ltd. All rights reserved.
Touceda-González, M; Prieto-Fernández, Á; Renella, G; Giagnoni, L; Sessitsch, A; Brader, G; Kumpiene, J; Dimitriou, I; Eriksson, J; Friesl-Hanl, W; Galazka, R; Janssen, J; Mench, M; Müller, I; Neu, S; Puschenreiter, M; Siebielec, G; Vangronsveld, J; Kidd, P S
2017-12-01
Gentle remediation options (GRO) are based on the combined use of plants, associated microorganisms and soil amendments, which can potentially restore soil functions and quality. We studied the effects of three GRO (aided phytostabilisation; in situ stabilisation and phytoexclusion; and aided phytoextraction) on soil microbial biomass and respiration, the activities of hydrolase enzymes involved in the biogeochemical cycles of C, N, P and S, and the bacterial community structure of trace element contaminated soils (TECS) from six field trials across Europe. Community structure was studied using denaturing gradient gel electrophoresis (DGGE) fingerprinting of Bacteria, α- and β-Proteobacteria, Actinobacteria and Streptomycetaceae, and sequencing of DGGE bands characteristic of specific treatments. The number of copies of genes involved in ammonia oxidation and denitrification was determined by qPCR. Phytomanagement increased soil microbial biomass at three sites and respiration at the Biogeco site (France). Enzyme activities were consistently higher in treated soils compared to untreated soils at the Biogeco site. At this site, microbial biomass increased from 696 to 2352 mg ATP kg(-1) soil, respiration increased from 7.4 to 40.1 mg C-CO2 kg(-1) soil d(-1), and enzyme activities were 2-11-fold higher in treated soils compared to untreated soil. Phytomanagement induced shifts in the bacterial community structure at both the total community and functional group levels, and generally increased the number of copies of genes involved in the N cycle (nirK, nirS, nosZ, and amoA). The influence of the main soil physico-chemical properties and trace element availability was assessed, and possible site-specific effects were elucidated. Overall, our results demonstrate that phytomanagement of TECS influences soil biological activity in the long term. Copyright © 2017 Elsevier Ltd. All rights reserved.
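Gene copy numbers from qPCR are usually derived from a standard curve relating the threshold cycle (Ct) to the log10 of the copy number. The snippet below shows only that generic back-calculation; the slope and intercept are assumed values, not the calibration used in this study.

```python
# Generic qPCR absolute quantification from a standard curve (assumed calibration values).
# Ct = slope * log10(copies) + intercept  =>  copies = 10 ** ((Ct - intercept) / slope)

def copies_from_ct(ct: float, slope: float = -3.32, intercept: float = 38.0) -> float:
    """Convert a threshold cycle to a gene copy number using an assumed standard curve."""
    return 10 ** ((ct - intercept) / slope)

for ct in (30.1, 26.5, 22.8):  # hypothetical Ct values for, e.g., a nirK assay
    print(f"Ct={ct}: ~{copies_from_ct(ct):.0f} copies per reaction")
```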
Lum, A Fontem; Ngwa, E S A; Chikoye, D; Suh, C E
2014-01-01
Phytoremediation is a promising option for reclaiming soils contaminated with toxic metals, using plants with high potentials for extraction, stabilization and hyperaccumulation. This study was conducted in Cameroon, at the Bassa Industrial Zone of Douala in 2011, to assess the total content of 19 heavy metals and 5 other elements in soils and the phytoremediation potential of 12 weeds. Partial extraction was carried out on soil, plant root and shoot samples. Phytoremediation potential was evaluated in terms of the Biological Concentration Factor, Translocation Factor and Biological Accumulation Coefficient. The detectable content of the heavy metals in soils was Cu: 70-179, Pb: 8-130, Zn: 200-971, Ni: 74-296, Co: 31-90, Mn: 1983-4139, V: 165-383, Cr: 42-1054, Ba: 26-239, Sc: 21-56, Al: 6.11-9.84, Th: 7-22, Sr: 30-190, La: 52-115, Zr: 111-341, Y: 10-49 and Nb: 90-172 in mg kg(-1), and Ti: 2.73-4.09 and Fe: 12-16.24 in wt%. The contamination index revealed that the soils were slightly to heavily contaminated, while the geoaccumulation index showed that the soils ranged from unpolluted to highly polluted. The concentration of heavy metals was ranked as Zn > Ni > Cu > V > Mn > Sc > Co > Pb and Cr in the roots and Mn > Zn > Ni > Cu > Sc > Co > V > Pb > Cr > Fe in the shoots. Dissotis rotundifolia and Kyllinga erecta had phytoextraction potentials for Pb, and Paspalum orbiculare for Fe. Eleusine indica and K. erecta had phytostabilisation potential for soils contaminated with Cu and Pb, respectively.
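The three indices used here are simple concentration ratios. The sketch below uses the definitions most commonly reported in the literature (root/soil for the Biological Concentration Factor, shoot/root for the Translocation Factor, shoot/soil for the Biological Accumulation Coefficient); since the abstract does not spell out the formulas, treat these definitions as an assumption and the concentrations as hypothetical.

```python
# Common definitions of phytoremediation indices (assumed; verify against the study's methods).

def bcf(c_root: float, c_soil: float) -> float:
    """Biological Concentration Factor: metal concentration in root relative to soil."""
    return c_root / c_soil

def tf(c_shoot: float, c_root: float) -> float:
    """Translocation Factor: metal concentration in shoot relative to root."""
    return c_shoot / c_root

def bac(c_shoot: float, c_soil: float) -> float:
    """Biological Accumulation Coefficient: metal concentration in shoot relative to soil."""
    return c_shoot / c_soil

# Hypothetical Pb concentrations (mg kg-1) for one weed species.
c_soil, c_root, c_shoot = 120.0, 150.0, 40.0
print(bcf(c_root, c_soil), tf(c_shoot, c_root), bac(c_shoot, c_soil))
```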
Pandey, Janhvi; Chand, Sukhmal; Pandey, Shipra; Rajkumari; Patra, D D
2015-12-01
A field experiment using tannery sludge as a soil amendment material and palmarosa (Cymbopogon martinii) as a potential phytostabiliser was conducted to investigate their synergistic effect on the improvement of soil quality and properties. Three consecutive harvests of two cultivars of palmarosa, PRC-1 and Trishna, were examined to determine the influence of different tannery sludge doses on their herb, dry matter and essential oil yields and heavy metal accumulation. Soil fertility parameters (N, P, K, organic carbon) were markedly affected by the different doses of sludge. Enhanced soil nitrogen was positively correlated with herb yield (0.719*) and plant height (0.797*). The highest dose of tannery sludge (100 t ha(-1)) exhibited better performance than the other treatments with respect to herb, dry matter and oil yield in all three harvests. Trishna was found to be superior to PRC-1 for the same traits. The quality of the oil varied, but not statistically significantly. Uptake of heavy metals followed the same order (Cr>Ni>Pb>Cd) in roots and shoots. A translocation factor <1 and a bioconcentration factor >1 were observed for all heavy metals. Overall, tannery sludge enhanced crop productivity, and metal accumulation occurred in roots with meagre translocation to shoots; hence palmarosa can be used as a phytostabiliser. The major advantage of growing palmarosa in metal-polluted soil is that, unlike food and agricultural crops, the product (essential oil) is extracted by hydro-distillation and there is no chance of oil contamination, so it is commercially acceptable. Copyright © 2015 Elsevier Inc. All rights reserved.
Pardo, Tania; Clemente, Rafael; Bernal, M Pilar
2011-07-01
The use of organic wastes as amendments in heavy metal-polluted soils is an ecologically integrated option for their recycling. The potential use of alperujo (solid olive-mill waste) compost and pig slurry in phytoremediation strategies has been studied, evaluating their short-term effects on soil health. An aerobic incubation experiment was carried out using an acid mine spoil-based soil and a low-OM soil from the mining area of La Unión (Murcia, Spain). Arsenic and heavy metal solubility in amended and non-amended soils, and microbial parameters, were evaluated and related to a phytotoxicity test. The organic amendments provoked an enlargement of the microbial community (compost increased biomass-C from non-detected values to 35 μg g(-1) in the mine spoil soil, and doubled control values in the low-OM soil) and an intensification of its activity (including a twofold increase in nitrification), and significantly enhanced seed germination (increasing cress germination by 25% in the mine spoil soil). Organic amendments increased Zn and Pb EDTA-extractable concentrations, and raised As solubility due to the influence of factors such as pH changes, phosphate concentration, and the nature of the organic matter of the amendments. Compost, thanks to the greater persistence of its organic matter in soil, could be recommended for use in (phyto)stabilisation strategies. However, pig slurry boosted inorganic N content and did not significantly enhance As extractability in soil, so its use could be specifically recommended in As-polluted soils. Copyright © 2011 Elsevier Ltd. All rights reserved.
Toxic metal tolerance in native plant species grown in a vanadium mining area.
Aihemaiti, Aikelaimu; Jiang, Jianguo; Li, De'an; Li, Tianran; Zhang, Wenjie; Ding, Xutong
2017-12-01
Vanadium (V) has been extensively mined in China, causing soil pollution in mining areas. It has toxic effects on plants, animals and humans, posing potential health risks to communities that farm and graze cattle adjacent to the mining area. To evaluate the in situ phytoremediation potential of native plants, V, chromium, copper and zinc concentrations in roots and shoots were measured and the bioaccumulation (BAF) and translocation (TF) factors were calculated. The results showed that Setaria viridis accumulated greater than 1000 mg kg(-1) V in its shoots and exhibited TF > 1 for V, Cr and Zn and BAF > 1 for Cu. The V accumulation in the roots of Kochia scoparia also surpassed 1000 mg kg(-1), and this species showed TF > 1 for Zn. Chenopodium album had BAF > 1 for V and Zn, and Daucus carota showed TF > 1 for Cu. Eleusine indica presented strong tolerance and high metal accumulation. S. viridis is practical for in situ phytoextraction of V, Cr and Zn and phytostabilisation of Cu in the V mining area. The other species have low potential for use as phytoremediation plants at multi-metal polluted sites, but showed relatively strong resistance to V, Cr, Cu and Zn toxicity and can be used to vegetate the contaminated soils and stabilise toxic metals in the V mining area.
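The screening logic applied here (TF > 1 and BAF > 1 as indicators of phytoextraction potential, root accumulation with TF < 1 as an indicator of phytostabilisation potential) can be written as a small decision rule. The thresholds follow the threshold-of-1 convention used in the abstract; the function and its labels are only an illustrative sketch, not the authors' procedure.

```python
# Rough screening of candidate species by bioaccumulation (BAF) and translocation (TF) factors.
# Threshold-of-1 convention as used in the abstract; classification labels are illustrative.

def classify_species(baf: float, tf: float) -> str:
    if baf > 1 and tf > 1:
        return "phytoextraction candidate (accumulates and translocates metal to shoots)"
    if baf > 1 and tf <= 1:
        return "phytostabilisation candidate (retains metal mainly in roots)"
    return "low remediation potential (may still tolerate and revegetate the site)"

# Hypothetical values for two species and one metal.
print(classify_species(baf=1.8, tf=1.3))
print(classify_species(baf=1.2, tf=0.4))
```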
El Aafi, N; Brhada, F; Dary, M; Maltouf, A Filali; Pajuelo, E
2012-03-01
The aim of this work was to test Lupinus luteus plants, inoculated with metal-resistant rhizobacteria, in order to phytostabilise metals in contaminated soils. The resistance to heavy metals of strains isolated from nodules of Lupinus plants was evaluated. The strain MSMC541 showed multi-resistance to several metals (up to 13.3 mM As, 2.2 mM Cd, 2.3 mM Cu, 9 mM Pb and 30 mM Zn) and was selected for further characterisation. Furthermore, this strain was able to biosorb large amounts of metals in its cell biomass. 16S rDNA sequencing positioned this strain within the genus Serratia. The presence of arsenic resistance genes was confirmed by Southern blot and PCR amplification. A rhizoremediation pot experiment was conducted using Lupinus luteus grown on sand supplemented with heavy metals and inoculated with MSMC541. Plant growth parameters and metal accumulation were determined in inoculated vs. non-inoculated Lupinus luteus plants. The results showed that inoculation with MSMC541 improved plant tolerance to metals. At the same time, metal translocation to the shoot was significantly reduced upon inoculation. These results suggest that Lupinus luteus plants, inoculated with the metal-resistant strain Serratia sp. MSMC541, have great potential for phytostabilisation of metal-contaminated soils.
Clemente, Rafael; Walker, David J; Pardo, Tania; Martínez-Fernández, Domingo; Bernal, M Pilar
2012-07-15
The halophytic shrub Atriplex halimus L. was used in a field phytoremediation experiment in a semi-arid area highly contaminated by trace elements (As, Cd, Cu, Mn, Pb and Zn) within the Sierra Minera of La Unión-Cartagena (SE Spain). The effects of compost and pig slurry on soil conditions and plant growth were determined. The amendments (particularly compost) only slightly affected trace element concentrations in soil pore water or their availability to the plants, increased soil nutrient and organic matter levels and favoured the development of a sustainable soil microbial biomass (effects that were enhanced by the presence of A. halimus), as well as, especially for slurry, increasing A. halimus biomass and ground cover. With regard to the minimisation of trace element concentrations in the above-ground plant parts, the effectiveness of both amendments was greatest 12-16 months after their incorporation. The findings demonstrate the potential of A. halimus, particularly in combination with an organic amendment, for the challenging task of phytostabilisation of contaminated soils in (semi-)arid areas and suggest the need for an ecotoxicological evaluation of the remediated soils. However, the ability of A. halimus to accumulate Zn and Cd in the shoot may limit its use to moderately contaminated sites. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Chami, Ziad Al; Amer, Nasser; Bitar, Lina Al; Mondelli, Donato; Dumontet, Stefano
2013-04-01
The success of phytoremediation depends upon the identification of suitable plant species that hyperaccumulate/tolerate heavy metals and produce large amounts of biomass. In this study, three endemic Mediterranean plant species, Atriplex halimus, Medicago lupulina and Portulaca oleracea, were grown hydroponically to assess their potential use in phytoremediation of Ni, Pb and Zn and in biomass production. The objective of this research was to improve phytoremediation procedures by searching for new endemic Mediterranean plant species that can be used for phytoremediation of low/moderate contamination under Mediterranean arid and semiarid conditions and for bioenergy production. The hydroponics experiment was carried out in a growth chamber using half-strength Hoagland's solution as control (CTR), 5 concentrations for Pb and Zn (5, 10, 25, 50 and 100 mg L-1) and 3 concentrations for Ni (1, 2 and 5 mg L-1). A completely randomized design with five replications was adopted. The main growth parameters (shoot and root dry weight, shoot and root length and chlorophyll content) were determined. Shoots and roots were analyzed for their metal contents. Some interesting contributions of this research are: (i) plant metal uptake efficiency ranked as follows: A. halimus > M. lupulina > P. oleracea, whereas heavy metal toxicity ranked as follows: Ni > Zn > Pb; (ii) none of the plant species was identified as a hyperaccumulator; (iii) Atriplex halimus and Medicago lupulina can accumulate Ni, Pb and Zn in their roots; (iv) they translocate only a small fraction to their above-ground biomass; and (v) they can indicate moderate pollution levels in the environment. In addition, as they are good biomass producers, they can be used in phytostabilisation of marginal lands, and their above-ground biomass can be used for livestock feeding as well as for bioenergy production.
Vítková, Martina; Puschenreiter, Markus; Komárek, Michael
2018-06-01
Characterisation of geochemical transformations and processes in soils, with special focus on the rhizosphere, is crucial for assessing metal(loid) bioavailability to plants during in situ immobilisation and phytostabilisation. In this study, the effects of nano zero-valent iron (nZVI) were investigated in terms of the immobilisation of As, Zn, Pb and Cd in two soil types and their potential uptake by plants, using rhizobox experiments. This system allowed the behaviour of trace elements to be monitored in rooted and bulk soil compartments separately. Sunflower (Helianthus annuus L.) and ryegrass (Lolium perenne L.) were tested for As-rich (15.9 g As kg(-1)) and Zn-rich (4.1 g Zn kg(-1)) soil samples, respectively. The application of nZVI effectively lowered the uptake of all target risk elements into plant tissues. Efficient immobilisation of As was determined in the As-soil, without a significant difference between plant and bulk soil compartments. Similarly, a significant decrease was determined for CaCl2-available fractions of Zn, Pb and Cd in the nZVI-treated Zn-soil. The behaviour of As corresponded to changes in Eh, while Zn and Cd were shown to be mainly pH-dependent. However, despite the observed stabilisation effect of nZVI, high amounts of As and Zn still remained available to plants. Furthermore, the accumulation of the target risk elements in roots and the overall effect of nZVI transformations in the rhizosphere were verified and visualised by SEM/EDS. The following immobilising mechanisms were suggested: (i) sorption onto both existing and newly formed Fe (hydr)oxides, (ii) formation of secondary Fe-As phases, and (iii) sorption onto Mn (hydr)oxides. Copyright © 2018 Elsevier Ltd. All rights reserved.
Murphy, A P; Coudert, M; Barker, J
2000-12-01
There have been a number of studies investigating metal uptake in plants on contaminated landfill sites, but little on their role as biomarkers to identify metal mobility for continuous monitoring purposes. Vegetation can be used as a biomonitor of site pollution, by identifying the mobilisation of heavy metals and by providing an understanding of their bioavailability. The plants selected were the common nettle (Urtica dioica), bramble (Rubus fruticosus) and sycamore (Acer pseudoplatanus). A study of the soil fractionation was made to investigate the soil properties that are likely to influence metal mobility, and a correlation exercise was undertaken to investigate whether variations in the concentration of metals in vegetation reflect variations in the concentration of the metals in soil. The soil was digested using aqua regia in a closed microwave vessel. The vegetation was digested using both a microwave and a hydrogen peroxide-nitric acid mixture refluxed on a heating block, and a comparison was made. The certified reference materials (CRMs) used were Standard Reference Material (SRM) 1547, peach leaves, for vegetation (NIST) and CRM 143R, sewage sludge-amended soil, for soil (BCR). The relative standard deviations (RSDs) were 2-6% for the analyses. Our findings show evidence of phytoextraction by some plants (especially bramble and nettle), with certain plants (sycamore) exhibiting signs of phytostabilisation. The evidence suggests that there is a degree of selectivity in metal uptake and partitioning within the plant compartments. It was also possible to correlate mobility phases of certain metals (Pb, Cu and Zn) using the soil and plant record. Zn and Cu exhibited the greatest potential to migrate from the roots to the leaves, with Pb found principally in the roots of ground vegetation. Our results suggest that analysis of bramble leaves, nettle leaves and roots can be used to monitor the mobility of Pb in the soil, with nettle, bramble and sycamore leaves used to monitor Cu and Zn.
Siebielec, Sylwia; Siebielec, Grzegorz; Stuczyński, Tomasz; Sugier, Piotr; Grzęda, Emilia; Grządziel, Jarosław
2018-09-15
Smelter wastelands containing high amounts of zinc, lead, cadmium and arsenic constitute a major problem worldwide. Serious hazards for human health and ecosystem functioning are related to the lack of vegetative cover, causing fugitive dust fluxes, runoff and leaching of metals, and affecting post-industrial ecosystems, often in heavily populated areas. Previous studies demonstrated the short-term effectiveness of assisted phytostabilisation of zinc and lead smelter slags using biosolids and liming. However, the long-term persistence of plant communities introduced for remediation and risk reduction has not been adequately evaluated. This work aimed at characterising trace element solubility and the plant and microbial communities of the top layer of the reclaimed zinc and lead smelter waste heaps in Piekary Slaskie, Poland, 20 years after treatment and revegetation. The surface layer of the waste heaps treated with various rates of biosolids and by-product lime was sampled for measuring chemical and biochemical parameters indicative of metal bioavailability and microbial activity. Microbial processes were characterised by enzyme activities, the abundance of specific groups of microorganisms and the identification of N-fixing bacteria. Plant communities of the area were characterised by the percent coverage of the surface and by the composition of plant species and plant diversity. The study provides strong evidence that the implemented remediation approach enables sustainable functioning of the ecosystem established on the toxic waste heaps. Enzyme activities and the counts of various groups of microorganisms were highest in areas treated with both biosolids and lime, regardless of their rates. High plant species diversity and microbial activities have been sustained for almost two decades after the treatment, which is indicative of a strong resistance of the established ecosystem to metal stress and to the poor physical quality of the anthropogenic soil formed by the treatment. Copyright © 2018 Elsevier B.V. All rights reserved.
Fresno, Teresa; Peñalosa, Jesús M; Santner, Jakob; Puschenreiter, Markus; Prohaska, Thomas; Moreno-Jiménez, Eduardo
2016-09-01
Arsenic is a non-threshold carcinogenic metalloid. Thus, human exposure should be minimised, e.g. by chemically stabilising As in soil. Since iron is a potential As immobiliser, it was investigated whether root iron plaque, formed under aerobic conditions, affects As uptake, metabolism and distribution in Lupinus albus plants. White lupin plants were cultivated in a continuously aerated hydroponic culture containing Fe/EDDHA or FeSO4 and exposed to arsenate (5 or 20 μM). Only FeSO4 induced surficial iron plaque on roots. LA-ICP-MS analysis performed on root sections corroborated the association of As with this surficial Fe. Additionally, As(V) was the predominant species in FeSO4-treated roots, suggesting less efficient As uptake in the presence of iron plaque. Fe/EDDHA-exposed roots showed neither such surficial Fe-As co-localisation nor As(V) accumulation; in contrast, As(III) was the predominant species in root tissue. Furthermore, FeSO4-treated plants showed reduced shoot-to-root As ratios, which were >10-fold lower compared to the Fe/EDDHA treatment. Our results highlight the role of an iron plaque formed on roots of white lupin under aerobic conditions in As immobilisation. These findings, to our knowledge, have not been addressed before for this plant and have potential implications for soil remediation (phytostabilisation) and food security (minimising As in crops). Copyright © 2016 Elsevier Ltd. All rights reserved.
Pardo, Tania; Martínez-Fernández, Domingo; Clemente, Rafael; Walker, David J; Bernal, M Pilar
2014-01-01
The applicability of a mature compost as a soil amendment to promote the growth of native species for the phytorestoration of a mine-affected soil from a semi-arid area (SE Spain), contaminated with trace elements (As, Cd, Cu, Mn, Pb and Zn), was evaluated in a 2-year field experiment. The effects of an inorganic fertiliser were also determined for comparison. Bituminaria bituminosa was the selected native plant, since it is a leguminous species adapted to the particular local pedoclimatic conditions. Compost addition increased total organic-C concentrations in the soil with respect to the control and fertiliser treatments, maintained elevated available P concentrations throughout the duration of the experiment and stimulated soil microbial biomass, while trace element extractability in the soil was rather low, due to the calcareous nature of the soil, and almost unaltered by the different treatments. Tissue concentrations of P and K in B. bituminosa increased after the addition of compost, associated with growth stimulation. Leaf Cu concentration was also increased by the amendments, although overall the trace element concentrations can be considered non-toxic. In addition, the spontaneous colonisation of the plots by a total of 29 species from 15 different families by the end of the experiment produced a greater vegetation cover, especially in plots amended with compost. Therefore, the use of compost as a soil amendment appears to be useful for the promotion of a vegetation cover and the phytostabilisation of moderately contaminated soils under semi-arid conditions.
Antioxidant response of Phragmites australis to Cu and Cd contamination.
Rocha, A Cristina S; Almeida, C Marisa R; Basto, M Clara P; Vasconcelos, M Teresa S D
2014-11-01
Metals are known to induce oxidative stress in plant cells. Antioxidant thiolic compounds are known to play an important role in plants' defence mechanisms against metal toxicity but, regarding salt marsh plants, their role is still very poorly understood. In this work, the involvement of non-protein thiols (NPT), such as cysteine (Cys), reduced glutathione (GSH), oxidised glutathione (GSSG) and total acid-soluble SH compounds (total thiols), in the tolerance mechanisms of the marsh plant Phragmites australis against Cu and Cd toxicity was assessed. Specimens of this plant, freshly harvested in an estuarine salt marsh, were exposed for 7 days to rhizosediment soaked with the respective elutriate contaminated with Cu (0, 10 and 100 mg/L) or Cd (0, 1 and 10 mg/L). In terms of NPT production, Cu and Cd contamination induced different responses in P. australis. The content of Cys increased in plant tissue after exposure to Cu, whereas Cd contamination led to a decrease in GSSG levels. In general, metal contamination did not cause a significant variation in GSH levels. Both metals influenced, to some extent, the production of other thiolic compounds. Despite the accumulation of considerable amounts of Cu and Cd in belowground tissues, no visible toxicity signs were observed. So, antioxidant thiolic compounds were probably involved in the mechanisms used by P. australis to alleviate metal toxicity. As P. australis is considered suitable for phytostabilising metal-contaminated sediments, understanding its tolerance mechanisms to toxic metals is important in order to optimise the conditions for applying this plant in phytoremediation procedures. Copyright © 2014 Elsevier Inc. All rights reserved.
Metal transfer to plants grown on a dredged sediment: use of radioactive isotope 203Hg and titanium.
Caille, Nathalie; Vauleon, Clotilde; Leyval, Corinne; Morel, Jean-Louis
2005-04-01
Improperly disposed dredged sediments contaminated with metals may induce long-term leaching and an increase of metal concentrations in groundwater and in the plants of the vegetation cover. The objective of the study was to quantify the sediment-to-plant transfer of Cu, Pb, Hg and Zn, with a particular focus on the pathway of Hg, and to determine whether the establishment of a vegetation cover modifies metal availability. A pot experiment with rape (Brassica napus), cabbage (Brassica oleracea) and red fescue (Festuca rubra) was set up using a sediment first spiked with the radioisotope 203Hg. Zinc concentrations (197-543 mg kg(-1) DM) in leaves were higher than Cu concentrations (197-543 mg kg(-1) DM), Pb concentrations (2.3-2.6 mg kg(-1) DM) and Hg concentrations (0.9-1.7 mg kg(-1) DM). Leaf-to-sediment ratios decreased as follows: Zn > Cu > Hg > Pb. According to Ti measurements, metal contamination by dry deposition was less than 1%. Mercury concentration in plant leaves was higher than European and French thresholds. Foliar absorption of volatile Hg was a major pathway for Hg contamination, with root absorption of Hg higher in rape than in cabbage and red fescue. Growth of each species increased Cu solubility. Zinc solubility was increased only in the presence of rape. The highest increase of Cu solubility was observed for red fescue, whereas this species largely decreased Zn solubility. Dissolved organic carbon (DOC) measurements suggested that Cu solubilisation could result from organic matter or the release of natural plant exudates. Dissolved inorganic carbon (DIC) measurements suggested that the high Zn solubility in the presence of rape could originate from a generation of acidity in the rape rhizosphere and a subsequent dissolution of calcium carbonates. Consequently, the emission of volatile Hg from contaminated dredged sediments, and also the potential increase of metal solubility by a vegetation cover of grass used in phytostabilisation, must be taken into account by decision makers.
Párraga-Aguado, I; González-Alcaraz, M N; López-Orenes, A; Ferrer-Ayala, M A; Conesa, H M
2016-10-01
Phytomanagement by phytostabilisation of metal(loid)-enriched mine tailings in semiarid areas has been proposed as a suitable technique to promote a self-sustainable vegetation cover that decreases the spread of polluted particles by erosion. The goal of this work was to evaluate the contribution of a pioneer plant species (Zygophyllum fabago) to ameliorating the soil conditions at two mine tailings piles located in a semiarid area in Southeast Spain. The ecophysiological performance of this plant species compared to a control population was assessed by analysing its nutritional and ecophysiological status. The presence of Z. fabago in the mine tailings enhanced soil microbial activity and increased the content of soil organic carbon within the rhizosphere (an increase of approx. 50%). Metal(loid) concentrations in the tailings may play a minor role in the establishment of Z. fabago plants, due to the low metal(loid) availability in the tailings (low CaCl2-extractable concentrations) and low uptake by the plants (e.g. up to 300 mg kg(-1) Zn in leaves). The lower δ13C and δ18O in the plants sampled at both tailings compared to the control ones may indicate softer stomatal regulation relative to the control site plants and therefore lower WUE [corrected]. The Z. fabago plants may skip some energy-demanding mechanisms, such as stomatal control and/or proline synthesis, to overcome the environmental stresses posed at the tailings. The Z. fabago plants revealed high plasticity of the species for adapting to the low-fertility soil conditions of the tailings and overcoming constraints associated with the dry season. Copyright © 2016 Elsevier Ltd. All rights reserved.
The phyto-remediation of radioactively contaminated land - a feasible approach or just bananas?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nesbitt, Victoria A
2013-07-01
Soil is an essential component of all terrestrial ecosystems and is under increasing threat from human activity. Techniques available for removing radioactive contamination from soil and aquatic substrates are limited and often costly to implement, particularly over large areas. Frequently, bulk soil removal, with its attendant consequences, is a significant component of the response to the majority of contamination incidents. Alternative techniques capable of removing contamination or exposure pathways without damaging or removing the soil are therefore of significant interest. An increasing number of old nuclear facilities are entering 'care and maintenance' with significant ground contamination issues. Phyto-remediation - the use of plants' natural metabolic processes to remediate contaminated sites - is one possible solution. Its key mechanisms include phyto-extraction and phyto-stabilisation, which are analogues of existing remedial techniques. Further, phyto-remediation can improve soil quality and stability and restore functionality. Information on the application of phyto-remediation in the nuclear industry is widely distributed over an extended period of time and across sources. It is therefore difficult to quickly and effectively identify which plants would be most suitable for phyto-remediation on a site-by-site basis. In response, a phyto-remediation tool has been developed to address this issue. Existing research and case studies were reviewed to understand the mechanisms of phyto-remediation, its effectiveness and the benefits and limitations of implementation. The potential for cost recovery from a phyto-remediation system is also briefly considered. An overview of this information is provided here. From these data, a set of matrices was developed to guide potential users through the plant selection process. The initial matrices take the user through a preliminary screening process to determine whether the contamination present at their site is amenable to phyto-remediation, and to give a rough indication as to what plants might be suitable. The second two allow the user to target specific plant species that would be most likely to establish successfully based on prevailing site conditions. The outcome of this study is a phyto-remediation tool that can facilitate the development of phyto-remediation projects, avoiding the need for in-depth research to identify optimal plant species on a case-by-case basis. (authors)
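The screening-matrix idea can be pictured as a simple lookup from site characteristics to candidate mechanisms and species. The sketch below is purely hypothetical: the contaminants, rooting-depth criterion and plant suggestions are illustrative placeholders and do not reproduce the content of the tool described above.

```python
# Hypothetical sketch of a first-pass screening step for a phyto-remediation selection tool.
# The contaminants, thresholds and species below are illustrative placeholders only.

SCREENING_TABLE = {
    # contaminant: (suggested mechanism, example candidate plant groups)
    "Cs-137": ("phyto-extraction", ["Brassica spp.", "Amaranthus spp."]),
    "Sr-90": ("phyto-extraction", ["Helianthus annuus"]),
    "U": ("phyto-stabilisation", ["grasses with dense rooting"]),
}

def screen_site(contaminant: str, depth_m: float) -> str:
    """Very rough first-pass screen: shallow contamination within an assumed rooting zone only."""
    if contaminant not in SCREENING_TABLE:
        return "not covered by this sketch"
    if depth_m > 1.0:  # assumed rooting-depth cut-off
        return "contamination likely below rooting depth; phyto-remediation unlikely to help"
    mechanism, plants = SCREENING_TABLE[contaminant]
    return f"consider {mechanism}; candidate plants: {', '.join(plants)}"

print(screen_site("Cs-137", depth_m=0.3))
```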
Moreno-Jiménez, Eduardo; Vázquez, Saúl; Carpena-Ruiz, Ramón O; Esteban, Elvira; Peñalosa, Jesús M
2011-06-01
Re-vegetation is the main aim of ecological restoration projects, and in Mediterranean environments native plants are desirable to achieve successful restoration. In 1998, the burst of a tailings dam flooded the Guadiamar river valley downstream from Aznalcóllar (Southern Spain) with sludges that contained elevated concentrations of metals and metalloids, polluting soils and waters. A phytoremediation experiment to assess the potential use of native shrub species for the restoration of soils affected by the spillage was performed from 2005 to 2007, with soils divided into two groups: pH < 5 and pH > 5. Four native shrubs (Myrtus communis, Retama sphaerocarpa, Rosmarinus officinalis and Tamarix gallica) were planted and left to grow without intervention. Trace element concentrations in soils and plants, their extractability in soils, transfer factors and plant survival were used to identify the most interesting species for phytoremediation. Total As was higher in soils with pH < 5. Ammonium sulphate-extractable zinc, copper, cadmium and aluminium concentrations were higher in very acid soils, but arsenic was extracted more efficiently when soil pH was >5. Unlike As, which was either fixed by Fe oxides or retained as sulphide, the extractable metals showed significant relationships with the corresponding total soil metal concentration and inverse relationships with soil pH. T. gallica, R. officinalis and R. sphaerocarpa survived better in soils with pH > 5, while M. communis had better survival at pH < 5. R. sphaerocarpa showed the highest survival (30%) in all soils. Trace element transfer from soil to harvestable parts was low for all species and elements, and some species may have been able to decrease trace element availability in the soil. Our results suggest that R. sphaerocarpa is an adequate plant species for phytostabilising these soils, although more research is needed to address the self-sustainability of this remediation technique and the associated environmental changes. Copyright © 2011 Elsevier Ltd. All rights reserved.
Abuhani, W A; Dasgupta-Schubert, N; Villaseñor, L M; García Avila, D; Suárez, L; Johnston, C; Borjas, S E; Alexander, S A; Landsberger, S; Suárez, M C
2015-01-01
The Los Azufres geothermal complex of central Mexico is characterized by fumaroles and boiling hot springs. The fumaroles form habitats for extremophilic mosses and ferns. Physico-chemical measurements of two relatively pristine fumarolic microcosms point to their resemblance to the paleo-environment of the earth during the Ordovician and Devonian periods. These geothermal habitats were analysed for the distribution of elemental mass fractions in the rhizospheric soil (RS), the native volcanic substrate (VS) and the sediments (S), using the new high-sensitivity technique of polarized x-ray energy dispersive fluorescence spectrometry (PEDXRF) as well as instrumental neutron activation analysis (INAA) for selected elements. This work presents the results for the naturally occurring heavy radioactive elements (NOHRE) Bi, Th and U, but principally the latter two. For the RS, the density was found to be the lowest and the total organic matter content the highest. Bi was found to be negligibly present in all substrate types. The average Th and U mass fractions in the RS were higher than in the VS and about equal to their average mass fractions in the S. The VS mass fraction of Th was higher, and that of U lower, than the mass fractions in the earth's crust. In fact, for the fumaroles of one site, the average RS mass fractions of these elements were higher than the averaged values for S (without considering the statistical dispersion). The immobilization of the NOHRE in the RS is brought about by the bio-geochemical processes specific to these extremophiles. Its effectiveness is such that, despite the small masses of these plants, it compares with, or may sometimes exceed, the immobilization of the NOHRE in the S by the abiotic and aggressive chemical action of the hot springs. These results indicate that the fumarolic plants are able to transform the volcanic substrate into soil and to affect the NOHRE mass fractions even though these elements are not plant nutrients. Mirrored back to the paleo times when such plant types were ubiquitous, this would mean that the first plants contributed significantly to pedogenesis and the biogeochemical recycling of even the heaviest and radioactive elements. Such plants may potentially be useful for the phytostabilisation of soil moderately contaminated by the NOHRE. Furthermore, where applicable, geochronology may need to take into account the influence of the early plants on the NOHRE distributions. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Pierart, Antoine; Braud, Armelle; Lebeau, Thierry; Séjalon-Delmas, Nathalie; Dumat, Camille
2014-05-01
The European Environment Agency estimates that ca. 250 000 sites require clean-up and that about 100 000 ha could have been contaminated by metals in Europe. Numerous remediation techniques have therefore been tested, and phytoremediation appears to be a sustainable and low-cost in situ technique, particularly for large-scale remediation of polluted arable soils. Arbuscular mycorrhizal fungi (AMF) are already used in phytoextraction or phytostabilisation of many metal(loid)s (Gu et al., 2013; Sharma and Sharma, 2013). However, while plant inoculation with AMF mostly results in an increase of plant biomass, the response in terms of lead accumulation in shoots is contrasting (Lebeau et al., 2008). Furthermore, nothing is currently known about Sb transfer to plants in AMF-assisted phytoremediation. Yet, recently, much research has concerned the accumulation of Sb in the environment, its (eco)toxicity and the risk of bioaccumulation in vegetables (Feng et al., 2013), especially in some areas of China where Sb mining activities have widely contaminated arable lands (Wu et al., 2011). Our research project, which is part of a national program for urban gardens (JASSUR, http://www.agence-nationale-recherche.fr), focused on polluted soils in associative urban gardens with both geogenic and anthropogenic origins of Pb and Sb. The impact of Pb and Sb on AMF density and diversity was studied using morphological and biomolecular approaches. The role of AMF symbiosis with lettuce (Lactuca sativa L.) in Pb and Sb compartmentalization, speciation and phytoavailability was investigated. The influence of soil organic matter on these processes was examined. Eventually, the fraction of metal(loid)s available to humans in case of ingestion of lettuces unfit for human consumption (Foucault et al., 2013; Xiong et al., 2013) will be assessed in relation to the influence of AMF symbiosis and organic matter. Key words: Mycorrhiza, Antimony, Compartmentation, Speciation, Edible Plants, Urban Agriculture. References: Feng, R., Wei, C., Tu, S., Ding, Y., Wang, R., Guo, J., 2013. The uptake and detoxification of antimony by plants: a review. Environ. Exp. Bot. 96, 28-34. Foucault, Y., Lévêque, T., Xiong, T., Schreck, E., Austruy, A., Shahid, M., Dumat, C., 2013. Green manure plants for remediation of soils polluted by metals and metalloids: Ecotoxicity and human bioavailability assessment. Chemosphere 93, 1430-1435. Gu, H.H., Li, F.P., Yu, Q., Gao, Y.Q., Yuan, X.T., 2013. The roles of the arbuscular mycorrhizal fungus Glomus mosseae and Festuca arundinacea in phytostabilization of lead/zinc tailings. Adv. Mater. Res. 699, 245-250. Lebeau, T., Braud, A., Jézéquel, K., 2008. Performance of bioaugmentation-assisted phytoextraction applied to metal contaminated soils: A review. Environ. Pollut. 153, 497-522. Sharma, A., Sharma, H., 2013. Role of vesicular arbuscular mycorrhiza in the mycoremediation of heavy toxic metals from soil. Int. J. LifeSc. Bt Pharm. Res. 2, 2418-2431. Wu, F., Fu, Z., Liu, B., Mo, C., Chen, B., Corns, W., Liao, H., 2011. Health risk associated with dietary co-exposure to high levels of antimony and arsenic in the world's largest antimony mine area. Sci. Total Environ. 409, 3344-3351. Xiong, T., Austruy, A., Dappe, V., Leveque, T., Sobanska, S., Foucault, Y., Dumat, C., 2013. Phytotoxicity and bioaccessibility of metals for vegetables exposed to atmospheric fine particles in polluted urban areas. Urban Environmental Pollution, Asian Edition, 17-20, Beijing, China.
Bert, Valérie; Seuntjens, Piet; Dejonghe, Winnie; Lacherez, Sophie; Thuy, Hoang Thi Thanh; Vandecasteele, Bart
2009-11-01
Polluted sediments in rivers may be transported by the river to the sea, spread over river banks and tidal marshes, or managed, i.e. actively dredged and disposed of on land. Once sedimented on tidal marshes, alluvial areas or flood control areas, the polluted sediments enter semi-terrestrial ecosystems or agro-ecosystems and may pose a risk. Disposal of polluted dredged sediments on land may also lead to certain risks. Up to a few years ago, contaminated dredged sediments were placed in confined disposal facilities. European policy encourages sediment valorisation, and this will be a technological challenge for the near future. Currently, contaminated dredged sediments are often not valorisable due to their high content of contaminants and their consequent hazardous properties. In addition, it is generally admitted that treatment and re-use of heavily contaminated dredged sediments is not a cost-effective alternative to confined disposal. For contaminated sediments and associated disposal facilities used in the past, a realistic, low-cost, safe, ecologically sound and sustainable management option is required. In this context, phytoremediation is proposed in the literature as a management option. The aim of this paper is to review the current knowledge on management, (phyto)remediation and associated risks in the particular case of sediments contaminated with organic and inorganic pollutants. This paper deals with the following features: (1) management and remediation of contaminated sediments and associated risk assessment; (2) management options for ecosystems on polluted sediments, based on phytoremediation of contaminated sediments, with a focus on phytoextraction, phytostabilisation and phytoremediation of organic pollutants; and (3) microbial and mycorrhizal processes occurring in contaminated sediments during phytoremediation. In this review, an overview is given of phytoremediation as a management option for semi-terrestrial and terrestrial ecosystems affected by polluted sediments, and of the processes affecting pollutant bioavailability in the sediments. Studies that combine contaminated sediment and phytoremediation are relatively recent, and their number has been increasing in recent years. Several papers suggest including phytoremediation in a management scheme for contaminated dredged sediments and state that phytoremediation can contribute to the revaluation of land-disposed contaminated sediments. The status of sediments, i.e. reduced or oxidised, highly influences contaminant mobility and (eco)toxicity and the success of phytoremediation. Studies are performed either on near-fresh sediment or on sediment-derived soil. Field studies show temporary negative effects on plant growth due to oxidation and subsequent ageing of contaminated sediments disposed on land. The review shows that a large variety of plants and trees are able to colonise or develop on contaminated dredged sediment under particular conditions or events (e.g. high levels of organic matter, clay and moisture content, flooding, seasonal hydrological variations). Depending on the studies, trees, high-biomass crop species and graminaceous species could be used to degrade organic pollutants and to extract or stabilise inorganic pollutants. The water content of sediment is a limiting factor for mycorrhizal development. In sediment, specific bacteria may enhance the mobilisation of inorganic contaminants, whereas others may participate in their immobilisation. Bacteria are also able to degrade organic pollutants, and their actions may be increased in the presence of plants. The choice of plants is particularly crucial for phytoremediation success on contaminated sediments. Extremely few studies are long-term field-based studies. Short-term effects and resilience of ecosystems are observed in long-term studies, i.e. due to degradation and stabilisation of pollutants. Terrestrial ecosystems affected by polluted sediments range from riverine tidal marshes, with several interacting processes and vegetation development mainly determined by hydrology, through alluvial soils affected by overbank sedimentation (including flood control areas), to dredged sediment disposal facilities, where hydrology and vegetation might be affected or managed by human intervention. This gradient is also a gradient from systems with highly variable soil and hydrological conditions on a temporal scale (tidal marshes) to systems with a distinct soil development over time (dredged sediment landfill sites). In some circumstances (e.g. to avoid flooding or to ensure navigation), dredging operations are necessary. Management and remediation of contaminated sediments are necessary to reduce the ecological risks and the risks associated with food chain contamination and leaching. Besides disposal, classical remediation technologies for contaminated sediment also extract or destroy contaminants. These techniques imply deterioration of the sediment structure and prohibitive costs. On the contrary, phytoremediation could be a low-cost, environmentally friendly option particularly suited to in situ remediation of large sites. However, phytoremediation is rarely included in the management scheme of contaminated sediment or accepted as a viable option. Phytoremediation is still an emerging technology that has to prove its sustainability at field scale. Research needs to focus on optimisations to enhance applicability and to address the economic feasibility of phytoremediation.
NASA Astrophysics Data System (ADS)
Kurnia, H.; Noerhadi, N. A. I.
2017-08-01
Three-dimensional digital study models were introduced following advances in digital technology. This study was carried out to assess the reliability of digital study models scanned by a newly assembled laser scanning device. The aim of this study was to compare the digital study models and conventional models. Twelve sets of dental impressions were taken from patients with mild-to-moderate crowding. The impressions were taken twice, one with alginate and the other with polyvinylsiloxane. The alginate impressions were made into conventional models, and the polyvinylsiloxane impressions were scanned to produce digital models. The mesiodistal tooth width and Little's irregularity index (LII) were measured manually with digital calipers on the conventional models and digitally on the digital study models. Bolton analysis was performed on each study model. Each method was carried out twice to check for intra-observer variability. The reproducibility (comparison of the methods) was assessed using independent-sample t-tests. The mesiodistal tooth width did not differ significantly between conventional and digital models (p > 0.05). Independent-sample t-tests did not identify statistically significant differences for Bolton analysis and LII (p = 0.603 for Bolton and p = 0.894 for LII). The measurements of the digital study models are as accurate as those of the conventional models.
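The method comparison reported here comes down to an independent-sample t-test on the two sets of measurements. A minimal sketch of that test is shown below; the measurement values are made up for illustration and are not the study's data.

```python
# Minimal sketch of the independent-sample t-test used to compare measurement methods.
# The measurement values below are made up for illustration; they are not the study's data.
from scipy import stats

conventional_lii = [3.2, 4.1, 2.8, 5.0, 3.6, 4.4, 2.9, 3.8, 4.7, 3.1, 4.0, 3.5]  # mm
digital_lii      = [3.3, 4.0, 2.9, 4.9, 3.7, 4.5, 2.8, 3.9, 4.6, 3.2, 4.1, 3.4]  # mm

t_stat, p_value = stats.ttest_ind(conventional_lii, digital_lii)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")  # p > 0.05 -> no significant difference between methods
```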
NASA Astrophysics Data System (ADS)
Nugrahani, F.; Jazaldi, F.; Noerhadi, N. A. I.
2017-08-01
The field of orthodontics is always evolving, and this includes the use of innovative technology. One such technology is the development of three-dimensional (3D) digital study models that replace conventional study models made of stone. This study aims to compare mesio-distal tooth width, intercanine width and intermolar width measurements between a 3D digital study model and a conventional study model. Twelve sets of upper arch dental impressions were taken from subjects with non-crowded teeth. The impressions were taken twice, once with alginate and once with polyvinylsiloxane. The alginate impressions were used for the conventional study model, and the polyvinylsiloxane impressions were scanned to obtain the 3D digital study model. Scanning was performed using a laser triangulation scanner device assembled by the School of Electrical Engineering and Informatics at the Institut Teknologi Bandung and David Laser Scan software. For the conventional model, the mesio-distal width, intercanine width and intermolar width were measured using digital calipers; in the 3D digital study model they were measured using software. There were no significant differences in the mesio-distal width, intercanine width and intermolar width measurements between the conventional and 3D digital study models (p > 0.05). Thus, measurements obtained from 3D digital study models are as accurate as those obtained from conventional study models.
Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach
Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao
2018-01-01
When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise when the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach, and has several attractive features compared to existing models such as the bivariate generalized linear mixed model (Chu and Cole, 2006) and the Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities in the original scale, not requiring any transformation of probabilities or any link function, having a closed-form expression of the likelihood function, and no constraints on the correlation parameter. More importantly, since the marginal beta-binomial model is only based on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model by simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecifications. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. PMID:26303591
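The marginal (composite-likelihood) idea can be illustrated with the beta-binomial log-likelihood for a single outcome margin. The sketch below is not the authors' implementation: it fits one margin only (e.g. study-level event counts), uses made-up data, and omits the correlation parameter entirely.

```python
# Minimal sketch: fitting a beta-binomial distribution to one margin (events y out of n per study)
# by maximum likelihood. Illustrative only; not the authors' composite-likelihood code.
import numpy as np
from scipy.optimize import minimize
from scipy.special import betaln, gammaln

# Hypothetical data: events y out of n subjects in each of 8 studies.
y = np.array([12, 30, 8, 45, 20, 15, 33, 9])
n = np.array([20, 40, 15, 60, 30, 25, 50, 12])

def neg_loglik(params):
    a, b = np.exp(params)  # optimise on the log scale to keep a, b > 0
    log_pmf = (gammaln(n + 1) - gammaln(y + 1) - gammaln(n - y + 1)
               + betaln(y + a, n - y + b) - betaln(a, b))
    return -np.sum(log_pmf)

res = minimize(neg_loglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
a_hat, b_hat = np.exp(res.x)
print(f"alpha = {a_hat:.2f}, beta = {b_hat:.2f}, mean probability = {a_hat / (a_hat + b_hat):.3f}")
```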
Gallium arsenide (GaAs) solar cell modeling studies
NASA Technical Reports Server (NTRS)
Heinbockel, J. H.
1980-01-01
Various models were constructed to allow for variation of system components. Computer studies were then performed using the constructed models in order to study the effects of various system changes. In particular, GaAs and Si flat-plate solar power arrays were studied and compared. Series and shunt resistance models were constructed. Models for the chemical kinetics of the annealing process were prepared. For all models constructed, various parametric studies were performed.
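Series and shunt resistance are usually incorporated through the standard single-diode cell equation. The sketch below solves that equation numerically as a generic illustration; all parameter values are assumed, and this is not the specific model developed in the report.

```python
# Generic single-diode solar cell model with series (Rs) and shunt (Rsh) resistance.
# I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh, solved for I at a given V.
# All parameter values below are assumed for illustration.
import numpy as np
from scipy.optimize import brentq

Iph, I0 = 0.030, 1e-10     # photocurrent and diode saturation current (A)
Rs, Rsh = 0.5, 500.0       # series and shunt resistance (ohm)
n, Vt = 1.3, 0.0259        # ideality factor and thermal voltage at ~300 K (V)

def cell_current(V: float) -> float:
    f = lambda I: Iph - I0 * (np.exp((V + I * Rs) / (n * Vt)) - 1.0) - (V + I * Rs) / Rsh - I
    return brentq(f, -1.0, 1.0)  # root-find the implicit equation for I

for V in (0.0, 0.3, 0.6):
    print(f"V = {V:.2f} V -> I = {cell_current(V) * 1e3:.2f} mA")
```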
Endogenous Opioid Antagonism in Physiological Experimental Pain Models: A Systematic Review
Werner, Mads U.; Pereira, Manuel P.; Andersen, Lars Peter H.; Dahl, Jørgen B.
2015-01-01
Opioid antagonists are pharmacological tools applied as an indirect measure to detect activation of the endogenous opioid system (EOS) in experimental pain models. The objective of this systematic review was to examine the effect of mu-opioid-receptor (MOR) antagonists in placebo-controlled, double-blind studies using 'inhibitory' or 'sensitizing' physiological test paradigms in healthy human subjects. The databases PubMed and Embase were searched according to predefined criteria. Out of a total of 2,142 records, 63 studies (1,477 subjects [male/female ratio = 1.5]) were considered relevant. Twenty-five studies utilized 'inhibitory' test paradigms (ITP) and 38 studies utilized 'sensitizing' test paradigms (STP). The ITP-studies were characterized as conditioning modulation models (22 studies) and repetitive transcranial magnetic stimulation models (rTMS; 3 studies), and the STP-studies as secondary hyperalgesia models (6 studies), 'pain' models (25 studies), summation models (2 studies), nociceptive reflex models (3 studies) and miscellaneous models (2 studies). A consistent reversal of analgesia by a MOR-antagonist was demonstrated in 10 of the 25 ITP-studies, including stress-induced analgesia and rTMS. In the remaining 14 conditioning modulation studies, either absence of effects or ambiguous effects of MOR-antagonists were observed. In the STP-studies, no effect of the opioid blockade could be demonstrated in 5 out of 6 secondary hyperalgesia studies. The direction of MOR-antagonist-dependent effects upon pain ratings, threshold assessments and somatosensory evoked potentials (SSEP) did not appear consistent in 28 out of 32 'pain' model studies. In conclusion, only in 2 experimental human pain models, i.e., stress-induced analgesia and rTMS, did administration of a MOR-antagonist demonstrate a consistent effect, presumably mediated by EOS-dependent mechanisms of analgesia and hyperalgesia. PMID:26029906
Electrification Futures Study Modeling Approach | Energy Analysis | NREL
To quantitatively answer the research questions of the Electrification Futures Study, researchers will use multiple models, accounting for infrastructure inertia through stock turnover; load modeling is one component of this modeling approach.
Staffaroni, Adam M; Eng, Megan E; Moses, James A; Zeiner, Harriet Katz; Wickham, Robert E
2018-05-01
A growing body of research supports the validity of 5-factor models for interpreting the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV). The majority of these studies have utilized the WAIS-IV normative or clinical sample, the latter of which differs in its diagnostic composition from the referrals seen at outpatient neuropsychology clinics. To address this concern, 2 related studies were conducted on a sample of 322 American military Veterans who were referred for outpatient neuropsychological assessment. In Study 1, 4 hierarchical models with varying indicator configurations were evaluated: 3 extant 5-factor models from the literature and the traditional 4-factor model. In Study 2, we evaluated 3 variations in correlation structure in the models from Study 1: indirect hierarchical (i.e., higher-order g), bifactor (direct hierarchical), and oblique models. The results from Study 1 suggested that both 4- and 5-factor models showed acceptable fit. The results from Study 2 showed that bifactor and oblique models offer improved fit over the typically specified indirect hierarchical model, and the oblique models outperformed the orthogonal bifactor models. An exploratory analysis found improved fit when bifactor models were specified with oblique rather than orthogonal latent factors. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Theoretical models of parental HIV disclosure: a critical review.
Qiao, Shan; Li, Xiaoming; Stanton, Bonita
2013-01-01
This study critically examined three major theoretical models related to parental HIV disclosure (i.e., the Four-Phase Model [FPM], the Disclosure Decision Making Model [DDMM], and the Disclosure Process Model [DPM]), and the existing studies that could provide empirical support for these models or their components. For each model, we briefly reviewed its theoretical background, described its components and/or mechanisms, and discussed its strengths and limitations. The existing empirical studies supported most theoretical components in these models. However, hypotheses related to the mechanisms proposed in the models have not yet been tested due to a lack of empirical evidence. This study also synthesized alternative theoretical perspectives and new issues in disclosure research and clinical practice that may challenge the existing models. The current study underscores the importance of including components related to social and cultural contexts in theoretical frameworks, and calls for more adequately designed empirical studies in order to test and refine existing theories and to develop new ones.
NASA Astrophysics Data System (ADS)
Amalia, R.; Sari, I. M.; Sinaga, P.
2017-02-01
This research was motivated by previous studies that only identified students' misconceptions without examining the mechanisms behind those misconceptions. The mechanism of misconceptions can be studied more deeply with mental models. The purpose of this study was to identify students' mental models of heat convection and their relation to students' conceptions of heat and temperature. The method used in this study was an exploratory mixed-method design implemented in one of the high schools in Bandung. The results showed that, of the 7 mental models of heat convection described in Chiou's study (2013), only the first model (diffusion-based convection), the third model (evenly distributed convection) and the fifth model (warmness topped convection II) were found, along with a hybrid convection model as a new mental model. In addition, no specific relationship was found between mental models and categories of students' conceptions of heat and temperature.
Decision-analytic modeling studies: An overview for clinicians using multiple myeloma as an example.
Rochau, U; Jahn, B; Qerimi, V; Burger, E A; Kurzthaler, C; Kluibenschaedl, M; Willenbacher, E; Gastl, G; Willenbacher, W; Siebert, U
2015-05-01
The purpose of this study was to provide a clinician-friendly overview of decision-analytic models evaluating different treatment strategies for multiple myeloma (MM). We performed a systematic literature search to identify studies evaluating MM treatment strategies using mathematical decision-analytic models. We included studies that were published as full-text articles in English, and assessed relevant clinical endpoints, and summarized methodological characteristics (e.g., modeling approaches, simulation techniques, health outcomes, perspectives). Eleven decision-analytic modeling studies met our inclusion criteria. Five different modeling approaches were adopted: decision-tree modeling, Markov state-transition modeling, discrete event simulation, partitioned-survival analysis and area-under-the-curve modeling. Health outcomes included survival, number-needed-to-treat, life expectancy, and quality-adjusted life years. Evaluated treatment strategies included novel agent-based combination therapies, stem cell transplantation and supportive measures. Overall, our review provides a comprehensive summary of modeling studies assessing treatment of MM and highlights decision-analytic modeling as an important tool for health policy decision making. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
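To make the Markov state-transition approach named above concrete, the following minimal sketch simulates a hypothetical three-state cohort model. All state names, transition probabilities, utilities and the discount rate are invented for illustration and are not taken from any of the reviewed myeloma studies.

```python
import numpy as np

# Minimal Markov cohort model sketch: three health states with hypothetical
# annual transition probabilities and utility weights -- illustrative only.
states = ["progression_free", "progressed", "dead"]
P = np.array([
    [0.85, 0.10, 0.05],   # from progression-free
    [0.00, 0.80, 0.20],   # from progressed
    [0.00, 0.00, 1.00],   # dead is absorbing
])
utilities = np.array([0.80, 0.55, 0.00])   # QALY weight per state-year
discount = 0.03                            # annual discount rate
cycles = 20                                # 20 one-year cycles

cohort = np.array([1.0, 0.0, 0.0])         # everyone starts progression-free
life_years, qalys = 0.0, 0.0
for t in range(cycles):
    cohort = cohort @ P                    # one annual transition
    df = 1.0 / (1.0 + discount) ** (t + 1)
    life_years += cohort[:2].sum() * df    # discounted time alive this cycle
    qalys += (cohort @ utilities) * df     # discounted quality-adjusted time

print(f"Discounted life-years: {life_years:.2f}, QALYs: {qalys:.2f}")
```

The other approaches listed in the review differ mainly in how this state-occupancy bookkeeping is replaced: by branch probabilities (decision trees), survival curves (partitioned survival), or individual event times (discrete event simulation).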
Statistical considerations on prognostic models for glioma
Molinaro, Annette M.; Wrensch, Margaret R.; Jenkins, Robert B.; Eckel-Passow, Jeanette E.
2016-01-01
Given the lack of beneficial treatments in glioma, there is a need for prognostic models for therapeutic decision making and life planning. Recently several studies defining subtypes of glioma have been published. Here, we review the statistical considerations of how to build and validate prognostic models, explain the models presented in the current glioma literature, and discuss advantages and disadvantages of each model. The 3 statistical considerations to establishing clinically useful prognostic models are: study design, model building, and validation. Careful study design helps to ensure that the model is unbiased and generalizable to the population of interest. During model building, a discovery cohort of patients can be used to choose variables, construct models, and estimate prediction performance via internal validation. Via external validation, an independent dataset can assess how well the model performs. It is imperative that published models properly detail the study design and methods for both model building and validation. This provides readers the information necessary to assess the bias in a study, compare other published models, and determine the model's clinical usefulness. As editors, reviewers, and readers of the relevant literature, we should be cognizant of the needed statistical considerations and insist on their use. PMID:26657835
Beliefs and Gender Differences: A New Model for Research in Mathematics Education
ERIC Educational Resources Information Center
Li, Qing
2004-01-01
The major focus of this study is to propose a new research model, namely the Modified CGI gender model, for the study of gender differences in mathematics. This model is developed based on Fennema, Carpenter, and Peterson's (1989) CGI model. To examine the validity of this new model, this study also examines the gender differences in teacher and…
Empirically evaluating decision-analytic models.
Goldhaber-Fiebert, Jeremy D; Stout, Natasha K; Goldie, Sue J
2010-08-01
Model-based cost-effectiveness analyses support decision-making. To augment model credibility, evaluation via comparison to independent, empirical studies is recommended. We developed a structured reporting format for model evaluation and conducted a structured literature review to characterize current model evaluation recommendations and practices. As an illustration, we applied the reporting format to evaluate a microsimulation of human papillomavirus and cervical cancer. The model's outputs and uncertainty ranges were compared with multiple outcomes from a study of long-term progression from high-grade precancer (cervical intraepithelial neoplasia [CIN]) to cancer. Outcomes included 5 to 30-year cumulative cancer risk among women with and without appropriate CIN treatment. Consistency was measured by model ranges overlapping study confidence intervals. The structured reporting format included: matching baseline characteristics and follow-up, reporting model and study uncertainty, and stating metrics of consistency for model and study results. Structured searches yielded 2963 articles with 67 meeting inclusion criteria and found variation in how current model evaluations are reported. Evaluation of the cervical cancer microsimulation, reported using the proposed format, showed a modeled cumulative risk of invasive cancer for inadequately treated women of 39.6% (30.9-49.7) at 30 years, compared with the study: 37.5% (28.4-48.3). For appropriately treated women, modeled risks were 1.0% (0.7-1.3) at 30 years, study: 1.5% (0.4-3.3). To support external and projective validity, cost-effectiveness models should be iteratively evaluated as new studies become available, with reporting standardized to facilitate assessment. Such evaluations are particularly relevant for models used to conduct comparative effectiveness analyses.
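A minimal sketch of the consistency metric described above (model uncertainty ranges overlapping study confidence intervals), using the cumulative-risk figures quoted in the abstract; the helper function is hypothetical and simplifies the full reporting format.

```python
def intervals_overlap(model_range, study_ci):
    """True if the model's uncertainty range overlaps the study's confidence interval."""
    (m_lo, m_hi), (s_lo, s_hi) = model_range, study_ci
    return m_lo <= s_hi and s_lo <= m_hi

# 30-year cumulative invasive-cancer risk (%) as quoted in the abstract
comparisons = {
    "inadequately treated": ((30.9, 49.7), (28.4, 48.3)),
    "appropriately treated": ((0.7, 1.3), (0.4, 3.3)),
}
for group, (model_range, study_ci) in comparisons.items():
    status = "consistent" if intervals_overlap(model_range, study_ci) else "inconsistent"
    print(f"{group}: model {model_range} vs study {study_ci} -> {status}")
```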
de Boer, Pieter T; Frederix, Geert W J; Feenstra, Talitha L; Vemer, Pepijn
2016-09-01
Transparent reporting of validation efforts of health economic models gives stakeholders better insight into the credibility of model outcomes. In this study we reviewed recently published studies on seasonal influenza and early breast cancer in order to gain insight into the reporting of model validation efforts in the overall health economic literature. A literature search was performed in Pubmed and Embase to retrieve health economic modelling studies published between 2008 and 2014. Reporting on model validation was evaluated by checking for the word validation, and by using AdViSHE (Assessment of the Validation Status of Health Economic decision models), a tool containing a structured list of relevant items for validation. Additionally, we contacted corresponding authors to ask whether more validation efforts were performed than those reported in the manuscripts. A total of 53 studies on seasonal influenza and 41 studies on early breast cancer were included in our review. The word validation was used in 16 studies (30 %) on seasonal influenza and 23 studies (56 %) on early breast cancer; however, in a minority of studies, this referred to a model validation technique. Fifty-seven percent of seasonal influenza studies and 71 % of early breast cancer studies reported one or more validation techniques. Cross-validation of study outcomes was found most often. A limited number of studies reported on model validation efforts, although good examples were identified. Author comments indicated that more validation techniques were performed than those reported in the manuscripts. Although validation is deemed important by many researchers, this is not reflected in the reporting habits of health economic modelling studies. Systematic reporting of validation efforts would be desirable to further enhance decision makers' confidence in health economic models and their outcomes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abe, H.; Okuda, H.
We study linear and nonlinear properties of a new computer simulation model developed to study the propagation of electromagnetic waves in a dielectric medium in the linear and nonlinear regimes. The model is constructed by combining a microscopic model used in the semi-classical approximation for the dielectric media and the particle model developed for the plasma simulations. It is shown that the model may be useful for studying linear and nonlinear wave propagation in the dielectric media.
Mind the Noise When Identifying Computational Models of Cognition from Brain Activity.
Kolossa, Antonio; Kopp, Bruno
2016-01-01
The aim of this study was to analyze how measurement error affects the validity of modeling studies in computational neuroscience. A synthetic validity test was created using simulated P300 event-related potentials as an example. The model space comprised four computational models of single-trial P300 amplitude fluctuations which differed in terms of complexity and dependency. The single-trial fluctuation of simulated P300 amplitudes was computed on the basis of one of the models, at various levels of measurement error and at various numbers of data points. Bayesian model selection was performed based on exceedance probabilities. At very low numbers of data points, the least complex model generally outperformed the data-generating model. Invalid model identification also occurred at low levels of data quality and under low numbers of data points if the winning model's predictors were closely correlated with the predictors from the data-generating model. Given sufficient data quality and numbers of data points, the data-generating model could be correctly identified, even against models which were very similar to the data-generating model. Thus, a number of variables affects the validity of computational modeling studies, and data quality and numbers of data points are among the main factors relevant to the issue. Further, the nature of the model space (i.e., model complexity, model dependency) should not be neglected. This study provided quantitative results which show the importance of ensuring the validity of computational modeling via adequately prepared studies. The accomplishment of synthetic validity tests is recommended for future applications. Beyond that, we propose to render the demonstration of sufficient validity via adequate simulations mandatory to computational modeling studies.
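The following sketch mimics the synthetic validity test described above in a simplified form: single-trial amplitudes are generated from one regression model plus measurement noise, candidate models are fitted, and the winner is chosen by BIC rather than the exceedance probabilities used in the study. The predictors, sample size and noise level are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_bic(X, y):
    """Ordinary least squares fit; return the Bayesian information criterion."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    n, k = len(y), X1.shape[1]
    sigma2 = resid @ resid / n
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return -2 * loglik + k * np.log(n)

n_trials, noise_sd = 200, 1.0            # vary these to mimic the study's factors
surprise = rng.uniform(0, 1, n_trials)   # hypothetical trial-wise predictor
habituation = np.exp(-np.arange(n_trials) / 50)
y = 2.0 * surprise + noise_sd * rng.normal(size=n_trials)  # data-generating model

candidates = {
    "null (intercept only)": np.empty((n_trials, 0)),
    "surprise (generating)": surprise[:, None],
    "habituation": habituation[:, None],
    "surprise + habituation": np.column_stack([surprise, habituation]),
}
bics = {name: fit_bic(X, y) for name, X in candidates.items()}
print({name: round(bic, 1) for name, bic in bics.items()})
print("selected model:", min(bics, key=bics.get))
```

Rerunning the script with fewer trials or a larger noise standard deviation reproduces the qualitative pattern reported above: with too little or too noisy data, the simpler or a correlated rival model can win.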
Premium analysis for copula model: A case study for Malaysian motor insurance claims
NASA Astrophysics Data System (ADS)
Resti, Yulia; Ismail, Noriszura; Jaaman, Saiful Hafizah
2014-06-01
This study performs premium analysis for copula models with regression marginals. For illustration purposes, the copula models are fitted to the Malaysian motor insurance claims data. In this study, we consider copula models from the Archimedean and Elliptical families, and marginal distributions from Gamma and Inverse Gaussian regression models. The simulated results from the independent model, obtained by fitting regression models separately to each claim category, and the dependent model, obtained by fitting copula models to all claim categories, are compared. The results show that the dependent model using the Frank copula is the best model, since the risk premiums estimated under this model closely approximate the actual claims experience relative to the other copula models.
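Sampling a Frank copula requires Archimedean-specific machinery, so the sketch below instead uses a Gaussian copula (from the Elliptical family mentioned above) with Gamma marginals to illustrate the general idea of a dependent claims model versus an independence assumption; all shape, scale and correlation parameters are invented rather than estimated from the Malaysian data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 10_000
rho = 0.6                                    # hypothetical dependence between claim types

# Step 1: correlated uniforms via a Gaussian copula.
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)
u = stats.norm.cdf(z)

# Step 2: Gamma marginals (shape and scale chosen arbitrarily per claim category).
own_damage = stats.gamma.ppf(u[:, 0], a=2.0, scale=1500.0)
third_party = stats.gamma.ppf(u[:, 1], a=1.5, scale=2500.0)

# Pure premium under the dependent model vs an independence assumption.
dep_premium = np.mean(own_damage + third_party)
indep_premium = own_damage.mean() + third_party.mean()
tail_dep = np.mean(own_damage + third_party > 15_000)
tail_indep = np.mean(own_damage + rng.permutation(third_party) > 15_000)
print(f"mean premium (dependent):   {dep_premium:,.0f}")
print(f"mean premium (independent): {indep_premium:,.0f}")
print(f"P(total claim > 15,000): dependent {tail_dep:.3f} vs shuffled {tail_indep:.3f}")
```

As expected, dependence leaves the mean premium essentially unchanged but shifts the tail of the total-claim distribution, which is where the choice between independent and dependent models matters for pricing.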
Walden, Anita; Nahm, Meredith; Barnett, M. Edwina; Conde, Jose G.; Dent, Andrew; Fadiel, Ahmed; Perry, Theresa; Tolk, Chris; Tcheng, James E.; Eisenstein, Eric L.
2012-01-01
Background New data management models are emerging in multi-center clinical studies. We evaluated the incremental costs associated with decentralized vs. centralized models. Methods We developed clinical research network economic models to evaluate three data management models: centralized, decentralized with local software, and decentralized with shared database. Descriptive information from three clinical research studies served as inputs for these models. Main Outcome Measures The primary outcome was total data management costs. Secondary outcomes included: data management costs for sites, local data centers, and central coordinating centers. Results Both decentralized models were more costly than the centralized model for each clinical research study: the decentralized with local software model was the most expensive. Decreasing the number of local data centers and case book pages reduced cost differentials between models. Conclusion Decentralized vs. centralized data management in multi-center clinical research studies is associated with increases in data management costs. PMID:21335692
Assessment of Effectiveness and Limitations of Habitat Suitability Models for Wetland Restoration
Draugelis-Dale, Rassa O.
2008-01-01
Habitat suitability index (HSI) models developed for wildlife in the Louisiana Coastal Area Comprehensive Ecosystem Restoration Plan (LCA study) have been assessed for parameter and overall model quality. The success of the suitability models from the South Florida Water Management District for The Everglades restoration project and from the Spatially Explicit Species Index Models (SESI) of the Across Trophic Level System Simulation (ATLSS) Program of Florida warranted investigation with possible application of modeling theory to the current LCA study. General HSI models developed by the U.S. Fish and Wildlife Service were also investigated. This report presents examinations of theoretical formulae and comparisons of the models, performed by using diverse hypothetical settings of hydrological/biological ecosystems to highlight weaknesses as well as strengths among the models, limited to the American alligator and selected wading bird species (great blue heron, great egret, and white ibis). Recommendations were made for the LCA study based on these assessments. An enhanced HSI model for the LCA study is proposed for the American alligator, and a new HSI model for wading birds is introduced for the LCA study. Performance comparisons of the proposed models with the other suitability models are made by using the aforementioned hypothetical settings.
Elçi, A; Karadaş, D; Fistikoğlu, O
2010-01-01
A numerical modeling case study of groundwater flow in a diffuse pollution prone area is presented. The study area is located within the metropolitan borders of the city of Izmir, Turkey. This groundwater flow model was unconventional in the application since the groundwater recharge parameter in the model was estimated using a lumped, transient water-budget based precipitation-runoff model that was executed independent of the groundwater flow model. The recharge rate obtained from the calibrated precipitation-runoff model was used as input to the groundwater flow model, which was eventually calibrated to measured water table elevations. Overall, the flow model results were consistent with field observations and model statistics were satisfactory. Water budget results of the model revealed that groundwater recharge comprised about 20% of the total water input for the entire study area. Recharge was the second largest component in the budget after leakage from streams into the subsurface. It was concluded that the modeling results can be further used as input for contaminant transport modeling studies in order to evaluate the vulnerability of water resources of the study area to diffuse pollution.
Climate and atmospheric modeling studies
NASA Technical Reports Server (NTRS)
1992-01-01
The climate and atmosphere modeling research programs have concentrated on the development of appropriate atmospheric and upper ocean models, and preliminary applications of these models. Principal models are a one-dimensional radiative-convective model, a three-dimensional global model, and an upper ocean model. Principal applications were the study of the impact of CO2, aerosols, and the solar 'constant' on climate.
A comparative study of turbulence models in predicting hypersonic inlet flows
NASA Technical Reports Server (NTRS)
Kapoor, Kamlesh
1993-01-01
A computational study has been conducted to evaluate the performance of various turbulence models. The NASA P8 inlet, which represents cruise condition of a typical hypersonic air-breathing vehicle, was selected as a test case for the study; the PARC2D code, which solves the full two dimensional Reynolds-averaged Navier-Stokes equations, was used. Results are presented for a total of six versions of zero- and two-equation turbulence models. Zero-equation models tested are the Baldwin-Lomax model, the Thomas model, and a combination of the two. Two-equation models tested are low-Reynolds number models (the Chien model and the Speziale model) and a high-Reynolds number model (the Launder and Spalding model).
ERIC Educational Resources Information Center
St. John, Edward P.; Loescher, Siri; Jacob, Stacy; Cekic, Osman; Kupersmith, Leigh; Musoba, Glenda Droogsma
A growing number of schools are exploring the prospect of applying for funding to implement a Comprehensive School Reform (CSR) model. But the process of selecting a CSR model can be complicated because it frequently involves self-study and a review of models to determine which models best meet the needs of the school. This study guide is intended…
Universal Session-Level Change Processes in an Early Session of Psychotherapy: Path Models
ERIC Educational Resources Information Center
Kolden, Gregory G.; Chisholm-Stockard, Sarah M.; Strauman, Timothy J.; Tierney, Sandy C.; Mullen, Elizabeth A.; Schneider, Kristin L.
2006-01-01
The authors used structural equation modeling to investigate universal change processes identified in the generic model of psychotherapy (GMP). Three path models of increasing complexity were examined in Study 1 in dynamic therapy. The best-fitting model from Study 1 was replicated in Study 2 for participants receiving either cognitive or…
Comparative study of turbulence models in predicting hypersonic inlet flows
NASA Technical Reports Server (NTRS)
Kapoor, Kamlesh; Anderson, Bernhard H.; Shaw, Robert J.
1992-01-01
A numerical study was conducted to analyze the performance of different turbulence models when applied to the hypersonic NASA P8 inlet. Computational results from the PARC2D code, which solves the full two-dimensional Reynolds-averaged Navier-Stokes equation, were compared with experimental data. The zero-equation models considered for the study were the Baldwin-Lomax model, the Thomas model, and a combination of the Baldwin-Lomax and Thomas models; the two-equation models considered were the Chien model, the Speziale model (both low Reynolds number), and the Launder and Spalding model (high Reynolds number). The Thomas model performed best among the zero-equation models, and predicted good pressure distributions. The Chien and Speziale models compared very well with the experimental data, and performed better than the Thomas model near the walls.
Theory, modeling, and integrated studies in the Arase (ERG) project
NASA Astrophysics Data System (ADS)
Seki, Kanako; Miyoshi, Yoshizumi; Ebihara, Yusuke; Katoh, Yuto; Amano, Takanobu; Saito, Shinji; Shoji, Masafumi; Nakamizo, Aoi; Keika, Kunihiro; Hori, Tomoaki; Nakano, Shin'ya; Watanabe, Shigeto; Kamiya, Kei; Takahashi, Naoko; Omura, Yoshiharu; Nose, Masahito; Fok, Mei-Ching; Tanaka, Takashi; Ieda, Akimasa; Yoshikawa, Akimasa
2018-02-01
Understanding of underlying mechanisms of drastic variations of the near-Earth space (geospace) is one of the current focuses of magnetospheric physics. The science target of the geospace research project Exploration of energization and Radiation in Geospace (ERG) is to understand the geospace variations with a focus on the relativistic electron acceleration and loss processes. In order to achieve the goal, the ERG project consists of three parts: the Arase (ERG) satellite, ground-based observations, and theory/modeling/integrated studies. The role of the theory/modeling/integrated studies part is to promote relevant theoretical and simulation studies as well as integrated data analysis to combine different kinds of observations and modeling. Here we provide technical reports on simulation and empirical models related to the ERG project together with their roles in the integrated studies of dynamic geospace variations. The simulation and empirical models covered include the radial diffusion model of the radiation belt electrons, GEMSIS-RB and RBW models, CIMI model with global MHD simulation REPPU, GEMSIS-RC model, plasmasphere thermosphere model, self-consistent wave-particle interaction simulations (electron hybrid code and ion hybrid code), the ionospheric electric potential (GEMSIS-POT) model, and SuperDARN electric field models with data assimilation. ERG (Arase) science center tools to support integrated studies with various kinds of data are also briefly introduced.
ERIC Educational Resources Information Center
Cheng, Meng-Fei; Lin, Jang-Long
2015-01-01
Understanding the nature of models and engaging in modeling practice have been emphasized in science education. However, few studies discuss the relationships between students' views of scientific models and their ability to develop those models. Hence, this study explores the relationship between students' views of scientific models and their…
Five regional scale models with a horizontal domain covering the European continent and its surrounding seas, one hemispheric and one global scale model participated in an atmospheric mercury modelling intercomparison study. Model-predicted concentrations in ambient air were comp...
Wockner, Leesa F; Hoffmann, Isabell; O'Rourke, Peter; McCarthy, James S; Marquart, Louise
2017-08-25
The efficacy of vaccines aimed at inhibiting the growth of malaria parasites in the blood can be assessed by comparing the growth rate of parasitaemia in the blood of subjects treated with a test vaccine to that of controls. In studies using induced blood stage malaria (IBSM), a type of controlled human malaria infection, parasite growth rate has been measured using models with the intercept on the y-axis fixed to the inoculum size. A set of statistical models was evaluated to determine an optimal methodology to estimate parasite growth rate in IBSM studies. Parasite growth rates were estimated using data from 40 subjects published in three IBSM studies. Data was fitted using 12 statistical models: log-linear, sine-wave with the period either fixed to 48 h or not fixed; these models were fitted with the intercept either fixed to the inoculum size or not fixed. All models were fitted by individual, and overall by study using a mixed effects model with a random effect for the individual. Log-linear models and sine-wave models, with the period fixed or not fixed, resulted in similar parasite growth rate estimates (within 0.05 log10 parasites per mL/day). Average parasite growth rate estimates for models fitted by individual with the intercept fixed to the inoculum size were substantially lower by an average of 0.17 log10 parasites per mL/day (range 0.06-0.24) compared with non-fixed intercept models. Variability of parasite growth rate estimates across the three studies analysed was substantially higher (3.5 times) for fixed-intercept models compared with non-fixed intercept models. The same tendency was observed in models fitted overall by study. Modelling data by individual or overall by study had minimal effect on parasite growth estimates. The analyses presented in this report confirm that fixing the intercept to the inoculum size influences parasite growth estimates. The most appropriate statistical model to estimate the growth rate of blood-stage parasites in IBSM studies appears to be a log-linear model fitted by individual and with the intercept estimated in the log-linear regression. Future studies should use this model to estimate parasite growth rates.
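A minimal sketch of the fixed- versus free-intercept comparison described above, for a single simulated subject; the inoculum size, growth rate, noise level and the offset between the inoculum and the true regression intercept are all invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated IBSM-style data for one subject: log10 parasitaemia measured daily.
inoculum_log10 = 3.0        # nominal inoculum size (log10 parasites per mL), assumed
true_growth = 0.75          # true slope (log10 parasites per mL per day), assumed
days = np.arange(1.0, 9.0)  # days 1-8 after inoculation
# A starting parasitaemia slightly below the nominal inoculum (offset of -0.5)
# pulls the true regression intercept under the value the fixed model enforces.
obs = (inoculum_log10 - 0.5) + true_growth * days + rng.normal(0, 0.15, days.size)

# Free-intercept log-linear model: slope and intercept estimated together.
slope_free, intercept_free = np.polyfit(days, obs, 1)

# Fixed-intercept model: the line is forced through (day 0, inoculum size),
# i.e. least squares for obs - inoculum = slope * days.
slope_fixed = np.sum(days * (obs - inoculum_log10)) / np.sum(days ** 2)

print(f"free intercept:  growth = {slope_free:.2f} log10 parasites/mL per day")
print(f"fixed intercept: growth = {slope_fixed:.2f} log10 parasites/mL per day")
```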
Panken, Guus; Verhagen, Arianne P; Terwee, Caroline B; Heymans, Martijn W
2017-08-01
Study Design Systematic review and validation study. Background Many prognostic models of knee pain outcomes have been developed for use in primary care. Variability among published studies with regard to patient population, outcome measures, and relevant prognostic factors hampers the generalizability and implementation of these models. Objectives To summarize existing prognostic models in patients with knee pain in a primary care setting and to develop and internally validate new summary prognostic models. Methods After a sensitive search strategy, 2 reviewers independently selected prognostic models for patients with nontraumatic knee pain and assessed the methodological quality of the included studies. All predictors of the included studies were evaluated, summarized, and classified. The predictors assessed in multiple studies of sufficient quality are presented in this review. Using data from the Musculoskeletal System Study (BAS) cohort of patients with a new episode of knee pain, recruited consecutively by Dutch general medical practitioners (n = 372), we used predictors with a strong level of evidence to develop new prognostic models for each outcome measure and internally validated these models. Results Sixteen studies were eligible for inclusion. We considered 11 studies to be of sufficient quality. None of these studies validated their models. Five predictors with strong evidence were related to function and 6 to recovery, and were used to compose 2 prognostic models for patients with knee pain at 1 year. Running these new models in another data set showed explained variances (R²) of 0.36 (function) and 0.33 (recovery). The area under the curve of the recovery model was 0.79. After internal validation, the adjusted R² values of the models were 0.30 (function) and 0.20 (recovery), and the area under the curve was 0.73. Conclusion We developed 2 valid prognostic models for function and recovery for patients with nontraumatic knee pain, based on predictors with strong evidence. A longer duration of complaints predicted poorer function but did not adequately predict chance of recovery. Level of Evidence Prognosis, levels 1a and 1b. J Orthop Sports Phys Ther 2017;47(8):518-529. Epub 16 Jun 2017. doi:10.2519/jospt.2017.7142.
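Internal validation of a prognostic model of the kind described above is often done by bootstrap optimism correction; the sketch below applies it to a logistic model fitted to synthetic data. The five predictors are hypothetical stand-ins, not the study's actual predictors, and the cohort is only matched in size to the BAS cohort.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)

# Synthetic stand-in for a knee-pain cohort: 372 patients, 5 baseline predictors.
n = 372
X = rng.normal(size=(n, 5))
logit = -0.2 + X @ np.array([0.8, -0.6, 0.4, 0.0, 0.0])
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))           # 1 = recovered

def fit_auc(X_fit, y_fit, X_eval, y_eval):
    model = LogisticRegression(max_iter=1000).fit(X_fit, y_fit)
    return roc_auc_score(y_eval, model.predict_proba(X_eval)[:, 1])

apparent_auc = fit_auc(X, y, X, y)

# Bootstrap optimism correction (Harrell-style internal validation).
optimism = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    boot_auc = fit_auc(X[idx], y[idx], X[idx], y[idx])   # apparent AUC in bootstrap sample
    test_auc = fit_auc(X[idx], y[idx], X, y)             # same model on the original data
    optimism.append(boot_auc - test_auc)

corrected_auc = apparent_auc - np.mean(optimism)
print(f"apparent AUC {apparent_auc:.3f}, optimism-corrected AUC {corrected_auc:.3f}")
```

The drop from apparent to corrected performance mirrors the drop from R² of 0.36/0.33 to adjusted values of 0.30/0.20 reported above.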
Haji Ali Afzali, Hossein; Gray, Jodi; Karnon, Jonathan
2013-04-01
Decision analytic models play an increasingly important role in the economic evaluation of health technologies. Given uncertainties around the assumptions used to develop such models, several guidelines have been published to identify and assess 'best practice' in the model development process, including general modelling approach (e.g., time horizon), model structure, input data and model performance evaluation. This paper focuses on model performance evaluation. In the absence of a sufficient level of detail around model performance evaluation, concerns regarding the accuracy of model outputs, and hence the credibility of such models, are frequently raised. Following presentation of its components, a review of the application and reporting of model performance evaluation is presented. Taking cardiovascular disease as an illustrative example, the review investigates the use of face validity, internal validity, external validity, and cross model validity. As a part of the performance evaluation process, model calibration is also discussed and its use in applied studies investigated. The review found that the application and reporting of model performance evaluation across 81 studies of treatment for cardiovascular disease was variable. Cross-model validation was reported in 55 % of the reviewed studies, though the level of detail provided varied considerably. We found that very few studies documented other types of validity, and only 6 % of the reviewed articles reported a calibration process. Considering the above findings, we propose a comprehensive model performance evaluation framework (checklist), informed by a review of best-practice guidelines. This framework provides a basis for more accurate and consistent documentation of model performance evaluation. This will improve the peer review process and the comparability of modelling studies. Recognising the fundamental role of decision analytic models in informing public funding decisions, the proposed framework should usefully inform guidelines for preparing submissions to reimbursement bodies.
Lee, Keon Yong; Jang, Gun Hyuk; Byun, Cho Hyun; Jeun, Minhong; Searson, Peter C; Lee, Kwan Hyi
2017-06-30
Preclinical screening with animal models is an important initial step in clinical translation of new drug delivery systems. However, establishing efficacy, biodistribution, and biotoxicity of complex, multicomponent systems in small animal models can be expensive and time-consuming. Zebrafish models represent an alternative for preclinical studies for nanoscale drug delivery systems. These models allow easy optical imaging, large sample sizes, and organ-specific studies, and hence an increasing number of preclinical studies are employing zebrafish models. In this review, we introduce various models and discuss recent studies of nanoscale drug delivery systems in zebrafish models. Finally, we propose a guideline for preclinical trials to accelerate progress in this field. © 2017 The Author(s).
Model-Based Economic Evaluation of Treatments for Depression: A Systematic Literature Review.
Kolovos, Spyros; Bosmans, Judith E; Riper, Heleen; Chevreul, Karine; Coupé, Veerle M H; van Tulder, Maurits W
2017-09-01
An increasing number of model-based studies that evaluate the cost effectiveness of treatments for depression are being published. These studies have different characteristics and use different simulation methods. We aimed to systematically review model-based studies evaluating the cost effectiveness of treatments for depression and examine which modelling technique is most appropriate for simulating the natural course of depression. The literature search was conducted in the databases PubMed, EMBASE and PsycInfo between 1 January 2002 and 1 October 2016. Studies were eligible if they used a health economic model with quality-adjusted life-years or disability-adjusted life-years as an outcome measure. Data related to various methodological characteristics were extracted from the included studies. The available modelling techniques were evaluated based on 11 predefined criteria. This methodological review included 41 model-based studies, of which 21 used decision trees (DTs), 15 used cohort-based state-transition Markov models (CMMs), two used individual-based state-transition models (ISMs), and three used discrete-event simulation (DES) models. Just over half of the studies (54%) evaluated antidepressants compared with a control condition. The data sources, time horizons, cycle lengths, perspectives adopted and number of health states/events all varied widely between the included studies. DTs scored positively in four of the 11 criteria, CMMs in five, ISMs in six, and DES models in seven. There were substantial methodological differences between the studies. Since the individual history of each patient is important for the prognosis of depression, DES and ISM simulation methods may be more appropriate than the others for a pragmatic representation of the course of depression. However, direct comparisons between the available modelling techniques are necessary to yield firm conclusions.
Ethical issues in engineering models: an operations researcher's reflections.
Kleijnen, J
2011-09-01
This article starts with an overview of the author's personal involvement (as an Operations Research consultant) in several engineering case studies that may raise ethical questions; e.g., case studies on nuclear waste, water management, sustainable ecology, military tactics, and animal welfare. All these case studies employ computer simulation models. In general, models are meant to solve practical problems, which may have ethical implications for the various stakeholders; namely, the modelers, the clients, and the public at large. The article further presents an overview of codes of ethics in a variety of disciplines. It discusses the role of mathematical models, focusing on the validation of these models' assumptions. Documentation of these model assumptions needs special attention. Some ethical norms and values may be quantified through the model's multiple performance measures, which might be optimized. The uncertainty about the validity of the model leads to risk or uncertainty analysis and to a search for robust models. Ethical questions may be pressing in military models, including war games. However, computer games and the related experimental economics may also provide a special tool to study ethical issues. Finally, the article briefly discusses whistleblowing. Its many references to publications and websites enable further study of ethical issues in modeling.
Watershed and Economic Data InterOperability (WEDO) ...
Watershed and Economic Data InterOperability (WEDO) is a system of information technologies designed to publish watershed modeling studies for reuse. WEDO facilitates three aspects of interoperability: discovery, evaluation and integration of data. This increased level of interoperability goes beyond the current practice of publishing modeling studies as reports or journal articles. Rather than summarized results, modeling studies can be published with their full complement of input data, calibration parameters and output with associated metadata for easy duplication by others. Reproducible science is possible only if researchers can find, evaluate and use complete modeling studies performed by other modelers. WEDO greatly increases transparency by making detailed data available to the scientific community. WEDO is a next generation technology, a Web Service linked to the EPA’s EnviroAtlas for discovery of modeling studies nationwide. Streams and rivers are identified using the National Hydrography Dataset network and stream IDs. Streams with modeling studies available are color coded in the EnviroAtlas. One can select streams within a watershed of interest to readily find data available via WEDO. The WEDO website is linked from the EnviroAtlas to provide a thorough review of each modeling study. WEDO currently provides modeled flow and water quality time series, designed for a broad range of watershed and economic models for nutrient trading market analysis. M
Kim, Jung-Hee; Shin, Sujin; Park, Jin-Hwa
2015-04-01
The purpose of this study was to evaluate the methodological quality of nursing studies using structural equation modeling in Korea. Databases of KISS, DBPIA, and the National Assembly Library up to March 2014 were searched using the MeSH terms 'nursing', 'structure', 'model'. A total of 152 studies were screened. After removal of duplicates and non-relevant titles, 61 papers were read in full. Of the sixty-one articles retrieved, 14 studies were published between 1992 and 2000, 27 between 2001 and 2010, and 20 between 2011 and March 2014. The methodological quality of the reviewed studies varied considerably. The findings of this study suggest that more rigorous research is necessary to address theoretical identification, the two-indicator rule, sample distribution, treatment of missing values, mediator effects, discriminant validity, convergent validity, post hoc model modification, equivalent models issues, and alternative models issues. Further research with robust, consistent methodological study designs, from model identification to model respecification, is needed to improve the validity of the research.
Measurement error in epidemiologic studies of air pollution based on land-use regression models.
Basagaña, Xavier; Aguilera, Inmaculada; Rivera, Marcela; Agis, David; Foraster, Maria; Marrugat, Jaume; Elosua, Roberto; Künzli, Nino
2013-10-15
Land-use regression (LUR) models are increasingly used to estimate air pollution exposure in epidemiologic studies. These models use air pollution measurements taken at a small set of locations and modeling based on geographical covariates for which data are available at all study participant locations. The process of LUR model development commonly includes a variable selection procedure. When LUR model predictions are used as explanatory variables in a model for a health outcome, measurement error can lead to bias of the regression coefficients and to inflation of their variance. In previous studies dealing with spatial predictions of air pollution, bias was shown to be small while most of the effect of measurement error was on the variance. In this study, we show that in realistic cases where LUR models are applied to health data, bias in health-effect estimates can be substantial. This bias depends on the number of air pollution measurement sites, the number of available predictors for model selection, and the amount of explainable variability in the true exposure. These results should be taken into account when interpreting health effects from studies that used LUR models.
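The mechanism described above can be reproduced with a small simulation: a "true" exposure, a LUR-style prediction that captures only part of the spatial variability plus some error from the monitoring campaign, and a health outcome driven by the true exposure. All magnitudes below are invented; the point is only that the health-effect estimate based on LUR predictions can be biased relative to the true coefficient.

```python
import numpy as np

rng = np.random.default_rng(4)
n_subjects = 5000
true_beta = 0.10                     # true health effect per unit of exposure

# "True" long-term exposure at each participant's address.
true_exposure = rng.normal(20.0, 5.0, n_subjects)

# LUR-style prediction: captures only 60% of the deviation from the mean
# (smoothing from the selected geographic covariates) plus error carried
# over from the small monitoring campaign used to build the model.
lur_prediction = (true_exposure.mean()
                  + 0.6 * (true_exposure - true_exposure.mean())
                  + rng.normal(0.0, 2.0, n_subjects))

# Continuous health outcome driven by the true exposure.
outcome = 1.0 + true_beta * true_exposure + rng.normal(0.0, 2.0, n_subjects)

def slope(x, y):
    return np.polyfit(x, y, 1)[0]

print(f"effect using true exposure:   {slope(true_exposure, outcome):.3f}")
print(f"effect using LUR predictions: {slope(lur_prediction, outcome):.3f}")
```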
Risk prediction models for graft failure in kidney transplantation: a systematic review.
Kaboré, Rémi; Haller, Maria C; Harambat, Jérôme; Heinze, Georg; Leffondré, Karen
2017-04-01
Risk prediction models are useful for identifying kidney recipients at high risk of graft failure, thus optimizing clinical care. Our objective was to systematically review the models that have been recently developed and validated to predict graft failure in kidney transplantation recipients. We used PubMed and Scopus to search for English, German and French language articles published in 2005-15. We selected studies that developed and validated a new risk prediction model for graft failure after kidney transplantation, or validated an existing model with or without updating the model. Data on recipient characteristics and predictors, as well as modelling and validation methods were extracted. In total, 39 articles met the inclusion criteria. Of these, 34 developed and validated a new risk prediction model and 5 validated an existing one with or without updating the model. The most frequently predicted outcome was graft failure, defined as dialysis, re-transplantation or death with functioning graft. Most studies used the Cox model. There was substantial variability in predictors used. In total, 25 studies used predictors measured at transplantation only, and 14 studies used predictors also measured after transplantation. Discrimination performance was reported in 87% of studies, while calibration was reported in 56%. Performance indicators were estimated using both internal and external validation in 13 studies, and using external validation only in 6 studies. Several prediction models for kidney graft failure in adults have been published. Our study highlights the need to better account for competing risks when applicable in such studies, and to adequately account for post-transplant measures of predictors in studies aiming at improving monitoring of kidney transplant recipients. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
Herzog, Sereina A; Blaizot, Stéphanie; Hens, Niel
2017-12-18
Mathematical models offer the possibility of investigating infectious disease dynamics over time and may help inform the design of studies. A systematic review was performed in order to determine to what extent mathematical models have been incorporated into the process of planning studies, and hence inform study design, for infectious diseases transmitted between humans and/or animals. We searched Ovid Medline and two trial registry platforms (Cochrane, WHO) using search terms related to infection, mathematical model, and study design from the earliest dates to October 2016. Eligible publications and registered trials included mathematical models (compartmental, individual-based, or Markov) which were described and used to inform the design of infectious disease studies. We extracted information about the investigated infection, population, model characteristics, and study design. We identified 28 unique publications but no registered trials. Focusing on compartmental and individual-based models, we found 12 observational/surveillance studies and 11 clinical trials. For the observational/surveillance studies, the infections studied were split equally between animal and human infectious diseases, while all but one of the clinical trials concerned infections transmitted between humans. The mathematical models were used to inform, amongst other things, the required sample size (n = 16), the statistical power (n = 9), the frequency at which samples should be taken (n = 6), and from whom (n = 6). Despite the fact that mathematical models have been advocated for use at the planning stage of studies or surveillance systems, they are used scarcely. With only one exception, the publications described theoretical studies; hence, the models were not utilised in real studies.
A Model of Microteaching Lesson Study Implementation in the Prospective History Teacher Education
ERIC Educational Resources Information Center
Utami, Indah Wahyu Puji; Mashuri; Nafi'ah, Ulfatun
2016-01-01
Microteaching lesson study is a model to improve prospective teacher quality by incorporating several elements of microteaching and lesson study. This study concerns the implementation of microteaching lesson study in prospective history teacher education. The microteaching lesson study model implemented in this study consists of three stages: plan,…
Ocular hemodynamics and glaucoma: the role of mathematical modeling.
Harris, Alon; Guidoboni, Giovanna; Arciero, Julia C; Amireskandari, Annahita; Tobe, Leslie A; Siesky, Brent A
2013-01-01
To discuss the role of mathematical modeling in studying ocular hemodynamics, with a focus on glaucoma. We reviewed recent literature on glaucoma, ocular blood flow, autoregulation, the optic nerve head, and the use of mathematical modeling in ocular circulation. Many studies suggest that alterations in ocular hemodynamics play a significant role in the development, progression, and incidence of glaucoma. Although there is currently a limited number of studies involving mathematical modeling of ocular blood flow, regulation, and diseases (such as glaucoma), preliminary modeling work shows the potential of mathematical models to elucidate the mechanisms that contribute most significantly to glaucoma progression. Mathematical modeling is a useful tool when used synergistically with clinical and laboratory data in the study of ocular blood flow and glaucoma. The development of models to investigate the relationship between ocular hemodynamic alterations and glaucoma progression will provide a unique and useful method for studying the pathophysiology of glaucoma.
Latent Growth and Dynamic Structural Equation Models.
Grimm, Kevin J; Ram, Nilam
2018-05-07
Latent growth models make up a class of methods to study within-person change: how it progresses, how it differs across individuals, what its determinants are, and what its consequences are. Latent growth methods have been applied in many domains to examine average and differential responses to interventions and treatments. In this review, we introduce the growth modeling approach to studying change by presenting different models of change and interpretations of their model parameters. We then apply these methods to examining sex differences in the development of binge drinking behavior through adolescence and into adulthood. Advances in growth modeling methods are then discussed and include inherently nonlinear growth models, derivative specification of growth models, and latent change score models to study stochastic change processes. We conclude with relevant design issues of longitudinal studies and considerations for the analysis of longitudinal data.
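A basic linear latent growth model is equivalent to a mixed-effects model with random intercepts and slopes, which gives a quick way to sketch the idea without dedicated SEM software; the simulated data and parameter values below are invented, and latent change score or nonlinear extensions would need other tooling.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)

# Simulated longitudinal data: 200 people measured at 5 occasions, with
# person-specific intercepts and slopes playing the role of the latent
# growth factors.
n_people, n_waves = 200, 5
person = np.repeat(np.arange(n_people), n_waves)
time = np.tile(np.arange(n_waves), n_people)
intercepts = rng.normal(10.0, 2.0, n_people)
slopes = rng.normal(1.5, 0.5, n_people)
y = intercepts[person] + slopes[person] * time + rng.normal(0, 1.0, person.size)
data = pd.DataFrame({"person": person, "time": time, "y": y})

# Linear growth model: the fixed effects give the average trajectory, the
# random effects capture individual differences in starting point and rate
# of change.
model = smf.mixedlm("y ~ time", data, groups=data["person"], re_formula="~time")
result = model.fit()
print(result.summary())
```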
How fast is fisheries-induced evolution? Quantitative analysis of modelling and empirical studies
Audzijonyte, Asta; Kuparinen, Anna; Fulton, Elizabeth A
2013-01-01
A number of theoretical models, experimental studies and time-series studies of wild fish have explored the presence and magnitude of fisheries-induced evolution (FIE). While most studies agree that FIE is likely to be happening in many fished stocks, there are disagreements about its rates and implications for stock viability. To address these disagreements in a quantitative manner, we conducted a meta-analysis of FIE rates reported in theoretical and empirical studies. We discovered that rates of phenotypic change observed in wild fish are about four times higher than the evolutionary rates reported in modelling studies, but the correlation between the rate of change and instantaneous fishing mortality (F) was very similar in the two types of studies. Mixed-model analyses showed that in the modelling studies, traits associated with reproductive investment and growth evolved more slowly than traits related to maturation. In empirical observations, age-at-maturation changed faster than other life-history traits. We also found that, despite different assumptions and modelling approaches, rates of evolution for a given F value reported in 10 of 13 modelling studies were not significantly different. PMID:23789026
Animal models of pancreatitis: Can it be translated to human pain study?
Zhao, Jing-Bo; Liao, Dong-Hua; Nissen, Thomas Dahl
2013-01-01
Chronic pancreatitis affects many individuals around the world, and studying the underlying mechanisms that could lead to better treatment is an important task. Animal models are therefore needed for basic studies of pancreatitis. Recently, animal models of acute and chronic pancreatitis have been thoroughly reviewed, but few reviews address the important aspect of translating animal studies to human studies. It is well known that pancreatitis is associated with epigastric pain, but the mechanisms and appropriate treatment of this pain remain unclear. Using animal models to study pancreatitis-associated visceral pain is difficult; however, these types of models are a unique way to reveal the mechanisms behind pancreatitis-associated visceral pain. In this review, the animal models of acute, chronic and uncommon pancreatitis are briefly outlined, and animal models related to pancreatitis-associated visceral pain are also addressed. PMID:24259952
Human immune system mouse models of Ebola virus infection.
Spengler, Jessica R; Prescott, Joseph; Feldmann, Heinz; Spiropoulou, Christina F
2017-08-01
Human immune system (HIS) mice, immunodeficient mice engrafted with human cells (with or without donor-matched tissue), offer a unique opportunity to study pathogens that cause disease predominantly or exclusively in humans. Several HIS mouse models have recently been used to study Ebola virus (EBOV) infection and disease. The results of these studies are encouraging and support further development and use of these models in Ebola research. HIS mice provide a small animal model to study EBOV isolates, investigate early viral interactions with human immune cells, screen vaccines and therapeutics that modulate the immune system, and investigate sequelae in survivors. Here we review existing models, discuss their use in pathogenesis studies and therapeutic screening, and highlight considerations for study design and analysis. Finally, we point out caveats to current models, and recommend future efforts for modeling EBOV infection in HIS mice. Published by Elsevier B.V.
Spatio-temporal Bayesian model selection for disease mapping
Carroll, R; Lawson, AB; Faes, C; Kirby, RS; Aregay, M; Watjou, K
2016-01-01
Spatio-temporal analysis of small area health data often involves choosing a fixed set of predictors prior to the final model fit. In this paper, we propose a spatio-temporal approach of Bayesian model selection to implement model selection for certain areas of the study region as well as certain years in the study time line. Here, we examine the usefulness of this approach by way of a large-scale simulation study accompanied by a case study. Our results suggest that a special case of the model selection methods, a mixture model allowing a weight parameter to indicate if the appropriate linear predictor is spatial, spatio-temporal, or a mixture of the two, offers the best option to fitting these spatio-temporal models. In addition, the case study illustrates the effectiveness of this mixture model within the model selection setting by easily accommodating lifestyle, socio-economic, and physical environmental variables to select a predominantly spatio-temporal linear predictor. PMID:28070156
Evaluation of the ERP dispersion model using Darlington tracer-study data. Report No. 90-200-K
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wright, S.C.
1990-01-01
In this study, site-boundary atmospheric dilution factors calculated by the atmospheric dispersion model used in the ERP (Emergency Response Planning) computer code were compared to data collected during the Darlington tracer study. The purpose of this comparison was to obtain estimates of model uncertainty under a variety of conditions. This report provides background on ERP, the ERP dispersion model and the Darlington tracer study. Model evaluation techniques are discussed briefly, and the results of the comparison of model calculations with the field data are presented and reviewed.
Roelker, Sarah A; Caruthers, Elena J; Baker, Rachel K; Pelz, Nicholas C; Chaudhari, Ajit M W; Siston, Robert A
2017-11-01
With more than 29,000 OpenSim users, several musculoskeletal models with varying levels of complexity are available to study human gait. However, how different model parameters affect estimated joint and muscle function between models is not fully understood. The purpose of this study is to determine the effects of four OpenSim models (Gait2392, Lower Limb Model 2010, Full-Body OpenSim Model, and Full Body Model 2016) on gait mechanics and estimates of muscle forces and activations. Using OpenSim 3.1 and the same experimental data for all models, six young adults were scaled in each model, gait kinematics were reproduced, and static optimization estimated muscle function. Simulated measures differed between models by up to 6.5° knee range of motion, 0.012 Nm/Nm peak knee flexion moment, 0.49 peak rectus femoris activation, and 462 N peak rectus femoris force. Differences in coordinate system definitions between models altered joint kinematics, influencing joint moments. Muscle parameter and joint moment discrepancies altered muscle activations and forces. Additional model complexity yielded greater error between experimental and simulated measures; therefore, this study suggests Gait2392 is a sufficient model for studying walking in healthy young adults. Future research is needed to determine which model(s) is best for tasks with more complex motion.
Estimating true instead of apparent survival using spatial Cormack-Jolly-Seber models
Schaub, Michael; Royle, J. Andrew
2014-01-01
Spatial CJS models enable study of dispersal and survival independent of study design constraints such as imperfect detection and size of the study area provided that some of the dispersing individuals remain in the study area. We discuss possible extensions of our model: alternative dispersal models and the inclusion of covariates and of a habitat suitability map.
ERIC Educational Resources Information Center
Ciltas, Alper; Isik, Ahmet
2013-01-01
The aim of this study was to examine the modelling skills of prospective elementary mathematics teachers who were studying the mathematical modelling method. The research study group was composed of 35 prospective teachers. The exploratory case analysis method was used in the study. The data were obtained via semi-structured interviews and a…
Agent-Based vs. Equation-based Epidemiological Models:A Model Selection Case Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sukumar, Sreenivas R; Nutaro, James J
This paper is motivated by the need to design model validation strategies for epidemiological disease-spread models. We consider both agent-based and equation-based models of pandemic disease spread and study the nuances and complexities one has to consider from the perspective of model validation. For this purpose, we instantiate an equation-based model and an agent-based model of the 1918 Spanish flu and we leverage data published in the literature for our case study. We present our observations from the perspective of each implementation and discuss the application of model-selection criteria to compare the risk in choosing one modeling paradigm to another. We conclude with a discussion of our experience and document future ideas for a model validation framework.
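A minimal, uncalibrated illustration of the two paradigms being compared: a deterministic equation-based SIR model next to a stochastic agent-based counterpart with matched parameters. The population size, transmission and recovery rates are arbitrary and are not fitted to the 1918 data used in the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
N, I0, beta, gamma, days = 1000, 5, 0.3, 0.1, 160

# Equation-based (deterministic) SIR with a simple daily Euler step.
S, I, R = float(N - I0), float(I0), 0.0
ode_infected = []
for _ in range(days):
    new_inf = beta * S * I / N
    new_rec = gamma * I
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
    ode_infected.append(I)

# Agent-based (stochastic) SIR: each susceptible agent is infected with
# probability 1 - exp(-beta * I_t / N) per day; each infected agent recovers
# with probability 1 - exp(-gamma) per day.
state = np.zeros(N, dtype=int)   # 0 = susceptible, 1 = infected, 2 = recovered
state[:I0] = 1
abm_infected = []
for _ in range(days):
    inf_mask = state == 1
    sus_mask = state == 0
    p_inf = 1.0 - np.exp(-beta * inf_mask.sum() / N)
    new_infections = sus_mask & (rng.random(N) < p_inf)
    recoveries = inf_mask & (rng.random(N) < 1.0 - np.exp(-gamma))
    state[new_infections] = 1
    state[recoveries] = 2
    abm_infected.append(int((state == 1).sum()))

print(f"peak infected, equation-based: {max(ode_infected):.0f}")
print(f"peak infected, agent-based:    {max(abm_infected)}")
```

Repeated runs of the agent-based half scatter around the deterministic curve, which is exactly the kind of between-paradigm discrepancy a model-selection or validation exercise has to weigh.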
The sensitivity of ecosystem service models to choices of input data and spatial resolution
Kenneth J. Bagstad; Erika Cohen; Zachary H. Ancona; Steven. G. McNulty; Ge Sun
2018-01-01
Although ecosystem service (ES) modeling has progressed rapidly in the last 10â15 years, comparative studies on data and model selection effects have become more common only recently. Such studies have drawn mixed conclusions about whether different data and model choices yield divergent results. In this study, we compared the results of different models to address...
Hutchinson, Amy K; Melia, Michele; Yang, Michael B; VanderVeen, Deborah K; Wilson, Lorri B; Lambert, Scott R
2016-04-01
To assess the accuracy with which available retinopathy of prematurity (ROP) predictive models detect clinically significant ROP and to what extent and at what risk these models allow for the reduction of screening examinations for ROP. A literature search of the PubMed and Cochrane Library databases was conducted last on May 1, 2015, and yielded 305 citations. After screening the abstracts of all 305 citations and reviewing the full text of 30 potentially eligible articles, the panel members determined that 22 met the inclusion criteria. One article included 2 studies, for a total of 23 studies reviewed. The panel extracted information about study design, study population, the screening algorithm tested, interventions, outcomes, and study quality. The methodologist divided the studies into 2 categories-model development and model validation-and assigned a level of evidence rating to each study. One study was rated level I evidence, 3 studies were rated level II evidence, and 19 studies were rated level III evidence. In some cohorts, some models would have allowed reductions in the number of infants screened for ROP without failing to identify infants requiring treatment. However, the small sample size and limited generalizability of the ROP predictive models included in this review preclude their widespread use to make all-or-none decisions about whether to screen individual infants for ROP. As an alternative, some studies proposed approaches to apply the models to reduce the number of examinations performed in low-risk infants. Additional research is needed to optimize ROP predictive model development, validation, and application before such models can be used widely to reduce the burdensome number of ROP screening examinations. Copyright © 2016 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
An Application of Bayesian Approach in Modeling Risk of Death in an Intensive Care Unit
Wong, Rowena Syn Yin; Ismail, Noor Azina
2016-01-01
Background and Objectives There are not many studies that attempt to model intensive care unit (ICU) risk of death in developing countries, especially in South East Asia. The aim of this study was to propose and describe application of a Bayesian approach in modeling in-ICU deaths in a Malaysian ICU. Methods This was a prospective study in a mixed medical-surgery ICU in a multidisciplinary tertiary referral hospital in Malaysia. Data collection included variables that were defined in Acute Physiology and Chronic Health Evaluation IV (APACHE IV) model. Bayesian Markov Chain Monte Carlo (MCMC) simulation approach was applied in the development of four multivariate logistic regression predictive models for the ICU, where the main outcome measure was in-ICU mortality risk. The performance of the models were assessed through overall model fit, discrimination and calibration measures. Results from the Bayesian models were also compared against results obtained using frequentist maximum likelihood method. Results The study involved 1,286 consecutive ICU admissions between January 1, 2009 and June 30, 2010, of which 1,111 met the inclusion criteria. Patients who were admitted to the ICU were generally younger, predominantly male, with low co-morbidity load and mostly under mechanical ventilation. The overall in-ICU mortality rate was 18.5% and the overall mean Acute Physiology Score (APS) was 68.5. All four models exhibited good discrimination, with area under receiver operating characteristic curve (AUC) values approximately 0.8. Calibration was acceptable (Hosmer-Lemeshow p-values > 0.05) for all models, except for model M3. Model M1 was identified as the model with the best overall performance in this study. Conclusion Four prediction models were proposed, where the best model was chosen based on its overall performance in this study. This study has also demonstrated the promising potential of the Bayesian MCMC approach as an alternative in the analysis and modeling of in-ICU mortality outcomes. PMID:27007413
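As a generic illustration of the approach described (Bayesian logistic regression fitted by MCMC), the sketch below runs a random-walk Metropolis sampler on synthetic data. The data, prior, step size, and variable names are hypothetical; this is not the study's APACHE IV-based model.

```python
# Sketch of Bayesian logistic regression via random-walk Metropolis on synthetic
# data; priors, step size, and covariates are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one score
true_beta = np.array([-1.5, 0.8])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ true_beta)))

def log_post(beta):
    eta = X @ beta
    loglik = np.sum(y * eta - np.log1p(np.exp(eta)))     # Bernoulli log-likelihood
    logprior = -0.5 * np.sum(beta ** 2) / 10.0           # N(0, 10) prior
    return loglik + logprior

beta, samples = np.zeros(2), []
for it in range(20000):
    prop = beta + rng.normal(scale=0.1, size=2)          # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(beta):
        beta = prop
    if it >= 5000:                                        # discard burn-in
        samples.append(beta.copy())

print(np.mean(samples, axis=0))   # posterior means, near true_beta
```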
Hydrological and hydraulic models for determination of flood-prone and flood inundation areas
NASA Astrophysics Data System (ADS)
Aksoy, Hafzullah; Sadan Ozgur Kirca, Veysel; Burgan, Halil Ibrahim; Kellecioglu, Dorukhan
2016-05-01
Geographic Information Systems (GIS) are widely used in most studies on water resources and can considerably ease the workload when the topography and geomorphology of the study area are taken into account. Detailed data should be used in such studies; because of the complexity of the models and the requirement for highly detailed data, model outputs can be obtained quickly only with good optimization. The first aim of this study is to determine flood-prone areas in a watershed by using a hydrological model that considers two wetness indexes: the topographical wetness index and the SAGA (System for Automated Geoscientific Analyses) wetness index. The wetness indexes were obtained in the Quantum GIS (QGIS) software from the Digital Elevation Model of the study area, and flood-prone areas were determined from the resulting wetness index maps of the watershed. As the second stage of this study, a hydraulic model, HEC-RAS, was executed to determine flood inundation areas under flood events of different return periods. The river network cross-sections required for this study were derived from highly detailed digital elevation models in QGIS, and river hydraulic parameters were used in the hydraulic model. The modelling technology used in this study consists of freely available open-source software. Based on case studies performed on watersheds in Turkey, it is concluded that the results of such studies can be used to take precautionary measures against loss of life and property due to floods, particularly in urban areas.
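The topographical wetness index referred to above is commonly defined as TWI = ln(a / tan β), with a the specific upslope contributing area and β the local slope. The sketch below assumes the two input grids have already been derived from the DEM (for example, exported from QGIS); the array names, cell size, and epsilon guard are illustrative choices, not values from the study.

```python
# Sketch of the topographical wetness index TWI = ln(a / tan(beta)).
# Inputs are assumed to be precomputed grids of upslope area (m^2) and slope (rad).
import numpy as np

def topographic_wetness_index(upslope_area_m2, slope_rad, cell_size_m=30.0,
                              eps=1e-6):
    """Return a TWI grid from upslope area (m^2) and slope (radians)."""
    specific_area = upslope_area_m2 / cell_size_m      # a = area per unit contour width
    return np.log((specific_area + eps) / (np.tan(slope_rad) + eps))

# toy example with hypothetical values
area = np.array([[900.0, 1800.0], [2700.0, 9000.0]])
slope = np.radians(np.array([[5.0, 2.0], [1.0, 0.5]]))
print(topographic_wetness_index(area, slope))
```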
The Family-Study Interface and Academic Outcomes: Testing a Structural Model
ERIC Educational Resources Information Center
Meeuwisse, Marieke; Born, Marise Ph.; Severiens, Sabine E.
2011-01-01
Expanding on family-work and work-study models, this article investigated a model for family-study conflict and family-study facilitation. The focus of the study was the relationship of family-study conflict and family-study facilitation with students' effortful behaviors and academic performance among a sample of university students (N = 1,656).…
Kolls, Brad J; Lai, Amy H; Srinivas, Anang A; Reid, Robert R
2014-06-01
The purpose of this study was to determine the relative cost reductions within different staffing models for continuous video-electroencephalography (cvEEG) service by introducing a template system for 10/20 lead application. We compared six staffing models using decision tree modeling based on historical service line utilization data from the cvEEG service at our center. Templates were integrated into technologist-based service lines in six different ways. The six models studied were templates for all studies, templates for intensive care unit (ICU) studies, templates for on-call studies, templates for studies of ≤ 24-hour duration, technologists for on-call studies, and technologists for all studies. Cost was linearly related to the study volume for all models with the "templates for all" model incurring the lowest cost. The "technologists for all" model carried the greatest cost. Direct cost comparison shows that any introduction of templates results in cost savings, with the templates being used for patients located in the ICU being the second most cost efficient and the most practical of the combined models to implement. Cost difference between the highest and lowest cost models under the base case produced an annual estimated savings of $267,574. Implementation of the ICU template model at our institution under base case conditions would result in a $205,230 savings over our current "technologist for all" model. Any implementation of templates into a technologist-based cvEEG service line results in cost savings, with the most significant annual savings coming from using the templates for all studies, but the most practical implementation approach with the second highest cost reduction being the template used in the ICU. The lowered costs determined in this work suggest that a template-based cvEEG service could be supported at smaller centers with significantly reduced costs and could allow for broader use of cvEEG patient monitoring.
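Since the abstract notes that cost is linear in study volume for every staffing model, the comparison reduces to simple cost-versus-volume arithmetic. The toy sketch below illustrates that comparison with entirely hypothetical fixed and per-study costs; the study's actual cost inputs are not given here.

```python
# Toy illustration of comparing staffing models whose annual cost is linear in
# study volume: cost = fixed + per_study * volume. All figures are hypothetical
# placeholders, not the cost inputs used in the cvEEG study.
def annual_cost(fixed, per_study, volume):
    return fixed + per_study * volume

volume = 1200                                    # hypothetical studies per year
models = {
    "templates for all":     annual_cost(20000,  60, volume),
    "templates in ICU only": annual_cost(30000, 110, volume),
    "technologists for all": annual_cost(40000, 250, volume),
}
best = min(models, key=models.get)
savings = models["technologists for all"] - models[best]
print(f"{best} saves {savings} per year vs. technologists for all")
```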
ERM model analysis for adaptation to hydrological model errors
NASA Astrophysics Data System (ADS)
Baymani-Nezhad, M.; Han, D.
2018-05-01
Hydrological conditions change continuously; these changes introduce errors into flood forecasting models and can lead to unrealistic results. To overcome these difficulties, a concept called model updating has been proposed in hydrological studies. Real-time model updating is one of the challenging problems in hydrological science and has not been entirely solved, owing to the lack of knowledge about the future state of the catchment under study. In flood forecasting, errors propagated from the rainfall-runoff model are regarded as the main source of uncertainty in the forecast. Hence, to control these errors, researchers have proposed several methods for updating rainfall-runoff models, such as parameter updating, model state updating, and correction of input data. The current study investigates the ability of rainfall-runoff model parameters to cope with three types of errors common in hydrological modelling: timing, shape, and volume. The new lumped ERM model was selected for this study in order to evaluate its parameters for use in model updating to cope with the stated errors. Investigation of ten events shows that the ERM model parameters can be updated to cope with these errors without the need to recalibrate the model.
Geomagnetic field models for satellite angular motion studies
NASA Astrophysics Data System (ADS)
Ovchinnikov, M. Yu.; Penkov, V. I.; Roldugin, D. S.; Pichuzhkina, A. V.
2018-03-01
Four geomagnetic field models are discussed: IGRF, inclined, direct, and simplified dipoles. Geomagnetic induction vector expressions are provided in different reference frames, and the behavior of the induction vector is compared across models. The applicability of the models to the analysis of satellite motion is studied from theoretical and engineering perspectives. Relevant satellite dynamics analysis cases using analytical and numerical techniques are provided; these cases demonstrate the benefit of a particular model for a specific dynamics study. Recommendations for model usage are summarized at the end.
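For reference, the dipole-type models mentioned approximate the IGRF field by a magnetic dipole. The expressions below are the textbook dipole approximation in spherical coordinates (magnetic colatitude θ), given up to sign convention; they are not reproduced from the paper.

```latex
% Textbook dipole approximation of the geomagnetic induction vector
% (magnetic colatitude \theta, dipole constant \mu_m); signs follow one common
% convention and are not copied from the paper.
B_r = -\frac{2\mu_m}{r^3}\cos\theta, \qquad
B_\theta = -\frac{\mu_m}{r^3}\sin\theta, \qquad
B_\varphi = 0, \qquad
|\mathbf{B}| = \frac{\mu_m}{r^3}\sqrt{1 + 3\cos^2\theta}.
```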
Andrews, Arthur R.; Bridges, Ana J.; Gomez, Debbie
2014-01-01
Purpose The aims of the study were to evaluate the orthogonality of acculturation for Latinos. Design Regression analyses were used to examine acculturation in two Latino samples (N = 77; N = 40). In a third study (N = 673), confirmatory factor analyses compared unidimensional and bidimensional models. Method Acculturation was assessed with the ARSMA-II (Studies 1 and 2), and language proficiency items from the Children of Immigrants Longitudinal Study (Study 3). Results In Studies 1 and 2, the bidimensional model accounted for slightly more variance (R² = .11 in Study 1; R² = .21 in Study 2) than the unidimensional model (R² = .10 in Study 1; R² = .19 in Study 2). In Study 3, the bidimensional model evidenced better fit (Akaike information criterion = 167.36) than the unidimensional model (Akaike information criterion = 1204.92). Discussion/Conclusions Acculturation is multidimensional. Implications for Practice Care providers should examine acculturation as a bidimensional construct. PMID:23361579
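The Study 3 comparison rests on the Akaike information criterion; its standard definition is given below for reference, with lower values indicating better penalized fit.

```latex
% Standard definition behind the Study 3 comparison (lower is better):
\mathrm{AIC} = 2k - 2\ln \hat{L},
```

where k is the number of estimated parameters and L-hat the maximized likelihood; the bidimensional model's AIC of 167.36 versus 1204.92 for the unidimensional model therefore indicates a substantially better penalized fit.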
Five regional-scale models with a horizontal domain covering the European continent and its surrounding seas, two hemispheric models, and one global-scale model participated in the atmospheric Hg modelling intercomparison study. The models were compared with each other and with availa...
ERIC Educational Resources Information Center
Mendonça, Paula Cristina Cardoso; Justi, Rosária
2013-01-01
Some studies related to the nature of scientific knowledge demonstrate that modelling is an inherently argumentative process. This study aims at discussing the relationship between modelling and argumentation by analysing data collected during the modelling-based teaching of ionic bonding and intermolecular interactions. The teaching activities…
Berzins, Tiffany L.; Garcia, Antonio F.; Acosta, Melina; Osman, Augustine
2017-01-01
Two instrument validation studies broadened the research literature exploring the factor structure, internal consistency reliability, and concurrent validity of scores on the Social Anxiety and Depression Life Interference—24 Inventory (SADLI-24; Osman, Bagge, Freedenthal, Guiterrez, & Emmerich, 2011). Study 1 (N = 1065) was undertaken to concurrently appraise three competing factor models for the instrument: a unidimensional model, a two-factor oblique model and a bifactor model. The bifactor model provided the best fit to the study sample data. Study 2 (N = 220) extended the results from Study 1 with an investigation of the convergent and discriminant validity for the bifactor model of the SADLI-24 with multiple regression analyses and scale-level exploratory structural equation modeling. This project yields data that augments the initial instrument development investigations for the target measure. PMID:28781401
Modeling synchronous voltage source converters in transmission system planning studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kosterev, D.N.
1997-04-01
A Voltage Source Converter (VSC) can be beneficial to power utilities in many ways. To evaluate the VSC performance in potential applications, the device has to be represented appropriately in planning studies. This paper addresses VSC modeling for EMTP, powerflow, and transient stability studies. First, the VSC operating principles are overviewed, and the device model for EMTP studies is presented. The ratings of VSC components are discussed, and the device operating characteristics are derived based on these ratings. A powerflow model is presented and various control modes are proposed. A detailed stability model is developed, and its step-by-step initialization procedure is described. A simplified stability model is also derived under stated assumptions. Finally, validation studies are performed to demonstrate the performance of the developed stability models and to compare it with EMTP simulations.
NASA Astrophysics Data System (ADS)
Krell, Moritz; Walzer, Christine; Hergert, Susann; Krüger, Dirk
2017-09-01
As part of their professional competencies, science teachers need an elaborate meta-modelling knowledge as well as modelling skills in order to guide and monitor modelling practices of their students. However, qualitative studies about (pre-service) science teachers' modelling practices are rare. This study provides a category system which is suitable to analyse and to describe pre-service science teachers' modelling activities and to infer modelling strategies. The category system was developed based on theoretical considerations and was inductively refined within the methodological frame of qualitative content analysis. For the inductive refinement, modelling practices of pre-service teachers (n = 4) have been video-taped and analysed. In this study, one case was selected to demonstrate the application of the category system to infer modelling strategies. The contribution of this study for science education research and science teacher education is discussed.
RESULTS FROM THE NORTH AMERICAN MERCURY MODEL INTER-COMPARISON STUDY (NAMMIS)
A North American Mercury Model Intercomparison Study (NAMMIS) has been conducted to build upon the findings from previous mercury model intercomparison in Europe. In the absence of mercury measurement networks sufficient for model evaluation, model developers continue to rely on...
Void Growth and Coalescence Simulations
2013-08-01
distortion and damage, minimum time step, and appropriate material model parameters. Further, a temporal and spatial convergence study was used to estimate errors; thus, this study helps to provide guidelines for modeling of materials with voids. Finally, we use a Gurson model with Johnson-Cook...
Curtis, Gary P.; Lu, Dan; Ye, Ming
2015-01-01
While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. This study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. These reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Limitations of applying MLBMA to the synthetic study and future real-world modeling are discussed.
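The generic BMA prediction equations behind the approach described are given below for orientation. The weight approximation via an information criterion (written here with KIC) follows the usual maximum-likelihood variant; the exact expressions used in the paper are not reproduced.

```latex
% Generic (ML)BMA prediction and model-weight equations (for orientation only):
E[\Delta \mid D] = \sum_{k} p(M_k \mid D)\, E[\Delta \mid D, M_k], \qquad
p(M_k \mid D) \approx
\frac{\exp\!\left(-\tfrac{1}{2}\Delta \mathrm{KIC}_k\right) p(M_k)}
     {\sum_{j}\exp\!\left(-\tfrac{1}{2}\Delta \mathrm{KIC}_j\right) p(M_j)},
\qquad \Delta \mathrm{KIC}_k = \mathrm{KIC}_k - \min_j \mathrm{KIC}_j .
```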
Numerical Modeling of River Ice Processes on the Lower Nelson River
NASA Astrophysics Data System (ADS)
Malenchak, Jarrod Joseph
Water resource infrastructure in cold regions of the world can be significantly impacted by the existence of river ice. Major engineering concerns related to river ice include ice jam flooding, the design and operation of hydropower facilities and other hydraulic structures, water supplies, as well as ecological, environmental, and morphological effects. The use of numerical simulation models has been identified as one of the most efficient means by which river ice processes can be studied and the effects of river ice evaluated. The continued advancement of these simulation models will help to develop new theories and evaluate potential mitigation alternatives for these ice issues. In this thesis, a literature review of existing river ice numerical models, of anchor ice formation and modeling studies, and of aufeis formation and modeling studies is conducted. A high-level summary of the two-dimensional CRISSP numerical model is presented as well as the developed freeze-up model, with a focus specifically on the anchor ice and aufeis growth processes. This model includes development in the detailed heat transfer calculations, an improved surface ice mass exchange model which includes the rapids entrainment process, and an improved dry bed treatment model along with the expanded anchor ice and aufeis growth model. The developed sub-models are tested in an ideal channel setting as a form of model confirmation. A case study of significant anchor ice and aufeis growth on the Nelson River in northern Manitoba, Canada, will be the primary field test case for the anchor ice and aufeis model. A second case study on the same river will be used to evaluate the surface ice components of the model in a field setting. The results from these case studies will be used to highlight the capabilities and deficiencies in the numerical model and to identify areas of further research and model development.
Zhao, Yue; Hambleton, Ronald K.
2017-01-01
In item response theory (IRT) models, assessing model-data fit is an essential step in IRT calibration. While no general agreement has ever been reached on the best methods or approaches for detecting misfit, a more important observation from the research findings is that studies rarely evaluate IRT misfit by focusing on its practical consequences. The present study investigated the practical consequences of IRT model misfit for equating performance and the classification of examinees into performance categories in a simulation study that mimics a typical large-scale statewide assessment program with mixed-format test data. The simulation study was implemented by varying three factors: choice of IRT model, amount of growth/change in examinees' abilities between two adjacent administration years, and choice of IRT scaling method. Findings indicated that the extent of the consequences of model misfit varied with the choice of model and IRT scaling method. In comparison with the mean/sigma (MS) and Stocking and Lord characteristic curve (SL) methods, separate calibration with linking and the fixed common item parameter (FCIP) procedure was more sensitive to model misfit and more robust against various amounts of ability shift between two adjacent administrations regardless of model fit. SL was generally the least sensitive to model misfit in recovering the equating conversion, and MS was the least robust against ability shifts in recovering the equating conversion when a substantial degree of misfit was present. The key messages from the study are that practical ways are available to study model fit, and that model fit or misfit can have consequences that should be considered when choosing an IRT model. Not only does the study address the consequences of IRT model misfit, but it is also our hope to help researchers and practitioners find practical ways to study model fit and to investigate the validity of particular IRT models for a specified purpose, to ensure that successful use of the IRT models is realized, and to improve the application of IRT models to educational and psychological test data. PMID:28421011
NASA Astrophysics Data System (ADS)
Tsai, F. T.; Elshall, A. S.; Hanor, J. S.
2012-12-01
Subsurface modeling is challenging because of many possible competing propositions for each uncertain model component. How can we judge that we are selecting the correct proposition for an uncertain model component out of numerous competing propositions? How can we bridge the gap between synthetic mental principles such as mathematical expressions on one hand, and empirical observation such as observation data on the other hand, when uncertainty exists on both sides? In this study, we introduce hierarchical Bayesian model averaging (HBMA) as a multi-model (multi-proposition) framework to represent our current state of knowledge and decision for hydrogeological structure modeling. The HBMA framework allows for segregating and prioritizing different sources of uncertainty, and for comparative evaluation of competing propositions for each source of uncertainty. We applied HBMA to a study of hydrostratigraphy and uncertainty propagation of the Southern Hills aquifer system in the Baton Rouge area, Louisiana. We used geophysical data for hydrogeological structure construction through the indicator hydrostratigraphy method and used lithologic data from drillers' logs for model structure calibration. However, due to uncertainty in model data, structure and parameters, multiple possible hydrostratigraphic models were produced and calibrated. The study considered four sources of uncertainty. To evaluate mathematical structure uncertainty, the study considered three different variogram models and two geological stationarity assumptions. With respect to geological structure uncertainty, the study considered two geological structures with respect to the Denham Springs-Scotlandville fault. With respect to data uncertainty, the study considered two calibration data sets. These four sources of uncertainty with their corresponding competing modeling propositions resulted in 24 calibrated models. The results showed that by segregating different sources of uncertainty, HBMA analysis provided insights on uncertainty priorities and propagation. In addition, it assisted in evaluating the relative importance of competing modeling propositions for each uncertain model component. By being able to dissect the uncertain model components and provide a weighted representation of the competing propositions for each uncertain model component based on the background knowledge, HBMA functions as an epistemic framework for advancing knowledge about the system under study.
Cost and schedule estimation study report
NASA Technical Reports Server (NTRS)
Condon, Steve; Regardie, Myrna; Stark, Mike; Waligora, Sharon
1993-01-01
This report describes the analysis performed and the findings of a study of the software development cost and schedule estimation models used by the Flight Dynamics Division (FDD), Goddard Space Flight Center. The study analyzes typical FDD projects, focusing primarily on those developed since 1982. The study reconfirms the standard SEL effort estimation model that is based on size adjusted for reuse; however, guidelines for the productivity and growth parameters in the baseline effort model have been updated. The study also produced a schedule prediction model based on empirical data that varies depending on application type. Models for the distribution of effort and schedule by life-cycle phase are also presented. Finally, this report explains how to use these models to plan SEL projects.
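The sketch below illustrates the general "size adjusted for reuse" effort form described above: reused code is counted at a reduced weight and effort scales as a power of adjusted size. The coefficients and the 20% reuse weight are illustrative placeholders, not the calibrated SEL/FDD values from the report.

```python
# Sketch of a size-adjusted-for-reuse effort model; coefficients are hypothetical.
def adjusted_size(new_sloc, reused_sloc, reuse_weight=0.2):
    """Count reused source lines at a reduced weight (placeholder weight)."""
    return new_sloc + reuse_weight * reused_sloc

def effort_staff_months(adj_size_ksloc, a=1.5, b=1.0):
    """Power-law effort model Effort = a * Size^b with placeholder a, b."""
    return a * adj_size_ksloc ** b

size_ksloc = adjusted_size(new_sloc=40_000, reused_sloc=60_000) / 1000.0
print(effort_staff_months(size_ksloc))   # illustrative staff-month estimate
```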
ERIC Educational Resources Information Center
Rivas, Eugenia Marmolejo
2015-01-01
By means of three case studies, we will present two mathematical modelling activities that are suitable for students enrolled in senior high school and the first year of mathematics at university level. The activities have been designed to enrich the learning process and promote the formation of vital modelling skills. In case studies one and two,…
Skew-t partially linear mixed-effects models for AIDS clinical studies.
Lu, Tao
2016-01-01
We propose partially linear mixed-effects models with asymmetry and missingness to investigate the relationship between two biomarkers in clinical studies. The proposed models take into account irregular time effects commonly observed in clinical studies under a semiparametric model framework. In addition, the commonly assumed symmetric distributions for model errors are replaced by an asymmetric distribution to account for skewness. Further, an informative missing-data mechanism is accounted for. A Bayesian approach is developed to perform parameter estimation simultaneously. The proposed model and method are applied to an AIDS dataset, and comparisons with alternative models are performed.
DAMS: A Model to Assess Domino Effects by Using Agent-Based Modeling and Simulation.
Zhang, Laobing; Landucci, Gabriele; Reniers, Genserik; Khakzad, Nima; Zhou, Jianfeng
2017-12-19
Historical data analysis shows that escalation accidents, so-called domino effects, have an important role in disastrous accidents in the chemical and process industries. In this study, an agent-based modeling and simulation approach is proposed to study the propagation of domino effects in the chemical and process industries. Different from the analytical or Monte Carlo simulation approaches, which normally study the domino effect at probabilistic network levels, the agent-based modeling technique explains the domino effects from a bottom-up perspective. In this approach, the installations involved in a domino effect are modeled as agents whereas the interactions among the installations (e.g., by means of heat radiation) are modeled via the basic rules of the agents. Application of the developed model to several case studies demonstrates the ability of the model not only in modeling higher-level domino effects and synergistic effects but also in accounting for temporal dependencies. The model can readily be applied to large-scale complicated cases. © 2017 Society for Risk Analysis.
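The toy sketch below illustrates the bottom-up idea described above: installations are agents, a burning agent radiates heat to its neighbours, and a neighbour escalates once the flux it receives exceeds a threshold. The flux law, thresholds, and coordinates are hypothetical simplifications, not the physical models used in the DAMS paper.

```python
# Toy agent-based escalation sketch; fluxes, thresholds and layout are assumptions.
class Tank:
    def __init__(self, name, x, y, threshold_kw_m2=15.0):
        self.name, self.x, self.y = name, x, y
        self.threshold = threshold_kw_m2
        self.on_fire = False

def flux(source, target, q0=300.0):
    """Very crude 1/d^2 decay of radiative flux (kW/m^2) with distance (m)."""
    d2 = (source.x - target.x) ** 2 + (source.y - target.y) ** 2
    return q0 / max(d2, 1.0)

tanks = [Tank("T1", 0, 0), Tank("T2", 3, 0), Tank("T3", 10, 0)]
tanks[0].on_fire = True                        # primary event

for step in range(5):                          # simple synchronous update
    burning = [t for t in tanks if t.on_fire]
    for t in tanks:
        if not t.on_fire and sum(flux(b, t) for b in burning) > t.threshold:
            t.on_fire = True
            print(f"step {step}: {t.name} escalates")
```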
ERIC Educational Resources Information Center
Lu, Yi
2016-01-01
To model students' math growth trajectory, three conventional growth curve models and three growth mixture models are applied to the Early Childhood Longitudinal Study Kindergarten-Fifth grade (ECLS K-5) dataset in this study. The results of conventional growth curve model show gender differences on math IRT scores. When holding socio-economic…
Modeling Leukemogenesis in the Zebrafish Using Genetic and Xenograft Models.
Rajan, Vinothkumar; Dellaire, Graham; Berman, Jason N
2016-01-01
The zebrafish is a widely accepted model to study leukemia. The major advantage of studying leukemogenesis in zebrafish is attributed to its short life cycle and superior imaging capacity. This chapter highlights using transgenic- and xenograft-based models in zebrafish to study a specific leukemogenic mutation and analyze therapeutic responses in vivo.
Further Studies into Synthetic Image Generation using CameoSim
2011-08-01
preparation of the validation effort a study of BRDF models has been completed, which includes the physical plausibility of models , how measured data...the visible to shortwave infrared. In preparation of the validation effort a study of BRDF models has been completed, which includes the physical...Example..................................................................................................................... 17 4. MODELLING BRDFS
Content Analysis of Research Trends in Instructional Design Models: 1999-2014
ERIC Educational Resources Information Center
Göksu, Idris; Özcan, Kursat Volkan; Çakir, Recep; Göktas, Yuksel
2017-01-01
This study examines studies on instructional design models by applying content analysis. It covers 113 papers published in 44 international Social Science Citation Index (SSCI) and Science Citation Index (SCI) journals. Studies on instructional design models are explored in terms of journal of publication, preferred model, country where the study…
A comparison of economic evaluation models as applied to geothermal energy technology
NASA Technical Reports Server (NTRS)
Ziman, G. M.; Rosenberg, L. S.
1983-01-01
Several cost estimation and financial cash flow models have been applied to a series of geothermal case studies. In order to draw conclusions about the relative performance and applicability of these models to geothermal projects, the consistency of results was assessed. The model outputs of principal interest in this study were net present value, internal rate of return, or levelized breakeven price. The models used were VENVAL, a venture analysis model; the Geothermal Probabilistic Cost Model (GPC Model); the Alternative Power Systems Economic Analysis Model (APSEAM); the Geothermal Loan Guarantee Cash Flow Model (GCFM); and the GEOCOST and GEOCITY geothermal models. The case studies to which the models were applied include a geothermal reservoir at Heber, CA; a geothermal electric power plant to be located at the Heber site; an alcohol fuels production facility to be built at Raft River, ID; and a direct-use, district heating system in Susanville, CA.
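The outputs compared across the models (net present value, internal rate of return, levelized breakeven price) are standard discounted-cash-flow quantities. Their generic definitions are shown below for reference; the specific formulations coded into each model may differ.

```latex
% Generic discounted-cash-flow definitions (not the models' exact formulations):
\mathrm{NPV}(r) = \sum_{t=0}^{T} \frac{CF_t}{(1+r)^t}, \qquad
\mathrm{IRR}: \ \mathrm{NPV}(\mathrm{IRR}) = 0, \qquad
p_{\text{levelized}} =
\frac{\sum_{t} \mathrm{Cost}_t / (1+r)^t}{\sum_{t} Q_t / (1+r)^t},
```

where CF_t are net cash flows, r the discount rate, and Q_t the energy delivered in year t.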
NASA Astrophysics Data System (ADS)
Cardoso Mendonça, Paula Cristina; Justi, Rosária
2013-09-01
Some studies related to the nature of scientific knowledge demonstrate that modelling is an inherently argumentative process. This study aims at discussing the relationship between modelling and argumentation by analysing data collected during the modelling-based teaching of ionic bonding and intermolecular interactions. The teaching activities were planned from the transposition of the main modelling stages that constitute the 'Model of Modelling Diagram' so that students could experience each of such stages. All the lessons were video recorded and their transcriptions supported the elaboration of case studies for each group of students. From the analysis of the case studies, we identified argumentative situations when students performed all of the modelling stages. Our data show that the argumentative situations were related to sense making, articulating and persuasion purposes, and were closely related to the generation of explanations in the modelling processes. They also show that representations are important resources for argumentation. Our results are consistent with some of those already reported in the literature regarding the relationship between modelling and argumentation, but are also divergent when they show that argumentation is not only related to the model evaluation phase.
2012-01-01
Background A relationship between current socio-economic position and subjective quality of life has been demonstrated, using wellbeing, life and needs satisfaction approaches. Less is known regarding the influence of different life course socio-economic trajectories on later quality of life. Several conceptual models have been proposed to help explain potential life course effects on health, including accumulation, latent, pathway and social mobility models. This systematic review aimed to assess whether evidence supported an overall relationship between life course socio-economic position and quality of life during adulthood and if so, whether there was support for one or more life course models. Methods A review protocol was developed detailing explicit inclusion and exclusion criteria, search terms, data extraction items and quality appraisal procedures. Literature searches were performed in 12 electronic databases during January 2012 and the references and citations of included articles were checked for additional relevant articles. Narrative synthesis was used to analyze extracted data and studies were categorized based on the life course model analyzed. Results Twelve studies met the eligibility criteria and used data from 10 datasets and five countries. Study quality varied and heterogeneity between studies was high. Seven studies assessed social mobility models, five assessed the latent model, two assessed the pathway model and three tested the accumulation model. Evidence indicated an overall relationship, but mixed results were found for each life course model. Some evidence was found to support the latent model among women, but not men. Social mobility models were supported in some studies, but overall evidence suggested little to no effect. Few studies addressed accumulation and pathway effects and study heterogeneity limited synthesis. Conclusions To improve potential for synthesis in this area, future research should aim to increase study comparability. Recommendations include testing all life course models within individual studies and the use of multiple measures of socio-economic position and quality of life. Comparable cross-national data would be beneficial to enable investigation of between-country differences. PMID:22873945
An Investigation of Item Fit Statistics for Mixed IRT Models
ERIC Educational Resources Information Center
Chon, Kyong Hee
2009-01-01
The purpose of this study was to investigate procedures for assessing model fit of IRT models for mixed format data. In this study, various IRT model combinations were fitted to data containing both dichotomous and polytomous item responses, and the suitability of the chosen model mixtures was evaluated based on a number of model fit procedures.…
Scientific white paper on concentration-QTc modeling.
Garnett, Christine; Bonate, Peter L; Dang, Qianyu; Ferber, Georg; Huang, Dalong; Liu, Jiang; Mehrotra, Devan; Riley, Steve; Sager, Philip; Tornoe, Christoffer; Wang, Yaning
2018-06-01
The International Council for Harmonisation revised the E14 guideline through the questions and answers process to allow concentration-QTc (C-QTc) modeling to be used as the primary analysis for assessing the QTc interval prolongation risk of new drugs. A well-designed and conducted QTc assessment based on C-QTc modeling in early phase 1 studies can be an alternative approach to a thorough QT study for some drugs to reliably exclude clinically relevant QTc effects. This white paper provides recommendations on how to plan and conduct a definitive QTc assessment of a drug using C-QTc modeling in early phase clinical pharmacology and thorough QT studies. Topics included are: important study design features in a phase 1 study; modeling objectives and approach; exploratory plots; the pre-specified linear mixed effects model; general principles for model development and evaluation; and expectations for modeling analysis plans and reports. The recommendations are based on current best modeling practices, scientific literature and personal experiences of the authors. These recommendations are expected to evolve as their implementation during drug development provides additional data and with advances in analytical methodology.
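One commonly used written form of a linear mixed-effects C-QTc model is sketched below for orientation; it is an assumption-level illustration, and the exact pre-specified covariates recommended by the white paper are described in its text rather than reproduced here.

```latex
% Illustrative linear mixed-effects C-QTc model (symbols are assumptions):
% fixed intercept and concentration slope with subject-level random effects,
% plus a fixed effect of nominal time point.
\Delta QTc_{ij} = (\theta_0 + \eta_{0i}) + (\theta_1 + \eta_{1i})\,C_{ij}
                  + \theta_{t(j)} + \varepsilon_{ij},
```

where C_ij is the drug concentration for subject i at time j, theta_0 and theta_1 the population intercept and concentration slope, eta_0i and eta_1i subject-level random effects, and theta_t(j) a fixed time effect.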
A Case Study of Teachers' Development of Well-Structured Mathematical Modelling Activities
ERIC Educational Resources Information Center
Stohlmann, Micah; Maiorca, Cathrine; Allen, Charlie
2017-01-01
This case study investigated how three teachers developed mathematical modelling activities integrated with content standards through participation in a course on mathematical modelling. The class activities involved experiencing a mathematical modelling activity, reading and rating example mathematical modelling activities, reading articles about…
ERIC Educational Resources Information Center
Lee, Chia-Jung; Kim, ChanMin
2014-01-01
This study presents a refined technological pedagogical content knowledge (also known as TPACK) based instructional design model, which was revised using findings from the implementation study of a prior model. The refined model was applied in a technology integration course with 38 preservice teachers. A case study approach was used in this…
Air Pollution Exposure Modeling for Health Studies | Science ...
Dr. Michael Breen is leading the development of air pollution exposure models, integrated with novel personal sensor technologies, to improve exposure and risk assessments for individuals in health studies. He is co-investigator for multiple health studies assessing the exposure and effects of air pollutants. These health studies include participants with asthma, diabetes, and coronary artery disease living in various U.S. cities. He has developed, evaluated, and applied novel exposure modeling and time-activity tools, which includes the Exposure Model for Individuals (EMI), GPS-based Microenvironment Tracker (MicroTrac) and Exposure Tracker models. At this seminar, Dr. Breen will present the development and application of these models to predict individual-level personal exposures to particulate matter (PM) for two health studies in central North Carolina. These health studies examine the association between PM and adverse health outcomes for susceptible individuals. During Dr. Breen’s visit, he will also have the opportunity to establish additional collaborations with researchers at Harvard University that may benefit from the use of exposure models for cohort health studies. These research projects that link air pollution exposure with adverse health outcomes benefit EPA by developing model-predicted exposure-dose metrics for individuals in health studies to improve the understanding of exposure-response behavior of air pollutants, and to reduce participant
Crayton, Elise; Wolfe, Charles; Douiri, Abdel
2018-01-01
Objective We aim to identify and critically appraise clinical prediction models of mortality and function following ischaemic stroke. Methods Electronic databases, reference lists, citations were searched from inception to September 2015. Studies were selected for inclusion, according to pre-specified criteria and critically appraised by independent, blinded reviewers. The discrimination of the prediction models was measured by the area under the curve receiver operating characteristic curve or c-statistic in random effects meta-analysis. Heterogeneity was measured using I2. Appropriate appraisal tools and reporting guidelines were used in this review. Results 31395 references were screened, of which 109 articles were included in the review. These articles described 66 different predictive risk models. Appraisal identified poor methodological quality and a high risk of bias for most models. However, all models precede the development of reporting guidelines for prediction modelling studies. Generalisability of models could be improved, less than half of the included models have been externally validated(n = 27/66). 152 predictors of mortality and 192 predictors and functional outcome were identified. No studies assessing ability to improve patient outcome (model impact studies) were identified. Conclusions Further external validation and model impact studies to confirm the utility of existing models in supporting decision-making is required. Existing models have much potential. Those wishing to predict stroke outcome are advised to build on previous work, to update and adapt validated models to their specific contexts opposed to designing new ones. PMID:29377923
Porcine models of digestive disease: the future of large animal translational research
Gonzalez, Liara M.; Moeser, Adam J.; Blikslager, Anthony T.
2015-01-01
There is increasing interest in non-rodent translational models for the study of human disease. The pig, in particular, serves as a useful animal model for the study of pathophysiological conditions relevant to the human intestine. This review assesses currently used porcine models of gastrointestinal physiology and disease and provides a rationale for the use of these models for future translational studies. The pig has proven its utility for the study of fundamental disease conditions such as ischemia/ reperfusion injury, stress-induced intestinal dysfunction, and short bowel syndrome. Pigs have also shown great promise for the study of intestinal barrier function, surgical tissue manipulation and intervention, as well as biomaterial implantation and tissue transplantation. Advantages of pig models highlighted by these studies include the physiological similarity to human intestine as well as to mechanisms of human disease. Emerging future directions for porcine models of human disease include the fields of transgenics and stem cell biology, with exciting implications for regenerative medicine. PMID:25655839
Assessment of CMIP5 historical simulations of rainfall over Southeast Asia
NASA Astrophysics Data System (ADS)
Raghavan, Srivatsan V.; Liu, Jiandong; Nguyen, Ngoc Son; Vu, Minh Tue; Liong, Shie-Yui
2018-05-01
We present preliminary analyses of the historical (1986-2005) climate simulations of a ten-member subset of the Coupled Model Inter-comparison Project Phase 5 (CMIP5) global climate models over Southeast Asia. The objective of this study was to evaluate the general circulation models' performance in simulating the mean state of climate over this less-studied, climate-vulnerable region, with a focus on precipitation. Results indicate that most of the models are unable to reproduce the observed state of climate over Southeast Asia. Though the multi-model ensemble mean is a better representation of the observations, the uncertainties in the individual models are very high. No particular model performed well in simulating the historical climate of Southeast Asia. There seems to be no significant influence of the spatial resolution of the models on the quality of simulation, despite the view that higher-resolution models fare better. The study results emphasize the need for careful selection of models for impact studies and for improving the ability of the next generation of models to simulate regional climates.
Vector models and generalized SYK models
Peng, Cheng
2017-05-23
Here, we consider the relation between SYK-like models and vector models by studying a toy model where a tensor field is coupled with a vector field. By integrating out the tensor field, the toy model reduces to the Gross-Neveu model in 1 dimension. On the other hand, a certain perturbation can be turned on and the toy model flows to an SYK-like model at low energy. Furthermore, a chaotic-nonchaotic phase transition occurs as the sign of the perturbation is altered. We further study similar models that possess chaos and enhanced reparameterization symmetries.
Assessment of the quality of reporting observational studies in the pediatric dental literature.
Butani, Yogita; Hartz, Arthur; Levy, Steven; Watkins, Catherine; Kanellis, Michael; Nowak, Arthur
2006-01-01
The purpose of this assessment was to evaluate reporting of observational studies in the pediatric dental literature. This assessment included the following steps: (1) developing a model for reporting information in clinical dentistry studies; (2) identifying treatment comparisons in pediatric dentistry that were evaluated by at least 5 observational studies; (3) abstracting from these studies any data indicated by applying the reporting model; and (4) comparing available data elements to the desired data elements in the reporting model. The reporting model included data elements related to: (1) patients; (2) providers; (3) treatment details; and (4) study design. Two treatment comparisons in pediatric dentistry were identified with 5 or more observational studies: (1) stainless steel crowns vs amalgams (10 studies); and (2) composite restorations vs amalgam (5 studies). Results from studies comparing the same treatments varied substantially. Data elements from the reporting model that could have explained some of the variation were often reported inadequately or not at all. Reporting of observational studies in the pediatric dental literature may be inadequate for an informed interpretation of the results. Models similar to that used in this study could be used for developing standards for the conduct and reporting of observational studies in pediatric dentistry.
One-Dimensional Modeling Studies of the Gaseous Electronics Conference RF Reference Cell
Govindan, T. R.; Meyyappan, M.
1995-01-01
A review of the one-dimensional modeling studies in the literature of the Gaseous Electronics Conference (GEC) reference plasma reactor is presented. Most of the studies are based on the fluid model description of the discharge and some utilize hybrid fluid-kinetic schemes. Both models are discussed here briefly. The models provide a basic understanding of the discharge mechanisms and reproduce several critical discharge features observed experimentally. PMID:29151755
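The fluid description referred to above typically consists of drift-diffusion continuity equations coupled to Poisson's equation. The generic form is shown below for orientation; the specific closures, energy equations, and source terms vary between the reviewed studies.

```latex
% Generic drift-diffusion fluid description of a capacitive discharge
% (electron continuity with flux closure, plus Poisson's equation);
% not the specific formulation of any single reviewed study.
\frac{\partial n_e}{\partial t} + \nabla \cdot \Gamma_e = S_e, \qquad
\Gamma_e = -\mu_e n_e \mathbf{E} - D_e \nabla n_e, \qquad
\nabla^2 \phi = -\frac{e\,(n_i - n_e)}{\varepsilon_0}.
```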
Hüls, Anke; Frömke, Cornelia; Ickstadt, Katja; Hille, Katja; Hering, Johanna; von Münchhausen, Christiane; Hartmann, Maria; Kreienbrock, Lothar
2017-01-01
Antimicrobial resistance in livestock is a matter of general concern. To develop hygiene measures and methods for resistance prevention and control, epidemiological studies on a population level are needed to detect factors associated with antimicrobial resistance in livestock holdings. In general, regression models are used to describe these relationships between environmental factors and resistance outcome. Besides the study design, the correlation structures of the different outcomes of antibiotic resistance and structural zero measurements on the resistance outcome as well as on the exposure side are challenges for the epidemiological model building process. The use of appropriate regression models that acknowledge these complexities is essential to assure valid epidemiological interpretations. The aims of this paper are (i) to explain the model building process comparing several competing models for count data (negative binomial model, quasi-Poisson model, zero-inflated model, and hurdle model) and (ii) to compare these models using data from a cross-sectional study on antibiotic resistance in animal husbandry. These goals are essential to evaluate which model is most suitable to identify potential prevention measures. The dataset used as an example in our analyses was generated initially to study the prevalence and associated factors for the appearance of cefotaxime-resistant Escherichia coli in 48 German fattening pig farms. For each farm, the outcome was the count of samples with resistant bacteria. There was almost no overdispersion and only moderate evidence of excess zeros in the data. Our analyses show that it is essential to evaluate regression models in studies analyzing the relationship between environmental factors and antibiotic resistances in livestock. After model comparison based on evaluation of model predictions, Akaike information criterion, and Pearson residuals, here the hurdle model was judged to be the most appropriate model. PMID:28620609
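A minimal sketch of the kind of count-model comparison described is given below: Poisson versus negative binomial GLMs ranked by AIC on synthetic farm-level counts. The data and coefficients are synthetic, and the zero-inflated and hurdle variants discussed in the paper need additional mixture or two-part components not shown here.

```python
# Minimal count-model comparison sketch on synthetic data (not the study's data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 48                                             # e.g. one row per farm
X = sm.add_constant(rng.normal(size=(n, 2)))       # two hypothetical exposures
mu = np.exp(0.5 + 0.8 * X[:, 1] - 0.3 * X[:, 2])
y = rng.negative_binomial(n=2, p=2 / (2 + mu))     # overdispersed counts, mean mu

poisson = sm.GLM(y, X, family=sm.families.Poisson()).fit()
negbin = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()

for name, res in [("Poisson", poisson), ("NegBin", negbin)]:
    print(name, "AIC:", round(res.aic, 1))          # lower AIC = preferred fit
```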
Atmospheric, Climatic, and Environmental Research
NASA Technical Reports Server (NTRS)
Broecker, Wallace S.; Gornitz, Vivien M.
1994-01-01
The climate and atmospheric modeling project involves analysis of basic climate processes, with special emphasis on studies of the atmospheric CO2 and H2O source/sink budgets and studies of the climatic role of CO2, trace gases, and aerosols. These studies are carried out based in part on the use of simplified climate models and climate process models developed at GISS. The principal models currently employed are a variable-resolution 3-D general circulation model (GCM) and an associated "tracer" model which simulates the advection of trace constituents using the winds generated by the GCM.
2014-09-23
conduct simulations with a high-latitude data assimilation model. The specific objectives are to study magnetosphere-ionosphere (M-I) coupling processes...based on three physics-based models, including a magnetosphere-ionosphere (M-I) electrodynamics model, an ionosphere model, and a magnetic...inversion code. The ionosphere model is a high-resolution version of the Ionosphere Forecast Model (IFM), which is a 3-D, multi-ion model of the ionosphere
NOAA Atmospheric Sciences Modeling Division support to the US Environmental Protection Agency
NASA Astrophysics Data System (ADS)
Poole-Kober, Evelyn M.; Viebrock, Herbert J.
1991-07-01
During FY-1990, the Atmospheric Sciences Modeling Division provided meteorological research and operational support to the U.S. Environmental Protection Agency. Basic meteorological operational support consisted of applying dispersion models and conducting dispersion studies and model evaluations. The primary research effort was the development and evaluation of air quality simulation models using numerical and physical techniques supported by field studies. Modeling emphasis was on the dispersion of photochemical oxidants and particulate matter on urban and regional scales, dispersion in complex terrain, and the transport, transformation, and deposition of acidic materials. Highlights included expansion of the Regional Acid Deposition Model/Engineering Model family to consist of the Tagged Species Engineering Model, the Non-Depleting Model, and the Sulfate Tracking Model; completion of the Acid-MODES field study; completion of the RADM2.1 evaluation; completion of the atmospheric processes section of the National Acid Precipitation Assessment Program 1990 Integrated Assessment; conduct of the first field study to examine the transport and entrainment processes of convective clouds; development of a Regional Oxidant Model-Urban Airshed Model interface program; conduct of an international sodar intercomparison experiment; incorporation of building wake dispersion in numerical models; conduct of wind-tunnel simulations of stack-tip downwash; and initiation of the publication of SCRAM NEWS.
Lotfi, Tamara; Bou-Karroum, Lama; Darzi, Andrea; Hajjar, Rayan; El Rahyel, Ahmed; El Eid, Jamale; Itani, Mira; Brax, Hneine; Akik, Chaza; Osman, Mona; Hassan, Ghayda; El-Jardali, Fadi; Akl, Elie
2016-08-03
Our objective was to identify published models of coordination between entities funding or delivering health services in humanitarian crises, whether the coordination took place during or after the crises. We included reports describing models of coordination in sufficient detail to allow reproducibility. We also included reports describing implementation of identified models, as case studies. We searched Medline, PubMed, EMBASE, Cochrane Central Register of Controlled Trials, CINAHL, PsycINFO, and the WHO Global Health Library. We also searched websites of relevant organizations. We followed standard systematic review methodology. Our search captured 14,309 citations. The screening process identified 34 eligible papers describing five models of coordination of delivering health services: the "Cluster Approach" (with 16 case studies), the 4Ws "Who is Where, When, doing What" mapping tool (with four case studies), the "Sphere Project" (with two case studies), the "5x5" model (with one case study), and the "model of information coordination" (with one case study). The 4Ws and the 5x5 focus on coordination of services for mental health, the remaining models do not focus on a specific health topic. The Cluster approach appears to be the most widely used. One case study was a mixed implementation of the Cluster approach and the Sphere model. We identified no model of coordination for funding of health service. This systematic review identified five proposed coordination models that have been implemented by entities funding or delivering health service in humanitarian crises. There is a need to compare the effect of these different models on outcomes such as availability of and access to health services.
AAC Modeling with the iPad during Shared Storybook Reading Pilot Study
ERIC Educational Resources Information Center
Sennott, Samuel C.; Mason, Linda H.
2016-01-01
This pilot study describes an intervention package, MODELER for Read and Talk, designed to provide enriched language interaction for children with complex communication needs who require augmentative and alternative communication (AAC). MODELER (Model, Encourage, Respond) includes (a) modeling AAC as you speak, (b) encouraging communication…
Sensitivity and uncertainty analysis for the annual phosphorus loss estimator model
USDA-ARS?s Scientific Manuscript database
Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that there are inherent uncertainties with model predictions, limited studies have addressed model prediction uncertainty. In this study we assess the effect of model input error on predict...
Nonlinear time series modeling and forecasting the seismic data of the Hindu Kush region
NASA Astrophysics Data System (ADS)
Khan, Muhammad Yousaf; Mittnik, Stefan
2018-01-01
In this study, we extended the application of linear and nonlinear time series models in the field of earthquake seismology and examined the out-of-sample forecast accuracy of linear Autoregressive (AR), Autoregressive Conditional Duration (ACD), Self-Exciting Threshold Autoregressive (SETAR), Threshold Autoregressive (TAR), Logistic Smooth Transition Autoregressive (LSTAR), Additive Autoregressive (AAR), and Artificial Neural Network (ANN) models for seismic data of the Hindu Kush region. We also extended the previous studies by using Vector Autoregressive (VAR) and Threshold Vector Autoregressive (TVAR) models and compared their forecasting accuracy with the linear AR model. Unlike previous studies that typically consider threshold model specifications by using an internal threshold variable, we specified these models with external transition variables and compared their out-of-sample forecasting performance with the linear benchmark AR model. The modeling results show that the time series models used in the present study are capable of capturing the dynamic structure present in the seismic data. The point forecast results indicate that the AR model generally outperforms the nonlinear models. However, in some cases, threshold models with external threshold variable specifications produce more accurate forecasts, indicating that the specification of threshold time series models is of crucial importance. For raw seismic data, the ACD model does not show an improved out-of-sample forecasting performance over the linear AR model. The results indicate that the AR model is the best forecasting device to model and forecast the raw seismic data of the Hindu Kush region.
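The kind of out-of-sample comparison reported above can be reproduced in outline with a few lines of code. The sketch below is not the authors' code: it fits a linear AR model by ordinary least squares to a synthetic series standing in for the Hindu Kush seismic data (the AR order, split point and noise level are illustrative assumptions) and compares its one-step forecast RMSE with a naive benchmark.

```python
# A minimal sketch, assuming a synthetic stand-in series: one-step
# out-of-sample forecasting with an AR(p) model fitted by ordinary least squares.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in series (e.g., transformed inter-event times or magnitudes).
n = 500
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.5 * y[t - 1] - 0.2 * y[t - 2] + rng.normal(scale=1.0)

p = 2        # AR order (would be chosen by an information criterion in practice)
split = 400  # training / hold-out split

def fit_ar(series, order):
    """Estimate AR coefficients (with intercept) by OLS."""
    X = np.column_stack([series[order - k - 1:len(series) - k - 1] for k in range(order)])
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, series[order:], rcond=None)
    return beta

beta = fit_ar(y[:split], p)

# One-step-ahead forecasts over the hold-out period, always conditioning on
# the actually observed past values (no dynamic recursion).
preds = []
for t in range(split, n):
    lags = y[t - p:t][::-1]          # most recent observation first
    preds.append(beta[0] + beta[1:] @ lags)
preds = np.array(preds)

rmse_ar = np.sqrt(np.mean((y[split:] - preds) ** 2))
rmse_naive = np.sqrt(np.mean((y[split:] - y[split - 1:n - 1]) ** 2))
print(f"AR({p}) RMSE: {rmse_ar:.3f}  naive RMSE: {rmse_naive:.3f}")
```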
Sharma, Deepak K; Solbrig, Harold R; Tao, Cui; Weng, Chunhua; Chute, Christopher G; Jiang, Guoqian
2017-06-05
Detailed Clinical Models (DCMs) have been regarded as the basis for retaining computable meaning when data are exchanged between heterogeneous computer systems. To better support clinical cancer data capturing and reporting, there is an emerging need to develop informatics solutions for standards-based clinical models in cancer study domains. The objective of the study is to develop and evaluate a cancer genome study metadata management system that serves as a key infrastructure in supporting clinical information modeling in cancer genome study domains. We leveraged a Semantic Web-based metadata repository enhanced with both the ISO 11179 metadata standard and the Clinical Information Modeling Initiative (CIMI) Reference Model. We used the common data elements (CDEs) defined in The Cancer Genome Atlas (TCGA) data dictionary, and extracted the metadata of the CDEs using the NCI Cancer Data Standards Repository (caDSR) CDE dataset rendered in the Resource Description Framework (RDF). The ITEM/ITEM_GROUP pattern defined in the latest CIMI Reference Model is used to represent reusable model elements (mini-Archetypes). We produced a metadata repository with 38 clinical cancer genome study domains, comprising a rich collection of mini-Archetype pattern instances. We performed a case study of the domain "clinical pharmaceutical" in the TCGA data dictionary and demonstrated that the enriched data elements in the metadata repository are very useful in supporting the building of detailed clinical models. Our informatics approach leveraging Semantic Web technologies provides an effective way to build a CIMI-compliant metadata repository that would facilitate detailed clinical modeling to support use cases beyond TCGA in clinical cancer study domains.
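To illustrate the general pattern of querying RDF-rendered data element metadata, the sketch below uses rdflib on a tiny in-memory graph. The namespace, predicates and the embedded Turtle snippet are invented for illustration; they are not the actual caDSR or TCGA vocabulary used in the study.

```python
# A minimal sketch of querying RDF-rendered common data elements with rdflib.
# The ex: namespace and its predicates are illustrative assumptions.
from rdflib import Graph

TTL = """
@prefix ex: <http://example.org/cde#> .

ex:cde2003853 ex:longName    "Patient Age at Diagnosis" ;
              ex:dataType    "NUMBER" ;
              ex:studyDomain "clinical_pharmaceutical" .
ex:cde2179593 ex:longName    "Therapy Regimen Name" ;
              ex:dataType    "CHARACTER" ;
              ex:studyDomain "clinical_pharmaceutical" .
"""

g = Graph()
g.parse(data=TTL, format="turtle")

# Retrieve the metadata of all CDEs belonging to one study domain,
# the kind of lookup a metadata repository front end would perform.
query = """
PREFIX ex: <http://example.org/cde#>
SELECT ?cde ?name ?type WHERE {
    ?cde ex:studyDomain "clinical_pharmaceutical" ;
         ex:longName ?name ;
         ex:dataType ?type .
}
"""
for cde, name, dtype in g.query(query):
    print(cde, name, dtype)
```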
Tran, Phoebe; Waller, Lance
2015-01-01
Lyme disease has been the subject of many studies due to increasing incidence rates year after year and the severe complications that can arise in later stages of the disease. Negative binomial models have been used to model Lyme disease in the past with some success. However, there has been little focus on the reliability and consistency of these models when they are used to study Lyme disease at multiple spatial scales. This study seeks to explore how sensitive/consistent negative binomial models are when they are used to study Lyme disease at different spatial scales (at the regional and sub-regional levels). The study area includes the thirteen states in the Northeastern United States with the highest Lyme disease incidence during the 2002-2006 period. Lyme disease incidence at county level for the period of 2002-2006 was linked with several previously identified key landscape and climatic variables in a negative binomial regression model for the Northeastern region and two smaller sub-regions (the New England sub-region and the Mid-Atlantic sub-region). This study found that negative binomial models, indeed, were sensitive/inconsistent when used at different spatial scales. We discuss various plausible explanations for such behavior of negative binomial models. Further investigation of the inconsistency and sensitivity of negative binomial models when used at different spatial scales is important for not only future Lyme disease studies and Lyme disease risk assessment/management but any study that requires use of this model type in a spatial context. Copyright © 2014 Elsevier Inc. All rights reserved.
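A bare-bones version of the scale comparison described above can be sketched as follows. The data are synthetic stand-ins for the county-level Lyme disease counts, the covariate name is invented, and the sub-region labels are assumptions; the point is only the mechanics of fitting the same negative binomial regression at a regional and a sub-regional scale and comparing the coefficients.

```python
# A minimal sketch, assuming synthetic county-level data: negative binomial
# regression fitted at a regional scale and again on one sub-region.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300                                      # hypothetical counties
forest_edge = rng.normal(size=n)             # stand-in landscape covariate
subregion = rng.integers(0, 2, size=n)       # 0 / 1: two assumed sub-regions
mu = np.exp(0.8 + 0.4 * forest_edge + 0.3 * subregion)
cases = rng.negative_binomial(n=2, p=2 / (2 + mu))   # overdispersed counts

def fit_nb(mask):
    X = sm.add_constant(forest_edge[mask])
    model = sm.GLM(cases[mask], X, family=sm.families.NegativeBinomial())
    return model.fit()

regional = fit_nb(np.ones(n, dtype=bool))    # whole region
sub = fit_nb(subregion == 0)                 # one sub-region only

print("regional coefficient:     ", round(regional.params[1], 3))
print("sub-regional coefficient: ", round(sub.params[1], 3))
```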
Using a nursing theory or a model in nursing PhD dissertations: a qualitative study from Turkey.
Mete, Samiye; Gokçe İsbir, Gozde
2015-04-01
The aim of this study was to reveal the experiences of nursing students and their advisors in using theories and models in their PhD dissertations. The study adopted a descriptive qualitative approach and was performed with 10 PhD candidates and their five advisors from a nursing faculty. The results were categorized into four themes: reasons for using a theory/model in a PhD dissertation, reasons for preferring a given model, causes of difficulties in using models in PhD dissertations, and factors facilitating the use of theories and models in PhD dissertations. Using a theory or model was also reported to contribute to the research methodology and to the professional development of the students and advisors. © 2014 NANDA International, Inc.
Development of an Instructional Quality Assurance Model in Nursing Science
ERIC Educational Resources Information Center
Ajpru, Haruthai; Pasiphol, Shotiga; Wongwanich, Suwimon
2011-01-01
The purpose of this study was to develop an instructional quality assurance model in nursing science. The study was divided into 3 phases: (1) to study the information for instructional quality assurance model development, (2) to develop an instructional quality assurance model in nursing science and (3) to audit and assess the developed…
USDA-ARS?s Scientific Manuscript database
Infants and children with tuberculosis (TB) account for more than 20% of cases in endemic countries. Current animal models study TB during adulthood but animal models for adolescent and infant TB are scarce. Here we propose that minipigs can be used as an animal model to study adult, adolescent and ...
A Multivariate Model for the Study of Parental Acceptance-Rejection and Child Abuse.
ERIC Educational Resources Information Center
Rohner, Ronald P.; Rohner, Evelyn C.
This paper proposes a multivariate strategy for the study of parental acceptance-rejection and child abuse and describes a research study on parental rejection and child abuse which illustrates the advantages of using a multivariate, (rather than a simple-model) approach. The multivariate model is a combination of three simple models used to study…
Mohiuddin, Syed
2014-08-01
Bipolar disorder (BD) is a chronic and relapsing mental illness with a considerable health-related and economic burden. The primary goal of pharmacotherapeutics for BD is to improve patients' well-being. The use of decision-analytic models is key in assessing the added value of the pharmacotherapeutics aimed at treating the illness, but concerns have been expressed about the appropriateness of different modelling techniques and about the transparency in the reporting of economic evaluations. This paper aimed to identify and critically appraise published model-based economic evaluations of pharmacotherapeutics in BD patients. A systematic review combining common terms for BD and economic evaluation was conducted in MEDLINE, EMBASE, PSYCINFO and ECONLIT. Studies identified were summarised and critically appraised in terms of the use of modelling technique, model structure and data sources. Considering the prognosis and management of BD, the possible benefits and limitations of each modelling technique are discussed. Fourteen studies using model-based economic evaluations of pharmacotherapeutics in BD patients were identified. Of these 14 studies, nine used Markov, three used discrete-event simulation (DES) and two used decision-tree models. Most of the studies (n = 11) did not include the rationale for the choice of modelling technique undertaken. Half of the studies did not include the risk of mortality. Surprisingly, no study considered the risk of having a mixed bipolar episode. This review identified various modelling issues that could potentially reduce the comparability of one pharmacotherapeutic intervention with another. Better use and reporting of the modelling techniques in future studies are essential. DES modelling appears to be a comprehensive technique for comparing BD treatment options because of its greater flexibility in depicting disease progression over time. However, depending on the research question, modelling techniques other than DES might also be appropriate in some cases.
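For readers unfamiliar with the Markov cohort approach that most of the reviewed evaluations used, the sketch below shows its basic mechanics. All transition probabilities, costs and utilities are invented placeholders, not values from any of the 14 studies, and the two "treatments" are hypothetical.

```python
# A minimal sketch of a three-state monthly Markov cohort model producing an
# incremental cost-effectiveness ratio. All numbers are placeholders.
import numpy as np

states = ["stable", "episode", "dead"]

def run_markov(trans, cost, utility, cycles=120, discount=0.035 / 12):
    """Propagate a cohort through a monthly Markov model; return (cost, QALYs)."""
    dist = np.array([1.0, 0.0, 0.0])          # everyone starts in the stable state
    total_cost = total_qaly = 0.0
    for t in range(cycles):
        disc = 1.0 / (1.0 + discount) ** t
        total_cost += disc * dist @ cost
        total_qaly += disc * dist @ (utility / 12.0)   # monthly share of a QALY
        dist = dist @ trans
    return total_cost, total_qaly

# Hypothetical monthly transition matrices for two pharmacotherapies.
trans_a = np.array([[0.90, 0.09, 0.01],
                    [0.40, 0.58, 0.02],
                    [0.00, 0.00, 1.00]])
trans_b = np.array([[0.93, 0.06, 0.01],
                    [0.45, 0.53, 0.02],
                    [0.00, 0.00, 1.00]])

cost_a, qaly_a = run_markov(trans_a, cost=np.array([100.0, 900.0, 0.0]),
                            utility=np.array([0.80, 0.45, 0.0]))
cost_b, qaly_b = run_markov(trans_b, cost=np.array([160.0, 900.0, 0.0]),
                            utility=np.array([0.80, 0.45, 0.0]))

icer = (cost_b - cost_a) / (qaly_b - qaly_a)
print(f"Incremental cost-effectiveness ratio: {icer:,.0f} per QALY gained")
```

A discrete-event simulation would instead track individual patients and the times between events, which is what gives DES the flexibility in depicting disease progression noted above.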
Flexible Environmental Modeling with Python and Open-GIS
NASA Astrophysics Data System (ADS)
Pryet, Alexandre; Atteia, Olivier; Delottier, Hugo; Cousquer, Yohann
2015-04-01
Numerical modeling now represents a prominent task of environmental studies. During the last decades, numerous commercial programs have been made available to environmental modelers. These software applications offer user-friendly graphical user interfaces that allow efficient management of many case studies. However, they suffer from a lack of flexibility, and closed-source policies impede source code reviewing and enhancement for original studies. Advanced modeling studies require flexible tools capable of managing thousands of model runs for parameter optimization, uncertainty and sensitivity analysis. In addition, there is a growing need for coupling various numerical models, associating, for instance, groundwater flow modeling with multi-species geochemical reactions. Researchers have produced hundreds of powerful open-source command line programs. However, there is a need for a flexible graphical user interface allowing efficient processing of the geospatial data that comes along with any environmental study. Here, we present the advantages of using the free and open-source QGIS platform and the Python scripting language for conducting environmental modeling studies. The interactive graphical user interface is first used for the visualization and pre-processing of input geospatial datasets. The Python scripting language is then employed for further input data processing, calls to one or several models, and post-processing of model outputs. Model results are eventually sent back to the GIS program, processed and visualized. This approach combines the advantages of interactive graphical interfaces and the flexibility of the Python scripting language for data processing and model calls. The numerous Python modules available facilitate geospatial data processing and numerical analysis of model outputs. Once input data have been prepared with the graphical user interface, models may be run thousands of times from the command line with sequential or parallel calls. We illustrate this approach with several case studies in groundwater hydrology and geochemistry and provide links to several Python libraries that facilitate pre- and post-processing operations.
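The batch-execution pattern described above (prepare inputs with Python, run a command line model many times in parallel, collect outputs) can be sketched as follows. The executable name "groundwater_model" and its input/output file conventions are hypothetical; in a real study it would be replaced by the actual flow or reactive-transport program, or by a direct library call.

```python
# A minimal sketch, assuming a hypothetical command line model, of running
# many parameter sets in parallel from Python.
import subprocess
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

PARAM_SETS = [{"hk": hk, "recharge": rch}
              for hk in (1e-5, 5e-5, 1e-4)
              for rch in (100.0, 200.0)]

def run_one(index_params):
    index, params = index_params
    workdir = Path(f"run_{index:03d}")
    workdir.mkdir(exist_ok=True)
    # Write a simple key-value input file that the (hypothetical) model reads.
    (workdir / "input.txt").write_text(
        "\n".join(f"{k} {v}" for k, v in params.items())
    )
    # Launch the external model in its own working directory.
    subprocess.run(["groundwater_model", "input.txt"], cwd=workdir, check=True)
    return index, (workdir / "output.txt").read_text()

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=4) as pool:
        for index, output in pool.map(run_one, enumerate(PARAM_SETS)):
            print(f"run {index} finished, {len(output)} bytes of output")
```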
Introducing DeBRa: a detailed breast model for radiological studies
NASA Astrophysics Data System (ADS)
Ma, Andy K. W.; Gunn, Spencer; Darambara, Dimitra G.
2009-07-01
Currently, x-ray mammography is the method of choice in breast cancer screening programmes. As the mammography technology moves from 2D imaging modalities to 3D, conventional computational phantoms do not have sufficient detail to support the studies of these advanced imaging systems. Studies of these 3D imaging systems call for a realistic and sophisticated computational model of the breast. DeBRa (Detailed Breast model for Radiological studies) is the most advanced, detailed, 3D computational model of the breast developed recently for breast imaging studies. A DeBRa phantom can be constructed to model a compressed breast, as in film/screen, digital mammography and digital breast tomosynthesis studies, or a non-compressed breast as in positron emission mammography and breast CT studies. Both the cranial-caudal and mediolateral oblique views can be modelled. The anatomical details inside the phantom include the lactiferous duct system, the Cooper ligaments and the pectoral muscle. The fibroglandular tissues are also modelled realistically. In addition, abnormalities such as microcalcifications, irregular tumours and spiculated tumours are inserted into the phantom. Existing sophisticated breast models require specialized simulation codes. Unlike its predecessors, DeBRa has elemental compositions and densities incorporated into its voxels including those of the explicitly modelled anatomical structures and the noise-like fibroglandular tissues. The voxel dimensions are specified as needed by any study and the microcalcifications are embedded into the voxels so that the microcalcification sizes are not limited by the voxel dimensions. Therefore, DeBRa works with general-purpose Monte Carlo codes. Furthermore, general-purpose Monte Carlo codes allow different types of imaging modalities and detector characteristics to be simulated with ease. DeBRa is a versatile and multipurpose model specifically designed for both x-ray and γ-ray imaging studies.
A New Model for the Integration of Science and Mathematics: The Balance Model
ERIC Educational Resources Information Center
Kiray, S. Ahmet
2012-01-01
The aim of this study is to develop an integrated scientific and mathematical model that is suited to the background of Turkish teachers. The dimensions of the model are given and compared to the models which have been previously developed and the findings of earlier studies on the topic. The model is called the balance, reflecting the…
NASA Astrophysics Data System (ADS)
Aydogan Yenmez, Arzu; Erbas, Ayhan Kursat; Cakiroglu, Erdinc; Alacaci, Cengiz; Cetinkaya, Bulent
2017-08-01
Applications and modelling have gained a prominent role in mathematics education reform documents and curricula. Thus, there is a growing need for studies focusing on the effective use of mathematical modelling in classrooms. Assessment is an integral part of using modelling activities in classrooms, since it allows teachers to identify and manage problems that arise in various stages of the modelling process. However, teachers' difficulties in assessing student modelling work are a challenge to be considered when implementing modelling in the classroom. Thus, the purpose of this study was to investigate how teachers' knowledge on generating assessment criteria for assessing student competence in mathematical modelling evolved through a professional development programme, which is based on a lesson study approach and modelling perspective. The data was collected with four teachers from two public high schools over a five-month period. The professional development programme included a cyclical process, with each cycle consisting of an introductory meeting, the implementation of a model-eliciting activity with students, and a follow-up meeting. The results showed that the professional development programme contributed to teachers' knowledge for generating assessment criteria on the products, and the observable actions that affect the modelling cycle.
Saha, Kaushik; Som, Sibendu; Battistoni, Michele
2017-01-01
Flash boiling is known to be a common phenomenon for gasoline direct injection (GDI) engine sprays. The Homogeneous Relaxation Model has been adopted in many recent numerical studies for predicting cavitation and flash boiling, and it is assessed in this study. Sensitivity analysis of the model parameters has been documented to infer the driving factors for the flash-boiling predictions. The model parameters have been varied over a range and the differences in predictions of the extent of flashing have been studied. Apart from flashing in the near-nozzle regions, mild cavitation is also predicted inside the gasoline injectors. The variation in the predicted time scales through the model parameters for predicting these two different thermodynamic phenomena (cavitation, flash) has been elaborated in this study. Turbulence model effects have also been investigated by comparing predictions from the standard and Re-Normalization Group (RNG) k-ε turbulence models.
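At the core of the Homogeneous Relaxation Model is a simple relaxation law: the vapour quality x relaxes toward its local equilibrium value over a finite time scale. The sketch below integrates that law for a few time-scale values to illustrate the parameter sensitivity discussed above; the time scales and equilibrium quality are placeholders, not the calibrated constants of the study.

```python
# A minimal sketch of the HRM relaxation law dx/dt = (x_eq - x) / theta,
# with placeholder values for theta and x_eq.
import numpy as np

def relax_quality(x0, x_eq, theta, dt=1e-6, steps=200):
    """Explicit Euler integration of dx/dt = (x_eq - x) / theta."""
    x = np.empty(steps + 1)
    x[0] = x0
    for i in range(steps):
        x[i + 1] = x[i] + dt * (x_eq - x[i]) / theta
    return x

x_eq = 0.30                          # assumed equilibrium vapour quality
for theta in (5e-5, 2e-5, 5e-6):     # shorter theta -> faster flashing
    x = relax_quality(x0=0.0, x_eq=x_eq, theta=theta)
    print(f"theta = {theta:.0e} s : quality after 0.2 ms = {x[-1]:.3f}")
```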
Establishment of a tumor neovascularization animal model with biomaterials in rabbit corneal pouch.
Chu, Yu-Ping; Li, Hong-Chuan; Ma, Ling; Xia, Yang
2018-06-01
The animal model of tumor neovascularization currently most often used by researchers is the zebrafish. To enable a more convenient study of human breast cancer cell neovascularization, a new animal model was established. A sodium alginate-gelatin blend gel system was used to design the new animal model, which was established using rabbit corneal pouch implantation. The animal model was then validated with the human breast cancer cell lines MCF-7-Kindlin-2 and MCF-7-CMV. The experiment allowed direct observation of the relationship between the tumor and neovascularization, and demonstrated the advantages of this animal model in the study of tumor neovascularization. The use of sodium alginate-gelatin blends to establish tumor neovascularization in a rabbit corneal pouch is a novel and ideal method for the study of neovascularization, and it may be a better animal model for expanding research in this area. Copyright © 2018 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Dey, B.
1985-01-01
In this study, the existing seasonal snow cover area runoff forecasting models of the Indus, Kabul, Sutlej and Chenab basins were evaluated against the concurrent flow correlation model for the period 1975-79. In all the basins under study, the concurrent flow correlation model explained the variability in flow better than the snow cover area runoff models did. In fact, the concurrent flow correlation model explained more than 90 percent of the variability in the flow of these rivers, whereas the snow cover area runoff models explained less of the variability. For the Himalayan river basins under study, and at least for the period under observation, the concurrent flow correlation model therefore provided a set of results against which to compare the estimates from the snow cover area runoff models.
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek obtained from the iterative two-stage method also improved predictive performance of the individual models and model averaging in both synthetic and experimental studies.
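The link between information criterion values and averaging weights is simple to show. In the common formulation, each model's weight is exp(-0.5 Δk) normalized over all models, where Δk is the criterion value of model k minus the minimum. The numbers below are illustrative only, not the KIC/AIC values of the uranium transport models in the study; they just show why modestly separated criterion values already concentrate nearly all weight on one model, and why a likelihood built on the full covariance of total errors, which brings the criterion values closer together, spreads the weights.

```python
# A minimal sketch of information-criterion-based model-averaging weights.
import numpy as np

def averaging_weights(ic_values):
    delta = np.asarray(ic_values, dtype=float) - np.min(ic_values)
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Widely separated criterion values: essentially all weight on the best model.
print(averaging_weights([230.0, 265.0, 290.0]))

# Criterion values brought closer together: weights spread across models.
print(averaging_weights([231.0, 233.5, 234.0]))
```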
Ab Initio Studies of Shock-Induced Chemical Reactions of Inter-Metallics
NASA Astrophysics Data System (ADS)
Zaharieva, Roussislava; Hanagud, Sathya
2009-06-01
Shock-induced and shock-assisted chemical reactions of intermetallic mixtures are studied by many researchers, using both experimental and theoretical techniques. The theoretical studies are primarily at continuum scales. The model frameworks include mixture theories and meso-scale models of grains of porous mixtures. The reaction models vary from an equilibrium thermodynamic model to several non-equilibrium thermodynamic models. The shock effects are primarily studied using appropriate conservation equations and numerical techniques to integrate the equations. All these models require material constants from experiments and estimates of transition states. Thus, the objective of this paper is to present studies based on ab initio techniques. The ab initio studies, to date, use ab initio molecular dynamics. This paper presents a study that uses shock pressures and associated temperatures as starting variables. Intermetallic mixtures are modeled as slabs, and the required shock stresses are created by straining the lattice. Ab initio binding energy calculations are then used to examine the stability of the reactions. Binding energies are obtained for different strain components superimposed on uniform compression at finite temperatures. Vibrational frequencies and nudged elastic band techniques are then used to study reactivity and transition states. Examples include Ni and Al.
Organisational justice and mental health: a systematic review of prospective studies.
Ndjaboué, Ruth; Brisson, Chantal; Vézina, Michel
2012-10-01
The models most commonly used to study the effects of psychosocial work factors on workers' health are the demand-control-support (DCS) model and the effort-reward imbalance (ERI) model. An emerging body of research has identified organisational justice as another model that can help to explain deleterious health effects. This review aimed: (1) to identify prospective studies of the associations between organisational justice and mental health in industrialised countries from 1990 to 2010; (2) to evaluate the extent to which organisational justice has an effect on mental health independently of the DCS and ERI models; and (3) to discuss theoretical and empirical overlap and differences with previous models. The studies had to present associations between organisational justice and a mental health outcome, be prospective, and be entirely available in English or in French. Duplicated papers were excluded. Eleven prospective studies were selected for this review. They provide evidence that procedural justice and relational justice are associated with mental health. These associations remained significant even after controlling for the DCS and ERI models. There is a lack of prospective studies on distributive and informational justice. In conclusion, procedural and relational justice can be considered a different and complementary model to the DCS and ERI models. Future studies should evaluate the effect of change in exposure to organisational justice on employees' mental health over time.
Culturicon model: A new model for cultural-based emoticon
NASA Astrophysics Data System (ADS)
Zukhi, Mohd Zhafri Bin Mohd; Hussain, Azham
2017-10-01
Emoticons are popular among users of distributed collective interaction for expressing their emotions, gestures and actions. Emoticons have been shown to help avoid misunderstanding of messages, to save attention and to improve communication among speakers of different native languages. However, despite the benefits that emoticons can provide, research on emoticons from a cultural perspective is still lacking. As emoticons are crucial in global communication, culture should be one of the extensively researched aspects of distributed collective interaction. Therefore, this study attempts to explore and develop a model for cultural-based emoticons. Three cultural models that have been used in Human-Computer Interaction were studied: the Hall Culture Model, the Trompenaars and Hampden Culture Model and the Hofstede Culture Model. The dimensions from these three models will be used in developing the proposed cultural-based emoticon model.
A novel approach for inventory problem in the pharmaceutical supply chain.
Candan, Gökçe; Yazgan, Harun Reşit
2016-02-24
In pharmaceutical enterprises, keeping up with global market conditions is possible with properly selected supply chain management policies. Generally, a demand-driven classical supply chain model is used in the pharmaceutical industry. In this study, a new mathematical model is developed to solve an inventory problem in the pharmaceutical supply chain. Unlike previous studies in the literature, the "shelf life and product transition times" constraints are considered simultaneously, for the first time, in the pharmaceutical production inventory problem. The problem is formulated as a mixed-integer linear programming (MILP) model with a hybrid time representation. The objective is to maximize total net profit. The effectiveness of the proposed model is illustrated by considering a classical and a vendor-managed inventory (VMI) supply chain in an experimental study covering 2 supply chain policies (classical and VMI), planning horizons of 24 and 30 months, and 10 and 15 different cephalosporin products. Finally, the mathematical model is compared to another model in the literature and the results show that the proposed model is superior. This study suggests a novel approach for solving the pharmaceutical inventory problem. The developed model maximizes total net profit while determining an optimal production plan under shelf-life and product-transition constraints in the pharmaceutical industry. We believe that the proposed model is much closer to real life than the models in other studies in the literature.
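The general shape of such a production-inventory MILP can be sketched in a few lines with PuLP. This is a stylized illustration, not the paper's formulation: the shelf-life rule is replaced by a simple surrogate (end-of-period stock may not exceed the demand of the next L periods), product transitions are omitted, and all demand and cost figures are invented.

```python
# A stylized sketch of a profit-maximizing production-inventory MILP in PuLP.
# Demand, costs, capacity and the surrogate shelf-life rule are assumptions.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpStatus

periods = range(6)
demand = [40, 55, 30, 60, 45, 50]        # hypothetical monthly demand
price, prod_cost, hold_cost = 12.0, 7.0, 0.8
capacity, shelf_life = 70, 2             # L = 2 periods

prob = LpProblem("pharma_inventory", LpMaximize)
x = LpVariable.dicts("produce", periods, lowBound=0)      # units produced
inv = LpVariable.dicts("inventory", periods, lowBound=0)  # end-of-period stock

# Net profit: revenue from meeting demand minus production and holding costs.
prob += lpSum(price * demand[t] - prod_cost * x[t] - hold_cost * inv[t]
              for t in periods)

for t in periods:
    prev = inv[t - 1] if t > 0 else 0
    prob += prev + x[t] == demand[t] + inv[t]             # inventory balance
    prob += x[t] <= capacity                              # production capacity
    # Surrogate shelf-life constraint: carried stock must be sellable within L periods.
    prob += inv[t] <= sum(demand[t + 1:t + 1 + shelf_life])

prob.solve()
print(LpStatus[prob.status], [x[t].value() for t in periods])
```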
Godugu, Chandraiah; Singh, Mandip
2016-01-01
Observations from routinely used two-dimensional (2D) cell culture-based models often fail to translate to in vivo models. This setback is more common in cancer research, for several reasons. The extracellular matrix and cell-to-cell interactions are not present in 2D cell culture models. Diffusion of drug molecules into cancer cells is hindered by barriers of extracellular components under in vivo conditions; these barriers are absent in 2D cell culture models. To better mimic the in vivo conditions present in tumors, the current study used the alginate-based three-dimensional cell culture (AlgiMatrix™) model, which closely resembles in vivo tumor models. The current study explains the detailed protocols involved in AlgiMatrix™-based in vitro non-small-cell lung cancer (NSCLC) models. The suitability of this model was studied by evaluating cytotoxicity, apoptosis, and penetration of nanoparticles into the in vitro tumor spheroids. This study also demonstrated the effect of EphA2 receptor-targeted docetaxel-loaded nanoparticles on MDA-MB-468 TNBC cell lines. The methods section is subdivided into three subsections: (1) preparation of AlgiMatrix™-based 3D in vitro tumor models and cytotoxicity assays, (2) free drug and nanoparticle uptake into spheroid studies, and (3) western blot, IHC, and RT-PCR studies.
Students' use of atomic and molecular models in learning chemistry
NASA Astrophysics Data System (ADS)
O'Connor, Eileen Ann
1997-09-01
The objective of this study was to investigate the development of introductory college chemistry students' use of atomic and molecular models to explain physical and chemical phenomena. The study was conducted during the first semester of the course at a University and College II. Public institution (Carnegie Commission of Higher Education, 1973). Students' use of models was observed during one-on-one interviews conducted over the course of the semester. The approach to introductory chemistry emphasized models. Students were exposed to over two-hundred and fifty atomic and molecular models during lectures, were assigned text readings that used over a thousand models, and worked interactively with dozens of models on the computer. These models illustrated various features of the spatial organization of valence electrons and nuclei in atoms and molecules. Despite extensive exposure to models in lectures, in textbook, and in computer-based activities, the students in the study based their explanation in large part on a simple Bohr model (electrons arranged in concentric circles around the nuclei)--a model that had not been introduced in the course. Students used visual information from their models to construct their explanation, while overlooking inter-atomic and intra-molecular forces which are not represented explicitly in the models. In addition, students often explained phenomena by adding separate information about the topic without either integrating or logically relating this information into a cohesive explanation. The results of the study demonstrate that despite the extensive use of models in chemistry instruction, students do not necessarily apply them appropriately in explaining chemical and physical phenomena. The results of this study suggest that for the power of models as aids to learning to be more fully realized, chemistry professors must give more attention to the selection, use, integration, and limitations of models in their instruction.
Safety analytics for integrating crash frequency and real-time risk modeling for expressways.
Wang, Ling; Abdel-Aty, Mohamed; Lee, Jaeyoung
2017-07-01
To find crash contributing factors, there have been numerous crash frequency and real-time safety studies, but such studies have been conducted independently. Until this point, no researcher has simultaneously analyzed crash frequency and real-time crash risk to test whether integrating them could better explain crash occurrence. Therefore, this study aims at integrating crash frequency and real-time safety analyses using expressway data. A Bayesian integrated model and a non-integrated model were built: the integrated model linked the crash frequency and the real-time models by adding the logarithm of the estimated expected crash frequency in the real-time model; the non-integrated model independently estimated the crash frequency and the real-time crash risk. The results showed that the integrated model outperformed the non-integrated model, as it provided much better model results for both the crash frequency and the real-time models. This result indicated that the added component, the logarithm of the expected crash frequency, successfully linked and provided useful information to the two models. This study uncovered a few variables that are not typically included in crash frequency analysis. For example, the average daily standard deviation of speed, which was aggregated based on speed at 1-min intervals, had a positive effect on crash frequency. In conclusion, this study suggested a methodology to improve the crash frequency and real-time models by integrating them, and it might inspire future researchers to understand crash mechanisms better. Copyright © 2017 Elsevier Ltd. All rights reserved.
Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie
2018-02-01
There have been plenty of traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, not enough research has compared the performance of these three types of safety studies, and few previous studies have tried to determine whether the results of one type of study are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate the daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate the hourly crash frequency using AHT, and a Bayesian logistic regression model for the real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by different models were comparable but not the same. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of a diverge segment, were positively significant in the three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at different levels, daily and hourly; meanwhile, the real-time model was also used at 5 min intervals. The results uncovered that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and the real-time safety model was able to provide hourly crash frequencies. Copyright © 2017 Elsevier Ltd. All rights reserved.
Selecting global climate models for regional climate change studies
Pierce, David W.; Barnett, Tim P.; Santer, Benjamin D.; Gleckler, Peter J.
2009-01-01
Regional or local climate change modeling studies currently require starting with a global climate model, then downscaling to the region of interest. How should global models be chosen for such studies, and what effect do such choices have? This question is addressed in the context of a regional climate detection and attribution (D&A) study of January-February-March (JFM) temperature over the western U.S. Models are often selected for a regional D&A analysis based on the quality of the simulated regional climate. Accordingly, 42 performance metrics based on seasonal temperature and precipitation, the El Nino/Southern Oscillation (ENSO), and the Pacific Decadal Oscillation are constructed and applied to 21 global models. However, no strong relationship is found between the score of the models on the metrics and results of the D&A analysis. Instead, the importance of having ensembles of runs with enough realizations to reduce the effects of natural internal climate variability is emphasized. Also, the superiority of the multimodel ensemble average (MM) to any one individual model, already found in global studies examining the mean climate, is true in this regional study that includes measures of variability as well. Evidence is shown that this superiority is largely caused by the cancellation of offsetting errors in the individual global models. Results with both the MM and models picked randomly confirm the original D&A results of anthropogenically forced JFM temperature changes in the western U.S. Future projections of temperature do not depend on model performance until the 2080s, after which the better performing models show warmer temperatures.
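The error-cancellation argument behind the multimodel mean is easy to demonstrate with synthetic numbers standing in for the 21 global models: give each "model" the observed pattern plus its own bias and noise, and the multimodel mean typically beats most single models in RMSE. The sketch below is illustrative only and does not use any of the study's metrics or data.

```python
# A minimal sketch, assuming synthetic model output, of why a multimodel
# ensemble average tends to outperform individual models.
import numpy as np

rng = np.random.default_rng(2)
n_models, n_gridcells = 21, 200

obs = rng.normal(size=n_gridcells)                       # "observed" pattern
biases = rng.normal(scale=0.5, size=(n_models, 1))       # offsetting model biases
models = obs + biases + rng.normal(scale=0.5, size=(n_models, n_gridcells))

def rmse(field):
    return np.sqrt(np.mean((field - obs) ** 2))

individual = np.array([rmse(m) for m in models])
mm = rmse(models.mean(axis=0))                           # multimodel mean

print(f"median individual-model RMSE: {np.median(individual):.2f}")
print(f"multimodel-mean RMSE:         {mm:.2f}")
```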
Development and evaluation of height diameter at breast models for native Chinese Metasequoia.
Liu, Mu; Feng, Zhongke; Zhang, Zhixiang; Ma, Chenghui; Wang, Mingming; Lian, Bo-Ling; Sun, Renjie; Zhang, Li
2017-01-01
Accurate tree height and diameter at breast height (dbh) are important input variables for growth and yield models. A total of 5503 Chinese Metasequoia trees were used in this study. We studied 53 fitted models, of which 7 were linear and 46 were non-linear. These models were divided into two groups, single-variable models and multivariate models, according to the number of independent variables. The results show that allometric equations for tree height with diameter at breast height as the independent variable better reflect the variation in tree height; in addition, the prediction accuracy of the multivariate composite models is higher than that of the single-variable models. Although tree age is not the most important variable in the study of the relationship between tree height and dbh, considering tree age when choosing models and parameters during model selection can make the prediction of tree height more accurate. The amount of data is also an important factor that can improve the reliability of the models. Other variables, such as tree height, main dbh and altitude, can also affect the models. In this study, the method of developing the recommended models for predicting the tree height of native Metasequoias aged 50-485 years is statistically reliable and can be used for reference in predicting the growth and production of mature native Metasequoia.
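Fitting one of the nonlinear height-dbh candidates can be sketched as follows. The Chapman-Richards form used here is a common choice but is not necessarily among the study's 53 equations, and the data are synthetic stand-ins for the Metasequoia measurements.

```python
# A minimal sketch: fit a Chapman-Richards height-dbh curve to synthetic data
# and report RMSE and adjusted R^2. Data and starting values are assumptions.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
dbh = rng.uniform(5, 120, size=400)                       # cm, synthetic
height = (1.3 + 38 * (1 - np.exp(-0.025 * dbh)) ** 1.1
          + rng.normal(scale=1.5, size=dbh.size))         # m, synthetic

def chapman_richards(d, a, b, c):
    return 1.3 + a * (1 - np.exp(-b * d)) ** c

params, _ = curve_fit(chapman_richards, dbh, height,
                      p0=(30.0, 0.02, 1.0),
                      bounds=([1.0, 1e-4, 0.5], [100.0, 1.0, 5.0]))
pred = chapman_richards(dbh, *params)

resid = height - pred
rmse = np.sqrt(np.mean(resid ** 2))
n, k = len(dbh), len(params)
r2 = 1 - resid.var() / height.var()
r2_adj = 1 - (1 - r2) * (n - 1) / (n - k - 1)
print(f"a={params[0]:.2f} b={params[1]:.4f} c={params[2]:.2f}  "
      f"RMSE={rmse:.2f} m  adj R2={r2_adj:.3f}")
```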
Cost Effectiveness of HPV Vaccination: A Systematic Review of Modelling Approaches.
Pink, Joshua; Parker, Ben; Petrou, Stavros
2016-09-01
A large number of economic evaluations have been published that assess alternative possible human papillomavirus (HPV) vaccination strategies. Understanding differences in the modelling methodologies used in these studies is important to assess the accuracy, comparability and generalisability of their results. The aim of this review was to identify published economic models of HPV vaccination programmes and understand how characteristics of these studies vary by geographical area, date of publication and the policy question being addressed. We performed literature searches in MEDLINE, Embase, Econlit, The Health Economic Evaluations Database (HEED) and The National Health Service Economic Evaluation Database (NHS EED). From the 1189 unique studies retrieved, 65 studies were included for data extraction based on a priori eligibility criteria. Two authors independently reviewed these articles to determine eligibility for the final review. Data were extracted from the selected studies, focussing on six key structural or methodological themes covering different aspects of the model(s) used that may influence cost-effectiveness results. More recently published studies tend to model a larger number of HPV strains, and include a larger number of HPV-associated diseases. Studies published in Europe and North America also tend to include a larger number of diseases and are more likely to incorporate the impact of herd immunity and to use more realistic assumptions around vaccine efficacy and coverage. Studies based on previous models often do not include sufficiently robust justifications as to the applicability of the adapted model to the new context. The considerable between-study heterogeneity in economic evaluations of HPV vaccination programmes makes comparisons between studies difficult, as observed differences in cost effectiveness may be driven by differences in methodology as well as by variations in funding and delivery models and estimates of model parameters. Studies should consistently report not only all simplifying assumptions made but also the estimated impact of these assumptions on the cost-effectiveness results.
Iwelunmor, Juliet; Newsome, Valerie; Airhihenbuwa, Collins O
2014-02-01
This paper reviews available studies that applied the PEN-3 cultural model to address the impact of culture on health behaviors. We searched electronic databases and conducted a thematic analysis of empirical studies that applied the PEN-3 cultural model to address the impact of culture on health behaviors. Studies were mapped to describe their methods, target population and the health behaviors or health outcomes studied. Forty-five studies met the inclusion criteria. The studies reviewed used the PEN-3 model as a theoretical framework to centralize culture in the study of health behaviors and to integrate culturally relevant factors in the development of interventions. The model was also used as an analysis tool, to sift through text and data in order to separate, define and delineate emerging themes. The PEN-3 model was also valuable for exploring not only how cultural context shapes health beliefs and practices, but also how family systems play a critical role in enabling or nurturing positive health behaviors and health outcomes. Finally, the studies reviewed highlighted the utility of the model for examining cultural practices that are critical to positive health behaviors, unique practices that have a neutral impact on health and negative factors that are likely to have an adverse influence on health. The limitations of the model and the role of future studies are discussed relative to the importance of using the PEN-3 cultural model to explore the influence of culture in promoting positive health behaviors, eliminating health disparities and designing and implementing sustainable public health interventions.
The transferability of safety-driven access management models for application to other sites.
DOT National Transportation Integrated Search
2001-01-01
Several research studies have produced mathematical models that predict the safety impacts of selected access management techniques. Since new models require substantial resources to construct, this study evaluated five existing models with regard to...
Modeller's attitude in catchment modelling: a comparative study
NASA Astrophysics Data System (ADS)
Battista Chirico, Giovanni
2010-05-01
Ten modellers were invited to predict, independently of each other, the discharge of the artificial Chicken Creek catchment in North-East Germany for a simulation period of three years, providing them only soil texture, terrain and meteorological data. No data concerning the discharge or other sources of state variables and fluxes within the catchment were provided. Modellers did, however, have the opportunity to visit the experimental catchment and inspect aerial photos of the catchment since its initial development stage. This has been a unique comparative study focussing on how different modellers deal with the key issues in predicting the discharge in ungauged catchments: 1) choice of the model structure; 2) identification of model parameters; 3) identification of model initial and boundary conditions. The first general lesson learned during this study was that the modeller is just part of the entire modelling process and has a major bearing on the model results, particularly in ungauged catchments where there are more degrees of freedom in making modelling decisions. Modellers' attitudes during the stages of model implementation and parameterisation were deeply influenced by their own experience from previous modelling studies. A common outcome was that modellers were mainly oriented towards applying process-based models able to exploit the available data concerning the physical properties of the catchment, and therefore considered more suitable to cope with the lack of data concerning state variables or fluxes. The second general lesson learned during this study concerned the role of dominant processes. We believed that the modelling task would have been much easier in an artificial catchment, where heterogeneity was expected to be negligible and processes simpler, than in catchments that have evolved over a longer time period. The results of the models were expected to converge, and this would have been a good starting point from which to proceed to a model comparison in natural, more challenging catchments. This model comparison showed instead that even a small artificial catchment exhibits heterogeneities which lead to similar modelling problems as in natural catchments. We also verified that qualitative knowledge of the potential surface processes, such as could be gained by visual inspection of the catchment (erosion marks, canopy features, soil crusting, etc.), was widely employed by the modellers to guess the dominant processes to be modelled and therefore to make choices on model structure and guesses of model parameters. The two lessons learned from this intercomparison study are closely linked. The experience of a modeller is crucial in the (subjective) process of deciding upon the dominant processes that seem to be sufficiently important to be incorporated into the model. On the other hand, the accumulated experience will also play an important role in how different pieces of evidence from, for example, field inspections, will modify the initial conceptual understanding.
Simulation Models for Socioeconomic Inequalities in Health: A Systematic Review
Speybroeck, Niko; Van Malderen, Carine; Harper, Sam; Müller, Birgit; Devleesschauwer, Brecht
2013-01-01
Background: The emergence and evolution of socioeconomic inequalities in health involves multiple factors interacting with each other at different levels. Simulation models are suitable for studying such complex and dynamic systems and have the ability to test the impact of policy interventions in silico. Objective: To explore how simulation models were used in the field of socioeconomic inequalities in health. Methods: An electronic search of studies assessing socioeconomic inequalities in health using a simulation model was conducted. Characteristics of the simulation models were extracted and distinct simulation approaches were identified. As an illustration, a simple agent-based model of the emergence of socioeconomic differences in alcohol abuse was developed. Results: We found 61 studies published between 1989 and 2013. Ten different simulation approaches were identified. The agent-based model illustration showed that multilevel, reciprocal and indirect effects of social determinants on health can be modeled flexibly. Discussion and Conclusions: Based on the review, we discuss the utility of using simulation models for studying health inequalities, and refer to good modeling practices for developing such models. The review and the simulation model example suggest that the use of simulation models may enhance the understanding and debate about existing and new socioeconomic inequalities of health frameworks.
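An agent-based model in the spirit of the review's illustration can be sketched in a few lines: agents have a socioeconomic position and a drinking state, and each step the risk of abuse depends on an individual SES effect plus the behaviour of the agent's social contacts. All parameter values below are invented and are not taken from the review.

```python
# A minimal sketch of an agent-based model of SES-patterned alcohol abuse.
# Parameters (risk, recovery, contact structure) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
n_agents, n_steps, n_contacts = 2000, 50, 8

ses = rng.uniform(0, 1, n_agents)               # 0 = lowest, 1 = highest SES
drinks = rng.random(n_agents) < 0.10            # initial abuse prevalence
contacts = rng.integers(0, n_agents, size=(n_agents, n_contacts))

for _ in range(n_steps):
    peer_share = drinks[contacts].mean(axis=1)          # social influence
    risk = 0.02 + 0.08 * (1 - ses) + 0.15 * peer_share  # SES + peer effects
    recovery = 0.05 + 0.05 * ses                        # recovery easier with high SES
    start = (~drinks) & (rng.random(n_agents) < risk)
    stop = drinks & (rng.random(n_agents) < recovery)
    drinks = (drinks | start) & ~stop

low, high = ses < 0.5, ses >= 0.5
print(f"abuse prevalence, low SES:  {drinks[low].mean():.2%}")
print(f"abuse prevalence, high SES: {drinks[high].mean():.2%}")
```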
Becker, Nina I; Encarnação, Jorge A
2015-01-01
Species distribution and endangerment can be assessed by habitat-suitability modelling. This study addresses methodological aspects of habitat suitability modelling and includes an application example in actual species conservation and landscape planning. Models using species presence-absence data are preferable to presence-only models. In contrast to species presence data, absences are rarely recorded. Therefore, many studies generate pseudo-absence data for modelling. However, in this study model quality was higher with null samples collected in the field. Besides species data, the choice of landscape data is crucial for suitability modelling. Landscape data with high resolution and ecological relevance for the study species improve model reliability and quality for small elusive mammals like Muscardinus avellanarius. For large-scale assessment of species distribution, models with low-detailed data are sufficient. For regional site-specific conservation issues like a conflict-free site for new wind turbines, high-detailed regional models are needed. Even though the overlap with optimally suitable habitat for M. avellanarius was low, the installation of wind plants can pose a threat due to habitat loss and fragmentation. To conclude, modellers should clearly state the purpose of their models and choose the corresponding level of detail for species and environmental data.
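The core of a presence-absence habitat-suitability model is a binary regression of field presences/absences on landscape predictors. The sketch below uses scikit-learn logistic regression on synthetic data; the predictor names and coefficients are invented and the method is a generic stand-in, not necessarily the exact modelling approach of the study.

```python
# A minimal sketch of a presence-absence habitat-suitability model evaluated
# with AUC on held-out sites. Data and predictor names are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n_sites = 600
hedgerow_density = rng.uniform(0, 1, n_sites)    # stand-in predictors
forest_cover = rng.uniform(0, 1, n_sites)
logit = -3.0 + 3.5 * hedgerow_density + 2.0 * forest_cover
presence = rng.random(n_sites) < 1 / (1 + np.exp(-logit))

X = np.column_stack([hedgerow_density, forest_cover])
X_train, X_test, y_train, y_test = train_test_split(
    X, presence, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
suitability = model.predict_proba(X_test)[:, 1]   # habitat-suitability index
print(f"held-out AUC: {roc_auc_score(y_test, suitability):.2f}")
```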
NASA Astrophysics Data System (ADS)
Yue, Songshan; Chen, Min; Wen, Yongning; Lu, Guonian
2016-04-01
Earth environment is extremely complicated and constantly changing; thus, it is widely accepted that the use of a single geo-analysis model cannot accurately represent all details when solving complex geo-problems. Over several years of research, numerous geo-analysis models have been developed. However, a collaborative barrier between model providers and model users still exists. The development of cloud computing has provided a new and promising approach for sharing and integrating geo-analysis models across an open web environment. To share and integrate these heterogeneous models, encapsulation studies should be conducted that are aimed at shielding original execution differences to create services which can be reused in the web environment. Although some model service standards (such as Web Processing Service (WPS) and Geo Processing Workflow (GPW)) have been designed and developed to help researchers construct model services, various problems regarding model encapsulation remain. (1) The descriptions of geo-analysis models are complicated and typically require rich-text descriptions and case-study illustrations, which are difficult to fully represent within a single web request (such as the GetCapabilities and DescribeProcess operations in the WPS standard). (2) Although Web Service technologies can be used to publish model services, model users who want to use a geo-analysis model and copy the model service into another computer still encounter problems (e.g., they cannot access the model deployment dependencies information). This study presents a strategy for encapsulating geo-analysis models to reduce problems encountered when sharing models between model providers and model users and supports the tasks with different web service standards (e.g., the WPS standard). A description method for heterogeneous geo-analysis models is studied. Based on the model description information, the methods for encapsulating the model-execution program to model services and for describing model-service deployment information are also included in the proposed strategy. Hence, the model-description interface, model-execution interface and model-deployment interface are studied to help model providers and model users more easily share, reuse and integrate geo-analysis models in an open web environment. Finally, a prototype system is established, and the WPS standard is employed as an example to verify the capability and practicability of the model-encapsulation strategy. The results show that it is more convenient for modellers to share and integrate heterogeneous geo-analysis models in cloud computing platforms.
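The encapsulation idea (a rich description document plus a remotely callable execution interface) can be illustrated with a small HTTP wrapper. The sketch below is a generic illustration using Flask, not an implementation of the WPS standard or of the prototype system in the study; the model, endpoints and metadata fields are hypothetical.

```python
# A minimal sketch: wrap a trivial geo-analysis model behind a description
# endpoint and an execution endpoint. All names and fields are assumptions.
from flask import Flask, jsonify, request

app = Flask(__name__)

MODEL_DESCRIPTION = {
    "name": "toy_runoff_model",
    "description": "Illustrative rainfall-runoff relation Q = c * P * A.",
    "inputs": {"rainfall_mm": "number", "area_km2": "number", "runoff_coeff": "number"},
    "outputs": {"discharge_m3": "number"},
    "deployment": {"runtime": "python>=3.9", "dependencies": ["flask"]},
}

def run_model(rainfall_mm, area_km2, runoff_coeff):
    """The encapsulated model-execution program (a stand-in calculation)."""
    return runoff_coeff * (rainfall_mm / 1000.0) * (area_km2 * 1e6)

@app.route("/model/description")
def describe():
    # Model-description interface: rich metadata, including deployment info.
    return jsonify(MODEL_DESCRIPTION)

@app.route("/model/execute", methods=["POST"])
def execute():
    # Model-execution interface: accept inputs, run the model, return outputs.
    params = request.get_json()
    q = run_model(params["rainfall_mm"], params["area_km2"], params["runoff_coeff"])
    return jsonify({"discharge_m3": q})

if __name__ == "__main__":
    app.run(port=5000)
```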
ERIC Educational Resources Information Center
Jung, Jae Yup
2014-01-01
This study developed and empirically tested two related models of the occupational/career decision-making processes of gifted adolescents using a competing models strategy. The two models that guided the study, which acknowledged cultural orientations, social influences from the family, occupational/career values, and characteristics of…
ERIC Educational Resources Information Center
Zhou, Bo; Konstorum, Anna; Duong, Thao; Tieu, Kinh H.; Wells, William M.; Brown, Gregory G.; Stern, Hal S.; Shahbaba, Babak
2013-01-01
We propose a hierarchical Bayesian model for analyzing multi-site experimental fMRI studies. Our method takes the hierarchical structure of the data (subjects are nested within sites, and there are multiple observations per subject) into account and allows for modeling between-site variation. Using posterior predictive model checking and model…
ERIC Educational Resources Information Center
Zuzovsky, Ruth; Donitsa-Schmidt, Smadar
2017-01-01
The purpose of the present study was to examine the effectiveness of two common models of initial teacher education programmes that are prevalent in many countries, including Israel. The two are: the concurrent model, in which disciplinary studies and pedagogical studies are integrated and taught at the same time; and the consecutive model, which…
NASA Astrophysics Data System (ADS)
Liu, Jianzhong; Kern, Petra S.; Gerberick, G. Frank; Santos-Filho, Osvaldo A.; Esposito, Emilio X.; Hopfinger, Anton J.; Tseng, Yufeng J.
2008-06-01
In previous studies we have developed categorical QSAR models for predicting skin-sensitization potency based on 4D-fingerprint (4D-FP) descriptors and in vivo murine local lymph node assay (LLNA) measures. Only 4D-FP derived from the ground state (GMAX) structures of the molecules were used to build the QSAR models. In this study we have generated 4D-FP descriptors from the first excited state (EMAX) structures of the molecules. The GMAX, EMAX and the combined ground and excited state 4D-FP descriptors (GEMAX) were employed in building categorical QSAR models. Logistic regression (LR) and partial least square coupled logistic regression (PLS-CLR), found to be effective model-building methods for the LLNA skin-sensitization measures in our previous studies, were used again in this study. This also permitted comparison of the prior ground state models to those involving first excited state 4D-FP descriptors. Three types of categorical QSAR models were constructed for each of the GMAX, EMAX and GEMAX datasets: a binary model (2-state), an ordinal model (3-state) and a binary-binary model (two-2-state). No significant differences were found among the LR 2-state models constructed for the three datasets. However, the PLS-CLR 3-state and 2-state models based on the EMAX and GEMAX datasets have higher predictivity than those constructed using only the GMAX dataset. These EMAX and GEMAX categorical models are also more significant and predictive than the corresponding models built in our previous QSAR studies of LLNA skin-sensitization measures.
Aragón-Noriega, Eugenio Alberto
2013-09-01
Growth models of marine animals, for fisheries and/or aquaculture purposes, are commonly based on the popular von Bertalanffy model. This tool is mostly used because its parameters feed into other fisheries models, such as yield per recruit; nevertheless, there are alternatives (such as the Gompertz, Logistic and Schnute models), rarely used by fishery scientists, that may prove useful depending on the species studied. The penshell Atrina maura has been studied for fisheries and aquaculture purposes, but its individual growth had not been modelled before. The aim of this study was to model the absolute growth of the penshell A. maura using length-age data. Five models were assessed to obtain growth parameters: von Bertalanffy, Gompertz, Logistic, Schnute case 1, and Schnute and Richards. The criteria used to select the best model were the Akaike information criterion, together with the residual sum of squares and adjusted R2. The average asymptotic length was obtained with a multi-model inference approach. According to the Akaike information criterion, the Gompertz model best described the absolute growth of A. maura. Following the multi-model inference approach, the average asymptotic shell length was 218.9 mm (CI 212.3-225.5). I concluded that the multi-model approach combined with the Akaike information criterion is the most robust method for growth parameter estimation of A. maura, and that the von Bertalanffy growth model should not be selected a priori as the true model for absolute growth in bivalve mollusks such as the species studied here.
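The model-selection step described above (fitting several candidate growth curves and ranking them by the Akaike information criterion) can be sketched as follows. This is an illustrative example on synthetic length-at-age data, not the A. maura dataset; the least-squares AIC formula assumes normally distributed residuals.

```python
# Hedged sketch of the model-selection idea in the abstract: fit two candidate
# growth curves to length-at-age data and compare them with AIC.  The data here
# are synthetic; parameter names follow the usual growth-model conventions.
import numpy as np
from scipy.optimize import curve_fit

def von_bertalanffy(t, Linf, k, t0):
    return Linf * (1.0 - np.exp(-k * (t - t0)))

def gompertz(t, Linf, k, t0):
    return Linf * np.exp(-np.exp(-k * (t - t0)))

rng = np.random.default_rng(1)
age = np.linspace(0.5, 8.0, 40)
length = gompertz(age, 220.0, 0.6, 1.5) + rng.normal(0.0, 5.0, age.size)

def aic_least_squares(y, yhat, n_params):
    # AIC for least-squares fits with Gaussian errors.
    n = y.size
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * n_params

results = {}
for name, fn, p0 in [("von Bertalanffy", von_bertalanffy, (250.0, 0.3, 0.0)),
                     ("Gompertz", gompertz, (250.0, 0.3, 1.0))]:
    popt, _ = curve_fit(fn, age, length, p0=p0, maxfev=10000)
    results[name] = aic_least_squares(length, fn(age, *popt), len(popt))

best = min(results, key=results.get)
print(results, "-> lowest AIC:", best)
```

Multi-model inference would then weight each model's asymptotic-length estimate by its Akaike weight rather than keeping only the single best model.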
Khurelbaatar, Tsolmonbaatar; Kim, Kyungsoo; Hyuk Kim, Yoon
2015-11-01
Because of difficulties in the direct measurement of joint loads, computational musculoskeletal models have been developed to predict mechanical joint loads on the human spine, such as the forces and moments applied to vertebral and facet joints and the forces that act on ligaments and muscles. However, many whole-spine models lack certain elements; for example, the detailed facet joints in the cervical region or the whole spine region may not be implemented. In this study, a detailed cervico-thoraco-lumbar multibody musculoskeletal model with all major ligaments, separated structures of facet contact and intervertebral disk joints, and the rib cage was developed. The model was validated by comparing the intersegmental rotations, ligament tensile forces, facet joint contact forces, compressive and shear forces on disks, and muscle forces to those reported in previous experimental and computational studies, both by region (cervical, thoracic, or lumbar) and for the whole model. The comparisons demonstrated that our whole-spine model is consistent with in vitro and in vivo experimental studies and with computational studies. The model developed in this study can be used in further studies to better understand spine structures and the injury mechanisms of spinal disorders.
Contact tracing of tuberculosis: a systematic review of transmission modelling studies.
Begun, Matt; Newall, Anthony T; Marks, Guy B; Wood, James G
2013-01-01
The WHO-recommended intervention of Directly Observed Treatment, Short-course (DOTS) appears to have been less successful than expected in reducing the burden of TB in some high-prevalence settings. One strategy for enhancing DOTS is incorporating active case-finding through screening contacts of TB patients, as is widely used in low-prevalence settings. Predictive models that incorporate population-level effects on transmission provide one means of predicting the impacts of such interventions. We aim to identify all TB transmission modelling studies addressing contact tracing and to describe and critically assess their modelling assumptions, parameter choices and relevance to policy. We searched the MEDLINE, SCOPUS, COMPENDEX, Google Scholar and Web of Science databases for relevant English-language publications up to February 2012. Of the 1285 studies identified, only 5 met our inclusion criteria of models of TB transmission dynamics in human populations designed to incorporate contact tracing as an intervention. Detailed implementation of contact processes was only present in two studies, while only one study presented a model for a high-prevalence, developing-world setting. Some use of relevant data for parameter estimation was made in each study; however, validation of the predicted impact of interventions was not attempted in any of the studies. Despite a large body of literature on TB transmission modelling, few published studies incorporate contact tracing. There is considerable scope for future analyses to make better use of data and to apply individual-based models to facilitate more realistic patterns of infectious contact. Combined with a focus on high-burden settings, this would greatly increase the potential for models to inform the use of contact tracing as a TB control policy. Our findings highlight the potential for collaborative work between clinicians, epidemiologists and modellers to gather the data required to enhance model development and validation and hence better inform future public health policy.
Systematic Review of Model-Based Economic Evaluations of Treatments for Alzheimer's Disease.
Hernandez, Luis; Ozen, Asli; DosSantos, Rodrigo; Getsios, Denis
2016-07-01
Numerous economic evaluations using decision-analytic models have assessed the cost effectiveness of treatments for Alzheimer's disease (AD) in the last two decades. It is important to understand the methods used in the existing models of AD and how they could impact results, as they could inform new model-based economic evaluations of treatments for AD. The aim of this systematic review was to provide a detailed description of the relevant aspects and components of existing decision-analytic models of AD, identifying areas for improvement and future development, and to conduct a quality assessment of the included studies. We performed a systematic and comprehensive review of cost-effectiveness studies of pharmacological treatments for AD published in the last decade (January 2005 to February 2015) that used decision-analytic models, also including studies considering patients with mild cognitive impairment (MCI). The background information of the included studies and specific information on the decision-analytic models, including their approach and components, assumptions, data sources, analyses, and results, were obtained from each study. A description of how the modeling approaches and assumptions differ across studies, identifying areas for improvement and future development, is provided. At the end, we present our own view of the potential future directions of decision-analytic models of AD and the challenges they might face. The included studies present a variety of different approaches, assumptions, and scopes of decision-analytic models used in the economic evaluation of pharmacological treatments of AD. The major areas for improvement in future models of AD are to include domains of cognition, function, and behavior, rather than cognition alone; include a detailed description of how data used to model the natural course of disease progression were derived; state and justify the economic model selected and structural assumptions and limitations; provide a detailed (rather than high-level) description of the cost components included in the model; and report on the face-, internal-, and cross-validity of the model to strengthen the credibility and confidence in model results. The quality scores of most studies were rated as fair to good (average 87.5, range 69.5-100, on a scale of 0-100). Despite the advancements in decision-analytic models of AD, several areas for improvement remain that are necessary to more appropriately and realistically capture the broad nature of AD and the potential benefits of treatments in future models of AD.
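For readers unfamiliar with the decision-analytic models reviewed here, the following toy Markov cohort sketch shows the basic mechanics (state occupancy, discounted costs and QALYs, and an ICER). The states, transition probabilities, costs and utilities are invented placeholders, not values taken from any included study.

```python
# Minimal Markov cohort sketch of the kind of decision-analytic model the review
# covers.  States, transition probabilities, costs and utilities are hypothetical
# placeholders, not inputs from any study discussed in the abstract.
import numpy as np

states = ["mild", "severe", "dead"]
# Annual transition matrices (rows sum to 1): treatment slows progression.
P_no_treat = np.array([[0.70, 0.25, 0.05],
                       [0.00, 0.85, 0.15],
                       [0.00, 0.00, 1.00]])
P_treat    = np.array([[0.78, 0.18, 0.04],
                       [0.00, 0.85, 0.15],
                       [0.00, 0.00, 1.00]])

annual_cost    = np.array([8000.0, 30000.0, 0.0])   # care cost per state-year
annual_utility = np.array([0.7, 0.4, 0.0])          # QALY weights per state
drug_cost = 2500.0                                   # added yearly treatment cost

def run_cohort(P, extra_cost=0.0, years=10, discount=0.03):
    dist = np.array([1.0, 0.0, 0.0])                 # cohort starts in "mild"
    cost = qaly = 0.0
    for year in range(years):
        d = 1.0 / (1.0 + discount) ** year
        alive = dist[:2].sum()
        cost += d * (dist @ annual_cost + extra_cost * alive)
        qaly += d * (dist @ annual_utility)
        dist = dist @ P                              # advance the cohort one year
    return cost, qaly

c0, q0 = run_cohort(P_no_treat)
c1, q1 = run_cohort(P_treat, extra_cost=drug_cost)
print(f"incremental cost = {c1 - c0:.0f}, incremental QALYs = {q1 - q0:.3f}, "
      f"ICER = {(c1 - c0) / (q1 - q0):.0f} per QALY")
```

The review's recommendations (modelling function and behavior as well as cognition, justifying structure, reporting validity) would all enter as refinements of this basic state-transition skeleton.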
A Comparison of Three Approaches to Model Human Behavior
NASA Astrophysics Data System (ADS)
Palmius, Joel; Persson-Slumpi, Thomas
2010-11-01
One way of studying social processes is through the use of simulations. The use of simulations for this purpose has been established as its own field, social simulation, and has been used for studying a variety of phenomena. A simulation of a social setting can serve as an aid for thinking about that social setting, and for experimenting with different parameters and studying the outcomes they cause. When using the simulation as an aid for thinking and experimenting, the chosen simulation approach will implicitly steer the simulationist towards thinking in a certain fashion in order to fit the model. To study the implications of model choice on the understanding of a setting where human anticipation comes into play, a simulation scenario of a coffee room was constructed using three different simulation approaches: Cellular Automata, Systems Dynamics and Agent-based modeling. The practical implementations of the models were done in three different simulation packages: Stella for Systems Dynamics, CaFun for Cellular Automata and SesAM for Agent-based modeling. The models were evaluated both with Randers' criteria for model evaluation and through introspection, where the authors reflected upon how their understanding of the scenario was steered by the model choice. Further, the software used for implementing the simulation models was evaluated, and practical considerations for the choice of software package are listed. It is concluded that the models have very different strengths. The Agent-based modeling approach offers the most intuitive support for thinking about and modeling a social setting where the behavior of the individual is in focus. The Systems Dynamics model would be preferable in situations where populations and large groups are studied as wholes, but where individual behavior is of less concern. The Cellular Automata models would be preferable where processes need to be studied on the basis of a small set of very simple rules. It is further concluded that in most social simulation settings the Agent-based modeling approach would be the probable choice, since the other approaches do not offer much support for modeling the anticipatory behavior of humans acting in an organization.
Risk prediction models of breast cancer: a systematic review of model performances.
Anothaisintawee, Thunyarat; Teerawattananon, Yot; Wiratkapun, Chollathip; Kasamesup, Vijj; Thakkinstian, Ammarin
2012-05-01
An increasing number of risk prediction models have been developed to estimate the risk of breast cancer in individual women. However, the performance of these models is questionable. We therefore conducted a study to systematically review previous risk prediction models. The results of this review help to identify the most reliable model and indicate the strengths and weaknesses of each model to guide future model development. We searched MEDLINE (PubMed) from 1949 and EMBASE (Ovid) from 1974 until October 2010. Observational studies which constructed models using regression methods were selected. Information about model development and performance was extracted. Twenty-five out of 453 studies were eligible. Of these, 18 developed prediction models and 7 validated existing prediction models. Up to 13 variables were included in the models, and sample sizes for each study ranged from 550 to 2,404,636. Internal validation was performed for four models, while five models had external validation. The Gail model and the Rosner and Colditz model were the significant models that were subsequently modified by other scholars. The calibration performance of most models was fair to good (expected/observed ratio: 0.87-1.12), but discriminatory accuracy was poor to fair both in internal validation (concordance statistic: 0.53-0.66) and in external validation (concordance statistic: 0.56-0.63). Most models yielded relatively poor discrimination in both internal and external validation. This poor discriminatory accuracy of existing models might be due to a lack of knowledge about risk factors, heterogeneous subtypes of breast cancer, and different distributions of risk factors across populations. In addition, the concordance statistic itself is insensitive to improvements in discrimination. Therefore, newer methods such as the net reclassification index should be considered to evaluate the improvement in performance of a newly developed model.
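The two performance measures reported in this review, calibration (expected/observed ratio) and discrimination (concordance statistic), can be computed as in the short sketch below; the predicted risks and outcomes are made-up numbers for illustration only.

```python
# Sketch of the two performance measures in the review: the expected/observed
# (E/O) calibration ratio and the concordance statistic (AUC).  Risks and
# outcomes below are illustrative, not data from any reviewed model.
import numpy as np

predicted_risk = np.array([0.02, 0.05, 0.10, 0.03, 0.20, 0.08, 0.15, 0.01])
observed       = np.array([0,    0,    1,    0,    1,    0,    1,    0   ])

# Calibration: ratio of expected cases to observed cases (ideal value = 1).
e_over_o = predicted_risk.sum() / observed.sum()

# Discrimination: probability that a randomly chosen case has a higher predicted
# risk than a randomly chosen non-case (ties count one half).
cases, controls = predicted_risk[observed == 1], predicted_risk[observed == 0]
pairs = cases[:, None] - controls[None, :]
c_statistic = (np.sum(pairs > 0) + 0.5 * np.sum(pairs == 0)) / pairs.size

print(f"E/O = {e_over_o:.2f}, c-statistic = {c_statistic:.2f}")
```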
Lu, Dan; Ye, Ming; Curtis, Gary P.
2015-08-01
While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. Our study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. Moreover, these reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Finally, limitations of applying MLBMA to the synthetic study and future real-world modeling are discussed.
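The model-averaging step itself can be illustrated with a small sketch: posterior model weights derived from information-criterion differences (the core idea behind MLBMA), followed by a weighted mean prediction with within- and between-model variance. The criterion values and predictions below are illustrative, not from the Naturita case.

```python
# Hedged sketch of maximum-likelihood Bayesian model averaging: posterior model
# weights from information-criterion differences, then a weighted prediction and
# its between-model variance.  All numbers are illustrative placeholders.
import numpy as np

ic = np.array([112.4, 110.1, 115.7])        # e.g. KIC/BIC of three alternative models
prior = np.array([1/3, 1/3, 1/3])           # prior model probabilities
pred = np.array([3.2, 2.8, 4.1])            # each model's prediction of interest
pred_var = np.array([0.4, 0.5, 0.6])        # each model's within-model variance

delta = ic - ic.min()
weights = prior * np.exp(-0.5 * delta)
weights /= weights.sum()                    # posterior model probabilities

bma_mean = np.sum(weights * pred)
# Total predictive variance = within-model + between-model components.
bma_var = np.sum(weights * (pred_var + (pred - bma_mean) ** 2))
print("weights:", np.round(weights, 3),
      "BMA mean:", round(bma_mean, 2), "BMA var:", round(bma_var, 2))
```

Assigning smaller prior probabilities to structurally correlated models, as the study describes, simply changes the prior vector before the weights are renormalised.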
Takagi-Sugeno-Kang fuzzy models of the rainfall-runoff transformation
NASA Astrophysics Data System (ADS)
Jacquin, A. P.; Shamseldin, A. Y.
2009-04-01
Fuzzy inference systems, or fuzzy models, are non-linear models that describe the relation between the inputs and the output of a real system using a set of fuzzy IF-THEN rules. This study deals with the application of Takagi-Sugeno-Kang type fuzzy models to the development of rainfall-runoff models operating on a daily basis, using a system based approach. The models proposed are classified in two types, each intended to account for different kinds of dominant non-linear effects in the rainfall-runoff relationship. Fuzzy models type 1 are intended to incorporate the effect of changes in the prevailing soil moisture content, while fuzzy models type 2 address the phenomenon of seasonality. Each model type consists of five fuzzy models of increasing complexity; the most complex fuzzy model of each model type includes all the model components found in the remaining fuzzy models of the respective type. The models developed are applied to data of six catchments from different geographical locations and sizes. Model performance is evaluated in terms of two measures of goodness of fit, namely the Nash-Sutcliffe criterion and the index of volumetric fit. The results of the fuzzy models are compared with those of the Simple Linear Model, the Linear Perturbation Model and the Nearest Neighbour Linear Perturbation Model, which use similar input information. Overall, the results of this study indicate that Takagi-Sugeno-Kang fuzzy models are a suitable alternative for modelling the rainfall-runoff relationship. However, it is also observed that increasing the complexity of the model structure does not necessarily produce an improvement in the performance of the fuzzy models. The relative importance of the different model components in determining the model performance is evaluated through sensitivity analysis of the model parameters in the accompanying study presented in this meeting. Acknowledgements: We would like to express our gratitude to Prof. Kieran M. O'Connor from the National University of Ireland, Galway, for providing the data used in this study.
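A minimal illustration of a first-order Takagi-Sugeno-Kang rule base for a one-input rainfall-runoff relation, together with the Nash-Sutcliffe criterion used as a goodness-of-fit measure in the study, is sketched below. The membership-function parameters and rule consequents are invented for the example and do not come from the paper.

```python
# Illustrative first-order Takagi-Sugeno-Kang rule evaluation for a one-input
# rainfall-runoff relation, plus the Nash-Sutcliffe criterion.  Membership
# parameters, rule consequents and data are made up for the example.
import numpy as np

def gauss(x, c, s):
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def tsk_runoff(rain):
    # Rule 1: IF rain is LOW  THEN runoff = 0.1*rain + 0.2
    # Rule 2: IF rain is HIGH THEN runoff = 0.6*rain - 3.0
    w1, w2 = gauss(rain, 5.0, 4.0), gauss(rain, 25.0, 8.0)
    y1, y2 = 0.1 * rain + 0.2, 0.6 * rain - 3.0
    return (w1 * y1 + w2 * y2) / (w1 + w2)   # weighted average of rule outputs

def nash_sutcliffe(obs, sim):
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rain = np.array([2.0, 8.0, 15.0, 22.0, 30.0])
obs_runoff = np.array([0.5, 1.2, 4.0, 10.5, 15.0])
sim = tsk_runoff(rain)
print("simulated:", np.round(sim, 2),
      "NSE:", round(nash_sutcliffe(obs_runoff, sim), 3))
```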
Artificial Neural Network versus Linear Models Forecasting Doha Stock Market
NASA Astrophysics Data System (ADS)
Yousif, Adil; Elfaki, Faiz
2017-12-01
The purpose of this study is to determine the instability of the Doha stock market and to develop forecasting models. Linear time series models are used and compared with a nonlinear Artificial Neural Network (ANN), namely the Multilayer Perceptron (MLP) technique. It aims to establish the most useful model based on daily and monthly data collected from the Qatar Exchange for the period from January 2007 to January 2015. Models are proposed for the general index of the Qatar stock exchange as well as for several other sectors. With these models, the Doha stock market index and various sectors were predicted. The study used various time series techniques to analyze data trends and produce appropriate results. After applying several models, including the quadratic trend model, double exponential smoothing, and ARIMA, it was concluded that an ARIMA (2,2) model was the most suitable linear model for the daily general index. However, the ANN model was found to be more accurate than the time series models.
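The linear-model side of such a comparison can be sketched with statsmodels: fit an ARIMA model to a daily index series and produce short-horizon forecasts. The series below is synthetic and the (p, d, q) order is chosen only for illustration, not taken from the study.

```python
# Sketch of fitting an ARIMA model to a daily index series with statsmodels.
# The "index" series is synthetic and the order is illustrative only.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
# Synthetic index with drift and noise, standing in for a daily market series.
index = 8000 + np.cumsum(rng.normal(2.0, 30.0, 500))

model = ARIMA(index, order=(2, 1, 2))
fitted = model.fit()
forecast = fitted.forecast(steps=5)       # next five daily values
print("AIC:", round(fitted.aic, 1), "forecast:", np.round(forecast, 1))
```

An MLP comparison would train a feed-forward network on lagged index values and compare its out-of-sample errors with those of the fitted ARIMA model.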
Kaneko, Masato; Tanigawa, Takahiko; Hashizume, Kensei; Kajikawa, Mariko; Tajiri, Masahiro; Mueck, Wolfgang
2013-01-01
This study was designed to confirm the appropriateness of the dose setting for a Japanese phase III study of rivaroxaban in patients with non-valvular atrial fibrillation (NVAF), which had been based on model simulation employing phase II study data. The previously developed mixed-effects pharmacokinetic/pharmacodynamic (PK-PD) model, which consisted of an oral one-compartment model parameterized in terms of clearance, volume and a first-order absorption rate, was rebuilt and optimized using the data for 597 subjects from the Japanese phase III study, J-ROCKET AF. A mixed-effects modeling technique in NONMEM was used to quantify both unexplained inter-individual variability and inter-occasion variability, which are random effect parameters. The final PK and PK-PD models were evaluated to identify influential covariates. The empirical Bayes estimates of AUC and C(max) from the final PK model were consistent with the simulated results from the Japanese phase II study. There was no clear relationship between individual estimated exposures and safety-related events, and the estimated exposure levels were consistent with the global phase III data. Therefore, it was concluded that the dose selected for the phase III study with Japanese NVAF patients by means of model simulation employing phase II study data had been appropriate from the PK-PD perspective.
Nicolas, Renaud; Sibon, Igor; Hiba, Bassem
2015-01-01
The diffusion-weighted-dependent attenuation of the MRI signal E(b) is extremely sensitive to microstructural features. The aim of this study was to determine which mathematical model of the E(b) signal most accurately describes it in the brain. The models compared were the monoexponential model, the stretched exponential model, the truncated cumulant expansion (TCE) model, the biexponential model, and the triexponential model. Acquisition was performed with nine b-values up to 2500 s/mm(2) in 12 healthy volunteers. The goodness-of-fit was studied with F-tests and with the Akaike information criterion. Tissue contrasts were differentiated with a multiple comparison corrected nonparametric analysis of variance. F-test showed that the TCE model was better than the biexponential model in gray and white matter. Corrected Akaike information criterion showed that the TCE model has the best accuracy and produced the most reliable contrasts in white matter among all models studied. In conclusion, the TCE model was found to be the best model to infer the microstructural properties of brain tissue.
Vuong, Kylie; Armstrong, Bruce K; Weiderpass, Elisabete; Lund, Eiliv; Adami, Hans-Olov; Veierod, Marit B; Barrett, Jennifer H; Davies, John R; Bishop, D Timothy; Whiteman, David C; Olsen, Catherine M; Hopper, John L; Mann, Graham J; Cust, Anne E; McGeechan, Kevin
2016-08-01
Identifying individuals at high risk of melanoma can optimize primary and secondary prevention strategies. To develop and externally validate a risk prediction model for incident first-primary cutaneous melanoma using self-assessed risk factors. We used unconditional logistic regression to develop a multivariable risk prediction model. Relative risk estimates from the model were combined with Australian melanoma incidence and competing mortality rates to obtain absolute risk estimates. A risk prediction model was developed using the Australian Melanoma Family Study (629 cases and 535 controls) and externally validated using 4 independent population-based studies: the Western Australia Melanoma Study (511 case-control pairs), Leeds Melanoma Case-Control Study (960 cases and 513 controls), Epigene-QSkin Study (44 544, of which 766 with melanoma), and Swedish Women's Lifestyle and Health Cohort Study (49 259 women, of which 273 had melanoma). We validated model performance internally and externally by assessing discrimination using the area under the receiver operating curve (AUC). Additionally, using the Swedish Women's Lifestyle and Health Cohort Study, we assessed model calibration and clinical usefulness. The risk prediction model included hair color, nevus density, first-degree family history of melanoma, previous nonmelanoma skin cancer, and lifetime sunbed use. On internal validation, the AUC was 0.70 (95% CI, 0.67-0.73). On external validation, the AUC was 0.66 (95% CI, 0.63-0.69) in the Western Australia Melanoma Study, 0.67 (95% CI, 0.65-0.70) in the Leeds Melanoma Case-Control Study, 0.64 (95% CI, 0.62-0.66) in the Epigene-QSkin Study, and 0.63 (95% CI, 0.60-0.67) in the Swedish Women's Lifestyle and Health Cohort Study. Model calibration showed close agreement between predicted and observed numbers of incident melanomas across all deciles of predicted risk. In the external validation setting, there was higher net benefit when using the risk prediction model to classify individuals as high risk compared with classifying all individuals as high risk. The melanoma risk prediction model performs well and may be useful in prevention interventions reliant on a risk assessment using self-assessed risk factors.
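The step of converting a relative-risk estimate into an absolute risk by combining it with age-specific incidence and competing mortality can be sketched as below. All rates and the relative risk are illustrative placeholders, not the Australian inputs used in the study.

```python
# Hedged sketch of turning a relative-risk estimate from a logistic model into an
# absolute risk using baseline incidence and competing mortality.  All rates and
# the relative risk are illustrative, not values from the study.
import numpy as np

ages = np.arange(40, 50)                       # 10-year projection window
baseline_incidence = np.full(ages.size, 4e-4)  # melanoma incidence per year
competing_mortality = np.full(ages.size, 3e-3) # non-melanoma mortality per year
relative_risk = 2.4                            # from the individual's risk factors

surv = 1.0
absolute_risk = 0.0
for inc, mort in zip(baseline_incidence, competing_mortality):
    hazard = inc * relative_risk
    # Probability of a first melanoma this year, given event-free so far.
    absolute_risk += surv * (1.0 - np.exp(-hazard))
    surv *= np.exp(-(hazard + mort))           # remain event-free and alive

print(f"10-year absolute risk: {absolute_risk:.3%}")
```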
Lockwood, Penelope; Marshall, Tara C; Sadler, Pamela
2005-03-01
In two studies, cross-cultural differences in reactions to positive and negative role models were examined. The authors predicted that individuals from collectivistic cultures, who have a stronger prevention orientation, would be most motivated by negative role models, who highlight a strategy of avoiding failure; individuals from individualistic cultures, who have a stronger promotion focus, would be most motivated by positive role models, who highlight a strategy of pursuing success. In Study 1, the authors examined participants' reported preferences for positive and negative role models. Asian Canadian participants reported finding negative models more motivating than did European Canadians; self-construals and regulatory focus mediated cultural differences in reactions to role models. In Study 2, the authors examined the impact of role models on the academic motivation of Asian Canadian and European Canadian participants. Asian Canadians were motivated only by a negative model, and European Canadians were motivated only by a positive model.
Estimating parameter values of a socio-hydrological flood model
NASA Astrophysics Data System (ADS)
Holkje Barendrecht, Marlies; Viglione, Alberto; Kreibich, Heidi; Vorogushyn, Sergiy; Merz, Bruno; Blöschl, Günter
2018-06-01
Socio-hydrological modelling studies that have been published so far show that dynamic coupled human-flood models are a promising tool to represent the phenomena and the feedbacks in human-flood systems. So far these models are mostly generic and have not been developed and calibrated to represent specific case studies. We believe that applying and calibrating these types of models to real-world case studies can help us further develop our understanding of the phenomena that occur in these systems. In this paper we propose a method to estimate the parameter values of a socio-hydrological model and we test it by applying it to an artificial case study. We postulate a model that describes the feedbacks between floods, awareness and preparedness. After simulating hypothetical time series with a given combination of parameters, we sample a few data points for our variables and try to estimate the parameters given these data points using Bayesian inference. The results show that, if we are able to collect data for our case study, we would, in theory, be able to estimate the parameter values for our socio-hydrological flood model.
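A minimal sketch of the estimation idea, inferring a single parameter of a toy decay model from a few sampled data points with a random-walk Metropolis sampler, is given below. The model, prior and data are illustrative and far simpler than the flood-awareness-preparedness model in the paper.

```python
# Minimal Metropolis sketch of Bayesian parameter estimation from a few data
# points.  The toy model (exponential decay, e.g. awareness fading between
# floods), the prior and the data are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(42)
t_obs = np.array([1.0, 3.0, 6.0, 10.0])                 # few observation times
true_k = 0.25
y_obs = np.exp(-true_k * t_obs) + rng.normal(0, 0.03, t_obs.size)

def log_posterior(k, sigma=0.03):
    if k <= 0 or k > 2.0:                               # uniform prior on (0, 2]
        return -np.inf
    resid = y_obs - np.exp(-k * t_obs)
    return -0.5 * np.sum((resid / sigma) ** 2)

samples, k = [], 0.5
lp = log_posterior(k)
for _ in range(20000):
    k_new = k + rng.normal(0, 0.05)                     # random-walk proposal
    lp_new = log_posterior(k_new)
    if np.log(rng.uniform()) < lp_new - lp:             # Metropolis acceptance
        k, lp = k_new, lp_new
    samples.append(k)

post = np.array(samples[5000:])                         # discard burn-in
print(f"posterior mean k = {post.mean():.3f} (true {true_k})")
```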
Economic modeling of HIV treatments.
Simpson, Kit N
2010-05-01
The aims were to review the general literature on microeconomic modeling and the key points that must be considered in the general assessment of economic modeling reports, to discuss the evolution of HIV economic models and identify models that illustrate this development over time, as well as examples of current studies, and to recommend improvements in HIV economic modeling. Recent economic modeling studies of HIV include examinations of scaling up antiretroviral (ARV) therapy in South Africa, screening prior to the use of abacavir, preexposure prophylaxis, early initiation of ARV therapy in developing countries, and cost-effectiveness comparisons of specific ARV drugs using data from clinical trials. These studies all used extensively published second-generation Markov models in their analyses. There have been attempts to simplify approaches to cost-effectiveness estimation by using simple decision trees or cost-effectiveness calculations with short time horizons. However, these approaches leave out important cumulative economic effects that will not appear early in a treatment. Many economic modeling studies were identified in the 'gray' literature, but limited descriptions precluded an assessment of their adherence to modeling guidelines, and thus of the validity of their findings. There is a need to develop third-generation models that accommodate new knowledge about adherence, adverse effects, and viral resistance.
Chen, Jing
2017-04-01
This study calculates and compares the lifetime lung cancer risks associated with indoor radon exposure based on well-known risk models in the literature; two risk models are from joint studies among miners and the other three models were developed from pooling studies on residential radon exposure from China, Europe and North America respectively. The aim of this article is to make clear that the various models are mathematical descriptions of epidemiologically observed real risks in different environmental settings. The risk from exposure to indoor radon is real and it is normal that variations could exist among different risk models even when they were applied to the same dataset. The results show that lifetime risk estimates vary significantly between the various risk models considered here: the model based on the European residential data provides the lowest risk estimates, while models based on the European miners and Chinese residential pooling with complete dosimetry give the highest values. The lifetime risk estimates based on the EPA/BEIR-VI model lie within this range and agree reasonably well with the averages of risk estimates from the five risk models considered in this study. © Crown copyright 2016.
Bayesian inference in camera trapping studies for a class of spatial capture-recapture models
Royle, J. Andrew; Karanth, K. Ullas; Gopalaswamy, Arjun M.; Kumar, N. Samba
2009-01-01
We develop a class of models for inference about abundance or density using spatial capture-recapture data from studies based on camera trapping and related methods. The model is a hierarchical model composed of two components: a point process model describing the distribution of individuals in space (or their home range centers) and a model describing the observation of individuals in traps. We suppose that trap- and individual-specific capture probabilities are a function of distance between individual home range centers and trap locations. We show that the models can be regarded as generalized linear mixed models, where the individual home range centers are random effects. We adopt a Bayesian framework for inference under these models using a formulation based on data augmentation. We apply the models to camera trapping data on tigers from the Nagarahole Reserve, India, collected over 48 nights in 2006. For this study, 120 camera locations were used, but cameras were only operational at 30 locations during any given sample occasion. Movement of traps is common in many camera-trapping studies and represents an important feature of the observation model that we address explicitly in our application.
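The distance-dependent observation model at the heart of such spatial capture-recapture analyses can be sketched in a few lines; a half-normal detection function is one common choice, and the parameter values below are illustrative rather than estimates from the tiger study.

```python
# Sketch of the spatial capture-recapture observation model: trap- and
# individual-specific capture probability declining with distance between a home
# range centre and a trap (half-normal form).  All values are illustrative.
import numpy as np

# 3 x 3 grid of camera-trap locations (km).
trap_x, trap_y = np.meshgrid(np.arange(3.0), np.arange(3.0))
traps = np.column_stack([trap_x.ravel(), trap_y.ravel()])

centre = np.array([1.2, 0.8])     # an individual's activity/home-range centre
p0, sigma = 0.3, 0.6              # baseline detection and spatial scale parameters

d = np.linalg.norm(traps - centre, axis=1)
p_capture = p0 * np.exp(-d**2 / (2.0 * sigma**2))

for (x, y), p in zip(traps, p_capture):
    print(f"trap ({x:.0f},{y:.0f}): capture prob per occasion = {p:.3f}")
```

In the full hierarchical model, the home-range centres are treated as random effects and inference proceeds by MCMC with data augmentation, with inactive cameras handled through an occasion-specific operational mask.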
Fathallah, F A; Marras, W S; Parnianpour, M
1999-09-01
Most biomechanical assessments of spinal loading during industrial work have focused on estimating peak spinal compressive forces under static and sagittally symmetric conditions. The main objective of this study was to explore the potential of feasibly predicting three-dimensional (3D) spinal loading in industry from various combinations of trunk kinematics, kinetics, and subject-load characteristics. The study used spinal loading, predicted by a validated electromyography-assisted model, from 11 male participants who performed a series of symmetric and asymmetric lifts. Three classes of models were developed: (a) models using workplace, subject, and trunk motion parameters as independent variables (kinematic models); (b) models using workplace, subject, and measured moments variables (kinetic models); and (c) models incorporating workplace, subject, trunk motion, and measured moments variables (combined models). The results showed that peak 3D spinal loading during symmetric and asymmetric lifting were predicted equally well using all three types of regression models. Continuous 3D loading was predicted best using the combined models. When the use of such models is infeasible, the kinematic models can provide adequate predictions. Finally, lateral shear forces (peak and continuous) were consistently underestimated using all three types of models. The study demonstrated the feasibility of predicting 3D loads on the spine under specific symmetric and asymmetric lifting tasks without the need for collecting EMG information. However, further validation and development of the models should be conducted to assess and extend their applicability to lifting conditions other than those presented in this study. Actual or potential applications of this research include exposure assessment in epidemiological studies, ergonomic intervention, and laboratory task assessment.
Demonstration of reduced-order urban scale building energy models
Heidarinejad, Mohammad; Mattise, Nicholas; Dahlhausen, Matthew; ...
2017-09-08
The aim of this study is to demonstrate a developed framework to rapidly create urban-scale reduced-order building energy models using a systematic summary of the simplifications required for the representation of building exteriors and thermal zones. These urban-scale reduced-order models rely on the contribution of influential variables to the internal, external, and system thermal loads. The OpenStudio Application Programming Interface (API) serves as a tool to automate the process of model creation and demonstrate the developed framework. The results of this study show that the accuracy of the developed reduced-order building energy models varies only up to 10% with the selection of different thermal zones. In addition, to assess the complexity of the developed reduced-order building energy models, this study develops a novel framework to quantify the complexity of building energy models. Consequently, this study empowers building energy modelers to quantify their building energy models systematically in order to report model complexity alongside model accuracy. An exhaustive analysis of four university campuses suggests that urban neighborhood buildings lend themselves to simplified typical shapes. Specifically, building energy modelers can utilize the developed typical shapes to represent more than 80% of the U.S. buildings documented in the CBECS database. One of the main benefits of this framework is the opportunity for different models, including airflow and solar radiation models, to share the same exterior representation, allowing a unified data exchange. Altogether, the results of this study have implications for large-scale modeling of buildings in support of urban energy consumption analyses or the assessment of a large number of alternative solutions in support of retrofit decision-making in the building industry.
The Role of Multimodel Combination in Improving Streamflow Prediction
NASA Astrophysics Data System (ADS)
Arumugam, S.; Li, W.
2008-12-01
Model errors are an inevitable part of any prediction exercise. One approach that is currently gaining attention for reducing model errors is optimally combining multiple models to develop improved predictions. The rationale behind this approach lies in the premise that optimal weights can be derived for each model so that the resulting multimodel predictions have improved predictability. In this study, we present a new approach to combining multiple hydrological models by evaluating their predictability contingent on the predictor state. We combine two hydrological models, the 'abcd' model and the Variable Infiltration Capacity (VIC) model, with each model's parameters estimated by two different objective functions, to develop multimodel streamflow predictions. The performance of the multimodel predictions is compared with individual model predictions using correlation, root mean square error and the Nash-Sutcliffe coefficient. To quantify precisely under what conditions the multimodel predictions result in improvements, we evaluate the proposed algorithm by testing it against streamflow generated from a known model (the 'abcd' model or the VIC model) with errors that are either homoscedastic or heteroscedastic. Results from the study show that under almost no model error, streamflow simulated by the individual models performed better than the multimodel. Under increased model error, the multimodel consistently performed better than the single-model predictions in terms of all performance measures. The study also evaluates the proposed algorithm for streamflow predictions in two humid river basins in North Carolina as well as two arid basins in Arizona. Through detailed validation at these four sites, the study shows that the multimodel approach better predicts the observed streamflow in comparison to the single-model predictions.
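The combination idea can be illustrated with a simple sketch: derive weights for two model simulations by regressing observed streamflow on them, then compare single-model and multimodel RMSE. The "model" outputs below are synthetic stand-ins, not output from the 'abcd' or VIC models, and the regression weighting is only one of several possible combination schemes.

```python
# Illustrative multimodel combination: weights from a least-squares regression of
# observed streamflow on two model simulations, then RMSE comparison.  The
# simulated series are synthetic stand-ins for the two hydrological models.
import numpy as np

rng = np.random.default_rng(7)
obs = 50 + 20 * np.sin(np.linspace(0, 6, 120)) + rng.normal(0, 3, 120)
sim_a = obs + rng.normal(0, 8, obs.size)      # stand-in for model A
sim_b = obs + rng.normal(5, 6, obs.size)      # stand-in for model B (biased)

X = np.column_stack([sim_a, sim_b, np.ones(obs.size)])
weights, *_ = np.linalg.lstsq(X, obs, rcond=None)
combined = X @ weights

rmse = lambda y, yhat: float(np.sqrt(np.mean((y - yhat) ** 2)))
print("RMSE A:", round(rmse(obs, sim_a), 2),
      "RMSE B:", round(rmse(obs, sim_b), 2),
      "RMSE multimodel:", round(rmse(obs, combined), 2))
```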
Sniekers, Yvonne H; Intema, Femke; Lafeber, Floris P J G; van Osch, Gerjo J V M; van Leeuwen, Johannes P T M; Weinans, Harrie; Mastbergen, Simon C
2008-02-12
This study evaluates changes in peri-articular bone in two canine models for osteoarthritis: the groove model and the anterior cruciate ligament transection (ACLT) model. Evaluation was performed at 10 and 20 weeks post-surgery and in addition a 3-weeks time point was studied for the groove model. Cartilage was analysed, and architecture of the subchondral plate and trabecular bone of epiphyses was quantified using micro-CT. At 10 and 20 weeks cartilage histology and biochemistry demonstrated characteristic features of osteoarthritis in both models (very mild changes at 3 weeks). The groove model presented osteophytes only at 20 weeks, whereas the ACLT model showed osteophytes already at 10 weeks. Trabecular bone changes in the groove model were small and not consistent. This contrasts the ACLT model in which bone volume fraction was clearly reduced at 10 and 20 weeks (15-20%). However, changes in metaphyseal bone indicate unloading in the ACLT model, not in the groove model. For both models the subchondral plate thickness was strongly reduced (25-40%) and plate porosity was strongly increased (25-85%) at all time points studied. These findings show differential regulation of subchondral trabecular bone in the groove and ACLT model, with mild changes in the groove model and more severe changes in the ACLT model. In the ACLT model, part of these changes may be explained by unloading of the treated leg. In contrast, subchondral plate thinning and increased porosity were very consistent in both models, independent of loading conditions, indicating that this thinning is an early response in the osteoarthritis process.
A study comparison of two system model performance in estimated lifted index over Indonesia.
NASA Astrophysics Data System (ADS)
lestari, Juliana tri; Wandala, Agie
2018-05-01
Lifted index (LI) is one of the atmospheric stability indices used for thunderstorm forecasting, and numerical weather prediction (NWP) models are essential for accurate weather forecasting today. This study compares two NWP models, the Weather Research and Forecasting (WRF) model and the Global Forecast System (GFS) model, in estimating LI at 20 locations over Indonesia, and verifies the results against observations. A Taylor diagram was used to compare the models' skill by showing the standard deviation, correlation coefficient and root mean square error (RMSE). The study uses data at 00.00 UTC and 12.00 UTC from mid-March to mid-April 2017. From the sample of LI distributions, both models tend to overestimate LI values in almost all regions of Indonesia, while the WRF model captures the observed LI pattern better than the GFS model. The verification results show that both the WRF and GFS models have only a weak relationship with observations, except at the Eltari meteorological station, where the correlation coefficient reaches almost 0.6 with a low RMSE value. Overall, the WRF model performs better than the GFS model. This study suggests that LI estimated with the WRF model can provide good guidance for thunderstorm forecasting over Indonesia in the future. However, the insufficient agreement between the model outputs and observations at certain locations needs further investigation.
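The three statistics summarised on a Taylor diagram (standard deviation, correlation coefficient and centred RMSE) can be computed as in the sketch below; the lifted-index series are synthetic placeholders rather than WRF, GFS or station data.

```python
# Sketch of the statistics shown on a Taylor diagram for a model series against
# observations.  The lifted-index series below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(3)
obs = rng.normal(-2.0, 2.5, 60)               # "observed" lifted index
model = 0.8 * obs + rng.normal(0.5, 1.5, 60)  # a model estimate of the same index

corr = np.corrcoef(obs, model)[0, 1]
std_obs, std_mod = obs.std(), model.std()
# Centred RMSE: bias removed, so it reflects pattern error only.
centred_rmse = np.sqrt(np.mean(((model - model.mean()) - (obs - obs.mean())) ** 2))

print(f"r = {corr:.2f}, sigma_obs = {std_obs:.2f}, "
      f"sigma_model = {std_mod:.2f}, centred RMSE = {centred_rmse:.2f}")
```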
How motivation affects academic performance: a structural equation modelling analysis.
Kusurkar, R A; Ten Cate, Th J; Vos, C M P; Westers, P; Croiset, G
2013-03-01
Few studies in medical education have examined the effect of quality of motivation on performance. Self-Determination Theory, based on quality of motivation, differentiates between Autonomous Motivation (AM), which originates within an individual, and Controlled Motivation (CM), which originates from external sources. The aim was to determine whether Relative Autonomous Motivation (RAM, a measure of the balance between AM and CM) affects academic performance through good study strategy and higher study effort, and to compare this model between subgroups: males and females, and students selected via two different systems, namely qualitative and weighted lottery selection. Data on motivation, study strategy and effort were collected from 383 medical students of VU University Medical Center Amsterdam, and their academic performance results were obtained from the student administration. A Structural Equation Modelling analysis was used to test a hypothesized model in which high RAM would positively affect Good Study Strategy (GSS) and study effort, which in turn would positively affect academic performance in the form of grade point averages. This model fitted the data well (chi-square = 1.095, df = 3, p = 0.778, RMSEA = 0.000). The model also fitted well for all tested subgroups of students. Differences were found in the strength of the relationships between the variables for the different subgroups, as expected. In conclusion, RAM positively correlated with academic performance through a deep study strategy and higher study effort. This model appears valid in medical education for subgroups such as males, females, and students selected by qualitative and weighted lottery selection.
Barker, Charlotte I.S.; Germovsek, Eva; Hoare, Rollo L.; Lestner, Jodi M.; Lewis, Joanna; Standing, Joseph F.
2014-01-01
Pharmacokinetic/pharmacodynamic (PKPD) modelling is used to describe and quantify dose–concentration–effect relationships. Within paediatric studies in infectious diseases and immunology these methods are often applied to developing guidance on appropriate dosing. In this paper, an introduction to the field of PKPD modelling is given, followed by a review of the PKPD studies that have been undertaken in paediatric infectious diseases and immunology. The main focus is on identifying the methodological approaches used to define the PKPD relationship in these studies. The major findings were that most studies of infectious diseases have developed a PK model and then used simulations to define a dose recommendation based on a pre-defined PD target, which may have been defined in adults or in vitro. For immunological studies much of the modelling has focused on either PK or PD, and since multiple drugs are usually used, delineating the relative contributions of each is challenging. The use of dynamical modelling of in vitro antibacterial studies, and paediatric HIV mechanistic PD models linked with the PK of all drugs, are emerging methods that should enhance PKPD-based recommendations in the future. PMID:24440429
Study of indoor radon distribution using measurements and CFD modeling.
Chauhan, Neetika; Chauhan, R P; Joshi, M; Agarwal, T K; Aggarwal, Praveen; Sahoo, B K
2014-10-01
Measurement and/or prediction of indoor radon ((222)Rn) concentration are important due to the impact of radon on indoor air quality and consequent inhalation hazard. In recent times, computational fluid dynamics (CFD) based modeling has become the cost effective replacement of experimental methods for the prediction and visualization of indoor pollutant distribution. The aim of this study is to implement CFD based modeling for studying indoor radon gas distribution. This study focuses on comparison of experimentally measured and CFD modeling predicted spatial distribution of radon concentration for a model test room. The key inputs for simulation viz. radon exhalation rate and ventilation rate were measured as a part of this study. Validation experiments were performed by measuring radon concentration at different locations of test room using active (continuous radon monitor) and passive (pin-hole dosimeters) techniques. Modeling predictions have been found to be reasonably matching with the measurement results. The validated model can be used to understand and study factors affecting indoor radon distribution for more realistic indoor environment. Copyright © 2014 Elsevier Ltd. All rights reserved.
Fostering Transfer of Study Strategies: A Spiral Model.
ERIC Educational Resources Information Center
Davis, Denise M.; Clery, Carolsue
1994-01-01
Describes the design and implementation of a Spiral Model for the introduction and repeated practice of study strategies, based on Taba's model for social studies. In a college reading and studies strategies course, key strategies were introduced early and used through several sets of humanities and social and physical sciences readings. (Contains…
Mathematical Modelling Research in Turkey: A Content Analysis Study
ERIC Educational Resources Information Center
Çelik, H. Coskun
2017-01-01
The aim of the present study was to examine the mathematical modelling studies done between 2004 and 2015 in Turkey and to reveal their tendencies. Forty-nine studies were selected using purposeful sampling based on the term, "mathematical modelling" with Higher Education Academic Search Engine. They were analyzed with content analysis.…
Two simple models of classical heat pumps.
Marathe, Rahul; Jayannavar, A M; Dhar, Abhishek
2007-03-01
Motivated by recent studies of models of particle and heat quantum pumps, we study similar simple classical models and examine the possibility of heat pumping. Unlike many of the usual ratchet models of molecular engines, the models we study do not have particle transport. We consider a two-spin system and a coupled oscillator system which exchange heat with multiple heat reservoirs and which are acted upon by periodic forces. The simplicity of our models allows accurate numerical and exact solutions and unambiguous interpretation of results. We demonstrate that while both our models seem to be built on similar principles, one is able to function as a heat pump (or engine) while the other is not.
Modelling the solar wind interaction with Mercury by a quasi-neutral hybrid model
NASA Astrophysics Data System (ADS)
Kallio, E.; Janhunen, P.
2003-11-01
The quasi-neutral hybrid model is a self-consistent modelling approach that includes positively charged particles and an electron fluid. The approach has received increasing interest in space plasma physics research because it makes it possible to study several plasma physical processes that are difficult or impossible to model with self-consistent fluid models, such as the effects associated with the ions' finite gyroradius, the velocity difference between different ion species, or a non-Maxwellian velocity distribution function. To date, quasi-neutral hybrid models have been used to study the solar wind interaction with the non-magnetised Solar System bodies Mars, Venus, Titan and comets. Localized, two-dimensional hybrid model runs have also been made to study the terrestrial dayside magnetosheath. However, the Hermean plasma environment has not yet been analysed with a global quasi-neutral hybrid model.
Ergun, Bahadir; Sahin, Cumhur; Baz, Ibrahim; Ustuntas, Taner
2010-06-01
Terrestrial laser scanning is a popular methodology used frequently in documenting historical buildings and cultural heritage. The historical peninsula region sprawls over an area of approximately 1,500 ha and is one of the main concentrations of historical buildings in Istanbul. In this study, terrestrial laser scanning and close-range photogrammetry techniques are integrated to create a 3D city model of this part of Istanbul, including some of the buildings that represent the most brilliant eras of the Byzantine and Ottoman Empires. Several terrestrial laser scanners with different specifications were used to solve the various geometric scanning problems posed by distinct areas of the city. Photogrammetric methods were used to document the façades of these historical buildings for architectural purposes. This study differentiates itself from similar ones through an application process that focuses on the geometry, building texture and density of the study area. Nowadays, the largest-scale 3D modeling studies, in terms of measurement methodology, are urban modeling studies, and because of this large scale they are executed in stages. In this study, a modeling method based on street façades was used, and the complementary elements of the modeling process were combined in several ways. A street model is presented as a sample of the applied study. In our 3D modeling application, close-range photogrammetric modeling and terrestrial laser scanner data were used together through a combined calibration. The final work was formed with the pedestal data for 3D visualization.
Teaching suturing in a workshop setting: a comparison of several models.
Tokuhara, Keith G; Boldt, David W; Yamamoto, Loren G
2004-09-01
Suturing is taught in workshops using a variety of models. The purpose of this study is to compare the resemblance to human skin of four models commonly used to teach suturing: pig skin, beef tongue, hot dog and latex glove. 5 centimeter biconvex incisions were made in each of the models and closed by 50 physician study volunteers comprised of 33 board-certified physicians and 17 resident physicians. They rated each model on a scale of 1 to 4, where 4 closely resembles human skin and 1 does not resemble human skin. The following mean ratings were given by study volunteers: beef tongue 3.5 +/- 0.5, pig skin 3.2 +/- 0.8, latex glove 1.6 +/- 0.7, hot dog 1.4 +/- 0.6. Beef tongue and pig skin were rated highest by study volunteers. However, pig skin is much cheaper than beef tongue. Pig skin is the best inexpensive model for teaching skin suturing of the four models studied.
Impacts of weighting climate models for hydro-meteorological climate change studies
NASA Astrophysics Data System (ADS)
Chen, Jie; Brissette, François P.; Lucas-Picher, Philippe; Caya, Daniel
2017-06-01
Weighting climate models is controversial in climate change impact studies that use an ensemble of climate simulations from different climate models. In climate science, there is a general consensus that all climate models should be considered as having equal performance or, in other words, that all projections are equiprobable. On the other hand, in the impacts and adaptation community, many believe that climate models should be weighted based on their ability to better represent various metrics over a reference period. The debate appears to be partly philosophical in nature, as few studies have investigated the impact of using weights in projecting future climate changes. The present study focuses on the impact of assigning weights to climate models for hydrological climate change studies. Five methods are used to determine weights for an ensemble of 28 global climate models (GCMs) adapted from the Coupled Model Intercomparison Project Phase 5 (CMIP5) database. Using a hydrological model, streamflows are computed over reference (1961-1990) and future (2061-2090) periods, with and without post-processing of the climate model outputs. The impacts of using different weighting schemes for GCM simulations are then analyzed in terms of ensemble mean and uncertainty. The results show that weighting GCMs has a limited impact on both the projected future climate, in terms of precipitation and temperature changes, and the hydrology, in terms of nine different streamflow criteria. These results apply to both raw and post-processed GCM outputs, thus supporting the view that climate models should be considered equiprobable.
System Operations Studies : Feeder System Model. User's Manual.
DOT National Transportation Integrated Search
1982-11-01
The Feeder System Model (FSM) is one of the analytic models included in the System Operations Studies (SOS) software package developed for urban transit systems analysis. The objective of the model is to assign a proportion of the zone-to-zone travel...
ERIC Educational Resources Information Center
Aydogan Yenmez, Arzu; Erbas, Ayhan Kursat; Cakiroglu, Erdinc; Alacaci, Cengiz; Cetinkaya, Bulent
2017-01-01
Applications and modelling have gained a prominent role in mathematics education reform documents and curricula. Thus, there is a growing need for studies focusing on the effective use of mathematical modelling in classrooms. Assessment is an integral part of using modelling activities in classrooms, since it allows teachers to identify and manage…
Exploring the Use of Multiple Analogical Models when Teaching and Learning Chemical Equilibrium
ERIC Educational Resources Information Center
Harrison, Allan G.; De Jong, Onno
2005-01-01
This study describes the multiple analogical models used to introduce and teach Grade 12 chemical equilibrium. We examine the teacher's reasons for using models, explain each model's development during the lessons, and analyze the understandings students derived from the models. A case study approach was used and the data were drawn from the…
Specification Search for Identifying the Correct Mean Trajectory in Polynomial Latent Growth Models
ERIC Educational Resources Information Center
Kim, Minjung; Kwok, Oi-Man; Yoon, Myeongsun; Willson, Victor; Lai, Mark H. C.
2016-01-01
This study investigated the optimal strategy for model specification search under the latent growth modeling (LGM) framework, specifically on searching for the correct polynomial mean or average growth model when there is no a priori hypothesized model in the absence of theory. In this simulation study, the effectiveness of different starting…
ERIC Educational Resources Information Center
Gong, Yu
2017-01-01
This study investigates how students can use "interactive example models" in inquiry activities to develop their conceptual knowledge about an engineering phenomenon like electromagnetic fields and waves. An interactive model, for example a computational model, could be used to develop and teach principles of dynamic complex systems, and…
ERIC Educational Resources Information Center
Kozan, Kadir
2016-01-01
The present study investigated the relationships among teaching, cognitive, and social presence through several structural equation models to see which model would better fit the data. To this end, the present study employed and compared several different structural equation models because different models could fit the data equally well. Among…
ERIC Educational Resources Information Center
Yurt, Eyup; Sunbul, Ali Murat
2012-01-01
In this study, the effect of modeling based activities using virtual environments and concrete objects on spatial thinking and mental rotation skills was investigated. The study was designed as a pretest-posttest model with a control group, which is one of the experimental research models. The study was carried out on sixth grade students…
ERIC Educational Resources Information Center
Buhrman, Danielle
2017-01-01
This study uses components of action and self-study research to examine the design and enactment of modeling tasks with the goal of developing student modeling abilities. The author, a secondary mathematics teacher, first closely examined the curriculum design and instructional decisions she made as she prepared for a unit on mathematical modeling…
ERIC Educational Resources Information Center
Abu-Hilal, Maher M.
A study tested predictions of the I/E (internal/external) frame of reference model and extended this model to include locus of control. A sample of upper elementary (n=181) and junior high (n=191) students in the United Arab Emirates participated in the study. Structural equation modeling (SEM) analyses provided support for the external comparison…
A Systematic Review of Studies on Leadership Models in Educational Research from 1980 to 2014
ERIC Educational Resources Information Center
Gumus, Sedat; Bellibas, Mehmet Sukru; Esen, Murat; Gumus, Emine
2018-01-01
The purpose of this study is to reveal the extent to which different leadership models in education are studied, including the change in the trends of research on each model over time, the most prominent scholars working on each model, and the countries in which the articles are based. The analysis of the related literature was conducted by first…
ERIC Educational Resources Information Center
Akmanoglu, Nurgul; Yanardag, Mehmet; Batu, E. Sema
2014-01-01
Teaching play skills is important for children with autism. The purpose of the present study was to compare the effectiveness and efficiency of video modeling combined with graduated guidance versus video modeling alone for teaching role-playing skills to children with autism. The study was conducted with four students. The study was conducted…
ERIC Educational Resources Information Center
Xiang, Lin
2011-01-01
This is a collective case study seeking to develop detailed descriptions of how programming an agent-based simulation influences a group of 8th grade students' model-based inquiry (MBI) by examining students' agent-based programmable modeling (ABPM) processes and the learning outcomes. The context of the present study was a biology unit on…
Development of a Learning Model for Enhancing Social Skills on Elementary Students
ERIC Educational Resources Information Center
Traisorn, Rattanaporn; Soonthornrojana, Wimonrat; Chano, Jiraporn
2015-01-01
The goals of this study were: 1) to study the situation, problems and needs for a learning model to enhance the social skills of sixth grade students; 2) to develop a learning model that would address those needs; 3) to study the effectiveness of that learning model; 4) to compare performance on pretests and posttests of social skills; and 5) to…
Dust environment of an airless object: A phase space study with kinetic models
NASA Astrophysics Data System (ADS)
Kallio, E.; Dyadechkin, S.; Fatemi, S.; Holmström, M.; Futaana, Y.; Wurz, P.; Fernandes, V. A.; Álvarez, F.; Heilimo, J.; Jarvinen, R.; Schmidt, W.; Harri, A.-M.; Barabash, S.; Mäkelä, J.; Porjo, N.; Alho, M.
2016-01-01
The study of dust above the lunar surface is important for both science and technology. Dust particles are electrically charged due to the impact of solar radiation and the solar wind plasma and, therefore, they affect the plasma above the lunar surface. Dust is also a health hazard for crewed missions because micron- and sub-micron-sized dust particles can be toxic and harmful to the human body. Dust also causes malfunctions in mechanical devices and is therefore a risk for spacecraft and instruments on the lunar surface. Properties of dust particles above the lunar surface are not fully known. However, it can be stated that their large surface area to volume ratio due to their irregular shape, broken chemical bonds on the surface of each dust particle, together with the reduced lunar environment cause the dust particles to be chemically very reactive. One critical unknown factor is the electric field and the electric potential near the lunar surface. We have developed a modelling suite, Dusty Plasma Environments: near-surface characterisation and Modelling (DPEM), to study the dust environments of the Moon and other airless bodies both globally and locally. The DPEM model combines three independent kinetic models: (1) a 3D hybrid model, where ions are modelled as particles and electrons are modelled as a charged neutralising fluid, (2) a 2D electrostatic Particle-in-Cell (PIC) model where both ions and electrons are treated as particles, and (3) a 3D Monte Carlo (MC) model where dust particles are modelled as test particles. The three models are linked to each other unidirectionally; the hybrid model provides upstream plasma parameters to be used as boundary conditions for the PIC model, which generates the surface potential for the MC model. We have used the DPEM model to study properties of dust particles injected from the surface of airless objects such as the Moon, the Martian moon Phobos and the asteroid RQ36. We have performed a (v0, m/q)-phase space study in which the properties of dust particles at different initial velocities (v0) and initial mass-per-charge (m/q) ratios were analysed. The study especially identifies regions in the phase space where the electric field within a non-quasineutral plasma region above the surface of the object, the Debye layer, becomes important compared with the gravitational force. Properties of the dust particles in the phase space region where the electric field plays an important role are studied by a 3D Monte Carlo model. The current DPEM modelling suite does not include models of how dust particles are initially injected from the surface. Therefore, the presented phase space study cannot give absolute 3D dust density distributions around the analysed airless objects. For that, an additional emission model is necessary, which determines how many dust particles are emitted at various places in the analysed (v0, m/q) phase space. However, this study identifies phase space regions where the electric field within the Debye layer plays an important role for dust particles. Overall, the initial results indicate that when a realistic dust emission model is available, the unified lunar-based DPEM modelling suite is a powerful tool for studying the dust environments of airless bodies such as planetary moons, Mercury, asteroids and non-active comets far from the Sun, both globally and locally.
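One way to read the (v0, m/q) phase-space criterion discussed above is as a balance between the Debye-layer electrostatic force and gravity. The condition below (LaTeX notation) is an illustrative order-of-magnitude criterion, not the authors' exact analysis; E denotes a characteristic near-surface electric field and g the surface gravity.

\[ q\,E \;\gtrsim\; m\,g \quad\Longleftrightarrow\quad \frac{m}{q} \;\lesssim\; \frac{E}{g}, \]

so grains with a sufficiently small mass-to-charge ratio fall in the phase-space region where the Debye-layer field dominates over the gravitational force.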
NASA Technical Reports Server (NTRS)
Connolly, Joseph W.; Kopasakis, George; Carlson, Jan-Renee; Woolwine, Kyle
2015-01-01
This paper covers the development of an integrated nonlinear dynamic model for a variable cycle turbofan engine, supersonic inlet, and convergent-divergent nozzle that can be integrated with an aeroelastic vehicle model to create an overall Aero-Propulso-Servo-Elastic (APSE) modeling tool. The primary focus of this study is to provide a means to capture relevant thrust dynamics of a full supersonic propulsion system by using relatively simple quasi-one dimensional computational fluid dynamics (CFD) methods that will allow for accurate control algorithm development and capture the key aspects of the thrust to feed into an APSE model. Previously, propulsion system component models have been developed and are used for this study of the fully integrated propulsion system. An overview of the methodology is presented for the modeling of each propulsion component, with a focus on its associated coupling for the overall model. To conduct APSE studies the described dynamic propulsion system model is integrated into a high fidelity CFD model of the full vehicle capable of conducting aero-elastic studies. Dynamic thrust analysis for the quasi-one dimensional dynamic propulsion system model is presented along with an initial three dimensional flow field model of the engine integrated into a supersonic commercial transport.
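For orientation, quasi-one dimensional CFD of the kind mentioned above is commonly built on the area-weighted Euler equations, sketched here in LaTeX notation. This is the generic textbook form, not necessarily the exact equation set or source terms used in the paper.

\[
\frac{\partial}{\partial t}\!\begin{pmatrix}\rho A\\ \rho u A\\ \rho E A\end{pmatrix}
+ \frac{\partial}{\partial x}\!\begin{pmatrix}\rho u A\\ (\rho u^{2}+p)A\\ (\rho E + p)\,u A\end{pmatrix}
= \begin{pmatrix}0\\ p\,\dfrac{dA}{dx}\\ 0\end{pmatrix},
\]

where A(x) is the duct cross-sectional area, \(\rho\) the density, u the axial velocity, p the pressure and E the total energy per unit mass; the inlet, engine components and nozzle appear through the area distribution and additional source terms.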
NASA Technical Reports Server (NTRS)
Connolly, Joe; Carlson, Jan-Renee; Kopasakis, George; Woolwine, Kyle
2015-01-01
This paper covers the development of an integrated nonlinear dynamic model for a variable cycle turbofan engine, supersonic inlet, and convergent-divergent nozzle that can be integrated with an aeroelastic vehicle model to create an overall Aero-Propulso-Servo-Elastic (APSE) modeling tool. The primary focus of this study is to provide a means to capture relevant thrust dynamics of a full supersonic propulsion system by using relatively simple quasi-one dimensional computational fluid dynamics (CFD) methods that will allow for accurate control algorithm development and capture the key aspects of the thrust to feed into an APSE model. Previously, propulsion system component models have been developed and are used for this study of the fully integrated propulsion system. An overview of the methodology is presented for the modeling of each propulsion component, with a focus on its associated coupling for the overall model. To conduct APSE studies the described dynamic propulsion system model is integrated into a high fidelity CFD model of the full vehicle capable of conducting aero-elastic studies. Dynamic thrust analysis for the quasi-one dimensional dynamic propulsion system model is presented along with an initial three dimensional flow field model of the engine integrated into a supersonic commercial transport.
Review and Study of Physics Driven Pitting Corrosion Modeling in 2024-T3 Aluminum Alloys (Postprint)
2015-05-01
AFRL-RX-WP-JA-2015-0218 (Postprint). Review and Study of Physics Driven Pitting Corrosion Modeling in 2024-T3 Aluminum Alloys. Lingyu Yu (Mechanical Engineering) and Kumar V. Jata. Report period: 2014 – 1 April 2015.
A Neurobehavioral Study of Rats Using a Model Perfluorinated Acid, NDFDA.
1982-07-13
A Neurobehavioral Study of Rats Using a Model Perfluorinated Acid, NDFDA. Final report (AFOSR technical report), District of Columbia University, Washington, Dept. of Biology. Prepared by: Inez R
Frequentist Model Averaging in Structural Equation Modelling.
Jin, Shaobo; Ankargren, Sebastian
2018-06-04
Model selection from a set of candidate models plays an important role in many structural equation modelling applications. However, traditional model selection methods introduce extra randomness that is not accounted for by post-model selection inference. In the current study, we propose a model averaging technique within the frequentist statistical framework. Instead of selecting an optimal model, the contributions of all candidate models are acknowledged. Valid confidence intervals and a [Formula: see text] test statistic are proposed. A simulation study shows that the proposed method is able to produce a robust mean-squared error, a better coverage probability, and a better goodness-of-fit test compared to model selection. It is an interesting compromise between model selection and the full model.
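The general idea of a frequentist model average, as opposed to selecting a single model, can be sketched (LaTeX notation) as a weighted combination of candidate-model estimates. The specific weights and test statistic proposed in the paper are not reproduced here; this is only the generic form.

\[ \hat{\theta}_{\mathrm{MA}} \;=\; \sum_{k=1}^{K} w_k\, \hat{\theta}_k, \qquad \sum_{k=1}^{K} w_k = 1,\; w_k \ge 0, \]

where \(\hat{\theta}_k\) is the estimate of the quantity of interest under candidate model k and \(w_k\) its weight, so every candidate contributes rather than only the "best" one.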
Collins, G S; Reitsma, J B; Altman, D G; Moons, K G M
2015-01-20
Prediction models are developed to aid health-care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health-care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org).
Collins, Gary S; Reitsma, Johannes B; Altman, Douglas G; Moons, Karel G M
2015-02-01
Prediction models are developed to aid healthcare providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision-making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) initiative developed a set of recommendations for the reporting of studies developing, validating or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, healthcare professionals and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org). © 2015 Stichting European Society for Clinical Investigation Journal Foundation.
Collins, Gary S; Reitsma, Johannes B; Altman, Douglas G; Moons, Karel G M
2015-01-06
Prediction models are developed to aid health care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org).
Reitsma, Johannes B.; Altman, Douglas G.; Moons, Karel G.M.
2015-01-01
Background— Prediction models are developed to aid health care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. Methods— The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. Results— The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. Conclusions— To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org). PMID:25561516
Collins, G S; Reitsma, J B; Altman, D G; Moons, K G M
2015-01-01
Prediction models are developed to aid health-care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health-care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org). PMID:25562432
Collins, G S; Reitsma, J B; Altman, D G; Moons, K G M
2015-02-01
Prediction models are developed to aid health care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org). © 2015 Royal College of Obstetricians and Gynaecologists.
Collins, Gary S; Reitsma, Johannes B; Altman, Douglas G; Moons, Karel G M
2015-01-13
Prediction models are developed to aid health care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org). © 2015 The Authors.
Collins, Gary S; Reitsma, Johannes B; Altman, Douglas G; Moons, Karel G M
2015-01-06
Prediction models are developed to aid health care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org).
Collins, Gary S; Reitsma, Johannes B; Altman, Douglas G; Moons, Karel G M
2015-02-01
Prediction models are developed to aid health care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org). Copyright © 2015 Elsevier Inc. All rights reserved.
Steyaert, Louis T.; Loveland, Thomas R.; Brown, Jesslyn F.; Reed, Bradley C.
1993-01-01
Environmental modelers are testing and evaluating a prototype land cover characteristics database for the conterminous United States developed by the EROS Data Center of the U.S. Geological Survey and the University of Nebraska Center for Advanced Land Management Information Technologies. This database was developed from multitemporal, 1-kilometer advanced very high resolution radiometer (AVHRR) data for 1990 and various ancillary data sets such as elevation, ecological regions, and selected climatic normals. Several case studies using this database were analyzed to illustrate the integration of satellite remote sensing and geographic information systems technologies with land-atmosphere interactions models at a variety of spatial and temporal scales. The case studies are representative of contemporary environmental simulation modeling at local to regional levels in global change research, land and water resource management, and environmental risk assessment. The case studies feature land surface parameterizations for atmospheric mesoscale and global climate models; biogenic-hydrocarbons emissions models; distributed parameter watershed and other hydrological models; and various ecological models such as ecosystem dynamics, biogeochemical cycles, ecotone variability, and equilibrium vegetation models. The case studies demonstrate the importance of multitemporal AVHRR data to develop and maintain a flexible, near-realtime land cover characteristics database. Moreover, such a flexible database is needed to derive various vegetation classification schemes, to aggregate data for nested models, to develop remote sensing algorithms, and to provide data on dynamic landscape characteristics. The case studies illustrate how such a database supports research on spatial heterogeneity, land use, sensitivity analysis, and scaling issues involving regional extrapolations and parameterizations of dynamic land processes within simulation models.
Al-Quwaidhi, Abdulkareem J.; Pearce, Mark S.; Sobngwi, Eugene; Critchley, Julia A.; O’Flaherty, Martin
2014-01-01
Aims To compare the estimates and projections of type 2 diabetes mellitus (T2DM) prevalence in Saudi Arabia from a validated Markov model against other modelling estimates, such as those produced by the International Diabetes Federation (IDF) Diabetes Atlas and the Global Burden of Disease (GBD) project. Methods A discrete-state Markov model was developed and validated that integrates data on population, obesity and smoking prevalence trends in adult Saudis aged ≥25 years to estimate the trends in T2DM prevalence (annually from 1992 to 2022). The model was validated by comparing the age- and sex-specific prevalence estimates against a national survey conducted in 2005. Results Prevalence estimates from this new Markov model were consistent with the 2005 national survey and very similar to the GBD study estimates. Prevalence in men and women in 2000 was estimated by the GBD model respectively at 17.5% and 17.7%, compared to 17.7% and 16.4% in this study. The IDF estimates of the total diabetes prevalence were considerably lower at 16.7% in 2011 and 20.8% in 2030, compared with 29.2% in 2011 and 44.1% in 2022 in this study. Conclusion In contrast to other modelling studies, both the Saudi IMPACT Diabetes Forecast Model and the GBD model directly incorporated the trends in obesity prevalence and/or body mass index (BMI) to inform T2DM prevalence estimates. It appears that such a direct incorporation of obesity trends in modelling studies results in higher estimates of the future prevalence of T2DM, at least in countries where obesity has been rapidly increasing. PMID:24447810
Chu, Haitao; Nie, Lei; Cole, Stephen R; Poole, Charles
2009-08-15
In a meta-analysis of diagnostic accuracy studies, the sensitivities and specificities of a diagnostic test may depend on the disease prevalence since the severity and definition of disease may differ from study to study due to the design and the population considered. In this paper, we extend the bivariate nonlinear random effects model on sensitivities and specificities to jointly model the disease prevalence, sensitivities and specificities using trivariate nonlinear random-effects models. Furthermore, as an alternative parameterization, we also propose jointly modeling the test prevalence and the predictive values, which reflect the clinical utility of a diagnostic test. These models allow investigators to study the complex relationship among the disease prevalence, sensitivities and specificities; or among test prevalence and the predictive values, which can reveal hidden information about test performance. We illustrate the proposed two approaches by reanalyzing the data from a meta-analysis of radiological evaluation of lymph node metastases in patients with cervical cancer and a simulation study. The latter illustrates the importance of carefully choosing an appropriate normality assumption for the disease prevalence, sensitivities and specificities, or the test prevalence and the predictive values. In practice, it is recommended to use model selection techniques to identify a best-fitting model for making statistical inference. In summary, the proposed trivariate random effects models are novel and can be very useful in practice for meta-analysis of diagnostic accuracy studies. Copyright 2009 John Wiley & Sons, Ltd.
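A generic trivariate random-effects structure of the type described above can be sketched (LaTeX notation) by placing the study-level logits on a common multivariate normal distribution. This is an illustrative formulation under standard assumptions, not necessarily the exact parameterization used in the paper.

\[
\begin{pmatrix} \operatorname{logit}(\pi_i) \\ \operatorname{logit}(Se_i) \\ \operatorname{logit}(Sp_i) \end{pmatrix}
\;\sim\; \mathcal{N}\!\left(\boldsymbol{\mu}, \boldsymbol{\Sigma}\right),
\]

where \(\pi_i\), \(Se_i\) and \(Sp_i\) are the disease prevalence, sensitivity and specificity in study i, \(\boldsymbol{\mu}\) collects the pooled logits and \(\boldsymbol{\Sigma}\) captures between-study variances and correlations; the alternative parameterization replaces these with the test prevalence and the positive and negative predictive values.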
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peng, Cheng
Here, we consider the relation between SYK-like models and vector models by studying a toy model where a tensor field is coupled with a vector field. By integrating out the tensor field, the toy model reduces to the Gross-Neveu model in 1 dimension. On the other hand, a certain perturbation can be turned on and the toy model flows to an SYK-like model at low energy. Furthermore, a chaotic-nonchaotic phase transition occurs as the sign of the perturbation is altered. We further study similar models that possess chaos and enhanced reparameterization symmetries.
Urano, K; Tamaoki, N; Nomura, T
2012-01-01
Transgenic animal models have been used in small numbers in gene function studies in vivo for a period of time, but more recently, the use of a single transgenic animal model has been approved as a second species, 6-month alternative (to the routine 2-year, 2-animal model) used in short-term carcinogenicity studies for generating regulatory application data of new drugs. This article addresses many of the issues associated with the creation and use of one of these transgenic models, the rasH2 mouse, for regulatory science. The discussion includes strategies for mass producing mice with the same stable phenotype, including constructing the transgene, choosing a founder mouse, and controlling both the transgene and background genes; strategies for developing the model for regulatory science, including measurements of carcinogen susceptibility, stability of a large-scale production system, and monitoring for uniform carcinogenicity responses; and finally, efficient use of the transgenic animal model on study. Approximately 20% of mouse carcinogenicity studies for new drug applications in the United States currently use transgenic models, typically the rasH2 mouse. The rasH2 mouse could contribute to animal welfare by reducing the numbers of animals used as well as reducing the cost of carcinogenicity studies. A better understanding of the advantages and disadvantages of the transgenic rasH2 mouse will result in greater and more efficient use of this animal model in the future.
Sakhteman, Amirhossein; Zare, Bijan
2016-01-01
An interactive application, Modelface, is presented for the Modeller software on the Windows platform. The application is able to run all steps of homology modeling, including PDB-to-FASTA generation, running Clustal, model building and loop refinement. Other modules of Modeller, including energy calculation, energy minimization and the ability to make single point mutations in PDB structures, are also implemented inside Modelface. The application is a simple batch-based tool with a negligible memory footprint and is free of charge for academic use. It is also able to repair missing atom types in PDB structures, making it suitable for many molecular modeling studies such as docking and molecular dynamics simulation. Some successful instances of modeling studies using Modelface are also reported. PMID:28243276
Experimental study of a generic high-speed civil transport
NASA Technical Reports Server (NTRS)
Belton, Pamela S.; Campbell, Richard L.
1992-01-01
An experimental study of generic high-speed civil transport was conducted in the NASA Langley 8-ft Transonic Pressure Tunnel. The data base was obtained for the purpose of assessing the accuracy of various levels of computational analysis. Two models differing only in wingtip geometry were tested with and without flow-through nacelles. The baseline model has a curved or crescent wingtip shape, while the second model has a more conventional straight wingtip shape. The study was conducted at Mach numbers from 0.30 to 1.19. Force data were obtained on both the straight wingtip model and the curved wingtip model. Only the curved wingtip model was instrumented for measuring pressures. Selected longitudinal, lateral, and directional data are presented for both models. Selected pressure distributions for the curved wingtip model are also presented.
Predicting the difficulty of pure, strict, epistatic models: metrics for simulated model selection.
Urbanowicz, Ryan J; Kiralis, Jeff; Fisher, Jonathan M; Moore, Jason H
2012-09-26
Algorithms designed to detect complex genetic disease associations are initially evaluated using simulated datasets. Typical evaluations vary constraints that influence the correct detection of underlying models (i.e. number of loci, heritability, and minor allele frequency). Such studies neglect to account for model architecture (i.e. the unique specification and arrangement of penetrance values comprising the genetic model), which alone can influence the detectability of a model. In order to design a simulation study which efficiently takes architecture into account, a reliable metric is needed for model selection. We evaluate three metrics as predictors of relative model detection difficulty derived from previous works: (1) Penetrance table variance (PTV), (2) customized odds ratio (COR), and (3) our own Ease of Detection Measure (EDM), calculated from the penetrance values and respective genotype frequencies of each simulated genetic model. We evaluate the reliability of these metrics across three very different data search algorithms, each with the capacity to detect epistatic interactions. We find that a model's EDM and COR are each stronger predictors of model detection success than heritability. This study formally identifies and evaluates metrics which quantify model detection difficulty. We utilize these metrics to intelligently select models from a population of potential architectures. This allows for an improved simulation study design which accounts for differences in detection difficulty attributed to model architecture. We implement the calculation and utilization of EDM and COR into GAMETES, an algorithm which rapidly and precisely generates pure, strict, n-locus epistatic models.
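As a toy analogue of the penetrance-based metrics discussed above, the sketch below computes a genotype-frequency-weighted variance of a two-locus penetrance table in Python. The penetrance values, allele frequencies, and the exact definitions of PTV, COR and EDM in the paper are not reproduced here, so this should be read as an assumption-laden illustration of the idea rather than the published metrics.

```python
import numpy as np

# Hypothetical 3x3 penetrance table for a two-locus biallelic model:
# rows = genotypes at locus A (AA, Aa, aa), columns = genotypes at locus B.
penetrance = np.array([
    [0.10, 0.20, 0.10],
    [0.20, 0.05, 0.20],
    [0.10, 0.20, 0.10],
])

def genotype_freqs(maf):
    """Hardy-Weinberg genotype frequencies for a given minor allele frequency."""
    p, q = 1.0 - maf, maf
    return np.array([p * p, 2 * p * q, q * q])

# Joint genotype frequencies assuming independent loci with MAFs 0.3 and 0.2.
freqs = np.outer(genotype_freqs(0.3), genotype_freqs(0.2))

mean_penetrance = np.sum(freqs * penetrance)
weighted_variance = np.sum(freqs * (penetrance - mean_penetrance) ** 2)

print("Frequency-weighted mean penetrance:", round(mean_penetrance, 4))
print("Frequency-weighted penetrance variance (PTV-like metric):", round(weighted_variance, 6))
```

A model whose penetrance values vary more strongly across common genotypes tends to be easier to detect, which is the intuition such metrics try to quantify.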
Iwelunmor, Juliet; Newsome, Valerie; Airhihenbuwa, Collins O.
2015-01-01
Objective This paper reviews available studies that applied the PEN-3 cultural model to address the impact of culture on health behaviors. Methods We searched electronic databases and conducted a thematic analysis of empirical studies that applied the PEN-3 cultural model to address the impact of culture on health behaviors. Studies were mapped to describe their methods, target population and the health behaviors or health outcomes studied. Forty-five studies met the inclusion criteria. Results The studies reviewed used the PEN-3 model as a theoretical framework to centralize culture in the study of health behaviors and to integrate culturally relevant factors in the development of interventions. The model was also used as an analysis tool, to sift through text and data in order to separate, define and delineate emerging themes. The PEN-3 model was also valuable for exploring not only how cultural context shapes health beliefs and practices, but also how family systems play a critical role in enabling or nurturing positive health behaviors and health outcomes. Finally, the studies reviewed highlighted the utility of the model for examining cultural practices that are critical to positive health behaviors, unique practices that have a neutral impact on health and the negative factors that are likely to have an adverse influence on health. Discussion The limitations of the model and the role of future studies are discussed relative to the importance of using the PEN-3 cultural model to explore the influence of culture in promoting positive health behaviors, eliminating health disparities and designing and implementing sustainable public health interventions. PMID:24266638
ASTP ranging system mathematical model
NASA Technical Reports Server (NTRS)
Ellis, M. R.; Robinson, L. H.
1973-01-01
A mathematical model is presented of the VHF ranging system to analyze the performance of the Apollo-Soyuz test project (ASTP). The system was adapted for use in the ASTP. The ranging system mathematical model is presented in block diagram form, and a brief description of the overall model is also included. A procedure for implementing the math model is presented along with a discussion of the validation of the math model and the overall summary and conclusions of the study effort. Detailed appendices of the five study tasks are presented: early late gate model development, unlock probability development, system error model development, probability of acquisition and model development, and math model validation testing.
NASA Astrophysics Data System (ADS)
Xu, T.; Valocchi, A. J.; Ye, M.; Liang, F.
2016-12-01
Due to simplification and/or misrepresentation of the real aquifer system, numerical groundwater flow and solute transport models are usually subject to model structural error. During model calibration, the hydrogeological parameters may be overly adjusted to compensate for unknown structural error. This may result in biased predictions when models are used to forecast aquifer response to new forcing. In this study, we extend a fully Bayesian method [Xu and Valocchi, 2015] to calibrate a real-world, regional groundwater flow model. The method uses a data-driven error model to describe model structural error and jointly infers model parameters and structural error. In this study, Bayesian inference is facilitated using high performance computing and fast surrogate models. The surrogate models are constructed using machine learning techniques to emulate the response simulated by the computationally expensive groundwater model. We demonstrate in the real-world case study that explicitly accounting for model structural error yields parameter posterior distributions that are substantially different from those derived by the classical Bayesian calibration that does not account for model structural error. In addition, the Bayesian with error model method gives significantly more accurate prediction along with reasonable credible intervals.
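The core idea of jointly inferring parameters and a structural-error term can be sketched (LaTeX notation) as an additive discrepancy formulation. This is a generic formulation under common assumptions, not necessarily the exact error model of Xu and Valocchi [2015] or of this study.

\[ y_i \;=\; f(\mathbf{x}_i, \boldsymbol{\theta}) \;+\; b(\mathbf{x}_i) \;+\; \varepsilon_i, \qquad \varepsilon_i \sim \mathcal{N}(0, \sigma^2), \]

where f is the groundwater model (or its machine-learned surrogate), \(\boldsymbol{\theta}\) the hydrogeological parameters, b a data-driven error model representing structural bias, and \(\varepsilon_i\) observation noise; Bayesian inference then targets the joint posterior of \(\boldsymbol{\theta}\), b and \(\sigma\) rather than letting \(\boldsymbol{\theta}\) absorb the bias.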
A nonlinear model of gold production in Malaysia
NASA Astrophysics Data System (ADS)
Ramli, Norashikin; Muda, Nora; Umor, Mohd Rozi
2014-06-01
Malaysia is a country rich in natural resources, and one of them is gold. Gold has already become an important national commodity. This study is conducted to determine a model that fits the gold production in Malaysia well over the years 1995-2010. Five nonlinear models are presented in this study: the Logistic, Gompertz, Richards, Weibull and Chapman-Richards models. These models are used to fit the cumulative gold production in Malaysia. The best model is then selected based on model performance. The performance of the fitted models is measured by the sum of squared errors, root mean squared error, coefficient of determination, mean relative error, mean absolute error and mean absolute percentage error. This study finds that the Weibull model significantly outperforms the other models. To confirm that the Weibull model is the best, the latest data are fitted to the model. Once again, the Weibull model gives the lowest values for all error measures. We conclude that future gold production in Malaysia can be predicted with the Weibull model, and this could be an important finding for Malaysia in planning its economic activities.
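A minimal sketch of fitting a Weibull-type growth curve to cumulative production data is shown below. The data are synthetic and the parameterization is one common form of the Weibull growth curve; the paper's exact model forms, data and error measures are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_growth(t, a, b, c):
    """Weibull-type growth curve: asymptote a, scale b, shape c (one common form)."""
    return a * (1.0 - np.exp(-(t / b) ** c))

# Synthetic cumulative gold production (arbitrary units) for the years 1995-2010.
years = np.arange(1995, 2011)
t = years - years[0] + 1
observed = np.array([1.2, 2.6, 4.1, 5.9, 7.4, 9.0, 10.3, 11.5,
                     12.4, 13.2, 13.8, 14.3, 14.7, 15.0, 15.2, 15.4])

params, _ = curve_fit(weibull_growth, t, observed, p0=[16.0, 6.0, 1.5])
fitted = weibull_growth(t, *params)
rmse = np.sqrt(np.mean((observed - fitted) ** 2))

print("Fitted parameters (a, b, c):", np.round(params, 3))
print("Root mean squared error:", round(rmse, 4))
```

Fitting each candidate curve in the same way and comparing the resulting error measures is the model-selection procedure the abstract describes.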
Manoharan, Prabu; Chennoju, Kiranmai; Ghoshal, Nanda
2015-07-01
BACE1 is an attractive target in Alzheimer's disease (AD) treatment. A rational drug design effort for the inhibition of BACE1 is actively pursued by researchers in both academia and the pharmaceutical industry. This continued effort has led to the steady accumulation of BACE1 crystal structures co-complexed with different classes of inhibitors. This wealth of information is used in this study to develop target-specific proteochemometric models, and these models are exploited for predicting prospective BACE1 inhibitors. The models developed in this study performed excellently in predicting the computationally generated poses obtained separately from single and ensemble docking approaches. In terms of virtual screening performance, the simple protein-ligand contact (SPLC) model outperforms the other, more sophisticated models developed during this study. In an attempt to account for BACE1 active site flexibility in the predictive models, we included the change in solvent accessible surface area and the change in solvent accessible surface volume in our models. The ensemble and single receptor docking results obtained from this study indicate that structural water mediated interactions improve the virtual screening results. These waters are also essential for recapitulating the bioactive conformation during docking studies. The proteochemometric models developed in this study can be used for the prediction of BACE1 inhibitors during the early stages of AD drug discovery.
Prognostic models for complete recovery in ischemic stroke: a systematic review and meta-analysis.
Jampathong, Nampet; Laopaiboon, Malinee; Rattanakanokchai, Siwanon; Pattanittum, Porjai
2018-03-09
Prognostic models have been increasingly developed to predict complete recovery in ischemic stroke. However, questions arise about the performance characteristics of these models. The aim of this study was to systematically review and synthesize performance of existing prognostic models for complete recovery in ischemic stroke. We searched journal publications indexed in PUBMED, SCOPUS, CENTRAL, ISI Web of Science and OVID MEDLINE from inception until 4 December, 2017, for studies designed to develop and/or validate prognostic models for predicting complete recovery in ischemic stroke patients. Two reviewers independently examined titles and abstracts, and assessed whether each study met the pre-defined inclusion criteria and also independently extracted information about model development and performance. We evaluated validation of the models by medians of the area under the receiver operating characteristic curve (AUC) or c-statistic and calibration performance. We used a random-effects meta-analysis to pool AUC values. We included 10 studies with 23 models developed from elderly patients with a moderately severe ischemic stroke, mainly in three high income countries. Sample sizes for each study ranged from 75 to 4441. Logistic regression was the only analytical strategy used to develop the models. The number of various predictors varied from one to 11. Internal validation was performed in 12 models with a median AUC of 0.80 (95% CI 0.73 to 0.84). One model reported good calibration. Nine models reported external validation with a median AUC of 0.80 (95% CI 0.76 to 0.82). Four models showed good discrimination and calibration on external validation. The pooled AUC of the two validation models of the same developed model was 0.78 (95% CI 0.71 to 0.85). The performance of the 23 models found in the systematic review varied from fair to good in terms of internal and external validation. Further models should be developed with internal and external validation in low and middle income countries.
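For context, a random-effects pooling of validation AUCs of the kind reported above can be sketched as follows. The AUC values and standard errors are hypothetical and the DerSimonian-Laird weighting is a standard choice, shown as an illustration rather than the review's actual analysis.

```python
import numpy as np

# Hypothetical external-validation AUCs and their standard errors.
auc = np.array([0.78, 0.81, 0.76])
se = np.array([0.03, 0.04, 0.05])

w_fixed = 1.0 / se**2                       # inverse-variance (fixed-effect) weights
pooled_fixed = np.sum(w_fixed * auc) / np.sum(w_fixed)

# DerSimonian-Laird estimate of the between-study variance tau^2.
q = np.sum(w_fixed * (auc - pooled_fixed) ** 2)
df = len(auc) - 1
c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (q - df) / c)

w_random = 1.0 / (se**2 + tau2)             # random-effects weights
pooled_random = np.sum(w_random * auc) / np.sum(w_random)
pooled_se = np.sqrt(1.0 / np.sum(w_random))

print("Pooled AUC (random effects): %.3f" % pooled_random)
print("95%% CI: %.3f to %.3f" % (pooled_random - 1.96 * pooled_se,
                                 pooled_random + 1.96 * pooled_se))
```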
Teacher Evaluation Models: Compliance or Growth Oriented?
ERIC Educational Resources Information Center
Clenchy, Kelly R.
2017-01-01
This research study reviewed literature specific to the evolution of teacher evaluation models and explored the effectiveness of standards-based evaluation models' potential to facilitate professional growth. The researcher employed descriptive phenomenology to conduct a study of teachers' perceptions of a standards-based evaluation model's…
Using model order tests to determine sensory inputs in a motion study
NASA Technical Reports Server (NTRS)
Repperger, D. W.; Junker, A. M.
1977-01-01
In the study of motion effects on tracking performance, a problem of interest is determining what sensory inputs a human uses in controlling the tracking task. In the approach presented here, a simple canonical model (PID, or proportional-integral-derivative structure) is used to model the human's input-output time series. A study of significant changes in the reduction of the output-error loss functional is conducted as different permutations of parameters are considered. Since this canonical model includes parameters that are related to inputs to the human (such as the error signal, its derivatives, and its integral), the study of model order is equivalent to the study of which sensory inputs are being used by the tracker. The parameters that have the greatest effect in significantly reducing the loss function are obtained. In this manner the identification procedure converts the problem of testing for model order into the problem of determining sensory inputs.
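The canonical PID structure referred to above can be written (LaTeX notation) in the standard form below, with the gains treated as the parameters to be identified; this is the textbook form, given for orientation rather than as the exact model equations of the report.

\[ u(t) \;=\; K_p\, e(t) \;+\; K_i \int_0^{t} e(\tau)\, d\tau \;+\; K_d\, \dot{e}(t), \]

where e(t) is the tracking error presented to the operator and u(t) the control output. Testing whether including \(K_i\) or \(K_d\) significantly reduces the output-error loss functional then corresponds to testing whether the integral or the derivative of the error is being used as a sensory input.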
Adapting the concept of explanatory models of illness to the study of youth violence.
Biering, Páll
2007-07-01
This study explores the feasibility of adapting Kleinman's concept of explanatory models of illness to the study of youth violence and is conducted within the hermeneutic tradition. Data were collected by interviewing 11 violent adolescents, their parents, and their caregivers. Four types of explanatory models representing the adolescent girls', the adolescent boys', the caregivers', and the parents' understanding of youth violence are found; they correspond sufficiently to Kleinman's concept and establish the feasibility of adapting it to the study of youth violence. The developmental nature of the parents' and adolescents' models makes it feasible to study them by means of hermeneutic methodology. There are some clinically significant discrepancies between the caregivers' and the clients' explanatory models; identifying such discrepancies is an essential step in the process of breaking down barriers to therapeutic communications. Violent adolescents should be encouraged to define their own explanatory models of violence through dialogue with their caregivers.
History, ethics, advantages and limitations of experimental models for hepatic ablation.
Ong, Seok Ling; Gravante, Gianpiero; Metcalfe, Matthew S; Dennison, Ashley R
2013-01-14
Numerous techniques developed in medicine require careful evaluation to determine their indications, limitations and potential side effects prior to their clinical use. At present this generally involves the use of animal models, which is undesirable from an ethical standpoint, requires complex and time-consuming authorization, and is very expensive. This process is exemplified in the development of hepatic ablation techniques, starting with experiments on explanted livers and progressing to safety and efficacy studies in living animals prior to clinical studies. The two main approaches used are ex vivo isolated non-perfused liver models and in vivo animal models. Ex vivo non-perfused models are less expensive and easier to obtain, but they are not suitable for studying the heat-sink effect or for experiments requiring several hours. In vivo animal models closely resemble clinical subjects but are often expensive and have small sample sizes due to ethical guidelines. Isolated perfused ex vivo liver models have been used to study drug toxicity, liver failure, organ transplantation and hepatic ablation, and they combine the advantages of both previous models.
Ahn, Jaeil; Mukherjee, Bhramar; Banerjee, Mousumi; Cooney, Kathleen A.
2011-01-01
The stereotype regression model for categorical outcomes, proposed by Anderson (1984), is nested between the baseline-category logits model and the adjacent-category logits model with a proportional odds structure. The stereotype model is more parsimonious than the ordinary baseline-category (or multinomial logistic) model due to a product representation of the log odds-ratios in terms of a common parameter for each predictor and category-specific scores. The model can be used for both ordered and unordered outcomes. For ordered outcomes, the stereotype model allows more flexibility than the popular proportional odds model in capturing highly subjective ordinal scaling that does not result from categorization of a single latent variable but is inherently multidimensional in nature. As pointed out by Greenland (1994), an additional advantage of the stereotype model is that it provides unbiased and valid inference under outcome-stratified sampling, as in case-control studies. In addition, for matched case-control studies, the stereotype model is amenable to the classical conditional likelihood principle, whereas there is no reduction due to sufficiency under the proportional odds model. In spite of these attractive features, the model has been applied less often, as there are issues with maximum likelihood estimation and likelihood-based testing approaches due to non-linearity and lack of identifiability of the parameters. We present a comprehensive Bayesian inference and model comparison procedure for this class of models as an alternative to the classical frequentist approach. We illustrate our methodology by analyzing data from The Flint Men's Health Study, a case-control study of prostate cancer in African-American men aged 40 to 79 years. We use clinical staging of prostate cancer in terms of Tumors, Nodes and Metastasis (TNM) as the categorical response of interest. PMID:19731262
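In symbols, the product representation described above can be sketched (notation illustrative, with one common identifiability constraint) as

$$\log\frac{P(Y=k\mid\mathbf{x})}{P(Y=K\mid\mathbf{x})}=\alpha_k+\phi_k\,\boldsymbol{\beta}^{\top}\mathbf{x},\qquad k=1,\dots,K-1,$$

where each predictor carries a single coefficient in $\boldsymbol{\beta}$ and the category-specific scores $\phi_k$ (e.g. constrained so that $\phi_1=1$ and $\phi_K=0$) scale that common effect; the products $\phi_k\beta_j$ are the source of the non-linearity and identifiability issues mentioned above.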
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saha, Kaushik; Som, Sibendu; Battistoni, Michele
Flash boiling is known to be a common phenomenon for gasoline direct injection (GDI) engine sprays. The Homogeneous Relaxation Model has been adopted in many recent numerical studies for predicting cavitation and flash boiling, and it is assessed in this study. A sensitivity analysis of the model parameters has been documented to infer the driving factors for the flash-boiling predictions. The model parameters have been varied over a range and the differences in predictions of the extent of flashing have been studied. Apart from flashing in the near-nozzle regions, mild cavitation is also predicted inside the gasoline injectors. The variation in the predicted time scales through the model parameters for predicting these two different thermodynamic phenomena (cavitation, flash boiling) has been elaborated in this study. Turbulence model effects have also been investigated by comparing predictions from the standard and Re-Normalization Group (RNG) k-ε turbulence models.
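For orientation, the Homogeneous Relaxation Model is commonly written as a relaxation equation for the vapour quality (a generic form from the literature; the exact correlation used in the study may differ):

$$\frac{Dx}{Dt}=\frac{\bar{x}-x}{\Theta},\qquad \Theta=\Theta_0\,\alpha^{a}\,\psi^{b},$$

where $x$ is the instantaneous quality, $\bar{x}$ the local equilibrium quality, $\Theta$ an empirical relaxation time scale, $\alpha$ the void fraction and $\psi$ a non-dimensional pressure difference; the parameters varied in such sensitivity analyses are typically $\Theta_0$, $a$ and $b$.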
Application of adobe flash media to optimize jigsaw learning model on geometry material
NASA Astrophysics Data System (ADS)
Imam, P.; Imam, S.; Ikrar, P.
2018-05-01
This study aims to determine and describe the effectiveness of applying adobe flash media to the jigsaw learning model on geometry material. In this study, the modified jigsaw learning with adobe flash media is called the jigsaw-flash model. This research was conducted in Surakarta. The research method used is mixed methods research with an exploratory sequential strategy. The results of this study indicate that students feel more comfortable and interested in studying geometry material taught with the jigsaw-flash model. In addition, students taught using the jigsaw-flash model are more active and motivated than students taught using the ordinary jigsaw model. This shows that the use of the jigsaw-flash model can increase student participation and motivation. It can be concluded that adobe flash media can be used as a solution to reduce the level of student abstraction in learning mathematics.
On the accuracy of models for predicting sound propagation in fitted rooms.
Hodgson, M
1990-08-01
The objective of this article is to contribute to the evaluation of the accuracy and applicability of models for predicting sound propagation in fitted rooms such as factories, classrooms, and offices. The models studied are 1:50 scale models; the method-of-images models of Jovicic, Lindqvist, Hodgson, Kurze, and of Lemire and Nicolas; the empirical formula of Friberg; and Ondet and Barbry's ray-tracing model. Sound propagation predictions by the analytic models are compared with the results of sound propagation measurements in a 1:50 scale model and in a warehouse, both containing various densities of approximately isotropically distributed, rectangular-parallelepipedic fittings. The results indicate that the models of Friberg and of Lemire and Nicolas are fundamentally incorrect. While more generally applicable versions exist, the versions of the models of Jovicic and Kurze studied here are found to be of limited applicability since they ignore vertical-wall reflections. The Hodgson and Lindqvist models appear to be accurate in certain limited cases. This preliminary study found the ray-tracing model of Ondet and Barbry to be the most accurate of all the cases studied. Furthermore, it has the necessary flexibility with respect to room geometry, surface-absorption distribution, and fitting distribution. It appears to be the model with the greatest applicability to fitted-room sound propagation prediction.
NASA Astrophysics Data System (ADS)
Saleh, H.; Suryadi, D.; Dahlan, J. A.
2018-01-01
The aim of this research was to find out whether the 7E learning cycle under a hypnoteaching model can enhance students' mathematical problem-solving skill. This research was a quasi-experimental study with a pretest-posttest control group design. Two groups of samples were used in the study. The experimental group was given the 7E learning cycle under the hypnoteaching model, while the control group was given a conventional model. The population of this study was students of the mathematics education program at one university in Tangerang. The statistical analyses used to test the hypotheses of this study were the t-test and the Mann-Whitney U test. The results of this study show that: (1) the achievement in mathematical problem-solving skill of students who received the 7E learning cycle under the hypnoteaching model is higher than that of students who received the conventional model; (2) there are differences in students' enhancement of mathematical problem-solving skill based on their prior mathematical knowledge (PMK) category (high, middle, and low).
Collins, Gary S; Reitsma, Johannes B; Altman, Douglas G; Moons, Karel G M
2015-06-01
Prediction models are developed to aid health care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org). Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
Inter-sectoral comparison of model uncertainty of climate change impacts in Africa
NASA Astrophysics Data System (ADS)
van Griensven, Ann; Vetter, Tobias; Piontek, Franzisca; Gosling, Simon N.; Kamali, Bahareh; Reinhardt, Julia; Dinkneh, Aklilu; Yang, Hong; Alemayehu, Tadesse
2016-04-01
We present the model results and their uncertainties from an inter-sectoral impact model inter-comparison initiative (ISI-MIP) for climate change impacts in Africa. The study includes results on hydrological, crop and health aspects. The impact models used ensemble inputs consisting of 20 time series of daily rainfall and temperature data obtained from 5 Global Circulation Models (GCMs) and 4 Representative Concentration Pathways (RCPs). In this study, we analysed model uncertainty for the regional hydrological models, global hydrological models, malaria models and crop models. For the regional hydrological models, we used 2 African test cases: the Blue Nile in Eastern Africa and the Niger in Western Africa. For both basins, the main sources of uncertainty originate from the GCMs and RCPs, while the uncertainty of the regional hydrological models is relatively low. The hydrological model uncertainty becomes more important when predicting changes in low flows compared with mean or high flows. For the other sectors, the impact models have the largest share of uncertainty compared with the GCMs and RCPs, especially for malaria and crop modelling. The overall conclusion of the ISI-MIP is that an ensemble modelling approach is strongly advised for climate change impact studies throughout the whole modelling chain.
A Disability Studies Framework for Policy Activism in Postsecondary Education
ERIC Educational Resources Information Center
Gabel, Susan L.
2010-01-01
This article uses disability studies and the social model of disability as theoretical foundations for policy activism in postsecondary education. The social model is discussed and a model for policy activism is described. A case study of how disability studies and policy activism can be applied is provided utilizing the "3C Project to Provide…
The Effects of Recycling and Response Sensitivity on the Acquisition of Social Studies Concepts.
ERIC Educational Resources Information Center
Ford, Mary Jane; McKinney, C. Warren
1986-01-01
Two studies are reported which investigate the concept learning of 116 sixth graders (study 1) and 107 second graders (study 2) depending on the model of concept presentation. Results showed no difference between the structured Merrill and Tennyson model and adaptations of the model which were responsive to student's questions or recycled missed…
Hilkens, N A; Algra, A; Greving, J P
2016-01-01
ESSENTIALS: Prediction models may help to identify patients at high risk of bleeding on antiplatelet therapy. We identified existing prediction models for bleeding and validated them in patients with cerebral ischemia. Five prediction models were identified, all of which had some methodological shortcomings. Performance in patients with cerebral ischemia was poor. BACKGROUND: Antiplatelet therapy is widely used in secondary prevention after a transient ischemic attack (TIA) or ischemic stroke. Bleeding is the main adverse effect of antiplatelet therapy and is potentially life threatening. Identification of patients at increased risk of bleeding may help target antiplatelet therapy. This study sought to identify existing prediction models for intracranial hemorrhage or major bleeding in patients on antiplatelet therapy and to evaluate their performance in patients with cerebral ischemia. We systematically searched PubMed and Embase for existing prediction models up to December 2014. The methodological quality of the included studies was assessed with the CHARMS checklist. Prediction models were externally validated in the European Stroke Prevention Study 2, comprising 6602 patients with a TIA or ischemic stroke. We assessed discrimination and calibration of the included prediction models. Five prediction models were identified, of which two were developed in patients with previous cerebral ischemia. Three studies assessed major bleeding, one studied intracerebral hemorrhage and one gastrointestinal bleeding. None of the studies met all criteria of good quality. External validation showed poor discriminative performance, with c-statistics ranging from 0.53 to 0.64, and poor calibration. A limited number of prediction models are available that predict intracranial hemorrhage or major bleeding in patients on antiplatelet therapy. The methodological quality of the models varied, but was generally low. Predictive performance in patients with cerebral ischemia was poor. In order to reliably predict the risk of bleeding in patients with cerebral ischemia, development of a prediction model according to current methodological standards is needed. © 2015 International Society on Thrombosis and Haemostasis.
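A minimal sketch of this kind of external validation step, assuming a published logistic model whose intercept and coefficients are known (all names and numbers below are invented for illustration, not taken from the reviewed models):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical published model: log-odds = intercept + X @ coefficients
PUBLISHED_INTERCEPT = -4.2
PUBLISHED_COEFS = np.array([0.03, 0.5, 0.7])   # e.g. age, hypertension, prior bleed

def external_validation(X, y):
    """Apply the published coefficients to a new cohort and summarise performance."""
    lp = PUBLISHED_INTERCEPT + X @ PUBLISHED_COEFS   # linear predictor
    p = 1.0 / (1.0 + np.exp(-lp))                    # predicted bleeding risk
    c_statistic = roc_auc_score(y, p)                # discrimination
    oe_ratio = y.mean() / p.mean()                   # calibration-in-the-large (observed/expected)
    return c_statistic, oe_ratio

# Tiny synthetic usage example
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = rng.integers(0, 2, size=500)
print(external_validation(X, y))
```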
Goel, Purva; Bapat, Sanket; Vyas, Renu; Tambe, Amruta; Tambe, Sanjeev S
2015-11-13
The development of quantitative structure-retention relationships (QSRR) aims at constructing an appropriate linear/nonlinear model for the prediction of the retention behavior (such as Kovats retention index) of a solute on a chromatographic column. Commonly, multi-linear regression and artificial neural networks are used in the QSRR development in the gas chromatography (GC). In this study, an artificial intelligence based data-driven modeling formalism, namely genetic programming (GP), has been introduced for the development of quantitative structure based models predicting Kovats retention indices (KRI). The novelty of the GP formalism is that given an example dataset, it searches and optimizes both the form (structure) and the parameters of an appropriate linear/nonlinear data-fitting model. Thus, it is not necessary to pre-specify the form of the data-fitting model in the GP-based modeling. These models are also less complex, simple to understand, and easy to deploy. The effectiveness of GP in constructing QSRRs has been demonstrated by developing models predicting KRIs of light hydrocarbons (case study-I) and adamantane derivatives (case study-II). In each case study, two-, three- and four-descriptor models have been developed using the KRI data available in the literature. The results of these studies clearly indicate that the GP-based models possess an excellent KRI prediction accuracy and generalization capability. Specifically, the best performing four-descriptor models in both the case studies have yielded high (>0.9) values of the coefficient of determination (R(2)) and low values of root mean squared error (RMSE) and mean absolute percent error (MAPE) for training, test and validation set data. The characteristic feature of this study is that it introduces a practical and an effective GP-based method for developing QSRRs in gas chromatography that can be gainfully utilized for developing other types of data-driven models in chromatography science. Copyright © 2015 Elsevier B.V. All rights reserved.
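One way to reproduce the general idea with open-source tools (an assumption for illustration; this is not the authors' software, and the descriptors and retention data below are synthetic) is symbolic regression via the gplearn package, which, like GP in general, searches both the form and the parameters of the data-fitting model:

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor   # assumed available; illustrative only

# X: molecular descriptors (rows = solutes), y: Kovats retention indices (synthetic data)
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 4))
y = 850 + 120 * X[:, 0] - 40 * X[:, 1] * X[:, 2] + rng.normal(scale=5, size=60)

gp = SymbolicRegressor(population_size=500, generations=20,
                       function_set=('add', 'sub', 'mul', 'div'),
                       metric='rmse', random_state=0)
gp.fit(X, y)
print(gp._program)   # the evolved symbolic expression relating descriptors to KRI
```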
The Dangers of Estimating V˙O2max Using Linear, Nonexercise Prediction Models.
Nevill, Alan M; Cooke, Carlton B
2017-05-01
This study aimed to compare the accuracy and goodness of fit of two competing models (linear vs allometric) when estimating V˙O2max (mL·kg⁻¹·min⁻¹) using nonexercise prediction models. The two competing models were fitted to the V˙O2max (mL·kg⁻¹·min⁻¹) data taken from two previously published studies. Study 1 (the Allied Dunbar National Fitness Survey) recruited 1732 randomly selected healthy participants, 16 yr and older, from 30 English parliamentary constituencies. Estimates of V˙O2max were obtained using a progressive incremental test on a motorized treadmill. In study 2, maximal oxygen uptake was measured directly during a fatigue limited treadmill test in older men (n = 152) and women (n = 146) 55 to 86 yr old. In both studies, the quality of fit associated with estimating V˙O2max (mL·kg⁻¹·min⁻¹) was superior using allometric rather than linear (additive) models based on all criteria (R, maximum log-likelihood, and Akaike information criteria). Results suggest that linear models will systematically overestimate V˙O2max for participants in their 20s and underestimate V˙O2max for participants in their 60s and older. The residuals saved from the linear models were neither normally distributed nor independent of the predicted values nor age. This will probably explain the absence of a key quadratic age term in the linear models, crucially identified using allometric models. Not only does the curvilinear age decline within an exponential function follow a more realistic age decline (the right-hand side of a bell-shaped curve), but the allometric models identified either a stature-to-body mass ratio (study 1) or a fat-free mass-to-body mass ratio (study 2), both associated with leanness when estimating V˙O2max. Adopting allometric models will provide more accurate predictions of V˙O2max (mL·kg⁻¹·min⁻¹) using plausible, biologically sound, and interpretable models.
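Schematically, the two competing families can be written as follows (a sketch, not the papers' exact specifications):

$$\text{linear: }\ \dot{V}\mathrm{O}_{2\max}=a+b_1\,\text{age}+b_2\,\text{mass}+\dots+\varepsilon, \qquad \text{allometric: }\ \dot{V}\mathrm{O}_{2\max}=\text{mass}^{k}\exp\!\big(a+b_1\,\text{age}+b_2\,\text{age}^{2}+\dots\big)\,\varepsilon,$$

the latter fitted after taking logarithms, which is what allows the curvilinear (quadratic) age decline and ratio-type body-composition terms to emerge naturally.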
Modelling flows in a supply chain with analytical models: Case of a chemical industry
NASA Astrophysics Data System (ADS)
Benhida, Khalid; Azougagh, Yassine; Elfezazi, Said
2016-02-01
This study is concerned with the modelling of the logistics flows in a supply chain composed of production sites and a logistics platform. The contribution of this research is to develop an analytical model (an integrated linear programming model), based on a case study of a real company operating in the phosphate field and considering various constraints in this supply chain, to resolve planning problems for better decision-making. The objective of this model is to determine and define the optimal quantities of the different products to route to and from the various entities in the supply chain studied.
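A toy sketch of such an integrated linear program, far simpler than the authors' model and with invented data, using scipy's linprog to choose the quantities routed from a production site through a logistics platform:

```python
from scipy.optimize import linprog

# Decision variables: x = [site->platform, platform->customer] quantities (tonnes)
cost = [4.0, 6.0]                      # unit transport costs (illustrative)
A_ub = [[-1.0, 1.0]]                   # platform ships no more than it receives: x2 - x1 <= 0
b_ub = [0.0]
A_eq = [[0.0, 1.0]]                    # customer demand must be met exactly: x2 == 120
b_eq = [120.0]
bounds = [(0, 200), (0, None)]         # production capacity of 200 tonnes

res = linprog(c=cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x, res.fun)                  # optimal routed quantities and total cost
```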
NASA Technical Reports Server (NTRS)
Lee, Young-Hee; Mahrt, L.
2005-01-01
This study evaluates the prediction of heat and moisture fluxes from a new land surface scheme with eddy correlation data collected at the old aspen site during the Boreal Ecosystem-Atmosphere Study (BOREAS) in 1994. The model used in this study couples a multilayer vegetation model with a soil model. Inclusion of organic material in the upper soil layer is required to adequately simulate exchange between the soil and subcanopy air. Comparisons between the model and observations are discussed to reveal model misrepresentation of some aspects of the diurnal variation of subcanopy processes. Evapotranspiration
ERIC Educational Resources Information Center
Bock, Geoffrey; And Others
This segment of the national evaluation study of the Follow Through Planned Variation Model describes each of the 17 models represented in the study and reports the results of analyses of 4 years of student performance data for each model. First a purely descriptive synthesis of findings is presented for each model, with interpretation of the data…
Petrovskaya, Olga V; Petrovskiy, Evgeny D; Lavrik, Inna N; Ivanisenko, Vladimir A
2017-04-01
Gene network modeling is one of the widely used approaches in systems biology. It allows for the study of the function of complex genetic systems, including so-called mosaic gene networks, which consist of functionally interacting subnetworks. We conducted a study of a mosaic gene network modeling method based on the integration of models of gene subnetworks by linear control functionals. Automatic modeling of 10,000 synthetic mosaic gene regulatory networks was carried out using computer experiments on gene knockdowns/knockouts. Structural analysis of the graphs of the generated mosaic gene regulatory networks revealed that the most important factor for building accurate integrated mathematical models, among those analyzed in the study, is data on the expression of genes corresponding to vertices with high centrality.
Optimal symmetric flight studies
NASA Technical Reports Server (NTRS)
Weston, A. R.; Menon, P. K. A.; Bilimoria, K. D.; Cliff, E. M.; Kelley, H. J.
1985-01-01
Several topics in optimal symmetric flight of airbreathing vehicles are examined. In one study, an approximation scheme designed for onboard real-time energy management of climb-dash is developed and calculations for a high-performance aircraft presented. In another, a vehicle model intermediate in complexity between energy and point-mass models is explored and some quirks in optimal flight characteristics peculiar to the model uncovered. In yet another study, energy-modelling procedures are re-examined with a view to stretching the range of validity of zeroth-order approximation by special choice of state variables. In a final study, time-fuel tradeoffs in cruise-dash are examined for the consequences of nonconvexities appearing in the classical steady cruise-dash model. Two appendices provide retrospective looks at two early publications on energy modelling and related optimal control theory.
Cognitive diagnosis modelling incorporating item response times.
Zhan, Peida; Jiao, Hong; Liao, Dandan
2018-05-01
To provide more refined diagnostic feedback with collateral information in item response times (RTs), this study proposed joint modelling of attributes and response speed using item responses and RTs simultaneously for cognitive diagnosis. For illustration, an extended deterministic input, noisy 'and' gate (DINA) model was proposed for joint modelling of responses and RTs. Model parameter estimation was explored using the Bayesian Markov chain Monte Carlo (MCMC) method. The PISA 2012 computer-based mathematics data were analysed first. These real data estimates were treated as true values in a subsequent simulation study. A follow-up simulation study with ideal testing conditions was conducted as well to further evaluate model parameter recovery. The results indicated that model parameters could be well recovered using the MCMC approach. Further, incorporating RTs into the DINA model would improve attribute and profile correct classification rates and result in more accurate and precise estimation of the model parameters. © 2017 The British Psychological Society.
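For reference, the response part of the standard DINA model that the joint model extends can be sketched as follows (common notation; the RT submodel, e.g. a lognormal model of response times linked through person-level speed parameters, is omitted here):

$$P(X_{ij}=1\mid\boldsymbol{\alpha}_i)=(1-s_j)^{\eta_{ij}}\,g_j^{\,1-\eta_{ij}},\qquad \eta_{ij}=\prod_{k=1}^{K}\alpha_{ik}^{\,q_{jk}},$$

where $\eta_{ij}$ indicates whether examinee $i$ masters every attribute the Q-matrix requires for item $j$, and $s_j$ and $g_j$ are the item's slip and guessing parameters.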
NASA Technical Reports Server (NTRS)
Flowers, George T.
1994-01-01
Substantial progress has been made toward the goals of this research effort in the past six months. A simplified rotor model with a flexible shaft and backup bearings has been developed. The model is based upon the work of Ishii and Kirk. Parameter studies of the behavior of this model are currently being conducted. A simple rotor model which includes a flexible disk and bearings with clearance has been developed and the dynamics of the model investigated. The study consists of simulation work coupled with experimental verification. The work is documented in the attached paper. A rotor model based upon the T-501 engine has been developed which includes backup bearing effects. The dynamics of this model are currently being studied with the objective of verifying the conclusions obtained from the simpler models. Parallel simulation runs are being conducted using an ANSYS based finite element model of the T-501.
IT vendor selection model by using structural equation model & analytical hierarchy process
NASA Astrophysics Data System (ADS)
Maitra, Sarit; Dominic, P. D. D.
2012-11-01
Selecting and evaluating the right vendors is imperative for an organization's global marketplace competitiveness. Improper selection and evaluation of potential vendors can undermine an organization's supply chain performance. Numerous studies have demonstrated that firms consider multiple criteria when selecting key vendors. This research intends to develop a new hybrid model for the vendor selection process with better decision making. The proposed model provides a suitable tool for assisting decision makers and managers in making the right decisions and selecting the most suitable vendor. This paper proposes a hybrid model based on the Structural Equation Model (SEM) and the Analytical Hierarchy Process (AHP) for long-term strategic vendor selection problems. The five-step framework of the model has been designed after a thorough literature study. The proposed hybrid model will be applied to a real-life case study to assess its effectiveness. In addition, what-if analysis will be used for model validation purposes.
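As a compact illustration of the AHP component of such a hybrid (the judgment values are invented), criterion weights can be derived from a pairwise comparison matrix via its principal eigenvector, with a consistency check:

```python
import numpy as np

# Pairwise comparison matrix for three vendor-selection criteria (illustrative judgments)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # AHP priority vector (criterion weights)

n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)          # consistency index
CR = CI / 0.58                                # Saaty's random index for n = 3 is 0.58
print(weights, CR)                            # CR < 0.1 indicates acceptable consistency
```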
Toward a unified approach to dose-response modeling in ecotoxicology.
Ritz, Christian
2010-01-01
This study reviews dose-response models that are used in ecotoxicology. The focus lies on clarification of differences and similarities between models, and as a side effect, their different guises in ecotoxicology are unravelled. A look at frequently used dose-response models reveals major discrepancies, among other things in naming conventions. Therefore, there is a need for a unified view on dose-response modeling in order to improve the understanding of it and to facilitate communication and comparison of findings across studies, thus realizing its full potential. This study attempts to establish a general framework that encompasses most dose-response models that are of interest to ecotoxicologists in practice. The framework includes commonly used models such as the log-logistic and Weibull models, but also features entire suites of models as found in various guidance documents. An outline on how the proposed framework can be implemented in statistical software systems is also provided.
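As one example of the kind of model such a unified framework covers, the four-parameter log-logistic curve can be written as (a common parameterisation; the paper's notation may differ)

$$f(x)=c+\frac{d-c}{1+\exp\{b[\log(x)-\log(e)]\}},$$

with lower and upper asymptotes $c$ and $d$, slope $b$ and ED50 $e$; the Weibull and related dose-response curves used in ecotoxicology arise from the same general structure with different choices of link-type function.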
Factors accounting for youth suicide attempt in Hong Kong: a model building.
Wan, Gloria W Y; Leung, Patrick W L
2010-10-01
This study aimed at proposing and testing a conceptual model of youth suicide attempt. We proposed a model that began with family factors such as a history of physical abuse and parental divorce/separation. Family relationship, presence of psychopathology, life stressors, and suicide ideation were postulated as mediators leading to youth suicide attempt. The stepwise entry of the risk factors into a logistic regression model defined their proximity to suicide attempt. Path analysis further refined our proposed model of youth suicide attempt. Our originally proposed model was largely confirmed. The main revision was dropping parental divorce/separation as a risk factor in the model, due to its lack of significant contribution when examined alongside other risk factors. This model was cross-validated by gender. This study moved research on youth suicide from the identification of individual risk factors to model building, integrating separate findings of past studies.
Upadhyay, S K; Mukherjee, Bhaswati; Gupta, Ashutosh
2009-09-01
Several models for studies related to the tensile strength of materials are proposed in the literature where the size or length component has been taken to be an important factor for studying the specimens' failure behaviour. An important model, developed on the basis of a cumulative damage approach, is the three-parameter extension of the Birnbaum-Saunders fatigue model that incorporates the size of the specimen as an additional variable. This model is a strong competitor of the commonly used Weibull model and stands better than the traditional models, which do not incorporate the size effect. The paper considers two such cumulative damage models, checks their compatibility with a real dataset, compares them with some of the recent toolkits, and finally recommends a model which appears to be an appropriate one. Throughout, the study is Bayesian, based on Markov chain Monte Carlo simulation.
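For orientation, the two-parameter Birnbaum-Saunders fatigue distribution on which these cumulative damage models build has the distribution function (standard form; the three-parameter extension additionally incorporates specimen size or length)

$$F(t;\alpha,\beta)=\Phi\!\left[\frac{1}{\alpha}\!\left(\sqrt{\tfrac{t}{\beta}}-\sqrt{\tfrac{\beta}{t}}\right)\right],\qquad t>0,$$

where $\Phi$ is the standard normal distribution function, $\alpha$ is a shape parameter and $\beta$ is the scale (and median); the form arises from many small damage increments accumulating to a failure threshold.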
A Study on Phase Changes of Heterogeneous Composite Materials
NASA Astrophysics Data System (ADS)
Hirasawa, Yoshio; Saito, Akio; Takegoshi, Eisyun
In this study, the phase change process in heterogeneous composite materials consisting of water and coiled copper wires as the conductive solid is investigated with four kinds of typical calculation models: 1) model-1, in which the effective thermal conductivity of the composite material is used; 2) model-2, in which a metal fin stands in for the many conductive solids; 3) model-3, in which the effective thermal conductivities between nodes are estimated and a three-dimensional calculation is performed; 4) model-4, proposed by the authors in a previous paper, in which the effective thermal conductivity is not needed. Consequently, model-1 showed a phase change rate considerably lower than the experimental results. Model-2 gave a larger phase change rate. Model-3 agreed well with the experiment in the case of small coil diameter and relatively large Vd. Model-4 showed very good agreement with the experiment within the range of this study.
ERIC Educational Resources Information Center
Al-Dor, Nira
2006-01-01
The objective of this study is to present "The Spiral Model for the Development of Coordination" (SMDC), a learning model that reflects the complexity and possibilities embodied in the learning of movement notation Eshkol-Wachman (EWMN), an Israeli invention. This model constituted the infrastructure for a comprehensive study that examined the…
Application of the IRT and TRT Models to a Reading Comprehension Test
ERIC Educational Resources Information Center
Kim, Weon H.
2017-01-01
The purpose of the present study is to apply the item response theory (IRT) and testlet response theory (TRT) models to a reading comprehension test. This study applied the TRT models and the traditional IRT model to a seventh-grade reading comprehension test (n = 8,815) with eight testlets. These three models were compared to determine the best…
2014-07-01
Models referenced in this report (DSTO-TR-2992) include the Unified Theory of Acceptance and Use of Technology, the Structuration Model of Technology, Adaptive Structuration Theory, the Model of Mutual Adaptation, the Model of Technology Appropriation, the Diffusion/Implementation Model, and the Tri-core Model, among others [11]. Foresight techniques listed include simulation gaming, essay/scenario writing, genius forecasting, role play/acting, backcasting, SWOT, brainstorming, relevance tree/logic chart, and scenario workshop.
Modeling the Inner Magnetosphere: Radiation Belts, Ring Current, and Composition
NASA Technical Reports Server (NTRS)
Glocer, Alex
2011-01-01
The space environment is a complex system defined by regions of differing length scales, characteristic energies, and physical processes. It is often difficult, or impossible, to treat all aspects of the space environment relative to a particular problem with a single model. In our studies, we utilize several models working in tandem to examine this highly interconnected system. The methodology and results will be presented for three focused topics: 1) rapid radiation belt electron enhancements, 2) a ring current study of Energetic Neutral Atoms (ENAs), Dst, and plasma composition, and 3) examination of the outflow of ionospheric ions. In the first study, we use a coupled MHD magnetosphere - kinetic radiation belt model to explain recent Akebono/RDM observations of greater than 2.5 MeV radiation belt electron enhancements occurring on timescales of less than a few hours. In the second study, we present initial results of a ring current study using a kinetic ring current model newly coupled with an MHD magnetosphere model. Results of a Dst study for four geomagnetic events are shown. Moreover, direct comparisons with TWINS ENA images are used to infer the role that composition plays in the ring current. In the final study, we directly model the transport of plasma from the ionosphere to the magnetosphere. We especially focus on the role of photoelectrons and wave-particle interactions. The modeling methodology for each of these studies will be detailed along with the results.
Dimensional Model for Estimating Factors influencing Childhood Obesity: Path Analysis Based Modeling
Kheirollahpour, Maryam; Shohaimi, Shamarina
2014-01-01
The main objective of this study is to identify and develop a comprehensive model which estimates and evaluates the overall relations among the factors that lead to weight gain in children by using structural equation modeling. The proposed models in this study explore the connection among the socioeconomic status of the family, parental feeding practice, and physical activity. Six structural models were tested to identify the direct and indirect relationships between socioeconomic status, parental feeding practice, general level of physical activity, and weight status of children. Finally, a comprehensive model was devised to show how these factors relate to each other as well as to the body mass index (BMI) of the children simultaneously. Concerning the methodology of the current study, confirmatory factor analysis (CFA) was applied to reveal the hidden (secondary) effect of socioeconomic factors on feeding practice and ultimately on the weight status of the children and also to determine the degree of model fit. The comprehensive structural model tested in this study suggested that there are significant direct and indirect relationships among variables of interest. Moreover, the results suggest that parental feeding practice and physical activity are mediators in the structural model. PMID:25097878
Myeloproliferative Neoplasm Animal Models
Mullally, Ann; Lane, Steven W.; Brumme, Kristina; Ebert, Benjamin L.
2012-01-01
Myeloproliferative neoplasm (MPN) animal models accurately recapitulate human disease in mice and have been an important tool for the study of MPN biology and therapy. Transplantation of BCR-ABL transduced bone marrow cells into irradiated syngeneic mice established the field of MPN animal modeling, and the retroviral bone marrow transplantation (BMT) assay has been used extensively since. Genetically engineered MPN animal models have enabled detailed characterization of the effects of specific MPN-associated genetic abnormalities on the hematopoietic stem and progenitor cell (HSPC) compartment, and xenograft models have allowed the study of primary human MPN-propagating cells in vivo. All models have facilitated the pre-clinical development of MPN therapies. JAK2V617F, the most common molecular abnormality in BCR-ABL negative MPN, has been extensively studied using retroviral, transgenic, knock-in and xenograft models. MPN animal models have also been used to investigate additional genetic lesions found in human MPN and to evaluate the bone marrow microenvironment in these diseases. Finally, several genetic lesions, although not common somatically mutated drivers of MPN in humans, induce an MPN phenotype in mice. Future uses for MPN animal models will include modeling compound genetic lesions in MPN and studying myelofibrotic transformation. PMID:23009938
Grieger, Jessica A; Johnson, Brittany J; Wycherley, Thomas P; Golley, Rebecca K
2017-05-01
Background: Dietary simulation modeling can predict dietary strategies that may improve nutritional or health outcomes. Objectives: The study aims were to undertake a systematic review of simulation studies that model dietary strategies aiming to improve nutritional intake, body weight, and related chronic disease, and to assess the methodologic and reporting quality of these models. Methods: The Preferred Reporting Items for Systematic Reviews and Meta-Analyses guided the search strategy with studies located through electronic searches [Cochrane Library, Ovid (MEDLINE and Embase), EBSCOhost (CINAHL), and Scopus]. Study findings were described and dietary modeling methodology and reporting quality were critiqued by using a set of quality criteria adapted for dietary modeling from general modeling guidelines. Results: Forty-five studies were included and categorized as modeling moderation, substitution, reformulation, or promotion dietary strategies. Moderation and reformulation strategies targeted individual nutrients or foods to theoretically improve one particular nutrient or health outcome, estimating small to modest improvements. Substituting unhealthy foods with healthier choices was estimated to be effective across a range of nutrients, including an estimated reduction in intake of saturated fatty acids, sodium, and added sugar. Promotion of fruits and vegetables predicted marginal changes in intake. Overall, the quality of the studies was moderate to high, with certain features of the quality criteria consistently reported. Conclusions: Based on the results of reviewed simulation dietary modeling studies, targeting a variety of foods rather than individual foods or nutrients theoretically appears most effective in estimating improvements in nutritional intake, particularly reducing intake of nutrients commonly consumed in excess. A combination of strategies could theoretically be used to deliver the best improvement in outcomes. Study quality was moderate to high. However, given the lack of dietary simulation reporting guidelines, future work could refine the quality tool to harmonize consistency in the reporting of subsequent dietary modeling studies. © 2017 American Society for Nutrition.
The Dynamical Behaviors for a Class of Immunogenic Tumor Model with Delay
Muthoni, Mutei Damaris; Pang, Jianhua
2017-01-01
This paper aims at studying the model proposed by Kuznetsov and Taylor in 1994. Inspired by Mayer et al., time delay is introduced in the general model. The dynamic behaviors of this model are studied, which include the existence and stability of the equilibria and Hopf bifurcation of the model with discrete delays. The properties of the bifurcated periodic solutions are studied by using the normal form on the center manifold. Numerical examples and simulations are given to illustrate the bifurcation analysis and the obtained results. PMID:29312457
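A hedged sketch of the Kuznetsov-Taylor effector-tumour system with a discrete delay inserted in the recruitment term (parameter symbols are illustrative, and the paper's exact placement of the delay may differ):

$$\frac{dE}{dt}=s+\frac{p\,E(t-\tau)\,T(t-\tau)}{g+T(t-\tau)}-mET-dE,\qquad \frac{dT}{dt}=aT(1-bT)-nET,$$

where $E$ and $T$ are effector and tumour cell densities, $s$ is the constant influx of effector cells, the delayed term models tumour-stimulated recruitment, and the remaining terms describe inactivation, natural death, logistic tumour growth and tumour kill; Hopf bifurcation is analysed as the delay $\tau$ crosses a critical value.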
ERIC Educational Resources Information Center
Fedorov, Alexander
2011-01-01
The author supposed that media education models can be divided into the following groups: (1) educational-information models (the study of the theory, history, language of media culture, etc.), based on the cultural, aesthetic, semiotic, socio-cultural theories of media education; (2) educational-ethical models (the study of moral, religions,…
THE NORTH AMERICAN MERCURY MODEL INTER-COMPARISON STUDY (NAMMIS)
This paper describes the North American Mercury Model Inter-comparison Study (NAMMIS). The NAMMIS is an effort to apply atmospheric Hg models in a tightly constrained testing environment with a focus on North America. With each model using the same input data sets for initial co...
Parameter uncertainty analysis for the annual phosphorus loss estimator (APLE) model
USDA-ARS?s Scientific Manuscript database
Technical abstract: Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, we conduct an uncertainty analys...
Hickey, Graeme L; Blackstone, Eugene H
2016-08-01
Clinical risk-prediction models serve an important role in healthcare. They are used for clinical decision-making and measuring the performance of healthcare providers. To establish confidence in a model, external model validation is imperative. When designing such an external model validation study, thought must be given to patient selection, risk factor and outcome definitions, missing data, and the transparent reporting of the analysis. In addition, there are a number of statistical methods available for external model validation. Execution of a rigorous external validation study rests in proper study design, application of suitable statistical methods, and transparent reporting. Copyright © 2016 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.
Kirsch, Florian
2016-12-01
Disease management programs (DMPs) for chronic diseases are being increasingly implemented worldwide. The aim is to present a systematic overview of the economic effects of DMPs evaluated with Markov models. The quality of the models is assessed, the method by which the DMP intervention is incorporated into the model is examined, and the differences in the structure and data used in the models are considered. A literature search was conducted; the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement was followed to ensure systematic selection of the articles. Study characteristics, e.g., results, the intensity of the DMP and usual care, model design, time horizon, discount rates, utility measures, and cost-of-illness, were extracted from the reviewed studies. Model quality was assessed by two researchers with two different appraisals: one proposed by Philips et al. (Good practice guidelines for decision-analytic modelling in health technology assessment: a review and consolidation of quality assessment. Pharmacoeconomics 2006;24:355-71) and the other proposed by Caro et al. (Questionnaire to assess relevance and credibility of modeling studies for informing health care decision making: an ISPOR-AMCP-NPC Good Practice Task Force report. Value Health 2014;17:174-82). A total of 16 studies (9 on chronic heart disease, 2 on asthma, and 5 on diabetes) met the inclusion criteria. Five studies reported cost savings and 11 studies reported additional costs. In the quality assessment, the overall score of the models ranged from 39% to 65% under one appraisal and from 34% to 52% under the other. Eleven models integrated effectiveness derived from a clinical trial or a meta-analysis of complete DMPs, and only five models combined intervention effects from different sources into a DMP. The main limitations of the models are poor reporting practice and the variation in the selection of input parameters. Eleven of the 14 studies reported cost-effectiveness results of less than $30,000 per quality-adjusted life-year and the remaining two studies less than $30,000 per life-year gained. Nevertheless, if the reporting and data selection problems are addressed, then Markov models should provide more reliable information for decision makers, because understanding under what circumstances a DMP is cost-effective is an important determinant of efficient resource allocation. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Animal models for microbicide safety and efficacy testing.
Veazey, Ronald S
2013-07-01
Early studies have cast doubt on the utility of animal models for predicting success or failure of HIV-prevention strategies, but results of multiple human phase 3 microbicide trials, and interrogations into the discrepancies between human and animal model trials, indicate that animal models were, and are, predictive of safety and efficacy of microbicide candidates. Recent studies have shown that topically applied vaginal gels, and oral prophylaxis using single or combination antiretrovirals are indeed effective in preventing sexual HIV transmission in humans, and all of these successes were predicted in animal models. Further, prior discrepancies between animal and human results are finally being deciphered as inadequacies in study design in the model, or quite often, noncompliance in human trials, the latter being increasingly recognized as a major problem in human microbicide trials. Successful microbicide studies in humans have validated results in animal models, and several ongoing studies are further investigating questions of tissue distribution, duration of efficacy, and continued safety with repeated application of these, and other promising microbicide candidates in both murine and nonhuman primate models. Now that we finally have positive correlations with prevention strategies and protection from HIV transmission, we can retrospectively validate animal models for their ability to predict these results, and more importantly, prospectively use these models to select and advance even safer, more effective, and importantly, more durable microbicide candidates into human trials.
External validation of preexisting first trimester preeclampsia prediction models.
Allen, Rebecca E; Zamora, Javier; Arroyo-Manzano, David; Velauthar, Luxmilar; Allotey, John; Thangaratinam, Shakila; Aquilina, Joseph
2017-10-01
To validate the increasing number of prognostic models being developed for preeclampsia using our own prospective study. A systematic review of the literature that assessed biomarkers, uterine artery Doppler and maternal characteristics in the first trimester for the prediction of preeclampsia was performed, and models were selected based on predefined criteria. Validation was performed by applying the regression coefficients published in the different derivation studies to our cohort. We assessed the models' discrimination ability and calibration. Twenty models were identified for validation. The discrimination ability observed in the derivation studies (area under the curve, AUC) ranged from 0.70 to 0.96; when these models were validated against the validation cohort, these AUCs varied substantially, ranging from 0.504 to 0.833. Comparing the AUCs obtained in the derivation studies to those in the validation cohort, we found statistically significant differences in several studies. There currently isn't a definitive prediction model with adequate ability to discriminate for preeclampsia which performs as well when applied to a different population and which can differentiate well between the highest and lowest risk groups within the tested population. The pre-existing large number of models limits the value of further model development, and future research should be focused on further attempts to validate existing models and on assessing whether implementation of these improves patient care. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
Review of early assessment models of innovative medical technologies.
Fasterholdt, Iben; Krahn, Murray; Kidholm, Kristian; Yderstræde, Knud Bonnet; Pedersen, Kjeld Møller
2017-08-01
Hospitals increasingly make decisions regarding the early development of and investment in technologies, but a formal evaluation model for assisting hospitals early on in assessing the potential of innovative medical technologies is lacking. This article provides an overview of models for early assessment in different health organisations and discusses which models hold most promise for hospital decision makers. A scoping review of published studies between 1996 and 2015 was performed using nine databases. The following information was collected: decision context, decision problem, and a description of the early assessment model. A total of 2362 articles were identified and 12 studies fulfilled the inclusion criteria. An additional 12 studies were identified and included in the review by searching reference lists. The majority of the 24 early assessment studies were variants of traditional cost-effectiveness analysis. Around one fourth of the studies presented an evaluation model with a broader focus than cost-effectiveness. Uncertainty was mostly handled by simple sensitivity or scenario analysis. This review shows that evaluation models using known methods for assessing cost-effectiveness are most prevalent in early assessment, but seem ill-suited for early assessment in hospitals. Four models provided some usable elements for the development of a hospital-based model. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Rivers, Melissa B.; Wahls, Richard A.
1999-01-01
This paper gives the results of a grid study, a turbulence model study, and a Reynolds number effect study for transonic flows over a high-speed aircraft using the thin-layer, upwind, Navier-Stokes CFL3D code. The four turbulence models evaluated are the algebraic Baldwin-Lomax model with the Degani-Schiff modifications, the one-equation Baldwin-Barth model, the one-equation Spalart-Allmaras model, and Menter's two-equation Shear-Stress-Transport (SST) model. The flow conditions, which correspond to tests performed in the NASA Langley National Transonic Facility (NTF), are a Mach number of 0.90 and a Reynolds number of 30 million based on chord for a range of angles of attack (1 degree to 10 degrees). For the Reynolds number effect study, Reynolds numbers of 10 and 80 million based on chord were also evaluated. Computed forces and surface pressures compare reasonably well with the experimental data for all four of the turbulence models. The Baldwin-Lomax model with the Degani-Schiff modifications and the one-equation Baldwin-Barth model show the best agreement with experiment overall. The Reynolds number effects are evaluated using the Baldwin-Lomax with the Degani-Schiff modifications and the Baldwin-Barth turbulence models. Five angles of attack were evaluated for the Reynolds number effect study at three different Reynolds numbers. More work is needed to determine the ability of CFL3D to accurately predict Reynolds number effects.
Wen, Zhang; Guo, Ya; Xu, Banghao; Xiao, Kaiyin; Peng, Tao; Peng, Minhao
2016-04-01
Postoperative pancreatic fistula is still a major complication after pancreatic surgery, despite improvements of surgical technique and perioperative management. We sought to systematically review and critically assess the conduct and reporting of methods used to develop risk prediction models for predicting postoperative pancreatic fistula. We conducted a systematic search of PubMed and EMBASE databases to identify articles published before January 1, 2015, which described the development of models to predict the risk of postoperative pancreatic fistula. We extracted information on the development of each prediction model, including study design, sample size and number of events, definition of postoperative pancreatic fistula, risk predictor selection, missing data, model-building strategies, and model performance. Seven studies developing seven risk prediction models were included. In three studies (42 %), the number of events per variable was less than 10. The number of candidate risk predictors ranged from 9 to 32. Five studies (71 %) reported using univariate screening, which is not recommended in building a multivariate model, to reduce the number of risk predictors. Six risk prediction models (86 %) were developed by categorizing all continuous risk predictors. The treatment and handling of missing data were not mentioned in any of the studies. We found use of inappropriate methods that could endanger the development of the models, including univariate pre-screening of variables, categorization of continuous risk predictors, and model validation. The use of inappropriate methods affects the reliability and accuracy of the probability estimates of predicting postoperative pancreatic fistula.
NASA Astrophysics Data System (ADS)
Haili, Hasnawati; Maknun, Johar; Siahaan, Parsaoran
2017-08-01
Physics is a subject related to students' daily experience. Therefore, before studying it formally in class, students already have visualizations and prior knowledge about natural phenomena and can extend these themselves. The learning process in class should aim to detect, process, construct, and use students' mental models, so that students' mental models agree with and build on the right concepts. A previous study held at MAN 1 Muna indicates that, in the learning process, the teacher did not pay attention to students' mental models. As a consequence, the learning process has not tried to build students' mental modelling ability (MMA). The purpose of this study is to describe the improvement of students' MMA as an effect of a problem-solving based learning model with a multiple representations approach. This study is a pre-experimental design with a one-group pretest-posttest. It was conducted in class XI IPA of MAN 1 Muna in 2016/2017. Data collection uses a problem-solving test on the concept of the kinetic theory of gases and interviews to assess students' MMA. The result of this study is a classification of students' MMA, which is categorized into 3 categories: High Mental Modelling Ability (H-MMA) for 7
Soeker, Shaheed
2015-01-01
Traumatic brain injury causes functional limitations that can lead people to struggle to reintegrate into the workplace despite participating in work rehabilitation programmes. The aim of the study was to explore and describe the experiences of individuals with traumatic brain injury regarding returning to work through the use of the model of occupational self-efficacy. Ten individuals who were diagnosed with a mild to moderate brain injury participated in the study. The research study was positioned within the qualitative paradigm, specifically utilizing case study methodology. In order to gather data from the participants, individual interviews and participant observation techniques were used. Two themes emerged from the findings of the study. The first reflected the barriers related to the use of the model (i.e. Theme one: Effective participation in the model is affected by financial assistance). The second theme related to the enabling factors associated with the use of the model (i.e. Theme two: A sense of normality). The findings of this study indicated that the Model of Occupational Self-Efficacy (MOS) is a useful model for retraining the work skills of individuals who sustained a traumatic brain injury. The participants in this study could maintain employment in the open labour market for a period of at least 12 months, and the model improved their ability to accept their brain injury as well as adapt to their worker roles. The MOS also provides a framework for facilitating community integration.
Pouwels, Xavier Ghislain Léon Victor; Ramaekers, Bram L T; Joore, Manuela A
2017-10-01
To provide an overview of model characteristics and outcomes of model-based economic evaluations concerning chemotherapy and targeted therapy (TT) for metastatic breast cancer (MBC); to assess the quality of the studies; and to analyse the association between model characteristics and study quality and outcomes. PubMed and NHS EED were systematically searched. Inclusion criteria were as follows: English or Dutch language, model-based economic evaluation, chemotherapy or TT as intervention, population diagnosed with MBC, published between 2000 and 2014, and reporting life-years (LY) or quality-adjusted life-years (QALY) and an incremental cost-effectiveness ratio. General characteristics, model characteristics and outcomes of the studies were extracted. The quality of the studies was assessed with a checklist. Twenty-four studies were included, covering 50 comparisons (20 concerning chemotherapy and 30 TT). Seven comparisons were represented in multiple studies. A health state-transition model including the following health states: stable/progression-free disease, progression and death was used in 18 studies. Studies fulfilled on average 14 of the 26 items of the quality checklist, with shortfalls mostly due to a lack of transparency in reporting. The incremental net monetary benefit was positive in thirty-one per cent of the comparisons. TT led to higher incremental QALYs gained, and industry-sponsored studies reported more favourable cost-effectiveness outcomes. The development of a disease-specific reference model would improve the transparency and quality of model-based cost-effectiveness assessments for MBC treatments. Incremental health benefits increased over time, but were outweighed by the increased treatment costs. Consequently, increased health benefits led to lower value for money.
Sudell, Maria; Kolamunnage-Dona, Ruwanthi; Tudur-Smith, Catrin
2016-12-05
Joint models for longitudinal and time-to-event data are commonly used to simultaneously analyse correlated data in single study cases. Synthesis of evidence from multiple studies using meta-analysis is a natural next step but its feasibility depends heavily on the standard of reporting of joint models in the medical literature. During this review we aim to assess the current standard of reporting of joint models applied in the literature, and to determine whether current reporting standards would allow or hinder future aggregate data meta-analyses of model results. We undertook a literature review of non-methodological studies that involved joint modelling of longitudinal and time-to-event medical data. Study characteristics were extracted and an assessment of whether separate meta-analyses for longitudinal, time-to-event and association parameters were possible was made. The 65 studies identified used a wide range of joint modelling methods in a selection of software. Identified studies concerned a variety of disease areas. The majority of studies reported adequate information to conduct a meta-analysis (67.7% for longitudinal parameter aggregate data meta-analysis, 69.2% for time-to-event parameter aggregate data meta-analysis, 76.9% for association parameter aggregate data meta-analysis). In some cases model structure was difficult to ascertain from the published reports. Whilst extraction of sufficient information to permit meta-analyses was possible in a majority of cases, the standard of reporting of joint models should be maintained and improved. Recommendations for future practice include clear statement of model structure, of values of estimated parameters, of software used and of statistical methods applied.
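For readers unfamiliar with the model class, a typical shared-parameter joint model can be sketched as (one common formulation; the reviewed studies vary)

$$y_i(t)=m_i(t)+\varepsilon_i(t)=\mathbf{x}_i^{\top}(t)\boldsymbol{\beta}+\mathbf{z}_i^{\top}(t)\mathbf{b}_i+\varepsilon_i(t),\qquad h_i(t)=h_0(t)\exp\{\boldsymbol{\gamma}^{\top}\mathbf{w}_i+\alpha\,m_i(t)\},$$

a linear mixed longitudinal submodel coupled to a proportional hazards submodel through the association parameter $\alpha$; it is the longitudinal ($\boldsymbol{\beta}$), time-to-event ($\boldsymbol{\gamma}$) and association ($\alpha$) parameters whose reporting determines whether separate aggregate-data meta-analyses are possible.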
Galleria mellonella larvae as an infection model for group A streptococcus
Loh, Jacelyn MS; Adenwalla, Nazneen; Wiles, Siouxsie; Proft, Thomas
2013-01-01
Group A streptococcus (GAS) is a strict human pathogen that can cause a wide range of diseases, such as tonsillitis, impetigo, necrotizing fasciitis, toxic shock, and acute rheumatic fever. Modeling human diseases in animals is complicated, and rapid, simple, and cost-effective in vivo models of GAS infection are clearly lacking. Recently, the use of non-mammalian models of human disease has started to re-attract attention. Galleria mellonella larvae, also known as wax worms, have been investigated for modeling a number of bacterial pathogens, and have been shown to be a useful model to study pathogenesis of the M3 serotype of GAS. In this study we provide further evidence of the validity of the wax worm model by testing different GAS M-types, as well as investigating the effect of bacterial growth phase and incubation temperature on GAS virulence in this model. In contrast to previous studies, we show that the M-protein, among other factors, is an important virulence factor that can be effectively modeled in the wax worm. We also highlight the need for a more in-depth investigation of the effects of experimental design and wax worm supply before the wax worm model can be properly validated for studying GAS pathogenesis. PMID:23652836
A systematic literature review of open source software quality assessment models.
Adewumi, Adewole; Misra, Sanjay; Omoregbe, Nicholas; Crawford, Broderick; Soto, Ricardo
2016-01-01
Many open source software (OSS) quality assessment models have been proposed and are available in the literature. However, there is little or no adoption of these models in practice. To guide the formulation of newer models that practitioners will find acceptable, the existing models need to be clearly discriminated on the basis of their specific properties. Accordingly, the aim of this study is to perform a systematic literature review that investigates the properties of existing OSS quality assessment models by classifying them with respect to their quality characteristics, the methodology they use for assessment, and their domain of application, so as to guide the formulation and development of newer models. Searches in IEEE Xplore, ACM, Science Direct, Springer and Google Search were performed to retrieve all relevant primary studies. Journal and conference papers published between 2003 and 2015 were considered, since the first known OSS quality model emerged in 2003. A total of 19 OSS quality assessment model papers were selected; to select these models we developed assessment criteria to evaluate the quality of the existing studies. The quality assessment models are classified into five categories based on the quality characteristics they possess, namely single-attribute, rounded-category, community-only-attribute, non-community-attribute, and non-quality-in-use models. Our study shows that software selection based on hierarchical structures is the most popular selection method in the existing OSS quality assessment models. Furthermore, we found that the majority (47%) of the existing models do not specify any domain of application. In conclusion, our study is a valuable contribution to the community: it helps quality assessment model developers formulate newer models and helps practitioners (software evaluators) select suitable OSS from among alternatives.
The sensitivity of ecosystem service models to choices of input data and spatial resolution
Bagstad, Kenneth J.; Cohen, Erika; Ancona, Zachary H.; McNulty, Steven; Sun, Ge
2018-01-01
Although ecosystem service (ES) modeling has progressed rapidly in the last 10–15 years, comparative studies on data and model selection effects have become more common only recently. Such studies have drawn mixed conclusions about whether different data and model choices yield divergent results. In this study, we compared the results of different models to address these questions at national, provincial, and subwatershed scales in Rwanda. We compared results for carbon, water, and sediment as modeled using InVEST and WaSSI using (1) land cover data at 30 and 300 m resolution and (2) three different input land cover datasets. WaSSI and simpler InVEST models (carbon storage and annual water yield) were relatively insensitive to the choice of spatial resolution, but more complex InVEST models (seasonal water yield and sediment regulation) produced large differences when applied at differing resolution. Six out of nine ES metrics (InVEST annual and seasonal water yield and WaSSI) gave similar predictions for at least two different input land cover datasets. Despite differences in mean values when using different data sources and resolution, we found significant and highly correlated results when using Spearman's rank correlation, indicating consistent spatial patterns of high and low values. Our results confirm and extend conclusions of past studies, showing that in certain cases (e.g., simpler models and national-scale analyses), results can be robust to data and modeling choices. For more complex models, those with different output metrics, and subnational to site-based analyses in heterogeneous environments, data and model choices may strongly influence study findings.
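The consistency-of-spatial-pattern check reported here (rank correlation despite differences in mean values) is straightforward to reproduce for any pair of model outputs aggregated to the same reporting units. The sketch below uses made-up per-subwatershed estimates rather than values from the Rwanda analysis.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
# Hypothetical per-subwatershed estimates of the same ES metric from two models
model_a = rng.lognormal(mean=2.0, sigma=0.8, size=50)
model_b = model_a * rng.lognormal(mean=0.5, sigma=0.3, size=50)  # biased in magnitude, similar ranking

rho, p = spearmanr(model_a, model_b)
print(f"mean ratio B/A = {np.mean(model_b / model_a):.2f}")   # different absolute magnitudes
print(f"Spearman rho = {rho:.2f} (p = {p:.1e})")              # but consistent spatial pattern
```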
Brinjikji, Waleed; Ding, Yong H; Kallmes, David F; Kadirvel, Ramanathan
2016-01-01
Pre-clinical studies are important in helping practitioners and device developers improve techniques and tools for endovascular treatment of intracranial aneurysms. Thus, an understanding of the major animal models used in such studies is important. The New Zealand rabbit elastase-induced arterial aneurysm of the common carotid artery is one of the most commonly used models for testing the safety and efficacy of new endovascular devices. In this review we discuss 1) the various techniques used to create the aneurysm, 2) complications of aneurysm creation, 3) the natural history of the arterial aneurysm, 4) histopathologic and hemodynamic features of the aneurysm, 5) devices tested using this model, and 6) weaknesses of the model. We demonstrate how pre-clinical studies using this model are applied to the treatment of intracranial aneurysms in humans. The model has hemodynamic, morphological and histologic characteristics similar to those of human aneurysms and demonstrates similar healing responses to coiling. Despite these strengths, however, the model does have many weaknesses, including the fact that it does not emulate the complex inflammatory processes affecting growing and ruptured aneurysms. Furthermore, the model's extracranial location affects its ability to be used in preclinical safety assessments of new devices. We conclude that the rabbit elastase model has characteristics that make it a simple and effective model for preclinical studies on the endovascular treatment of intracranial aneurysms; however, further work is needed to develop aneurysm models that simulate the histopathologic and morphologic characteristics of growing and ruptured aneurysms. PMID:25904642
Zhang, Y J; Xue, F X; Bai, Z P
2017-03-06
The impact of maternal air pollution exposure on offspring health has received much attention. Precise and feasible exposure estimation is particularly important for clarifying exposure-response relationships and reducing heterogeneity among studies. Temporally-adjusted land use regression (LUR) models are exposure assessment methods developed in recent years that have the advantage of high spatial-temporal resolution. Studies on the health effects of outdoor air pollution exposure during pregnancy have increasingly been carried out using this type of model. In China, research applying LUR models has mostly remained at the model construction stage, and findings from related epidemiological studies have rarely been reported. In this paper, the sources of heterogeneity and the research progress of meta-analyses on the associations between air pollution and adverse pregnancy outcomes are analyzed, the methodological characteristics of temporally-adjusted LUR models are introduced, and the current epidemiological studies on adverse pregnancy outcomes that applied this model are systematically summarized. Recommendations for the development and application of LUR models in China are presented. This will encourage more valid exposure predictions during pregnancy in large-scale epidemiological studies on the health effects of air pollution in China.
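As a rough illustration of how a temporally-adjusted LUR estimate is often constructed, the sketch below fits an ordinary least-squares LUR on annual-mean monitor data and then scales the spatial prediction by the ratio of a reference monitor's weekly mean to its annual mean. All variable names, predictors and data are hypothetical; the studies reviewed here may use different predictors and adjustment schemes.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Hypothetical annual-mean NO2 at 40 monitoring sites with land-use predictors
sites = pd.DataFrame({
    "traffic_100m": rng.uniform(0, 5000, 40),   # road length within 100 m (m)
    "industry_1km": rng.uniform(0, 0.4, 40),    # industrial land fraction within 1 km
    "pop_1km":      rng.uniform(0, 20000, 40),  # population within 1 km
})
sites["no2_annual"] = (18 + 0.002 * sites.traffic_100m + 25 * sites.industry_1km
                       + 0.0003 * sites.pop_1km + rng.normal(0, 3, 40))

X = sm.add_constant(sites[["traffic_100m", "industry_1km", "pop_1km"]])
lur = sm.OLS(sites["no2_annual"], X).fit()

# Spatial prediction at a residence, then temporal adjustment for each exposure week
home = pd.DataFrame({"traffic_100m": [2200.0], "industry_1km": [0.1], "pop_1km": [9000.0]})
X_home = sm.add_constant(home, has_constant="add")
annual_at_home = float(np.asarray(lur.predict(X_home))[0])

ref_weekly = rng.normal(30, 8, 52)               # weekly means at a regional reference monitor
weekly_at_home = annual_at_home * ref_weekly / ref_weekly.mean()
print(f"annual LUR estimate: {annual_at_home:.1f}; trimester-1 mean: {weekly_at_home[:13].mean():.1f}")
```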
Calibration of Reduced Dynamic Models of Power Systems using Phasor Measurement Unit (PMU) Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Ning; Lu, Shuai; Singh, Ruchi
2011-09-23
Accuracy of a power system dynamic model is essential to the secure and efficient operation of the system. Lower confidence in model accuracy usually leads to conservative operation and lower asset usage. To improve model accuracy, identification algorithms have been developed to calibrate parameters of individual components using measurement data from staged tests. To facilitate online dynamic studies for large power system interconnections, this paper proposes a model reduction and calibration approach using phasor measurement unit (PMU) data. First, a model reduction method is used to reduce the number of dynamic components. Then, a calibration algorithm is developed to estimate parameters of the reduced model. This approach will help to maintain an accurate dynamic model suitable for online dynamic studies. The performance of the proposed method is verified through simulation studies.
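The calibration step can be illustrated, in a greatly simplified form, as fitting the parameters of a reduced dynamic model to a measured PMU ringdown. The single damped oscillatory mode below is a stand-in for a reduced inter-area mode, not the model used in the paper, and the data and parameter names are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def reduced_mode(t, amp, sigma, f_hz, phase, offset):
    """Single damped oscillatory mode as a stand-in for a reduced dynamic model."""
    return offset + amp * np.exp(-sigma * t) * np.cos(2 * np.pi * f_hz * t + phase)

# Synthetic PMU frequency-deviation measurements at 30 samples per second
t = np.arange(0, 10, 1 / 30)
true = reduced_mode(t, amp=0.05, sigma=0.25, f_hz=0.6, phase=0.3, offset=0.0)
pmu = true + np.random.default_rng(2).normal(0, 0.004, t.size)

p0 = [0.03, 0.1, 0.5, 0.0, 0.0]                     # initial parameter guesses
popt, pcov = curve_fit(reduced_mode, t, pmu, p0=p0)
amp, sigma, f_hz, phase, offset = popt
damping_ratio = sigma / np.sqrt(sigma**2 + (2 * np.pi * f_hz) ** 2)
print(f"calibrated mode: f = {f_hz:.2f} Hz, damping ratio = {damping_ratio:.3f}")
```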
Analysing and controlling the tax evasion dynamics via majority-vote model
NASA Astrophysics Data System (ADS)
Lima, F. W. S.
2010-09-01
Within the context of agent-based Monte-Carlo simulations, we study the well-known majority-vote model (MVM) with noise applied to tax evasion on simple square lattices, Voronoi-Delaunay random lattices, Barabási-Albert networks, and Erdös-Rényi random graphs. In order to analyse and control the fluctuations of tax evasion in the economics model proposed by Zaklan, the MVM is applied in the neighbourhood of the critical noise qc to evolve the Zaklan model. The Zaklan model had previously been studied using the equilibrium Ising model. Here we show that the Zaklan model is robust: it can be studied using the equilibrium dynamics of the Ising model as well as through the nonequilibrium MVM, on the various topologies cited above, giving the same behaviour regardless of the dynamics or topology used.
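A minimal Monte-Carlo sketch of the majority-vote model with noise on a simple square lattice is given below, with tax evaders as spins -1 and honest taxpayers as +1, loosely in the spirit of the Zaklan set-up. The lattice size and noise value are placeholders, and the audit-and-punishment rule of the full Zaklan model is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
L, q, steps = 32, 0.05, 200                  # lattice size, noise parameter, Monte-Carlo sweeps
spins = rng.choice([-1, 1], size=(L, L))     # +1 honest taxpayer, -1 tax evader

def sweep(spins, q):
    """One Monte-Carlo sweep of the majority-vote model with noise q."""
    n = spins.shape[0]
    for _ in range(spins.size):
        i, j = rng.integers(0, n, 2)
        neighbours = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j]
                      + spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
        majority = np.sign(neighbours)
        if majority == 0:                    # tie: choose at random
            spins[i, j] = rng.choice([-1, 1])
        elif rng.random() < 1 - q:           # follow the local majority ...
            spins[i, j] = majority
        else:                                # ... or oppose it with probability q
            spins[i, j] = -majority
    return spins

for _ in range(steps):
    spins = sweep(spins, q)

print(f"fraction of tax evaders after {steps} sweeps: {np.mean(spins == -1):.3f}")
```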
Kinematic responses and injuries of pedestrian in car-pedestrian collisions
NASA Astrophysics Data System (ADS)
Teng, T. L.; Liang, C. C.; Hsu, C. Y.; Tai, S. F.
2017-10-01
How to protect pedestrians and reduce collision injuries has gradually become a new focus of automotive safety research worldwide. Many engineering studies have attempted to reduce the pedestrian injuries caused by traffic accidents. Physical models, whether impactor models or full-scale pedestrian models, are costly to use in impact tests. This study constructs a vehicle-pedestrian collision model using MADYMO. To verify the accuracy of the proposed vehicle-pedestrian collision model, experimental data are used in the pedestrian model test. The proposed model is then applied to analyze the kinematic responses and injuries of pedestrians in collisions. The modelled results can help assess the pedestrian friendliness of vehicles and assist in the future development of pedestrian-friendly vehicle technologies.
[Study on the automatic parameters identification of water pipe network model].
Jia, Hai-Feng; Zhao, Qi-Feng
2010-01-01
Based on an analysis of the problems in developing and applying water pipe network models, automatic identification of model parameters is regarded as a key bottleneck for model application in water supply enterprises. A methodology for the automatic identification of water pipe network model parameters based on GIS and SCADA databases is proposed. The kernel algorithm for automatic parameter identification is then studied: RSA (Regionalized Sensitivity Analysis) is used for automatic recognition of sensitive parameters, MCS (Monte-Carlo Sampling) is used for automatic identification of parameters, and the detailed technical route based on RSA and MCS is presented. A module for automatic identification of water pipe network model parameters is developed. Finally, taking a typical water pipe network as a case, a case study on automatic parameter identification is conducted and satisfactory results are achieved.
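The RSA/MCS combination described here can be sketched generically: draw many random parameter sets, run the network model for each, and keep the "behavioural" sets whose simulated pressures best match the SCADA observations. The hydraulic model below is a toy placeholder (a made-up function of a roughness and a demand multiplier), not EPANET or the authors' module, and the parameter ranges are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

def toy_network_model(roughness, demand_factor):
    """Placeholder for a pipe-network hydraulic solver: returns nodal pressures (m)."""
    base = np.array([42.0, 38.0, 35.0, 30.0])
    return base - 8.0 * (demand_factor - 1.0) - 0.04 * (roughness - 100.0)

observed = toy_network_model(118.0, 1.12) + rng.normal(0, 0.3, 4)   # synthetic SCADA pressures

# Monte-Carlo sampling over plausible parameter ranges
n = 5000
samples = np.column_stack([rng.uniform(80, 150, n),     # roughness coefficient
                           rng.uniform(0.8, 1.4, n)])   # demand multiplier
rmse = np.array([np.sqrt(np.mean((toy_network_model(r, d) - observed) ** 2))
                 for r, d in samples])

behavioural = samples[rmse < np.quantile(rmse, 0.02)]   # best 2% kept as "behavioural" sets (RSA split)
print("identified roughness ~", behavioural[:, 0].mean().round(1),
      "| demand factor ~", behavioural[:, 1].mean().round(3))
```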
The guinea pig as an animal model for developmental and reproductive toxicology studies.
Rocca, Meredith S; Wehner, Nancy G
2009-04-01
Regulatory guidelines for developmental and reproductive toxicology (DART) studies require selection of "relevant" animal models as determined by kinetic, pharmacological, and toxicological data. Traditionally, rats, mice, and rabbits are the preferred animal models for these studies. However, for test articles that are pharmacologically inactive in the traditional animal models, the guinea pig may be a viable option. This choice should not be made lightly, as guinea pigs have many disadvantages compared to the traditional species, including limited historical control data, variability in pregnancy rates, small and variable litter size, long gestation, relative maturity at birth, and difficulty in dosing and breeding. This report describes methods for using guinea pigs in DART studies and provides results of positive and negative controls. Standard study designs and animal husbandry methods were modified to allow mating on the postpartum estrus in fertility studies and were used for producing cohorts of pregnant females for developmental studies. A positive control study with the pregnancy-disrupting agent mifepristone resulted in the anticipated failure of embryo implantation and supported the use of the guinea pig model. Control data for reproductive endpoints collected from 5 studies are presented. In cases where the traditional animal models are not relevant, the guinea pig can be used successfully for DART studies. (c) 2009 Wiley-Liss, Inc.
DOT National Transportation Integrated Search
1999-02-12
FAST-TRAC: This report describes the choice model study of the FAST-TRAC (Faster and Safer Travel through Traffic Routing and Advanced Controls) operational test in southeast Michigan. Choice modeling is a stated-preference approach in which resp...
Empathy, Communication, and Prosocial Behavior.
ERIC Educational Resources Information Center
Stiff, James B.; And Others
To explain the role of empathy in forms of prosocial behavior, two studies were conducted to examine the relationships among different dimensions of empathy, communication, and prosocial behavior. Study one tested three models hypothesized to explain this process, an altruistic model, an egoistic model, and a dual-process model combining aspects…
Mental Models: Knowledge in the Head and Knowledge in the World.
ERIC Educational Resources Information Center
Jonassen, David H.; Henning, Philip
1999-01-01
Explores the utility of mental models as learning outcomes in using complex and situated learning environments. Describes two studies: one aimed at eliciting mental models in the heads of novice refrigeration technicians, and the other an ethnographic study eliciting knowledge and models within the community of experienced refrigeration…
Supporting Students' Knowledge Transfer in Modeling Activities
ERIC Educational Resources Information Center
Piksööt, Jaanika; Sarapuu, Tago
2014-01-01
This study investigates ways to enhance secondary school students' knowledge transfer in complex science domains by implementing question prompts. Two samples of students applied two web-based models to study molecular genetics--the model of genetic code (n = 258) and translation (n = 245). For each model, the samples were randomly divided into…
The Development Effectiveness Management Model for Sub-District Secondary School
ERIC Educational Resources Information Center
Butsankom, Akachai; Sirishuthi, Chaiyuth; Lammana, Preeda
2016-01-01
The purposes of this research were to study the factors of effectiveness management model for subdistrict secondary school, to investigate current situations and desirable situations of effectiveness management model for sub-district secondary school, to develop the effectiveness management model for sub-district secondary school and to study the…
Developing Computer Model-Based Assessment of Chemical Reasoning: A Feasibility Study
ERIC Educational Resources Information Center
Liu, Xiufeng; Waight, Noemi; Gregorius, Roberto; Smith, Erica; Park, Mihwa
2012-01-01
This paper reports a feasibility study on developing computer model-based assessments of chemical reasoning at the high school level. Computer models are flash and NetLogo environments to make simultaneously available three domains in chemistry: macroscopic, submicroscopic, and symbolic. Students interact with computer models to answer assessment…
Adequacy Model for School Funding
ERIC Educational Resources Information Center
Banicki, Guy; Murphy, Gregg
2014-01-01
This study considers the effectiveness of the Evidence-Based Adequacy model of school funding. In looking at the Evidence-Based Adequacy model for school funding, one researcher has been centrally associated with the development and study of this model. Allen Odden is currently a professor in the Department of Educational Leadership and Policy…
Watershed modeling applications in south Texas
Pedraza, Diana E.; Ockerman, Darwin J.
2012-01-01
This fact sheet presents an overview of six selected watershed modeling studies by the USGS and partners that address a variety of water-resource issues in south Texas. These studies provide examples of modeling applications and demonstrate the usefulness and versatility of watershed models in aiding the understanding of hydrologic systems.
Teachers' Development Model to Authentic Assessment by Empowerment Evaluation Approach
ERIC Educational Resources Information Center
Charoenchai, Charin; Phuseeorn, Songsak; Phengsawat, Waro
2015-01-01
The purposes of this study were 1) to study teachers' authentic assessment, teachers' comprehension of authentic assessment, and teachers' needs for authentic assessment development; 2) to create a teacher development model; 3) to experiment with the teacher development model; and 4) to evaluate the effectiveness of the teacher development model. The research is divided into 4…
Bayesian model reduction and empirical Bayes for group (DCM) studies
Friston, Karl J.; Litvak, Vladimir; Oswal, Ashwini; Razi, Adeel; Stephan, Klaas E.; van Wijk, Bernadette C.M.; Ziegler, Gabriel; Zeidman, Peter
2016-01-01
This technical note describes some Bayesian procedures for the analysis of group studies that use nonlinear models at the first (within-subject) level – e.g., dynamic causal models – and linear models at subsequent (between-subject) levels. Its focus is on using Bayesian model reduction to finesse the inversion of multiple models of a single dataset or a single (hierarchical or empirical Bayes) model of multiple datasets. These applications of Bayesian model reduction allow one to consider parametric random effects and make inferences about group effects very efficiently (in a few seconds). We provide the relatively straightforward theoretical background to these procedures and illustrate their application using a worked example. This example uses a simulated mismatch negativity study of schizophrenia. We illustrate the robustness of Bayesian model reduction to violations of the (commonly used) Laplace assumption in dynamic causal modelling and show how its recursive application can facilitate both classical and Bayesian inference about group differences. Finally, we consider the application of these empirical Bayesian procedures to classification and prediction. PMID:26569570
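Under the Gaussian (Laplace) assumptions used in DCM, Bayesian model reduction has a closed form: the posterior of a reduced model (one with tighter or switched-off priors) and its change in log evidence are obtained directly from the full model's prior and posterior. The sketch below implements those standard Gaussian identities as I understand them from the BMR literature; it is not SPM code, and the two-parameter numerical example is arbitrary.

```python
import numpy as np

def logdet(M):
    """Log-determinant via slogdet for numerical stability."""
    return np.linalg.slogdet(M)[1]

def bayesian_model_reduction(mu, C, eta, S, eta_r, S_r):
    """Gaussian BMR. Full posterior N(mu, C), full prior N(eta, S), reduced prior N(eta_r, S_r).
    Returns the reduced posterior mean and covariance and the change in log evidence."""
    iC, iS, iSr = np.linalg.inv(C), np.linalg.inv(S), np.linalg.inv(S_r)
    iCr = iC + iSr - iS                              # reduced posterior precision
    Cr = np.linalg.inv(iCr)
    mu_r = Cr @ (iC @ mu + iSr @ eta_r - iS @ eta)   # reduced posterior mean
    dF = 0.5 * (logdet(S) - logdet(S_r) - logdet(C) - logdet(iCr)
                + mu_r @ iCr @ mu_r - mu @ iC @ mu
                - eta_r @ iSr @ eta_r + eta @ iS @ eta)
    return mu_r, Cr, dF

# Arbitrary example: the reduced model shrinks the second parameter's prior to ~0 (switches it off)
mu, C = np.array([0.4, 0.1]), np.diag([0.04, 0.04])   # full posterior
eta, S = np.zeros(2), np.eye(2)                       # full prior
eta_r, S_r = np.zeros(2), np.diag([1.0, 1e-8])        # reduced prior
mu_r, Cr, dF = bayesian_model_reduction(mu, C, eta, S, eta_r, S_r)
print("reduced posterior mean:", mu_r.round(3), "| change in log evidence:", round(dF, 2))
```

Because the reduction needs only the full model's prior and posterior moments, many candidate reduced models can be scored in this way without refitting, which is what makes the group-level procedures described above fast.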
A proposed model for economic evaluations of major depressive disorder.
Haji Ali Afzali, Hossein; Karnon, Jonathan; Gray, Jodi
2012-08-01
In countries like the UK and Australia, the comparability of model-based analyses is an essential aspect of reimbursement decisions for new pharmaceuticals, medical services and technologies. Within disease areas, the use of models with alternative structures, types of modelling technique and/or data sources for common parameters reduces the comparability of evaluations of alternative technologies for the same condition. The aim of this paper is to propose a decision analytic model to evaluate the long-term costs and benefits of alternative management options in patients with depression. The structure of the proposed model is based on the natural history of depression and includes clinical events that are important from both clinical and economic perspectives. Considering its greater flexibility with respect to handling time, discrete event simulation (DES) is an appropriate simulation platform for modelling studies of depression. We argue that the proposed model can be used as a reference model in model-based studies of depression, improving the quality and comparability of studies.
NASA Astrophysics Data System (ADS)
McGuire, A. D.
2016-12-01
The Model Integration Group of the Permafrost Carbon Network (see http://www.permafrostcarbon.org/) has conducted studies to evaluate the sensitivity of offline terrestrial permafrost and carbon models to both historical and projected climate change. These studies indicate that there is a wide range of (1) initial states of permafrost extent and carbon stocks simulated by these models and (2) responses of permafrost extent and carbon stocks to both historical and projected climate change. In this study, we synthesize what has been learned about the variability in initial states among models and the driving factors that contribute to variability in the sensitivity of responses. We conclude the talk with a discussion of efforts needed by (1) the modeling community to standardize the structural representation of permafrost and carbon dynamics among models that are used to evaluate the permafrost carbon feedback and (2) the modeling and observational communities to jointly develop data sets and methodologies to more effectively benchmark models.
Lu, Tao; Lu, Minggen; Wang, Min; Zhang, Jun; Dong, Guang-Hui; Xu, Yong
2017-12-18
Longitudinal competing risks data frequently arise in clinical studies. Skewness and missingness are commonly observed for these data in practice. However, most joint models do not account for these data features. In this article, we propose partially linear mixed-effects joint models to analyze skewed longitudinal competing risks data with missingness. In particular, to account for skewness, we replace the commonly assumed symmetric distributions with asymmetric distributions for the model errors. To deal with missingness, we employ an informative missing data model. Joint models that couple the partially linear mixed-effects model for the longitudinal process, the cause-specific proportional hazards model for the competing risks process, and the missing data process are developed. To estimate the parameters in the joint models, we propose a fully Bayesian approach based on the joint likelihood. To illustrate the proposed model and method, we apply them to an AIDS clinical study; some interesting findings are reported. We also conduct simulation studies to validate the proposed method.
An Investigation on the Sensitivity of the Parameters of Urban Flood Model
NASA Astrophysics Data System (ADS)
M, A. B.; Lohani, B.; Jain, A.
2015-12-01
Global climatic change has triggered weather patterns which lead to heavy and sudden rainfall in different parts of the world. The impact of heavy rainfall is especially severe in urban areas, in the form of urban flooding. In order to understand the effect of heavy rainfall induced flooding, it is necessary to model the entire flooding scenario more accurately, which is now becoming possible with the availability of high resolution airborne LiDAR data and other real time observations. However, there is not much understanding of the optimal use of these data or of the effect of other parameters on the performance of a flood model. In view of this, the aim of this study is to (i) understand how the use of high resolution LiDAR data improves the performance of an urban flood model, and (ii) understand the sensitivity of various hydrological parameters in urban flood modelling. In this study, modelling of flooding in urban areas due to heavy rainfall is carried out considering the Indian Institute of Technology (IIT) Kanpur, India as the study site. The existing model MIKE FLOOD, which is accepted by the Federal Emergency Management Agency (FEMA), is used along with the high resolution airborne LiDAR data. Once the model is set up, it is run repeatedly while changing parameters such as the resolution of the Digital Surface Model (DSM), Manning's roughness, initial losses, catchment description, concentration time and runoff reduction factor, and the results obtained from the model are compared with field observations. The parametric study carried out in this work demonstrates that the selection of catchment description plays a very important role in urban flood modelling. Results also show the significant impact of the resolution of the DSM, initial losses and concentration time on the urban flood model. This study will help in understanding the effect of various parameters that should be part of a flood model for its accurate performance.
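The parametric exercise described here (vary one model setting at a time and compare predicted flood depths with field observations) can be expressed generically as a one-at-a-time sensitivity loop. The run_flood_model function below is a hypothetical stand-in for the MIKE FLOOD set-up; parameter names, baseline values and the observed depth are illustrative only.

```python
import numpy as np

def run_flood_model(dsm_res_m=1.0, manning_n=0.03, initial_loss_mm=5.0,
                    concentration_time_min=30.0, runoff_factor=0.9):
    """Hypothetical stand-in for a MIKE FLOOD run: returns peak flood depth (m) at a check point."""
    return (0.8 + 0.05 * np.log(dsm_res_m + 1) + 6.0 * manning_n
            - 0.01 * initial_loss_mm - 0.004 * concentration_time_min + 0.3 * runoff_factor)

observed_depth = 1.05                      # field observation at the check point (m)
baseline = {"dsm_res_m": 1.0, "manning_n": 0.03, "initial_loss_mm": 5.0,
            "concentration_time_min": 30.0, "runoff_factor": 0.9}
perturbations = {"dsm_res_m": [0.5, 5.0], "manning_n": [0.02, 0.05],
                 "initial_loss_mm": [2.0, 10.0], "concentration_time_min": [15.0, 60.0],
                 "runoff_factor": [0.7, 1.0]}

base_err = abs(run_flood_model(**baseline) - observed_depth)
for name, values in perturbations.items():          # one-at-a-time sensitivity
    for v in values:
        params = dict(baseline, **{name: v})
        err = abs(run_flood_model(**params) - observed_depth)
        print(f"{name:>24} = {v:>6}: |model - obs| changes {base_err:.3f} -> {err:.3f} m")
```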
Peñaloza-Ramos, Maria Cristina; Jowett, Sue; Sutton, Andrew John; McManus, Richard J; Barton, Pelham
2018-03-01
Management of hypertension can lead to significant reductions in blood pressure, thereby reducing the risk of cardiovascular disease. Modeling the course of cardiovascular disease is not without complications, and uncertainty surrounding the structure of a model will almost always arise once a model structure is chosen. To provide a practical illustration of the impact of changing or adapting model structures on the cost-effectiveness results of a previously published cost-utility analysis of a primary care intervention for the management of hypertension, Targets and Self-Management for the Control of Blood Pressure in Stroke and at Risk Groups (TASMIN-SR). The case study assessed the structural uncertainty arising from model structure and from the exclusion of secondary events. Four alternative model structures were implemented. Long-term cost-effectiveness was estimated and the results compared with those from the TASMIN-SR model. The main cost-effectiveness results obtained in the TASMIN-SR study did not change with the implementation of alternative model structures. The choice of model type was limited to a cohort Markov model, and because of the lack of epidemiological data, only model 4 captured structural uncertainty arising from the exclusion of secondary events in the case study model. The results of this study indicate that the main conclusions drawn from the TASMIN-SR model of cost-effectiveness were robust to changes in model structure and the inclusion of secondary events. Even though one of the models produced results that differed from those of TASMIN-SR, the fact that the main conclusions were identical suggests that a more parsimonious model may have sufficed. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
[Risk factors of eating disorders in the narratives of fashion models].
Bogár, Nikolett; Túry, Ferenc
2017-01-01
The risk of eating disorders is high in populations who are exposed to the slimness ideal, such as fashion models. The present qualitative study evaluates the risk factors of eating disorders in a group of fashion models using semistructured interviews. A further aim of the study was to examine the impact of professional requirements on the health of models. The study group was internationally heterogeneous, and the models were recruited through personal professional relationships. A semistructured questionnaire, administered by e-mail, covered anthropometric data and different aspects of the modelling profession. 29 female and three male models, three agents, two designers, three photographers, one personal trainer and one stylist answered the questionnaire. Transient bulimic symptoms were reported by six female models (21%). Moreover, five female models fulfilled the DSM-5 criteria of anorexia nervosa or bulimia nervosa: four were anorexic (body mass index: 13.9-15.3) and one was bulimic. The symptoms of three persons began before the modelling career, those of two models after it. 17 models reported that the modelling profession intensively increased their bodily preoccupations. The study corroborates the effect of the modelling profession on the increased risk of eating disorders. In the case of the models whose eating disorder began after entering the profession, the role of representatives of the fashion industry can be suggested as a form of psychological abuse. As the models, or in the case of minors their parents, accepted the strong requirement of slimness, an unconscious collusion is probable. Our data highlight the health impact of cultural ideals and call attention to prevention strategies.
Si, L; Winzenberg, T M; Palmer, A J
2014-01-01
This review examined the evolution of health economic models used in evaluations of clinical approaches to preventing osteoporotic fractures. Models have improved, with medical continuance becoming increasingly recognized as a contributor to health and economic outcomes, and with advancements in epidemiological data. Model-based health economic evaluation studies are increasingly used to investigate the cost-effectiveness of osteoporotic fracture prevention and treatment. The objective of this study was to carry out a systematic review of the evolution of health economic models used in the evaluation of osteoporotic fracture prevention. Electronic searches within MEDLINE and EMBASE were carried out using a predefined search strategy, and inclusion and exclusion criteria were used to select relevant studies. The reference lists of included studies were searched to identify any potential study not captured by our electronic search. Data on country, interventions, type of fracture prevention, evaluation perspective, type of model, time horizon, fracture sites, expressed costs, types of costs included, and effectiveness measurement were extracted. Seventy-four models were described in 104 publications, of which 69% were European. Earlier models focused mainly on hip, vertebral, and wrist fractures, but later models included multiple fracture sites (humerus, pelvis, tibia, and other fractures). Modeling techniques have evolved from simple decision trees, through deterministic Markov processes, to individual patient simulation models accounting for uncertainty in multiple parameters. Treatment continuance has been increasingly taken into account in the models of the last decade. Models have evolved in their complexity and emphasis, with medical continuance becoming increasingly recognized as a contributor to health and economic outcomes. This evolution may be driven in part by the desire to capture all the important differentiating characteristics of the medications under scrutiny, as well as by the advancement of epidemiological data relevant to osteoporotic fractures.
Nicholson, Daren T; Chalk, Colin; Funnell, W Robert J; Daniel, Sam J
2006-11-01
The use of computer-generated 3-dimensional (3-D) anatomical models to teach anatomy has proliferated. However, there is little evidence that these models are educationally effective. The purpose of this study was to test the educational effectiveness of a computer-generated 3-D model of the middle and inner ear. We reconstructed a fully interactive model of the middle and inner ear from a magnetic resonance imaging scan of a human cadaver ear. To test the model's educational usefulness, we conducted a randomised controlled study in which 28 medical students completed a Web-based tutorial on ear anatomy that included the interactive model, while a control group of 29 students took the tutorial without exposure to the model. At the end of the tutorials, both groups were asked a series of 15 quiz questions to evaluate their knowledge of 3-D relationships within the ear. The intervention group's mean score on the quiz was 83%, while that of the control group was 65%. This difference in means was highly significant (P < 0.001). Our findings stand in contrast to the handful of previous randomised controlled trials that evaluated the effects of computer-generated 3-D anatomical models on learning. The equivocal and negative results of these previous studies may be due to the limitations of these studies (such as small sample size) as well as the limitations of the models that were studied (such as a lack of full interactivity). Given our positive results, we believe that further research is warranted concerning the educational effectiveness of computer-generated anatomical models.
Diaz, Francisco J; Berg, Michel J; Krebill, Ron; Welty, Timothy; Gidal, Barry E; Alloway, Rita; Privitera, Michael
2013-12-01
Due to concern and debate in the epilepsy medical community and to the current interest of the US Food and Drug Administration (FDA) in revising approaches to the approval of generic drugs, the FDA is currently supporting ongoing bioequivalence studies of antiepileptic drugs, the EQUIGEN studies. During the design of these crossover studies, the researchers could not find commercial or non-commercial statistical software that quickly allowed computation of sample sizes for their designs, particularly software implementing the FDA requirement of using random-effects linear models for the analyses of bioequivalence studies. This article presents tables for sample-size evaluations of average bioequivalence studies based on the two crossover designs used in the EQUIGEN studies: the four-period, two-sequence, two-formulation design, and the six-period, three-sequence, three-formulation design. Sample-size computations assume that random-effects linear models are used in bioequivalence analyses with crossover designs. Random-effects linear models have been traditionally viewed by many pharmacologists and clinical researchers as just mathematical devices to analyze repeated-measures data. In contrast, a modern view of these models attributes an important mathematical role in theoretical formulations in personalized medicine to them, because these models not only have parameters that represent average patients, but also have parameters that represent individual patients. Moreover, the notation and language of random-effects linear models have evolved over the years. Thus, another goal of this article is to provide a presentation of the statistical modeling of data from bioequivalence studies that highlights the modern view of these models, with special emphasis on power analyses and sample-size computations.
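The sample-size tables in the paper rest on random-effects analyses of replicate crossover designs; as a simplified, self-contained illustration of the underlying logic, the sketch below estimates TOST power for a standard two-period, two-sequence crossover by simulation. The within-subject CV, true ratio and design are assumptions for illustration and are simpler than the four- and six-period EQUIGEN designs analysed in the article.

```python
import numpy as np
from scipy import stats

def tost_power_2x2(n_per_seq, cv_within=0.25, true_ratio=0.95,
                   limits=(0.80, 1.25), alpha=0.05, n_sim=5000, seed=5):
    """Monte-Carlo power of average bioequivalence (TOST) in a simple 2x2 crossover, log scale."""
    rng = np.random.default_rng(seed)
    sigma_w = np.sqrt(np.log(1 + cv_within**2))    # within-subject SD on the log scale
    n = 2 * n_per_seq                              # total subjects
    t_crit = stats.t.ppf(1 - alpha, n - 1)
    success = 0
    for _ in range(n_sim):
        # Per-subject test-minus-reference differences of log AUC; subject effects cancel,
        # and period effects are ignored in this simplification.
        d = rng.normal(np.log(true_ratio), np.sqrt(2) * sigma_w, n)
        mean_d, se_d = d.mean(), d.std(ddof=1) / np.sqrt(n)
        lo, hi = mean_d - t_crit * se_d, mean_d + t_crit * se_d   # 90% CI of the log ratio
        success += (lo > np.log(limits[0])) and (hi < np.log(limits[1]))
    return success / n_sim

for n_per_seq in (10, 14, 18, 24):
    print(f"n per sequence = {n_per_seq:2d}: estimated power ~ {tost_power_2x2(n_per_seq):.2f}")
```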
Boehler, Christian E H; Lord, Joanne
2016-01-01
Published cost-effectiveness estimates can vary considerably, both within and between countries. Despite extensive discussion, little is known empirically about factors relating to these variations. To use multilevel statistical modeling to integrate cost-effectiveness estimates from published economic evaluations to investigate potential causes of variation. Cost-effectiveness studies of statins for cardiovascular disease prevention were identified by systematic review. Estimates of incremental costs and effects were extracted from reported base case, sensitivity, and subgroup analyses, with estimates grouped in studies and in countries. Three bivariate models were developed: a cross-classified model to accommodate data from multinational studies, a hierarchical model with multinational data allocated to a single category at country level, and a hierarchical model excluding multinational data. Covariates at different levels were drawn from a long list of factors suggested in the literature. We found 67 studies reporting 2094 cost-effectiveness estimates relating to 23 countries (6 studies reporting for more than 1 country). Data and study-level covariates included patient characteristics, intervention and comparator cost, and some study methods (e.g., discount rates and time horizon). After adjusting for these factors, the proportion of variation attributable to countries was negligible in the cross-classified model but moderate in the hierarchical models (14%-19% of total variance). Country-level variables that improved the fit of the hierarchical models included measures of income and health care finance, health care resources, and population risks. Our analysis suggested that variability in published cost-effectiveness estimates is related more to differences in study methods than to differences in national context. Multinational studies were associated with much lower country-level variation than single-country studies. These findings are for a single clinical question and may be atypical. © The Author(s) 2015.
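A much-simplified version of the multilevel idea (cost-effectiveness estimates nested within studies, and studies within countries) can be sketched with a single random intercept using statsmodels; the real analysis in the paper is bivariate (incremental costs and effects jointly) and cross-classified, which this sketch does not attempt. All data below are simulated, and the covariate is a stand-in.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)

# Simulated incremental-cost-per-QALY estimates nested within studies
n_studies, per_study = 30, 20
study = np.repeat(np.arange(n_studies), per_study)
study_effect = rng.normal(0, 4000, n_studies)[study]          # between-study heterogeneity
time_horizon = rng.choice([5, 10, 20, 40], size=study.size)   # illustrative methods covariate
icer = 20000 + study_effect - 150 * time_horizon + rng.normal(0, 3000, study.size)

df = pd.DataFrame({"icer": icer, "study": study, "time_horizon": time_horizon})

# Random-intercept model: how much variation is attributable to the grouping level?
fit = smf.mixedlm("icer ~ time_horizon", df, groups=df["study"]).fit()
between = fit.cov_re.iloc[0, 0]                 # between-study variance
within = fit.scale                              # residual (within-study) variance
print(fit.params)
print(f"share of variance at the study level: {between / (between + within):.2f}")
```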
The Value of SysML Modeling During System Operations: A Case Study
NASA Technical Reports Server (NTRS)
Dutenhoffer, Chelsea; Tirona, Joseph
2013-01-01
System models are often touted as engineering tools that promote better understanding of systems, but these models are typically created during system design. The Ground Data System (GDS) team for the Dawn spacecraft took on a case study to see if benefits could be achieved by starting a model of a system already in operations. This paper focuses on the four steps the team undertook in modeling the Dawn GDS: defining a model structure, populating model elements, verifying that the model represented reality, and using the model to answer system-level questions and simplify day-to-day tasks. Throughout this paper the team outlines our thought processes and the system insights the model provided.
NASA Astrophysics Data System (ADS)
Park, Eun Jung
The nature of matter based upon atomic theory is a principal concept in science; hence, how to teach and how to learn about atoms is an important subject for science education. To this end, this study explored student perceptions of atomic structure and how students learn about this concept by analyzing student mental models of atomic structure. Changes in student mental models serve as a valuable resource for comprehending student conceptual development. Data was collected from students who were taking the introductory chemistry course. Responses to course examinations, pre- and post-questionnaires, and pre- and post-interviews were used to analyze student mental models of atomic structure. First, this study reveals that conceptual development can be achieved, either by elevating mental models toward higher levels of understanding or by developing a single mental model. This study reinforces the importance of higher-order thinking skills to enable students to relate concepts in order to construct a target model of atomic structure. Second, Bohr's orbital structure seems to have had a strong influence on student perceptions of atomic structure. With regard to this finding, this study suggests that it is instructionally important to teach the concept of "orbitals" related to "quantum theory." Third, there were relatively few students who had developed understanding at the level of the target model, which required student understanding of the basic ideas of quantum theory. This study suggests that the understanding of atomic structure based on the idea of quantum theory is both important and difficult. Fourth, this study included different student assessments comprised of course examinations, questionnaires, and interviews. Each assessment can be used to gather information to map out student mental models. Fifth, in the comparison of the pre- and post-interview responses, this study showed that high achieving students moved toward more improved models or to advanced levels of understanding. The analysis of mental models in this study has provided information describing student understanding of the nature and structure of an atom. In addition to an assessment of student cognition, information produced from this study can serve as an important resource for curriculum development, teacher education, and instruction.
White-Means, S I
1995-01-01
There is no consensus on the appropriate conceptualization of race in economic models of health care. This is because race is rarely the primary focus for analysis of the market. This article presents an alternative framework for conceptualizing race in health economic models. A case study is analyzed to illustrate the value of the alternative conceptualization. The case study findings clearly document the importance of model stratification according to race. Moreover, the findings indicate that empirical results are improved when medical utilization models are refined in a way that reflects the unique experiences of the population that is studied. PMID:7721593
DOT National Transportation Integrated Search
2013-03-01
This study developed a mesoscopic model for the before and after study of MD 200, the Inter-County Connector. It is in line with recent efforts by the Maryland State Highway Administration (SHA) in developing effective modeling tools for traffic an...
An Integrated Approach to Mathematical Modeling: A Classroom Study.
ERIC Educational Resources Information Center
Doerr, Helen M.
Modeling, simulation, and discrete mathematics have all been identified by professional mathematics education organizations as important areas for secondary school study. This classroom study focused on the components and tools for modeling and how students use these tools to construct their understanding of contextual problems in the content area…
A Conceptual Model for Leadership Transition
ERIC Educational Resources Information Center
Manderscheid, Steven V.; Ardichvili, Alexandre
2008-01-01
The purpose of this study was to develop a model of leadership transition based on an integrative review of literature. The article establishes a compelling case for focusing on leadership transitions as an area for study and leadership development practitioner intervention. The proposed model in this study identifies important success factors…
A Multivariate Model of Conceptual Change
ERIC Educational Resources Information Center
Taasoobshirazi, Gita; Heddy, Benjamin; Bailey, MarLynn; Farley, John
2016-01-01
The present study used the Cognitive Reconstruction of Knowledge Model (CRKM) model of conceptual change as a framework for developing and testing how key cognitive, motivational, and emotional variables are linked to conceptual change in physics. This study extends an earlier study developed by Taasoobshirazi and Sinatra ("J Res Sci…
1980-01-01
The research project involves building models for 3 selected ESCAP countries, Indonesia, Japan, and the Republic of Korea, which are at different stages of demographic transition. This project involves country-level research work designed, implemented, and monitored with the assistance of ESCAP. Accordingly, the 1st Study Directors' Meeting was held in Bangkok during November 16-30, 1979 as a series of informal interactive working sessions for Study Directors, modelling experts, and resource persons. The participants were Study Directors from the above-mentioned countries and a few experts from Malaysia, Thailand, ILO, UNRISD, and IBRD. The main objective of the meeting was to help finalize the basic model framework so that the National Study Directors would be able to commence their modelling work after the Meeting. As evidenced by the Report of the 1st Study Directors' Meeting, this objective was achieved. Following this meeting, the 3 case studies are being simultaneously undertaken in the 3 countries by national study teams, with technical support provided by ESCAP.
NASA Technical Reports Server (NTRS)
Levison, William H.
1988-01-01
This study explored application of a closed loop pilot/simulator model to the analysis of some simulator fidelity issues. The model was applied to two data bases: (1) a NASA ground based simulation of an air-to-air tracking task in which nonvisual cueing devices were explored, and (2) a ground based and inflight study performed by the Calspan Corporation to explore the effects of simulator delay on attitude tracking performance. The model predicted the major performance trends obtained in both studies. A combined analytical and experimental procedure for exploring simulator fidelity issues is outlined.
ERIC Educational Resources Information Center
Atalay, Özlem; Kahveci, Nihat Gürel
2015-01-01
This experimental study examines the effects of Integrated Curriculum Model (ICM) on 4th grade elementary gifted and talented students' academic achievement, creativity and critical thinking (Control Group N= 10, Experimental Group N= 11) in the social studies classroom context, in Istanbul, Turkey. Integrated Curriculum Model was utilized to…
ERIC Educational Resources Information Center
Gurl, Theresa
2010-01-01
In response to the recent calls for a residency model for field internships in education, a possible model based on an adaptation of Japanese lesson study is described. Lesson study consists of collaboratively planning, implementing, and discussing lessons after the lesson is taught. Results of a study in which student teachers and cooperating…
American Sign Language/English bilingual model: a longitudinal study of academic growth.
Lange, Cheryl M; Lane-Outlaw, Susan; Lange, William E; Sherwood, Dyan L
2013-10-01
This study examines reading and mathematics academic growth of deaf and hard-of-hearing students instructed through an American Sign Language (ASL)/English bilingual model. The study participants were exposed to the model for a minimum of 4 years. The study participants' academic growth rates were measured using the Northwest Evaluation Association's Measure of Academic Progress assessment and compared with a national-normed group of grade-level peers that consisted primarily of hearing students. The study also compared academic growth for participants by various characteristics such as gender, parents' hearing status, and secondary disability status and examined the academic outcomes for students after a minimum of 4 years of instruction in an ASL/English bilingual model. The findings support the efficacy of the ASL/English bilingual model.
Martijn, Carolien; Sheeran, Paschal; Wesseldijk, Laura W; Merrick, Hannah; Webb, Thomas L; Roefs, Anne; Jansen, Anita
2013-04-01
The present research tested whether an evaluative conditioning intervention makes thin-ideal models less enviable as standards for appearance-based social comparisons (Study 1), and increases body satisfaction (Study 2). Female participants were randomly assigned to intervention versus control conditions in both studies (ns = 66 and 39). Intervention participants learned to associate thin-ideal models with synonyms of fake whereas control participants completed an equivalent task that did not involve learning this association. The dependent variable in Study 1 was an implicit measure of idealization of slim models assessed via a modified Implicit Association Test (IAT). Study 2 used a validated, self-report measure of body satisfaction as the outcome variable. Intervention participants showed significantly less implicit idealization of slim models on the IAT compared to controls (Study 1). In Study 2, participants who undertook the intervention exhibited an increase in body satisfaction scores whereas no such increase was observed for control participants. The present research indicates that it is possible to overcome the characteristic impact of thin-ideal models on women's judgments of their bodies. An evaluative conditioning intervention made it less likely that slim models were perceived as targets to be emulated, and enhanced body satisfaction. 2013 APA, all rights reserved
Metler, Samantha J; Busseri, Michael A
2017-04-01
Subjective well-being (SWB; Diener, 1984) comprises three primary components: life satisfaction (LS), positive affect (PA), and negative affect (NA). Multiple competing conceptualizations of the tripartite structure of SWB have been employed, resulting in widespread ambiguity concerning the definition, operationalization, analysis, and synthesis of SWB-related findings (Busseri & Sadava, 2011). We report two studies evaluating two predominant structural models (as recently identified by Busseri, 2015): a hierarchical model comprising a higher-order latent SWB factor with LS, PA, and NA as indicators; and a causal systems model specifying unidirectional effects of PA and NA on LS. A longitudinal study (N = 452; M age = 18.54; 76.5% female) and a lab-based experiment (N = 195; M age = 20.42 years; 87.6% female; 81.5% Caucasian) were undertaken. Structural models were evaluated with respect to (a) associations among SWB components across time (three months, three years in Study 1; one week in Study 2) and (b) the impact of manipulating the individual SWB components (Study 2). A hierarchical structural model was supported in both studies; conflicting evidence was found for the causal systems model. A hierarchical model provides a robust conceptualization for the tripartite structure of SWB. © 2015 Wiley Periodicals, Inc.
Ni, Ai; Cai, Jianwen
2018-07-01
Case-cohort designs are commonly used in large epidemiological studies to reduce the cost associated with covariate measurement. In many such studies the number of covariates is very large, so an efficient variable selection method is needed for case-cohort studies where the covariates are only observed in a subset of the sample. The current literature on this topic has focused on the proportional hazards model. However, in many studies the additive hazards model is preferred over the proportional hazards model, either because the proportional hazards assumption is violated or because the additive hazards model provides more relevant information for the research question. Motivated by one such study, the Atherosclerosis Risk in Communities (ARIC) study, we investigate the properties of a regularized variable selection procedure in a stratified case-cohort design under an additive hazards model with a diverging number of parameters. We establish the consistency and asymptotic normality of the penalized estimator and prove its oracle property. Simulation studies are conducted to assess the finite-sample performance of the proposed method with a modified cross-validation tuning parameter selection method. We apply the variable selection procedure to the ARIC study to demonstrate its practical use.
Shi, Danni; Vine, Donna F
2012-07-01
To review rodent animal models of polycystic ovary syndrome (PCOS), with a focus on those associated with the metabolic syndrome and cardiovascular disease risk factors. Review. Rodent models of PCOS. Description and comparison of animal models. Comparison of animal models to clinical phenotypes of PCOS. Animals used to study PCOS include rodents, mice, rhesus monkeys, and ewes. Major methods to induce PCOS in these models include subcutaneous injection or implantation of androgens, estrogens, antiprogesterone, letrozole, prenatal exposure to excess androgens, and exposure to constant light. In addition, transgenic mice models and spontaneous PCOS-like rodent models have also been developed. Rodents are the most economical and widely used animals to study PCOS and ovarian dysfunction. The model chosen to study the development of PCOS and other metabolic parameters remains dependent on the specific etiologic hypotheses being investigated. Rodent models have been shown to demonstrate changes in insulin metabolism, with or without induction of hyperandrogenemia, and limited studies have investigated cardiometabolic risk factors for type 2 diabetes and cardiovascular disease. Given the clinical heterogeneity of PCOS, the utilization of different animal models may be the best approach to further our understanding of the pathophysiologic mechanisms associated with the early etiology of PCOS and cardiometabolic risk. Copyright © 2012 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
Simplified aeroelastic modeling of horizontal axis wind turbines
NASA Technical Reports Server (NTRS)
Wendell, J. H.
1982-01-01
Certain aspects of the aeroelastic modeling and behavior of the horizontal axis wind turbine (HAWT) are examined. Two simple three degree of freedom models are described in this report, and tools are developed which allow other simple models to be derived. The first simple model developed is an equivalent hinge model to study the flap-lag-torsion aeroelastic stability of an isolated rotor blade. The model includes nonlinear effects, preconing, and noncoincident elastic axis, center of gravity, and aerodynamic center. A stability study is presented which examines the influence of key parameters on aeroelastic stability. Next, two general tools are developed to study the aeroelastic stability and response of a teetering rotor coupled to a flexible tower. The first of these tools is an aeroelastic model of a two-bladed rotor on a general flexible support. The second general tool is a harmonic balance solution method for the resulting second order system with periodic coefficients. The second simple model developed is a rotor-tower model which serves to demonstrate the general tools. This model includes nacelle yawing, nacelle pitching, and rotor teetering. Transient response time histories are calculated and compared to a similar model in the literature. Agreement between the two is very good, especially considering how few harmonics are used. Finally, a stability study is presented which examines the effects of support stiffness and damping, inflow angle, and preconing.
The ambiguity of drought events, a bottleneck for Amazon forest drought response modelling
NASA Astrophysics Data System (ADS)
De Deurwaerder, Hannes; Verbeeck, Hans; Baker, Timothy; Christoffersen, Bradley; Ciais, Philippe; Galbraith, David; Guimberteau, Matthieu; Kruijt, Bart; Langerwisch, Fanny; Meir, Patrick; Rammig, Anja; Thonicke, Kirsten; Von Randow, Celso; Zhang, Ke
2016-04-01
Considering the important role of the Amazon forest in the global water and carbon cycle, the prognosis of altered hydrological patterns resulting from climate change provides a strong incentive for understanding the direct implications of drought for the vegetation of this ecosystem. Dynamic global vegetation models (DGVMs) have the potential to provide a useful tool for studying drought impacts on various spatial and temporal scales. This, however, assumes that the models are able to properly represent drought impact mechanisms. But how well do the models succeed in meeting this assumption? Within this study, meteorological driver data and model output data of 4 different DGVMs, i.e. ORCHIDEE, JULES, INLAND and LPJmL, are studied. Using the Palmer Drought Severity Index (PDSI) and the mean cumulative water deficit (MWD), the temporal and spatial representation of drought events is studied in the driver data and referenced to historical extreme drought events in the Amazon. Subsequently, within the resulting temporal and spatial frame, we study the drought impact on above-ground biomass (AGB) and gross primary production (GPP) fluxes. Flux tower data, field inventory data and the Jung data-driven GPP product for the Amazon region are used for validation. Our findings not only suggest that the current state of the studied DGVMs is inadequate for representing Amazon droughts in general, but also highlight strong inter-model differences in drought responses. Using scatterplot studies and input-output correlations, we provide insight into the origin of these inter-model differences. In addition, we present directions for model development and improvement within the scope of Amazon forest drought response modelling.
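For reference, the cumulative water deficit metric commonly used in Amazon drought studies can be computed from monthly precipitation alone, assuming a fixed evapotranspiration demand of about 100 mm per month; the sketch below follows that convention with made-up monthly rainfall and may differ in detail from the MWD definition used in this study.

```python
import numpy as np

def cumulative_water_deficit(precip_mm, et_mm=100.0):
    """Running water deficit: WD_t = min(0, WD_{t-1} + P_t - ET); resets whenever the balance recovers."""
    wd = np.zeros(len(precip_mm))
    for t, p in enumerate(precip_mm):
        prev = wd[t - 1] if t > 0 else 0.0
        wd[t] = min(0.0, prev + p - et_mm)
    return wd

# Made-up monthly rainfall (mm) for one grid cell over a year with a pronounced dry season
precip = np.array([310, 280, 260, 190, 120, 60, 30, 25, 55, 140, 220, 300])
wd = cumulative_water_deficit(precip)
print("monthly water deficit:", wd.astype(int))
print("maximum cumulative water deficit (most negative value):", wd.min(), "mm")
```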
Yata, Vinod Kumar; Thapa, Arun; Mattaparthi, Venkata Satish Kumar
2015-01-01
Urease (EC 3.5.1.5, urea amidohydrolase) catalyzes the hydrolysis of urea to ammonia and carbon dioxide. Urease is present in greater abundance in plants and plays a significant role in nitrogen recycling from urea. However, little is known about the structure and function of urease from Arabidopsis thaliana, the model system of choice for research in plant biology. In this study, a three-dimensional structural model of A. thaliana urease was constructed using computer-aided molecular modeling techniques. The characteristic structural features of the modeled structure were then studied using atomistic molecular dynamics simulation. The modeled structure was observed to be stable, and the regions between residue indices 50-80 and 500-700 were found to be significantly flexible. From the docking studies, we detected the possible binding interactions of the modeled urease with urea. The Ala399, Ile675, Thr398, and Thr679 residues of A. thaliana urease were observed to be significantly involved in binding the substrate urea. We also compared these docking results with ureases from other sources such as Canavalia ensiformis, Helicobacter pylori, and Bacillus pasteurii. In addition, we carried out mutation analysis to find the most highly mutable amino acid residues of the modeled A. thaliana urease; Met485, Tyr510, Ser786, Val426, and Lys765 were observed to be highly mutable. These results are significant for mutagenesis analysis. As a whole, this study expounds the salient structural features as well as the binding interactions of the modeled structure of A. thaliana urease.
NASA Astrophysics Data System (ADS)
Kumar, Awkash; Patil, Rashmi S.; Dikshit, Anil Kumar; Kumar, Rakesh; Brandt, Jørgen; Hertel, Ole
2016-10-01
The accuracy of the results from an air quality model is governed, in most cases, by the quality of the emission and meteorological data inputs. In the present study, two air quality models were applied for inverse modelling to determine the particulate matter emission strengths of urban and regional sources in and around Mumbai, India. The study starts from an existing emission inventory for Total Suspended Particulate Matter (TSPM). Since the available TSPM inventory is known to be uncertain and incomplete, this study aims to qualify this inventory through an inverse modelling exercise. For use as input to the air quality models, on-site meteorological data were generated using the Weather Research and Forecasting (WRF) model. The regional background concentration from regional sources is transported in the atmosphere from outside the study domain. Regional background concentrations of particulate matter were obtained from model calculations with the Danish Eulerian Hemispheric Model (DEHM) for regional sources, and these background concentrations were then used as boundary concentrations in AERMOD calculations of the contribution from local urban sources. The results from the AERMOD calculations were subsequently compared with observed concentrations, and emission correction factors were obtained by best fit of the model results to the observed concentrations. The study showed that emissions had to be up-scaled by between 14 and 55% in order to fit the observed concentrations, assuming, of course, that the DEHM model describes a background concentration level of the right magnitude.
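As a minimal sketch of the inverse-modelling step described above (illustrative only; the monitor values, the single scale factor and the least-squares formulation are assumptions, not the study's actual procedure), an emission correction factor can be estimated from modelled and observed concentrations:

```python
import numpy as np

# Hypothetical monitor data: observed TSPM, modelled regional background
# (e.g. from a hemispheric model) and modelled local urban contribution
# (e.g. from a local-scale dispersion model), all in ug/m3.
observed   = np.array([182.0, 240.0, 205.0, 160.0])
background = np.array([ 60.0,  75.0,  70.0,  55.0])
local_mod  = np.array([ 80.0, 110.0,  95.0,  70.0])

# Assume observed ~= background + k * local_mod and solve for the
# emission correction factor k by ordinary least squares.
residual = observed - background
k, *_ = np.linalg.lstsq(local_mod[:, None], residual, rcond=None)
print(f"emission correction factor k = {k[0]:.2f}")  # k > 1 implies up-scaling
```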
ERIC Educational Resources Information Center
Ghufron, M. Ali; Saleh, Mursid; Warsono; Sofwan, Ahmad
2016-01-01
This study aimed at designing a model of instructional materials for Academic Writing Course focusing on research paper writing. The model was designed based on the Curriculum at the English Education Study Program, Faculty of Language and Art Education of IKIP PGRI Bojonegoro, East Java, Indonesia. This model was developed in order to improve…
ERIC Educational Resources Information Center
Denton, Holly M.
A study tested a model of organizational variables that earlier research had identified as important in influencing what model(s) of public relations an organization selects. Models of public relations (as outlined by J. Grunig and Hunt in 1984) are defined as either press agentry, public information, two-way asymmetrical, or two-way symmetrical.…
ERIC Educational Resources Information Center
Asanok, Manit; Chookhampaeng, Chowwalit
2016-01-01
The study aims to develop a coaching and mentoring model, study the usage findings of the model, and evaluate the activity management in the model by surveying the opinions of 100 participating teachers under the jurisdiction of the Office of Mah Sarakham Primary Educational Service Area 1, Thailand. The model consisted of 3 steps and 4 phases including…
ERIC Educational Resources Information Center
Xuan, Yue; Zhang, Zhaoyan
2014-01-01
Purpose: The purpose of this study was to explore the possible structural and material property features that may facilitate complete glottal closure in an otherwise isotropic physical vocal fold model. Method: Seven vocal fold models with different structural features were used in this study. An isotropic model was used as the baseline model, and…
Does Aid to Families with Dependent Children Displace Familial Assistance?
1996-07-01
...brief discussion of theoretical models of familial transfers that predict displacement, as well as previous empirical studies that have examined this... summarizes the findings. Models of Familial Transfers and Previous Empirical Studies of Displacement: Theoretical Models. Several models of private... transfer behavior have been posed, including altruism, exchange, and "warm glow." The altruism model (Becker, 1974; Barro, 1974) states, in terms of...
Women's Endorsement of Models of Sexual Response: Correlates and Predictors.
Nowosielski, Krzysztof; Wróbel, Beata; Kowalczyk, Robert
2016-02-01
Few studies have investigated endorsement of female sexual response models, and no single model has been accepted as a normative description of women's sexual response. The aim of the study was to establish how women from a population-based sample endorse current theoretical models of the female sexual response--the linear models and circular model (partial and composite Basson models)--as well as predictors of endorsement. Accordingly, 174 heterosexual women aged 18-55 years were included in a cross-sectional study: 74 women diagnosed with female sexual dysfunction (FSD) based on DSM-5 criteria and 100 non-dysfunctional women. The description of sexual response models was used to divide subjects into four subgroups: linear (Masters-Johnson and Kaplan models), circular (partial Basson model), mixed (linear and circular models in similar proportions, reflective of the composite Basson model), and a different model. Women were asked to choose which of the models best described their pattern of sexual response and how frequently they engaged in each model. Results showed that 28.7% of women endorsed the linear models, 19.5% the partial Basson model, 40.8% the composite Basson model, and 10.9% a different model. Women with FSD endorsed the partial Basson model and a different model more frequently than did non-dysfunctional controls. Individuals who were dissatisfied with a partner as a lover were more likely to endorse a different model. Based on the results, we concluded that the majority of women endorsed a mixed model combining the circular response with the possibility of an innate desire triggering a linear response. Further, relationship difficulties, not FSD, predicted model endorsement.
Chhatwal, Jagpreet; He, Tianhua; Lopez-Olivo, Maria A
2016-06-01
New direct-acting antivirals (DAAs) are highly effective for hepatitis C virus (HCV) treatment. However, their prices have been widely debated. Decision-analytic models can project the long-term value of HCV treatment. Therefore, an understanding of the methods used in these models and how they could influence results is important. Our objective was to describe and systematically review the methodological approaches in published cost-effectiveness models of chronic HCV treatment with DAAs. We searched several electronic databases, including Medline, Embase and EconLit, from 2011 to 2015. Study selection was performed by two reviewers independently. We included any cost-effectiveness analysis comparing DAAs with the old standard of care for HCV treatment. We excluded non-English-language studies and studies not reporting quality-adjusted life-years. One reviewer collected data and assessed the quality of reporting, using the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. Another reviewer crosschecked the abstracted information. The development methods of the included studies were synthesized on the basis of good modelling practice recommendations. Review of 304 citations revealed 36 cost-effectiveness analyses. The reporting quality scores of most articles were rated as acceptable, between 67 and 100%. The majority of the studies were conducted in Europe (50%), followed by the USA (44%). Fifty-six percent of the 36 studies evaluated the cost effectiveness of HCV treatment in both treatment-naive and treatment-experienced patients, 97% included genotype 1 patients and 53% evaluated the cost effectiveness of second-generation or oral DAAs in comparison with the previous standard of care or other DAAs. Twenty-one models defined health states in terms of METAVIR fibrosis scores. Only one study used a discrete-event simulation approach, and the remainder used state-transition models. The time horizons varied; however, 89% of studies used a lifetime horizon. One study was conducted from a societal perspective. Thirty-three percent of studies did not conduct any model validation. We also noted that none of the studies modelled HCV treatment as a prevention strategy, 86% of models did not consider the possibility of re-infection with HCV after successful treatment, 97% of studies did not consider indirect economic benefits resulting from HCV treatment and none of the studies evaluating oral DAAs used real-world data. The search was limited by date (from 1 January 2011 to 8 September 2015) and was also limited to English-language and published reports. Most modelling studies used a similar modelling structure and could have underestimated the value of HCV treatment. Future modelling efforts should consider the benefits of HCV treatment in preventing transmission, extra-hepatic and indirect economic benefits of HCV treatment, real-world cost-effectiveness analysis and the cost effectiveness of HCV treatment in low- and middle-income countries.
A Framework for Text Mining in Scientometric Study: A Case Study in Biomedicine Publications
NASA Astrophysics Data System (ADS)
Silalahi, V. M. M.; Hardiyati, R.; Nadhiroh, I. M.; Handayani, T.; Rahmaida, R.; Amelia, M.
2018-04-01
Data on Indonesian research publications in the domain of biomedicine have been collected and text-mined for the purpose of a scientometric study. The goal is to build a predictive model that classifies research publications by their potential for downstreaming. The model is based on drug development processes adapted from the literature. We describe the effort to build the conceptual model and to develop a corpus of research publications in the domain of Indonesian biomedicine. We then investigate the problems associated with building the corpus and validating the model. Based on our experience, a framework is proposed to manage a scientometric study based on text mining. Our method shows the effectiveness of conducting a scientometric study based on text mining in order to obtain a valid classification model. The validity of the model rests mainly on iterative, close interaction with the domain experts, from identifying the issues and building the conceptual model through to labelling, validation and interpretation of results.
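As an illustrative sketch of the kind of text classification model described above (scikit-learn is used here purely as an example toolchain; the abstracts and labels are hypothetical and the study's actual pipeline may differ):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical expert-labelled abstracts: 1 = potential for downstreaming.
abstracts = [
    "phase II clinical evaluation of a candidate dengue vaccine",
    "molecular docking of plant alkaloids against a viral protease",
    "prevalence survey of anemia among school children",
    "formulation and stability testing of a herbal extract tablet",
]
labels = [1, 1, 0, 1]

# TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(abstracts, labels)
print(model.predict(["in vitro screening of antimalarial compounds"]))
```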
Model hosts for the study of oral candidiasis.
Junqueira, Juliana Campos
2012-01-01
Oral candidiasis is an opportunistic infection caused by yeasts of the genus Candida, primarily Candida albicans. It is generally associated with predisposing factors such as the use of immunosuppressive agents, antibiotics, prostheses, and xerostomia. The development of research in animal models is extremely important for understanding the nature of fungal pathogenicity, host interactions, and the treatment of oral mucosal Candida infections. Many oral candidiasis models in rats and mice have been developed with antibiotic administration, induction of xerostomia, treatment with immunosuppressive agents, or the use of germ-free animals, and all of these models have both benefits and limitations. Over the past decade, invertebrate model hosts, including Galleria mellonella, Caenorhabditis elegans, and Drosophila melanogaster, have been used for the study of Candida pathogenesis. These invertebrate systems offer a number of advantages over mammalian vertebrate models, predominantly because they allow the study of strain collections without the ethical considerations associated with studies in mammals. Thus, the invertebrate models may be useful for understanding the pathogenicity of Candida isolates from the oral cavity, the interactions of oral microorganisms, and the study of new antifungal compounds for oral candidiasis.
Construction and validation of a three-dimensional finite element model of degenerative scoliosis.
Zheng, Jie; Yang, Yonghong; Lou, Shuliang; Zhang, Dongsheng; Liao, Shenghui
2015-12-24
With the aging of the population, the incidence rate of degenerative scoliosis (DS) is increasing. In recent years, increasing research on this topic has been carried out, yet biomechanical research on the subject is seldom seen and an in vitro biomechanical model of DS is nearly unavailable. The objective of this study was to develop and validate a complete three-dimensional finite element model of DS in order to build a digital platform for further biomechanical study. A 55-year-old female DS patient (Suer Pan, ID number P141986) was selected for this study. This study was performed in accordance with the ethical standards of the Declaration of Helsinki and its amendments and was approved by the local ethics committee (117 Hospital of PLA ethics committee). Spiral computed tomography (CT) scanning was conducted on the patient's lumbar spine from T12 to S1. The CT images were then imported into a finite element modeling system. A three-dimensional solid model was then formed from segmentation of the CT scan. The three-dimensional model of each vertebra was then meshed, and material properties were assigned to each element according to the pathological characteristics of DS. Loads and boundary conditions were then applied in such a manner as to simulate in vitro biomechanical experiments conducted on lumbar segments. The results of the model were then compared with experimental results in order to validate the model. An integral three-dimensional finite element model of DS was built successfully, consisting of 113,682 solid elements, 686 cable elements, 33,329 shell elements, 4968 target elements, and 4968 contact elements, totaling 157,635 elements and 197,374 nodes. The model accurately described the physical features of DS and was geometrically similar to the object of study. The results of analysis with the finite element model agreed closely with in vitro experiments, validating the accuracy of the model. The three-dimensional finite element model of DS built in this study is clear, reliable, and effective for further biomechanical simulation studies of DS.
Blood Flow in Idealized Vascular Access for Hemodialysis: A Review of Computational Studies.
Ene-Iordache, Bogdan; Remuzzi, Andrea
2017-09-01
Although our understanding of the failure mechanism of vascular access for hemodialysis has increased substantially, this knowledge has not translated into successful therapies. Despite advances in technology, it is recognized that vascular access is difficult to maintain, due to complications such as intimal hyperplasia. Computational studies have been used to estimate hemodynamic changes induced by vascular access creation. Due to the heterogeneity of patient-specific geometries, and difficulties with obtaining reliable models of access vessels, idealized models were often employed. In this review we analyze the knowledge gained with the use of such simplified computational models. A review of the literature was conducted, considering studies employing a computational fluid dynamics approach to gain insights into the flow field phenotype that develops in idealized models of vascular access. Several important discoveries have originated from idealized model studies, including the detrimental role of disturbed and turbulent flow, and the beneficial role of spiral flow, in intimal hyperplasia. The general flow phenotype was consistent among studies, but findings were not treated homogeneously, since they paralleled achievements in cardiovascular biomechanics spanning the last two decades. Computational studies in idealized models are important for studying local blood flow features and evaluating new concepts that may improve the patency of vascular access for hemodialysis. For future studies we strongly recommend numerical modelling targeted at accurately characterizing turbulent flows and multidirectional wall shear disturbances.
Rönn, Minttu M; Wolf, Emory E; Chesson, Harrell; Menzies, Nicolas A; Galer, Kara; Gorwitz, Rachel; Gift, Thomas; Hsu, Katherine; Salomon, Joshua A
2017-05-01
Mathematical models of chlamydia transmission can help inform disease control policy decisions when direct empirical evaluation of alternatives is impractical. We reviewed published chlamydia models to understand the range of approaches used for policy analyses and how the studies have responded to developments in the field. We performed a literature review by searching Medline and Google Scholar (up to October 2015) to identify publications describing dynamic chlamydia transmission models used to address public health policy questions. We extracted information on modeling methodology, interventions, and key findings. We identified 47 publications (including two model comparison studies), which reported collectively on 29 distinct mathematical models. Nine models were individual-based, and 20 were deterministic compartmental models. The earliest studies evaluated the benefits of national-level screening programs and predicted potentially large benefits from increased screening. Subsequent trials and further modeling analyses suggested the impact might have been overestimated. Partner notification has been increasingly evaluated in mathematical modeling, whereas behavioral interventions have received relatively limited attention. Our review provides an overview of chlamydia transmission models and gives a perspective on how mathematical modeling has responded to increasing empirical evidence and addressed policy questions related to prevention of chlamydia infection and sequelae.
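As a minimal sketch of the deterministic compartmental approach used by many of the reviewed models (the simple SIS structure with a screening rate and all parameter values are illustrative assumptions, not taken from any specific reviewed model):

```python
import numpy as np
from scipy.integrate import odeint

def sis_screening(y, t, beta, recovery, screen_rate):
    """Minimal SIS model with an added screening/treatment rate."""
    s, i = y
    new_inf = beta * s * i                   # transmission
    cleared = (recovery + screen_rate) * i   # natural clearance + screening
    return [-new_inf + cleared, new_inf - cleared]

t = np.linspace(0, 20, 201)          # years
y0 = [0.97, 0.03]                    # susceptible, infected fractions
for screen in (0.0, 0.5):            # compare no screening vs screening
    s, i = odeint(sis_screening, y0, t, args=(2.0, 1.0, screen)).T
    print(f"screening rate {screen}: prevalence after 20 y = {i[-1]:.3f}")
```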
Climate and Integrated Assessment Modeling Studies Grant - Closed Announcement FY 2012
Grant to fund a cooperative agreement to benefit the field of economic and integrated assessment modeling related to climate change through regular collaborations and the development of model comparison studies.
Abolhallaj, Masood; Hosseini, Seyed Mohammadreza; Jafari, Mehdi; Alaei, Fatemeh
2017-01-01
Background: Sukuk is a type of financial instrument backed by balance sheet and physical assets. This applied and descriptive study aimed at providing solutions to the problems faced by insurance companies in the health sector. Methods: In this study, we developed operational models by reviewing the nature and issuance mechanism of each of the securities and combining them. Results: According to the model presented in this study, 2 problems could be solved: settling past debts and avoiding future debts. This model was designed based on asset-backed securities. Conclusion: By utilizing financing instruments (such as Sukuk) and creating investment funds to find a solution to this problem, this study was conducted along 2 lines: (1) models that settle the organization's old debts, and (2) models that prevent debts in the future.
Trojanowicz, Karol; Wójcik, Włodzimierz
2011-01-01
The article presents a case study on the calibration and verification of mathematical models of organic carbon removal kinetics in biofilm. The chosen Harremöes and Wanner & Reichert models were calibrated with a set of model parameters obtained both during dedicated studies conducted at pilot and lab scales for petrochemical wastewater conditions and from the literature. Next, the models were successfully verified through studies carried out utilizing a pilot ASFBBR-type bioreactor installed in an oil-refinery wastewater treatment plant. During verification, the pilot biofilm reactor worked under varying surface organic loading rates (SOL), dissolved oxygen concentrations and temperatures. The verification proved that the models can be applied in practice to petrochemical wastewater treatment engineering, e.g. for biofilm bioreactor dimensioning.
NASA Technical Reports Server (NTRS)
Hogan, John; Kang, Sukwon; Cavazzoni, Jim; Levri, Julie; Finn, Cory; Luna, Bernadette (Technical Monitor)
2000-01-01
The objective of this study is to compare incineration and composting in a Mars-based advanced life support (ALS) system. The variables explored include waste pre-processing requirements, reactor sizing and buffer capacities. The study incorporates detailed mathematical models of biomass production and waste processing into an existing dynamic ALS system model. The ALS system and incineration models (written in MATLAB/SIMULINK(c)) were developed at the NASA Ames Research Center. The composting process is modeled using first-order kinetics, with different degradation rates for individual waste components (carbohydrates, proteins, fats, cellulose and lignin). The biomass waste streams are generated using modified "Energy Cascade" crop models, which use light- and dark-cycle temperatures, irradiance, photoperiod, [CO2], planting density, and relative humidity as model inputs. The study also includes an evaluation of equivalent system mass (ESM).
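A minimal sketch of the component-wise first-order composting kinetics described above (the rate constants and initial masses below are hypothetical, not values used in the study):

```python
import numpy as np

# Hypothetical first-order degradation rate constants (1/day) for the
# waste fractions named in the study; values are illustrative only.
k = {"carbohydrates": 0.30, "proteins": 0.15, "fats": 0.10,
     "cellulose": 0.05, "lignin": 0.005}
m0 = {"carbohydrates": 2.0, "proteins": 1.0, "fats": 0.5,
      "cellulose": 1.5, "lignin": 0.8}   # initial dry mass, kg

def remaining_mass(days):
    """Total mass left after first-order decay of each component."""
    return sum(m0[c] * np.exp(-k[c] * days) for c in k)

for d in (0, 10, 30, 90):
    print(f"day {d:3d}: {remaining_mass(d):.2f} kg remaining")
```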
Karst medium characterization and simulation of groundwater flow in Lijiang Rivershed, China
NASA Astrophysics Data System (ADS)
Hu, B. X.
2015-12-01
It is important to study water and carbon cycle processes for water resource management, pollution prevention and the assessment of global warming influence on the southwest karst region of China. The Lijiang river basin is selected as our study region. Interdisciplinary field and laboratory experiments with various technologies are conducted to characterize the karst aquifers in detail. Key processes in the karst water cycle and carbon cycle are determined. Based on the MODFLOW-CFP model, new watershed flow and carbon cycle models are developed that couple subsurface and surface water flow models, as well as flow and chemical/biological models. Our study is focused on the karst springshed in Mao village. The mechanisms coupling the carbon cycle and water cycle are explored. Parallel computing technology is used to construct the numerical model for the carbon cycle and water cycle in the small-scale watershed, which is calibrated and verified by field observations. The developed coupling model for the small-scale watershed is extended to a large-scale watershed, considering the scale effect of model parameters and proper simplification of the model structure. The large-scale watershed model is used to study the water cycle and carbon cycle in the Lijiang rivershed, and to calculate the carbon flux and carbon sinks in the Lijiang river basin. The study results provide scientific methods for water resources management and environmental protection in the southwest karst region in response to global climate change. This study could also provide basic theory and simulation methods for geological carbon sequestration in China's karst region.
Simulation of groundwater flow and evaluation of carbon sink in Lijiang Rivershed, China
NASA Astrophysics Data System (ADS)
Hu, Bill X.; Cao, Jianhua; Tong, Juxiu; Gao, Bing
2016-04-01
It is important to study water and carbon cycle processes for water resource management, pollution prevention and the assessment of global warming influence on the southwest karst region of China. The Lijiang river basin is selected as our study region. Interdisciplinary field and laboratory experiments with various technologies are conducted to characterize the karst aquifers in detail. Key processes in the karst water cycle and carbon cycle are determined. Based on the MODFLOW-CFP model, new watershed flow and carbon cycle models are developed that couple subsurface and surface water flow models, as well as flow and chemical/biological models. Our study is focused on the karst springshed in Mao village. The mechanisms coupling the carbon cycle and water cycle are explored. Parallel computing technology is used to construct the numerical model for the carbon cycle and water cycle in the small-scale watershed, which is calibrated and verified by field observations. The developed coupling model for the small-scale watershed is extended to a large-scale watershed, considering the scale effect of model parameters and proper simplification of the model structure. The large-scale watershed model is used to study the water cycle and carbon cycle in the Lijiang rivershed, and to calculate the carbon flux and carbon sinks in the Lijiang river basin. The study results provide scientific methods for water resources management and environmental protection in the southwest karst region in response to global climate change. This study could also provide basic theory and simulation methods for geological carbon sequestration in China's karst region.
Finite element modeling of a 3D coupled foot-boot model.
Qiu, Tian-Xia; Teo, Ee-Chon; Yan, Ya-Bo; Lei, Wei
2011-12-01
Increasingly, musculoskeletal models of the human body are used as powerful tools to study biological structures. The lower limb, and in particular the foot, is of interest because it is the primary physical interaction between the body and the environment during locomotion. The goal of this paper is to adopt finite element (FE) modeling and analysis approaches to create a state-of-the-art 3D coupled foot-boot model for future studies on the biomechanical investigation of stress injury mechanisms, footwear design and parachute landing fall simulation. In the modeling process, the foot-ankle model with the lower leg was developed based on Computed Tomography (CT) images using ScanIP, Surfacer and ANSYS. The boot was then represented by assembling the FE models of the upper, insole, midsole and outsole, built based on the FE model of the foot-ankle, and finally the coupled foot-boot model was generated by putting together the models of the lower limb and boot. In this study, the FE model of the foot and ankle was validated during balanced standing. There was good agreement in the overall patterns of predicted and measured plantar pressure distribution published in the literature. The coupled foot-boot model will be fully validated in subsequent work under both static and dynamic loading conditions for further studies on injury investigation in the military and sports, footwear design and the characteristics of parachute landing impact in the military. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.
Parametric modeling studies of turbulent non-premixed jet flames with thin reaction zones
NASA Astrophysics Data System (ADS)
Wang, Haifeng
2013-11-01
The Sydney piloted jet flame series (Flames L, B, and M) features thinner reaction zones and hence imposes greater challenges to modeling than the Sandia piloted jet flames (Flames D, E, and F). Recently, the Sydney flames have received renewed interest due to these challenges, and several new modeling efforts have emerged. However, no systematic parametric modeling studies have been reported for the Sydney flames. A large set of modeling computations of the Sydney flames is presented here using the coupled large eddy simulation (LES)/probability density function (PDF) method. Parametric studies are performed to gain insight into the model performance, its sensitivity, and the effect of numerics.
Predicting motor vehicle collisions using Bayesian neural network models: an empirical analysis.
Xie, Yuanchang; Lord, Dominique; Zhang, Yunlong
2007-09-01
Statistical models have frequently been used in highway safety studies. They can be utilized for various purposes, including establishing relationships between variables, screening covariates and predicting values. Generalized linear models (GLM) and hierarchical Bayes models (HBM) have been the most common types of model favored by transportation safety analysts. Over the last few years, researchers have proposed the back-propagation neural network (BPNN) model for modeling the phenomenon under study. Compared to GLMs and HBMs, BPNNs have received much less attention in highway safety modeling. The reasons are attributed to the complexity of estimating this kind of model as well as the problem of "over-fitting" the data. To circumvent the latter problem, some statisticians have proposed the use of Bayesian neural network (BNN) models. These models have been shown to perform better than BPNN models while at the same time reducing the difficulty associated with over-fitting the data. The objective of this study is to evaluate the application of BNN models for predicting motor vehicle crashes. To accomplish this objective, a series of models was estimated using data collected on rural frontage roads in Texas. Three types of models were compared: BPNN, BNN and the negative binomial (NB) regression models. The results of this study show that in general both types of neural network models perform better than the NB regression model in terms of data prediction. Although the BPNN model can occasionally provide better or approximately equivalent prediction performance compared to the BNN model, in most cases its prediction performance is worse than that of the BNN model. In addition, the data fitting performance of the BPNN model is consistently worse than that of the BNN model, which suggests that the BNN model has better generalization abilities than the BPNN model and can effectively alleviate the over-fitting problem without significantly compromising the nonlinear approximation ability. The results also show that BNNs could be used for other useful analyses in highway safety, including the development of accident modification factors and the improvement of prediction capabilities for evaluating different highway design alternatives.
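For reference, a negative binomial count model of the kind used as the benchmark in this study can be fitted as sketched below (statsmodels is used as an illustrative tool; the synthetic data, covariates and dispersion parameter are assumptions, not the Texas frontage-road data):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
aadt = rng.uniform(500, 8000, n)            # traffic volume (veh/day)
length = rng.uniform(0.2, 3.0, n)           # segment length (miles)
mu = np.exp(-6.0 + 0.0004 * aadt + 0.5 * length)
crashes = rng.poisson(mu)                   # synthetic crash counts

# Negative binomial GLM with a log link (alpha is an assumed dispersion value).
X = sm.add_constant(np.column_stack([aadt, length]))
nb = sm.GLM(crashes, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(nb.params)                             # fitted coefficients
```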
NASA Astrophysics Data System (ADS)
Skowronek, Sandra; Van De Kerchove, Ruben; Rombouts, Bjorn; Aerts, Raf; Ewald, Michael; Warrie, Jens; Schiefer, Felix; Garzon-Lopez, Carol; Hattab, Tarek; Honnay, Olivier; Lenoir, Jonathan; Rocchini, Duccio; Schmidtlein, Sebastian; Somers, Ben; Feilhauer, Hannes
2018-06-01
Remote sensing is a promising tool for detecting invasive alien plant species. Mapping and monitoring those species requires accurate detection. So far, most studies relied on models that are locally calibrated and validated against available field data. Consequently, detecting invasive alien species at new study areas requires the acquisition of additional field data which can be expensive and time-consuming. Model transfer might thus provide a viable alternative. Here, we mapped the distribution of the invasive alien bryophyte Campylopus introflexus to i) assess the feasibility of spatially transferring locally calibrated models for species detection between four different heathland areas in Germany and Belgium and ii) test the potential of combining calibration data from different sites in one species distribution model (SDM). In a first step, four different SDMs were locally calibrated and validated by combining field data and airborne imaging spectroscopy data with a spatial resolution ranging from 1.8 m to 4 m and a spectral resolution of about 10 nm (244 bands). A one-class classifier, Maxent, which is based on the comparison of probability densities, was used to generate all SDMs. In a second step, each model was transferred to the three other study areas and the performance of the models for predicting C. introflexus occurrences was assessed. Finally, models combining calibration data from three study areas were built and tested on the remaining fourth site. In this step, different combinations of Maxent modelling parameters were tested. For the local models, the area under the curve for a test dataset (test AUC) was between 0.57-0.78, while the test AUC for the single transfer models ranged between 0.45-0.89. For the combined models the test AUC was between 0.54-0.9. The success of transferring models calibrated in one site to another site highly depended on the respective study site; the combined models provided higher test AUC values than the locally calibrated models for three out of four study sites. Furthermore, we also demonstrated the importance of optimizing the Maxent modelling parameters. Overall, our results indicate the potential of a combined model to map C. introflexus without the need for new calibration data.
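A minimal sketch of how a transferred model's test AUC can be computed (the suitability scores and presence/absence labels are hypothetical; the study itself used Maxent outputs rather than this toy example):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical outputs: habitat-suitability scores predicted for site B by a
# model calibrated on site A, plus field presence/absence observed at site B.
suitability_siteB = np.array([0.82, 0.10, 0.55, 0.33, 0.71, 0.15, 0.64, 0.28])
presence_siteB    = np.array([1,    0,    1,    0,    1,    0,    0,    0])

# Test AUC of the transferred model at the new site.
print("transfer test AUC:", roc_auc_score(presence_siteB, suitability_siteB))
```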
NASA Astrophysics Data System (ADS)
Singh, V. K.; Jha, A. K.; Gupta, K.; Srivastav, S. K.
2017-12-01
Recent studies indicate that modeling at finer spatial resolutions significantly improves the representation of urban land use dynamics. Geo-computational models such as cellular automata and agent-based models have provided clear evidence for quantifying the urban growth pattern within the urban boundary. In recent studies, socio-economic factors such as demography, education rate, household density, current-year parcel price, and distance to roads, schools, hospitals, commercial centers and police stations are considered the major factors influencing the Land Use Land Cover (LULC) pattern of a city. These factors take a unidirectional approach to the land use pattern, which makes it difficult to analyze the spatial aspects of model results both quantitatively and qualitatively. In this study, a cellular automata model is combined with a generic agent-based model to evaluate the impact of socio-economic factors on the land use pattern. For this purpose, Dehradun, an Indian city, is selected as a case study. Socio-economic factors were collected from a field survey, the Census of India, and the Directorate of Economic Census, Uttarakhand, India. A 3x3 simulation window is used to consider the impact on LULC. The cellular automata model results are examined to identify hot-spot areas within the urban area, while the agent-based model uses a logistic regression approach to identify the correlation between each factor and LULC and to classify the available area into low-density, medium-density and high-density residential or commercial areas. In the modeling phase, transition rules, neighborhood effects and cell change factors are used to improve the representation of built-up classes. A significant improvement in the built-up classes is observed, from 84% to 89%; after incorporating the agent-based model with the cellular automata model, the accuracy further improved from 89% to 94% in three urban classes, i.e. low density, medium density and commercial. A sensitivity study of the model indicated that the southern and south-western parts of the city show improvement, and small patches of growth are also observed in the north-western part of the city. The study highlights the growing importance of socio-economic factors and the geo-computational modeling approach for the changing LULC of newly growing cities of modern India.
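A minimal sketch of a single cellular-automaton transition step with a 3x3 neighbourhood window (the transition rule, thresholds and random inputs are illustrative assumptions, not the calibrated Dehradun model):

```python
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(1)
builtup = (rng.random((50, 50)) < 0.15).astype(int)   # 1 = built-up cell
suitability = rng.random((50, 50))                     # proxy for socio-economic drivers

kernel = np.ones((3, 3)); kernel[1, 1] = 0             # 3x3 Moore neighbourhood

def ca_step(grid, suit, n_required=3, threshold=0.6):
    """One illustrative transition: a cell converts to built-up when enough
    neighbours are built-up and its suitability exceeds a threshold."""
    neighbours = convolve(grid, kernel, mode="constant", cval=0)
    convert = (grid == 0) & (neighbours >= n_required) & (suit > threshold)
    return np.where(convert, 1, grid)

builtup = ca_step(builtup, suitability)
print("built-up cells after one step:", int(builtup.sum()))
```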
Bhuia, Mohammad Romel; Nwaru, Bright I; Weir, Christopher J; Sheikh, Aziz
2017-05-17
Models that have so far been used to estimate and project the prevalence and disease burden of asthma are in most cases inadequately described and irreproducible. We aim systematically to describe and critique the existing models in relation to their strengths, limitations and reproducibility, and to determine the appropriate models for estimating and projecting the prevalence and disease burden of asthma. We will search the following electronic databases to identify relevant literature published from 1980 to 2017: Medline, Embase, WHO Library and Information Services and Web of Science Core Collection. We will identify additional studies by searching the reference lists of all the retrieved papers and contacting experts. We will include observational studies that used models for estimating and/or projecting the prevalence and disease burden of asthma in human populations of any age and sex. Two independent reviewers will assess the studies for inclusion and extract data from included papers. Data items will include authors' names, publication year, study aims, data source and time period, study population, asthma outcomes, study methodology, model type, model settings, study variables, methods of model derivation, methods of parameter estimation and/or projection, model fit information, key findings and identified research gaps. A detailed critical narrative synthesis of the models will be undertaken in relation to their strengths, limitations and reproducibility. A quality assessment checklist and scoring framework will be used to determine the appropriate models for estimating and projecting the prevalence and disease burden of asthma. We will not collect any primary data for this review, and hence there is no need for formal National Health Services Research Ethics Committee approval. We will present our findings at scientific conferences and publish them in a peer-reviewed scientific journal. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
NASA Technical Reports Server (NTRS)
Imel, G.
1977-01-01
The current models of mid-latitude F_s are studied. The assumptions and derivations of the Reid model, the Scannapieco model, and the Perkins model are presented in detail. Incoherent-scatter data of the density profiles and velocity profiles were obtained so that the models could be evaluated on the basis of experimental data. Initial studies indicated that the Perkins model was most representative of the data from Arecibo, so a detailed comparison of the predictions of the Perkins model and the data was made. Two of the four nights studied are nights with F_s. The Perkins model is derived in a frame of reference moving with the velocity of the neutral wind; the model is transformed to the rest frame to facilitate comparison with data. Several data handling techniques are introduced. In particular, an integration interval that remains constant in length, but follows the vertical motion of the peak of the F layer, is used to obtain the field-integrated quantities of the Perkins model.
Employing a Modified Diffuser Momentum Model to Simulate Ventilation of the Orion CEV
NASA Technical Reports Server (NTRS)
Straus, John; Lewis, John F.
2011-01-01
The Ansys CFX CFD modeling tool was used to support the design efforts of the ventilation system for the Orion CEV. CFD modeling was used to establish the flow field within the cabin for several supply configurations. A mesh and turbulence model sensitivity study was performed before the design studies. Results were post-processed for comparison with performance requirements. Most configurations employed straight vaned diffusers to direct and throw the flow. To manage the size of the models, the diffuser vanes were not resolved. Instead, a momentum model was employed to account for the effect of the diffusers. The momentum model was tested against a separate, vane-resolved side study. Results are presented for a single diffuser configuration for a low supply flow case.
Modeling conflict: research methods, quantitative modeling, and lessons learned.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rexroth, Paul E.; Malczynski, Leonard A.; Hendrickson, Gerald A.
2004-09-01
This study investigates the factors that lead countries into conflict. Specifically, political, social and economic factors may offer insight as to how prone a country (or set of countries) may be for inter-country or intra-country conflict. Largely methodological in scope, this study examines the literature for quantitative models that address or attempt to model conflict both in the past, and for future insight. The analysis concentrates specifically on the system dynamics paradigm, not the political science mainstream approaches of econometrics and game theory. The application of this paradigm builds upon the most sophisticated attempt at modeling conflict as a result of system level interactions. This study presents the modeling efforts built on limited data and working literature paradigms, and recommendations for future attempts at modeling conflict.
A sensitivity analysis of regional and small watershed hydrologic models
NASA Technical Reports Server (NTRS)
Ambaruch, R.; Salomonson, V. V.; Simmons, J. W.
1975-01-01
Continuous simulation models of the hydrologic behavior of watersheds are important tools in several practical applications such as hydroelectric power planning, navigation, and flood control. Several recent studies have addressed the feasibility of using remote earth observations as sources of input data for hydrologic models. The objective of the study reported here was to determine how accurate remotely sensed measurements must be to provide inputs to hydrologic models of watersheds, within the tolerances needed for acceptably accurate synthesis of streamflow by the models. The study objective was achieved by performing a series of sensitivity analyses using continuous simulation models of three watersheds. The sensitivity analysis showed quantitatively how variations in each of 46 model inputs and parameters affect simulation accuracy with respect to five different performance indices.
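A minimal sketch of the one-at-a-time perturbation idea behind such a sensitivity analysis (the stand-in model, its three inputs and the +10% perturbation size are assumptions; the actual study perturbed 46 inputs of continuous simulation models):

```python
def streamflow_model(params):
    """Stand-in for a watershed model; returns a scalar output proxy."""
    return params["precip"] * params["runoff_coeff"] - params["infiltration"]

baseline = {"precip": 100.0, "runoff_coeff": 0.4, "infiltration": 12.0}
base_out = streamflow_model(baseline)

# One-at-a-time sensitivity: perturb each input by +10% and record the
# relative change in the output.
for name, value in baseline.items():
    perturbed = dict(baseline, **{name: value * 1.10})
    rel_change = (streamflow_model(perturbed) - base_out) / base_out
    print(f"{name:14s}: {100 * rel_change:+.1f}% output change per +10% input")
```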
Zebrafish (Danio rerio): A Potential Model for Toxinological Studies.
Vargas, Rafael Antonio; Sarmiento, Karen; Vásquez, Isabel Cristina
2015-10-01
Zebrafish are an emerging basic biomedical research model with multiple advantages compared with other research models. Given that biotoxins, such as toxins, poisons, and venoms, represent health hazards to animals and humans, a low-cost biological model that is highly sensitive to biotoxins is useful for understanding the damage caused by such agents and for developing biological tests to prevent and reduce the risk of poisoning in potential cases of bioterrorism or food contamination. This article presents a narrative review of the general aspects of zebrafish as a model in basic biomedical research and of various studies in the field of toxinology that have used zebrafish as a biological model. This information will provide useful material to beginning students and researchers who are interested in developing toxinological studies with the zebrafish model.
An Empirical Study of Enterprise Conceptual Modeling
NASA Astrophysics Data System (ADS)
Anaby-Tavor, Ateret; Amid, David; Fisher, Amit; Ossher, Harold; Bellamy, Rachel; Callery, Matthew; Desmond, Michael; Krasikov, Sophia; Roth, Tova; Simmonds, Ian; de Vries, Jacqueline
Business analysts, business architects, and solution consultants use a variety of practices and methods in their quest to understand business. The resulting work products could end up being transitioned into the formal world of software requirement definitions or as recommendations for all kinds of business activities. We describe an empirical study about the nature of these methods, diagrams, and home-grown conceptual models as reflected in real practice at IBM. We identify the models as artifacts of "enterprise conceptual modeling". We study important features of these models, suggest practical classifications, and discuss their usage. Our survey shows that the "enterprise conceptual modeling" arena presents a variety of descriptive models, each used by a relatively small group of colleagues. Together they form a "long tail" that extends from "drawings" on one end to "standards" on the other.
Modeling the mechanical properties of liver fibrosis in rats.
Zhu, Ying; Chen, Xin; Zhang, Xinyu; Chen, Siping; Shen, Yuanyuan; Song, Liang
2016-06-14
The progression of liver fibrosis changes the biomechanical properties of liver tissue. This study characterized and compared different liver fibrosis stages in rats in terms of viscoelasticity. Three viscoelastic models, the Voigt, Maxwell, and Zener models, were applied to experimental data from rheometer tests and then the elasticity and viscosity were estimated for each fibrosis stage. The study found that both elasticity and viscosity are correlated with the various stages of liver fibrosis. The study revealed that the Zener model is the optimal model for describing the mechanical properties of each fibrosis stage, but there is no significant difference between the Zener and Voigt models in their performance on liver fibrosis staging. Therefore the Voigt model can still be effectively used for liver fibrosis grading. Copyright © 2016 Elsevier Ltd. All rights reserved.
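For reference, the three viscoelastic models named above are commonly written with the following one-dimensional constitutive relations (a standard parameterization; the paper's exact formulation and parameter names may differ):

```latex
% Common one-dimensional constitutive relations (E: elasticity, \eta: viscosity).
\begin{align*}
  \text{Voigt (Kelvin--Voigt):}\quad
    & \sigma = E\,\varepsilon + \eta\,\dot{\varepsilon} \\
  \text{Maxwell:}\quad
    & \dot{\varepsilon} = \frac{\dot{\sigma}}{E} + \frac{\sigma}{\eta} \\
  \text{Zener (standard linear solid):}\quad
    & \sigma + \frac{\eta}{E_2}\,\dot{\sigma}
      = E_1\,\varepsilon + \eta\,\frac{E_1 + E_2}{E_2}\,\dot{\varepsilon}
\end{align*}
```

The Zener form reduces to the Voigt form as the series spring stiffness E_2 grows large, which is consistent with the reported similarity of the two models for staging purposes.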
NASA Astrophysics Data System (ADS)
Kisi, Ozgur; Kilic, Yasin
2016-11-01
The generalization ability of artificial neural networks (ANNs) and the M5 model tree (M5Tree) in modeling reference evapotranspiration (ET0) is investigated in this study. Daily climatic data (average temperature, solar radiation, wind speed, and relative humidity) from six different stations operated by the California Irrigation Management Information System (CIMIS), located in two different regions of the USA, were used in the applications. The King-City Oasis Rd., Arroyo Seco, and Salinas North stations are located in the San Joaquin region, and the San Luis Obispo, Santa Monica, and Santa Barbara stations are located in the Southern region. In the first part of the study, the ANN and M5Tree models were used for estimating ET0 at the six stations and the results were compared with empirical methods; the ANN and M5Tree models were found to be better than the empirical models. In the second part of the study, the ANN and M5Tree models obtained from one station were tested using the data from the other two stations of each region. The ANN models performed better than the CIMIS Penman, Hargreaves, Ritchie, and Turc models at two stations, while the M5Tree models generally showed better accuracy than the corresponding empirical models at all stations. In the third part of the study, the ANN and M5Tree models were calibrated using the three stations located in the San Joaquin region and tested using the data from the other three stations located in the Southern region. The four-input ANN and M5Tree models performed better than the CIMIS Penman model at only one station, while the two-input ANN models were found to be better than the Hargreaves, Ritchie, and Turc models at two stations.
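As an illustration of one of the empirical methods compared above, the Hargreaves (Hargreaves-Samani) equation for ET0 can be sketched as below (the example values are hypothetical, and extraterrestrial radiation is supplied directly rather than computed from latitude and day of year):

```python
def hargreaves_et0(t_mean, t_max, t_min, ra_mm_day):
    """Hargreaves-Samani reference evapotranspiration (mm/day).

    ra_mm_day is extraterrestrial radiation expressed as equivalent
    evaporation (mm/day); its computation is omitted here for brevity.
    """
    return 0.0023 * ra_mm_day * (t_mean + 17.8) * (t_max - t_min) ** 0.5

# Example for a warm summer day (values illustrative only).
print(f"ET0 = {hargreaves_et0(24.0, 31.0, 17.0, 16.5):.2f} mm/day")
```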
Boerebach, Benjamin C. M.; Lombarts, Kiki M. J. M. H.; Scherpbier, Albert J. J.; Arah, Onyebuchi A.
2013-01-01
Background In fledgling areas of research, evidence supporting causal assumptions is often scarce due to the small number of empirical studies conducted. In many studies it remains unclear what impact explicit and implicit causal assumptions have on the research findings; only the primary assumptions of the researchers are often presented. This is particularly true for research on the effect of faculty’s teaching performance on their role modeling. Therefore, there is a need for robust frameworks and methods for transparent formal presentation of the underlying causal assumptions used in assessing the causal effects of teaching performance on role modeling. This study explores the effects of different (plausible) causal assumptions on research outcomes. Methods This study revisits a previously published study about the influence of faculty’s teaching performance on their role modeling (as teacher-supervisor, physician and person). We drew eight directed acyclic graphs (DAGs) to visually represent different plausible causal relationships between the variables under study. These DAGs were subsequently translated into corresponding statistical models, and regression analyses were performed to estimate the associations between teaching performance and role modeling. Results The different causal models were compatible with major differences in the magnitude of the relationship between faculty’s teaching performance and their role modeling. Odds ratios for the associations between teaching performance and the three role model types ranged from 31.1 to 73.6 for the teacher-supervisor role, from 3.7 to 15.5 for the physician role, and from 2.8 to 13.8 for the person role. Conclusions Different sets of assumptions about causal relationships in role modeling research can be visually depicted using DAGs, which are then used to guide both statistical analysis and interpretation of results. Since study conclusions can be sensitive to different causal assumptions, results should be interpreted in the light of causal assumptions made in each study. PMID:23936020
Harman, Rebecca M.; Bussche, Leen; Ledbetter, Eric C.
2014-01-01
ABSTRACT Despite the clinical importance of herpes simplex virus (HSV)-induced ocular disease, the underlying pathophysiology of the disease remains poorly understood, in part due to the lack of adequate virus–natural-host models in which to study the cellular and viral factors involved in acute corneal infection. We developed an air-liquid canine corneal organ culture model and evaluated its susceptibility to canine herpesvirus type 1 (CHV-1) in order to study ocular herpes in a physiologically relevant natural host model. Canine corneas were maintained in culture at an air-liquid interface for up to 25 days, and no degenerative changes were observed in the corneal epithelium during cultivation using histology for morphometric analyses, terminal deoxynucleotidyltransferase-mediated dUTP-biotin nick end labeling (TUNEL) assays, and transmission electron microscopy (TEM). Next, canine corneas were inoculated with CHV-1 for 48 h, and at that time point postinfection, viral plaques could be visualized in the corneal epithelium and viral DNA copies were detected in both the infected corneas and culture supernatants. In addition, we found that canine corneas produced proinflammatory cytokines in response to CHV-1 infection similarly to what has been described for HSV-1. This emphasizes the value of our model as a virus–natural-host model to study ocular herpesvirus infections. IMPORTANCE This study is the first to describe the establishment of an air-liquid canine corneal organ culture model as a useful model to study ocular herpesvirus infections. The advantages of this physiologically relevant model include the fact that (i) it provides a system in which ocular herpes can be studied in a virus–natural-host setting and (ii) it reduces the number of experimental animals needed. In addition, this long-term explant culture model may also facilitate research in other fields where noninfectious and infectious ocular diseases of dogs and humans are being studied. PMID:25231295
Dealing with dissatisfaction in mathematical modelling to integrate QFD and Kano’s model
NASA Astrophysics Data System (ADS)
Retno Sari Dewi, Dian; Debora, Joana; Edy Sianto, Martinus
2017-12-01
The purpose of the study is to implement the integration of Quality Function Deployment (QFD) and Kano’s Model in a mathematical model. Voice-of-customer data for the QFD were collected using a questionnaire, which was developed based on Kano’s model. An operational research methodology was then applied to build the objective function and constraints of the mathematical model. The relationship between the voice of customer and the engineering characteristics was modelled using a linear regression model. The output of the mathematical model is the detail of the engineering characteristics. The objective function of this model is to maximize satisfaction while also minimizing dissatisfaction. The result of this model is 62%. The major contribution of this research is to implement the existing mathematical model integrating QFD and Kano’s Model in a case study of a shoe cabinet.
Chest compressions in newborn animal models: A review.
Solevåg, Anne Lee; Cheung, Po-Yin; Lie, Helene; O'Reilly, Megan; Aziz, Khalid; Nakstad, Britt; Schmölzer, Georg Marcus
2015-11-01
Much of the knowledge about the optimal way to perform chest compressions (CC) in newborn infants is derived from animal studies. The objective of this review was to identify studies of CC in newborn term animal models and review the evidence; we also provide an overview of the different models. Data sources: MEDLINE, EMBASE and CINAHL, until September 29th, 2014. Study eligibility criteria and interventions: term newborn animal models in which CC was performed. Based on 419 studies retrieved from MEDLINE and 502 from EMBASE, 28 studies were included; no additional studies were identified in CINAHL. Most of the studies were performed in pigs after the perinatal transition, without long-term follow-up. The models differed widely in methodological aspects, which limits the possibility of comparing and synthesizing findings. Studies uncommonly reported the method of randomization and allocation concealment, and a limited number were blinded. Only the evidence in favour of the two-thumb encircling hands technique for performing CC, a CC-to-ventilation ratio of 3:1, and the use of air for ventilation during CC was supported by more than one study. Animal studies should be performed and reported with the same rigor as human randomized trials. Good transitional and survival models are needed to further increase the strength of the evidence derived from animal studies of newborn chest compressions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Developing and validating risk prediction models in an individual participant data meta-analysis
2014-01-01
Background Risk prediction models estimate the risk of developing future outcomes for individuals based on one or more underlying characteristics (predictors). We review how researchers develop and validate risk prediction models within an individual participant data (IPD) meta-analysis, in order to assess the feasibility and conduct of the approach. Methods A qualitative review of the aims, methodology, and reporting in 15 articles that developed a risk prediction model using IPD from multiple studies. Results The IPD approach offers many opportunities but methodological challenges exist, including: unavailability of requested IPD, missing patient data and predictors, and between-study heterogeneity in methods of measurement, outcome definitions and predictor effects. Most articles develop their model using IPD from all available studies and perform only an internal validation (on the same set of data). Ten of the 15 articles did not allow for any study differences in baseline risk (intercepts), potentially limiting their model’s applicability and performance in some populations. Only two articles used external validation (on different data), including a novel method which develops the model on all but one of the IPD studies, tests performance in the excluded study, and repeats by rotating the omitted study. Conclusions An IPD meta-analysis offers unique opportunities for risk prediction research. Researchers can make more of this by allowing separate model intercept terms for each study (population) to improve generalisability, and by using ‘internal-external cross-validation’ to simultaneously develop and validate their model. Methodological challenges can be reduced by prospectively planned collaborations that share IPD for risk prediction. PMID:24397587
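A minimal sketch of the "internal-external cross-validation" idea described above (synthetic data, a logistic model and scikit-learn's LeaveOneGroupOut are used purely for illustration; they are assumptions, not the reviewed articles' methods):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
n = 600
X = rng.normal(size=(n, 3))                  # three predictors
study = rng.integers(0, 5, size=n)           # five contributing IPD studies
logit = -1.0 + X @ np.array([0.8, -0.5, 0.3]) + 0.3 * study  # study-level shift
y = rng.random(n) < 1 / (1 + np.exp(-logit))

# Internal-external cross-validation: develop the model on all studies but
# one, test performance in the omitted study, then rotate.
for train, test in LeaveOneGroupOut().split(X, y, groups=study):
    model = LogisticRegression().fit(X[train], y[train])
    auc = roc_auc_score(y[test], model.predict_proba(X[test])[:, 1])
    print(f"held-out study {study[test][0]}: AUC = {auc:.2f}")
```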
Intestinal absorption of hawthorn flavonoids--in vitro, in situ and in vivo correlations.
Zuo, Zhong; Zhang, Li; Zhou, Limin; Chang, Qi; Chow, Moses
2006-11-25
Our previous studies identified hyperoside (HP), isoquercitrin (IQ) and epicatechin (EC) as the major active flavonoid components of the phenolic extract from hawthorn fruits, demonstrating an inhibitory effect on in vitro Cu(2+)-mediated low-density lipoprotein oxidation. Among these three hawthorn flavonoids, EC was the only one detectable in plasma after oral administration of the hawthorn phenolic extract to rats. The present study aims to investigate the intestinal absorption mechanisms of these three hawthorn flavonoids using an in vitro Caco-2 monolayer model, a rat in situ intestinal perfusion model and in vivo pharmacokinetic studies in rats. In addition, in order to investigate the effect of the co-occurring components in the hawthorn phenolic extract on the intestinal absorption of these three major hawthorn flavonoids, the intestinal absorption and transport profiles of HP, IQ and EC in the form of individual pure compounds, a mixture of pure compounds, and the hawthorn phenolic extract were studied and compared. The observations from the in vitro Caco-2 monolayer model and the in situ intestinal perfusion model indicated that all three studied hawthorn flavonoids have quite limited permeabilities. EC and IQ demonstrated more extensive metabolism in the rat in situ intestinal perfusion model and the in vivo study than in the Caco-2 monolayer model. Moreover, results from the Caco-2 monolayer model, the rat in situ intestinal perfusion model and the in vivo pharmacokinetic studies in rats consistently showed that the co-occurring components in the hawthorn phenolic extract might not have a significant effect on the intestinal absorption of the three major hawthorn flavonoids studied.
Oscillons in a perturbed signum-Gordon model
NASA Astrophysics Data System (ADS)
Klimas, P.; Streibel, J. S.; Wereszczynski, A.; Zakrzewski, W. J.
2018-04-01
We study various properties of a perturbed signum-Gordon model, which has been obtained through the dimensional reduction of the so-called `first BPS submodel of the Skyrme model'. This study is motivated by the observation that the first BPS submodel of the Skyrme model may be partially responsible for the good qualities of the rational map ansatz approximation to the solutions of the Skyrme model. We investigate the existence, stability and various properties of oscillons and other time-dependent states in this perturbed signum-Gordon model.
GEANT4 benchmark with MCNPX and PHITS for activation of concrete
NASA Astrophysics Data System (ADS)
Tesse, Robin; Stichelbaut, Frédéric; Pauly, Nicolas; Dubus, Alain; Derrien, Jonathan
2018-02-01
The activation of concrete is a real problem from the point of view of waste management. Because of the complexity of the issue, Monte Carlo (MC) codes have become an essential tool for its study. Several MC codes, and several nuclear models within them, are available. MCNPX and PHITS have already been validated for shielding studies, but GEANT4 is also a suitable solution. In these codes, different models can be considered for a concrete activation study. The Bertini model is not the best model for spallation, while the BIC and INCL models agree well with previous results in the literature.
Isolated heart models: cardiovascular system studies and technological advances.
Olejnickova, Veronika; Novakova, Marie; Provaznik, Ivo
2015-07-01
The isolated heart model is a relevant tool for cardiovascular system studies. It represents a highly reproducible model for studying a broad spectrum of biochemical, physiological, morphological, and pharmaceutical parameters, including analysis of intrinsic heart mechanics, metabolism, and coronary vascular response. Results obtained in this model are free from the influence of other organ systems, plasma concentrations of hormones or ions, and the autonomic nervous system. The review describes various isolated heart models, the modes of heart perfusion, and the advantages and limitations of various experimental setups. It also reports the authors' improvements to the Langendorff perfusion setup.
Mesoscale research activities with the LAMPS model
NASA Technical Reports Server (NTRS)
Kalb, M. W.
1985-01-01
Researchers achieved full implementation of the LAMPS mesoscale model on the Atmospheric Sciences Division computer and derived balanced and real wind initial states for three case studies: March 6, April 24, April 26, 1982. Numerical simulations were performed for three separate studies: (1) a satellite moisture data impact study using Vertical Atmospheric Sounder (VAS) precipitable water as a constraint on model initial state moisture analyses; (2) an evaluation of mesoscale model precipitation simulation accuracy with and without convective parameterization; and (3) the sensitivity of model precipitation to mesoscale detail of moisture and vertical motion in an initial state.
A diagnostic model for studying daytime urban air quality trends
NASA Technical Reports Server (NTRS)
Brewer, D. A.; Remsberg, E. E.; Woodbury, G. E.
1981-01-01
A single cell Eulerian photochemical air quality simulation model was developed and validated for selected days of the 1976 St. Louis Regional Air Pollution Study (RAPS) data sets; parameterizations of variables in the model and validation studies using the model are discussed. Good agreement was obtained between measured and modeled concentrations of NO, CO, and NO2 for all days simulated. The maximum concentration of O3 was also predicted well. Predicted species concentrations were relatively insensitive to small variations in CO and NOx emissions and to the concentrations of species which are entrained as the mixed layer rises.
Probability of Detection (POD) as a statistical model for the validation of qualitative methods.
Wehling, Paul; LaBudde, Robert A; Brunelle, Sharon L; Nelson, Maria T
2011-01-01
A statistical model is presented for use in validation of qualitative methods. This model, termed Probability of Detection (POD), harmonizes the statistical concepts and parameters between quantitative and qualitative method validation. POD characterizes method response with respect to concentration as a continuous variable. The POD model provides a tool for graphical representation of response curves for qualitative methods. In addition, the model allows comparisons between candidate and reference methods, and provides calculations of repeatability, reproducibility, and laboratory effects from collaborative study data. Single laboratory study and collaborative study examples are given.
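As a rough sketch of the POD idea, assuming detection results recorded at several spiked concentrations, the example below fits a logistic curve for the probability of detection as a function of log concentration; the data values, units and the specific curve form are illustrative assumptions, not the exact formulation used in the article.

    # Hypothetical detection data: positives out of n replicates at each spike level.
    import numpy as np
    from scipy.optimize import curve_fit

    conc = np.array([0.04, 0.2, 1.0, 5.0, 25.0])      # spiked concentrations (assumed units)
    n_rep = np.array([20, 20, 20, 20, 20])             # replicates per level
    n_pos = np.array([2, 7, 15, 19, 20])               # detections per level

    def pod_curve(c, c50, slope):
        # Logistic POD model on log-concentration: POD(c) = 1/(1+exp(-slope*(ln c - ln c50)))
        return 1.0 / (1.0 + np.exp(-slope * (np.log(c) - np.log(c50))))

    popt, _ = curve_fit(pod_curve, conc, n_pos / n_rep, p0=[1.0, 1.0])
    c50, slope = popt
    print(f"estimated c50 = {c50:.2f}, slope = {slope:.2f}")
    print("POD at concentration 1.0:", round(float(pod_curve(1.0, *popt)), 2))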
Animal models for studying female genital tract infection with Chlamydia trachomatis.
De Clercq, Evelien; Kalmar, Isabelle; Vanrompay, Daisy
2013-09-01
Chlamydia trachomatis is a Gram-negative obligate intracellular bacterial pathogen. It is the leading cause of bacterial sexually transmitted disease in the world, with more than 100 million new cases of genital tract infections with C. trachomatis occurring each year. Animal models are indispensable for the study of C. trachomatis infections and the development and evaluation of candidate vaccines. In this paper, the most commonly used animal models to study female genital tract infections with C. trachomatis will be reviewed, namely, the mouse, guinea pig, and nonhuman primate models. Additionally, we will focus on the more recently developed pig model.
Application of Three Cognitive Diagnosis Models to ESL Reading and Listening Assessments
ERIC Educational Resources Information Center
Lee, Yong-Won; Sawaki, Yasuyo
2009-01-01
The present study investigated the functioning of three psychometric models for cognitive diagnosis--the general diagnostic model, the fusion model, and latent class analysis--when applied to large-scale English as a second language listening and reading comprehension assessments. Data used in this study were scored item responses and incidence…
Why College Students Cheat: A Conceptual Model of Five Factors
ERIC Educational Resources Information Center
Yu, Hongwei; Glanzer, Perry L.; Johnson, Byron R.; Sriram, Rishi; Moore, Brandon
2018-01-01
Though numerous studies have identified factors associated with academic misconduct, few have proposed conceptual models that could make sense of multiple factors. In this study, we used structural equation modeling (SEM) to test a conceptual model of five factors using data from a relatively large sample of 2,503 college students. The results…
Externalising Students' Mental Models through Concept Maps
ERIC Educational Resources Information Center
Chang, Shu-Nu
2007-01-01
The purpose of this study is to use concept maps as an "expressed model" to investigate students' mental models regarding the homeostasis of blood sugar. The difficulties in learning the concept of homeostasis and in probing mental models have been revealed in many studies. Homeostasis of blood sugar is one of the themes in junior high…
The Utility of IRT in Small-Sample Testing Applications.
ERIC Educational Resources Information Center
Sireci, Stephen G.
The utility of modified item response theory (IRT) models in small sample testing applications was studied. The modified IRT models were modifications of the one- and two-parameter logistic models. One-, two-, and three-parameter models were also studied. Test data were from 4 years of a national certification examination for persons desiring…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-19
... results of speciation data analyses, air quality modeling studies, chemical tracer studies, emission... Demonstration 1. Pollutants Addressed 2. Emission Inventory Requirements 3. Modeling 4. Reasonably Available... modeling (40 CFR 51.1007) that is performed in accordance with EPA modeling guidance (EPA-454/B-07-002...
The Community Multiscale Air Quality (CMAQ) / Plume-in-Grid (PinG) model was applied on a domain encompassing the greater Nashville, Tennessee region. Model simulations were performed for selected days in July 1995 during the Southern Oxidant Study (SOS) field study program wh...
Applying the age-shift approach to model responses to midrotation fertilization
Colleen A. Carlson; Thomas R. Fox; H. Lee Allen; Timothy J. Albaugh
2010-01-01
Growth and yield models used to evaluate midrotation fertilization economics require adjustments to account for the typically observed responses. This study investigated the use of age-shift models to predict midrotation fertilizer responses. Age-shift prediction models were constructed from a regional study consisting of 43 installations of a nitrogen (N) by...
Predicting the performance of airborne antennas in the microwave regime
NASA Astrophysics Data System (ADS)
Carroll, David P.
1990-12-01
This study investigated the application of a high-frequency model (Uniform Geometrical Theory of Diffraction) of electromagnetic sources mounted on a curved surface of a complex structure. In particular, the purpose of the study was to determine if the model could be used to predict the radiation patterns of cavity-backed spiral antennas mounted on aircraft fuselages so that the optimum locations for the antennas could be chosen during the aircraft design phase. A review of literature revealed a good deal of work in modeling communications, navigation, identification antennas (blade monopoles and aperture slots) mounted on a wide variety of aircraft fuselages and successful validation against quarter-scale model measurements. This study developed a monopole-array model of a spiral antenna's radiation at vertical polarization and an ellipsoid-plate model of the FB-111A. Using the antenna and aircraft models, the existing Uniform Geometrical Theory of Diffraction model generated radiation patterns which agreed favorably with full-scale measured data. The study includes plots of predicted and measured radiation patterns from 2.5 to 15 Gigahertz.
NASA Astrophysics Data System (ADS)
Dahdouh, S.; Varsier, N.; Nunez Ochoa, M. A.; Wiart, J.; Peyman, A.; Bloch, I.
2016-02-01
Numerical dosimetry studies require the development of accurate numerical 3D models of the human body. This paper proposes a novel method for building 3D heterogeneous models of young children by combining results obtained from a semi-automatic multi-organ segmentation algorithm and an anatomy deformation method. The data consist of 3D magnetic resonance images, which are first segmented to obtain a set of initial tissues. A deformation procedure guided by the segmentation results is then developed in order to obtain five young child models ranging in age from 5 to 37 months. By constraining the deformation of an older child model toward a younger one using segmentation results, we ensure the anatomical realism of the models. Using the proposed framework, five models, containing thirteen tissues, are built. Three of these models are used in a prospective dosimetry study to analyze young child exposure to radiofrequency electromagnetic fields. The results tend to show a relationship between age and whole-body exposure. The results also highlight the necessity to specifically study and develop measurements of the dielectric properties of child tissues.
Rigorously testing multialternative decision field theory against random utility models.
Berkowitsch, Nicolas A J; Scheibehenne, Benjamin; Rieskamp, Jörg
2014-06-01
Cognitive models of decision making aim to explain the process underlying observed choices. Here, we test a sequential sampling model of decision making, multialternative decision field theory (MDFT; Roe, Busemeyer, & Townsend, 2001), on empirical grounds and compare it against 2 established random utility models of choice: the probit and the logit model. Using a within-subject experimental design, participants in 2 studies repeatedly choose among sets of options (consumer products) described on several attributes. The results of Study 1 showed that all models predicted participants' choices equally well. In Study 2, in which the choice sets were explicitly designed to distinguish the models, MDFT had an advantage in predicting the observed choices. Study 2 further revealed the occurrence of multiple context effects within single participants, indicating an interdependent evaluation of choice options and correlations between different context effects. In sum, the results indicate that sequential sampling models can provide relevant insights into the cognitive process underlying preferential choices and thus can lead to better choice predictions. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Assessing groundwater policy with coupled economic-groundwater hydrologic modeling
NASA Astrophysics Data System (ADS)
Mulligan, Kevin B.; Brown, Casey; Yang, Yi-Chen E.; Ahlfeld, David P.
2014-03-01
This study explores groundwater management policies and the effect of modeling assumptions on the projected performance of those policies. The study compares an optimal economic allocation for groundwater use subject to streamflow constraints, achieved by a central planner with perfect foresight, with a uniform tax on groundwater use and a uniform quota on groundwater use. The policies are compared with two modeling approaches, the Optimal Control Model (OCM) and the Multi-Agent System Simulation (MASS). The economic decision models are coupled with a physically based representation of the aquifer using a calibrated MODFLOW groundwater model. The results indicate that uniformly applied policies perform poorly when simulated with more realistic, heterogeneous, myopic, and self-interested agents. In particular, the effects of the physical heterogeneity of the basin and the agents undercut the perceived benefits of policy instruments assessed with simple, single-cell groundwater modeling. This study demonstrates the results of coupling realistic hydrogeology and human behavior models to assess groundwater management policies. The Republican River Basin, which overlies a portion of the Ogallala aquifer in the High Plains of the United States, is used as a case study for this analysis.
Improved Model Fitting for the Empirical Green's Function Approach Using Hierarchical Models
NASA Astrophysics Data System (ADS)
Van Houtte, Chris; Denolle, Marine
2018-04-01
Stress drops calculated from source spectral studies currently show larger variability than what is implied by empirical ground motion models. One of the potential origins of the inflated variability is the simplified model-fitting techniques used in most source spectral studies. This study examines a variety of model-fitting methods and shows that the choice of method can explain some of the discrepancy. The preferred method is Bayesian hierarchical modeling, which can reduce bias, better quantify uncertainties, and allow additional effects to be resolved. Two case study earthquakes are examined, the 2016 MW7.1 Kumamoto, Japan earthquake and a MW5.3 aftershock of the 2016 MW7.8 Kaikōura earthquake. By using hierarchical models, the variation of the corner frequency, fc, and the falloff rate, n, across the focal sphere can be retrieved without overfitting the data. Other methods commonly used to calculate corner frequencies may give substantial biases. In particular, if fc was calculated for the Kumamoto earthquake using an ω-square model, the obtained fc could be twice as large as a realistic value.
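For context, the sketch below shows the kind of simple, single-spectrum least-squares fit of an omega-square-type source model, S(f) = Omega0 / (1 + (f/fc)^n), that the study argues can bias fc; the hierarchical Bayesian approach preferred in the paper would instead pool fc and n across spectra. The synthetic spectrum, noise level and parameter bounds are assumptions for illustration.

    # Fit a Brune-style omega-square spectral model to a synthetic displacement spectrum.
    import numpy as np
    from scipy.optimize import curve_fit

    def source_spectrum(f, omega0, fc, n):
        return omega0 / (1.0 + (f / fc) ** n)

    f = np.logspace(-1, 1.5, 80)                      # 0.1 to ~31.6 Hz
    true = source_spectrum(f, omega0=1.0, fc=2.0, n=2.0)
    rng = np.random.default_rng(1)
    obs = true * rng.lognormal(0.0, 0.2, f.size)      # multiplicative noise

    # Fit in log space so low- and high-frequency parts are weighted comparably.
    def log_model(f, log_omega0, fc, n):
        return np.log(source_spectrum(f, np.exp(log_omega0), fc, n))

    popt, _ = curve_fit(log_model, f, np.log(obs), p0=[0.0, 1.0, 2.0],
                        bounds=([-5.0, 0.01, 0.5], [5.0, 100.0, 4.0]))
    print(f"fitted fc = {popt[1]:.2f} Hz, falloff n = {popt[2]:.2f}")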
A multi-objective constraint-based approach for modeling genome-scale microbial ecosystems.
Budinich, Marko; Bourdon, Jérémie; Larhlimi, Abdelhalim; Eveillard, Damien
2017-01-01
Interplay within microbial communities impacts ecosystems on several scales, and elucidation of the consequent effects is a difficult task in ecology. In particular, the integration of genome-scale data within quantitative models of microbial ecosystems remains elusive. This study advocates the use of constraint-based modeling to build predictive models from recent high-resolution -omics datasets. Following recent studies that have demonstrated the accuracy of constraint-based models (CBMs) for simulating single-strain metabolic networks, we sought to study microbial ecosystems as a combination of single-strain metabolic networks that exchange nutrients. This study presents two multi-objective extensions of CBMs for modeling communities: multi-objective flux balance analysis (MO-FBA) and multi-objective flux variability analysis (MO-FVA). Both methods were applied to a hot spring mat model ecosystem. As a result, multiple trade-offs between nutrients and growth rates, as well as thermodynamically favorable relative abundances at the community level, were emphasized. We expect this approach to be used for integrating genomic information in microbial ecosystems. Models built along these lines should provide insights into behaviors (including diversity) that take place at the ecosystem scale.
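To make the constraint-based starting point concrete, here is a minimal single-strain flux balance analysis (FBA) sketch: maximise a growth flux subject to the steady-state mass balance S·v = 0 and flux bounds. The toy stoichiometric matrix and bounds are invented for illustration; MO-FBA and MO-FVA, as proposed in the paper, extend this single objective to several community-level objectives.

    # Toy FBA: maximise biomass flux v_bio subject to S v = 0 and flux bounds.
    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical reactions: uptake (-> A), conversion (A -> B), biomass (B ->).
    S = np.array([
        [1.0, -1.0,  0.0],   # metabolite A balance
        [0.0,  1.0, -1.0],   # metabolite B balance
    ])
    bounds = [(0, 10), (0, None), (0, None)]   # uptake capped at 10 units
    c = np.array([0.0, 0.0, -1.0])             # linprog minimises, so use -v_bio

    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
    print("optimal biomass flux:", res.x[2])   # expected 10, limited by the uptake bound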
Conceptual Models of Depression in Primary Care Patients: A Comparative Study
Karasz, Alison; Garcia, Nerina; Ferri, Lucia
2009-01-01
Conventional psychiatric treatment models are based on a biopsychiatric model of depression. A plausible explanation for low rates of depression treatment utilization among ethnic minorities and the poor is that members of these communities do not share the cultural assumptions underlying the biopsychiatric model. The study examined conceptual models of depression among depressed patients from various ethnic groups, focusing on the degree to which patients’ conceptual models ‘matched’ a biopsychiatric model of depression. The sample included 74 primary care patients from three ethnic groups screening positive for depression. We administered qualitative interviews assessing patients’ conceptual representations of depression. The analysis proceeded in two phases. The first phase involved a strategy called ‘quantitizing’ the qualitative data. A rating scheme was developed and applied to the data by a rater blind to study hypotheses. The data was subjected to statistical analyses. The second phase of the analysis involved the analysis of thematic data using standard qualitative techniques. Study hypotheses were largely supported. The qualitative analysis provided a detailed picture of primary care patients’ conceptual models of depression and suggested interesting directions for future research. PMID:20182550
Stochastic and deterministic multiscale models for systems biology: an auxin-transport case study.
Twycross, Jamie; Band, Leah R; Bennett, Malcolm J; King, John R; Krasnogor, Natalio
2010-03-26
Stochastic and asymptotic methods are powerful tools in developing multiscale systems biology models; however, little has been done in this context to compare the efficacy of these methods. The majority of current systems biology modelling research, including that of auxin transport, uses numerical simulations to study the behaviour of large systems of deterministic ordinary differential equations, with little consideration of alternative modelling frameworks. In this case study, we solve an auxin-transport model using analytical methods, deterministic numerical simulations and stochastic numerical simulations. Although the three approaches in general predict the same behaviour, the approaches provide different information that we use to gain distinct insights into the modelled biological system. We show in particular that the analytical approach readily provides straightforward mathematical expressions for the concentrations and transport speeds, while the stochastic simulations naturally provide information on the variability of the system. Our study provides a constructive comparison which highlights the advantages and disadvantages of each of the considered modelling approaches. This will prove helpful to researchers when weighing up which modelling approach to select. In addition, the paper goes some way to bridging the gap between these approaches, which in the future we hope will lead to integrative hybrid models.
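To illustrate the deterministic-versus-stochastic comparison generically (this is not the auxin-transport model of the paper), the sketch below simulates a single compartment with constant influx and first-order efflux both as an ODE and with Gillespie's stochastic simulation algorithm; the rate constants are arbitrary assumptions.

    # Compare a deterministic ODE with a Gillespie SSA for influx/efflux of one species.
    import numpy as np
    from scipy.integrate import solve_ivp

    k_in, k_out, t_end = 20.0, 0.5, 20.0   # hypothetical rates (molecules/s, 1/s)

    # Deterministic: dA/dt = k_in - k_out * A
    sol = solve_ivp(lambda t, a: k_in - k_out * a, (0, t_end), [0.0], dense_output=True)

    # Stochastic: Gillespie SSA with reactions (0 -> A) and (A -> 0).
    rng = np.random.default_rng(2)
    t, a, traj = 0.0, 0, []
    while t < t_end:
        rates = np.array([k_in, k_out * a])
        total = rates.sum()
        t += rng.exponential(1.0 / total)                     # time to next reaction
        a += 1 if rng.random() < rates[0] / total else -1     # which reaction fired
        traj.append((t, a))

    print("ODE steady state (k_in/k_out):", k_in / k_out)
    print("ODE value at t_end:", float(sol.sol(t_end)[0]))
    print("SSA value at t_end:", traj[-1][1])

The deterministic solution gives the mean behaviour, while repeated SSA runs would additionally characterise the variability around it, mirroring the comparison made in the study.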
Use of mathematical modelling to assess the impact of vaccines on antibiotic resistance.
Atkins, Katherine E; Lafferty, Erin I; Deeny, Sarah R; Davies, Nicholas G; Robotham, Julie V; Jit, Mark
2018-06-01
Antibiotic resistance is a major global threat to the provision of safe and effective health care. To control antibiotic resistance, vaccines have been proposed as an essential intervention, complementing improvements in diagnostic testing, antibiotic stewardship, and drug pipelines. The decision to introduce or amend vaccination programmes is routinely based on mathematical modelling. However, few mathematical models address the impact of vaccination on antibiotic resistance. We reviewed the literature using PubMed to identify all studies that used an original mathematical model to quantify the impact of a vaccine on antibiotic resistance transmission within a human population. We reviewed the models from the resulting studies in the context of a new framework to elucidate the pathways through which vaccination might impact antibiotic resistance. We identified eight mathematical modelling studies; the state of the literature highlighted important gaps in our understanding. Notably, studies are limited in the range of pathways represented, their geographical scope, and the vaccine-pathogen combinations assessed. Furthermore, to translate model predictions into public health decision making, more work is needed to understand how model structure and parameterisation affects model predictions and how to embed these predictions within economic frameworks. Copyright © 2018 Elsevier Ltd. All rights reserved.
Osteoporotic Animal Models of Bone Healing: Advantages and Pitfalls.
Calciolari, Elena; Donos, Nikolaos; Mardas, Nikos
2017-10-01
The aim of this review was to summarize the advantages and pitfalls of the available osteoporotic animal models of bone healing. A thorough literature search was performed in MEDLINE via OVID and EMBASE to identify animal studies investigating the effect of experimental osteoporosis on bone healing and bone regeneration. The osteotomy model in the proximal tibia is the most popular osseous defect model to study the bone healing process in osteoporotic-like conditions, although other well-characterized models, such as the post-extraction model, might be taken into consideration by future studies. The regenerative potential of osteoporotic bone and its response to biomaterials/regenerative techniques has not been clarified yet, and the critical size defect model might be an appropriate tool to serve this purpose. Since an ideal animal model for simulating osteoporosis does not exist, the type of bone remodeling, the animal lifespan, the age of peak bone mass, and the economic and ethical implications should be considered in our selection process. Furthermore, the influence of animal species, sex, age, and strain on the outcome measurement should be taken into account. In order to make future studies meaningful, standardized international guidelines for osteoporotic animal models of bone healing need to be set up.
Ke, A; Barter, Z; Rowland‐Yeo, K
2016-01-01
In this study, we present efavirenz physiologically based pharmacokinetic (PBPK) model development as an example of our best practice approach that uses a stepwise approach to verify the different components of the model. First, a PBPK model for efavirenz incorporating in vitro and clinical pharmacokinetic (PK) data was developed to predict exposure following multiple dosing (600 mg q.d.). Alfentanil i.v. and p.o. drug‐drug interaction (DDI) studies were utilized to evaluate and refine the CYP3A4 induction component in the liver and gut. Next, independent DDI studies with substrates of CYP3A4 (maraviroc, atazanavir, and clarithromycin) and CYP2B6 (bupropion) verified the induction components of the model (area under the curve [AUC] ratios within 1.0–1.7‐fold of observed). Finally, the model was refined to incorporate the fractional contribution of enzymes, including CYP2B6, propagating autoinduction into the model (Racc 1.7 vs. 1.7 observed). This validated mechanistic model can now be applied in clinical pharmacology studies to prospectively assess both the victim and perpetrator DDI potential of efavirenz. PMID:27435752
Toward Supersonic Retropropulsion CFD Validation
NASA Technical Reports Server (NTRS)
Kleb, Bil; Schauerhamer, D. Guy; Trumble, Kerry; Sozer, Emre; Barnhardt, Michael; Carlson, Jan-Renee; Edquist, Karl
2011-01-01
This paper begins the process of verifying and validating computational fluid dynamics (CFD) codes for supersonic retropropulsive flows. Four CFD codes (DPLR, FUN3D, OVERFLOW, and US3D) are used to perform various numerical and physical modeling studies toward the goal of comparing predictions with a wind tunnel experiment specifically designed to support CFD validation. Numerical studies run the gamut in rigor from code-to-code comparisons to observed order-of-accuracy tests. Results indicate that, for this complex flowfield involving time-dependent shocks and vortex shedding, design order of accuracy is not clearly evident. Also explored is the extent of physical modeling necessary to predict the salient flowfield features found in high-speed Schlieren images and surface pressure measurements taken during the validation experiment. Physical modeling studies include geometric items such as wind tunnel wall and sting mount interference, as well as turbulence modeling that ranges from a RANS (Reynolds-Averaged Navier-Stokes) 2-equation model to DES (Detached Eddy Simulation) models. These studies indicate that tunnel wall interference is minimal for the cases investigated; model mounting hardware effects are confined to the aft end of the model; and sparse grid resolution and turbulence modeling can damp or entirely dissipate the unsteadiness of this self-excited flow.
Stochastic Technology Choice Model for Consequential Life Cycle Assessment.
Kätelhön, Arne; Bardow, André; Suh, Sangwon
2016-12-06
Discussions on Consequential Life Cycle Assessment (CLCA) have relied largely on partial or general equilibrium models. Such models are useful for integrating market effects into CLCA, but also have well-recognized limitations such as the poor granularity of the sectoral definition and the assumption of perfect oversight by all economic agents. Building on the Rectangular-Choice-of-Technology (RCOT) model, this study proposes a new modeling approach for CLCA, the Technology Choice Model (TCM). In this approach, the RCOT model is adapted for its use in CLCA and extended to incorporate parameter uncertainties and suboptimal decisions due to market imperfections and information asymmetry in a stochastic setting. In a case study on rice production, we demonstrate that the proposed approach allows modeling of complex production technology mixes and their expected environmental outcomes under uncertainty, at a high level of detail. Incorporating the effect of production constraints, uncertainty, and suboptimal decisions by economic agents significantly affects technology mixes and associated greenhouse gas (GHG) emissions of the system under study. The case study also shows the model's ability to determine both the average and marginal environmental impacts of a product in response to changes in the quantity of final demand.
Comparison among cognitive diagnostic models for the TIMSS 2007 fourth grade mathematics assessment.
Yamaguchi, Kazuhiro; Okada, Kensuke
2018-01-01
A variety of cognitive diagnostic models (CDMs) have been developed in recent years to help with the diagnostic assessment and evaluation of students. Each model makes different assumptions about the relationship between students' achievement and skills, which makes it important to empirically investigate which CDMs better fit the actual data. In this study, we examined this question by comparatively fitting representative CDMs to the Trends in International Mathematics and Science Study (TIMSS) 2007 assessment data across seven countries. The following two major findings emerged. First, in accordance with former studies, CDMs had a better fit than did the item response theory models. Second, main effects models generally had a better fit than other parsimonious or the saturated models. Related to the second finding, the fit of the traditional parsimonious models such as the DINA and DINO models were not optimal. The empirical educational implications of these findings are discussed.
Uncertainty in tsunami sediment transport modeling
Jaffe, Bruce E.; Goto, Kazuhisa; Sugawara, Daisuke; Gelfenbaum, Guy R.; La Selle, SeanPaul M.
2016-01-01
Erosion and deposition from tsunamis record information about tsunami hydrodynamics and size that can be interpreted to improve tsunami hazard assessment. We explore sources and methods for quantifying uncertainty in tsunami sediment transport modeling. Uncertainty varies with tsunami, study site, available input data, sediment grain size, and model. Although uncertainty has the potential to be large, published case studies indicate that both forward and inverse tsunami sediment transport models perform well enough to be useful for deciphering tsunami characteristics, including size, from deposits. New techniques for quantifying uncertainty, such as Ensemble Kalman Filtering inversion, and more rigorous reporting of uncertainties will advance the science of tsunami sediment transport modeling. Uncertainty may be decreased with additional laboratory studies that increase our understanding of the semi-empirical parameters and physics of tsunami sediment transport, standardized benchmark tests to assess model performance, and development of hybrid modeling approaches to exploit the strengths of forward and inverse models.
Lu, Tao
2017-01-01
The joint modeling of mean and variance for longitudinal data is an active research area. This type of model has the advantage of accounting for the heteroscedasticity commonly observed in between- and within-subject variation. Most research focuses on improving estimation efficiency but ignores many data features frequently encountered in practice. In this article, we develop a mixed-effects location scale joint model that concurrently accounts for longitudinal data with multiple features. Specifically, our joint model handles heterogeneity, skewness, limits of detection, and measurement errors in covariates, which are typically observed in the collection of longitudinal data in many studies. We employ a Bayesian approach for making inference on the joint model. The proposed model and method are applied to an AIDS study. Simulation studies are performed to assess the performance of the proposed method. Alternative models under different conditions are compared.
NASA Astrophysics Data System (ADS)
Azmi, N. I. L. Mohd; Ahmad, R.; Zainuddin, Z. M.
2017-09-01
This research explores the Mixed-Model Two-Sided Assembly Line (MMTSAL). There are two interrelated problems in the MMTSAL: line balancing and model sequencing. In previous studies, many researchers considered these problems separately, and only a few studied them simultaneously, and then only for one-sided lines. In this study, the two problems are solved simultaneously to obtain a more efficient solution. A Mixed Integer Linear Programming (MILP) model with the objectives of minimizing total utility work and idle time is formulated, considering a variable launching interval and an assignment restriction constraint. The problem is analysed using small-size test cases to validate the integrated model. Throughout this paper, numerical experiments were conducted using the General Algebraic Modelling System (GAMS) with the CPLEX solver. Experimental results indicate that integrating model sequencing and line balancing helps to minimise the proposed objective functions.
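As a much simplified illustration of the kind of optimisation involved (not the paper's MMTSAL formulation, which also sequences models and handles two-sided stations, variable launching intervals and assignment restrictions), the sketch below solves a small one-sided line-balancing MILP with PuLP and its bundled CBC solver; the task times, precedence relations and number of stations are invented.

    # Simplified one-sided line balancing: assign tasks to stations to minimise cycle time.
    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD

    times = {1: 4, 2: 3, 3: 5, 4: 2, 5: 6}            # hypothetical task times
    prec = [(1, 3), (2, 3), (3, 5), (4, 5)]           # (i, j): task i must precede task j
    stations = [1, 2, 3]

    prob = LpProblem("line_balancing", LpMinimize)
    x = LpVariable.dicts("x", [(i, s) for i in times for s in stations], cat=LpBinary)
    C = LpVariable("cycle_time", lowBound=0)
    prob += C                                          # objective: minimise the cycle time

    for i in times:                                    # each task assigned to exactly one station
        prob += lpSum(x[(i, s)] for s in stations) == 1
    for s in stations:                                 # station workload cannot exceed C
        prob += lpSum(times[i] * x[(i, s)] for i in times) <= C
    for (i, j) in prec:                                # precedence: station(i) <= station(j)
        prob += lpSum(s * x[(i, s)] for s in stations) <= lpSum(s * x[(j, s)] for s in stations)

    prob.solve(PULP_CBC_CMD(msg=False))
    print("cycle time:", C.value())
    for i in times:
        print("task", i, "-> station", [s for s in stations if x[(i, s)].value() > 0.5][0])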
Constructing service-oriented architecture adoption maturity matrix using Kano model
NASA Astrophysics Data System (ADS)
Hamzah, Mohd Hamdi Irwan; Baharom, Fauziah; Mohd, Haslina
2017-10-01
Organizations commonly adopt Service-Oriented Architecture (SOA) because it provides flexible reconfiguration and can reduce development time and cost. To guide SOA adoption, previous work in industry and academia has constructed SOA maturity models. However, there are few published accounts of how to construct the matrix in these SOA maturity models. Therefore, this study provides a method that can be used to construct the matrix in an SOA maturity model. The study adapts the Kano model to construct a cross-evaluation matrix focused on the IT and business benefits of SOA adoption. It finds that the Kano model provides a suitable and appropriate method for constructing the cross-evaluation matrix in an SOA maturity model. The Kano model can also be used to plot, organize and better represent the evaluation dimensions for assessing SOA adoption.
de Lusignan, Simon; Cashman, Josephine; Poh, Norman; Michalakidis, Georgios; Mason, Aaron; Desombre, Terry; Krause, Paul
2012-01-01
Medical research increasingly requires the linkage of data from different sources. Conducting a requirements analysis for a new application is an established part of software engineering, but rarely reported in the biomedical literature; and no generic approaches have been published as to how to link heterogeneous health data. Literature review, followed by a consensus process, to define how requirements for research using multiple data sources might be modeled. We have developed a requirements analysis, i-ScheDULEs. The first components of the modeling process are indexing and creating a rich picture of the research study. Secondly, we developed a series of reference models of progressive complexity: data flow diagrams (DFD) to define data requirements; unified modeling language (UML) use case diagrams to capture study-specific and governance requirements; and finally, business process models using business process modeling notation (BPMN). These requirements and their associated models should become part of research study protocols.
ERIC Educational Resources Information Center
Duran, Erol
2013-01-01
This research is a case study, a qualitative research model also referred to as an 'example event' study. The purpose of this research is to determine the effect of the word repetitive reading method, supported with the neurological affecting model, on fluent reading. In this study, the False Analysis Inventory was used in order to determine the student's oral…
Elementary School Students' Mental Models about Formation of Seasons: A Cross Sectional Study
ERIC Educational Resources Information Center
Türk, Cumhur; Kalkan, Hüseyin; Kiroglu, Kasim; Ocak Iskeleli, Nazan
2016-01-01
The purpose of this study is to determine the mental models of elementary school students on seasons and to analyze how these models change in terms of grade levels. The study was conducted with 294 students (5th, 6th, 7th and 8th graders) studying in an elementary school of Turkey's Black Sea Region. Qualitative and quantitative data collection…
Understanding Group/Party Affiliation Using Social Networks and Agent-Based Modeling
NASA Technical Reports Server (NTRS)
Campbell, Kenyth
2012-01-01
The dynamics of group affiliation and group dispersion are most often studied so that political candidates can better understand the most efficient way to conduct their campaigns. While political campaigning in the United States is a very hot topic that most politicians analyze and study, the concept of group/party affiliation presents its own area of study that produces very interesting results. One tool for examining party affiliation on a large scale is agent-based modeling (ABM), a paradigm in the modeling and simulation (M&S) field perfectly suited for aggregating individual behaviors to observe large swaths of a population. For this study, agent-based modeling was used to look at a community of agents and determine what factors can affect the group/party affiliation patterns that are present. In the agent-based model used for this experiment, many factors were present, but two main factors were used to determine the results. The results of this study show that it is possible to use agent-based modeling to explore group/party affiliation and construct a model that can mimic real-world events. More importantly, the model in the study allows the results found in a smaller community to be translated into larger experiments to determine if the results will remain present on a much larger scale.
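A toy sketch of the kind of agent-based model described: agents on a small-world social network hold one of two party affiliations and tend to adopt the local majority, with a small probability of switching independently. The network type, rates and update rule are illustrative assumptions, not the model used in the study.

    # Toy agent-based model of two-party affiliation on a small-world network.
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(3)
    G = nx.watts_strogatz_graph(n=200, k=6, p=0.1, seed=3)   # assumed social network
    state = rng.integers(0, 2, G.number_of_nodes())          # party 0 or party 1 per agent
    noise = 0.02                                              # spontaneous switching probability

    for step in range(2000):
        i = int(rng.integers(G.number_of_nodes()))            # pick one agent at random
        if rng.random() < noise:
            state[i] = 1 - state[i]                           # independent change of affiliation
        else:
            share = state[list(G.neighbors(i))].mean()        # neighbours' share of party 1
            if share != 0.5:
                state[i] = int(share > 0.5)                   # adopt local majority (keep on ties)
        if step % 500 == 0:
            print(f"step {step:4d}: share of party 1 = {state.mean():.2f}")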
Morigaki, Kenichi; Tanimoto, Yasushi
2018-03-14
One of the main questions in membrane biology concerns the functional roles of membrane heterogeneity and molecular localization. Although segregation and local enrichment of protein/lipid components (rafts) have been extensively studied, the presence and functions of such membrane domains still remain elusive. Along with biochemical, cell observation, and simulation studies, model membranes are emerging as an important tool for understanding the biological membrane, providing quantitative information on the physicochemical properties of membrane proteins and lipids. Segregation of fluid lipid bilayers into liquid-ordered (Lo) and liquid-disordered (Ld) phases has been studied as a simplified model of rafts in model membranes, including giant unilamellar vesicles (GUVs), giant plasma membrane vesicles (GPMVs), and supported lipid bilayers (SLB). Partition coefficients of membrane proteins between Lo and Ld phases were measured to gauge their affinities for lipid rafts (raftophilicity). One important development in model membranes is the patterned SLB, based on microfabrication technology. Patterned Lo/Ld phases have been applied to study the partition and function of membrane-bound molecules. Quantitative information on individual molecular species attained by model membranes is critical for elucidating molecular functions in the complex web of molecular interactions. The present review gives a short account of the model membranes developed for studying lateral heterogeneity, especially focusing on patterned model membranes on solid substrates. Copyright © 2018 Elsevier B.V. All rights reserved.
CONTROL FUNCTION ASSISTED IPW ESTIMATION WITH A SECONDARY OUTCOME IN CASE-CONTROL STUDIES.
Sofer, Tamar; Cornelis, Marilyn C; Kraft, Peter; Tchetgen Tchetgen, Eric J
2017-04-01
Case-control studies are designed to study associations between risk factors and a single, primary outcome. Information about additional, secondary outcomes is also collected, but association studies targeting such secondary outcomes should account for the case-control sampling scheme, otherwise results may be biased. Often, one uses inverse probability weighted (IPW) estimators to estimate population effects in such studies. IPW estimators are robust, as they only require correct specification of the mean regression model of the secondary outcome on covariates, and knowledge of the disease prevalence. However, IPW estimators are inefficient relative to estimators that make additional assumptions about the data generating mechanism. We propose a class of estimators for the effect of risk factors on a secondary outcome in case-control studies that combine IPW with an additional modeling assumption: specification of the disease outcome probability model. We incorporate this model via a mean-zero control function. We derive the class of all regular and asymptotically linear estimators corresponding to our modeling assumption, when the secondary outcome mean is modeled using either the identity or the log link. We find the efficient estimator in our class of estimators and show that it reduces to standard IPW when the model for the primary disease outcome is unrestricted, and is more efficient than standard IPW when the model is either parametric or semiparametric.
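To make the standard IPW comparator concrete (this is not the proposed control-function estimator), the sketch below weights each sampled subject by the inverse of its sampling probability, computed here from the simulated case and control counts, and fits the secondary-outcome regression by weighted least squares; the simulated population, prevalence and effect sizes are assumptions.

    # IPW regression of a secondary outcome on a risk factor using case-control data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    N, prevalence = 200_000, 0.02

    x = rng.normal(size=N)                               # risk factor
    y = 1.0 + 0.5 * x + rng.normal(size=N)               # secondary outcome (true slope 0.5)
    logit_d = np.log(prevalence / (1 - prevalence)) + 0.7 * x + 0.3 * y
    d = rng.binomial(1, 1 / (1 + np.exp(-logit_d)))      # primary disease outcome

    cases = np.flatnonzero(d == 1)                       # sample all cases...
    controls = rng.choice(np.flatnonzero(d == 0), size=cases.size, replace=False)  # ...and as many controls
    idx = np.concatenate([cases, controls])

    # Inverse-probability weights from the sampling probabilities given disease status.
    p_sample = np.where(d[idx] == 1, 1.0, cases.size / (d == 0).sum())
    w = 1.0 / p_sample

    naive = sm.OLS(y[idx], sm.add_constant(x[idx])).fit()
    ipw = sm.WLS(y[idx], sm.add_constant(x[idx]), weights=w).fit()
    print("naive slope (can be distorted by outcome-dependent sampling):", round(naive.params[1], 3))
    print("IPW slope (targets the population value of about 0.5):      ", round(ipw.params[1], 3))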
Theoretical Study of Diamond-Like Carbons and Nucleation of Diamond
NASA Astrophysics Data System (ADS)
Lee, Choon-Heung
Different forms of amorphous carbon and hydrocarbons with varying elastic and optical properties, hardness, density and hydrogen content exist depending on the preparation technique. The structure can vary from graphitic to diamond-like, i.e., from mainly threefold coordinated to mainly four-fold coordinated. In order to study the properties of such materials, microscopic models must be developed. These studies include the modelling of crosslinked defective graphite, diamond nucleation along the graphite edges, and diamond-like carbons. Tamor's proposed structure for diamond-like carbon consists of crosslinked graphitic regions. We studied a concrete realization of this model in which the cross-links are produced by shortening the interplanar bond lengths. The model study was accomplished with a pure rhombohedral graphite cell. For this study we used a semi-empirical potential based on Tersoff's environment-dependent potential which contains angular terms. It is enhanced by a long-range potential which describes the interplanar interactions. We found a configuration corresponding to a local minimum. More general features such as the randomness of the distribution of cross-links are needed for a realistic model. A model study of diamond/graphite interfaces was motivated by recent observations by Li and Angus. They observed a significant enhancement of diamond nucleation on the graphite edge planes with the preferential orientation relationship: {0001}_g || {111}_d, <1120>_g || <101>_d. Two possible interface structures were studied using the Tersoff potential. We found that the models have comparable low interface energies even if they contain some dangling bonds. Moreover, lower interface energies were found when the dangling bonds of the non-bonded diamond layer were satisfied with hydrogen. We have proposed a growth mechanism based on this study. Finally, we constructed realistic models of dense amorphous carbon. The WWW (introduced earlier for a-Si by Wooten, Winer and Weaire) model was the starting structure. The effects of clustering of the threefold coordinated atoms in pairs, chains, or graphitic (planar hexagonal) clusters were studied. The resulting models were relaxed using the Tersoff potential. Their electronic structures were studied using an empirical tight-binding scheme with parameters adjusted to reproduce the diamond and graphite band-structures. The models were found to have densities of ~3 g/cm^3 and bulk moduli of ~3.1 Mbar. Localized dangling bonds and pi-pi* states were found within the wide gap of the WWW model consistent with optical gaps of the order of 0.5-2 eV. Hydrogen atoms were introduced to remove some of the dangling bonds. The models were found to account for the essential features of ion-beam deposited amorphous carbon and hydrogenated amorphous carbon.
Gilmartin, Heather M; Sousa, Karen H; Battaglia, Catherine
2016-01-01
The central line (CL) bundle interventions are important for preventing central line-associated bloodstream infections (CLABSIs), but a modeling method for testing the CL bundle interventions within a health systems framework is lacking. Guided by the Quality Health Outcomes Model (QHOM), this study tested the CL bundle interventions in reflective and composite latent-variable measurement models to assess the impact of the modeling approaches on an investigation of the relationships between adherence to the CL bundle interventions, organizational context, and CLABSIs. A secondary data analysis study was conducted using data from 614 U.S. hospitals that participated in the Prevention of Nosocomial Infection and Cost-Effectiveness Refined study. The sample was randomly split into exploration and validation subsets. The two CL bundle modeling approaches resulted in adequately fitting structural models (RMSEA = .04; CFI = .94) and supported similar relationships within the QHOM. Adherence to the CL bundle had a direct effect on organizational context (reflective = .23; composite = .20; p = .01) and CLABSIs (reflective = -.28; composite = -.25; p = .01). The relationship between context and CLABSIs was not significant. Both modeling methods resulted in partial support of the QHOM. There were small statistical but large conceptual differences between the reflective and composite modeling approaches. The empirical impact of the modeling approaches was inconclusive, as both models resulted in a good fit to the data. Lessons learned are presented. The comparison of modeling approaches is recommended when initially modeling variables that have never been modeled, or variables with directional ambiguity, to increase transparency and bring confidence to study findings.
Tax Evasion and Nonequilibrium Model on Apollonian Networks
NASA Astrophysics Data System (ADS)
Lima, F. W. S.
2012-11-01
The Zaklan model was proposed and studied recently using the equilibrium Ising model on square lattices (SLs) near the critical temperature of the Ising model, where it presents a well-defined phase transition [G. Zaklan, F. Westerhoff and D. Stauffer, J. Econ. Interact. Coord. 4, 1 (2008), arXiv:0801.2980; G. Zaklan, F. W. S. Lima and F. Westerhoff, Physica A 387, 5857 (2008)]. The equilibrium Ising model has also been studied on normal and modified Apollonian networks (ANs) [J. S. Andrade, Jr., H. J. Herrmann, R. F. S. Andrade and L. R. da Silva, Phys. Rev. Lett. 94, 018702 (2005); R. F. S. Andrade, J. S. Andrade Jr. and H. J. Herrmann, Phys. Rev. E 79, 036105 (2009)], where it was shown not to present a phase transition of the 2D Ising type. Here, using agent-based Monte Carlo simulations, we study the Zaklan model with the well-known majority-vote model (MVM) with noise and apply it to tax evasion on ANs, showing that, unlike the Ising model, the MVM on ANs presents a well-defined phase transition. To control tax evasion in the economics model proposed by Zaklan et al., the MVM is applied in the neighborhood of the critical noise qc to the Zaklan model. We show that the Zaklan model is robust because it can be studied not only with the equilibrium dynamics of the Ising model but also through the nonequilibrium MVM, and on various topologies, giving the same behavior regardless of the dynamics or topology used.
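To make the majority-vote model (MVM) with noise concrete, the sketch below runs MVM with noise q on a small square lattice and reports the magnetisation-like order parameter for a few values of q around the critical noise reported for the square lattice (approximately 0.075); the lattice size and number of sweeps are kept small for illustration, and the Zaklan tax-evasion layer (periodic audits forcing compliance) is omitted.

    # Majority-vote model with noise q on an LxL square lattice (periodic boundaries).
    import numpy as np

    rng = np.random.default_rng(5)
    L, sweeps = 20, 300

    def mvm_order_parameter(q):
        s = rng.choice([-1, 1], size=(L, L))
        for _ in range(sweeps):
            for _ in range(L * L):                       # one sweep = L*L single-site updates
                i, j = rng.integers(L), rng.integers(L)
                nb = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
                if nb != 0:
                    majority = np.sign(nb)
                    # adopt the local majority with probability 1-q, oppose it with probability q
                    s[i, j] = majority if rng.random() > q else -majority
                else:
                    s[i, j] = rng.choice([-1, 1])        # tie: pick a state at random
        return abs(s.mean())

    for q in (0.02, 0.05, 0.08, 0.11):
        print(f"q = {q:.2f}: |m| = {mvm_order_parameter(q):.2f}")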
Hens, Niel; Habteab Ghebretinsae, Aklilu; Hardt, Karin; Van Damme, Pierre; Van Herck, Koen
2014-03-14
In this paper, we review the results of existing statistical models of the long-term persistence of hepatitis A vaccine-induced antibodies in light of recently available immunogenicity data from 2 clinical trials (up to 17 years of follow-up). Healthy adult volunteers monitored annually for 17 years after the administration of the first vaccine dose in 2 double-blind, randomized clinical trials were included in this analysis. Vaccination in these studies was administered according to a 2-dose vaccination schedule: 0, 12 months in study A and 0, 6 months in study B (NCT00289757/NCT00291876). Antibodies were measured using an in-house ELISA during the first 11 years of follow-up; a commercially available ELISA was then used up to Year 17 of follow-up. Long-term antibody persistence from studies A and B was estimated using statistical models for longitudinal data. Data from studies A and B were modeled separately. A total of 173 participants in study A and 108 participants in study B were included in the analysis. A linear mixed model with 2 changepoints allowed all available results to be accounted for. Predictions based on this model indicated that 98% (95%CI: 94-100%) of participants in study A and 97% (95%CI: 94-100%) of participants in study B will remain seropositive 25 years after receiving the first vaccine dose. Other models using part of the data provided consistent results: ≥95% of the participants was projected to remain seropositive for ≥25 years. This analysis, using previously used and newly selected model structures, was consistent with former estimates of seropositivity rates ≥95% for at least 25 years. Copyright © 2014 Elsevier Ltd. All rights reserved.
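A rough sketch in the spirit of a linear mixed model with two changepoints (the actual model structure, changepoint locations and trial data are not reproduced here): piecewise-linear time terms are built as truncated basis functions and a random intercept per participant is included via statsmodels; all numbers below are simulated assumptions.

    # Piecewise-linear mixed model for log antibody titres with two assumed changepoints.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(6)
    n_subj, years = 120, np.arange(0, 18)
    k1, k2 = 1.0, 6.0                                   # assumed changepoints (years)

    rows = []
    for pid in range(n_subj):
        u = rng.normal(0, 0.3)                          # subject-level random intercept
        for t in years:
            mu = 3.0 - 1.0 * min(t, k1) - 0.15 * min(max(t - k1, 0), k2 - k1) - 0.02 * max(t - k2, 0)
            rows.append({"pid": pid, "t": t, "log_titre": mu + u + rng.normal(0, 0.15)})
    df = pd.DataFrame(rows)

    # Truncated-line basis: the slope is allowed to change at k1 and k2.
    df["t1"] = np.minimum(df["t"], k1)
    df["t2"] = np.clip(df["t"] - k1, 0, k2 - k1)
    df["t3"] = np.maximum(df["t"] - k2, 0)

    fit = smf.mixedlm("log_titre ~ t1 + t2 + t3", df, groups=df["pid"]).fit()
    print(fit.params[["t1", "t2", "t3"]])               # estimated slope in each segment

Long-term seropositivity projections of the kind quoted above would then follow by extrapolating the fitted segments (with their uncertainty) beyond the observed follow-up.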
Power of Models in Longitudinal Study: Findings from a Full-Crossed Simulation Design
ERIC Educational Resources Information Center
Fang, Hua; Brooks, Gordon P.; Rizzo, Maria L.; Espy, Kimberly Andrews; Barcikowski, Robert S.
2009-01-01
Because the power properties of traditional repeated measures and hierarchical multivariate linear models have not been clearly determined in the balanced design for longitudinal studies in the literature, the authors present a power comparison study of traditional repeated measures and hierarchical multivariate linear models under 3…
SEASONAL NH3 EMISSIONS FOR THE CONTINENTAL UNITED STATES: INVERSE MODEL ESTIMATION AND EVALUATION
An inverse modeling study has been conducted here to evaluate a prior estimate of seasonal ammonia (NH3) emissions. The prior estimates were based on a previous inverse modeling study and two other bottom-up inventory studies. The results suggest that the prior estim...
Determinants of Linear Judgment: A Meta-Analysis of Lens Model Studies
ERIC Educational Resources Information Center
Karelaia, Natalia; Hogarth, Robin M.
2008-01-01
The mathematical representation of E. Brunswik's (1952) lens model has been used extensively to study human judgment and provides a unique opportunity to conduct a meta-analysis of studies that covers roughly 5 decades. Specifically, the authors analyzed statistics of the "lens model equation" (L. R. Tucker, 1964) associated with 249 different…
A Multilevel Analysis of Phase II of the Louisiana School Effectiveness Study.
ERIC Educational Resources Information Center
Kennedy, Eugene; And Others
This paper presents findings of a study that used conventional modeling strategies (student- and school-level) and a new multilevel modeling strategy, Hierarchical Linear Modeling, to investigate school effects on student-achievement outcomes for data collected as part of Phase 2 of the Louisiana School Effectiveness Study. The purpose was to…
Reviewing Instructional Studies Conducted Using Video Modeling to Children with Autism
ERIC Educational Resources Information Center
Acar, Cimen; Diken, Ibrahim H.
2012-01-01
This study explored 31 instructional research articles written using video modeling to children with autism and published in peer-reviewed journals. The studies in this research have been reached by searching EBSCO, Academic Search Complete, ERIC and other Anadolu University online search engines and using keywords such as "autism, video modeling,…
ERIC Educational Resources Information Center
Karadag, Engin; Kilicoglu, Gökhan; Yilmaz, Derya
2014-01-01
The purpose of this study is to explain constructed theoretical models that organizational cynicism perceptions of primary school teachers affect school culture and academic achievement, by using structural equation modeling. With the assumption that there is a cause-effect relationship between three main variables, the study was constructed with…
Evidence for a General ADHD Factor from a Longitudinal General School Population Study
ERIC Educational Resources Information Center
Normand, Sebastien; Flora, David B.; Toplak, Maggie E.; Tannock, Rosemary
2012-01-01
Recent factor analytic studies in Attention-Deficit/Hyperactivity Disorder (ADHD) have shown that hierarchical models provide a better fit of ADHD symptoms than correlated models. A hierarchical model includes a general ADHD factor and specific factors for inattention, and hyperactivity/impulsivity. The aim of this 12-month longitudinal study was…
The current study uses case studies of model-estimated regional precipitation and wet ion deposition to estimate errors in corresponding regional values derived from the means of site-specific values within regions of interest located in the eastern US. The mean of model-estimate...
Indoor-to-outdoor particle concentration ratio model for human exposure analysis
NASA Astrophysics Data System (ADS)
Lee, Jae Young; Ryu, Sung Hee; Lee, Gwangjae; Bae, Gwi-Nam
2016-02-01
This study presents an indoor-to-outdoor particle concentration ratio (IOR) model for improved estimates of indoor exposure levels. This model is useful in epidemiological studies with large populations, because sampling indoor pollutants in all participants' houses is often necessary but impractical. As a part of a study examining the association between air pollutants and atopic dermatitis in children, 16 parents agreed to measure the indoor and outdoor PM10 and PM2.5 concentrations at their homes for 48 h. Correlation analysis and multi-step multivariate linear regression analysis were performed to develop the IOR model. Temperature and floor level were found to be powerful predictors of the IOR. Despite the simplicity of the model, it demonstrated high accuracy in terms of the root mean square error (RMSE). Especially for long-term IOR estimations, the RMSE was as low as 0.064 and 0.063 for PM10 and PM2.5, respectively. When using a prediction model in an epidemiological study, understanding the consequences of the modeling error and justifying the use of the model are very important. In the last section, this paper discusses the impact of the modeling error and develops a novel methodology to justify the use of the model.
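A minimal sketch of an IOR prediction model of the kind described (temperature and floor level are used because the abstract names them as strong predictors; the data, coefficients and model form below are assumptions, not the study's fitted model):

    # Linear regression of the indoor-to-outdoor PM ratio (IOR) on temperature and floor level.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(7)
    n = 64
    temp = rng.uniform(-5, 30, n)                       # outdoor temperature (deg C), hypothetical
    floor = rng.integers(1, 15, n)                      # floor level, hypothetical
    ior = 0.9 - 0.006 * temp - 0.01 * floor + rng.normal(0, 0.06, n)   # synthetic IOR values

    X = np.column_stack([temp, floor])
    model = LinearRegression().fit(X, ior)
    rmse = mean_squared_error(ior, model.predict(X)) ** 0.5
    print("coefficients:", model.coef_.round(4), "intercept:", round(model.intercept_, 3))
    print("in-sample RMSE:", round(rmse, 3))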
Some unexamined aspects of analysis of covariance in pretest-posttest studies.
Ganju, Jitendra
2004-09-01
The use of an analysis of covariance (ANCOVA) model in a pretest-posttest setting deserves to be studied separately from its use in other (non-pretest-posttest) settings. For pretest-posttest studies, the following points are made in this article: (a) If the familiar change from baseline model accurately describes the data-generating mechanism for a randomized study, then it is impossible for unequal slopes to exist. Conversely, if unequal slopes exist, then it implies that the change from baseline model as a data-generating mechanism is inappropriate. An alternative data-generating model should be identified and the validity of the ANCOVA model should be demonstrated. (b) Under the usual assumptions of equal pretest and posttest within-subject error variances, the ratio of the standard error of a treatment contrast from a change from baseline analysis to that from ANCOVA is less than √2. (c) For an observational study it is possible for unequal slopes to exist even if the change from baseline model describes the data-generating mechanism. (d) Adjusting for the pretest variable in observational studies may actually introduce bias where none previously existed.
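A small simulation sketch of point (b): with equal pretest and posttest error variances and pre-post correlation rho, the large-sample ratio of the standard errors works out to sqrt(2/(1+rho)), which stays below √2; the sample size, rho and effect size below are arbitrary assumptions.

    # Compare the SE of the treatment effect: change-from-baseline analysis vs ANCOVA.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(8)
    n_per_arm, rho, sigma = 500, 0.5, 1.0
    treat = np.repeat([0, 1], n_per_arm)

    # Pre/post errors with equal variances and correlation rho; true treatment effect 0.3 on post.
    cov = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
    pre, post = rng.multivariate_normal([0.0, 0.0], cov, size=2 * n_per_arm).T
    post = post + 0.3 * treat
    df = pd.DataFrame({"pre": pre, "post": post, "change": post - pre, "treat": treat})

    change_fit = smf.ols("change ~ treat", df).fit()      # change-from-baseline analysis
    ancova_fit = smf.ols("post ~ treat + pre", df).fit()  # ANCOVA adjusting for pretest
    ratio = change_fit.bse["treat"] / ancova_fit.bse["treat"]
    print("empirical SE ratio (change / ANCOVA):", round(ratio, 3))
    print("large-sample value sqrt(2/(1+rho)): ", round((2 / (1 + rho)) ** 0.5, 3))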
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Nan; Liu, Xiang-Yang
2017-11-03
In this study, recent experimental and modeling studies in nanolayered metal/ceramic composites are reviewed, with focus on the mechanical behaviors of metal/nitrides interfaces. The experimental and modeling studies of the slip systems in bulk TiN are reviewed first. Then, the experimental studies of interfaces, including co-deformation mechanism by micropillar compression tests, in situ TEM straining tests for the dynamic process of the co-deformation, thickness-dependent fracture behavior, and interrelationship among the interfacial bonding, microstructure, and mechanical response, are reviewed for the specific material systems of Al/TiN and Cu/TiN multilayers at nanoscale. The modeling studies reviewed cover first-principles density functional theory-based modeling, atomistic molecular dynamics simulations, and mesoscale modeling of nanolayered composites using discrete dislocation dynamics. The phase transformation between zinc-blende and wurtzite AlN phases in Al/AlN multilayers at nanoscale is also reviewed. Finally, a summary and perspective of possible research directions and challenges are given.
Animal models of exercise and obesity.
Kasper, Christine E
2013-01-01
Animal models have been invaluable in the conduct of nursing research for the past 40 years. This review will focus on specific animal models that can be used in nursing research to study the physiologic phenomena of exercise and obesity when the use of human subjects is either scientifically premature or inappropriate because of the need for sampling tissue or the conduct of longitudinal studies of aging. There exists an extensive body of literature reporting the experimental use of various animal models, in both exercise science and the study of the mechanisms of obesity. Many of these studies are focused on the molecular and genetic mechanisms of organ system adaptation and plasticity in response to exercise, obesity, or both. However, this review will narrowly focus on the models useful to nursing research in the study of exercise in the clinical context of increasing performance and mobility, atrophy and bedrest, fatigue, and aging. Animal models of obesity focus on those that best approximate clinical pathology.
Unpacking buyer-seller differences in valuation from experience: A cognitive modeling approach.
Pachur, Thorsten; Scheibehenne, Benjamin
2017-12-01
People often indicate a higher price for an object when they own it (i.e., as sellers) than when they do not (i.e., as buyers), a phenomenon known as the endowment effect. We develop a cognitive modeling approach to formalize, disentangle, and compare alternative psychological accounts (e.g., loss aversion, loss attention, strategic misrepresentation) of such buyer-seller differences in pricing decisions of monetary lotteries. To also be able to test possible buyer-seller differences in memory and learning, we study pricing decisions from experience, obtained with the sampling paradigm, where people learn about a lottery's payoff distribution from sequential sampling. We first formalize different accounts as models within three computational frameworks (reinforcement learning, instance-based learning theory, and cumulative prospect theory), and then fit the models to empirical selling and buying prices. In Study 1 (a reanalysis of published data with hypothetical decisions), models assuming buyer-seller differences in response bias (implementing a strategic-misrepresentation account) performed best; models assuming buyer-seller differences in choice sensitivity or memory (implementing a loss-attention account) generally fared worst. In a new experiment involving incentivized decisions (Study 2), models assuming buyer-seller differences in both outcome sensitivity (as proposed by a loss-aversion account) and response bias performed best. In both Studies 1 and 2, the models implemented in cumulative prospect theory performed best. Model recovery studies validated our cognitive modeling approach, showing that the models can be distinguished rather well. In summary, our analysis supports a loss-aversion account of the endowment effect, but also reveals a substantial contribution of simple response bias.
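As an illustration only (not the authors' exact specification), the sketch below prices a simple two-outcome gain lottery with cumulative prospect theory and adds a multiplicative response-bias parameter that separates stated seller and buyer prices; all parameter values are hypothetical:

```python
# Illustrative CPT pricing of a two-outcome gain lottery with a role-specific
# response bias; alpha, gamma, and the bias value are hypothetical.
def weight(p, gamma=0.7):
    """Tversky-Kahneman probability weighting function."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def cpt_value(outcomes, probs, alpha=0.85, gamma=0.7):
    """CPT value of a gain-only lottery with v(x) = x**alpha and rank-dependent weights."""
    ranked = sorted(zip(outcomes, probs), key=lambda op: -op[0])  # best outcome first
    value, cum = 0.0, 0.0
    for x, p in ranked:
        w = weight(cum + p, gamma) - weight(cum, gamma)           # decision weight
        value += w * x**alpha
        cum += p
    return value

def stated_price(outcomes, probs, role, alpha=0.85, bias=1.15):
    """Certainty equivalent of the CPT value, scaled up for sellers and down for buyers."""
    ce = cpt_value(outcomes, probs, alpha) ** (1 / alpha)         # invert v(x) = x**alpha
    return ce * bias if role == "seller" else ce / bias

lottery = ([32.0, 0.0], [0.5, 0.5])
print("buyer :", stated_price(*lottery, role="buyer"))
print("seller:", stated_price(*lottery, role="seller"))
```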
A chain-retrieval model for voluntary task switching.
Vandierendonck, André; Demanet, Jelle; Liefooghe, Baptist; Verbruggen, Frederick
2012-09-01
To account for the findings obtained in voluntary task switching, this article describes and tests the chain-retrieval model. This model postulates that voluntary task selection involves retrieval of task information from long-term memory, which is then used to guide task selection and task execution. The model assumes that the retrieved information consists of acquired sequences (or chains) of tasks, that selection may be biased towards chains containing more task repetitions and that bottom-up triggered repetitions may overrule the intended task. To test this model, four experiments are reported. In Studies 1 and 2, sequences of task choices and the corresponding transition sequences (task repetitions or switches) were analyzed with the help of dependency statistics. The free parameters of the chain-retrieval model were estimated on the observed task sequences and these estimates were used to predict autocorrelations of tasks and transitions. In Studies 3 and 4, sequences of hand choices and their transitions were analyzed similarly. In all studies, the chain-retrieval model yielded better fits and predictions than statistical models of event choice. In applications to voluntary task switching (Studies 1 and 2), all three parameters of the model were needed to account for the data. When no task switching was required (Studies 3 and 4), the chain-retrieval model could account for the data with one or two parameters clamped to a neutral value. Implications for our understanding of voluntary task selection and broader theoretical implications are discussed. Copyright © 2012 Elsevier Inc. All rights reserved.
Wheeler, Matthew W; Bailer, A John
2007-06-01
Model averaging (MA) has been proposed as a method of accounting for model uncertainty in benchmark dose (BMD) estimation. The technique has been used to average BMD dose estimates derived from dichotomous dose-response experiments, microbial dose-response experiments, as well as observational epidemiological studies. While MA is a promising tool for the risk assessor, a previous study suggested that the simple strategy of averaging individual models' BMD lower limits did not yield interval estimators that met nominal coverage levels in certain situations, and this performance was very sensitive to the underlying model space chosen. We present a different, more computationally intensive, approach in which the BMD is estimated using the average dose-response model and the corresponding benchmark dose lower bound (BMDL) is computed by bootstrapping. This method is illustrated with TiO(2) dose-response rat lung cancer data, and then systematically studied through an extensive Monte Carlo simulation. The results of this study suggest that the MA-BMD, estimated using this technique, performs better, in terms of bias and coverage, than the previous MA methodology. Further, the MA-BMDL achieves nominal coverage in most cases, and is superior to picking the "best fitting model" when estimating the benchmark dose. Although these results show utility of MA for benchmark dose risk estimation, they continue to highlight the importance of choosing an adequate model space as well as proper model fit diagnostics.
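A hedged sketch of the averaging-then-bootstrap idea on simulated dichotomous data: fit a small model set, weight the fits by AIC, find the dose at which the model-averaged extra risk equals the benchmark response, and take a bootstrap percentile as the BMDL. The model set (logit and probit in dose), the BMR of 0.10, and the data are illustrative, not the authors' exact procedure:

```python
# Sketch: fit a small model set (logit and probit in dose) to simulated dichotomous
# data, average the fitted dose-response curves with Akaike weights, solve for the
# dose giving 10% extra risk (the MA-BMD), and bootstrap for the BMDL. Illustrative only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.optimize import brentq

rng = np.random.default_rng(2)
doses = np.repeat([0.0, 10.0, 50.0, 100.0], 50)
p_true = 0.05 + 0.6 * (1 - np.exp(-0.02 * doses))
data = pd.DataFrame({"dose": doses, "y": rng.binomial(1, p_true)})

def avg_bmd(df, bmr=0.10):
    links = (sm.families.links.Logit(), sm.families.links.Probit())
    fits = [smf.glm("y ~ dose", data=df, family=sm.families.Binomial(link=l)).fit()
            for l in links]
    aic = np.array([f.aic for f in fits])
    w = np.exp(-0.5 * (aic - aic.min()))
    w /= w.sum()                                        # Akaike weights

    def p_avg(d):
        grid = pd.DataFrame({"dose": [d]})
        preds = [float(np.asarray(f.predict(grid))[0]) for f in fits]
        return float(np.dot(w, preds))

    p0 = p_avg(0.0)
    extra_risk = lambda d: (p_avg(d) - p0) / (1 - p0) - bmr
    return brentq(extra_risk, 1e-6, df["dose"].max())   # dose where extra risk hits the BMR

bmd = avg_bmd(data)
boot = [avg_bmd(data.sample(frac=1.0, replace=True, random_state=b)) for b in range(200)]
bmdl = np.percentile(boot, 5)                           # lower 5th percentile as the BMDL
print(f"MA-BMD = {bmd:.1f}, bootstrap BMDL = {bmdl:.1f}")
```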
NASA Astrophysics Data System (ADS)
Rojali, Aditia; Budiaji, Abdul Somat; Pribadi, Yudhistira Satya; Fatria, Dita; Hadi, Tri Wahyu
2017-07-01
This paper addresses numerical modeling approaches for flood inundation in urban areas. A decisive strategy for choosing between a 1D, 2D, or even hybrid 1D-2D model is important for optimizing flood inundation analyses. Finding a cost-effective yet robust and accurate model has been our priority and motivation in the absence of available High Performance Computing facilities. The application of 1D, 1D/2D and full 2D modeling approaches to a river flood study in the Jakarta Ciliwung river basin, and a comparison of approaches benchmarked for the inundation study, are presented. This study demonstrates the successful use of 1D/2D and 2D systems to model the Jakarta Ciliwung river basin in terms of inundation results and computational aspects. The findings of the study provide an interesting comparison between modeling approaches (HEC-RAS 1D, 1D-2D, and 2D, and ANUGA) when benchmarked against the Manggarai water level measurement.
Nikoloulopoulos, Aristidis K
2017-10-01
A bivariate copula mixed model has recently been proposed to synthesize diagnostic test accuracy studies, and it has been shown to be superior to the standard generalized linear mixed model in this context. Here, we use trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that there can be an improvement over the trivariate generalized linear mixed model in fit to the data, and it makes the argument for moving to vine copula random effects models, especially because of their richness, including reflection asymmetric tail dependence, and their computational feasibility despite their three dimensionality.
GEM-CEDAR Study of Ionospheric Energy Input and Joule Dissipation
NASA Technical Reports Server (NTRS)
Rastaetter, Lutz; Kuznetsova, Maria M.; Shim, Jasoon
2012-01-01
We are studying ionospheric model performance for six events selected for the GEM-CEDAR modeling challenge. DMSP measurements of electric and magnetic fields are converted into Poynting flux values that estimate the energy input into the ionosphere. Models generate rates of ionospheric Joule dissipation that are compared to the energy influx. Models include the ionosphere models CTIPe and Weimer and the ionospheric electrodynamic outputs of the global magnetosphere models SWMF, LFM, and OpenGGCM. This study evaluates model performance in terms of the overall balance between energy influx and dissipation and tests the assumption that Joule dissipation occurs locally where electromagnetic energy flux enters the ionosphere. We present results in terms of skill scores now commonly used in metrics and validation studies, and we measure the agreement in the temporal and spatial distribution of dissipation (i.e., the location of auroral activity) along passes of the DMSP satellite as a function of the passes' proximity to the magnetic pole and the solar wind activity level.
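One common skill score used in such validation studies is the prediction efficiency; a minimal sketch with placeholder values along a hypothetical DMSP pass:

```python
# Minimal prediction-efficiency skill score comparing modeled Joule dissipation to the
# observation-derived energy influx; the arrays are hypothetical values along one pass.
import numpy as np

def prediction_efficiency(obs, model):
    """PE = 1 - MSE/Var(obs); 1 is perfect, values <= 0 are no better than the mean."""
    obs, model = np.asarray(obs, float), np.asarray(model, float)
    return 1.0 - np.mean((model - obs) ** 2) / np.var(obs)

poynting_flux = np.array([2.1, 3.4, 5.0, 4.2, 1.8])   # mW/m^2, derived from DMSP fields
joule_rate    = np.array([1.9, 3.0, 5.6, 3.8, 2.2])   # mW/m^2, model output
print(prediction_efficiency(poynting_flux, joule_rate))
```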
Truong, Lisa; Ouedraogo, Gladys; Pham, LyLy; Clouzeau, Jacques; Loisel-Joubert, Sophie; Blanchet, Delphine; Noçairi, Hicham; Setzer, Woodrow; Judson, Richard; Grulke, Chris; Mansouri, Kamel; Martin, Matthew
2018-02-01
In an effort to address a major challenge in chemical safety assessment, alternative approaches for characterizing systemic effect levels, a predictive model was developed. Systemic effect levels were curated from ToxRefDB, HESS-DB and COSMOS-DB from numerous study types totaling 4379 in vivo studies for 1247 chemicals. Observed systemic effects in mammalian models are a complex function of chemical dynamics, kinetics, and inter- and intra-individual variability. To address this complex problem, systemic effect levels were modeled at the study-level by leveraging study covariates (e.g., study type, strain, administration route) in addition to multiple descriptor sets, including chemical (ToxPrint, PaDEL, and Physchem), biological (ToxCast), and kinetic descriptors. Using random forest modeling with cross-validation and external validation procedures, study-level covariates alone accounted for approximately 15% of the variance, reducing the root mean squared error (RMSE) from 0.96 log10 to 0.85 log10 mg/kg/day and providing a baseline performance metric (lower expectation of model performance). A consensus model developed using a combination of study-level covariates, chemical, biological, and kinetic descriptors explained a total of 43% of the variance with an RMSE of 0.69 log10 mg/kg/day. A benchmark model (upper expectation of model performance) was also developed with an RMSE of 0.5 log10 mg/kg/day by incorporating study-level covariates and the mean effect level per chemical. To achieve a representative chemical-level prediction, the minimum study-level predicted and observed effect level per chemical were compared, reducing the RMSE from 1.0 to 0.73 log10 mg/kg/day, equivalent to 87% of predictions falling within an order of magnitude of the observed value. Although biological descriptors did not improve model performance, the final model was enriched for biological descriptors that indicated xenobiotic metabolism gene expression, oxidative stress, and cytotoxicity, demonstrating the importance of accounting for kinetics and non-specific bioactivity in predicting systemic effect levels. Herein, we generated an externally predictive model of systemic effect levels for use as a safety assessment tool and have generated forward predictions for over 30,000 chemicals.
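A hedged sketch of the study-level strategy described above: a random forest regressor on encoded study covariates plus chemical descriptors, scored by cross-validated RMSE in log10(mg/kg/day); the feature names and data are invented placeholders for the descriptor sets named in the abstract:

```python
# Sketch of study-level modeling: random forest on encoded study covariates plus
# chemical descriptors, scored by 5-fold cross-validated RMSE. Features and data are
# invented placeholders for the descriptor sets named in the abstract.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(3)
n = 500
X = pd.DataFrame({
    "study_type":  rng.integers(0, 4, n),        # encoded study covariate
    "admin_route": rng.integers(0, 3, n),        # encoded study covariate
    "logp":        rng.normal(2.0, 1.5, n),      # physchem descriptor
    "mol_weight":  rng.normal(300, 80, n),       # physchem descriptor
})
y = 1.5 - 0.3 * X["logp"] + 0.1 * X["study_type"] + rng.normal(0, 0.7, n)  # log10 effect level

rf = RandomForestRegressor(n_estimators=300, random_state=0)
pred = cross_val_predict(rf, X, y, cv=5)
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
print(f"5-fold CV RMSE = {rmse:.2f} log10 mg/kg/day")
```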
Compston, Juliet E.; Chapurlat, Roland D.; Pfeilschifter, Johannes; Cooper, Cyrus; Hosmer, David W.; Adachi, Jonathan D.; Anderson, Frederick A.; Díez-Pérez, Adolfo; Greenspan, Susan L.; Netelenbos, J. Coen; Nieves, Jeri W.; Rossini, Maurizio; Watts, Nelson B.; Hooven, Frederick H.; LaCroix, Andrea Z.; March, Lyn; Roux, Christian; Saag, Kenneth G.; Siris, Ethel S.; Silverman, Stuart; Gehlbach, Stephen H.
2014-01-01
Context: Several fracture prediction models that combine fractures at different sites into a composite outcome are in current use. However, to the extent individual fracture sites have differing risk factor profiles, model discrimination is impaired. Objective: The objective of the study was to improve model discrimination by developing a 5-year composite fracture prediction model for fracture sites that display similar risk profiles. Design: This was a prospective, observational cohort study. Setting: The study was conducted at primary care practices in 10 countries. Patients: Women aged 55 years or older participated in the study. Intervention: Self-administered questionnaires collected data on patient characteristics, fracture risk factors, and previous fractures. Main Outcome Measure: The main outcome is time to first clinical fracture of hip, pelvis, upper leg, clavicle, or spine, each of which exhibits a strong association with advanced age. Results: Of four composite fracture models considered, model discrimination (c index) is highest for an age-related fracture model (c index of 0.75, 47 066 women), and lowest for Fracture Risk Assessment Tool (FRAX) major fracture and a 10-site model (c indices of 0.67 and 0.65). The unadjusted increase in fracture risk for an additional 10 years of age ranges from 80% to 180% for the individual bones in the age-associated model. Five other fracture sites not considered for the age-associated model (upper arm/shoulder, rib, wrist, lower leg, and ankle) have age associations for an additional 10 years of age from a 10% decrease to a 60% increase. Conclusions: After examining results for 10 different bone fracture sites, advanced age appeared the single best possibility for uniting several different sites, resulting in an empirically based composite fracture risk model. PMID:24423345
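Discrimination in such models is typically summarized with Harrell's c-index; a minimal sketch on hypothetical follow-up data (the risk score and event times below are simulated, not from the study):

```python
# Harrell's c-index on hypothetical follow-up data: time to first fracture, a
# censoring indicator, and an age-driven risk score standing in for a model's
# linear predictor. Values are simulated, not from the study.
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(4)
n = 1000
age = rng.uniform(55, 90, n)
risk_score = 0.05 * age + rng.normal(0, 0.5, n)              # higher = higher fracture risk
time_to_fracture = rng.exponential(np.exp(5 - 0.04 * age))   # years; shorter at older ages
observed = rng.binomial(1, 0.3, n)                           # 1 = fracture observed, 0 = censored

# lifelines expects higher scores to mean longer survival, so pass the negated risk
c = concordance_index(time_to_fracture, -risk_score, observed)
print(f"c-index = {c:.2f}")
```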
A global sensitivity analysis approach for morphogenesis models.
Boas, Sonja E M; Navarro Jimenez, Maria I; Merks, Roeland M H; Blom, Joke G
2015-11-21
Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operative mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
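A sketch of a Sobol-style global sensitivity analysis with SALib, using a cheap scalar function as a stand-in for a summary output of a morphogenesis simulation (for example, a sprout count); the parameter names and bounds are hypothetical:

```python
# Sobol-style global sensitivity analysis with SALib on a cheap toy output standing in
# for a summary measure of a morphogenesis simulation; parameter names and bounds are
# hypothetical.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["adhesion", "chemotaxis", "cell_stiffness"],
    "bounds": [[0.0, 1.0], [0.0, 5.0], [0.5, 2.0]],
}

def toy_output(x):
    # placeholder for an expensive simulation; includes an interaction term
    adhesion, chemotaxis, stiffness = x
    return chemotaxis * (1 - adhesion) + 0.2 * adhesion * stiffness

X = saltelli.sample(problem, 1024)        # Saltelli sampling design
Y = np.array([toy_output(x) for x in X])
Si = sobol.analyze(problem, Y)            # first-order, second-order, and total indices

for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name:15s} S1 = {s1:.2f}  ST = {st:.2f}")
```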
Royle, J. Andrew; Converse, Sarah J.
2014-01-01
Capture–recapture studies are often conducted on populations that are stratified by space, time or other factors. In this paper, we develop a Bayesian spatial capture–recapture (SCR) modelling framework for stratified populations – when sampling occurs within multiple distinct spatial and temporal strata. We describe a hierarchical model that integrates distinct models for both the spatial encounter history data from capture–recapture sampling, and also for modelling variation in density among strata. We use an implementation of data augmentation to parameterize the model in terms of a latent categorical stratum or group membership variable, which provides a convenient implementation in popular BUGS software packages. We provide an example application to an experimental study involving small-mammal sampling on multiple trapping grids over multiple years, where the main interest is in modelling a treatment effect on population density among the trapping grids. Many capture–recapture studies involve some aspect of spatial or temporal replication that requires some attention to modelling variation among groups or strata. We propose a hierarchical model that allows explicit modelling of group or strata effects. Because the model is formulated for individual encounter histories and is easily implemented in the BUGS language and other free software, it also provides a general framework for modelling individual effects, such as are present in SCR models.
Integrating Human Factors into Crew Exploration Vehicle (CEV) Design
NASA Technical Reports Server (NTRS)
Whitmore, Mihriban; Holden, Kritina; Baggerman, Susan; Campbell, Paul
2007-01-01
The purpose of this design process is to apply Human Engineering (HE) requirements and guidelines to hardware/software and to provide HE design, analysis and evaluation of crew interfaces. The topics include: 1) Background/Purpose; 2) HE Activities; 3) CASE STUDY: Net Habitable Volume (NHV) Study; 4) CASE STUDY: Human Modeling Approach; 5) CASE STUDY: Human Modeling Results; 6) CASE STUDY: Human Modeling Conclusions; 7) CASE STUDY: Human-in-the-Loop Evaluation Approach; 8) CASE STUDY: Unsuited Evaluation Results; 9) CASE STUDY: Suited Evaluation Results; 10) CASE STUDY: Human-in-the-Loop Evaluation Conclusions; 11) Near-Term Plan; and 12) In Conclusion
Comparative mRNA analysis of behavioral and genetic mouse models of aggression.
Malki, Karim; Tosto, Maria G; Pain, Oliver; Sluyter, Frans; Mineur, Yann S; Crusio, Wim E; de Boer, Sietse; Sandnabba, Kenneth N; Kesserwani, Jad; Robinson, Edward; Schalkwyk, Leonard C; Asherson, Philip
2016-04-01
Mouse models of aggression have traditionally compared strains, most notably BALB/cJ and C57BL/6. However, these strains were not designed to study aggression, despite differences in aggression-related traits and distinct reactivity to stress. This study evaluated expression of genes differentially regulated in a stress (behavioral) mouse model of aggression against those from a recent genetic mouse model of aggression. The study used a discovery-replication design using two independent mRNA studies from mouse brain tissue. The discovery study identified strain (BALB/cJ and C57BL/6J) × stress (chronic mild stress or control) interactions. Probe sets differentially regulated in the discovery set were intersected with those uncovered in the replication study, which evaluated differences between high and low aggressive animals from three strains specifically bred to study aggression. Network analysis was conducted on overlapping genes uncovered across both studies. A significant overlap was found, with the genetic mouse study sharing 1,916 probe sets with the stress model. Fifty-one probe sets were found to be strongly dysregulated across both studies, mapping to 50 known genes. Network analysis revealed two plausible pathways, including one centered on the UBC gene hub, which encodes ubiquitin, a protein well known for protein degradation, and another on P38 MAPK. Findings from this study support the stress model of aggression, which showed remarkable molecular overlap with a genetic model. The study uncovered a set of candidate genes including the Erg2 gene, which has previously been implicated in different psychopathologies. The gene networks uncovered point to a redox pathway as potentially being implicated in aggression-related behaviors. © 2016 Wiley Periodicals, Inc.
Collins, G S; Reitsma, J B; Altman, D G; Moons, K G M
2015-02-01
Prediction models are developed to aid healthcare providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision-making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD) initiative developed a set of recommendations for the reporting of studies developing, validating or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a web-based survey and revised during a 3-day meeting in June 2011 with methodologists, healthcare professionals and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study, regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org). © 2015 Joint copyright. The Authors and Annals of Internal Medicine. Diabetic Medicine published by John Wiley Ltd. on behalf of Diabetes UK.
NASA Astrophysics Data System (ADS)
Pratama, C.; Ito, T.; Sasajima, R.; Tabei, T.; Kimata, F.; Gunawan, E.; Ohta, Y.; Yamashina, T.; Ismail, N.; Muksin, U.; Maulida, P.; Meilano, I.; Nurdin, I.; Sugiyanto, D.; Efendi, J.
2017-12-01
Postseismic deformation following the 2012 Indian Ocean earthquake has been modeled by several studies (Han et al. 2015, Hu et al. 2016, Masuti et al. 2016). Although each study used a different method and dataset, the previous studies adopted significantly different Earth structures. Han et al. (2015) ignored the subducting slab beneath Sumatra, while Masuti et al. (2016) neglected the sphericity of the Earth. Hu et al. (2016) incorporated an elastic slab and a spherical Earth but used uniform rigidity in each layer of the model. As a result, the Han et al. (2015) model estimated a Maxwell viscosity one order of magnitude higher than Hu et al. (2016), and a Kelvin viscosity half an order of magnitude lower than the Masuti et al. (2016) model predicted. In the present study, we conduct a quantitative analysis of the effect of each heterogeneous geometry and parameter on rheology inference. We develop heterogeneous three-dimensional spherical-Earth finite element models. We investigate the effect of the subducting slab, the spherical Earth, and three-dimensional Earth rigidity on the estimated lithosphere-asthenosphere rheology beneath the Indian Ocean. A wide range of viscosity structures, from time-constant to time-dependent rheology, was chosen, following the previous modeling studies. In order to evaluate actual displacement, we compared the model to Global Navigation Satellite System (GNSS) observations. We incorporate the GNSS data from previous studies and introduce new GNSS sites, part of the Indonesian Continuously Operating Reference Stations (InaCORS) network located in Sumatra, that have not been used in earlier analyses. As a preliminary result, we obtained the effect of the spherical Earth and the elastic slab when assuming a Burgers rheology. The model that incorporates the sphericity of the Earth needs a viscosity about one-third of an order of magnitude lower than the model that neglects Earth curvature. The model that includes the elastic slab needs a viscosity half an order of magnitude lower than the model excluding the elastic slab.
Liang, Li-Jung; Huang, David; Brecht, Mary-Lynn; Hser, Yih-ing
2010-01-01
Studies examining differences in mortality among long-term drug users have been limited. In this paper, we introduce a Bayesian framework that jointly models survival data using a Weibull proportional hazard model with frailty, and substance and alcohol data using mixed-effects models, to examine differences in mortality among heroin, cocaine, and methamphetamine users from five long-term follow-up studies. The traditional approach to analyzing combined survival data from numerous studies assumes that the studies are homogeneous, thus the estimates may be biased due to unobserved heterogeneity among studies. Our approach allows us to structurally combine the data from different studies while accounting for correlation among subjects within each study. Markov chain Monte Carlo facilitates the implementation of Bayesian analyses. Despite the complexity of the model, our approach is relatively straightforward to implement using WinBUGS. We demonstrate our joint modeling approach to the combined data and discuss the results from both approaches. PMID:21052518
Building energy modeling for green architecture and intelligent dashboard applications
NASA Astrophysics Data System (ADS)
DeBlois, Justin
Buildings are responsible for 40% of the carbon emissions in the United States. Energy efficiency in this sector is key to reducing overall greenhouse gas emissions. This work studied a passive architectural technique, the roof solar chimney, for reducing the cooling load in homes. Three models of the chimney were created: a zonal building energy model, a computational fluid dynamics model, and a numerical analytic model. The study estimated the error introduced to the building energy model (BEM) through key assumptions, and then used a sensitivity analysis to examine the impact on the model outputs. The conclusion was that the error in the building energy model is small enough to use it reliably for building simulation. Further studies simulated the roof solar chimney in a whole building, integrated into one side of the roof. Comparisons were made between high and low efficiency constructions, and three ventilation strategies. The results showed that in four US climates, the roof solar chimney results in significant cooling load energy savings of up to 90%. After developing this new method for the small-scale representation of a passive architecture technique in BEM, the study expanded its scope to address a fundamental issue in modeling: the representation of the uncertainty arising from occupant behavior, and the improvement of that behavior. This is believed to be one of the weakest links in both accurate modeling and proper, energy-efficient building operation. A calibrated model of the Mascaro Center for Sustainable Innovation's LEED Gold, 3,400 m2 building was created. Then algorithms were developed for integration into the building's dashboard application that show the occupant the energy savings for a variety of behaviors in real time. An approach using neural networks to act on real-time building automation system data was found to be the most accurate and efficient way to predict the current energy savings for each scenario. A stochastic study examined the impact of the representation of unpredictable occupancy patterns on model results. Combined, these studies inform modelers and researchers on frameworks for simulating holistically designed architecture and improving the interaction between models and building occupants, in residential and commercial settings.
Target signature modeling and bistatic scattering measurement studies
NASA Technical Reports Server (NTRS)
Burnside, W. D.; Lee, T. H.; Rojas, R.; Marhefka, R. J.; Bensman, D.
1989-01-01
Four areas of study are summarized: bistatic scattering measurements studies for a compact range; target signature modeling for test and evaluation hardware in the loop situation; aircraft code modification study; and SATCOM antenna studies on aircraft.
Dhanavade, Maruti J; Jalkute, Chidambar B; Barage, Sagar H; Sonawane, Kailas D
2013-12-01
Cysteine protease is known to degrade amyloid beta peptide which is a causative agent of Alzheimer's disease. This cleavage mechanism has not been studied in detail at the atomic level. Hence, a three-dimensional structure of cysteine protease from Xanthomonas campestris was constructed by homology modeling using Geno3D, SWISS-MODEL, and MODELLER 9v7. All the predicted models were analyzed by PROCHECK and PROSA. Three-dimensional model of cysteine protease built by MODELLER 9v7 shows similarity with human cathepsin B crystal structure. This model was then used further for docking and simulation studies. The molecular docking study revealed that Cys17, His87, and Gln88 residues of cysteine protease form an active site pocket similar to human cathepsin B. Then the docked complex was refined by molecular dynamic simulation to confirm its stable behavior over the entire simulation period. The molecular docking and MD simulation studies showed that the sulfhydryl hydrogen atom of Cys17 of cysteine protease interacts with carboxylic oxygen of Lys16 of Aβ peptide indicating the cleavage site. Thus, the cysteine protease model from X. campestris having similarity with human cathepsin B crystal structure may be used as an alternate approach to cleave Aβ peptide a causative agent of Alzheimer's disease. © 2013 Elsevier Ltd. All rights reserved.
Nasseh, Daniel; Engel, Jutta; Mansmann, Ulrich; Tretter, Werner; Stausberg, Jürgen
2014-01-01
Confidentiality of patient data in the field of medical informatics is an important task. Leaked sensitive information within these data can harm a patient and be abused against them. Therefore, when working with medical data, appropriate and secure models which serve as guidelines for different applications are needed. Consequently, this work presents a model for performing privacy preserving record linkage between study and registry data. The model takes into account seven requirements related to data privacy. Furthermore, the model is exemplified with a study on family-based colorectal cancer in Germany. The model is strict and excludes possible violations of data privacy protection to a reasonable degree. It should be applicable to similar use cases that need a mapping between the medical data of a study and a registry database.
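As one possible building block only (the abstract does not specify the linkage procedure), a keyed-hash pseudonymization sketch shows how a study export and a registry could be linked without exchanging clear-text identifiers; the key, field choices, and records are hypothetical:

```python
# Keyed-hash pseudonymization: both sides derive the same pseudonym from normalized
# identifiers using a shared secret, so linkage needs no clear-text identifiers.
# The key, field choices, and records are hypothetical.
import hashlib
import hmac

SECRET_KEY = b"shared-project-secret"     # hypothetical key held by a trusted party

def pseudonym(last_name: str, birth_date: str) -> str:
    """HMAC-SHA256 over normalized identifying attributes yields a linkage pseudonym."""
    msg = f"{last_name.strip().lower()}|{birth_date}".encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()

study_record    = {"last_name": "Muster", "birth_date": "1956-03-02", "stage": "II"}
registry_record = {"last_name": "muster ", "birth_date": "1956-03-02", "site": "colon"}

match = (pseudonym(study_record["last_name"], study_record["birth_date"])
         == pseudonym(registry_record["last_name"], registry_record["birth_date"]))
if match:
    print("records link without exchanging clear-text identifiers")
```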
A comparative study on GM (1,1) and FRMGM (1,1) model in forecasting FBM KLCI
NASA Astrophysics Data System (ADS)
Ying, Sah Pei; Zakaria, Syerrina; Mutalib, Sharifah Sakinah Syed Abd
2017-11-01
The FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBM KLCI) is a group of indexes combined in a standardized way and is used to measure the overall Malaysian market across time. Although a composite index can give investors an idea of the stock market, it is hard to predict accurately because it is volatile, so it is necessary to identify the best model to forecast the FBM KLCI. The objective of this study is to determine the more accurate forecasting model for the FBM KLCI between the GM (1,1) model and the Fourier Residual Modification GM (1,1) (FRMGM (1,1)) model. In this study, the actual daily closing data of the FBM KLCI were collected from January 1, 2016 to March 15, 2016. The GM (1,1) and FRMGM (1,1) models were used to build the grey model and to test the forecasting power of both models. The Mean Absolute Percentage Error (MAPE) was used as the measure to determine the best model. The values forecasted by the FRMGM (1,1) model do not differ much from the actual values compared to the GM (1,1) model, for both in-sample and out-of-sample data. The MAPE of the FRMGM (1,1) model is also lower than that of the GM (1,1) model for in-sample and out-of-sample data. These results show that the FRMGM (1,1) model is better than the GM (1,1) model for forecasting the FBM KLCI.
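A sketch of the plain GM (1,1) recursion and the MAPE criterion used in the study; FRMGM (1,1) would additionally correct the fitted values with a Fourier series fitted to the residuals. The closing values below are made-up placeholders, not actual FBM KLCI data:

```python
# Plain GM(1,1) fit plus the MAPE criterion; FRMGM(1,1) would additionally correct
# the fitted values with a Fourier series fitted to the residuals. Closing values
# below are made-up placeholders, not actual FBM KLCI data.
import numpy as np

def gm11(x0, horizon=0):
    """Fit GM(1,1) to series x0 and return fitted values plus `horizon` extra forecasts."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                               # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                    # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0] # develop coefficient a, grey input b
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x0[0]], np.diff(x1_hat)])  # back-transform to the original series

def mape(actual, predicted):
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

closes = [1672.5, 1668.1, 1660.0, 1657.3, 1663.4, 1671.9, 1674.8]   # hypothetical index
fitted = gm11(closes)
print("in-sample MAPE (%):", round(mape(closes, fitted), 2))
```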
Recent development of risk-prediction models for incident hypertension: An updated systematic review
Xiao, Lei; Liu, Ya; Wang, Zuoguang; Li, Chuang; Jin, Yongxin; Zhao, Qiong
2017-01-01
Background Hypertension is a leading global health threat and a major cardiovascular disease. Since clinical interventions are effective in delaying the disease progression from prehypertension to hypertension, diagnostic prediction models to identify patient populations at high risk for hypertension are imperative. Methods Both PubMed and Embase databases were searched for eligible reports of either prediction models or risk scores of hypertension. The study data were collected, including risk factors, statistic methods, characteristics of study design and participants, performance measurement, etc. Results From the searched literature, 26 studies reporting 48 prediction models were selected. Among them, 20 reports studied the established models using traditional risk factors, such as body mass index (BMI), age, smoking, blood pressure (BP) level, parental history of hypertension, and biochemical factors, whereas 6 reports used genetic risk score (GRS) as the prediction factor. AUC ranged from 0.64 to 0.97, and C-statistic ranged from 60% to 90%. Conclusions The traditional models are still the predominant risk prediction models for hypertension, but recently, more models have begun to incorporate genetic factors as part of their model predictors. However, these genetic predictors need to be well selected. The current reported models have acceptable to good discrimination and calibration ability, but whether the models can be applied in clinical practice still needs more validation and adjustment. PMID:29084293
Sensitivity Analysis to Turbulent Combustion Models for Combustor-Turbine Interactions
NASA Astrophysics Data System (ADS)
Miki, Kenji; Moder, Jeff; Liou, Meng-Sing
2017-11-01
The recently updated Open National Combustion Code (Open NCC) equipped with a large-eddy simulation (LES) capability is applied to model the flow field inside the Energy Efficient Engine (EEE), in conjunction with a sensitivity analysis to turbulent combustion models. In this study, we consider three different turbulence-combustion interaction models, the Eddy-Breakup model (EBU), the Linear-Eddy Model (LEM) and the Probability Density Function (PDF) model, as well as the laminar chemistry model. A comprehensive comparison of the flow field and the flame structure will be provided. One of our main interests is to understand how the different models predict thermal variation on the surface of the first stage vane. Considering that these models are often used in the combustor/turbine communities, this study should provide some guidelines on numerical modeling of combustor-turbine interactions.
Agent-based modeling: case study in cleavage furrow models
Mogilner, Alex; Manhart, Angelika
2016-01-01
The number of studies in cell biology in which quantitative models accompany experiments has been growing steadily. Roughly, mathematical and computational techniques of these models can be classified as “differential equation based” (DE) or “agent based” (AB). Recently AB models have started to outnumber DE models, but understanding of AB philosophy and methodology is much less widespread than familiarity with DE techniques. Here we use the history of modeling a fundamental biological problem—positioning of the cleavage furrow in dividing cells—to explain how and why DE and AB models are used. We discuss differences, advantages, and shortcomings of these two approaches. PMID:27811328
Poss, J E
2001-06-01
This article discusses the development of a new model representing the synthesis of two models that are often used to study health behaviors: the Health Belief Model and the Theory of Reasoned Action. The new model was developed as the theoretic framework for an investigation of the factors affecting participation by Mexican migrant workers in tuberculosis screening. Development of the synthesized model evolved from the concern that models used to investigate health-seeking behaviors of mainstream Anglo groups in the United States might not be appropriate for studying migrant workers or persons from other cultural backgrounds.
NASA Astrophysics Data System (ADS)
Ercan, Mehmet Bulent
Watershed-scale hydrologic models are used for a variety of applications from flood prediction, to drought analysis, to water quality assessments. A particular challenge in applying these models is calibration of the model parameters, many of which are difficult to measure at the watershed-scale. A primary goal of this dissertation is to contribute new computational methods and tools for calibration of watershed-scale hydrologic models and the Soil and Water Assessment Tool (SWAT) model, in particular. SWAT is a physically-based, watershed-scale hydrologic model developed to predict the impact of land management practices on water quality and quantity. The dissertation follows a manuscript format meaning it is comprised of three separate but interrelated research studies. The first two research studies focus on SWAT model calibration, and the third research study presents an application of the new calibration methods and tools to study climate change impacts on water resources in the Upper Neuse Watershed of North Carolina using SWAT. The objective of the first two studies is to overcome computational challenges associated with calibration of SWAT models. The first study evaluates a parallel SWAT calibration tool built using the Windows Azure cloud environment and a parallel version of the Dynamically Dimensioned Search (DDS) calibration method modified to run in Azure. The calibration tool was tested for six model scenarios constructed using three watersheds of increasing size (the Eno, Upper Neuse, and Neuse) for both a 2 year and 10 year simulation duration. Leveraging the cloud as an on demand computing resource allowed for a significantly reduced calibration time such that calibration of the Neuse watershed went from taking 207 hours on a personal computer to only 3.4 hours using 256 cores in the Azure cloud. The second study aims at increasing SWAT model calibration efficiency by creating an open source, multi-objective calibration tool using the Non-Dominated Sorting Genetic Algorithm II (NSGA-II). This tool was demonstrated through an application for the Upper Neuse Watershed in North Carolina, USA. The objective functions used for the calibration were Nash-Sutcliffe (E) and Percent Bias (PB), and the objective sites were the Flat, Little, and Eno watershed outlets. The results show that the use of multi-objective calibration algorithms for SWAT calibration improved model performance especially in terms of minimizing PB compared to the single objective model calibration. The third study builds upon the first two studies by leveraging the new calibration methods and tools to study future climate impacts on the Upper Neuse watershed. Statistically downscaled outputs from eight Global Circulation Models (GCMs) were used for both low and high emission scenarios to drive a well calibrated SWAT model of the Upper Neuse watershed. The objective of the study was to understand the potential hydrologic response of the watershed, which serves as a public water supply for the growing Research Triangle Park region of North Carolina, under projected climate change scenarios. The future climate change scenarios, in general, indicate an increase in precipitation and temperature for the watershed in coming decades. The SWAT simulations using the future climate scenarios, in general, suggest an increase in soil water and water yield, and a decrease in evapotranspiration within the Upper Neuse watershed. 
In summary, this dissertation advances the field of watershed-scale hydrologic modeling by (i) providing some of the first work to apply cloud computing for the computationally-demanding task of model calibration; (ii) providing a new, open source library that can be used by SWAT modelers to perform multi-objective calibration of their models; and (iii) advancing understanding of climate change impacts on water resources for an important watershed in the Research Triangle Park region of North Carolina. The third study leveraged the methodological advances presented in the first two studies. Therefore, the dissertation contains three independent but interrelated studies that collectively advance the field of watershed-scale hydrologic modeling and analysis.
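A minimal sketch of the two calibration objectives named in the second study, Nash-Sutcliffe efficiency (E) and percent bias (PB), evaluated on illustrative simulated-versus-observed streamflow:

```python
# The two objectives used in the multi-objective SWAT calibration: Nash-Sutcliffe
# efficiency (E) and percent bias (PB), evaluated on simulated versus observed
# streamflow at one objective site. The flow values are illustrative only.
import numpy as np

def nash_sutcliffe(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def percent_bias(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(sim - obs) / np.sum(obs)

observed  = np.array([12.0, 30.5, 22.1, 8.4, 5.2, 40.7])   # m^3/s at a gauge
simulated = np.array([10.8, 28.0, 25.3, 9.1, 6.0, 36.2])   # model output for one candidate

# A multi-objective search such as NSGA-II would maximize E while driving |PB| toward 0.
print(nash_sutcliffe(observed, simulated), percent_bias(observed, simulated))
```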
Modelling dimercaptosuccinic acid (DMSA) plasma kinetics in humans.
van Eijkeren, Jan C H; Olie, J Daniël N; Bradberry, Sally M; Vale, J Allister; de Vries, Irma; Meulenbelt, Jan; Hunault, Claudine C
2016-11-01
No kinetic models presently exist which simulate the effect of chelation therapy on lead blood concentrations in lead poisoning. Our aim was to develop a kinetic model that describes the kinetics of dimercaptosuccinic acid (DMSA; succimer), a commonly used chelating agent, that could be used in developing a lead chelating model. This was a kinetic modelling study. We used a two-compartment model, with a non-systemic gastrointestinal compartment (gut lumen) and the whole body as one systemic compartment. The only data available from the literature were used to calibrate the unknown model parameters. The calibrated model was then validated by comparing its predictions with measured data from three different experimental human studies. The model predicted total DMSA plasma and urine concentrations measured in three healthy volunteers after ingestion of DMSA 10 mg/kg. The model was then validated by using data from three other published studies; it predicted concentrations within a factor of two, representing inter-human variability. A simple kinetic model simulating the kinetics of DMSA in humans has been developed and validated. The interest of this model lies in the future potential to use it to predict blood lead concentrations in lead-poisoned patients treated with DMSA.
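A hedged sketch of a two-compartment oral kinetic model of the kind described, with first-order absorption from a gut-lumen compartment into a single systemic compartment; the rate constants, volume, and dose below are illustrative, not the calibrated DMSA values:

```python
# Two-compartment oral kinetics: first-order absorption (ka) from a gut-lumen
# compartment into one systemic compartment with first-order elimination (ke).
# Parameter values and dose are illustrative, not the calibrated DMSA values.
import numpy as np
from scipy.integrate import solve_ivp

ka, ke = 1.2, 0.35          # 1/h, hypothetical absorption and elimination rate constants
V = 15.0                    # L, hypothetical volume of distribution
dose_mg = 10 * 70           # 10 mg/kg for a 70 kg volunteer

def rhs(t, y):
    gut, body = y
    return [-ka * gut, ka * gut - ke * body]

sol = solve_ivp(rhs, (0, 24), [dose_mg, 0.0], t_eval=np.linspace(0, 24, 97))
plasma_conc = sol.y[1] / V                 # mg/L in the systemic compartment
print(f"Cmax = {plasma_conc.max():.2f} mg/L at t = {sol.t[plasma_conc.argmax()]:.1f} h")
```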
Host Model Uncertainty in Aerosol Radiative Effects: the AeroCom Prescribed Experiment and Beyond
NASA Astrophysics Data System (ADS)
Stier, Philip; Schutgens, Nick; Bian, Huisheng; Boucher, Olivier; Chin, Mian; Ghan, Steven; Huneeus, Nicolas; Kinne, Stefan; Lin, Guangxing; Myhre, Gunnar; Penner, Joyce; Randles, Cynthia; Samset, Bjorn; Schulz, Michael; Yu, Hongbin; Zhou, Cheng; Bellouin, Nicolas; Ma, Xiaoyan; Yu, Fangqun; Takemura, Toshihiko
2013-04-01
Anthropogenic and natural aerosol radiative effects are recognized to affect global and regional climate. Multi-model "diversity" in estimates of the aerosol radiative effect is often perceived as a measure of the uncertainty in modelling aerosol itself. However, current aerosol models vary considerably in model components relevant for the calculation of aerosol radiative forcings and feedbacks and the associated "host-model uncertainties" are generally convoluted with the actual uncertainty in aerosol modelling. In the AeroCom Prescribed intercomparison study we systematically isolate and quantify host model uncertainties on aerosol forcing experiments through prescription of identical aerosol radiative properties in eleven participating models. Host model errors in aerosol radiative forcing are largest in regions of uncertain host model components, such as stratocumulus cloud decks or areas with poorly constrained surface albedos, such as sea ice. Our results demonstrate that host model uncertainties are an important component of aerosol forcing uncertainty that require further attention. However, uncertainties in aerosol radiative effects also include short-term and long-term feedback processes that will be systematically explored in future intercomparison studies. Here we will present an overview of the proposals for discussion and results from early scoping studies.
Statistical Methodology for the Analysis of Repeated Duration Data in Behavioral Studies.
Letué, Frédérique; Martinez, Marie-José; Samson, Adeline; Vilain, Anne; Vilain, Coriandre
2018-03-15
Repeated duration data are frequently used in behavioral studies. Classical linear or log-linear mixed models are often inadequate to analyze such data, because they usually consist of nonnegative and skew-distributed variables. Therefore, we recommend use of a statistical methodology specific to duration data. We propose a methodology based on Cox mixed models and written in the R language. This semiparametric model is indeed flexible enough to fit duration data. To compare log-linear and Cox mixed models in terms of goodness-of-fit on real data sets, we also provide a procedure based on simulations and quantile-quantile plots. We present two examples from a data set of speech and gesture interactions, which illustrate the limitations of linear and log-linear mixed models, as compared to Cox models. The linear models are not validated on our data, whereas Cox models are. Moreover, in the second example, the Cox model exhibits a significant effect that the linear model does not. We provide methods to select the best-fitting models for repeated duration data and to compare statistical methodologies. In this study, we show that Cox models are best suited to the analysis of our data set.
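A minimal sketch (hypothetical data) of fitting a Cox model to repeated durations in Python with lifelines; clustering by subject approximates the within-subject dependence that the paper's Cox mixed (frailty) models handle in R:

```python
# Cox regression on repeated durations with lifelines; cluster_col gives robust
# (sandwich) standard errors by subject, a rough stand-in for the frailty term of a
# Cox mixed model. Data, condition coding, and effect sizes are hypothetical.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)
n_subjects, n_trials = 20, 10
subject = np.repeat(np.arange(n_subjects), n_trials)
condition = rng.integers(0, 2, n_subjects * n_trials)               # e.g. two speech conditions
frailty = np.repeat(rng.normal(0, 0.3, n_subjects), n_trials)       # subject-level heterogeneity
duration = rng.exponential(np.exp(0.5 - 0.4 * condition + frailty)) # seconds

df = pd.DataFrame({"duration": duration, "event": 1,                # all durations observed
                   "condition": condition, "subject": subject})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event", cluster_col="subject")
print(cph.summary[["coef", "se(coef)", "p"]])
```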
Xing, Dongyuan; Huang, Yangxin; Chen, Henian; Zhu, Yiliang; Dagne, Getachew A; Baldwin, Julie
2017-08-01
Semicontinuous data featured with an excessive proportion of zeros and right-skewed continuous positive values arise frequently in practice. One example would be substance abuse/dependence symptoms data, for which a substantial proportion of subjects investigated may report zero. Two-part mixed-effects models have been developed to analyze repeated measures of semicontinuous data from longitudinal studies. In this paper, we propose a flexible two-part mixed-effects model with skew distributions for correlated semicontinuous alcohol data under the framework of a Bayesian approach. The proposed model specification consists of two mixed-effects models linked by correlated random effects: (i) a model on the occurrence of positive values using a generalized logistic mixed-effects model (Part I); and (ii) a model on the intensity of positive values using a linear mixed-effects model where the model errors follow skew distributions, including skew-t and skew-normal distributions (Part II). The proposed method is illustrated with alcohol abuse/dependence symptoms data from a longitudinal observational study, and the analytic results are reported by comparing potential models under different random-effects structures. Simulation studies are conducted to assess the performance of the proposed models and method.
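A simplified frequentist sketch of the two-part idea: a logistic model for whether the score is positive (Part I) and a log-scale linear model for the positive values (Part II). The Bayesian skew-t errors and correlated random effects of the proposed model are not reproduced here, and the data are simulated:

```python
# Two-part sketch: logistic regression for whether the score is positive (Part I),
# then a log-scale linear model on the positive values (Part II). Data simulated;
# no skew-t errors or correlated random effects here.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 400
age = rng.uniform(15, 25, n)
p_pos = 1 / (1 + np.exp(-(-3 + 0.15 * age)))                 # chance of a nonzero score
positive = rng.binomial(1, p_pos)
score = positive * np.exp(0.2 + 0.05 * age + rng.normal(0, 0.8, n))

df = pd.DataFrame({"age": age, "score": score, "positive": (score > 0).astype(int)})

part1 = smf.logit("positive ~ age", data=df).fit(disp=0)             # occurrence model
part2 = smf.ols("np.log(score) ~ age", data=df[df.score > 0]).fit()  # intensity model
print(part1.params, part2.params, sep="\n")
```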