1988-10-01
TURBISTAN). As regards turbine blade materials, the question remains open. The characterisation of materials for... mechanical characterisation. Given the cost of characterising and qualifying materials for turbomachine discs, it is important to try... to concentrate one's own resources: one possible solution is to draw up standard characterisation plans. Federating, within a group of
Thermal characterisation of cooling modules for concentrated photovoltaics
NASA Astrophysics Data System (ADS)
Collin, Louis-Michel
To make solar cell technology profitable, a reduction in operating and manufacturing costs is necessary. The photovoltaic materials used have an appreciable impact on the final price per unit of energy produced. One technology under development consists of concentrating light onto the solar cells in order to reduce the quantity of such materials. However, concentrating the light raises the cell temperature and thus lowers its efficiency. The cell must therefore be cooled effectively. The thermal load to be removed from the cell passes through the receiver, the component that physically supports the cell. The receiver transmits the heat flux from the cell to a cooling system; together, the receiver and the cooling system form the cooling module. The receiver surface is usually larger than that of the cell, so heat spreads laterally in the receiver as it passes through it. Such heat spreading provides a larger effective area, reducing the apparent thermal resistance of the thermal interfaces and of the downstream cooling system within the cooling module. At present, no facility or method appears to exist for characterising the thermal performance of receivers. This project presents a new characterisation technique for quantifying the thermal spreading of the receiver within a cooling module. Performance indices are derived from thermal resistances measured experimentally on the modules. A characterisation platform was built to measure these performance criteria experimentally. The platform injects a controlled heat flux over a localised zone of the upper surface of the receiver; this heat injection replaces the heat flux normally supplied by the cell. A cooling system is installed on the opposite surface of the receiver to remove the injected heat. The results also highlight the importance of the thermal interfaces and the benefit of spreading the heat in the metallic layers before conducting it through the dielectric layers of the receiver. Receivers of several compositions were characterised, demonstrating that the tools developed can quantify heat-spreading capability. The repeatability of the platform was evaluated from the spread of repeated measurements on selected samples; the platform shows a precision and reproducibility of +/- 0.14 °C/W. This work provides design tools for receivers by proposing a measurement that allows the thermal impact of receivers integrated into a cooling module to be compared and evaluated. Keywords: solar cell, photovoltaics, heat transfer, concentration, thermal resistances, characterisation platform, cooling
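As a rough illustration of the figure of merit discussed above (a sketch only; the thesis defines its own performance indices, and the symbols here are illustrative), the apparent thermal resistance measured on such a platform can be written from the injected power and the measured temperature drop:

R_{th} = \frac{T_{inj} - T_{cool}}{Q_{inj}} \quad [^\circ\mathrm{C/W}]

where Q_{inj} is the controlled heat flux injected over the localised zone of the receiver, T_{inj} the temperature at the injection zone and T_{cool} the coolant-side reference temperature. The +/- 0.14 °C/W repeatability quoted above applies to a quantity of this kind.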
Land Operations in the Year 2020 (LO2020) (Operations terrestres a l’horizon 2020 (LO2020)).
1999-03-01
CAPABILITIES, Technologies... APPENDIX 4 to ANNEX V, SHORT-LISTED TECHNOLOGIES CHARACTERISED REGARDING CC 1. top... CHARACTERISATION MATRIX, techno. Legend: no relevance / weak relevance / good relevance / strong relevance. KEY TECHNOLOGIES CHARACTERISED REGARDING COST (34
1979-10-01
AD-A095 392, Defence Research Establishment Ottawa (Ontario), F/G 8/13, Caracterisation physique des sols, camp militaire de Petawawa (Physical characterisation of soils, Petawawa military camp), Oct 79... Defense nationale, Ottawa, Ontario... CARACTERISATION PHYSIQUE DES SOLS, BASE DES FORCES CANADIENNES PETAWAWA (Physical characterisation of soils, Canadian Forces Base Petawawa)
1994-02-01
was intended to serve as a forum for an exchange of information on this important subject. In this case, characterisation refers to the analysis of the behaviour of the... development and scale-up phases, but also to component characterisation and demonstration activities. In the case of materials
Environmental Life Cycle Techniques for New Weapons Acquisition Systems
2004-09-01
Amount), total grenade / war. Characterisation factors (MJ/nn), unit (impact indicator), total grenade / war; total of all compartments: MJ, 59200 / 82700... Characterisation factors (kg CFC-11 eq/nn), unit (impact indicator), total grenade / war; total of all compartments: kg CFC-11, 0.000429 / 0.00046. Remaining... Substance, compartment, unit (amount), total grenade / war. Characterisation factors (kg C2H2/nn), unit (impact indicator), total grenade / war, total of all
Airframe/Propulsion Interference
1975-03-01
slender, at incidence, is presented in figure 14. The lower surface is characterised by a divergence of the streamlines and by a reduction in the... numbers... of the dissipative zone in R and A/S as a function of the reattachment Mach number Mq and of a shape parameter Hiq characterising the... taken from an experimental study presented in ref. [13]. It is characterised by a fairly large slenderness and a truncation ratio...
2003-09-26
on the environmental characterisation of their major training areas in order to improve knowledge of the impacts of all types... and geographic. In 2001, the first phase of this study consisted of a partial hydrogeological characterisation of the northern portion of the... training area. This first phase involved drilling 42 wells in order to characterise the dynamics and quality of the groundwater. In
1991-07-01
for example, characterised by the existence of a large overspeed peak, it is the Reynolds number which governs the "transitional character" of the... characterisation of a three-dimensional sheared flow around a swept wing. A previous study had been carried out whose purpose was to qualify the... depending on the configurations studied. CHARACTERISATION OF VORTEX BREAKDOWN: it is accepted, according to [22], that vortex breakdown is characterised
Evaluation of bone quality using ultrasonic guided waves
NASA Astrophysics Data System (ADS)
Abid, Alexandre
The characterisation of the mechanical properties of cortical bone is a field of interest for orthopaedic research. Indeed, such characterisation can provide essential information for determining fracture risk or the presence of microcracks, or for screening for osteoporosis. The two main current techniques for characterising these properties are Dual-energy X-ray Absorptiometry (DXA) and Quantitative Computed Tomography (QCT). These techniques are not optimal and have certain limits: the effectiveness of DXA is questioned in the orthopaedic community, while QCT requires radiation levels that are problematic for a screening tool. Ultrasonic guided waves have been used for many years to detect cracks, geometry and mechanical properties of cylinders, pipes and other structures in industrial settings. Moreover, they are more affordable than DXA and involve no radiation, which makes them promising for probing the mechanical properties of bone. For less than ten years, many research laboratories have been trying to transfer these techniques to the medical field by propagating ultrasonic guided waves in bone. The work presented here aims to demonstrate the potential of ultrasonic guided waves for tracking the evolution of the mechanical properties of cortical bone. It begins with a general introduction to ultrasonic guided waves and a literature review of the various techniques related to their use on bone. The article written during my master's is then presented. The objective of that article is to excite and detect certain guided-wave modes that are sensitive to the deterioration of the mechanical properties of cortical bone. This work is carried out by finite-element modelling of the propagation of these waves in two cylindrical bone models. Both models consist of a peripheral layer of cortical bone filled with either trabecular bone or bone marrow; they provide two geometries, each suited to either circumferential or longitudinal propagation of the guided waves. The results, in which three different modes could be identified, are compared with theoretical predictions and with experimental data obtained on bone phantoms. The sensitivity of each mode to the different mechanical-property parameters is then studied, allowing conclusions on the potential of each mode for predicting fracture risk or the presence of microcracks.
Stochastic Pseudo-Boolean Optimization
2011-07-31
Right-Hand Side," 2009 INFORMS Annual Meeting, San Diego, CA, October 11-14, 2009. References: [1] A. Ghouila-Houri. Caracterisation des matrices... Optimization, 10:7-21, 2005. [30] P. Camion. Caracterisation des matrices unimodulaires. Cahiers Centre Etudes Rech., 5(4), 1963. [31] P. Camion
A New Approach to Electrical Characterization of Exploding Foil Initiators
1998-12-01
processed to illustrate the methodology. RESUME: In a previous study of the electrical characterisation of exploding foil initiators (DEP), we... applicable to the electrical characterisation of EFIs and describes the appropriate experimental methodology. This methodology is illustrated by the
Modern Data Analysis techniques in Noise and Vibration Problems
1981-11-01
Hilbert transforms of one another. This property recurs in the study of causality: this defines a practical criterion characterising a signal, thus, by... between the direct field and the reflected field are characterised locally by the existence of frequencies at which the interference is total
NASA Astrophysics Data System (ADS)
Salissou, Yacoubou
The overall objective of this thesis is to improve the characterisation of the macroscopic properties of rigid- or limp-frame porous materials through inverse and indirect approaches based on acoustic measurements made in an impedance tube. The accuracy of the inverse and indirect approaches used today is mainly limited by the quality of the acoustic measurements obtained in the impedance tube. Consequently, this thesis addresses four problems that will help achieve the overall objective stated above. The first problem concerns an accurate characterisation of the open porosity of porous materials. This property is a gateway linking the measurement of the dynamic acoustic properties of a porous material to the effective properties of its fluid phase as described by semi-phenomenological models. The second problem deals with the assumption that porous materials are symmetric through their thickness; an index and a criterion are proposed to quantify the asymmetry of a material. This assumption is often a source of inaccuracy in inverse and indirect impedance-tube characterisation methods, so the proposed asymmetry criterion makes it possible to verify the applicability and accuracy of these methods for a given material. The third problem aims at a better understanding of the sound transmission problem in an impedance tube by presenting, for the first time, an exact wave-decomposition treatment of the problem. This development clearly establishes the limits of the many existing methods based on 2-, 3- or 4-microphone transmission tubes. A better understanding of this transmission problem is important because it is through this type of measurement that methods successively extract the transfer matrix of a porous material and its intrinsic dynamic properties, such as its characteristic impedance and complex wavenumber. Finally, the fourth problem concerns the development of a new exact 3-microphone transmission method applicable to symmetric and non-symmetric materials or systems. In the symmetric case, this approach is shown to yield a clear improvement in the characterisation of the intrinsic dynamic properties of a material. Keywords: porous materials, impedance tube, sound transmission, sound absorption, acoustic impedance, symmetry, porosity, transfer matrix.
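For context on the transfer-matrix extraction mentioned above, a standard textbook form (not necessarily the exact formulation used in the thesis) of the transfer matrix of a homogeneous, symmetric fluid-equivalent porous layer of thickness d relates the acoustic pressure and normal velocity on its two faces:

\begin{pmatrix} p_1 \\ v_1 \end{pmatrix} = \begin{pmatrix} \cos(kd) & j Z_c \sin(kd) \\ \dfrac{j}{Z_c}\sin(kd) & \cos(kd) \end{pmatrix} \begin{pmatrix} p_2 \\ v_2 \end{pmatrix}

Inverting measured matrices of this form is what yields the characteristic impedance Z_c and complex wavenumber k referred to in the abstract.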
1993-11-01
are characterised by continuous schlieren visualisation of the part of the mixing layer located beneath the jet (figure 3), as well as by tomoscopy of the... characterise the waves). These waves seem to come from the region of the ejector, just... a Mach disc. In figure 4, one observes the... trace of the
NASA Astrophysics Data System (ADS)
Amrani, Salah
Aluminium is produced in an electrolysis cell, and this operation uses carbon anodes. Evaluating the quality of these anodes before they are used is essential, since the presence of cracks in the anodes disturbs the electrolysis process and reduces its performance. This project was undertaken to determine the impact of the various anode manufacturing process parameters on the cracking of dense anodes. These parameters include those of green anode forming, the properties of the raw materials and the baking conditions. A literature review was carried out on all aspects of carbon anode cracking to compile previous work, and a detailed methodology was developed to facilitate the progress of the work and reach the stated objectives. Most of this document is devoted to discussing the results obtained in the UQAC laboratory and at the industrial level. Concerning the studies carried out at UQAC, part of the experimental work is devoted to investigating the different cracking mechanisms in the dense anodes used in the aluminium industry. The approach was first based on a qualitative characterisation of the cracking mechanism at the surface and in depth. A quantitative characterisation was then carried out to determine the distribution of crack width along the entire crack length, as well as the percentage of crack area relative to the total area of the sample. This study was carried out using image analysis to characterise the cracking of a baked anode sample; surface and in-depth analysis of this sample clearly showed crack formation over a large part of the analysed surface. The other part of the work is based on the characterisation of defects in industrially manufactured green anode samples. This technique consisted of determining the profile of various physical properties: the method based on measuring the distribution of electrical resistivity over the whole sample was used to locate cracks and macro-pores, while optical microscopy and image analysis made it possible to characterise the cracked zones and determine the structure of the analysed samples at the microscopic scale. Other tests were conducted on cylindrical anode cores 50 mm in diameter and 130 mm long. These were baked in a furnace at UQAC at different heating rates in order to determine the influence of the baking parameters on crack formation in such cores; the baked samples were characterised using scanning electron microscopy and ultrasound. The last part of the work carried out at UQAC is a study of anodes made in the laboratory under different operating conditions, whose quality evolution was followed using several techniques. The cooling temperature evolution of the green laboratory anodes was measured, and a mathematical model was developed and validated against the experimental data, with the objective of estimating the cooling rate as well as the thermal stress.
All the anodes produced were characterised before baking by determining certain physical properties (electrical resistivity, apparent density, optical density and percentage of defects). Tomography and the distribution of electrical resistivity, which are non-destructive techniques, were used to evaluate the internal defects of the anodes. During the baking of the laboratory anodes, the evolution of the electrical resistivity was monitored and the devolatilisation stage was identified. Some anodes were baked at different heating rates (low, medium, high and a combined one) with the objective of finding the best baking conditions to minimise cracking. Other anodes were baked to different baking levels in order to identify at which stage of the baking operation cracking starts to develop. After baking, the anodes were recovered and characterised again using the same techniques as before. The main objective of this part was to reveal the impact on cracking of the various parameters distributed along the whole anode production chain. The percentage of butts, the amount of pitch and the particle-size distribution are important factors to consider when studying the effect of the raw materials on cracking. Concerning the effect of the manufacturing process parameters, the vibration time, the compaction pressure and the cooling process formed the basis of this study. Finally, the influence of the baking phase on the appearance of cracking was taken into account through the heating rate and the baking level. The work carried out at the industrial level was done during a measurement campaign aimed at evaluating the quality of carbon anodes in general and investigating the cracking problem in particular, and then at revealing the effects of various parameters on cracking. Twenty-four baked anodes were used; they were manufactured with different raw materials (pitch, coke, butts) and under various conditions (pressure, vibration time). A crack-density parameter was calculated based on visual inspection of the cracking of the cores. This makes it possible to classify the cracks into several categories according to criteria such as crack type (horizontal, vertical and inclined), longitudinal location (bottom, middle and top of the anode) and transverse location (left, centre and right). The effects of the raw materials, the green anode forming parameters and the baking conditions on cracking were studied. The cracking of dense carbon anodes is a serious problem for the primary aluminium industry. This project revealed different cracking mechanisms, classified cracking according to several criteria (position, type, location) and evaluated the impact of various parameters on cracking. The studies carried out on baking made it possible to improve the operation and reduce anode cracking.
The work also consisted of identifying techniques capable of evaluating anode quality (ultrasound, tomography and the distribution of electrical resistivity). Carbon anode cracking is considered a complex problem because its appearance depends on several parameters distributed along the whole production chain. Several new studies were carried out in this project, which lend originality to the research done on the cracking of carbon anodes for the primary aluminium industry. These studies add, on the one hand, scientific value towards a better understanding of the anode cracking problem and, on the other hand, propose methods that may reduce this problem at the industrial scale.
NASA Astrophysics Data System (ADS)
Boussaboun, Zakariae
Clay minerals are possible catalysts for the formation of graphene from organic precursors such as sucrose. Clays are abundant, safe and economical for graphene formation. The main objective of this thesis is to demonstrate that it is possible to synthesise a hybrid material containing clay and graphene. These carbon materials based on clay (bentonite and Cloisite) and sucrose were prepared by two methods. The first method has three steps: 1) a contact period between the clay and the carbon source in a humid environment, 2) infiltration of the carbonaceous matter and transformation in a microwave oven, 3) heating at 750°C under nitrogen to obtain carbon materials. The second method has two steps, without microwave, and with a larger amount of carbon source (sucrose and alginate). Characterisation of the material made it possible to follow the reactions transforming the carbon source towards graphene. This characterisation was done by FTIR and Raman spectroscopy, thermogravimetric analysis (TGA), specific surface area (BET method) and scanning electron microscopy (SEM). The electrical conductivity was measured with a dielectric spectrometer and, as a function of applied pressure, with a multimeter. The resulting material was incorporated into a low-density polyethylene matrix to obtain a polymer with specific characteristics, and the thermal conductivity was then measured according to ASTM E1530. The sample made with the second method, with one part bentonite to five parts sucrose (M2 B1:S5), indicates the possibility of producing graphene materials from natural resources. The specific surface area increased considerably from 75.88 m2/g for untreated bentonite to 139.76 m2/g for the M2 B1:S5 sample. A significant increase in conductivity under pressure (95.3 S/m at a pressure of 6.5 MPa, compared with 1.45 x 10^-3 S/m for bentonite) and in thermal conductivity in low-density polyethylene at a 10% additive concentration (0.332 W/m.K versus 0.279 W/m.K) were observed for the same M2 B1:S5 sample compared with untreated bentonite. Possible applications include, for example, pressure sensors and actuators.
NASA Astrophysics Data System (ADS)
Croteau, Etienne
The objective of this doctoral project is to develop quantitative tools, based on dynamic PET imaging, for monitoring breast cancer chemotherapy treatments and their cardiotoxic effects. Kinetic analysis in dynamic PET allows biological parameters to be evaluated in vivo; this analysis can be used to characterise the tumour response to chemotherapy and the harmful side effects that may result. The first article of this thesis describes the development of kinetic analysis techniques that use an image-derived input function for the radiotracer. Corrections for external radioactive contamination (spillover) and for the partial volume effect were necessary to standardise the kinetic analysis and make it quantitative. The second article deals with the evaluation of a new myocardial radiotracer: 11C-acetoacetate, a new radiotracer based on a ketone body, was compared with 11C-acetate, commonly used in cardiac PET imaging. The use of 3H-acetate and 14C-acetoacetate made it possible to elucidate the kinetics of these tracers, from the input function and the uptake by cardiac mitochondria, which reflects oxygen consumption, to the release of their main respective metabolites (3H2O and 14CO2). The third and final article of this thesis presents the integration of a model that evaluates the cardiac reserve of perfusion and oxygen consumption. A cardiomyopathy model was established using doxorubicin, a breast cancer chemotherapeutic agent known to be cardiotoxic. A rest/stress protocol made it possible to evaluate the heart's capacity to increase perfusion and oxygen consumption; demonstration of a reduced cardiac reserve characterises cardiotoxicity. The last contribution of this thesis concerns the development of minimally invasive methods for measuring the input function in animal models, using the caudal artery and a microvolumetric counter, dynamic PET/MRI bi-modality with Gd-DTPA, and the establishment of a model for the simultaneous evaluation of cardiotoxicity and tumour response in mice. The development of PET analysis tools for evaluating cardiotoxicity during breast cancer treatment provides a better understanding of the relationship between mitochondrial damage and the decrease in ejection fraction. Keywords: positron emission tomography (PET), kinetic analyses, 11C-acetate, 11C-acetoacetate, cardiotoxicity.
Lithium niobate at high temperature for ultrasonic applications
NASA Astrophysics Data System (ADS)
De Castilla, Hector
The objective of this master's work in applied sciences is to find and then study a piezoelectric material that could potentially be used in high-temperature ultrasonic transducers. Such transducers are currently limited to operating temperatures below 300°C because of the piezoelectric element they contain, and overcoming this limitation would allow non-destructive ultrasonic testing at high temperature. With good electromechanical properties and a high Curie temperature (1200°C), lithium niobate (LiNbO3) is a good candidate. However, some studies claim that chemical processes, such as the onset of ionic conductivity or the emergence of a new phase, prevent its use in ultrasonic transducers above 600°C, while other, more recent studies have shown that it can generate ultrasound up to 1000°C with no visible conductivity. A hypothesis therefore emerged: ionic conductivity is present in lithium niobate at high temperature (>500°C) but only weakly affects its properties at high frequencies (>100 kHz). A characterisation of lithium niobate at high temperature is therefore needed to verify this hypothesis. To this end, the resonance method was used. It allows most of the electromechanical coefficients to be characterised with a simple electrochemical impedance spectroscopy measurement and a model relating the properties explicitly to the impedance spectrum; the task is to find the model coefficients that best superimpose the model on the experimental measurements. An experimental bench was built to control the temperature of the samples and measure their electrochemical impedance. Unfortunately, the models currently used for the resonance method are imprecise in the presence of coupling between vibration modes, which requires several samples of different shapes in order to isolate each main vibration mode; moreover, these models do not properly account for harmonics and shear modes. A new analytical model covering the whole frequency spectrum was therefore developed to predict shear resonances, harmonics and couplings between modes, although some resonance modes and couplings are still not modelled. Characterisation of square samples was carried out up to 750°C. The results confirm the promise of lithium niobate: the piezoelectric coefficients are stable with temperature, and the elasticity and permittivity behave as expected. A thermoelectric effect with a signature similar to ionic conductivity was observed, which prevents the impact of the latter from being quantified. Although further studies are needed, the intensity of the resonances at 750°C suggests that lithium niobate can be used for high-frequency (>100 kHz) ultrasonic applications.
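As background on the resonance method mentioned above, the effective electromechanical coupling of a given mode is conventionally estimated from the resonance frequency f_r and antiresonance frequency f_a read off the impedance spectrum (a standard relation, not the new analytical model developed in the thesis):

k_{eff}^{2} = \frac{f_a^{2} - f_r^{2}}{f_a^{2}}

Tracking f_r, f_a and the sharpness of the resonances as the sample is heated is one simple way to assess whether conductivity or phase changes degrade the piezoelectric response.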
C/Si normal-incidence multilayer mirrors for the 25-40 nanometre spectral region
NASA Astrophysics Data System (ADS)
Grigonis, Marius
We have proposed a new material combination, C/Si, for fabricating normal-incidence multilayer mirrors in the 25-40 nm spectral region. The experimental results show that this combination achieves a reflectivity of about 25% in the 25-33 nm spectral region and about 23% in the 33-40 nm spectral region; these are the highest reflectivity values obtained to date in the 25-40 nm spectral region. The multilayer mirrors were then characterised by transmission electron microscopy, by various X-ray diffraction techniques and by AES and ESCA electron spectroscopies. The resistance of the mirrors to elevated temperatures was also studied. The results provided by these characterisation methods indicate that this combination has very promising characteristics for application as a soft X-ray mirror.
Realisation and Applications of a Single-Frequency Erbium-Doped Fibre Laser
NASA Astrophysics Data System (ADS)
Larose, Robert
The incorporation of rare-earth ions into the glass matrix of an optical fibre has enabled the emergence of all-fibre amplifying components. The aim of this thesis is, on the one hand, to analyse and model such a device and, on the other, to fabricate and then characterise a fibre amplifier and oscillator. Using a heavily erbium-doped fibre made to order, we built a tunable fibre laser that operates in a multi-longitudinal-mode regime with a linewidth of 1.5 GHz, and also as a single-frequency source with a linewidth of 70 kHz. The laser is then used to characterise a Bragg grating written by photosensitivity in an optical fibre. The tuning technique also allows locking to the bottom of an acetylene resonance; the laser then holds the line-centre position with an error of less than 1 MHz, thereby correcting the mechanical drifts of the cavity.
Frequency-domain modelling of concrete permittivity for non-destructive testing with ground-penetrating radar
NASA Astrophysics Data System (ADS)
Bourdi, Taoufik
Ground-penetrating radar (GPR) is an attractive non-destructive testing (NDT) technique for measuring the thickness of concrete slabs and characterising fractures, owing to its resolution and penetration depth. GPR equipment is becoming easier to use and interpretation software is becoming more readily accessible. However, several conferences and workshops on the application of GPR in civil engineering have concluded that further research is needed, in particular on modelling and on techniques for measuring the electrical properties of concrete. With better information on the electrical properties of concrete at GPR frequencies, instrumentation and interpretation techniques could be improved more effectively. The Jonscher model has proven effective in the geophysical domain, and its use in civil engineering is presented here for the first time. First, we validated the application of the Jonscher model for characterising the dielectric permittivity of concrete. The results clearly showed that this model can faithfully reproduce the variation of the permittivity of different types of concrete over the GPR frequency band (100 MHz-2 GHz). Second, we showed the value of the Jonscher model by comparing it with other models (Debye and extended Debye) already used in civil engineering, and we showed how it can help predict shielding effectiveness and interpret GPR waves. The Jonscher model was found to give a good representation of the variation of concrete permittivity over the GPR frequency range considered; moreover, this modelling is valid for different types of concrete and different water contents. In a final part, we presented the use of the Jonscher model for estimating the thickness of a concrete slab with the GPR technique in the frequency domain. Keywords: NDT, concrete, GPR, permittivity, Jonscher
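For reference, the Jonscher parameterisation commonly used in the GPR literature (given here in a generic form; the thesis may use a slightly different notation) describes the effective relative permittivity with three real parameters epsilon_infinity, chi_r and n, defined at a reference angular frequency omega_r:

\varepsilon_e(\omega) = \varepsilon_\infty + \chi_r \left( \frac{\omega}{\omega_r} \right)^{n-1} \left[ 1 - i \cot\!\left( \frac{n\pi}{2} \right) \right], \qquad 0 < n < 1

Fitting these three parameters to measurements over 100 MHz-2 GHz is what allows a single compact model to cover different concretes and water contents.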
NASA Astrophysics Data System (ADS)
Morlot, T.; Mathevet, T.; Perret, C.; Favre Pugin, A. C.
2014-12-01
Streamflow uncertainty estimation has recently received much attention in the literature. A dynamic rating curve assessment method has been introduced (Morlot et al., 2014). This dynamic method makes it possible to compute a rating curve for each gauging and a continuous streamflow time series, while calculating streamflow uncertainties. Streamflow uncertainty takes into account many sources of uncertainty (water level, rating curve interpolation and extrapolation, gauging aging, etc.) and produces an estimated distribution of streamflow for each day. In order to characterise streamflow uncertainty, a probabilistic framework has been applied to a large sample of hydrometric stations of the Division Technique Générale (DTG) of Électricité de France (EDF) hydrometric network (>250 stations) in France. A reliability diagram (Wilks, 1995) has been constructed for some stations, based on the streamflow distribution estimated for a given day and compared to a real streamflow observation estimated via a gauging. To build a reliability diagram, we computed the probability of an observed streamflow (gauging), given the streamflow distribution. The reliability diagram then allows us to check that the distribution of probabilities of non-exceedance of the gaugings follows a uniform law (i.e., quantiles should be equiprobable). Given the shape of the reliability diagram, the probabilistic calibration is characterised (underdispersion, overdispersion, bias) (Thyer et al., 2009). In this paper, we present case studies where reliability diagrams have different statistical properties for different periods. Compared with our knowledge of the river-bed morphology dynamics of these hydrometric stations, we show how the reliability diagram gives us invaluable information on river-bed movements, such as continuous digging or backfilling of the hydraulic control due to erosion or sedimentation processes. Hence, careful analysis of reliability diagrams allows statistics and long-term river-bed morphology processes to be reconciled. This knowledge improves our real-time management of hydrometric stations, thanks to a better characterisation of erosion/sedimentation processes and of the stability of the hydraulic control of each hydrometric station.
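A minimal sketch of the probability-of-non-exceedance computation behind such a reliability diagram is given below; the function names and the synthetic data are illustrative assumptions, not code or data from the study.

# Compute the probability of non-exceedance of each gauging given an ensemble
# of simulated streamflows, then the empirical reliability curve: a reliable
# (well-calibrated) streamflow distribution stays close to the 1:1 diagonal.
import numpy as np

def non_exceedance_probs(ensembles, observations):
    # ensembles: (n_gaugings, n_members) simulated streamflows for each gauging day
    # observations: (n_gaugings,) gauged streamflows
    ensembles = np.asarray(ensembles, dtype=float)
    observations = np.asarray(observations, dtype=float)
    return np.mean(ensembles <= observations[:, None], axis=1)

def reliability_curve(probs, n_levels=10):
    # empirical frequency of non-exceedance probabilities below each nominal level
    levels = np.linspace(0.0, 1.0, n_levels + 1)
    empirical = np.array([np.mean(probs <= p) for p in levels])
    return levels, empirical

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ens = rng.normal(100.0, 10.0, size=(200, 500))  # synthetic streamflow ensembles
    obs = rng.normal(100.0, 10.0, size=200)         # synthetic gaugings
    levels, freq = reliability_curve(non_exceedance_probs(ens, obs))
    print(np.round(freq, 2))  # should stay close to the nominal levels (uniform law)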
Adnet, J J; Pinteaux, A; Pousse, G; Caulet, T
1976-04-01
Three simple methods (adapted from optical techniques) for characterising normal and pathological elastic tissue in electron microscopy on thin and ultrathin sections are proposed. Two of these methods (orcein and fuchsin-resorcin) seem to be specific for arterial and breast cancer elastic tissue. Weigert's method gives the best contrast.
Practical characterisation of quantum systems and 2D self-correcting quantum memories
NASA Astrophysics Data System (ADS)
Landon-Cardinal, Olivier
This thesis tackles two major problems in quantum information: - How can a quantum system be characterised efficiently? - How can quantum information be stored? It is therefore divided into two distinct parts linked by common technical elements; each is of independent interest and self-contained. Practical characterisation of quantum systems. Quantum computing demands very tight control of quantum systems composed of several particles, for example atoms confined in an electromagnetic trap or electrons in a semiconductor device. Characterising such a quantum system consists of obtaining information about its state through experimental measurements. However, each measurement perturbs the quantum system and must therefore be performed after preparing the system again in an identical way; the desired information is then reconstructed numerically from the full set of experimental data. Experiments performed so far have aimed at reconstructing the complete quantum state of the system, in particular to demonstrate the ability to prepare entangled states, in which the particles exhibit non-local correlations. However, the tomography procedure currently used is only feasible for systems composed of a small number of particles, so characterisation methods for large systems are urgently needed. In this thesis, we propose two more targeted theoretical approaches for characterising a quantum system with only a reasonable experimental and numerical effort. - The first consists of estimating the distance between the state produced in the laboratory and the target state that the experimenter intended to prepare. We present a protocol, called certification, which requires fewer resources than tomography and is very efficient for several classes of states important for quantum information processing. - The second approach, called variational tomography, proposes to reconstruct the state by restricting the search space to a variational class rather than the immense space of all possible states. Since a variational state is described by a small number of parameters, a small number of experiments can suffice to identify the variational parameters of the experimental state. We show that this is the case for two widely used variational classes, matrix product states (MPS) and the multi-scale entanglement renormalisation ansatz (MERA). 2D self-correcting quantum memories. A self-correcting quantum memory is a physical system that preserves quantum information for a macroscopic length of time; it would thus be the quantum equivalent of the hard drives or flash memories found in today's computers, and having such a device would be of great interest for quantum information processing. A self-correcting quantum memory is initialised by preparing a ground state, that is, a stationary state of lowest energy. In order to store quantum information, several distinct ground states are needed, each corresponding to a different value of the memory; more precisely, the ground space must be degenerate. In this thesis, we are interested in systems of particles arranged on a two-dimensional (2D) lattice, like pieces on a chessboard, which are easier to build than 3D systems.
We identify two criteria for self-correction: - The quantum memory must be stable against perturbations from the environment, for example the application of an external magnetic field. This leads us to consider 2D topological systems, whose degrees of freedom are intrinsically robust to local environmental perturbations. - The quantum memory must be robust against a thermal environment. One must ensure that thermal excitations do not bring two distinct ground states to the same excited state, otherwise the information is lost. Our main result shows that no 2D topological system is self-correcting: the environment can change the ground state by randomly moving small packets of energy around, a mechanism consistent with the intuition that every topological system admits localised excitations, or quasiparticles. The interest of this result is twofold. On the one hand, it guides the search for a self-correcting system by showing that it must either (i) be three-dimensional, which is hard to realise experimentally, or (ii) rely on new protection mechanisms going beyond energetic considerations. On the other hand, this result is a first step towards a formal proof of the existence of quasiparticles in every topological system.
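As an illustration of the certification idea summarised above (this is the standard fidelity definition, not the full protocol of the thesis), the figure of merit is the fidelity between the laboratory state rho and the pure target state psi, which can be written as an average of Pauli expectation values and therefore estimated by measuring only a few of them:

F = \langle \psi | \rho | \psi \rangle = \frac{1}{2^{n}} \sum_{P} \langle \psi | P | \psi \rangle \, \mathrm{Tr}(\rho P)

where the sum runs over the 4^n n-qubit Pauli operators P; for well-structured target states, a small, randomly chosen subset of terms suffices, which is what makes the approach cheaper than full tomography.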
NASA Astrophysics Data System (ADS)
Paradis, Alexandre
The principal objective of the present thesis is to elaborate a computational model describing the mechanical properties of NiTi under different loading conditions. Secondary objectives are to build an experimental database of NiTi under stress, strain and temperature in order to validate the versatility of the new model proposed herewith. The simulation model currently used at the Laboratoire sur les Alliages a Memoire et les Systemes Intelligents (LAMSI) of ETS shows good behaviour in quasi-static loading; however, dynamic loading with the same model does not allow degradation to be included. The goal of the present thesis is to build a model capable of describing such degradation in a relatively accurate manner. Experimental tests and results are presented; in particular, new results on the behaviour of NiTi paused during cycling are presented in chapter 2. A model based on Likhachev's micromechanical model is developed in chapter 3, and good agreement is found with experimental data. Finally, an adaptation of the model is presented in chapter 4, allowing it eventually to be implemented in commercial finite-element software.
Modelling and Characterisation of sea salt aerosols during the ChArMEx-ADRIMED campaign in Ersa
NASA Astrophysics Data System (ADS)
Claeys, Marine; Roberts, Greg; Mallet, Marc; Sciare, Jean; Arndt, Jovanna; Mihalopoulos, Nikos
2015-04-01
During the ChArMEx-ADRIMED campaign (June and July 2013), aerosol particle measurements were conducted in Ersa (600 m asl), Cap Corsica. The in-situ instrumentation made it possible to characterise sea salt aerosols (SSA) through their physico-chemical and optical properties and their size distribution. This study focuses on a period of a few days during which the concentration of sea salt aerosols was higher. The chemistry results indicate that the SSA measured during this period were mostly aged. Comparing the number size distributions of the air masses makes it possible to determine the SSA size mode. These data are used to evaluate the sea salt aerosol emission scheme implemented in the regional-scale Meso-NH model. A new emission scheme based on available source functions is tested for different sea-state conditions to evaluate the direct radiative impact of sea salt aerosols over the Mediterranean basin.
Characterising the Ionosphere (La caracterisation de l’ionosphere)
2009-01-01
and these emissions are characteristic of proton precipitation. The hydrogen that is produced by charge exchange collisions has the kinetic energy ...the same kinetic energy as the original proton had, but does not gyrate around the magnetic field. The precipitation therefore spreads horizontally...latitudinal extent of the D-region ionization [Rodger et al., 2006]. Depending on their energy, these energetic protons also penetrate into the middle
Study of the improvement of anode quality through modification of pitch properties
NASA Astrophysics Data System (ADS)
Bureau, Julie
The anodes produced must be of good quality in order to obtain primary aluminium while reducing the metal production cost, energy consumption and environmental emissions. Achieving the final anode properties requires a satisfactory bond between the coke and the pitch, yet the current raw materials do not necessarily ensure compatibility between them. One of the most promising solutions for improving the cohesion between these two materials is to modify the properties of the pitch. The objective of this work is to modify the pitch properties by adding chemical additives in order to improve the wettability of the coke by the modified pitch and thereby produce better-quality anodes. The chemical composition of the pitch is modified using surfactants or surface-modification agents chosen to enrich the functional groups likely to improve wettability; economics, environmental footprint and impact on production are considered in selecting the chemical additives. The methodology consists of first characterising the unmodified pitches, the chemical additives and the cokes by Fourier-transform infrared spectroscopy (FTIR) in order to identify the chemical groups present. The pitches are then modified by adding a chemical additive so as to potentially modify their properties, with different additive amounts used to examine the effect of concentration on the properties of the modified pitch. FTIR makes it possible to evaluate the chemical composition of the modified pitches and determine whether increasing the additive concentration enriches the functional groups that promote coke/pitch adhesion. Next, the wettability of the coke by the pitch is observed by the sessile-drop method; an improvement in wettability upon modification with a chemical additive indicates a possible improvement of the interaction between the coke and the modified pitch. To complete the evaluation, the FTIR and wettability results are analysed with an artificial neural network in order to better understand the underlying mechanisms. In the light of the results obtained, the most promising chemical additives are selected in order to verify the effect of their use on anode quality. To do so, laboratory anodes are produced using unmodified pitches and pitches modified with the selected chemical additives. The anodes are then cored so that some of their physical and chemical properties can be determined, and the results for anode samples made from the same unmodified and modified pitch are compared to evaluate the improvement in anode quality. Finally, the possible impact of using a chemical additive to modify the pitch on energy and carbon consumption, as well as on the amount of aluminium produced, is examined. To modify the pitch, three different chemical additives were selected: one surfactant and two surface-modification agents. FTIR analysis of the experiments conducted on the modified pitches shows that two of the additives modified the chemical composition of the pitches tested.
Analysis of the sessile-drop test results suggests that a pitch modified with these two additives may improve the interaction with the cokes used in this study. Artificial neural network analysis of the collected data provides a better understanding of the link between the chemical composition of a pitch and its ability to wet a coke. Characterisation of the anode samples produced shows that these two additives can improve some of the anode properties compared with the standard samples, and the analysis indicates that one of the two additives gives more promising results. Overall, the work carried out in this project demonstrates that anode quality can be improved by modifying the pitch properties; furthermore, the analysis of the results provides a better understanding of the mechanisms between a pitch and a chemical additive.
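For context, the sessile-drop measurements referred to above are conventionally interpreted through Young's equation, which links the equilibrium contact angle theta of a pitch drop on the coke substrate to the interfacial tensions (a textbook relation, not a result of this study):

\gamma_{sv} = \gamma_{sl} + \gamma_{lv} \cos\theta

A lower contact angle (cos theta closer to 1) indicates better wetting of the coke by the modified pitch, which is the improvement sought with the chemical additives.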
Hydrodynamic characterisation of a heterogeneous aquifer system under a semi-arid climate
NASA Astrophysics Data System (ADS)
Drias, T.; Toubal, A. Ch
2009-04-01
The study area is part of the Mellegue basin (north-east Algeria) and is characterised by its semi-arid climate. The aquifer system is formed by Plio-Quaternary alluvium resting on a marly substratum of Eocene age. A geostatistical approach to the hydrodynamic parameters (hydraulic head, transmissivity) allowed their spatial distribution to be studied by block kriging and zones with water-bearing potential to be identified. In this respect, the Ain Chabro zone, located in the south of the plain, shows the highest transmissivity values... The use of a two-dimensional finite-difference model in steady state allowed us to establish the overall water balance of the aquifer and to refine the transmissivity field, which varies between 10-4 and 10-2 m²/s. The method combining the probabilistic kriging approach with the deterministic model facilitated model calibration and clarified the infiltration value. Key words: hydrodynamics, geostatistics, modelling, Chabro, Tébessa.
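For reference, the block-kriging interpolation mentioned above builds on the ordinary kriging estimator, recalled here in its standard point form (the study applies the block variant, which averages this estimator over each block):

Z^{*}(x_0) = \sum_{i=1}^{N} \lambda_i Z(x_i), \qquad \sum_{j=1}^{N} \lambda_j \gamma(x_i - x_j) + \mu = \gamma(x_i - x_0) \; (i = 1,\dots,N), \qquad \sum_{j=1}^{N} \lambda_j = 1

where gamma is the variogram of the transmissivity (or hydraulic head) field and mu a Lagrange multiplier enforcing the unbiasedness condition.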
NASA Astrophysics Data System (ADS)
Francoeur, Dany
This doctoral thesis is part of CRIAQ (Consortium de recherche et d'innovation en aerospatiale du Quebec) projects aimed at developing embedded approaches for detecting defects in aeronautical structures. The originality of this thesis lies in the development and validation of a new method for detecting, quantifying and localising a notch in a lap-joint structure using propagating vibration waves. The first part reviews the state of knowledge on defect identification in the context of Structural Health Monitoring (SHM), as well as lap-joint modelling. Chapter 3 develops a wave-propagation model of a lap joint damaged by a notch, for a flexural wave in the mid-frequency range (10-50 kHz). To this end, a transmission line model (TLM) is built to represent a one-dimensional (1D) joint; this 1D model is then adapted to a two-dimensional (2D) joint under the assumption of a plane wavefront incident perpendicular to the joint. A parametric identification method is then developed to allow both calibration of the healthy lap-joint model and detection and then characterisation of the notch located on the joint. This method is coupled with an algorithm that performs an exhaustive search of the whole parameter space, which makes it possible to extract an uncertainty zone related to the parameters of the optimal model. A sensitivity study of the identification is also carried out. Several measurements on 1D and 2D lap joints allow the repeatability of the results and the variability of different damage cases to be studied. The results show, first, that the proposed detection method is very effective and can track damage progression; very good notch quantification and localisation results were obtained on the various joints tested (1D and 2D). It is expected that using Lamb waves would extend the validity range of the method to smaller damage. This work is aimed primarily at in-situ monitoring of lap-joint structures, but other types of defects (such as disbonds) and complex structures can also be envisaged. Keywords: lap joint, in-situ monitoring, damage localisation and characterisation
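As background for the flexural-wave band used above (10-50 kHz), the thin-plate (Kirchhoff) dispersion relation gives the flexural wavenumber entering such propagation models; this is a textbook relation, not the TLM joint model itself:

k_f^{4} = \frac{\rho h \, \omega^{2}}{D}, \qquad D = \frac{E h^{3}}{12 (1 - \nu^{2})}

so the flexural phase speed omega/k_f grows as the square root of frequency, which is one reason the joint response, and hence the identified notch parameters, are frequency dependent.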
Optimization of Laminated Composite Plates
1989-09-01
plane loads has already been studied, and a number of technical publications and software packages can be found. In the present report, an optimization of...described above. There is no difficulty in any case, and commercial software, from personal computers to macro-systems, is available. In the chapter...Reforzado y su Aplicacion a los Medios de Transporte", Ph.D. University of Zaragoza, Spain, 1984. 77. Miravete A., "Caracterisation et mise au Point d'un
Modelling of micron- and nano-sized particle emissions in machining
NASA Astrophysics Data System (ADS)
Khettabi, Riad
The machining of parts emits particles of microscopic and nanometric sizes that can be hazardous to health. The aim of this work is to study these particle emissions with a view to prevention and reduction at the source. The approach is both experimental and theoretical, at the microscopic and macroscopic scales. The work begins with tests to determine the influence of the material, the tool and the machining parameters on particle emissions. A new parameter characterising the emissions, named the Dust unit, is then developed and a predictive model is proposed. This model is based on a new hybrid theory that integrates energetic, tribological and plastic-deformation approaches, and includes tool geometry, material properties, cutting conditions and chip segmentation. It was validated in turning on four materials: Al6061-T6, AISI 1018, AISI 4140 and grey cast iron.
NASA Astrophysics Data System (ADS)
Lebel, Larry
An experimental procedure was developed to characterise the degradation mechanisms and durability of ceramic matrix composite (CMC) materials in a gas turbine static-part application. While most published characterisation tests on CMC materials have been performed under controlled loading conditions, the present research attempts to reproduce the stress relaxation that normally occurs in a static part at high temperature. In the proposed experiment, a planar dumbbell-shaped specimen is cyclically heated on one face and cooled on the other while its displacements are constrained. The resulting bending stress at the centre of the specimen, measured by a load cell, corresponds to the bending stress previously predicted at the centre of the panels of a generic combustion chamber. A multilayer CMC material composed of a porous alumina matrix and Nextel 720 fibres was used to develop the experiment. Calibration tests were first performed using an infrared lamp heating system, reaching up to 1160 °C at the specimen surface. A CO2 laser system was then used to carry out high-power degradation tests, reaching, by the end of testing, surface temperatures exceeding the material's 1200 °C limit and through-thickness temperature differences of more than 1000 °C. Under the imposed constant-amplitude heating power, damage accumulation raised the surface temperature and the temperature gradients through the material. A reduction in stress over time was observed due to creep, cracking and delamination of the material under the displacement-constrained condition, leading to stabilisation of the damage level at a certain depth depending on the initial thermal stress. The characterisation procedure developed proves to be a promising tool for developing new types of materials, as well as for comparing the durability of existing materials under conditions representative of gas turbine static parts.
Vers des boites quantiques a base de graphene
NASA Astrophysics Data System (ADS)
Branchaud, Simon
Graphene is a carbon-based material that has been studied extensively since 2004. A very large number of articles have been published on the electronic, optical and mechanical properties of this material. This work deals with the study of conductance fluctuations in graphene, and with the fabrication and characterization of nanostructures etched in sheets of this 2D crystal. Low-temperature magnetoresistance measurements were made near the charge neutrality point (CNP) as well as at high electron density. Two origins are found for the conductance fluctuations near the CNP: mesoscopic oscillations arising from quantum interference, and so-called quantum Hall fluctuations appearing at higher field (>0.5 T), which seem to follow the filling factors associated with graphene monolayers. The latter fluctuations are attributed to the charging of localized states and reveal a precursor of the quantum Hall effect, which itself does not appear below 2 T. The parameters characterizing the sample can be extracted from these data. Finally, transport measurements are made in graphene constrictions and islands, where quantum dots are formed. From these measurements, the important parameters of these quantum dots, such as their size and their charging energy, are extracted.
Radar response to crop residue cover and tillage application on postharvest agricultural surfaces
NASA Astrophysics Data System (ADS)
McNairn, Heather
Information on soil conservation practices such as tillage and crop residue management is required to estimate soil erosion risk accurately. Although microwaves are sensitive to surface moisture conditions and geometric properties, little is yet known about the sensitivity of linearly polarized microwaves or of SAR polarimetric parameters to residue characteristics. Using data acquired with a truck-mounted scatterometer in 1996 and during a SIR-C mission in 1994, this research showed that microwaves are sensitive both to the amount and type of residue cover and to the residue water content. The response of the linear cross-polarizations and of several polarimetric parameters, including pedestal height, showed that significant volume scattering occurred in the presence of standing senescent vegetation and for untilled fields. Surface scattering dominated, however, for fields with small amounts of residue and finer residue. The research also showed that complex surface conditions were created by different combinations of residue and tillage practices. Consequently, fully characterizing post-harvest fields will have to wait until multi-polarized or polarimetric data are acquired by the sensors planned on board the Canadian RADARSAT-2 satellite and the European Space Agency's ENVISAT satellite.
NASA Astrophysics Data System (ADS)
Fournier, Marie-Claude
A characterization of atmospheric emissions from operating fixed sources, fired with gas and light oil, was conducted at the targeted installations of sites no. 1 and no. 2. The characterization and the theoretical calculations of atmospheric emissions at the installations of sites no. 1 and no. 2 show results below regulatory values for normal operating conditions in winter, hence at higher energy demand. Thus, for a lower energy demand, the contaminant levels in the atmospheric emissions could also be below the municipal and provincial regulations in force. In view of a new provincial regulation, whose terms have been under discussion since 2005, it would be desirable for the owner of the targeted infrastructures to take part in the discussions with the Ministère du Développement Durable, de l'Environnement et des Parcs (MDDEP) of Quebec. Indeed, even if the principle of acquired rights could avoid being subject to the new regulation, applying such a principle is not consistent with sustainable development. The advanced age of the installations studied calls for rigorous maintenance planning in order to ensure optimal combustion conditions for the type of fuel used; regular combustion tests are therefore recommended. To support the process of monitoring and evaluating the environmental performance of the fixed sources, a tool to assist with environmental information management was developed. In this context, continuing the development of such a tool would facilitate not only the work of the staff assigned to the annual inventories but also the communication process between the various stakeholders, both within and between establishments. This tool would also be a good way to raise staff awareness of their energy consumption and of their role in the fight against polluting emissions and greenhouse gases. Moreover, the main function of this type of tool is to generate dynamic reports that can be adapted to specific needs. The coherent partitioning of the information, combined with a modular development, opens the prospect of applying the tool to other types of activities. In that case, the part shared with the existing modules must be defined and the specific development activities planned following the same approach as the one presented in this document.
NASA Astrophysics Data System (ADS)
Floquet, Jimmy
In aluminium electrolysis cells, the highly corrosive reaction medium attacks the cell walls, which shortens their service life and increases production costs. The ledge, which forms under the effect of the heat losses that maintain the thermal balance of the cell, acts as a natural protection for the cell; its thickness must be controlled to maximize this effect. Should this ledge dissolve unintentionally, the resulting damage can amount to several hundred thousand dollars per cell. The objective is therefore to develop an ultrasonic measurement of the ledge thickness, since it would be non-intrusive and non-destructive. The expected precision is of the order of one centimetre for thickness measurements through two materials, over a range of 5 to 20 cm. This precision is the key factor allowing operators to control the ledge thickness effectively (maximizing wall protection while maximizing the energy efficiency of the process) through the addition of a heat flux. However, the effectiveness of an ultrasonic measurement in this hostile environment remains to be demonstrated. Preliminary work led to the selection of a contact ultrasonic transducer able to withstand the measurement conditions (high temperatures, uncharacterized materials, etc.). Various cold measurements, processed by time-frequency analysis, made it possible to evaluate the wave propagation velocity in the graphite cell material and in the cryolite, demonstrating that the relevant ledge-thickness information can ultimately be extracted. Building on this characterization of the acoustic response of the materials, the next phase of work was carried out on a scale model of the cell. The experimental set-up, a furnace operating at 1050 °C instrumented with numerous thermal sensors, will allow a comparison between the intrusive LVDT measurement and the transducer measurement under conditions close to the industrial measurement. Keywords: ultrasound, NDT, high temperature, aluminium, electrolysis cell.
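The thickness measurement described above is, at its core, a pulse-echo time-of-flight calculation: each interface returns an echo, and the ledge thickness follows from the extra round-trip delay and the wave speed in the ledge material. A minimal sketch of that arithmetic is given below; the wave speeds, wall thickness and echo times are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def ledge_thickness(t1, t2, v_ledge):
    """Pulse-echo estimate: t1 is the round-trip time of the graphite/ledge
    interface echo, t2 that of the ledge/bath interface echo. The extra
    round-trip delay (t2 - t1) is spent inside the ledge, so the one-way
    ledge thickness is v_ledge * (t2 - t1) / 2."""
    return v_ledge * (t2 - t1) / 2.0

# Hypothetical values for illustration only
v_graphite, v_ledge = 2300.0, 4000.0   # m/s, assumed wave speeds
d_graphite = 0.10                      # m, known wall thickness
t1 = 2 * d_graphite / v_graphite       # simulated interface echo time
t2 = t1 + 2 * 0.12 / v_ledge           # simulated far echo for a 12 cm ledge

print(f"estimated ledge thickness: {ledge_thickness(t1, t2, v_ledge) * 100:.1f} cm")
```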
NASA Astrophysics Data System (ADS)
Fareh, Fouad
Low-pressure powder injection moulding (LPIM) of metallic powders is a manufacturing technique that can produce parts with the complexity of castings but with the mechanical properties of wrought parts. However, optimization of the debinding and sintering steps has so far been carried out with feedstocks whose optimal mouldability has not yet been demonstrated. The understanding of the rheological properties and of the segregation of the feedstocks is therefore very limited, which is the weak point of the LPIM process. The objective of this research project was to characterize the influence of binders on the rheological behaviour of feedstocks by measuring the viscosity and segregation of the low-viscosity feedstocks used in the LPIM process. To reach this objective, rheological and thermogravimetric tests were conducted on 12 feedstocks. These feedstocks were prepared from spherical Inconel 718 powder (constant solids loading of 60%) together with waxes, surfactants or thickening agents. The rheological tests were used, among other things, to calculate the injectability index alpha_STV of the feedstocks, while the thermogravimetric tests made it possible to evaluate precisely the segregation of the powders in the feedstocks. It was shown that the three feedstocks containing paraffin wax and stearic acid exhibit higher alpha_STV indices, which is advantageous for metal injection moulding (MIM), but segregate far too much for the moulded part to achieve good mechanical characteristics. Conversely, the feedstock containing paraffin wax and ethylene-vinyl acetate, as well as the feedstock containing only carnauba wax, segregate little or not at all but have very low alpha_STV indices: they are therefore difficult to inject. The best compromise thus appears to be the feedstocks containing wax (paraffin, beeswax and carnauba) with low contents of stearic acid and ethylene-vinyl acetate. Moreover, pre-existing physical laws made it possible to confirm the results of the rheological and thermogravimetric tests, and also to highlight the influence of segregation on the rheological properties of the feedstocks. These tests also showed the effect of the binder constituents and of the time spent in the molten state on the intensity of segregation in the feedstocks; feedstocks containing stearic acid segregate quickly. Characterization of feedstocks developed for low-pressure powder injection moulding must therefore rely on a short-duration method in order to avoid segregation and to measure precisely the flowability of these feedstocks.
Caracterisation mecanique dynamique de materiaux poro-visco-elastiques
NASA Astrophysics Data System (ADS)
Renault, Amelie
Poro-viscoelastic materials are well modelled with the Biot-Allard equations. This model requires a set of geometrical parameters describing the macroscopic geometry of the material and elastic parameters describing the properties of the material skeleton. Several characterisation methods for the viscoelastic parameters of porous materials are studied in this thesis. Firstly, quasistatic and resonant characterization methods are described and analyzed. Secondly, a new inverse dynamic characterization of the same modulus is developed. The latter involves a two-layer metal-porous beam excited at its centre; the input mobility is measured. The set-up is simpler than in previous methods. The parameters are obtained via an inversion procedure based on the minimisation of a cost function comparing the measured and calculated frequency response functions (FRFs). The calculation is done with a general laminate model. A parametric study identifies the optimal beam dimensions for maximum sensitivity of the inversion model. The advantage of using a code that does not take fluid-structure interactions into account is the low computation time; for most materials, the effect of this interaction on the elastic properties is negligible. Several materials are tested to demonstrate the performance of the method compared with the classical quasi-static approaches, and to establish its limitations and range of validity. Finally, conclusions about their use are given. Keywords: elastic parameters, porous materials, anisotropy, vibration.
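The inversion procedure described above is a curve-fitting problem: the unknown skeleton parameters are adjusted until the simulated FRF matches the measured one. A minimal sketch of such a fit is shown below, with a deliberately simplified single-mode stand-in for the laminate beam model (the forward model, parameter names and synthetic "measurement" are assumptions for illustration only):

```python
import numpy as np
from scipy.optimize import least_squares

def mobility_model(p, freqs):
    """Placeholder forward model returning an input-mobility magnitude for
    skeleton parameters p = (E, eta); a real implementation would call the
    laminate beam model mentioned in the abstract."""
    E, eta = p
    omega = 2 * np.pi * freqs
    k = E * (1 + 1j * eta)                     # complex stiffness
    return np.abs(1j * omega / (k - 0.1 * omega**2))

def residuals(p, freqs, frf_measured):
    # cost compares measured and simulated FRFs on a log scale
    return np.log(mobility_model(p, freqs)) - np.log(frf_measured)

freqs = np.linspace(50, 1000, 200)
frf_measured = mobility_model([2.0e5, 0.1], freqs)   # synthetic "measurement"
fit = least_squares(residuals, x0=[1.0e5, 0.05], args=(freqs, frf_measured),
                    bounds=([1e3, 1e-3], [1e8, 1.0]))
print("identified modulus and loss factor:", fit.x)
```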
Carbon nano structures: Production and characterization
NASA Astrophysics Data System (ADS)
Beig Agha, Rosa
The objective of this thesis is to prepare and characterize carbon nanostructures (CNS -- Carbon Nanostructures, under licence at the Hydrogen Research Institute, Quebec, Canada), a carbon with a higher degree of graphitization and better porosity. Chapter 1 is a general description of PEMFCs (Polymer Electrolyte Membrane Fuel Cells) and more particularly of CNS as catalyst supports, including their synthesis and purification. Chapter 2 describes in more detail the synthesis and purification method of the CNS, the theory of nanostructure formation and the different characterization techniques used, such as X-ray diffraction (XRD), transmission electron microscopy (TEM), Raman spectroscopy, nitrogen adsorption isotherms at 77 K (BET, t-plot and DFT analyses), mercury intrusion, and thermogravimetric analysis (TGA). Chapter 3 presents the results obtained at each step of the CNS synthesis with samples produced using a SPEX-type mill (SPEX/CertiPrep 8000D) and a planetary mill (Fritsch Pulverisette 5). The essential difference between these two types of mill is the way the materials are milled. The SPEX mill shakes the crucible containing the materials and steel balls along 3 axes, producing very high-energy impacts. The planetary mill rotates and moves the crucible containing the materials and steel balls along 2 axes (in a plane). The materials are therefore milled differently, and the objective is to see whether the CNS produced have the same structures and properties. During this work we were confronted with a major problem: we could not reproduce the CNS whose synthesis method had originally been developed in the laboratories of the Hydrogen Research Institute (IRH). Our samples always showed a large amount of iron carbide at the expense of the formation of carbon nanostructures. After several months of investigation we found that the base metals, iron and cobalt, were contaminated. Nevertheless, this work taught us a great deal and the results are presented in Appendices I to III. The starting carbon is a commercial activated carbon (CNS201) that was pre-heated at 1,000 °C under vacuum for 90 minutes to remove any moisture and other impurities. In a first step, pretreated CNS201 was mixed in a hardened-steel crucible with given amounts of Fe and Co (99.9% pure); typical proportions are 50 wt.%, 44 wt.% and 6 wt.% for C, Fe and Co respectively. For the samples prepared with the SPEX mill, three to six hardened-steel balls were used for milling, with a ball-to-powder mass ratio of 35 to 1. For the samples prepared with the planetary mill, thirty-six hardened-steel balls were used, with a ball-to-powder mass ratio of 10 to 1. Hydrogen was then introduced into the crucible, for both mill types, at a pressure of 1.4 MPa, and the sample was milled for 12 h in the SPEX mill and 24 h in the planetary mill. The SPEX mill has a higher mechanical energy transfer efficiency than a planetary mill, but has the disadvantage of contaminating the sample more with Fe through attrition.
However, this can be neglected since Fe was one of the metal catalysts added to the crucible. In a second step, the milled sample is transferred under inert gas (argon) into a quartz tube, which is then heated at 700 °C for 90 minutes. Powder X-ray diffraction patterns were measured to characterize the structural changes of the CNS during the synthesis steps. These measurements were taken with a Bruker D8 FOCUS diffractometer using Cu Kα radiation (λ = 1.54054 Å) and a θ/2θ geometry. Figure 3.1 shows the X-ray diffraction pattern of the activated carbon used as the precursor to produce the CNS; the activated carbon is pre-heated at high temperature (1,000 °C) for 1 h to remove moisture. Figure 3.2 shows the X-ray diffraction patterns of the SPEX and planetary samples after milling for 12 h and 24 h, respectively. The carbon structures are not yet well defined, but a peak at 2θ ≈ 20°-30° corresponds to small crystallites with a turbostratic character, and a peak corresponding to iron and iron carbide appears at 2θ ≈ 45°. (Abstract shortened by UMI.)
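The diffraction peaks mentioned above translate into lattice spacings through Bragg's law, and the peak broadening gives a rough crystallite size through the Scherrer equation. A minimal sketch follows, assuming Cu Kα radiation as quoted above; the peak position and width used are illustrative, not values from the thesis.

```python
import numpy as np

WAVELENGTH = 1.54054  # Å, Cu K-alpha, as quoted above

def d_spacing(two_theta_deg):
    """Bragg's law: lambda = 2 d sin(theta)."""
    theta = np.radians(two_theta_deg / 2.0)
    return WAVELENGTH / (2.0 * np.sin(theta))

def scherrer_size(two_theta_deg, fwhm_deg, K=0.9):
    """Scherrer estimate of crystallite size (Å) from peak broadening."""
    theta = np.radians(two_theta_deg / 2.0)
    beta = np.radians(fwhm_deg)          # peak width (FWHM) in radians
    return K * WAVELENGTH / (beta * np.cos(theta))

# Illustrative numbers only: a broad turbostratic carbon peak near 26° 2-theta
print(f"d ≈ {d_spacing(26.0):.2f} Å, crystallite size ≈ {scherrer_size(26.0, 4.0) / 10:.1f} nm")
```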
Etude de l'affaiblissement du comportement mecanique du pergelisol du au rechauffement climatique
NASA Astrophysics Data System (ADS)
Buteau, Sylvie
The climate warming predicted for the coming decades will have major impacts on permafrost that are as yet poorly documented. The present study aims to evaluate these impacts on the mechanical properties of permafrost and on its long-term stability. A new cone penetration test technique at a controlled strain rate was developed to characterize permafrost in situ. These geotechnical tests and the measurement of various physical properties were carried out on a permafrost mound during spring 2000. The development and use of a 1D geothermal model accounting for the temperature dependence of the mechanical behaviour made it possible to estimate that areas of warm permafrost would become unstable following a warming of about 5 °C over one hundred years. Indeed, the mechanical strength of the permafrost would then decrease rapidly to 11.6 MPa, which corresponds to a relative loss of 98% of the strength compared with a scenario without warming.
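A 1D geothermal model of the kind mentioned above can be sketched as an explicit finite-difference solution of the heat conduction equation with a slowly warming surface boundary. The sketch below is a bare-bones illustration under assumed values: the diffusivity, initial temperature, grid and warming rate are placeholders, and latent heat of phase change (essential in real permafrost modelling) is ignored.

```python
import numpy as np

# Illustrative explicit 1D conduction model of a warming ground column.
alpha = 1.0e-6              # m^2/s, assumed thermal diffusivity
dz, nz = 0.5, 40            # 0.5 m cells down to 20 m
dt = 0.4 * dz**2 / alpha    # respects the explicit stability limit
years = 100
warming_per_s = 5.0 / (years * 365.25 * 86400)   # +5 °C over 100 years

T = np.full(nz, -2.0)       # assumed initial ground temperature (°C)
steps = int(years * 365.25 * 86400 / dt)
for n in range(steps):
    Tn = T.copy()
    T[1:-1] = Tn[1:-1] + alpha * dt / dz**2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
    T[0] = -2.0 + warming_per_s * n * dt   # warming surface boundary (Dirichlet)
    T[-1] = T[-2]                          # zero-flux lower boundary

print("temperature at 10 m depth after 100 years:", round(T[20], 2), "°C")
```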
Methodes de caracterisation des proprietes thermomecaniques d'un acier martensitique
NASA Astrophysics Data System (ADS)
Ausseil, Lucas
The goal of the study is to develop methods for measuring the thermomechanical properties of a martensitic steel during rapid heating. These data feed existing finite element models with experimental input. AISI 4340 steel is used for this purpose; it is notably used in gears, has very attractive mechanical properties, and those properties can be modified by heat treatment. The Gleeble 3800 thermomechanical simulator is used; in principle it can reproduce all of the conditions encountered in manufacturing processes. From the dilatometry tests carried out in this project, the exact austenite and martensite phase-transformation temperatures are obtained. Tensile tests also made it possible to deduce the yield strength of the material in the austenitic range from 850 °C to 1100 °C. The effect of deformation on the transformation start temperature is shown qualitatively. A numerical simulation is also carried out to understand the phenomena occurring during the tests.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tesseyre, Y.
The study allowed development of an original measuring system for mobility, involving simultaneously a repulsive electrical field and a continuous gas flow. It made it possible to define a model to calculate ionic transparency of grates, taking into account electrical fields below and above them, ion mobility, speed of gas flow and geometric transparency. Calculation of the electrical field proceeded in a plane-plane system, taking into account the space load and diffusion; a graphic method was developed to determine the field, thus avoiding numerical integration of the diffusion equation. The tracings of the mobility spectra obtained in different gases made it possible to determine characteristic discrete mobility values comparable to those observed by other more sophisticated systems for measuring mobilities, such as time-of-flight systems. Detection of pollutants in weak concentration in dry air was shown. However, the presence of water vapor in the air forms agglomerates around the ions formed, reducing resolution of the system and making it less applicable under normal atmospheric conditions.
Assessment of Infrared Sounder Radiometric Noise from Analysis of Spectral Residuals
NASA Astrophysics Data System (ADS)
Dufour, E.; Klonecki, A.; Standfuss, C.; Tournier, B.; Serio, C.; Masiello, G.; Tjemkes, S.; Stuhlmann, R.
2016-08-01
For the preparation and performance monitoring of the future generation of hyperspectral infrared sounders dedicated to the precise vertical profiling of the atmospheric state, such as the Meteosat Third Generation hyperspectral InfraRed Sounder (MTG-IRS), a reliable assessment of the instrument radiometric error covariance matrix is needed. Ideally, an in-flight estimation of the radiometric noise is recommended, as certain sources of noise can be driven by the spectral signature of the observed Earth/atmosphere radiance. Also, unknown correlated noise sources, generally related to incomplete knowledge of the instrument state, can be present, so a characterisation of the noise spectral correlation is also needed. A methodology, relying on the analysis of post-retrieval spectral residuals, is designed and implemented to derive the covariance matrix in flight on the basis of Earth-scene measurements. This methodology is successfully demonstrated using IASI observations as MTG-IRS proxy data and made it possible to highlight anticipated correlation structures explained by apodization and micro-vibration effects (ghost). This analysis is corroborated by a parallel estimation based on an IASI black-body measurement dataset and the results of an independent micro-vibration model.
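The core of the estimation above is to treat an ensemble of post-retrieval spectral residuals as noise samples and compute their sample covariance and correlation across channels. A minimal sketch with synthetic residuals (the ensemble size, channel count and injected correlation are assumptions for illustration):

```python
import numpy as np

# Hypothetical set of post-retrieval spectral residuals: n_obs spectra of
# n_chan channels each (observed minus fitted radiance).
rng = np.random.default_rng(0)
n_obs, n_chan = 5000, 300
residuals = rng.normal(size=(n_obs, n_chan))
residuals[:, 1:] += 0.3 * residuals[:, :-1]   # inject channel-to-channel correlation

# Sample noise covariance and correlation matrices across the ensemble
cov = np.cov(residuals, rowvar=False)         # shape (n_chan, n_chan)
std = np.sqrt(np.diag(cov))
corr = cov / np.outer(std, std)

print("mean per-channel noise std:", round(std.mean(), 3))
print("mean adjacent-channel correlation:", round(np.mean(np.diag(corr, k=1)), 3))
```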
NASA Astrophysics Data System (ADS)
Buat, V.; Heinis, S.; Boquien, M.
2013-11-01
We report on our recent work on the UV-to-IR SED fitting of a sample of distant (z>1) galaxies observed by Herschel in the CDFS as part of the GOODS-Herschel project. Combining stellar and dust emission in galaxies proves powerful for constraining their dust attenuation as well as their star formation activity. We focus on the characterisation of dust attenuation and on the uncertainties in the derivation of the star formation rates and stellar masses, as a function of the range of wavelengths sampled by the data and of the assumptions made on the star formation histories.
Fluctuations quantiques et instabilites structurales dans les conducteurs a basse dimensionalite
NASA Astrophysics Data System (ADS)
Dikande, Alain Moise
Strongly correlated electron systems have attracted particular interest in recent years because of the immense richness of their physical properties. In general, these properties are induced by the presence of electron-electron interactions which, combined with the structure of the molecular lattice, sometimes give rise to a very wide variety of electronic and structural phases that directly affect transport phenomena in these materials. Electronic systems coupled to a molecular lattice, referred to as electron-phonon systems, belong to this class of materials that have recently attracted attention, notably because of the competition between several energy scales in an environment characterized by strong crystalline anisotropy and substantial molecular dynamics. Indeed, in addition to their particular electronic and structural properties, the dimensionality of these systems also contributes to their richness. Thus, a very strong structural anisotropy can considerably enhance the importance of the interactions between electrons and between the molecules constituting the lattice, to the point where the physics of the system is governed by very strong fluctuations. This context has become a field of its own within the physics of strongly correlated systems, namely that of quantum critical phenomena. Among electron-phonon systems are the inorganic compound KCP and the organic compound TTF-TCNQ, discovered during the 1970s and explored in depth because of their tendency towards a charge-density-wave instability at low temperature. These compounds, generally referred to as Peierls systems in reference to the instability of their electronic structures driven by the molecular lattice, have recently seen renewed interest in light of new developments in electronic-structure characterization techniques as well as of concepts, such as the Luttinger liquid, specific to one-dimensional electronic systems. (Abstract shortened by UMI.)
Modelisation par elements finis du muscle strie
NASA Astrophysics Data System (ADS)
Leonard, Mathieu
This research project led to the creation of a finite element model of human striated muscle in order to study the mechanisms generating traumatic muscle injuries. The model is a numerical platform capable of discerning the influence of the mechanical properties of the fasciae and of the muscle cell on the dynamic behaviour of the muscle during an eccentric contraction, in particular the Young's modulus and shear modulus of the connective tissue layer, the orientation of the collagen fibres of this membrane and the Poisson's ratio of the muscle. In vitro experimental characterization of these parameters at high strain rates on active human striated muscles is essential for the study of traumatic muscle injuries. The numerical model developed is able to represent muscle contraction as a phase transition of the muscle cell, through a change in stiffness and volume, using the material constitutive laws predefined in the LS-DYNA software (v971, Livermore Software Technology Corporation, Livermore, CA, USA). This research project therefore introduces a physiological phenomenon that could explain common muscle injuries (cramps, soreness, strains, etc.), but also diseases or disorders affecting connective tissue such as collagen diseases and muscular dystrophy. The predominance of muscle injuries during eccentric contractions is also discussed. The model developed in this project thus brings to the fore the concept of phase transition, opening the door to the development of new technologies for muscle activation in people with paraplegia, or of compact artificial muscles for prostheses or exoskeletons. Keywords: striated muscle, muscle injury, fascia, eccentric contraction, finite element model, phase transition.
Fabrication de memoire monoelectronique non volatile par une approche de nanogrille flottante
NASA Astrophysics Data System (ADS)
Guilmain, Marc
Single-electron transistors (SETs) are nanometre-scale devices that control one electron at a time and therefore consume little energy. One complementary application of SETs that is attracting attention is their use in memory circuits. A non-volatile single-electron memory (SEM) has the potential to operate at frequencies of the order of gigahertz, which would allow it to replace both FLASH-type non-volatile memories and DRAM-type volatile memories; a SEM chip would therefore ultimately unify the two main types of memory within computers. This thesis deals with the fabrication of non-volatile single-electron memories. The proposed fabrication process is based on the nanodamascene process developed by C. Dubuc et al. at the Université de Sherbrooke. One of the advantages of this process is its compatibility with the back-end-of-line (BEOL) of CMOS circuits; it has the potential to build several layers of very dense memory circuits on top of CMOS wafers. This document presents, among other things, the development of a single-electron memory simulator as well as simulation results for different structures. The optimization of the fabrication process of single-electron devices and the realization of different simple SEM architectures are discussed. Optimizations were made at several levels: electron-beam lithography, oxide etching, titanium lift-off, metallization and CMP planarization. Electrical characterization allowed an in-depth study of devices formed of Ti/TiO2 junctions and showed that these materials are not suitable. On the other hand, a SET formed of TiN/Al2O3 junctions was successfully fabricated and characterized at low temperature. This demonstration shows the potential of the fabrication process and of atomic layer deposition (ALD) for the fabrication of single-electron memories. Keywords: single-electron transistor (SET), single-electron memory (SEM), tunnel junction, retention time, nanofabrication, electron-beam lithography, chemical-mechanical planarization.
NASA Astrophysics Data System (ADS)
Mathevet, T.; Joel, G.; Gottardi, F.; Nemoz, B.
2017-12-01
The aim of this communication is to present analyses of climate variability and change based on snow water equivalent (SWE) observations, reconstructions (1900-2016) and scenarios (2020-2100) for a hundred snow courses distributed across the French Alps. This issue has become particularly important over the last decade in regions where snow variability has a large impact on water resources availability, on poor snow conditions in ski resorts and on artificial snow production. As a water resources manager in French mountain regions, EDF (the French hydropower company) has developed and managed a hydrometeorological network since 1950. A recent data rescue effort allowed the digitization of long-term manual SWE measurements from a hundred snow courses in the French Alps; EDF has also been operating an automatic SWE sensor network, complementary to the snow course network. Based on numerous SWE observation time series and a snow accumulation and melt model (Garavaglia et al., 2017), continuous daily historical SWE time series have been reconstructed over the 1950-2016 period. These reconstructions have been extended back to 1900 using 20CR reanalyses (ANATEM method, Kuentz et al., 2015) and forward to 2100 using IPCC (GIEC) climate change scenarios. Considering various mountainous areas within the French Alps, this communication focuses on: (1) long-term (1900-2016) analyses of variability and trend of total precipitation, air temperature, snow water equivalent, snow line altitude and snow season length, (2) long-term variability of the hydrological regime of snow-dominated watersheds, and (3) future trends (2020-2100) under IPCC climate change scenarios. Comparing the historical period (1950-1984) with the recent period (1984-2016), quantitative results for a region in the northern Alps (Maurienne) show an increase in air temperature of 1.2 °C, an increase in snow line altitude of 200 m, a reduction in SWE of 200 mm/year and a reduction in snow season length of 15 days. These analyses will be extended from the north to the south of the Alps, over a region spanning 200 km. Characterizing the rise of the snow line and the reduction of SWE is particularly important at local and watershed scales. This long-term change in snow dynamics in mountain regions affects both ski resorts and artificial snow production, as well as the management of multi-purpose dam reservoirs.
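The period comparison summarized above amounts to computing climatological statistics over two windows of a daily SWE series and differencing them. A minimal sketch with synthetic data is given below; the SWE series, the 50 mm snow-cover threshold and the period boundaries are assumptions for illustration, not the study's data.

```python
import numpy as np
import pandas as pd

# Synthetic daily SWE reconstruction (mm) with a seasonal cycle and a decline.
dates = pd.date_range("1950-01-01", "2016-12-31", freq="D")
swe = pd.Series(np.maximum(0.0, 300 * np.sin(2 * np.pi * dates.dayofyear / 365)
                                 - 0.002 * np.arange(len(dates))), index=dates)

def period_stats(series, start, end):
    """Mean annual peak SWE and mean snow-season length over a period."""
    sel = series.loc[start:end]
    annual_max = sel.resample("YS").max()           # proxy for peak SWE
    season_len = (sel > 50).resample("YS").sum()    # days with SWE > 50 mm
    return annual_max.mean(), season_len.mean()

p1 = period_stats(swe, "1950", "1984")
p2 = period_stats(swe, "1984", "2016")
print("change in mean peak SWE (mm):", round(p2[0] - p1[0], 1))
print("change in mean snow-season length (days):", round(p2[1] - p1[1], 1))
```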
NASA Astrophysics Data System (ADS)
Allen, Steve
2000-10-01
In this thesis we present a new non-perturbative method for calculating the properties of a system of fermions. Our method generalizes the two-particle self-consistent approximation proposed by Vilk and Tremblay for the repulsive Hubbard model, and can be applied to the study of precritical behaviour when the symmetry of the order parameter is sufficiently high. We apply the method to the pseudogap problem in the attractive Hubbard model. Our results show excellent agreement with Monte Carlo data for small systems. We observe that the regime in which the pseudogap appears in the single-particle spectral weight is a renormalized classical regime characterized by a characteristic frequency of the superconducting fluctuations lower than the temperature. Another characteristic is the low superfluid density of this phase, showing that we are not in the presence of preformed pairs. The results obtained seem to show that the high symmetry of the order parameter and the two-dimensionality of the system studied widen the temperature range over which the pseudogap regime is observed. We argue that this result carries over to high-temperature superconductors, where the pseudogap appears at temperatures much higher than the critical temperature; the strong symmetry in these systems could be related to Zhang's SO(5) theory. In the appendix, we demonstrate a very recent result that would make it possible to ensure self-consistency between single-particle and two-particle properties by adding dynamics to the irreducible vertex. This addition opens the possibility of extending the method to the strong-interaction case.
Characterisation of Titanium Nitride Layers Deposited by Reactive Plasma Spraying
NASA Astrophysics Data System (ADS)
Roşu, Radu Alexandru; Şerban, Viorel-Aurel; Bucur, Alexandra Ioana; Popescu, Mihaela; Uţu, Dragoş
2011-01-01
Forming and cutting tools are subjected to intense wear. Usually, they are either given superficial heat treatments or coated with various materials having high mechanical properties. In recent years, thermal spraying has been used increasingly in engineering because of the large range of materials that can be used for the coatings. Titanium nitride is a ceramic material with high hardness that is used to coat cutting tools, increasing their lifetime. The paper presents the results obtained after deposition of titanium nitride layers by reactive plasma spraying (RPS). Titanium powder was used as the deposition material and a titanium alloy (Ti6Al4V) as the substrate. Macroscopic and microscopic (scanning electron microscopy) images of the deposited layers and X-ray diffraction patterns of the coatings are presented. A demonstration program with deposited layers of thickness between 68.5 and 81.4 μm was achieved and is presented.
Croissance epitaxiale de GaAs sur substrats de Ge par epitaxie par faisceaux chimiques
NASA Astrophysics Data System (ADS)
Belanger, Simon
The energy situation and the environmental issues facing society are generating growing interest in producing electricity from solar energy. Among the technologies currently available, concentrator photovoltaics (CPV) offers higher efficiency and interesting potential, provided its production costs are competitive. Chemical beam epitaxy (CBE) has several characteristics that make it attractive for the large-scale production of multi-junction photovoltaic cells based on III-V semiconductors. This type of cell has the best efficiency achieved to date and is used on satellites and on the most efficient concentrator photovoltaic (CPV) systems. One of the main strengths of the CBE technique lies in its potential source-material utilization efficiency, which is higher than that of the epitaxy technique commonly used for the large-scale production of these cells. This master's thesis presents the work carried out to evaluate the potential of the CBE technique for growing GaAs layers on Ge substrates. This growth is the first fabrication step of many designs of the high-performance solar cells described above. Carrying out this project required the development of a surface preparation process for the germanium substrates, numerous epitaxial growth runs, and the characterization of the materials obtained by optical microscopy, atomic force microscopy (AFM), high-resolution X-ray diffraction (HRXRD), transmission electron microscopy (TEM), low-temperature photoluminescence (LTPL) and secondary ion mass spectrometry (SIMS). The experiments confirmed the effectiveness of the surface preparation process and identified the optimal growth conditions. The characterization results indicate that the materials obtained exhibit very low surface roughness, good crystalline quality and a relatively high residual doping. Moreover, the GaAs/Ge interface has a low defect density. Finally, the arsenic diffusion into the germanium substrate is comparable to values found in the literature for low-temperature growth with the other common epitaxy processes. These results confirm that chemical beam epitaxy (CBE) can produce GaAs layers on Ge of adequate quality for the fabrication of high-performance solar cells. The contribution to the scientific community was maximized through an article submitted to the Journal of Crystal Growth and the presentation of this work at the Photovoltaics Canada 2010 conference. Keywords: Epitaxie par jets chimiques, Chemical beam epitaxy, CBE, MOMBE, Germanium, GaAs, Ge
NASA Astrophysics Data System (ADS)
Aubree, Nathan
Since 1990, the constitutive concrete model EPM3D (Multiaxial Progressive Damage in 3 Dimensions) has been developed at Polytechnique Montreal. Bouzaiene and Massicotte (1995) chose a hypoelastic approach with the concept of equivalent deformation and the introduction of a scalar damage parameter to represent the microcracking of concrete in pre-peak compression. The post-peak softening behaviour, in tension and in compression, is based on the concept of conservation of fracture energy. In the finite element context, this requires defining a localization limiter acting on the softening modulus as a function of the element size. The formulation of the EPM3D model for post-peak compression required revisions: mesh-dependence problems and the absence of any consideration of the confinement effect were the most important points to improve, with the modelling of the failure of reinforced concrete columns as the main goal. Through a complete literature review, we try to establish an exhaustive list of the numerous parameters influencing the softening behaviour under uniaxial and multiaxial loads. In the second part of this review, we present the difficulties of modelling a softening material with finite elements and the principle of the localization limiter that was implemented. Inspired by models found in the literature, modifications of the previously established relation are proposed, focusing on a more adequate representation of the behaviour under confinement. We then validate the model by means of simple analyses with the ABAQUS software and its explicit dynamic resolution module, called Explicit, and present its specific features compared with a classical implicit static resolution. We also provide advice to the reader and to future students who may model real reinforced concrete columns with EPM3D. Finally, an experimental program was carried out to characterize the post-peak behaviour in uniaxial compression of a fibre-reinforced concrete (FRC) mixture, with the aim of assessing whether our model can be extrapolated to FRC.
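A localization limiter of the kind mentioned above is commonly implemented in a crack-band fashion: the post-peak softening slope is rescaled with the element size so that the energy dissipated per unit crack area stays equal to the fracture energy G_f, whatever the mesh. The sketch below illustrates that generic adjustment for a linear softening law; it is not the exact EPM3D relation, and the material values are assumed for illustration.

```python
# Generic crack-band style localization limiter: pick the post-peak ultimate
# strain so that the energy dissipated by an element band of size h equals
# the fracture energy G_f. Illustrative values only.
def softening_strain(f_t, E, G_f, h):
    """Ultimate strain of a linear softening law for an element of size h.

    Dissipated energy per unit crack area:
        0.5 * f_t * (eps_u - f_t / E) * h == G_f
    """
    eps_elastic = f_t / E
    return eps_elastic + 2.0 * G_f / (f_t * h)

f_t, E, G_f = 3.0e6, 30.0e9, 100.0      # Pa, Pa, J/m^2 (assumed concrete values)
for h in (0.01, 0.05, 0.20):            # element sizes in metres
    print(f"h = {h:5.2f} m -> ultimate strain = {softening_strain(f_t, E, G_f, h):.2e}")
```

The smaller the element, the longer the softening branch, which is exactly what keeps the dissipated energy mesh-independent.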
MICROROC: MICRO-mesh gaseous structure Read-Out Chip
NASA Astrophysics Data System (ADS)
Adloff, C.; Blaha, J.; Chefdeville, M.; Dalmaz, A.; Drancourt, C.; Dulucq, F.; Espargilière, A.; Gaglione, R.; Geffroy, N.; Jacquemier, J.; Karyotakis, Y.; Martin-Chassard, G.; Prast, J.; Seguin-Moreau, N.; de La Taille, Ch; Vouters, G.
2012-01-01
MICRO MEsh GAseous Structure (MICROMEGAS) and Gas Electron Multiplier (GEM) detectors are two candidates for the active medium of a Digital Hadronic CALorimeter (DHCAL) as part of a high energy physics experiment at a future linear collider (ILC/CLIC). Physics requirements lead to a highly granular hadronic calorimeter with up to thirty million channels, probably with only hit information (digital readout calorimeter). To validate the concept of digital hadronic calorimetry with such a small cell size, the construction and test of a cubic-metre technological prototype, made of 40 planes of one square metre each, is necessary. This technological prototype would contain about 400,000 electronic channels, thus requiring the development of a front-end ASIC. Based on the experience gained with previous ASICs that were mounted on detectors and tested in particle beams, a new ASIC called MICROROC has been developed. This paper summarizes the characterisation campaign conducted on this new chip as well as its integration into a large-area Micromegas chamber of one square metre.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cadene, M.
Thin films of Cd(1-y)Zn(y)S (0 < y < 0.2) have been prepared either by thermal evaporation of the powdered solids from a single crucible, or by rapid evaporation from two crucibles. Different methods were used to characterise the films according to their structural, electrical and electron-optical properties as a function of the amount of Zn in the film. Both liquid-phase and solid-phase ion exchange processes have been used to deposit a thin film of Cu2S on the Cd(1-y)Zn(y)S film to produce a p-n hetero-junction. A study of the growth of the Cu2S layer has been carried out. Photocurrents and voltages have been determined for these Cu2S-CdZnS cells.
Towards a complete characterisation of Ganymede's environment
NASA Astrophysics Data System (ADS)
Cessateur, Gaël; Barthélémy, Mathieu; Lilensten, Jean; Dudok de Wit, Thierry; Kretzschmar, Matthieu; Mbemba Kabuiku, Lydie
2013-04-01
In the framework of the JUICE mission to the Jovian system, a complete picture of the interaction between Ganymede's atmosphere and external forcing is needed. This will allow us to constrain instrument performance according to the mission objectives. The main source of information regarding the upper atmosphere is the non-LTE UV-visible-near-IR emissions. These emissions are induced both by the incident solar UV flux and by particle precipitation. This work aims at characterizing the impact of these external forcings, and then at deriving some key physical parameters that are measurable by an orbiter, namely the oxygen red line at 630 nm or the resonant oxygen line at 130 nm, for example. We will also present the 4S4J instrument, a proposed EUV radiometer, which will provide the local solar EUV flux, an invaluable parameter for the JUICE mission. Based on new technologies and a new design, only two passbands are considered for reconstructing the whole EUV spectrum.
Qualitative, semi-quantitative, and quantitative simulation of the osmoregulation system in yeast
Pang, Wei; Coghill, George M.
2015-01-01
In this paper we demonstrate how Morven, a computational framework which can perform qualitative, semi-quantitative, and quantitative simulation of dynamical systems using the same model formalism, is applied to study the osmotic stress response pathway in yeast. First the Morven framework itself is briefly introduced in terms of the model formalism employed and output format. We then built a qualitative model for the biophysical process of the osmoregulation in yeast, and a global qualitative-level picture was obtained through qualitative simulation of this model. Furthermore, we constructed a Morven model based on existing quantitative model of the osmoregulation system. This model was then simulated qualitatively, semi-quantitatively, and quantitatively. The obtained simulation results are presented with an analysis. Finally the future development of the Morven framework for modelling the dynamic biological systems is discussed. PMID:25864377
NASA Astrophysics Data System (ADS)
Blanchard, Yann
An important goal, within the context of improving climate change modelling, is to enhance our understanding of aerosols and their radiative effects (notably their indirect impact as cloud condensation nuclei). The cloud optical depth (COD) and average ice particle size of thin ice clouds (TICs) are two key parameters whose variations could strongly influence radiative effects and climate in the Arctic environment. Our objective was to assess the potential of multi-band measurements of zenith-sky thermal radiance for retrieving the COD and effective particle diameter (Deff) of TICs in the Arctic. We analyzed and quantified the sensitivity of the thermal radiance to many parameters, such as COD, Deff, water vapour content, cloud base altitude and thickness, size distribution and shape. Exploiting the sensitivity of the infrared radiances to COD and Deff, the retrieval technique was applied to ground-based thermal infrared data acquired for 100 TICs at the high-Arctic PEARL observatory in Eureka, Nunavut, Canada, and was validated against AHSRL LIDAR and MMCR RADAR retrievals. The results of the retrieval method were used to successfully extract COD up to values of 3 and to separate TICs into two types: TIC1, characterized by small crystals (Deff < 30 μm), and TIC2, by large ice crystals (Deff > 30 μm, up to 300 μm). Inversions were performed across two polar winters. At the end of this research, we proposed different alternatives for applying our methodology in the Arctic. Keywords: remote sensing; ice clouds; thermal infrared multi-band radiometry; Arctic.
Prediction du profil de durete de l'acier AISI 4340 traite thermiquement au laser
NASA Astrophysics Data System (ADS)
Maamri, Ilyes
Surface heat treatments are processes intended to give the core and the surface of mechanical parts different properties. They improve wear and fatigue resistance by hardening critical surface zones through short, localized heat input. Among the processes that stand out for their surface power density, laser surface heat treatment offers fast, localized and precise thermal cycles while limiting the risk of unwanted distortion. The mechanical properties of the hardened zone obtained by this process depend on the physicochemical properties of the material to be treated and on several process parameters. To properly exploit the possibilities offered by this process, strategies must be developed to control and adjust the parameters so as to produce the desired characteristics of the hardened surface accurately, without resorting to the classic long and costly trial-and-error approach. The objective of the project is therefore to develop models to predict the hardness profile in the heat treatment of AISI 4340 steel parts. To understand the behaviour of the process and evaluate the effects of the various parameters on the quality of the treatment, a sensitivity study was conducted based on a structured experimental design combined with proven statistical analysis techniques. The results of this study allowed the identification of the most relevant variables to use for modelling. Following this analysis, and to build a first model, two modelling techniques were considered: multiple regression and neural networks. Both techniques led to models of acceptable quality, with an accuracy of about 90%. To improve the performance of the neural network models, two new approaches based on the geometric characterization of the hardness profile were considered. Unlike the first models, which predict the hardness profile as a function of the process parameters, the new models combine the same parameters with geometric attributes of the hardness profile to reflect the quality of the treatment. The resulting models show that this strategy leads to very promising results.
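A regression model of the kind described above maps process parameters to a hardness-profile attribute. The sketch below shows a minimal scikit-learn version; the choice of input features (laser power, scanning speed), the synthetic response surface and the target quantity (hardened-case depth) are assumptions for illustration, not the thesis data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic training set: process parameters -> hardened-case depth (mm).
rng = np.random.default_rng(1)
power = rng.uniform(500, 2500, 300)     # W, assumed laser power range
speed = rng.uniform(5, 50, 300)         # mm/s, assumed scanning speed range
X = np.column_stack([power, speed])
depth = 0.4e-3 * power / speed + rng.normal(0, 0.02, 300)   # toy relation

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(20, 20),
                                   max_iter=5000, random_state=0))
model.fit(X, depth)
print("predicted case depth at 1500 W, 20 mm/s:",
      round(float(model.predict([[1500.0, 20.0]])[0]), 3), "mm")
```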
NASA Astrophysics Data System (ADS)
Saghir, Hassane
Aircraft systems are interconnected by cable bundles that may represent a hundred kilometres of wiring. This wiring penalizes the aircraft weight, cable bundles favour electromagnetic interference on board, and routing a new cable to integrate new equipment boxes in an in-service aircraft requires a lot of retrofit work. Consequently, the aviation industry and the aerospace community are working, within different projects, on new alternatives that will better suit the future generation of aircraft and help reduce the interconnecting wires on board. Wireless technologies represent a coveted solution that could bring significant improvements and benefits to new generations of aircraft. This research work focuses on the study of wireless propagation over certain frequency bands inside commercial aircraft. The main objective is to provide conclusions and recommendations on criteria that may help optimize the wireless communication without affecting existing systems. The targeted applications are the in-flight entertainment (IFE) service and wireless sensing systems. This work was conducted in collaboration with Bombardier Aerospace, based in Montreal (QC), in the frame of the AVIO-402 project under a CRIAQ grant (http://www.criaq.aero/). In this study, an experimental characterization of the propagation channel in the ISM bands around 2.4 GHz and 5.8 GHz was performed in a CRJ700 aircraft from Bombardier Aerospace. This characterization made it possible to extract the parameters needed to analyze the channel behaviour. The measurement results showed that the propagation characteristics are close to those of a typical indoor environment in terms of delay spread and to those of a tunnel in terms of path loss. Then, 3D channel modelling and simulation were carried out with RF prediction software (Remcom Wireless InSite); the simulations also consider the millimetre band around 60 GHz. The simulations yielded analytical models of radio coverage, which were subsequently used to evaluate wireless link interference scenarios and performance metrics. Finally, these models were used to design a TDL (tapped delay line) channel model with a view to implementing it in Matlab within a wireless transmission chain.
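A tapped-delay-line channel model like the one mentioned above represents the multipath channel as a few discrete taps, each with a delay and a (possibly fading) complex gain; the received signal is the transmit signal convolved with that sparse impulse response. A minimal sketch follows; the tap delays, powers and sample rate are illustrative assumptions, not the measured CRJ700 profile.

```python
import numpy as np

fs = 20e6                                        # sample rate (Hz), assumed
tap_delays_s = np.array([0.0, 50e-9, 120e-9, 300e-9])   # assumed tap delays
tap_powers_db = np.array([0.0, -3.0, -6.0, -10.0])        # assumed tap powers

delays = np.round(tap_delays_s * fs).astype(int)
gains = 10 ** (tap_powers_db / 20.0)
rng = np.random.default_rng(2)
# One static Rayleigh draw per tap, for simplicity (no time variation here)
taps = gains * (rng.normal(size=4) + 1j * rng.normal(size=4)) / np.sqrt(2)

def apply_tdl(x):
    """Convolve transmit samples with the sparse TDL impulse response."""
    h = np.zeros(delays.max() + 1, dtype=complex)
    h[delays] = taps
    return np.convolve(x, h)

tx = np.exp(2j * np.pi * 0.01 * np.arange(1000))   # toy transmit waveform
rx = apply_tdl(tx)
mean_delay = np.average(tap_delays_s, weights=gains**2)
rms_spread = np.sqrt(np.average((tap_delays_s - mean_delay)**2, weights=gains**2))
print("RMS delay spread of the assumed profile (ns):", round(rms_spread * 1e9, 1))
```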
NASA Astrophysics Data System (ADS)
Demers, Vincent
The objective of this project is to determine the rolling conditions and the heat-treatment temperature that maximize the functional properties of the Ti-Ni shape memory alloy. The specimens are characterized by calorimetry measurements, optical microscopy, stress generation, recoverable strain and mechanical tests. For a single cycle, using a cold-work level of e=1.5, obtained with a tensile force FT = 0.1 sigma_y and a mineral oil, results in a straight, crack-free sample which, after annealing at 400 °C, yields a nanostructured material exhibiting functional properties twice as large as those of the same material with a polygonized structure. For repeated cycles, the same rolling conditions remain valid but the optimal deformation level lies between e=0.75 and 2, and depends in particular on the loading mode, the stabilization level and the number of cycles to failure required by the application.
NASA Astrophysics Data System (ADS)
Danouj, Boujemaa
An important issue affecting the sustainability of power transformers is the systematic and progressive deterioration of the insulation system by the action of partial discharges. Ideally, on-line, non-destructive techniques should be used for the detection and diagnosis of insulation-related failures, in order to determine whether preventive maintenance action is required. Huge material losses can thus be avoided, while improving reliability and system availability. Based on a new generation of piezoelectric sensors (High Temperature Ultrasonic Transducers, HTUTs), recently developed by the Industrial Materials Institute (IMI) in Boucherville (QC, Canada) and offering very attractive features (broadband frequency response, flexible, miniature, economical, etc.), we propose in this thesis an investigation of the applicability of this technology to the problem of partial discharges. This work presents an analysis of the metrological performance of these sensors and demonstrates empirically the consistency of their measurements. It outlines validation results from a comparative study against the measurements of a standard detection circuit. In addition, it presents the potential of these sensors for locating the position of a partial discharge source by acoustic emission.
NASA Astrophysics Data System (ADS)
Amouriq, Yves; Guedon, Jeanpierre; Normand, Nicolas; Arlicot, Aurore; Benhdech, Yassine; Weiss, Pierre
2011-03-01
Bone microarchitecture is a predictor of bone quality and bone disease. It can only be measured on a bone biopsy, which is invasive and not available in all clinical situations. Texture analysis on radiographs is a common way to investigate bone microarchitecture, but the relationship between three-dimensional histomorphometric parameters and two-dimensional texture parameters is not always well established, and results are often poor. The aim of this study is to perform angulated radiographs of the same region of interest and to see whether a better relationship between texture analysis on several radiographs and histomorphometric parameters can be developed. Computed radiography images of dog (Beagle) mandible sections in molar regions were compared with high-resolution micro-CT (computed tomography) volumes. Four radiographs at a 27° angle (up, down, left, right; using a Rinn ring and a customized arm positioning system) were acquired from the initial radiograph position. Bone texture parameters were calculated on all images. Texture parameters were also computed on new images obtained as differences between angulated images. Fractal values in different trabecular areas provide some characterisation of the bone microarchitecture.
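Fractal texture parameters of the kind mentioned above are often estimated with a box-counting scheme on a binarized region of interest. The sketch below shows that generic estimator, not necessarily the exact one used in the study; the thresholded ROI is synthetic.

```python
import numpy as np

def box_counting_dimension(binary_roi):
    """Estimate the fractal (box-counting) dimension of a binary 2D pattern."""
    n = min(binary_roi.shape)
    sizes = [s for s in (2, 4, 8, 16, 32) if s < n]
    counts = []
    for s in sizes:
        h, w = (binary_roi.shape[0] // s) * s, (binary_roi.shape[1] // s) * s
        blocks = binary_roi[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))   # occupied boxes
    # slope of log(count) vs log(1/size) gives the dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(3)
roi = rng.random((128, 128)) > 0.6      # stand-in for a thresholded trabecular ROI
print("box-counting dimension:", round(box_counting_dimension(roi), 2))
```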
NASA Astrophysics Data System (ADS)
Louis, Ognel Pierre
The goal of this study is to develop a tool for estimating the risk of vigour loss in forest stands of the Gounamitz region, in northwestern New Brunswick, using forest inventory data and remote sensing data. To this end, a 100 m x 100 m marteloscope and 20 sampling plots were delimited. Within them, the risk level of vigour loss was determined for trees with a DBH greater than or equal to 9 cm. To characterize the risk of vigour loss, tree positions were recorded with a GPS, taking stem defects into account. The vegetation and texture indices and the spectral bands of the airborne image were extracted and used as independent variables. The risk level of vigour loss obtained by tree species from the forest inventories was used as the dependent variable. To obtain the area of the forest stands in the study region, a supervised classification of the images was performed using the maximum likelihood algorithm. The risk level of vigour loss per tree type was then estimated with neural networks, using a multilayer perceptron. This network consists of 11 neurons in the input layer, corresponding to the independent variables, 35 neurons in the hidden layer, and 4 neurons in the output layer. Prediction with the neural networks produces a confusion matrix from which quantitative estimation measures are obtained, notably an overall classification rate of 91.7% for predicting the risk of vigour loss of the softwood stand and 89.7% for the hardwood stand. Evaluation of the neural network performance gives an overall MSE (mean square error) of 0.04 and an overall RMSE (root mean square error) of 0.20 for the hardwood stand. For the softwood stand, an overall MSE of 0.05 and an overall RMSE of 0.22 were obtained. To validate the results, the predicted risk of vigour loss was compared with the reference risk of vigour loss. The results give a coefficient of determination of 0.98 for the hardwood stand and 0.93 for the softwood stand.
Ku, Hyung-Keun; Lim, Hyuk-Min; Oh, Kyong-Hwa; Yang, Hyo-Jin; Jeong, Ji-Seon; Kim, Sook-Kyung
2013-03-01
The Bradford assay is a simple method for protein quantitation, but variation in the results between proteins is a matter of concern. In this study, we compared and normalized quantitative values from two models for protein quantitation, where the residues in the protein that bind to anionic Coomassie Brilliant Blue G-250 comprise either Arg and Lys (Method 1, M1) or Arg, Lys, and His (Method 2, M2). Use of the M2 model yielded much more consistent quantitation values compared with use of the M1 model, which exhibited marked overestimations against protein standards. Copyright © 2012 Elsevier Inc. All rights reserved.
Using Qualitative Hazard Analysis to Guide Quantitative Safety Analysis
NASA Technical Reports Server (NTRS)
Shortle, J. F.; Allocco, M.
2005-01-01
Quantitative methods can be beneficial in many types of safety investigations. However, there are many difficulties in using quantitative methods. For example, there may be little relevant data available. This paper proposes a framework for using qualitative hazard analysis to prioritize the hazard scenarios most suitable for quantitative analysis. The framework first categorizes hazard scenarios by severity and likelihood. We then propose another metric, "modeling difficulty," that describes the complexity of modeling a given hazard scenario quantitatively. The combined metrics of severity, likelihood, and modeling difficulty help to prioritize the hazard scenarios for which quantitative analysis should be applied. We have applied this methodology to proposed concepts of operations for reduced wake separation for airplane operations at closely spaced parallel runways.
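A minimal sketch of the kind of prioritization this abstract describes, combining severity, likelihood, and modeling difficulty into a single ranking score; the scales, scoring rule, and scenario names below are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch: rank hazard scenarios by severity, likelihood, and
# modeling difficulty. All scales and example scenarios are assumptions.
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    severity: int             # 1 (minor) .. 5 (catastrophic)
    likelihood: int           # 1 (rare)  .. 5 (frequent)
    modeling_difficulty: int  # 1 (easy to model) .. 5 (very hard)

def priority(s: Scenario) -> float:
    """Favor severe, likely scenarios that are still tractable to model."""
    return s.severity * s.likelihood / s.modeling_difficulty

scenarios = [
    Scenario("wake encounter on parallel approach", 4, 3, 2),
    Scenario("simultaneous missed approaches", 5, 1, 4),
    Scenario("blunder toward adjacent runway", 5, 2, 3),
]
for s in sorted(scenarios, key=priority, reverse=True):
    print(f"{s.name}: priority = {priority(s):.2f}")
```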
The mathematics of cancer: integrating quantitative models.
Altrock, Philipp M; Liu, Lin L; Michor, Franziska
2015-12-01
Mathematical modelling approaches have become increasingly abundant in cancer research. The complexity of cancer is well suited to quantitative approaches as it provides challenges and opportunities for new developments. In turn, mathematical modelling contributes to cancer research by helping to elucidate mechanisms and by providing quantitative predictions that can be validated. The recent expansion of quantitative models addresses many questions regarding tumour initiation, progression and metastases as well as intra-tumour heterogeneity, treatment responses and resistance. Mathematical models can complement experimental and clinical studies, but also challenge current paradigms, redefine our understanding of mechanisms driving tumorigenesis and shape future research in cancer biology.
Generalized PSF modeling for optimized quantitation in PET imaging.
Ashrafinia, Saeed; Mohy-Ud-Din, Hassan; Karakatsanis, Nicolas A; Jha, Abhinav K; Casey, Michael E; Kadrmas, Dan J; Rahmim, Arman
2017-06-21
Point-spread function (PSF) modeling offers the ability to account for resolution-degrading phenomena within the PET image generation framework. PSF modeling improves resolution and enhances contrast, but at the same time significantly alters image noise properties and induces edge overshoot effects. Thus, studying the effect of PSF modeling on quantitation task performance can be very important. Frameworks explored in the past involved a dichotomy of PSF versus no-PSF modeling. By contrast, the present work focuses on quantitative performance evaluation of standard uptake value (SUV) PET images, while incorporating a wide spectrum of PSF models, including those that under- and over-estimate the true PSF, for the potential of enhanced quantitation of SUVs. The developed framework first analytically models the true PSF, considering a range of resolution degradation phenomena (including photon non-collinearity, inter-crystal penetration and scattering) as present in data acquisitions with modern commercial PET systems. In the context of oncologic liver FDG PET imaging, we generated 200 noisy datasets per image-set (with clinically realistic noise levels) using an XCAT anthropomorphic phantom with liver tumours of varying sizes. These were subsequently reconstructed using the OS-EM algorithm with varying PSF-modelled kernels. We focused on quantitation of both SUV_mean and SUV_max, including assessment of contrast recovery coefficients, as well as noise-bias characteristics (including both image roughness and coefficient of variability), for different tumours/iterations/PSF kernels. It was observed that overestimated PSF yielded more accurate contrast recovery for a range of tumours, and typically improved quantitative performance. For a clinically reasonable number of iterations, edge enhancement due to PSF modeling (especially due to over-estimated PSF) was in fact seen to lower SUV_mean bias in small tumours. Overall, the results indicate that exactly matched PSF modeling does not offer optimized PET quantitation, and that PSF overestimation may provide enhanced SUV quantitation. Furthermore, generalized PSF modeling may provide a valuable approach for quantitative tasks such as treatment-response assessment and prognostication.
Tiwari, Anjani K; Ojha, Himanshu; Kaul, Ankur; Dutta, Anupama; Srivastava, Pooja; Shukla, Gauri; Srivastava, Rakesh; Mishra, Anil K
2009-07-01
Nuclear magnetic resonance imaging is a very useful tool in modern medical diagnostics, especially when gadolinium(III)-based contrast agents are administered to the patient with the aim of increasing the image contrast between normal and diseased tissues. Using soft-modelling techniques such as quantitative structure-activity relationship (QSAR)/quantitative structure-property relationship (QSPR) analysis, after a suitable description of their molecular structure, we have studied a series of phosphonic acids for designing new MRI contrast agents. QSPR studies with multiple linear regression analysis were applied to find correlations between different calculated molecular descriptors of the phosphonic acid-based chelating agents and their stability constants. The final QSPR mathematical models for the phosphonic acid series were: Model 1, log K(ML) = 5.00243(+/-0.7102) - 0.0263(+/-0.540) x MR, with n = 12, |r| = 0.942, s = 0.183, F = 99.165; and Model 2, log K(ML) = 5.06280(+/-0.3418) - 0.0252(+/-0.198) x MR, with n = 12, |r| = 0.956, s = 0.186, F = 99.256.
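As a rough illustration of the regression step described above, the sketch below fits a one-descriptor model of the form log K(ML) = a + b x MR by ordinary least squares; the descriptor values and stability constants are placeholder numbers, not the paper's data.

```python
# Illustrative QSPR fit: log K(ML) regressed on molar refractivity (MR).
# The MR and logK arrays are made-up stand-ins for a 12-compound series.
import numpy as np

MR = np.array([20.1, 24.3, 28.7, 31.2, 35.8, 40.4,
               44.9, 49.5, 53.0, 58.6, 62.1, 66.7])
logK = np.array([4.5, 4.4, 4.3, 4.2, 4.1, 4.0,
                 3.9, 3.8, 3.7, 3.6, 3.5, 3.4])

X = np.column_stack([np.ones_like(MR), MR])          # intercept + descriptor
coef, *_ = np.linalg.lstsq(X, logK, rcond=None)
a, b = coef
pred = X @ coef
r = np.corrcoef(pred, logK)[0, 1]                     # correlation coefficient
s = np.sqrt(np.sum((logK - pred) ** 2) / (len(logK) - 2))  # standard error
print(f"log K(ML) = {a:.4f} + ({b:.4f})*MR,  |r| = {abs(r):.3f},  s = {s:.3f}")
```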
NASA Astrophysics Data System (ADS)
Tutashkonko, Sergii
This thesis deals with the development of a new nanomaterial by bipolar electrochemical etching (BEE), mesoporous Ge, and with the analysis of its physico-chemical properties with a view to its use in photovoltaic applications. The formation of mesoporous Ge by electrochemical etching had previously been reported in the literature. However, the major technological bottleneck of existing fabrication processes was obtaining thick layers (greater than 500 nm) of mesoporous Ge with a perfectly controlled morphology. Indeed, the physico-chemical characterization of thin layers is far more complicated and the number of their possible applications is severely limited. We developed an electrochemical model describing the main pore-formation mechanisms, which allowed us to produce thick mesoporous Ge structures (up to 10 um) with porosity tunable over a wide range, from 15% to 60%. In addition, the formation of porous nanostructures with varied and well-controlled morphologies has now become possible. Finally, mastery of all these parameters has opened an extremely promising path toward porous multilayer Ge-based structures for numerous innovative and multidisciplinary applications, thanks to the technological flexibility now achieved. In particular, within the framework of this thesis, the mesoporous Ge layers were optimized in order to carry out a thin-film transfer process for a triple-junction solar cell via a sacrificial porous Ge layer. Keywords: mesoporous germanium, bipolar electrochemical etching, semiconductor electrochemistry, thin-film transfer, photovoltaic cell
NASA Astrophysics Data System (ADS)
El Mansouri, Souleimane
In the linear viscoelastic (LVE, small-strain) domain, the thermomechanical behaviour of bitumen and of bituminous mastic (a uniform mixture of bitumen and fillers) was characterized at the Laboratoire des Chaussees et Materiaux Bitumineux (LCMB) of the Ecole de technologie superieure (ETS), with the support of our external partners: the Societe des Alcools du Quebec (SAQ) and Eco Entreprises Quebec (EEQ). The rheological properties of the bitumens and mastics were measured with a new investigation tool, the Annular Shear Rheometer (RCA), under different loading conditions. This apparatus not only allows testing of specimens that are large compared with those used in conventional tests, but also allows tests under quasi-homogeneous conditions, giving access to the constitutive law of the materials. Tests were carried out over a wide range of temperatures and frequencies (from -15°C to 45°C and from 0.03 Hz to 10 Hz). This study was conducted mainly to compare the behaviour of a bitumen with that of a bituminous mastic at small strains. In a second step, the influence of post-consumer glass fillers on the behaviour of a mastic at low strain levels was examined by comparing the evolution of the complex shear moduli (G*) of a mastic with glass fillers and of a mastic with conventional (limestone) fillers. Finally, the 2S2P1D analogical model is used to simulate the linear viscoelastic behaviour of the bitumens and bituminous mastics tested during the experimental campaign.
An Assessment of the Quantitative Literacy of Undergraduate Students
ERIC Educational Resources Information Center
Wilkins, Jesse L. M.
2016-01-01
Quantitative literacy (QLT) represents an underlying higher-order construct that accounts for a person's willingness to engage in quantitative situations in everyday life. The purpose of this study is to retest the construct validity of a model of quantitative literacy (Wilkins, 2010). In this model, QLT represents a second-order factor that…
An overview of quantitative approaches in Gestalt perception.
Jäkel, Frank; Singh, Manish; Wichmann, Felix A; Herzog, Michael H
2016-09-01
Gestalt psychology is often criticized as lacking quantitative measurements and precise mathematical models. While this is true of the early Gestalt school, today there are many quantitative approaches in Gestalt perception and the special issue of Vision Research "Quantitative Approaches in Gestalt Perception" showcases the current state-of-the-art. In this article we give an overview of these current approaches. For example, ideal observer models are one of the standard quantitative tools in vision research and there is a clear trend to try and apply this tool to Gestalt perception and thereby integrate Gestalt perception into mainstream vision research. More generally, Bayesian models, long popular in other areas of vision research, are increasingly being employed to model perceptual grouping as well. Thus, although experimental and theoretical approaches to Gestalt perception remain quite diverse, we are hopeful that these quantitative trends will pave the way for a unified theory. Copyright © 2016 Elsevier Ltd. All rights reserved.
Zhan, Xue-yan; Zhao, Na; Lin, Zhao-zhou; Wu, Zhi-sheng; Yuan, Rui-juan; Qiao, Yan-jiang
2014-12-01
The choice of algorithm for calibration set selection is one of the key factors in building a good NIR quantitative model. Several algorithms exist for calibration set selection, such as Random Sampling (RS), Conventional Selection (CS), Kennard-Stone (KS), and Sample set Partitioning based on joint x-y distances (SPXY), but systematic comparisons among them are lacking. In the present paper, NIR quantitative models for determining the asiaticoside content in Centella total glucosides were established, 7 indexes were classified and selected, and the effects of the CS, KS, and SPXY algorithms for calibration set selection on the accuracy and robustness of the NIR quantitative models were investigated. The accuracy indexes of models built with calibration sets selected by the SPXY algorithm were significantly different from those of models built with the CS or KS algorithm, whereas the robustness indexes, such as RMSECV and |RMSEP-RMSEC|, were not significantly different. Therefore, the SPXY algorithm for calibration set selection can improve the predictive accuracy of NIR quantitative models for determining asiaticoside content in Centella total glucosides without significantly affecting their robustness, which provides a reference for choosing an appropriate calibration set selection algorithm when NIR quantitative models are established for solid systems of traditional Chinese medicine.
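For readers unfamiliar with the KS algorithm named above, here is a minimal sketch under its usual definition (start from the two most distant samples, then repeatedly add the sample farthest from the current calibration set); the spectra are random stand-ins, not NIR data.

```python
# Minimal Kennard-Stone (KS) calibration-set selection sketch.
import numpy as np

def kennard_stone(X: np.ndarray, n_select: int) -> list:
    """Return indices of n_select samples chosen by max-min Euclidean distance."""
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(dist), dist.shape)  # two most distant samples
    selected = [int(i), int(j)]
    remaining = [k for k in range(len(X)) if k not in selected]
    while len(selected) < n_select:
        # distance of each remaining sample to its nearest already-selected sample
        d_to_sel = dist[np.ix_(remaining, selected)].min(axis=1)
        nxt = remaining[int(np.argmax(d_to_sel))]
        selected.append(nxt)
        remaining.remove(nxt)
    return selected

rng = np.random.default_rng(0)
spectra = rng.normal(size=(40, 200))      # 40 synthetic spectra, 200 wavelengths
print(kennard_stone(spectra, n_select=30))
```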
Wu, Zujian; Pang, Wei; Coghill, George M
2015-01-01
Both qualitative and quantitative model learning frameworks for biochemical systems have been studied in computational systems biology. In this research, after introducing two forms of pre-defined component patterns to represent biochemical models, we propose an integrative qualitative and quantitative modelling framework for inferring biochemical systems. In the proposed framework, interactions between reactants in the candidate models for a target biochemical system are evolved and eventually identified by the application of a qualitative model learning approach with an evolution strategy. Kinetic rates of the models generated from qualitative model learning are then further optimised by employing a quantitative approach with simulated annealing. Experimental results indicate that our proposed integrative framework is feasible to learn the relationships between biochemical reactants qualitatively and to make the model replicate the behaviours of the target system by optimising the kinetic rates quantitatively. Moreover, potential reactants of a target biochemical system can be discovered by hypothesising complex reactants in the synthetic models. Based on the biochemical models learned from the proposed framework, biologists can further perform experimental study in wet laboratory. In this way, natural biochemical systems can be better understood.
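A hedged sketch of the quantitative step of such a framework: refining the kinetic rate of a candidate model by simulated annealing so that its simulated behaviour matches target time courses. The toy single-reaction model, cooling schedule, and constants are illustrative assumptions, not the authors' implementation.

```python
# Toy simulated-annealing refinement of a kinetic rate for A -> B.
import math
import random

def simulate(k, a0=1.0, dt=0.1, steps=50):
    """Euler integration of dA/dt = -k*A for a one-reaction toy model."""
    a, traj = a0, []
    for _ in range(steps):
        a += dt * (-k * a)
        traj.append(a)
    return traj

target = simulate(0.35)                      # pretend these are observed time courses

def cost(k):
    return sum((x - y) ** 2 for x, y in zip(simulate(k), target))

random.seed(1)
k, temp = 1.0, 1.0
while temp > 1e-3:
    k_new = abs(k + random.gauss(0.0, 0.1))  # propose a nearby rate constant
    delta = cost(k_new) - cost(k)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        k = k_new                            # accept improving or lucky moves
    temp *= 0.95                             # geometric cooling

print(f"recovered rate constant: {k:.3f} (true value 0.35)")
```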
Mesure de haute resolution de la fonction de distribution radiale du silicium amorphe pur
NASA Astrophysics Data System (ADS)
Laaziri, Khalid
1999-11-01
This thesis deals with the structural study of amorphous silicon prepared by ion irradiation. It presents X-ray diffraction measurements on crystalline silicon powder and on relaxed and unrelaxed amorphous silicon, together with all the mathematical and physical developments needed to extract the radial distribution function of each sample. In Chapter I, we present a method for fabricating thin membranes of pure amorphous silicon. The fabrication process has two major steps: ion implantation, to create an amorphous layer several microns thick, and chemical etching, to remove the remaining crystalline material. We first characterized the amorphous silicon membranes by Raman spectroscopy to verify that no trace of crystalline material remained in the amorphous films. A second characterization by elastic recoil detection (ERD-TOF) on the same membranes showed less than 0.1 atomic % of contaminants such as oxygen, carbon, and hydrogen. In Chapter II, we propose a new method for correcting the inelastic "Compton" contribution of the total scattering spectra in order to extract the elastic scattering peaks responsible for Bragg diffraction. The article first presents a simplified description of a theory of inelastic scattering, the Impulse Approximation (IA), which allows Compton profiles to be calculated as a function of energy and scattering angle 2theta. These profiles are used as fitting functions for the experimental Compton scattering. To fit the elastic scattering peaks, we used an asymmetric peak function. In Chapter III, we present in detail the results of the X-ray diffraction experiments on the amorphous silicon membranes and the crystalline silicon powder we prepared. We also cover the various experimental and analysis steps, as well as the methods used to determine and filter the Fourier transforms of the diffraction data. A comparison of the radial distribution functions of relaxed and unrelaxed amorphous silicon indicates that structural relaxation in amorphous silicon is probably due largely to defect annihilation rather than to a global atomic reorganization of the amorphous silicon network. Deducing the coordination of the first-neighbour peaks by Gaussian fitting gives a coordination of 3.88 for relaxed amorphous silicon and 3.79 for unrelaxed amorphous silicon, whereas the reference measurement on crystalline silicon powder gives a value of 4, as expected. The under-coordination of amorphous silicon would explain why its density is lower than that of crystalline silicon. (Abstract shortened by UMI.)
75 FR 79370 - Official Release of the MOVES2010a and EMFAC2007 Motor Vehicle Emissions Models for...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-20
...: This Notice announces the availability of two new EPA guidance documents for: completing quantitative... of the MOVES model (MOVES2010a) for official use for quantitative CO, PM 2.5, and PM 10 hot-spot... emissions model is required to be used in quantitative CO and PM hot-spot analyses for project-level...
Hadfield, J D; Nakagawa, S
2010-03-01
Although many of the statistical techniques used in comparative biology were originally developed in quantitative genetics, subsequent development of comparative techniques has progressed in relative isolation. Consequently, many of the new and planned developments in comparative analysis already have well-tested solutions in quantitative genetics. In this paper, we take three recent publications that develop phylogenetic meta-analysis, either implicitly or explicitly, and show how they can be considered as quantitative genetic models. We highlight some of the difficulties with the proposed solutions, and demonstrate that standard quantitative genetic theory and software offer solutions. We also show how results from Bayesian quantitative genetics can be used to create efficient Markov chain Monte Carlo algorithms for phylogenetic mixed models, thereby extending their generality to non-Gaussian data. Of particular utility is the development of multinomial models for analysing the evolution of discrete traits, and the development of multi-trait models in which traits can follow different distributions. Meta-analyses often include a nonrandom collection of species for which the full phylogenetic tree has only been partly resolved. Using missing data theory, we show how the presented models can be used to correct for nonrandom sampling and show how taxonomies and phylogenies can be combined to give a flexible framework with which to model dependence.
What Are We Doing When We Translate from Quantitative Models?
Critchfield, Thomas S; Reed, Derek D
2009-01-01
Although quantitative analysis (in which behavior principles are defined in terms of equations) has become common in basic behavior analysis, translational efforts often examine everyday events through the lens of narrative versions of laboratory-derived principles. This approach to translation, although useful, is incomplete because equations may convey concepts that are difficult to capture in words. To support this point, we provide a nontechnical introduction to selected aspects of quantitative analysis; consider some issues that translational investigators (and, potentially, practitioners) confront when attempting to translate from quantitative models; and discuss examples of relevant translational studies. We conclude that, where behavior-science translation is concerned, the quantitative features of quantitative models cannot be ignored without sacrificing conceptual precision, scientific and practical insights, and the capacity of the basic and applied wings of behavior analysis to communicate effectively. PMID:22478533
Quantitative Predictive Models for Systemic Toxicity (SOT)
Models to identify systemic and specific target organ toxicity were developed to help transition the field of toxicology towards computational models. By leveraging multiple data sources to incorporate read-across and machine learning approaches, a quantitative model of systemic ...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-18
... NUCLEAR REGULATORY COMMISSION [NRC-2011-0109] NUREG/CR-XXXX, Development of Quantitative Software..., ``Development of Quantitative Software Reliability Models for Digital Protection Systems of Nuclear Power Plants... of Risk Analysis, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission...
Metzger, Gregory J; Kalavagunta, Chaitanya; Spilseth, Benjamin; Bolan, Patrick J; Li, Xiufeng; Hutter, Diane; Nam, Jung W; Johnson, Andrew D; Henriksen, Jonathan C; Moench, Laura; Konety, Badrinath; Warlick, Christopher A; Schmechel, Stephen C; Koopmeiners, Joseph S
2016-06-01
Purpose To develop multiparametric magnetic resonance (MR) imaging models to generate a quantitative, user-independent, voxel-wise composite biomarker score (CBS) for detection of prostate cancer by using coregistered correlative histopathologic results, and to compare performance of CBS-based detection with that of single quantitative MR imaging parameters. Materials and Methods Institutional review board approval and informed consent were obtained. Patients with a diagnosis of prostate cancer underwent multiparametric MR imaging before surgery for treatment. All MR imaging voxels in the prostate were classified as cancer or noncancer on the basis of coregistered histopathologic data. Predictive models were developed by using more than one quantitative MR imaging parameter to generate CBS maps. Model development and evaluation of quantitative MR imaging parameters and CBS were performed separately for the peripheral zone and the whole gland. Model accuracy was evaluated by using the area under the receiver operating characteristic curve (AUC), and confidence intervals were calculated with the bootstrap procedure. The improvement in classification accuracy was evaluated by comparing the AUC for the multiparametric model and the single best-performing quantitative MR imaging parameter at the individual level and in aggregate. Results Quantitative T2, apparent diffusion coefficient (ADC), volume transfer constant (K(trans)), reflux rate constant (kep), and area under the gadolinium concentration curve at 90 seconds (AUGC90) were significantly different between cancer and noncancer voxels (P < .001), with ADC showing the best accuracy (peripheral zone AUC, 0.82; whole gland AUC, 0.74). Four-parameter models demonstrated the best performance in both the peripheral zone (AUC, 0.85; P = .010 vs ADC alone) and whole gland (AUC, 0.77; P = .043 vs ADC alone). Individual-level analysis showed statistically significant improvement in AUC in 82% (23 of 28) and 71% (24 of 34) of patients with peripheral-zone and whole-gland models, respectively, compared with ADC alone. Model-based CBS maps for cancer detection showed improved visualization of cancer location and extent. Conclusion Quantitative multiparametric MR imaging models developed by using coregistered correlative histopathologic data yielded a voxel-wise CBS that outperformed single quantitative MR imaging parameters for detection of prostate cancer, especially when the models were assessed at the individual level. (©) RSNA, 2016 Online supplemental material is available for this article.
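The sketch below illustrates the general idea of combining several quantitative MR parameters into a per-voxel composite score and comparing its ROC AUC against a single parameter; it uses logistic regression on synthetic labelled voxels and is not the authors' model.

```python
# Illustrative composite-biomarker-score (CBS) sketch on synthetic voxel data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 2000
# columns stand in for quantitative MR parameters: T2, ADC, Ktrans, kep
X = rng.normal(size=(n, 4))
# synthetic "cancer" labels loosely driven by low ADC and high Ktrans
y = ((-1.5 * X[:, 1] + 0.8 * X[:, 2] + rng.normal(size=n)) > 0).astype(int)

model = LogisticRegression().fit(X, y)
cbs = model.predict_proba(X)[:, 1]            # per-voxel composite score
print("multiparametric AUC:", round(roc_auc_score(y, cbs), 3))
print("single-parameter (ADC) AUC:", round(roc_auc_score(y, -X[:, 1]), 3))
```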
Pargett, Michael; Umulis, David M
2013-07-15
Mathematical modeling of transcription factor and signaling networks is widely used to understand if and how a mechanism works, and to infer regulatory interactions that produce a model consistent with the observed data. Both of these approaches to modeling are informed by experimental data, however, much of the data available or even acquirable are not quantitative. Data that is not strictly quantitative cannot be used by classical, quantitative, model-based analyses that measure a difference between the measured observation and the model prediction for that observation. To bridge the model-to-data gap, a variety of techniques have been developed to measure model "fitness" and provide numerical values that can subsequently be used in model optimization or model inference studies. Here, we discuss a selection of traditional and novel techniques to transform data of varied quality and enable quantitative comparison with mathematical models. This review is intended to both inform the use of these model analysis methods, focused on parameter estimation, and to help guide the choice of method to use for a given study based on the type of data available. Applying techniques such as normalization or optimal scaling may significantly improve the utility of current biological data in model-based study and allow greater integration between disparate types of data. Copyright © 2013 Elsevier Inc. All rights reserved.
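One simple technique in the spirit of the scaling methods the review discusses: when data are only relative (arbitrary units), solve for the least-squares-optimal scale factor analytically before computing the residual, so the fitness measure is invariant to the unknown units. The function and data below are illustrative assumptions.

```python
# Scale-invariant sum of squared errors between a model prediction and
# relative (arbitrary-unit) data; the optimal scale has a closed form.
import numpy as np

def scaled_sse(model_pred: np.ndarray, data: np.ndarray) -> float:
    """SSE after rescaling the prediction by the least-squares-optimal factor."""
    s = np.dot(model_pred, data) / np.dot(model_pred, model_pred)
    return float(np.sum((s * model_pred - data) ** 2))

x = np.linspace(0, 1, 20)
prediction = np.exp(-3 * x)                                  # model units
measurement = 40 * np.exp(-3 * x) + np.random.default_rng(0).normal(0, 1, 20)
print("scale-invariant SSE:", round(scaled_sse(prediction, measurement), 2))
```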
NASA Astrophysics Data System (ADS)
Mayes, R.; Lyford, M. E.; Myers, J. D.
2009-12-01
The Quantitative Reasoning in STEM (QR STEM) project is a state level Mathematics and Science Partnership Project (MSP) with a focus on the mathematics and statistics that underlies the understanding of complex global scientific issues. This session is a companion session to the QR STEM: The Science presentation. The focus of this session is the quantitative reasoning aspects of the project. As students move from understandings that range from local to global in perspective on issues of energy and environment, there is a significant increase in the need for mathematical and statistical conceptual understanding. These understandings must be accessible to the students within the scientific context, requiring the special understandings that are endemic within quantitative reasoning. The QR STEM project brings together interdisciplinary teams of higher education faculty and middle/high school teachers to explore complex problems in energy and environment. The disciplines include life sciences, physics, chemistry, earth science, statistics, and mathematics. These interdisciplinary teams develop open ended performance tasks to implement in the classroom, based on scientific concepts that underpin energy and environment. Quantitative reasoning is broken down into three components: Quantitative Literacy, Quantitative Interpretation, and Quantitative Modeling. Quantitative Literacy is composed of arithmetic concepts such as proportional reasoning, numeracy, and descriptive statistics. Quantitative Interpretation includes algebraic and geometric concepts that underlie the ability to interpret a model of natural phenomena which is provided for the student. This model may be a table, graph, or equation from which the student is to make predictions or identify trends, or from which they would use statistics to explore correlations or patterns in data. Quantitative modeling is the ability to develop the model from data, including the ability to test hypothesis using statistical procedures. We use the term model very broadly, so it includes visual models such as box models, as well as best fit equation models and hypothesis testing. One of the powerful outcomes of the project is the conversation which takes place between science teachers and mathematics teachers. First they realize that though they are teaching concepts that cross their disciplines, the barrier of scientific language within their subjects restricts students from applying the concepts across subjects. Second the mathematics teachers discover the context of science as a means of providing real world situations that engage students in the utility of mathematics as a tool for solving problems. Third the science teachers discover the barrier to understanding science that is presented by poor quantitative reasoning ability. Finally the students are engaged in exploring energy and environment in a manner which exposes the importance of seeing a problem from multiple interdisciplinary perspectives. The outcome is a democratic citizen capable of making informed decisions, and perhaps a future scientist.
Code of Federal Regulations, 2013 CFR
2013-01-01
... robust analytical methods. The Department seeks to use qualitative and quantitative analytical methods... uncertainties will be carried forward in subsequent analyses. The use of quantitative models will be... manufacturers and other interested parties. The use of quantitative models will be supplemented by qualitative...
Code of Federal Regulations, 2012 CFR
2012-01-01
... robust analytical methods. The Department seeks to use qualitative and quantitative analytical methods... uncertainties will be carried forward in subsequent analyses. The use of quantitative models will be... manufacturers and other interested parties. The use of quantitative models will be supplemented by qualitative...
Code of Federal Regulations, 2014 CFR
2014-01-01
... robust analytical methods. The Department seeks to use qualitative and quantitative analytical methods... uncertainties will be carried forward in subsequent analyses. The use of quantitative models will be... manufacturers and other interested parties. The use of quantitative models will be supplemented by qualitative...
An evidential reasoning extension to quantitative model-based failure diagnosis
NASA Technical Reports Server (NTRS)
Gertler, Janos J.; Anderson, Kenneth C.
1992-01-01
The detection and diagnosis of failures in physical systems characterized by continuous-time operation are studied. A quantitative diagnostic methodology has been developed that utilizes the mathematical model of the physical system. On the basis of the latter, diagnostic models are derived each of which comprises a set of orthogonal parity equations. To improve the robustness of the algorithm, several models may be used in parallel, providing potentially incomplete and/or conflicting inferences. Dempster's rule of combination is used to integrate evidence from the different models. The basic probability measures are assigned utilizing quantitative information extracted from the mathematical model and from online computation performed therewith.
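A minimal sketch of Dempster's rule of combination as it might be used to fuse evidence from two parallel diagnostic models; the fault labels and basic probability assignments are invented for illustration.

```python
# Dempster's rule of combination for two basic probability assignments (BPAs).
def dempster_combine(m1: dict, m2: dict) -> dict:
    combined, conflict = {}, 0.0
    for b, p1 in m1.items():
        for c, p2 in m2.items():
            inter = frozenset(b) & frozenset(c)
            if inter:
                combined[inter] = combined.get(inter, 0.0) + p1 * p2
            else:
                conflict += p1 * p2           # mass assigned to the empty set
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

theta = frozenset({"sensor_fault", "actuator_fault", "no_fault"})   # frame of discernment
m_model1 = {frozenset({"sensor_fault"}): 0.6, theta: 0.4}
m_model2 = {frozenset({"sensor_fault", "actuator_fault"}): 0.7, theta: 0.3}
for focal, mass in dempster_combine(m_model1, m_model2).items():
    print(set(focal), round(mass, 3))
```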
Quantitative Reasoning in Environmental Science: A Learning Progression
ERIC Educational Resources Information Center
Mayes, Robert Lee; Forrester, Jennifer Harris; Christus, Jennifer Schuttlefield; Peterson, Franziska Isabel; Bonilla, Rachel; Yestness, Nissa
2014-01-01
The ability of middle and high school students to reason quantitatively within the context of environmental science was investigated. A quantitative reasoning (QR) learning progression was created with three progress variables: quantification act, quantitative interpretation, and quantitative modeling. An iterative research design was used as it…
Mise en oeuvre et caracterisation d'une methode d'injection de pannes a haut niveau d'abstraction
NASA Astrophysics Data System (ADS)
Robache, Remi
Nowadays, the effects of cosmic rays on electronics are well known. Several studies have shown that neutrons are the main cause of non-destructive errors in circuits embedded in aircraft. Moreover, the reduction of transistor sizes is making all circuits more sensitive to these effects. Radiation-tolerant circuits are sometimes used to improve robustness; however, such circuits are expensive and their technologies often lag a few generations behind those of non-tolerant circuits. Designers therefore prefer to use conventional circuits with mitigation techniques to improve tolerance to soft errors. It is necessary to analyse and verify the dependability of a circuit throughout its design process, and conventional design methodologies need to be adapted to evaluate tolerance to non-destructive radiation-induced errors. Designers need new tools and methodologies to validate their mitigation strategies against system requirements. In this thesis, we propose a new methodology for capturing the faulty behaviour of a circuit at a low level of abstraction and applying it at a higher level. To do so, we introduce the new concept of faulty-behaviour signatures, which allows models to be created at a high level of abstraction (system level) that reflect with high fidelity the faulty behaviour of a circuit learned at a low level of abstraction (gate level). We successfully replicated the faulty behaviour of an 8-bit adder and an 8-bit multiplier in Simulink, with correlation coefficients of 98.53% and 99.86% respectively. We propose a methodology for generating, in Simulink, a library of faulty components that allows designers to verify the dependability of their models early in the design flow. Results obtained for three different circuits are presented and analysed throughout the thesis. Within the framework of this project, a paper was published at the NEWCAS 2013 conference (Robache et al., 2013). That work presents the new concept of faulty-behaviour signature, the methodology we developed for generating signatures, and our experiments with an 8-bit multiplier.
NASA Astrophysics Data System (ADS)
McCray, Wilmon Wil L., Jr.
The research was prompted by a need to assess the process improvement, quality management, and analytical techniques taught to students in U.S. undergraduate and graduate systems engineering and computing science programs (e.g., software engineering, computer science, and information technology) that can be applied to quantitatively manage processes for performance. Everyone involved in executing repeatable processes in the software and systems development lifecycle needs to become familiar with the concepts of quantitative management, statistical thinking, and process improvement methods, and with how they relate to process performance. Organizations are starting to embrace the de facto Software Engineering Institute (SEI) Capability Maturity Model Integration (CMMI) models as process improvement frameworks to improve business process performance. High-maturity process areas in the CMMI model imply the use of analytical, statistical, and quantitative management techniques and of process-performance modeling to identify and eliminate sources of variation, continually improve process performance, reduce cost, and predict future outcomes. The study identifies and discusses in detail the gap-analysis findings on process improvement and quantitative analysis techniques taught in U.S. university systems engineering and computing science degree programs, gaps that exist in the literature, and a comparative analysis of the gaps between the SEI's "healthy ingredients" of a process performance model and the courses taught in U.S. university degree programs. The research also heightens awareness that academics have conducted little research on applicable statistics and quantitative techniques that can be used to demonstrate high maturity as implied in the CMMI models. The research includes a Monte Carlo simulation optimization model and dashboard that demonstrate the use of statistical methods, statistical process control, sensitivity analysis, and quantitative and optimization techniques to establish a baseline and predict future customer satisfaction index scores (outcomes). The American Customer Satisfaction Index (ACSI) model and industry benchmarks were used as the framework for the simulation model.
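An illustrative Monte Carlo sketch in the spirit of the simulation model and dashboard mentioned above: uncertain driver scores are propagated through an assumed linear scoring model to give a distribution of predicted customer satisfaction index scores. The weights, ranges, and triangular distributions are assumptions, not the ACSI specification.

```python
# Monte Carlo propagation of uncertain driver scores to a satisfaction index.
import numpy as np

rng = np.random.default_rng(7)
n_trials = 100_000

# assumed driver scores on a 0-100 scale, with triangular uncertainty
quality      = rng.triangular(70, 80, 90, n_trials)
expectations = rng.triangular(65, 75, 85, n_trials)
value        = rng.triangular(60, 72, 84, n_trials)

# assumed linear scoring model (placeholder weights)
csi = 0.5 * quality + 0.3 * value + 0.2 * expectations

print("predicted CSI baseline:", round(csi.mean(), 1))
print("90% interval:", np.percentile(csi, [5, 95]).round(1))
```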
NASA Astrophysics Data System (ADS)
Mebarki, Fouzia
The aim of this study is to examine the possibility of using thermoplastic composite materials for electrical applications such as supports for automotive engine ignition systems. We are particularly interested in composites based on recycled polyethylene terephthalate (PET). Conventional insulations such as PET cannot meet the new prescriptive requirements. The introduction of reinforcement materials, such as glass fibers and mica, can improve the mechanical characteristics of these materials. However, this enhancement may also degrade electrical properties, especially since these composites must operate under severe thermal and electrical stresses. To estimate PET composite insulation lifetimes, accelerated aging tests were carried out at temperatures ranging from room temperature to 140°C and at a frequency of 300 Hz. Studies at high temperature help identify the service temperature of candidate materials. Dielectric breakdown tests were performed on a large number of samples according to ASTM D-149, the standard for dielectric strength testing of solid insulation. These tests serve to identify problematic samples and to check solid insulation quality. The knowledge gained from this analysis was used to predict material performance, giving the company the possibility to improve existing formulations and subsequently develop a material with electrical and thermal properties suitable for this application.
Quantitative Structure--Activity Relationship Modeling of Rat Acute Toxicity by Oral Exposure
Background: Few Quantitative Structure-Activity Relationship (QSAR) studies have successfully modeled large, diverse rodent toxicity endpoints. Objective: In this study, a combinatorial QSAR approach has been employed for the creation of robust and predictive models of acute toxi...
Quantitative Diagnosis of Continuous-Valued, Steady-State Systems
NASA Technical Reports Server (NTRS)
Rouquette, N.
1995-01-01
Quantitative diagnosis involves numerically estimating the values of unobservable parameters that best explain the observed parameter values. We consider quantitative diagnosis for continuous, lumped-parameter, steady-state physical systems because such models are easy to construct and the diagnosis problem is considerably simpler than that for corresponding dynamic models. To further tackle the difficulties of numerically inverting a simulation model to compute a diagnosis, we propose to decompose a physical system model in terms of feedback loops. This decomposition reduces the dimension of the problem and consequently decreases the diagnosis search space. We illustrate this approach on a model of a thermal control system studied in earlier research.
Liu, Chun; Bridges, Melissa E; Kaundun, Shiv S; Glasgow, Les; Owen, Micheal Dk; Neve, Paul
2017-02-01
Simulation models are useful tools for predicting and comparing the risk of herbicide resistance in weed populations under different management strategies. Most existing models assume a monogenic mechanism governing herbicide resistance evolution. However, growing evidence suggests that herbicide resistance is often inherited in a polygenic or quantitative fashion. Therefore, we constructed a generalised modelling framework to simulate the evolution of quantitative herbicide resistance in summer annual weeds. Real-field management parameters based on Amaranthus tuberculatus (Moq.) Sauer (syn. rudis) control with glyphosate and mesotrione in Midwestern US maize-soybean agroecosystems demonstrated that the model can represent evolved herbicide resistance in realistic timescales. Sensitivity analyses showed that genetic and management parameters were impactful on the rate of quantitative herbicide resistance evolution, whilst biological parameters such as emergence and seed bank mortality were less important. The simulation model provides a robust and widely applicable framework for predicting the evolution of quantitative herbicide resistance in summer annual weed populations. The sensitivity analyses identified weed characteristics that would favour herbicide resistance evolution, including high annual fecundity, large resistance phenotypic variance and pre-existing herbicide resistance. Implications for herbicide resistance management and potential use of the model are discussed. © 2016 Society of Chemical Industry. © 2016 Society of Chemical Industry.
Modeling with Young Students--Quantitative and Qualitative.
ERIC Educational Resources Information Center
Bliss, Joan; Ogborn, Jon; Boohan, Richard; Brosnan, Tim; Mellar, Harvey; Sakonidis, Babis
1999-01-01
A project created tasks and tools to investigate quality and nature of 11- to 14-year-old pupils' reasoning with quantitative and qualitative computer-based modeling tools. Tasks and tools were used in two innovative modes of learning: expressive, where pupils created their own models, and exploratory, where pupils investigated an expert's model.…
Ultrasound hepatic/renal ratio and hepatic attenuation rate for quantifying liver fat content.
Zhang, Bo; Ding, Fang; Chen, Tian; Xia, Liang-Hua; Qian, Juan; Lv, Guo-Yi
2014-12-21
To establish and validate a simple quantitative assessment method for nonalcoholic fatty liver disease (NAFLD) based on a combination of the ultrasound hepatic/renal ratio and hepatic attenuation rate. A total of 170 subjects were enrolled in this study. All subjects were examined by ultrasound and (1)H-magnetic resonance spectroscopy ((1)H-MRS) on the same day. The ultrasound hepatic/renal echo-intensity ratio and ultrasound hepatic echo-intensity attenuation rate were obtained from ordinary ultrasound images using the MATLAB program. Correlation analysis revealed that the ultrasound hepatic/renal ratio and hepatic echo-intensity attenuation rate were significantly correlated with (1)H-MRS liver fat content (ultrasound hepatic/renal ratio: r = 0.952, P = 0.000; hepatic echo-intensity attenuation r = 0.850, P = 0.000). The equation for predicting liver fat content by ultrasound (quantitative ultrasound model) is: liver fat content (%) = 61.519 × ultrasound hepatic/renal ratio + 167.701 × hepatic echo-intensity attenuation rate -26.736. Spearman correlation analysis revealed that the liver fat content ratio of the quantitative ultrasound model was positively correlated with serum alanine aminotransferase, aspartate aminotransferase, and triglyceride, but negatively correlated with high density lipoprotein cholesterol. Receiver operating characteristic curve analysis revealed that the optimal point for diagnosing fatty liver was 9.15% in the quantitative ultrasound model. Furthermore, in the quantitative ultrasound model, fatty liver diagnostic sensitivity and specificity were 94.7% and 100.0%, respectively, showing that the quantitative ultrasound model was better than conventional ultrasound methods or the combined ultrasound hepatic/renal ratio and hepatic echo-intensity attenuation rate. If the (1)H-MRS liver fat content had a value < 15%, the sensitivity and specificity of the ultrasound quantitative model would be 81.4% and 100%, which still shows that using the model is better than the other methods. The quantitative ultrasound model is a simple, low-cost, and sensitive tool that can accurately assess hepatic fat content in clinical practice. It provides an easy and effective parameter for the early diagnosis of mild hepatic steatosis and evaluation of the efficacy of NAFLD treatment.
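The published regression can be applied directly; the small helper below transcribes the coefficients and the 9.15% fatty-liver cut-off reported in the abstract (the wrapper function and example inputs are the only additions).

```python
# Quantitative ultrasound model for liver fat content, as reported above:
# liver fat (%) = 61.519 * hepatic/renal ratio + 167.701 * attenuation rate - 26.736
def liver_fat_percent(hepatic_renal_ratio: float, attenuation_rate: float) -> float:
    """Estimate liver fat content (%) from the two ultrasound-derived indices."""
    return 61.519 * hepatic_renal_ratio + 167.701 * attenuation_rate - 26.736

# example inputs (illustrative values, not patient data)
fat = liver_fat_percent(hepatic_renal_ratio=0.55, attenuation_rate=0.08)
print(f"estimated liver fat: {fat:.1f}%  fatty liver (>= 9.15%): {fat >= 9.15}")
```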
NASA Astrophysics Data System (ADS)
Bindschadler, Michael; Modgil, Dimple; Branch, Kelley R.; La Riviere, Patrick J.; Alessio, Adam M.
2014-04-01
Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivate this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml (min g)^-1; cardiac output = 3, 5, 8 L min^-1). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features, including heterogeneous microvascular flow, permeability, and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2, and 3 s sampling intervals) and tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (a two-compartment model, an axially-distributed model, and the adiabatic approximation to the tissue homogeneous model) and a qualitative slope-based method. In total, over 11 000 time-attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow by on average 47.5%, while the quantitative models provided estimates with less than 6.5% average bias and increasing variance with increasing dose reductions. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimate error was equal at matched dose reductions for all quantitative methods and the range of techniques evaluated. This suggests that there is no particular advantage among quantitative estimation methods, nor to performing dose reduction via tube current reduction rather than temporal sampling reduction. These data are important for optimizing the implementation of cardiac dynamic CT in clinical practice and in prospective CT MBF trials.
NASA Astrophysics Data System (ADS)
Fournier, Patrick
The Generalized Critical State Model (MECG) is used to describe the magnetic and transport properties of polycrystalline YBa_2Cu_3O_7. This empirical model relates the critical current density to the density of flux lines penetrating the intergrain region. Two measurement techniques are used to characterize our materials. The first consists in measuring the field at the centre of a hollow cylinder as a function of the applied magnetic field for temperatures between 20 and 85 K. By varying the wall thickness of the hollow cylinder, it is possible to follow the evolution of the hysteresis loops and to determine characteristic fields that vary with this dimension. By fitting the experimental results, we determine J_co, H_o and n, the parameters of the MECG. The shape of the cylinders, whose length is comparable to the outer diameter, gives rise to a demagnetizing field that can be included in the theoretical model. This allows us to evaluate the screened volume fraction, f_g, as well as the demagnetizing factor N. We find that J_co, H_o and f_g depend on temperature, whereas n and N (for a fixed wall thickness) do not. The second technique consists in measuring the critical current of thin strips as a function of the applied field at different temperatures. We use a setup that we developed, which allows these measurements to be made in direct contact with the cooling liquid, i.e. in liquid nitrogen. We vary the temperature of the liquid by varying the gas pressure above the nitrogen bath. This method allows us to scan temperatures between 65 K and the critical temperature of the material (~92 K). We again fit the curves of critical current versus applied field with the MECG to obtain its parameters. For three samples with different heat treatments, the parameters differ, confirming that the variation of the macroscopic properties of these superconductors is intimately related to the nature of the junctions between grains and of the grain surfaces. Prolonged oxygenation restores the initial properties of samples that degraded during the annealing of the contacts.
NASA Astrophysics Data System (ADS)
Frikach, Kamal
2001-09-01
In this work I present a study of the surface impedance, as well as of the ultrasonic attenuation and velocity variation, in the normal and superconducting states of the organic compounds k-(ET)2X (X = Cu(SCN)2, Cu[N(CN)2]Br). From the surface impedance measurements, the two components sigma1 and sigma2 of the complex conductivity are extracted using the Drude model. These measurements show that the symmetry of the order parameter in these compounds differs from the BCS case. In order to understand the profile of sigma1(T), we studied the superconducting fluctuations through the paraconductivity sigma'(T). This study is made possible by the quasi-2D structure of the k-(ET)2X compounds, in which superconducting fluctuations are strong; they are observed over two decades of temperature in Cu(SCN)2. Application of the 2D and 3D Aslamazov-Larkin models shows a possible crossover from the 2D regime at high temperature to the 3D regime near Tc. Based on this result, we calculated the paraconductivity using a one-loop approach within the Lawrence-Doniach model. Taking into account the self-energy correction in the dynamic limit (17 GHz), the fit of the calculated paraconductivity is in good agreement with the experimental data. The interplane coupling obtained is compatible with the quasi-2D character of the organic compounds. The quasiparticle relaxation time in the superconducting state is then extracted for the first time in these compounds; its temperature dependence is compatible with the presence of nodes in the gap. In the normal state, the ultrasonic velocity variation shows an anomalous behaviour characterized by a strong softening at T = 38 K and 50 K in k-(ET)2Cu(SCN)2 and k-(ET)2Cu[N(CN)2]Br respectively, whose amplitude is independent of the magnetic field up to H = Hc2. This anomaly seems to exist only in modes that probe the interplane coupling. The behaviour is attributed to the coupling between antiferromagnetic fluctuations and acoustic phonons.
VIII. THE PAST, PRESENT, AND FUTURE OF DEVELOPMENTAL METHODOLOGY.
Little, Todd D; Wang, Eugene W; Gorrall, Britt K
2017-06-01
This chapter selectively reviews the evolution of quantitative practices in the field of developmental methodology. The chapter begins with an overview of the past in developmental methodology, discussing the implementation and dissemination of latent variable modeling and, in particular, longitudinal structural equation modeling. It then turns to the present state of developmental methodology, highlighting current methodological advances in the field. Additionally, this section summarizes ample quantitative resources, ranging from key quantitative methods journal articles to the various quantitative methods training programs and institutes. The chapter concludes with the future of developmental methodology and puts forth seven future innovations in the field. The innovations discussed span the topics of measurement, modeling, temporal design, and planned missing data designs. Lastly, the chapter closes with a brief overview of advanced modeling techniques such as continuous time models, state space models, and the application of Bayesian estimation in the field of developmental methodology. © 2017 The Society for Research in Child Development, Inc.
Li, Weiyong; Worosila, Gregory D
2005-05-13
This research note demonstrates the simultaneous quantitation of a pharmaceutical active ingredient and three excipients in a simulated powder blend containing acetaminophen, Prosolv and Crospovidone. An experimental design approach was used in generating a 5-level (%, w/w) calibration sample set that included 125 samples. The samples were prepared by weighing suitable amount of powders into separate 20-mL scintillation vials and were mixed manually. Partial least squares (PLS) regression was used in calibration model development. The models generated accurate results for quantitation of Crospovidone (at 5%, w/w) and magnesium stearate (at 0.5%, w/w). Further testing of the models demonstrated that the 2-level models were as effective as the 5-level ones, which reduced the calibration sample number to 50. The models had a small bias for quantitation of acetaminophen (at 30%, w/w) and Prosolv (at 64.5%, w/w) in the blend. The implication of the bias is discussed.
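A minimal sketch of this calibration approach using scikit-learn's PLSRegression on synthetic spectra standing in for the 125-sample design; the component count, mixing model, and noise level are illustrative assumptions.

```python
# PLS calibration sketch: relate synthetic NIR-like spectra to component
# concentrations in a powder blend and report per-component RMSEP.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_samples, n_wavelengths = 125, 600
concentrations = rng.uniform(0, 1, size=(n_samples, 4))    # 4 blend components
pure_spectra = rng.normal(size=(4, n_wavelengths))          # stand-in pure-component spectra
spectra = concentrations @ pure_spectra + rng.normal(0, 0.05, (n_samples, n_wavelengths))

X_cal, X_test, y_cal, y_test = train_test_split(spectra, concentrations, random_state=0)
pls = PLSRegression(n_components=4).fit(X_cal, y_cal)
rmsep = np.sqrt(((pls.predict(X_test) - y_test) ** 2).mean(axis=0))
print("RMSEP per component:", rmsep.round(3))
```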
NASA Astrophysics Data System (ADS)
Matthews, Kelly E.; Adams, Peter; Goos, Merrilyn
2016-07-01
Application of mathematical and statistical thinking and reasoning, typically referred to as quantitative skills, is essential for university bioscience students. First, this study developed an assessment task intended to gauge graduating students' quantitative skills. The Quantitative Skills Assessment of Science Students (QSASS) was the result, which examined 10 mathematical and statistical sub-topics. Second, the study established an evidential baseline of students' quantitative skills performance and confidence levels by piloting the QSASS with 187 final-year biosciences students at a research-intensive university. The study is framed within the planned-enacted-experienced curriculum model and contributes to science reform efforts focused on enhancing the quantitative skills of university graduates, particularly in the biosciences. The results found, on average, weak performance and low confidence on the QSASS, suggesting divergence between academics' intentions and students' experiences of learning quantitative skills. Implications for curriculum design and future studies are discussed.
Refining the quantitative pathway of the Pathways to Mathematics model.
Sowinski, Carla; LeFevre, Jo-Anne; Skwarchuk, Sheri-Lynn; Kamawar, Deepthi; Bisanz, Jeffrey; Smith-Chant, Brenda
2015-03-01
In the current study, we adopted the Pathways to Mathematics model of LeFevre et al. (2010). In this model, there are three cognitive domains--labeled as the quantitative, linguistic, and working memory pathways--that make unique contributions to children's mathematical development. We attempted to refine the quantitative pathway by combining children's (N=141 in Grades 2 and 3) subitizing, counting, and symbolic magnitude comparison skills using principal components analysis. The quantitative pathway was examined in relation to dependent numerical measures (backward counting, arithmetic fluency, calculation, and number system knowledge) and a dependent reading measure, while simultaneously accounting for linguistic and working memory skills. Analyses controlled for processing speed, parental education, and gender. We hypothesized that the quantitative, linguistic, and working memory pathways would account for unique variance in the numerical outcomes; this was the case for backward counting and arithmetic fluency. However, only the quantitative and linguistic pathways (not working memory) accounted for unique variance in calculation and number system knowledge. Not surprisingly, only the linguistic pathway accounted for unique variance in the reading measure. These findings suggest that the relative contributions of quantitative, linguistic, and working memory skills vary depending on the specific cognitive task. Copyright © 2014 Elsevier Inc. All rights reserved.
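A minimal sketch, under stated assumptions, of how the three measures named above (subitizing, counting, symbolic magnitude comparison) can be combined into a single quantitative-pathway score with principal components analysis; the synthetic scores and the use of the first component only are illustrative choices, not the authors' analysis.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    # placeholder scores for N=141 children on the three quantitative measures
    scores = rng.normal(size=(141, 3))   # columns: subitizing, counting, magnitude comparison

    z = StandardScaler().fit_transform(scores)
    pca = PCA(n_components=1)
    quantitative_pathway = pca.fit_transform(z).ravel()  # one composite score per child
    print("variance explained by the composite:", pca.explained_variance_ratio_[0])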
NASA Astrophysics Data System (ADS)
Mijiyawa, Faycal
This study adapts wood-fibre-reinforced thermoplastic composites to gears, enables the fabrication of a new generation of gears, and predicts the thermal behaviour of these gears. After an extensive literature review on thermoplastics (polyethylene and polypropylene) reinforced with wood fibres (birch and aspen), and on the formulation and thermomechanical behaviour of plastic-composite gears, the link with the present doctoral thesis was established. Many studies on the formulation and characterization of wood-fibre composites already exist, but none has addressed gear manufacturing. The formulation techniques drawn from the literature made it easier to obtain a composite with nearly the same properties as the plastics (nylon, acetal, etc.) used in gear design. The formulation of the wood-fibre-reinforced thermoplastics was carried out at the Centre de recherche en materiaux lignocellulosiques (CRML) of the Universite du Quebec a Trois-Rivieres (UQTR), in collaboration with the Mechanical Engineering Department, by compounding the composites on a two-roll Thermotron-C.W. Brabender machine (model T-303, Germany); parts were then fabricated by thermocompression. The thermoplastics used in this thesis are polypropylene (PP) and high-density polyethylene (HDPE), reinforced with birch and aspen fibres. Because of the incompatibility between wood fibre and thermoplastic, a chemical treatment with a coupling agent was applied to increase the mechanical properties of the composites. For the polypropylene/wood composites: (1) The elastic moduli and tensile strengths of the PP/birch and PP/aspen composites increase linearly with fibre content, with or without coupling agent (maleated polypropylene, MAPP). Adhesion between the wood fibres and the plastic is improved using only 3% MAPP, which increases the maximum stress although no significant effect is observed on the elastic modulus. (2) The results show that, in general, the tensile properties of the polypropylene/birch, polypropylene/aspen and polypropylene/birch/aspen composites are very similar. The wood-plastic composites (WPCs), in particular those containing 30% and 40% fibres, have higher elastic moduli than some plastics used in gear applications (e.g. nylon). For the polyethylene/wood composites with 3% maleated polyethylene (MAPE): (1) Tensile tests: the elastic modulus increases from 1.34 GPa to 4.19 GPa for the HDPE/birch composite and to 3.86 GPa for the HDPE/aspen composite; the maximum stress increases from 22 MPa to 42.65 MPa for HDPE/birch and to 43.48 MPa for HDPE/aspen. (2) Flexural tests: the elastic modulus increases from 1.04 GPa to 3.47 GPa for HDPE/birch and to 3.64 GPa for HDPE/aspen; the maximum stress increases from 23.90 MPa to 66.70 MPa for HDPE/birch and to 59.51 MPa for HDPE/aspen.
(3) Poisson's ratio, determined by acoustic pulse, is around 0.35 for all HDPE/wood composites. (4) Thermogravimetric analysis (TGA) shows that the composites exhibit a thermal stability intermediate between that of the wood fibres and the HDPE matrix. (5) Wettability (contact angle) tests show that adding wood fibres does not significantly decrease the water contact angle, because the wood fibres (birch or aspen) appear to be enveloped by the matrix at the composite surface, as shown by scanning electron microscopy (SEM) images. (6) The Lavengood-Goettler model best predicts the elastic modulus of the thermoplastic/wood composites. (7) HDPE reinforced with 40% birch is best suited to gear manufacturing, because shrinkage during mould cooling is smaller. The numerical simulation predicts the equilibrium temperature well at 500 rpm, whereas at 1000 rpm the model diverges. (Abstract shortened by ProQuest.)
The application of remote sensing to the development and formulation of hydrologic planning models
NASA Technical Reports Server (NTRS)
Castruccio, P. A.; Loats, H. L., Jr.; Fowler, T. R.
1976-01-01
A hydrologic planning model is developed based on remotely sensed inputs. Data from LANDSAT 1 are used to supply the model's quantitative parameters and coefficients. The use of LANDSAT data as information input to all categories of hydrologic models requiring quantitative surface parameters for their effective functioning is also investigated.
Huang, An-Min; Fei, Ben-Hua; Jiang, Ze-Hui; Hse, Chung-Yun
2007-09-01
Near infrared spectroscopy is widely used as a quantitative method, and the main multivariate techniques consist of regression methods used to build prediction models; however, the accuracy of the analysis results is affected by many factors. In the present paper, the influence of sample roughness on the mathematical model for NIR quantitative analysis of wood density was studied. The experiments showed that when the roughness of the prediction samples was consistent with that of the calibration samples the results were good; otherwise the error was much higher. The roughness-mixed model was more flexible and adaptable to different sample roughness, and its prediction ability was much better than that of the single-roughness model.
Quantitative reactive modeling and verification.
Henzinger, Thomas A
Formal verification aims to improve the quality of software by detecting errors before they do harm. At the basis of formal verification is the logical notion of correctness, which purports to capture whether or not a program behaves as desired. We suggest that the boolean partition of software into correct and incorrect programs falls short of the practical need to assess the behavior of software in a more nuanced fashion against multiple criteria. We therefore propose to introduce quantitative fitness measures for programs, specifically for measuring the function, performance, and robustness of reactive programs such as concurrent processes. This article describes the goals of the ERC Advanced Investigator Project QUAREM. The project aims to build and evaluate a theory of quantitative fitness measures for reactive models. Such a theory must strive to obtain quantitative generalizations of the paradigms that have been success stories in qualitative reactive modeling, such as compositionality, property-preserving abstraction and abstraction refinement, model checking, and synthesis. The theory will be evaluated not only in the context of software and hardware engineering, but also in the context of systems biology. In particular, we will use the quantitative reactive models and fitness measures developed in this project for testing hypotheses about the mechanisms behind data from biological experiments.
Model-Based Linkage Analysis of a Quantitative Trait.
Song, Yeunjoo E; Song, Sunah; Schnell, Audrey H
2017-01-01
Linkage analysis is a family-based method used to examine whether any typed genetic markers cosegregate with a given trait, in this case a quantitative trait. If linkage exists, this is taken as evidence in support of a genetic basis for the trait. Historically, linkage analysis was performed using a binary disease trait, but it has been extended to include quantitative disease measures. Quantitative traits are desirable as they provide more information than binary traits. Linkage analysis can be performed using single-marker methods (one marker at a time) or multipoint methods (using multiple markers simultaneously). In model-based linkage analysis the genetic model for the trait of interest is specified. There are many software options for performing linkage analysis. Here, we use the program package Statistical Analysis for Genetic Epidemiology (S.A.G.E.). S.A.G.E. was chosen because it also includes programs to perform data cleaning procedures and to generate and test genetic models for a quantitative trait, in addition to performing linkage analysis. We demonstrate in detail the process of running the program LODLINK to perform single-marker analysis, and MLOD to perform multipoint analysis using output from SEGREG, where SEGREG was used to determine the best fitting statistical model for the trait.
The Mapping Model: A Cognitive Theory of Quantitative Estimation
ERIC Educational Resources Information Center
von Helversen, Bettina; Rieskamp, Jorg
2008-01-01
How do people make quantitative estimations, such as estimating a car's selling price? Traditionally, linear-regression-type models have been used to answer this question. These models assume that people weight and integrate all information available to estimate a criterion. The authors propose an alternative cognitive theory for quantitative…
6 Principles for Quantitative Reasoning and Modeling
ERIC Educational Resources Information Center
Weber, Eric; Ellis, Amy; Kulow, Torrey; Ozgur, Zekiye
2014-01-01
Encouraging students to reason with quantitative relationships can help them develop, understand, and explore mathematical models of real-world phenomena. Through two examples--modeling the motion of a speeding car and the growth of a Jactus plant--this article describes how teachers can use six practical tips to help students develop quantitative…
A Quantitative Cost Effectiveness Model for Web-Supported Academic Instruction
ERIC Educational Resources Information Center
Cohen, Anat; Nachmias, Rafi
2006-01-01
This paper describes a quantitative cost effectiveness model for Web-supported academic instruction. The model was designed for Web-supported instruction (rather than distance learning only) characterizing most of the traditional higher education institutions. It is based on empirical data (Web logs) of students' and instructors' usage…
Models of Quantitative Estimations: Rule-Based and Exemplar-Based Processes Compared
ERIC Educational Resources Information Center
von Helversen, Bettina; Rieskamp, Jorg
2009-01-01
The cognitive processes underlying quantitative estimations vary. Past research has identified task-contingent changes between rule-based and exemplar-based processes (P. Juslin, L. Karlsson, & H. Olsson, 2008). B. von Helversen and J. Rieskamp (2008), however, proposed a simple rule-based model--the mapping model--that outperformed the…
NASA Technical Reports Server (NTRS)
Shortle, John F.; Allocco, Michael
2005-01-01
This paper describes a scenario-driven hazard analysis process to identify, eliminate, and control safety-related risks. Within this process, we develop selection criteria to determine the applicability of engineering modeling to hypothesized hazard scenarios. This provides a basis for evaluating and prioritizing the scenarios as candidates for further quantitative analysis. We have applied this methodology to proposed concepts of operations for reduced wake separation for closely spaced parallel runways. For arrivals, the process identified 43 core hazard scenarios. Of these, we classified 12 as appropriate for further quantitative modeling, 24 that should be mitigated through controls, recommendations, and/or procedures (that is, scenarios not appropriate for quantitative modeling), and 7 that have the lowest priority for further analysis.
The application of time series models to cloud field morphology analysis
NASA Technical Reports Server (NTRS)
Chin, Roland T.; Jau, Jack Y. C.; Weinman, James A.
1987-01-01
A modeling method for the quantitative description of remotely sensed cloud field images is presented. A two-dimensional texture modeling scheme based on one-dimensional time series procedures is adopted for this purpose. The time series procedure used is the seasonal autoregressive moving average (ARMA) process of Box and Jenkins. Cloud field properties such as directionality, clustering and cloud coverage can be retrieved by this method. It has been demonstrated that a cloud field image can be quantitatively defined by a small set of parameters and that synthesized surrogates can be reconstructed from these model parameters. This method enables cloud climatology to be studied quantitatively.
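A rough sketch of the underlying idea (fitting a one-dimensional seasonal ARMA process to a row-scanned image) is given below, assuming the cloud-field image is already available as a 2-D array; treating the image width as the seasonal period and the (1,0,1)x(1,0,1,width) orders are illustrative assumptions, not the parameters used in the paper.

    import numpy as np
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    rng = np.random.default_rng(2)
    image = rng.normal(size=(32, 32))          # placeholder cloud-field image
    signal = image.ravel()                     # scan rows into a single 1-D sequence
    width = image.shape[1]

    # seasonal ARMA: the 'season' of length `width` captures column-to-column structure
    model = SARIMAX(signal, order=(1, 0, 1), seasonal_order=(1, 0, 1, width))
    fit = model.fit(disp=False)
    print(fit.params)                          # small parameter set summarising the texture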
Zhai, Hong Lin; Zhai, Yue Yuan; Li, Pei Zhen; Tian, Yue Li
2013-01-21
A very simple approach to quantitative analysis is proposed based on the technology of digital image processing using three-dimensional (3D) spectra obtained by high-performance liquid chromatography coupled with a diode array detector (HPLC-DAD). As the region-based shape features of a grayscale image, Zernike moments with inherently invariance property were employed to establish the linear quantitative models. This approach was applied to the quantitative analysis of three compounds in mixed samples using 3D HPLC-DAD spectra, and three linear models were obtained, respectively. The correlation coefficients (R(2)) for training and test sets were more than 0.999, and the statistical parameters and strict validation supported the reliability of established models. The analytical results suggest that the Zernike moment selected by stepwise regression can be used in the quantitative analysis of target compounds. Our study provides a new idea for quantitative analysis using 3D spectra, which can be extended to the analysis of other 3D spectra obtained by different methods or instruments.
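A minimal sketch of the image-feature step described above, assuming a 3-D HPLC-DAD spectrum has already been rendered as a grayscale image; the mahotas call, the moment degree, and the synthetic data are assumptions made for illustration, not the authors' implementation.

    import numpy as np
    import mahotas
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(3)

    def zernike_features(img, radius=50, degree=8):
        # region-based, rotation-invariant shape descriptors of the grayscale image
        return mahotas.features.zernike_moments(img, radius, degree=degree)

    # placeholder training set: 20 grayscale images of 3-D spectra with known concentrations
    images = [rng.random((101, 101)) for _ in range(20)]
    conc = rng.uniform(1, 10, size=20)

    X = np.array([zernike_features(im) for im in images])
    model = LinearRegression().fit(X, conc)    # stepwise descriptor selection omitted for brevity
    print("R^2 on training data:", model.score(X, conc))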
Prognostic Value of Quantitative Stress Perfusion Cardiac Magnetic Resonance.
Sammut, Eva C; Villa, Adriana D M; Di Giovine, Gabriella; Dancy, Luke; Bosio, Filippo; Gibbs, Thomas; Jeyabraba, Swarna; Schwenke, Susanne; Williams, Steven E; Marber, Michael; Alfakih, Khaled; Ismail, Tevfik F; Razavi, Reza; Chiribiri, Amedeo
2018-05-01
This study sought to evaluate the prognostic usefulness of visual and quantitative perfusion cardiac magnetic resonance (CMR) ischemic burden in an unselected group of patients and to assess the validity of consensus-based ischemic burden thresholds extrapolated from nuclear studies. There are limited data on the prognostic value of assessing myocardial ischemic burden by CMR, and there are none using quantitative perfusion analysis. Patients with suspected coronary artery disease referred for adenosine-stress perfusion CMR were included (n = 395; 70% male; age 58 ± 13 years). The primary endpoint was a composite of cardiovascular death, nonfatal myocardial infarction, aborted sudden death, and revascularization after 90 days. Perfusion scans were assessed visually and with quantitative analysis. Cross-validated Cox regression analysis and net reclassification improvement were used to assess the incremental prognostic value of visual or quantitative perfusion analysis over a baseline clinical model, initially as continuous covariates, then using accepted thresholds of ≥2 segments or ≥10% myocardium. After a median 460 days (interquartile range: 190 to 869 days) follow-up, 52 patients reached the primary endpoint. At 2 years, the addition of ischemic burden was found to increase prognostic value over a baseline model of age, sex, and late gadolinium enhancement (baseline model area under the curve [AUC]: 0.75; visual AUC: 0.84; quantitative AUC: 0.85). Dichotomized quantitative ischemic burden performed better than visual assessment (net reclassification improvement 0.043 vs. 0.003 against baseline model). This study was the first to address the prognostic benefit of quantitative analysis of perfusion CMR and to support the use of consensus-based ischemic burden thresholds by perfusion CMR for prognostic evaluation of patients with suspected coronary artery disease. Quantitative analysis provided incremental prognostic value to visual assessment and established risk factors, potentially representing an important step forward in the translation of quantitative CMR perfusion analysis to the clinical setting. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Zou, Wen-bo; Chong, Xiao-meng; Wang, Yan; Hu, Chang-qin
2018-05-01
The accuracy of NIR quantitative models depends on calibration samples with concentration variability. Conventional sample collection methods have shortcomings, especially their time-consuming nature, which remains a bottleneck in the application of NIR models for Process Analytical Technology (PAT) control. A study was performed to solve the problem of sample selection and collection for the construction of NIR quantitative models, using amoxicillin and potassium clavulanate oral dosage forms as examples. The aim was to find a general approach to rapidly construct NIR quantitative models using an NIR spectral library, based on the idea of a universal model [2021]. The NIR spectral library of amoxicillin and potassium clavulanate oral dosage forms consisted of spectra of 377 batches of samples produced by 26 domestic pharmaceutical companies, including tablets, dispersible tablets, chewable tablets, oral suspensions, and granules. The correlation coefficient (rT) was used to indicate the similarity of the spectra. Calibration sample sets were selected from the spectral library according to the median rT of the samples to be analyzed; the rT of the selected samples was close to the median rT, differing from it by 1.0% to 1.5%. We concluded that, compared with conventional methods of building universal models, sample selection is not a problem when constructing NIR quantitative models from a spectral library: sample spectra spanning a suitable concentration range can be collected quickly, and the models constructed through this method are more easily targeted.
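A hedged sketch of the library-selection idea described above: compute the correlation coefficient rT of each library spectrum against the spectra of the samples to be analysed, then keep library spectra whose rT lies within a small window (here 1.5%) of the median rT. The array shapes and the window width are assumptions for illustration.

    import numpy as np

    def select_calibration_set(library, samples, window=0.015):
        """library: (n_lib, n_wavelengths); samples: (n_samp, n_wavelengths)."""
        # mean correlation of each library spectrum with the spectra to be analysed
        r_t = np.array([
            np.mean([np.corrcoef(lib, s)[0, 1] for s in samples])
            for lib in library
        ])
        median_rt = np.median(r_t)
        keep = np.abs(r_t - median_rt) <= window
        return np.where(keep)[0], r_t

    rng = np.random.default_rng(4)
    library = rng.random((377, 1500))    # placeholder for the 377-batch spectral library
    samples = rng.random((5, 1500))      # placeholder spectra of the batch to be analysed
    idx, r_t = select_calibration_set(library, samples)
    print(f"{idx.size} library spectra selected near median rT = {np.median(r_t):.4f}")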
Integrated Environmental Modeling: Quantitative Microbial Risk Assessment
The presentation discusses the need for microbial assessments and presents a road map associated with quantitative microbial risk assessments, through an integrated environmental modeling approach. A brief introduction and the strengths of the current knowledge are illustrated. W...
Evaluating Rapid Models for High-Throughput Exposure Forecasting (SOT)
High throughput exposure screening models can provide quantitative predictions for thousands of chemicals; however these predictions must be systematically evaluated for predictive ability. Without the capability to make quantitative, albeit uncertain, forecasts of exposure, the ...
QUANTITATIVE PROCEDURES FOR NEUROTOXICOLOGY RISK ASSESSMENT
In this project, previously published information on a biologically based dose-response model for brain development was used to quantitatively evaluate critical neurodevelopmental processes, and to assess potential chemical impacts on early brain development. This model has been ex...
Nie, Quandeng; Xu, Xiaoyi; Zhang, Qi; Ma, Yuying; Yin, Zheng; Shang, Luqing
2018-06-07
A three-dimensional quantitative structure-activity relationships model of enterovirus A71 3C protease inhibitors was constructed in this study. The protein-ligand interaction fingerprint was analyzed to generate a pharmacophore model. A predictive and reliable three-dimensional quantitative structure-activity relationships model was built based on the Flexible Alignment of AutoGPA. Moreover, three novel compounds (I-III) were designed and evaluated for their biochemical activity against 3C protease and anti-enterovirus A71 activity in vitro. III exhibited excellent inhibitory activity (IC50 = 0.031 ± 0.005 μM, EC50 = 0.036 ± 0.007 μM). Thus, this study provides a useful quantitative structure-activity relationships model to develop potent inhibitors for enterovirus A71 3C protease. This article is protected by copyright. All rights reserved.
Linking agent-based models and stochastic models of financial markets
Feng, Ling; Li, Baowen; Podobnik, Boris; Preis, Tobias; Stanley, H. Eugene
2012-01-01
It is well-known that financial asset returns exhibit fat-tailed distributions and long-term memory. These empirical features are the main objectives of modeling efforts using (i) stochastic processes to quantitatively reproduce these features and (ii) agent-based simulations to understand the underlying microscopic interactions. After reviewing selected empirical and theoretical evidence documenting the behavior of traders, we construct an agent-based model to quantitatively demonstrate that “fat” tails in return distributions arise when traders share similar technical trading strategies and decisions. Extending our behavioral model to a stochastic model, we derive and explain a set of quantitative scaling relations of long-term memory from the empirical behavior of individual market participants. Our analysis provides a behavioral interpretation of the long-term memory of absolute and squared price returns: They are directly linked to the way investors evaluate their investments by applying technical strategies at different investment horizons, and this quantitative relationship is in agreement with empirical findings. Our approach provides a possible behavioral explanation for stochastic models for financial systems in general and provides a method to parameterize such models from market data rather than from statistical fitting. PMID:22586086
Linking agent-based models and stochastic models of financial markets.
Feng, Ling; Li, Baowen; Podobnik, Boris; Preis, Tobias; Stanley, H Eugene
2012-05-29
It is well-known that financial asset returns exhibit fat-tailed distributions and long-term memory. These empirical features are the main objectives of modeling efforts using (i) stochastic processes to quantitatively reproduce these features and (ii) agent-based simulations to understand the underlying microscopic interactions. After reviewing selected empirical and theoretical evidence documenting the behavior of traders, we construct an agent-based model to quantitatively demonstrate that "fat" tails in return distributions arise when traders share similar technical trading strategies and decisions. Extending our behavioral model to a stochastic model, we derive and explain a set of quantitative scaling relations of long-term memory from the empirical behavior of individual market participants. Our analysis provides a behavioral interpretation of the long-term memory of absolute and squared price returns: They are directly linked to the way investors evaluate their investments by applying technical strategies at different investment horizons, and this quantitative relationship is in agreement with empirical findings. Our approach provides a possible behavioral explanation for stochastic models for financial systems in general and provides a method to parameterize such models from market data rather than from statistical fitting.
ERIC Educational Resources Information Center
Lee, Young-Jin
2017-01-01
Purpose: The purpose of this paper is to develop a quantitative model of problem solving performance of students in the computer-based mathematics learning environment. Design/methodology/approach: Regularized logistic regression was used to create a quantitative model of problem solving performance of students that predicts whether students can…
Quantitative genetic methods depending on the nature of the phenotypic trait.
de Villemereuil, Pierre
2018-01-24
A consequence of the assumptions of the infinitesimal model, one of the most important theoretical foundations of quantitative genetics, is that phenotypic traits are predicted to be most often normally distributed (so-called Gaussian traits). But phenotypic traits, especially those interesting for evolutionary biology, might be shaped according to very diverse distributions. Here, I show how quantitative genetics tools have been extended to account for a wider diversity of phenotypic traits using first the threshold model and then more recently using generalized linear mixed models. I explore the assumptions behind these models and how they can be used to study the genetics of non-Gaussian complex traits. I also comment on three recent methodological advances in quantitative genetics that widen our ability to study new kinds of traits: the use of "modular" hierarchical modeling (e.g., to study survival in the context of capture-recapture approaches for wild populations); the use of aster models to study a set of traits with conditional relationships (e.g., life-history traits); and, finally, the study of high-dimensional traits, such as gene expression. © 2018 New York Academy of Sciences.
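As a compact reminder of the two extensions mentioned above, written here in generic notation (an assumption of this sketch, not the article's own symbols): the threshold model posits a latent Gaussian liability, and the generalized linear mixed model replaces the identity link of the Gaussian case with an arbitrary link g.

    \ell = X\beta + Z a + e, \qquad y = \mathbf{1}\{\ell > \theta\}   (threshold model)
    g\big(E[\,y \mid a\,]\big) = X\beta + Z a, \qquad a \sim N(0,\ \sigma^2_A A)   (generalized linear mixed model)

where a is the vector of additive genetic effects and A the additive relationship matrix.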
A Systematic Quantitative-Qualitative Model: How To Evaluate Professional Services
ERIC Educational Resources Information Center
Yoda, Koji
1973-01-01
The proposed evaluation model provides for the assignment of relative weights to each criterion, and establishes a weighting system for calculating a quantitative-qualitative raw score for each service activity of a faculty member being reviewed. (Author)
A quantitative risk-based model for reasoning over critical system properties
NASA Technical Reports Server (NTRS)
Feather, M. S.
2002-01-01
This position paper suggests the use of a quantitative risk-based model to help support reasoning and decision making that spans many of the critical properties such as security, safety, survivability, fault tolerance, and real-time.
Ezard, Thomas H.G.; Jørgensen, Peter S.; Zimmerman, Naupaka; Chamberlain, Scott; Salguero-Gómez, Roberto; Curran, Timothy J.; Poisot, Timothée
2014-01-01
Proficiency in mathematics and statistics is essential to modern ecological science, yet few studies have assessed the level of quantitative training received by ecologists. To do so, we conducted an online survey. The 937 respondents were mostly early-career scientists who studied biology as undergraduates. We found a clear self-perceived lack of quantitative training: 75% were not satisfied with their understanding of mathematical models; 75% felt that the level of mathematics was “too low” in their ecology classes; 90% wanted more mathematics classes for ecologists; and 95% more statistics classes. Respondents thought that 30% of classes in ecology-related degrees should be focused on quantitative disciplines, which is likely higher than for most existing programs. The main suggestion to improve quantitative training was to relate theoretical and statistical modeling to applied ecological problems. Improving quantitative training will require dedicated, quantitative classes for ecology-related degrees that contain good mathematical and statistical practice. PMID:24688862
Linearization improves the repeatability of quantitative dynamic contrast-enhanced MRI.
Jones, Kyle M; Pagel, Mark D; Cárdenas-Rodríguez, Julio
2018-04-01
The purpose of this study was to compare the repeatabilities of the linear and nonlinear Tofts and reference region models (RRM) for dynamic contrast-enhanced MRI (DCE-MRI). Simulated and experimental DCE-MRI data from 12 rats with a flank tumor of C6 glioma acquired over three consecutive days were analyzed using four quantitative and semi-quantitative DCE-MRI metrics. The quantitative methods used were: 1) linear Tofts model (LTM), 2) non-linear Tofts model (NTM), 3) linear RRM (LRRM), and 4) non-linear RRM (NRRM). The following semi-quantitative metrics were used: 1) maximum enhancement ratio (MER), 2) time to peak (TTP), 3) initial area under the curve (iauc64), and 4) slope. LTM and NTM were used to estimate Ktrans, while LRRM and NRRM were used to estimate Ktrans relative to muscle (RKtrans). Repeatability was assessed by calculating the within-subject coefficient of variation (wSCV) and the percent intra-subject variation (iSV) determined with the Gage R&R analysis. The iSV for RKtrans using LRRM was two-fold lower compared to NRRM at all simulated and experimental conditions. A similar trend was observed for the Tofts model, where LTM was at least 50% more repeatable than the NTM under all experimental and simulated conditions. The semi-quantitative metrics iauc64 and MER were as repeatable as Ktrans and RKtrans estimated by LTM and LRRM, respectively. The iSV values for iauc64 and MER were significantly lower than those for slope and TTP. In simulations and experimental results, linearization improves the repeatability of quantitative DCE-MRI by at least 30%, making it as repeatable as semi-quantitative metrics. Copyright © 2017 Elsevier Inc. All rights reserved.
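For context, a sketch of the standard and linearised Tofts forms being compared above, with C_t the tissue and C_p the plasma contrast-agent concentration; the linearised form is what permits an ordinary linear least-squares fit. These are the textbook expressions, stated here as an assumption about the models the study refers to rather than a quotation from it.

    C_t(t) = K^{trans} \int_0^t C_p(\tau)\, e^{-k_{ep}(t-\tau)}\, d\tau   (nonlinear Tofts)
    C_t(t) = K^{trans} \int_0^t C_p(\tau)\, d\tau \;-\; k_{ep} \int_0^t C_t(\tau)\, d\tau   (linearised Tofts)

The reference region models play the same role with the concentration curve of a reference muscle region in place of C_p, which is why they yield Ktrans relative to muscle (RKtrans) rather than Ktrans itself.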
Li, Dong-tao; Ling, Chang-quan; Zhu, De-zeng
2007-07-01
The aim was to establish a quantitative model for evaluating the degree of the TCM basic syndromes often encountered in patients with primary liver cancer (PLC). Medical literature concerning the clinical investigation and TCM syndromes of PLC was collected and analyzed using an expert symposium method, and 100-mm scaling combined with symptom-degree scoring was applied to establish a quantitative criterion for classifying the degree of symptoms and signs in patients with PLC. Two models, the additive model and the additive-multiplicative model, were established using the analytic hierarchy process (AHP) as the mathematical tool, with specialists estimating the weights of the criteria for evaluating basic syndromes at the various layers. The two models were then verified in clinical practice and the outcomes were compared with fuzzy evaluations made by specialists. Verification on 459 case-times of PLC showed that the coincidence rate between the specialists' evaluations and the additive model was 84.53%, versus 62.75% for the additive-multiplicative model; the difference was statistically significant (P<0.01). The additive model was therefore judged to be the principal model suitable for quantitative evaluation of the degree of TCM basic syndromes in patients with PLC.
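A minimal sketch of the additive model described above: each symptom or sign is scored on the 100-mm scale and the syndrome degree is the weighted sum of those scores with AHP-derived weights. The weights and scores below are invented placeholders, not values from the study.

    def additive_syndrome_score(scores_mm, weights):
        """scores_mm: symptom scores on the 0-100 mm scale; weights: AHP weights summing to 1."""
        assert abs(sum(weights) - 1.0) < 1e-6
        return sum(w * s for w, s in zip(weights, scores_mm))

    # placeholder: three symptoms of one basic syndrome, weighted by hypothetical AHP weights
    scores = [60.0, 35.0, 80.0]
    weights = [0.5, 0.3, 0.2]
    print("syndrome degree score:", additive_syndrome_score(scores, weights))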
Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, and...
Rao, Rohit T; Scherholz, Megerle L; Hartmanshenn, Clara; Bae, Seul-A; Androulakis, Ioannis P
2017-12-05
The use of models in biology has become particularly relevant as it enables investigators to develop a mechanistic framework for understanding the operating principles of living systems as well as in quantitatively predicting their response to both pathological perturbations and pharmacological interventions. This application has resulted in a synergistic convergence of systems biology and pharmacokinetic-pharmacodynamic modeling techniques that has led to the emergence of quantitative systems pharmacology (QSP). In this review, we discuss how the foundational principles of chemical process systems engineering inform the progressive development of more physiologically-based systems biology models.
A quantitative quantum chemical model of the Dewar-Knott color rule for cationic diarylmethanes
NASA Astrophysics Data System (ADS)
Olsen, Seth
2012-04-01
We document the quantitative manifestation of the Dewar-Knott color rule in a four-electron, three-orbital state-averaged complete active space self-consistent field (SA-CASSCF) model of a series of bridge-substituted cationic diarylmethanes. We show that the lowest excitation energies calculated using multireference perturbation theory based on the model are linearly correlated with the development of hole density in an orbital localized on the bridge, and the depletion of pair density in the same orbital. We quantitatively express the correlation in the form of a generalized Hammett equation.
Wang, Kai; Liu, Menglong; Su, Zhongqing; Yuan, Shenfang; Fan, Zheng
2018-08-01
To characterize fatigue cracks, in the undersized stage in particular, preferably in a quantitative and precise manner, a two-dimensional (2D) analytical model is developed for interpreting the modulation mechanism of a "breathing" crack on guided ultrasonic waves (GUWs). In conjunction with a modal decomposition method and a variational principle-based algorithm, the model is capable of analytically depicting the propagating and evanescent waves induced owing to the interaction of probing GUWs with a "breathing" crack, and further extracting linear and nonlinear wave features (e.g., reflection, transmission, mode conversion and contact acoustic nonlinearity (CAN)). With the model, a quantitative correlation between CAN embodied in acquired GUWs and crack parameters (e.g., location and severity) is obtained, whereby a set of damage indices is proposed via which the severity of the crack can be evaluated quantitatively. The evaluation, in principle, does not entail a benchmarking process against baseline signals. As validation, the results obtained from the analytical model are compared with those from finite element simulation, showing good consistency. This has demonstrated accuracy of the developed analytical model in interpreting contact crack-induced CAN, and spotlighted its application to quantitative evaluation of fatigue damage. Copyright © 2018 Elsevier B.V. All rights reserved.
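For context, CAN-based damage indices are commonly built from the second-harmonic content of the acquired GUWs; a typical relative nonlinearity parameter, stated here as an assumption of this sketch rather than the exact indices proposed in the paper, is

    \beta \propto \frac{A_2}{A_1^{2}}

where A_1 and A_2 are the amplitudes of the fundamental and second-harmonic components of the probing wave. The growth of such a parameter with crack severity is the kind of quantitative correlation the analytical model establishes, which is what allows an evaluation that does not rely on baseline signals.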
A quantitative analysis of the F18 flight control system
NASA Technical Reports Server (NTRS)
Doyle, Stacy A.; Dugan, Joanne B.; Patterson-Hine, Ann
1993-01-01
This paper presents an informal quantitative analysis of the F18 flight control system (FCS). The analysis technique combines a coverage model with a fault tree model. To demonstrate the method's extensive capabilities, we replace the fault tree with a digraph model of the F18 FCS, the only model available to us. The substitution shows that while digraphs have primarily been used for qualitative analysis, they can also be used for quantitative analysis. Based on our assumptions and the particular failure rates assigned to the F18 FCS components, we show that coverage does have a significant effect on the system's reliability and thus it is important to include coverage in the reliability analysis.
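A hedged illustration of why coverage matters in a quantitative reliability analysis, using the textbook duplex-with-coverage formula rather than anything specific to the F18 digraph model: with per-channel failure rate lambda and coverage c (the probability that a fault is detected and handled), a redundant pair survives to time t with probability R(t) = e^{-2*lambda*t} + 2*c*e^{-lambda*t}*(1 - e^{-lambda*t}). The failure rate and mission time below are invented.

    import math

    def duplex_reliability(lam, t, coverage):
        # standard duplex-with-imperfect-coverage reliability (solution of the 3-state Markov model)
        return math.exp(-2 * lam * t) + 2 * coverage * math.exp(-lam * t) * (1 - math.exp(-lam * t))

    lam = 1e-4       # hypothetical per-channel failure rate (per hour)
    t = 1000.0       # hypothetical mission time (hours)
    for c in (1.0, 0.99, 0.9):
        print(f"coverage={c:.2f}  reliability={duplex_reliability(lam, t, c):.6f}")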
McGarry, Bryony L; Rogers, Harriet J; Knight, Michael J; Jokivarsi, Kimmo T; Sierra, Alejandra; Gröhn, Olli Hj; Kauppinen, Risto A
2016-08-01
Quantitative T2 relaxation magnetic resonance imaging allows estimation of stroke onset time. We aimed to examine the accuracy of quantitative T1 and quantitative T2 relaxation times alone and in combination to provide estimates of stroke onset time in a rat model of permanent focal cerebral ischemia and map the spatial distribution of elevated quantitative T1 and quantitative T2 to assess tissue status. Permanent middle cerebral artery occlusion was induced in Wistar rats. Animals were scanned at 9.4 T for quantitative T1, quantitative T2, and the trace of the diffusion tensor (Dav) up to 4 h post-middle cerebral artery occlusion. Time courses of differentials of quantitative T1 and quantitative T2 in ischemic and non-ischemic contralateral brain tissue (ΔT1, ΔT2) and volumes of tissue with elevated T1 and T2 relaxation times (f1, f2) were determined. TTC staining was used to highlight permanent ischemic damage. ΔT1, ΔT2, f1, f2, and the volume of tissue with both elevated quantitative T1 and quantitative T2 (V(Overlap)) increased with time post-middle cerebral artery occlusion, allowing stroke onset time to be estimated. V(Overlap) provided the most accurate estimate with an uncertainty of ±25 min. At all time-points, regions with elevated relaxation times were smaller than the areas of Dav-defined ischemia. Stroke onset time can be determined by quantitative T1 and quantitative T2 relaxation times and tissue volumes. Combining quantitative T1 and quantitative T2 provides the most accurate estimate and potentially identifies irreversibly damaged brain tissue. © 2016 World Stroke Organization.
Testing process predictions of models of risky choice: a quantitative model comparison approach
Pachur, Thorsten; Hertwig, Ralph; Gigerenzer, Gerd; Brandstätter, Eduard
2013-01-01
This article presents a quantitative model comparison contrasting the process predictions of two prominent views on risky choice. One view assumes a trade-off between probabilities and outcomes (or non-linear functions thereof) and the separate evaluation of risky options (expectation models). Another view assumes that risky choice is based on comparative evaluation, limited search, aspiration levels, and the forgoing of trade-offs (heuristic models). We derived quantitative process predictions for a generic expectation model and for a specific heuristic model, namely the priority heuristic (Brandstätter et al., 2006), and tested them in two experiments. The focus was on two key features of the cognitive process: acquisition frequencies (i.e., how frequently individual reasons are looked up) and direction of search (i.e., gamble-wise vs. reason-wise). In Experiment 1, the priority heuristic predicted direction of search better than the expectation model (although neither model predicted the acquisition process perfectly); acquisition frequencies, however, were inconsistent with both models. Additional analyses revealed that these frequencies were primarily a function of what Rubinstein (1988) called “similarity.” In Experiment 2, the quantitative model comparison approach showed that people seemed to rely more on the priority heuristic in difficult problems, but to make more trade-offs in easy problems. This finding suggests that risky choice may be based on a mental toolbox of strategies. PMID:24151472
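A hedged sketch of the priority heuristic for two-outcome gain gambles as it is usually stated (Brandstätter et al., 2006): examine minimum gains first, then the probability of the minimum gains, then maximum gains, stopping as soon as a difference exceeds the aspiration level (one tenth of the maximum gain, or 0.1 on the probability scale). The gamble encoding below is an assumption of this sketch, not code from the article.

    def priority_heuristic(gamble_a, gamble_b):
        """Each gamble is a list of (outcome, probability) pairs; gains only."""
        max_gain = max(o for o, _ in gamble_a + gamble_b)
        aspiration = max_gain / 10.0

        min_a, p_min_a = min(gamble_a)           # minimum gain and its probability
        min_b, p_min_b = min(gamble_b)

        if abs(min_a - min_b) >= aspiration:      # reason 1: minimum gains
            return "A" if min_a > min_b else "B"
        if abs(p_min_a - p_min_b) >= 0.1:         # reason 2: probability of the minimum gains
            return "A" if p_min_a < p_min_b else "B"
        return "A" if max(gamble_a)[0] > max(gamble_b)[0] else "B"   # reason 3: maximum gains

    # example: a sure gain of 3 versus an 80% chance of 4 (gains in arbitrary units)
    print(priority_heuristic([(3.0, 1.0)], [(4.0, 0.8), (0.0, 0.2)]))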
ERIC Educational Resources Information Center
Kashyap, Upasana; Mathew, Santhosh
2017-01-01
The purpose of this study was to compare students' performances in a freshmen level quantitative reasoning course (QR) under three different instructional models. A cohort of 155 freshmen students was placed in one of the three models: needing a prerequisite course, corequisite (students enroll simultaneously in QR course and a course that…
Han, Lide; Yang, Jian; Zhu, Jun
2007-06-01
A genetic model was proposed for simultaneously analyzing genetic effects of nuclear, cytoplasm, and nuclear-cytoplasmic interaction (NCI) as well as their genotype by environment (GE) interaction for quantitative traits of diploid plants. In the model, the NCI effects were further partitioned into additive and dominance nuclear-cytoplasmic interaction components. Mixed linear model approaches were used for statistical analysis. On the basis of diallel cross designs, Monte Carlo simulations showed that the genetic model was robust for estimating variance components under several situations without specific effects. Random genetic effects were predicted by an adjusted unbiased prediction (AUP) method. Data on four quantitative traits (boll number, lint percentage, fiber length, and micronaire) in Upland cotton (Gossypium hirsutum L.) were analyzed as a worked example to show the effectiveness of the model.
Chen, Ran; Zhang, Yuntao; Sahneh, Faryad Darabi; Scoglio, Caterina M; Wohlleben, Wendel; Haase, Andrea; Monteiro-Riviere, Nancy A; Riviere, Jim E
2014-09-23
Quantitative characterization of nanoparticle interactions with their surrounding environment is vital for safe nanotechnological development and standardization. A recent quantitative measure, the biological surface adsorption index (BSAI), has demonstrated promising applications in nanomaterial surface characterization and biological/environmental prediction. This paper further advances the approach beyond the application of five descriptors in the original BSAI to address the concentration dependence of the descriptors, enabling better prediction of the adsorption profile and more accurate categorization of nanomaterials based on their surface properties. Statistical analysis on the obtained adsorption data was performed based on three different models: the original BSAI, a concentration-dependent polynomial model, and an infinite dilution model. These advancements in BSAI modeling showed a promising development in the application of quantitative predictive modeling in biological applications, nanomedicine, and environmental safety assessment of nanomaterials.
USDA-ARS?s Scientific Manuscript database
Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, and human health effect...
Prediction of Environmental Impact of High-Energy Materials with Atomistic Computer Simulations
2010-11-01
from a training set of compounds. Other methods include Quantitative Structure-Activity Relationship (QSAR) and Quantitative Structure-Property Relationship (QSPR) approaches... the development of QSPR/QSAR models, in contrast to boiling points and critical parameters derived from empirical correlations, to improve...
NASA Astrophysics Data System (ADS)
Arnold, J.; Gutmann, E. D.; Clark, M. P.; Nijssen, B.; Vano, J. A.; Addor, N.; Wood, A.; Newman, A. J.; Mizukami, N.; Brekke, L. D.; Rasmussen, R.; Mendoza, P. A.
2016-12-01
Climate change narratives for water-resource applications must represent the change signals contextualized by hydroclimatic process variability and uncertainty at multiple scales. Building narratives of plausible change includes assessing uncertainties across GCM structure, internal climate variability, climate downscaling methods, and hydrologic models. Work with this linked modeling chain has dealt mostly with GCM sampling directed separately to either model fidelity (does the model correctly reproduce the physical processes in the world?) or sensitivity (of different model responses to CO2 forcings) or diversity (of model type, structure, and complexity). This leaves unaddressed any interactions among those measures and with other components in the modeling chain used to identify water-resource vulnerabilities to specific climate threats. However, time-sensitive, real-world vulnerability studies typically cannot accommodate a full uncertainty ensemble across the whole modeling chain, so a gap has opened between current scientific knowledge and most routine applications for climate-changed hydrology. To close that gap, the US Army Corps of Engineers, the Bureau of Reclamation, and the National Center for Atmospheric Research are working on techniques to subsample uncertainties objectively across modeling chain components and to integrate results into quantitative hydrologic storylines of climate-changed futures. Importantly, these quantitative storylines are not drawn from a small sample of models or components. Rather, they stem from the more comprehensive characterization of the full uncertainty space for each component. Equally important from the perspective of water-resource practitioners, these quantitative hydrologic storylines are anchored in actual design and operations decisions potentially affected by climate change. This talk will describe part of our work characterizing variability and uncertainty across modeling chain components and their interactions using newly developed observational data, models and model outputs, and post-processing tools for making the resulting quantitative storylines most useful in practical hydrology applications.
ERIC Educational Resources Information Center
Hannan, Michael T.
This document is part of a series of chapters described in SO 011 759. Stochastic models for the sociological analysis of change and the change process in quantitative variables are presented. The author lays groundwork for the statistical treatment of simple stochastic differential equations (SDEs) and discusses some of the continuities of…
DOE Office of Scientific and Technical Information (OSTI.GOV)
George A. Beitel
2004-02-01
In support of a national need to improve the current state-of-the-art in alerting decision makers to the risk of terrorist attack, a quantitative approach employing scientific and engineering concepts to develop a threat-risk index was undertaken at the Idaho National Engineering and Environmental Laboratory (INEEL). As a result of this effort, a set of models has been successfully integrated into a single comprehensive model known as Quantitative Threat-Risk Index Model (QTRIM), with the capability of computing a quantitative threat-risk index on a system level, as well as for the major components of the system. Such a threat-risk index could provide a quantitative variant or basis for either prioritizing security upgrades or updating the current qualitative national color-coded terrorist threat alert.
Impact of implementation choices on quantitative predictions of cell-based computational models
NASA Astrophysics Data System (ADS)
Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.
2017-09-01
'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.
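For orientation, the standard two-dimensional vertex-model energy and the explicit (forward Euler) update into which the implementation parameters above enter can be written in a common formulation (Farhadifar-type energy); the symbols below are assumptions of this sketch rather than the specific model variant analysed in the paper.

    E = \sum_\alpha \frac{K_\alpha}{2}\,(A_\alpha - A_\alpha^{0})^2 \;+\; \sum_{\langle i,j\rangle} \Lambda_{ij}\,\ell_{ij} \;+\; \sum_\alpha \frac{\Gamma_\alpha}{2}\, L_\alpha^{2}
    \mathbf{x}_i(t+\Delta t) = \mathbf{x}_i(t) - \mu\,\Delta t\,\nabla_i E

where A_alpha and L_alpha are cell areas and perimeters, ell_ij are edge lengths, and mu is a mobility. The time step Delta t and the edge-length threshold below which a T1 rearrangement is triggered are exactly the kinds of implementation parameters whose influence on model predictions the study quantifies.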
This tutorial provides instructions for accessing, retrieving, and downloading the following software to install on a host computer in support of Quantitative Microbial Risk Assessment (QMRA) modeling: • SDMProjectBuilder (which includes the Microbial Source Module as part...
QUANTITATIVE PLUTONIUM MICRODISTRIBUTION IN BONE TISSUE OF VERTEBRA FROM A MAYAK WORKER
Lyovkina, Yekaterina V.; Miller, Scott C.; Romanov, Sergey A.; Krahenbuhl, Melinda P.; Belosokhov, Maxim V.
2010-01-01
The purpose was to obtain quantitative data on plutonium microdistribution in different structural elements of human bone tissue for local dose assessment and dosimetric model validation. A sample of the thoracic vertebra was obtained from a former Mayak worker with a rather high plutonium burden. Additional information was obtained on occupational and exposure history, medical history, and measured plutonium content in organs. Plutonium was detected in bone sections from its fission tracks in polycarbonate film using neutron-induced autoradiography. Quantitative analysis of randomly selected microscopic fields on one of the autoradiographs was performed. Data included fission fragment tracks in different bone tissue and surface areas. Quantitative information on plutonium microdistribution in human bone tissue was obtained for the first time. From these data, the ratios of plutonium decays in bone volume to decays on bone surface were determined to be 2.0 for the cortical fraction and 0.4 for the trabecular fraction. The measured ratio of decays in bone volume to decays on bone surface does not coincide with the models recommended by the International Commission on Radiological Protection for the cortical bone fraction. Biokinetic model parameters of extrapulmonary compartments might need to be adjusted once the data set on quantitative plutonium microdistribution is expanded to other human bone types and to other cases with different exposure patterns and types of plutonium. PMID:20838087
A color prediction model for imagery analysis
NASA Technical Reports Server (NTRS)
Skaley, J. E.; Fisher, J. R.; Hardy, E. E.
1977-01-01
A simple model has been devised to selectively construct several points within a scene using multispectral imagery. The model correlates black-and-white density values to color components of diazo film so as to maximize the color contrast of two or three points per composite. The CIE (Commission Internationale de l'Eclairage) color coordinate system is used as a quantitative reference to locate these points in color space. Superimposed on this quantitative reference is a perceptional framework which functionally contrasts color values in a psychophysical sense. This methodology permits a more quantitative approach to the manual interpretation of multispectral imagery while resulting in improved accuracy and lower costs.
ERIC Educational Resources Information Center
Matthews, Kelly E.; Adams, Peter; Goos, Merrilyn
2016-01-01
Application of mathematical and statistical thinking and reasoning, typically referred to as quantitative skills, is essential for university bioscience students. First, this study developed an assessment task intended to gauge graduating students' quantitative skills. The Quantitative Skills Assessment of Science Students (QSASS) was the result,…
Quantitative Adverse Outcome Pathways and Their Application to Predictive Toxicology
A quantitative adverse outcome pathway (qAOP) consists of one or more biologically based, computational models describing key event relationships linking a molecular initiating event (MIE) to an adverse outcome. A qAOP provides quantitative, dose–response, and time-course p...
Quantitative Assessment of Cancer Risk from Exposure to Diesel Engine Emissions
Quantitative estimates of lung cancer risk from exposure to diesel engine emissions were developed using data from three chronic bioassays with Fischer 344 rats. Human target organ dose was estimated with the aid of a comprehensive dosimetry model. This model accounted for rat-hum...
This tutorial provides instructions for accessing, retrieving, and downloading the following software to install on a host computer in support of Quantitative Microbial Risk Assessment (QMRA) modeling: • QMRA Installation • SDMProjectBuilder (which includes the Microbial ...
Performance Theories for Sentence Coding: Some Quantitative Models
ERIC Educational Resources Information Center
Aaronson, Doris; And Others
1977-01-01
This study deals with the patterns of word-by-word reading times over a sentence when the subject must code the linguistic information sufficiently for immediate verbatim recall. A class of quantitative models is considered that would account for reading times at phrase breaks. (Author/RM)
Self-calibrating models for dynamic monitoring and diagnosis
NASA Technical Reports Server (NTRS)
Kuipers, Benjamin
1996-01-01
A method for automatically building qualitative and semi-quantitative models of dynamic systems, and using them for monitoring and fault diagnosis, is developed and demonstrated. The qualitative approach and semi-quantitative method are applied to monitoring observation streams, and to design of non-linear control systems.
NASA Astrophysics Data System (ADS)
Wang, Yunzhi; Qiu, Yuchen; Thai, Theresa; More, Kathleen; Ding, Kai; Liu, Hong; Zheng, Bin
2016-03-01
How to rationally identify epithelial ovarian cancer (EOC) patients who will benefit from bevacizumab or other antiangiogenic therapies is a critical issue in EOC treatment. The motivation of this study is to quantitatively measure adiposity features from CT images and to investigate the feasibility of predicting the potential benefit for EOC patients, with or without maintenance bevacizumab-based chemotherapy, using multivariate statistical models built on quantitative adiposity image features. A dataset of CT images from 59 advanced EOC patients was included; 32 patients received maintenance bevacizumab after primary chemotherapy and the remaining 27 did not. We developed a computer-aided detection (CAD) scheme to automatically segment visceral fat areas (VFA) and subcutaneous fat areas (SFA) and then extracted 7 adiposity-related quantitative features. Three multivariate data analysis models (linear regression, logistic regression, and Cox proportional hazards regression) were applied to investigate the potential association between the model-generated prediction results and the patients' progression-free survival (PFS) and overall survival (OS). With all three statistical models, a statistically significant association was detected between the model-generated results and both clinical outcomes in the group of patients receiving maintenance bevacizumab (p<0.01), whereas no significant association was found for either PFS or OS in the group not receiving maintenance bevacizumab. Therefore, this study demonstrated the feasibility of using statistical prediction models based on quantitative adiposity-related CT image features to generate a new clinical marker and predict the clinical outcome of EOC patients receiving maintenance bevacizumab-based chemotherapy.
Stenner, A Jackson; Fisher, William P; Stone, Mark H; Burdick, Donald S
2013-01-01
Rasch's unidimensional models for measurement show how to connect object measures (e.g., reader abilities), measurement mechanisms (e.g., machine-generated cloze reading items), and observational outcomes (e.g., counts correct on reading instruments). Substantive theory shows what interventions or manipulations to the measurement mechanism can be traded off against a change to the object measure to hold the observed outcome constant. A Rasch model integrated with a substantive theory dictates the form and substance of permissible interventions. Rasch analysis, absent construct theory and an associated specification equation, is a black box in which understanding may be more illusory than not. Finally, the quantitative hypothesis can be tested by comparing theory-based trade-off relations with observed trade-off relations. Only quantitative variables (as measured) support such trade-offs. Note that to test the quantitative hypothesis requires more than manipulation of the algebraic equivalencies in the Rasch model or descriptively fitting data to the model. A causal Rasch model involves experimental intervention/manipulation on either reader ability or text complexity or a conjoint intervention on both simultaneously to yield a successful prediction of the resultant observed outcome (count correct). We conjecture that when this type of manipulation is introduced for individual reader text encounters and model predictions are consistent with observations, the quantitative hypothesis is sustained.
Quantitative self-assembly prediction yields targeted nanomedicines
NASA Astrophysics Data System (ADS)
Shamay, Yosi; Shah, Janki; Işık, Mehtap; Mizrachi, Aviram; Leibold, Josef; Tschaharganeh, Darjus F.; Roxbury, Daniel; Budhathoki-Uprety, Januka; Nawaly, Karla; Sugarman, James L.; Baut, Emily; Neiman, Michelle R.; Dacek, Megan; Ganesh, Kripa S.; Johnson, Darren C.; Sridharan, Ramya; Chu, Karen L.; Rajasekhar, Vinagolu K.; Lowe, Scott W.; Chodera, John D.; Heller, Daniel A.
2018-02-01
Development of targeted nanoparticle drug carriers often requires complex synthetic schemes involving both supramolecular self-assembly and chemical modification. These processes are generally difficult to predict, execute, and control. We describe herein a targeted drug delivery system that is accurately and quantitatively predicted to self-assemble into nanoparticles based on the molecular structures of precursor molecules, which are the drugs themselves. The drugs assemble with the aid of sulfated indocyanines into particles with ultrahigh drug loadings of up to 90%. We devised quantitative structure-nanoparticle assembly prediction (QSNAP) models to identify and validate electrotopological molecular descriptors as highly predictive indicators of nano-assembly and nanoparticle size. The resulting nanoparticles selectively targeted kinase inhibitors to caveolin-1-expressing human colon cancer and autochthonous liver cancer models to yield striking therapeutic effects while avoiding pERK inhibition in healthy skin. This finding enables the computational design of nanomedicines based on quantitative models for drug payload selection.
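The published QSNAP models are not reproduced here; the sketch below only illustrates the general workflow of regressing a nanoparticle property on precomputed molecular descriptors, with a random placeholder matrix standing in for electrotopological indices.

```python
# Illustrative sketch only (not QSNAP): regress nanoparticle size on
# precomputed molecular descriptors and check cross-validated predictivity.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))   # placeholder descriptor matrix (e.g., E-state-like indices)
y = 50 + X @ np.array([8.0, -3.0, 0.0, 5.0, 0.0]) + rng.normal(scale=2.0, size=60)  # nm

model = LinearRegression().fit(X, y)
print(cross_val_score(model, X, y, cv=5, scoring="r2"))  # predictivity of the descriptor set
```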
Quantitative Modeling of Earth Surface Processes
NASA Astrophysics Data System (ADS)
Pelletier, Jon D.
This textbook describes some of the most effective and straightforward quantitative techniques for modeling Earth surface processes. By emphasizing a core set of equations and solution techniques, the book presents state-of-the-art models currently employed in Earth surface process research, as well as a set of simple but practical research tools. Detailed case studies demonstrate application of the methods to a wide variety of processes including hillslope, fluvial, aeolian, glacial, tectonic, and climatic systems. Exercises at the end of each chapter begin with simple calculations and then progress to more sophisticated problems that require computer programming. All the necessary computer codes are available online at www.cambridge.org/9780521855976. Assuming some knowledge of calculus and basic programming experience, this quantitative textbook is designed for advanced geomorphology courses and as a reference book for professional researchers in Earth and planetary science looking for a quantitative approach to Earth surface processes.
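As an illustration of the kind of exercise the text refers to (assumed, not taken from the book), here is a minimal explicit finite-difference solution of the one-dimensional hillslope diffusion equation dz/dt = D d²z/dx².

```python
# Explicit finite-difference solution of 1-D hillslope diffusion,
# dz/dt = D d2z/dx2, with illustrative parameter values.
import numpy as np

D, dx, dt, nsteps = 0.01, 1.0, 10.0, 500   # m^2/yr, m, yr (stable: D*dt/dx^2 = 0.1)
x = np.arange(0.0, 100.0 + dx, dx)
z = np.where(np.abs(x - 50.0) < 10.0, 5.0, 0.0)   # initial scarp-like profile

for _ in range(nsteps):
    z[1:-1] += D * dt / dx**2 * (z[2:] - 2.0 * z[1:-1] + z[:-2])  # interior update
    # fixed-elevation (Dirichlet) boundaries: z[0] and z[-1] unchanged

print(z.max())  # the crest lowers as the scarp diffuses
```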
Thermal-hydraulic study of the moderator flow in the CANDU-6 reactor
NASA Astrophysics Data System (ADS)
Mehdi Zadeh, Foad
Given the size (6.0 m x 7.6 m) and the multiply connected domain that characterize the calandria vessel of CANDU-6 reactors (380 channels in the vessel), the physics governing the behaviour of the moderator fluid is still poorly understood today. Sampling data in an operating reactor would require modifying the configuration of the reactor vessel in order to insert probes. Moreover, the presence of an intense radiation zone prevents the use of common sampling sensors. Consequently, the moderator flow must be studied with either an experimental model or a numerical model. As for the experimental approach, building and operating such facilities is very expensive, and the scaling parameters required to build a reduced-scale experimental model are contradictory. Numerical modelling therefore remains an important alternative. Currently, the nuclear industry uses a numerical approach, known as the porous medium approach, which approximates the domain by a continuous medium in which the tube network is replaced by distributed hydraulic resistances. This model can describe the macroscopic phenomena of the flow, but does not account for local effects that have an impact on the global flow, such as the temperature and velocity distributions near the tubes and hydrodynamic instabilities. In the context of nuclear safety, the local effects around the calandria tubes are of particular interest. Indeed, simulations performed with this approach predict that the flow can take several hydrodynamic configurations, some of which show an asymmetric behaviour within the vessel. This can cause boiling of the moderator on the channel walls. Under such conditions, the reactivity coefficient can vary significantly, leading to an increase in reactor power, which can have major consequences for nuclear safety. A detailed CFD (Computational Fluid Dynamics) model accounting for local effects is therefore necessary. The goal of this research is to model the complex behaviour of the moderator flow within the vessel of a CANDU-6 nuclear reactor, particularly near the calandria tubes. These simulations serve to identify the possible flow configurations in the calandria. This study thus aims to formulate the theoretical bases underlying the macroscopic instabilities of the moderator, i.e., the asymmetric motions that can cause moderator boiling. The challenge of the project is to determine the impact of these flow configurations on the reactivity of the CANDU-6 reactor.
Dosimetry Modeling of Inhaled Formaldehyde: Binning Nasal Flux Predictions for Quantitative Risk Assessment. Kimbell, J.S., Overton, J.H., Subramaniam, R.P., Schlosser, P.M., Morgan, K.T., Conolly, R.B., and Miller, F.J. (2001). Toxicol. Sci. 000, 000:000.
Interspecies e...
NASA Technical Reports Server (NTRS)
Williams, R. M.; Ryan, M. A.; Saipetch, C.; LeDuc, H. G.
1996-01-01
The exchange current observed at porous metal electrodes on sodium or potassium beta-alumina solid electrolytes in alkali metal vapor is quantitatively modeled as a multi-step process, with good agreement between model and experimental results.
Framework for a Quantitative Systemic Toxicity Model (FutureToxII)
EPA’s ToxCast program profiles the bioactivity of chemicals in a diverse set of ~700 high throughput screening (HTS) assays. In collaboration with L’Oreal, a quantitative model of systemic toxicity was developed using no effect levels (NEL) from ToxRefDB for 633 chemicals with HT...
The present study explores the merit of utilizing available pharmaceutical data to construct a quantitative structure-activity relationship (QSAR) for prediction of the fraction of a chemical unbound to plasma protein (Fub) in environmentally relevant compounds. Independent model...
Modeling Environmental Impacts on Cognitive Performance for Artificially Intelligent Entities
2017-06-01
...of the agent behavior model is presented in a military-relevant virtual game environment. We then outline a quantitative approach to test the agent behavior model within the virtual environment. Results show... [Figure: Game View of Hot Environment Condition Displaying Total "f" Cost for Each Searched Waypoint Node]
Bergman, Juraj; Mitrikeski, Petar T.
2015-01-01
Sporulation efficiency in the yeast Saccharomyces cerevisiae is a well-established model for studying quantitative traits. A variety of genes and nucleotides causing different sporulation efficiencies in laboratory, as well as in wild strains, has already been extensively characterised (mainly by reciprocal hemizygosity analysis and nucleotide exchange methods). We applied a different strategy in order to analyze the variation in sporulation efficiency of laboratory yeast strains. Coupling classical quantitative genetic analysis with simulations of phenotypic distributions (a method we call phenotype modelling) enabled us to obtain a detailed picture of the quantitative trait loci (QTLs) relationships underlying the phenotypic variation of this trait. Using this approach, we were able to uncover a dominant epistatic inheritance of loci governing the phenotype. Moreover, a molecular analysis of known causative quantitative trait genes and nucleotides allowed for the detection of novel alleles, potentially responsible for the observed phenotypic variation. Based on the molecular data, we hypothesise that the observed dominant epistatic relationship could be caused by the interaction of multiple quantitative trait nucleotides distributed across a 60-kb QTL region located on chromosome XIV and the RME1 locus on chromosome VII. Furthermore, we propose a model of molecular pathways which possibly underlie the phenotypic variation of this trait. PMID:27904371
Dynamic calibration approach for determining catechins and gallic acid in green tea using LC-ESI/MS.
Bedner, Mary; Duewer, David L
2011-08-15
Catechins and gallic acid are antioxidant constituents of Camellia sinensis, or green tea. Liquid chromatography with both ultraviolet (UV) absorbance and electrospray ionization mass spectrometric (ESI/MS) detection was used to determine catechins and gallic acid in three green tea matrix materials that are commonly used as dietary supplements. The results from both detection modes were evaluated with 14 quantitation models, all of which were based on the analyte response relative to an internal standard. Half of the models were static, where quantitation was achieved with calibration factors that were constant over an analysis set. The other half were dynamic, with calibration factors calculated from interpolated response factor data at each time a sample was injected to correct for potential variations in analyte response over time. For all analytes, the relatively nonselective UV responses were found to be very stable over time and independent of the calibrant concentration; comparable results with low variability were obtained regardless of the quantitation model used. Conversely, the highly selective MS responses were found to vary both with time and as a function of the calibrant concentration. A dynamic quantitation model based on polynomial data-fitting was used to reduce the variability in the quantitative results using the MS data.
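A hedged sketch of the dynamic calibration idea described above: response factors measured at calibrant injection times are fitted with a polynomial and evaluated at each sample's injection time. The times, response factors and function names here are hypothetical.

```python
# Dynamic calibration sketch: a drifting MS response factor (analyte area /
# internal-standard area per unit concentration) is fitted over the analysis
# set and interpolated to each sample's injection time.
import numpy as np

cal_times = np.array([0.0, 2.5, 5.0, 7.5, 10.0])      # h since start of the set
cal_rf    = np.array([1.05, 1.01, 0.96, 0.93, 0.90])  # drifting response factor

coeffs = np.polyfit(cal_times, cal_rf, deg=2)         # polynomial data-fitting

def quantify(sample_time, area_ratio):
    rf_at_t = np.polyval(coeffs, sample_time)         # RF at the injection time
    return area_ratio / rf_at_t                       # concentration relative to IS

print(quantify(6.2, 0.48))
```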
A quantitative model of optimal data selection in Wason's selection task.
Hattori, Masasi
2002-10-01
The optimal data selection model proposed by Oaksford and Chater (1994) successfully formalized Wason's selection task (Wason, 1966). The model, however, involved some questionable assumptions and was also not sufficient as a model of the task because it could not provide quantitative predictions of the card selection frequencies. In this paper, the model was revised to provide quantitative fits to the data. The model can predict the selection frequencies of cards based on a selection tendency function (STF), or conversely, it enables the estimation of subjective probabilities from data. Past experimental data were first re-analysed based on the model. In Experiment 1, the superiority of the revised model was shown. However, when the relationship between antecedent and consequent was forced to deviate from the biconditional form, the model was not supported. In Experiment 2, it was shown that sufficient emphasis on probabilistic information can affect participants' performance. A detailed experimental method to sort participants by probabilistic strategies was introduced. Here, the model was supported by a subgroup of participants who used the probabilistic strategy. Finally, the results were discussed from the viewpoint of adaptive rationality.
NASA Astrophysics Data System (ADS)
Kawata, Y.; Niki, N.; Ohmatsu, H.; Satake, M.; Kusumoto, M.; Tsuchida, T.; Aokage, K.; Eguchi, K.; Kaneko, M.; Moriyama, N.
2014-03-01
In this work, we investigate the potential usefulness of a topic model-based categorization of lung cancers as quantitative CT biomarkers for predicting the recurrence risk after curative resection. The elucidation of the subcategorization of a pulmonary nodule type in CT images is an important preliminary step towards developing nodule management strategies that are specific to each patient. We categorize lung cancers by analyzing volumetric distributions of CT values within lung cancers via a topic model such as latent Dirichlet allocation. Through applying our scheme to 3D CT images of non-small-cell lung cancer (maximum lesion size of 3 cm), we demonstrate the potential usefulness of the topic model-based categorization of lung cancers as quantitative CT biomarkers.
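A minimal sketch of the approach, under the assumption that nodules have already been segmented and their CT values binned into per-nodule histograms; it is not the authors' implementation.

```python
# Topic-model-based categorization sketch: per-nodule CT-value histograms
# treated as "word" counts and decomposed with latent Dirichlet allocation.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(1)
hists = rng.integers(0, 50, size=(40, 64))   # 40 nodules x 64 CT-value bins

lda = LatentDirichletAllocation(n_components=4, random_state=0)
topic_mix = lda.fit_transform(hists)         # per-nodule topic proportions

labels = topic_mix.argmax(axis=1)            # dominant topic as a candidate CT biomarker
print(labels[:10])
```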
NASA Astrophysics Data System (ADS)
Nijzink, Remko C.; Samaniego, Luis; Mai, Juliane; Kumar, Rohini; Thober, Stephan; Zink, Matthias; Schäfer, David; Savenije, Hubert H. G.; Hrachowitz, Markus
2016-03-01
Heterogeneity of landscape features like terrain, soil, and vegetation properties affects the partitioning of water and energy. However, it remains unclear to what extent an explicit representation of this heterogeneity at the sub-grid scale of distributed hydrological models can improve the hydrological consistency and the robustness of such models. In this study, hydrological process complexity arising from sub-grid topography heterogeneity was incorporated into the distributed mesoscale Hydrologic Model (mHM). Seven study catchments across Europe were used to test whether (1) the incorporation of additional sub-grid variability on the basis of landscape-derived response units improves model internal dynamics, (2) the application of semi-quantitative, expert-knowledge-based model constraints reduces model uncertainty, and whether (3) the combined use of sub-grid response units and model constraints improves the spatial transferability of the model. Unconstrained and constrained versions of both the original mHM and mHMtopo, which allows for topography-based sub-grid heterogeneity, were calibrated for each catchment individually following a multi-objective calibration strategy. In addition, four of the study catchments were simultaneously calibrated and their feasible parameter sets were transferred to the remaining three receiver catchments. In a post-calibration evaluation procedure the probabilities of model and transferability improvement, when accounting for sub-grid variability and/or applying expert-knowledge-based model constraints, were assessed on the basis of a set of hydrological signatures. In terms of the Euclidean distance to the optimal model, used as an overall measure of model performance with respect to the individual signatures, the model improvement achieved by introducing sub-grid heterogeneity to mHM in mHMtopo was on average 13 %. The addition of semi-quantitative constraints to mHM and mHMtopo resulted in improvements of 13 and 19 %, respectively, compared to the base case of the unconstrained mHM. The most significant improvements in signature representation were achieved, in particular, for low flow statistics. The application of prior semi-quantitative constraints further improved the partitioning between runoff and evaporative fluxes. In addition, it was shown that suitable semi-quantitative prior constraints in combination with the transfer-function-based regularization approach of mHM can be beneficial for spatial model transferability as the Euclidean distances for the signatures improved on average by 2 %. The effect of semi-quantitative prior constraints combined with topography-guided sub-grid heterogeneity on transferability showed a more variable picture of improvements and deteriorations, but most improvements were observed for low flow statistics.
NASA Astrophysics Data System (ADS)
Qiu, Zeyang; Liang, Wei; Wang, Xue; Lin, Yang; Zhang, Meng
2017-05-01
As an important part of the national energy supply system, natural gas transmission pipelines can cause serious environmental pollution and loss of life and property in case of accident. Third party damage is one of the most significant causes of natural gas pipeline system accidents, and it is very important to establish an effective quantitative risk assessment model of third party damage for reducing the number of gas pipeline operation accidents. Because third party damage accidents have characteristics such as diversity, complexity and uncertainty, this paper establishes a quantitative risk assessment model of third party damage based on the Analytic Hierarchy Process (AHP) and Fuzzy Comprehensive Evaluation (FCE). First, the risk sources of third party damage are identified; the weights of the factors are then determined via an improved AHP, and finally the importance of each factor is calculated with the fuzzy comprehensive evaluation model. The results show that the quantitative risk assessment model is suitable for third party damage of natural gas pipelines, and improvement measures can be put forward to avoid accidents based on the importance of each factor.
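A minimal numeric sketch of the AHP-plus-FCE pipeline described above; the pairwise comparison matrix and fuzzy membership degrees are illustrative placeholders, not values from the paper.

```python
# AHP weights from the principal eigenvector of a pairwise comparison matrix,
# then a fuzzy comprehensive evaluation over risk grades.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])          # 3 illustrative third-party-damage factors
vals, vecs = np.linalg.eig(A)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()                            # normalized factor weights

R = np.array([[0.1, 0.3, 0.4, 0.2],        # each row: one factor's membership in
              [0.2, 0.4, 0.3, 0.1],        # (low, moderate, high, very high)
              [0.4, 0.4, 0.1, 0.1]])
evaluation = w @ R
print(w, evaluation, evaluation.argmax())  # overall grade with the largest membership
```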
A quantitative test of population genetics using spatiogenetic patterns in bacterial colonies.
Korolev, Kirill S; Xavier, João B; Nelson, David R; Foster, Kevin R
2011-10-01
It is widely accepted that population-genetics theory is the cornerstone of evolutionary analyses. Empirical tests of the theory, however, are challenging because of the complex relationships between space, dispersal, and evolution. Critically, we lack quantitative validation of the spatial models of population genetics. Here we combine analytics, on- and off-lattice simulations, and experiments with bacteria to perform quantitative tests of the theory. We study two bacterial species, the gut microbe Escherichia coli and the opportunistic pathogen Pseudomonas aeruginosa, and show that spatiogenetic patterns in colony biofilms of both species are accurately described by an extension of the one-dimensional stepping-stone model. We use one empirical measure, genetic diversity at the colony periphery, to parameterize our models and show that we can then accurately predict another key variable: the degree of short-range cell migration along an edge. Moreover, the model allows us to estimate other key parameters, including effective population size (density) at the expansion frontier. While our experimental system is a simplification of natural microbial communities, we argue that it constitutes proof of principle that the spatial models of population genetics can quantitatively capture organismal evolution.
NASA Astrophysics Data System (ADS)
Lü, Chengxu; Jiang, Xunpeng; Zhou, Xingfan; Zhang, Yinqiao; Zhang, Naiqian; Wei, Chongfeng; Mao, Wenhua
2017-10-01
Wet gluten is a useful quality indicator for wheat, and short wave near infrared spectroscopy (NIRS) is a high performance technique with the advantages of being economical, rapid and nondestructive. To study the feasibility of analyzing wet gluten directly from wheat seed by short wave NIRS, 54 representative wheat seed samples were collected and scanned with a spectrometer. Eight spectral pretreatment methods and a genetic algorithm (GA) variable selection method were used to optimize the analysis. Both quantitative and qualitative models of wet gluten were built by partial least squares regression and discriminant analysis. For quantitative analysis, normalization was the optimal pretreatment method, 17 wet-gluten-sensitive variables were selected by GA, and the GA model performed better than the all-variable model, with R2V = 0.88 and RMSEV = 1.47. For qualitative analysis, automatic weighted least squares baseline correction was the optimal pretreatment method, and all-variable models performed better than GA models. The correct classification rates for the three classes of wet gluten content (<24%, 24-30%, >30%) were 95.45%, 84.52% and 90.00%, respectively. The short wave NIRS technique shows potential for both quantitative and qualitative analysis of wet gluten in wheat seed.
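A hedged sketch of the quantitative side of such an analysis, using synthetic spectra and reference values as placeholders: partial least squares regression of wet gluten content on pretreated NIR spectra with a held-out validation set.

```python
# PLS regression of wet gluten content on (pretreated) NIR spectra;
# spectra and reference values are synthetic placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(54, 200))                             # 54 samples x 200 wavelengths
y = 27 + X[:, 40] * 2.0 + rng.normal(scale=1.0, size=54)   # wet gluten, %

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=5).fit(X_train, y_train)
pred = pls.predict(X_val).ravel()
print(np.sqrt(np.mean((pred - y_val) ** 2)))               # RMSEV-style validation error
```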
Quantitative modelling in cognitive ergonomics: predicting signals passed at danger.
Moray, Neville; Groeger, John; Stanton, Neville
2017-02-01
This paper shows how to combine field observations, experimental data and mathematical modelling to produce quantitative explanations and predictions of complex events in human-machine interaction. As an example, we consider a major railway accident. In 1999, a commuter train passed a red signal near Ladbroke Grove, UK, into the path of an express. We use the Public Inquiry Report, 'black box' data, and accident and engineering reports to construct a case history of the accident. We show how to combine field data with mathematical modelling to estimate the probability that the driver observed and identified the state of the signals, and checked their status. Our methodology can explain the SPAD ('Signal Passed At Danger'), generate recommendations about signal design and placement and provide quantitative guidance for the design of safer railway systems' speed limits and the location of signals. Practitioner Summary: Detailed ergonomic analysis of railway signals and rail infrastructure reveals problems of signal identification at this location. A record of driver eye movements measures attention, from which a quantitative model for signal placement and permitted speeds can be derived. The paper is an example of how to combine field data, basic research and mathematical modelling to solve ergonomic design problems.
Review of GEM Radiation Belt Dropout and Buildup Challenges
NASA Astrophysics Data System (ADS)
Tu, Weichao; Li, Wen; Morley, Steve; Albert, Jay
2017-04-01
In Summer 2015 the US NSF GEM (Geospace Environment Modeling) focus group named "Quantitative Assessment of Radiation Belt Modeling" started the "RB dropout" and "RB buildup" challenges, focused on quantitative modeling of the radiation belt buildups and dropouts. This is a community effort which includes selecting challenge events, gathering model inputs that are required to model the radiation belt dynamics during these events (e.g., various magnetospheric waves, plasmapause and density models, electron phase space density data), simulating the challenge events using different types of radiation belt models, and validating the model results by comparison to in situ observations of radiation belt electrons (from Van Allen Probes, THEMIS, GOES, LANL/GEO, etc). The goal is to quantitatively assess the relative importance of various acceleration, transport, and loss processes in the observed radiation belt dropouts and buildups. Since 2015, the community has selected four "challenge" events under four different categories: "storm-time enhancements", "non-storm enhancements", "storm-time dropouts", and "non-storm dropouts". Model inputs and data for each selected event have been coordinated and shared within the community to establish a common basis for simulations and testing. Modelers within and outside the US with different types of radiation belt models (diffusion-type, diffusion-convection-type, test particle codes, etc.) have participated in our challenge and shared their simulation results and comparison with spacecraft measurements. Significant progress has been made in quantitative modeling of the radiation belt buildups and dropouts as well as assessing the models with new measures of model performance. In this presentation, I will review the activities from our "RB dropout" and "RB buildup" challenges and the progress achieved in understanding radiation belt physics and improving model validation and verification.
Modeling of Receptor Tyrosine Kinase Signaling: Computational and Experimental Protocols.
Fey, Dirk; Aksamitiene, Edita; Kiyatkin, Anatoly; Kholodenko, Boris N
2017-01-01
The advent of systems biology has convincingly demonstrated that the integration of experiments and dynamic modelling is a powerful approach to understanding cellular network biology. Here we present experimental and computational protocols that are necessary for applying this integrative approach to the quantitative studies of receptor tyrosine kinase (RTK) signaling networks. Signaling by RTKs controls multiple cellular processes, including the regulation of cell survival, motility, proliferation, differentiation, glucose metabolism, and apoptosis. We describe methods of model building and training on experimentally obtained quantitative datasets, as well as experimental methods of obtaining quantitative dose-response and temporal dependencies of protein phosphorylation and activities. The presented methods make possible (1) both the fine-grained modeling of complex signaling dynamics and identification of salient, coarse-grained network structures (such as feedback loops) that bring about intricate dynamics, and (2) experimental validation of dynamic models.
Wu, Wensheng; Zhang, Canyang; Lin, Wenjing; Chen, Quan; Guo, Xindong; Qian, Yu; Zhang, Lijuan
2015-01-01
Self-assembled nano-micelles of amphiphilic polymers represent a novel anticancer drug delivery system. However, their full clinical utilization remains challenging because the quantitative structure-property relationship (QSPR) between the polymer structure and the efficacy of micelles as a drug carrier is poorly understood. Here, we developed a series of QSPR models to account for the drug loading capacity of polymeric micelles using the genetic function approximation (GFA) algorithm. These models were further evaluated by internal and external validation and a Y-randomization test in terms of stability and generalization, yielding an optimization model that is applicable to an expanded materials regime. As confirmed by experimental data, the relationship between microstructure and drug loading capacity can be well-simulated, suggesting that our models are readily applicable to the quantitative evaluation of the drug-loading capacity of polymeric micelles. Our work may offer a pathway to the design of formulation experiments.
Ionocovalency and Applications 1. Ionocovalency Model and Orbital Hybrid Scales
Zhang, Yonghe
2010-01-01
Ionocovalency (IC), a quantitative dual nature of the atom, is defined and correlated with quantum-mechanical potential to describe quantitatively the dual properties of the bond. An orbital hybrid IC model scale, IC, and an IC electronegativity scale, XIC, are proposed, wherein the ionicity and the covalent radius are determined by spectroscopy. Being composed of the ionic function I and the covalent function C, the model describes quantitatively the dual properties of bond strengths, charge density and ionic potential. Based on the atomic electron configuration and the various quantum-mechanical built-up dual parameters, the model forms a Dual Method of multiple-functional prediction, which has far more versatile and exceptional applications than traditional electronegativity scales and molecular properties. Hydrogen has unconventional values of IC and XIC, lower than those of boron. The IC model agrees fairly well with the data of bond properties and satisfactorily explains chemical observations of elements throughout the Periodic Table. PMID:21151444
A quantitative description for efficient financial markets
NASA Astrophysics Data System (ADS)
Immonen, Eero
2015-09-01
In this article we develop a control system model for describing efficient financial markets. We define the efficiency of a financial market in quantitative terms by robust asymptotic price-value equality in this model. By invoking the Internal Model Principle of robust output regulation theory we then show that under No Bubble Conditions, in the proposed model, the market is efficient if and only if the following conditions hold true: (1) the traders, as a group, can identify any mispricing in asset value (even if no one single trader can do it accurately), and (2) the traders, as a group, incorporate an internal model of the value process (again, even if no one single trader knows it). This main result of the article, which deliberately avoids the requirement for investor rationality, demonstrates, in quantitative terms, that the more transparent the markets are, the more efficient they are. An extensive example is provided to illustrate the theoretical development.
Quantitative polymerase chain reaction (qPCR) is increasingly being used for the quantitative detection of fecal indicator bacteria in beach water. QPCR allows for same-day health warnings, and its application is being considered as an option for recreational water quality testi...
An experimental approach to identify dynamical models of transcriptional regulation in living cells
NASA Astrophysics Data System (ADS)
Fiore, G.; Menolascina, F.; di Bernardo, M.; di Bernardo, D.
2013-06-01
We describe an innovative experimental approach, and a proof of principle investigation, for the application of System Identification techniques to derive quantitative dynamical models of transcriptional regulation in living cells. Specifically, we constructed an experimental platform for System Identification based on a microfluidic device, a time-lapse microscope, and a set of automated syringes all controlled by a computer. The platform allows delivering a time-varying concentration of any molecule of interest to the cells trapped in the microfluidics device (input) and real-time monitoring of a fluorescent reporter protein (output) at a high sampling rate. We tested this platform on the GAL1 promoter in the yeast Saccharomyces cerevisiae driving expression of a green fluorescent protein (Gfp) fused to the GAL1 gene. We demonstrated that the System Identification platform enables accurate measurements of the input (sugars concentrations in the medium) and output (Gfp fluorescence intensity) signals, thus making it possible to apply System Identification techniques to obtain a quantitative dynamical model of the promoter. We explored and compared linear and nonlinear model structures in order to select the most appropriate to derive a quantitative model of the promoter dynamics. Our platform can be used to quickly obtain quantitative models of eukaryotic promoters, currently a complex and time-consuming process.
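A minimal sketch of the identification step (not the authors' platform code): fitting a discrete-time first-order linear (ARX) model to sampled input and output signals by least squares; the data below are synthetic placeholders.

```python
# ARX identification sketch: y[k+1] = a*y[k] + b*u[k], fitted by least squares
# to sampled input (sugar pulses) and output (Gfp fluorescence) signals.
import numpy as np

rng = np.random.default_rng(3)
u = (np.sin(np.linspace(0, 20, 400)) > 0).astype(float)   # input pulses
y = np.zeros(400)
for k in range(399):                                      # "true" system generating data
    y[k + 1] = 0.95 * y[k] + 0.08 * u[k] + rng.normal(scale=0.01)

Phi = np.column_stack([y[:-1], u[:-1]])                   # regressors
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print(theta)   # estimated (a, b); nonlinear structures can be compared via fit error
```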
Malyarenko, Dariya; Fedorov, Andriy; Bell, Laura; Prah, Melissa; Hectors, Stefanie; Arlinghaus, Lori; Muzi, Mark; Solaiyappan, Meiyappan; Jacobs, Michael; Fung, Maggie; Shukla-Dave, Amita; McManus, Kevin; Boss, Michael; Taouli, Bachir; Yankeelov, Thomas E; Quarles, Christopher Chad; Schmainda, Kathleen; Chenevert, Thomas L; Newitt, David C
2018-01-01
This paper reports on results of a multisite collaborative project launched by the MRI subgroup of Quantitative Imaging Network to assess current capability and provide future guidelines for generating a standard parametric diffusion map Digital Imaging and Communication in Medicine (DICOM) in clinical trials that utilize quantitative diffusion-weighted imaging (DWI). Participating sites used a multivendor DWI DICOM dataset of a single phantom to generate parametric maps (PMs) of the apparent diffusion coefficient (ADC) based on two models. The results were evaluated for numerical consistency among models and true phantom ADC values, as well as for consistency of metadata with attributes required by the DICOM standards. This analysis identified missing metadata descriptive of the sources for detected numerical discrepancies among ADC models. Instead of the DICOM PM object, all sites stored ADC maps as DICOM MR objects, generally lacking designated attributes and coded terms for quantitative DWI modeling. Source-image reference, model parameters, ADC units and scale, deemed important for numerical consistency, were either missing or stored using nonstandard conventions. Guided by the identified limitations, the DICOM PM standard has been amended to include coded terms for the relevant diffusion models. Open-source software has been developed to support conversion of site-specific formats into the standard representation.
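A hedged sketch of the kind of metadata audit described; the list of required DICOM keywords is illustrative, not the amended standard, and the file name is hypothetical.

```python
# Metadata audit sketch: check whether an ADC map stored as a DICOM object
# carries attributes needed for numerical consistency (illustrative keywords).
import pydicom

ds = pydicom.dcmread("adc_map.dcm")   # hypothetical file
required = ["SeriesDescription", "RescaleSlope", "RescaleIntercept",
            "ReferencedImageSequence"]

for keyword in required:
    value = ds.get(keyword)
    print(keyword, "MISSING" if value is None else value)
```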
Comparative Analysis of Predictive Models for Liver Toxicity Using ToxCast Assays and Quantitative Structure-Activity Relationships Jie Liu1,2, Richard Judson1, Matthew T. Martin1, Huixiao Hong3, Imran Shah1 1National Center for Computational Toxicology (NCCT), US EPA, RTP, NC...
Investigation of a redox-sensitive predictive model of mouse embryonic stem cell differentiation via quantitative nuclease protection assays and glutathione redox status Chandler KJ,Hansen JM, Knudsen T,and Hunter ES 1. U.S. Environmental Protection Agency, Research Triangl...
Conflicts Management Model in School: A Mixed Design Study
ERIC Educational Resources Information Center
Dogan, Soner
2016-01-01
The object of this study is to evaluate the reasons for conflicts occurring in school according to perceptions and views of teachers and resolution strategies used for conflicts and to build a model based on the results obtained. In the research, explanatory design including quantitative and qualitative methods has been used. The quantitative part…
ERIC Educational Resources Information Center
Harlow, Lisa L.; Burkholder, Gary J.; Morrow, Jennifer A.
2002-01-01
Used a structural modeling approach to evaluate relations among attitudes, initial skills, and performance in a Quantitative Methods course that involved students in active learning. Results largely confirmed hypotheses offering support for educational reform efforts that propose actively involving students in the learning process, especially in…
Physiologically based pharmacokinetic (PBPK) models bridge the gap between in vitro assays and in vivo effects by accounting for the adsorption, distribution, metabolism, and excretion of xenobiotics, which is especially useful in the assessment of human toxicity. Quantitative st...
Civic Engagement Measures for Latina/o College Students
ERIC Educational Resources Information Center
Alcantar, Cynthia M.
2014-01-01
This chapter uses a critical quantitative approach to study models and measures of civic engagement for Latina/o college students. The chapter describes the importance of a critical quantitative approach to study civic engagement of Latina/o college students, then uses Hurtado et al.'s (Hurtado, S., 2012) model to examine the civic engagement…
Quantitative Model of Systemic Toxicity Using ToxCast and ToxRefDB (SOT)
EPA’s ToxCast program profiles the bioactivity of chemicals in a diverse set of ~700 high throughput screening (HTS) assays. In collaboration with L’Oreal, a quantitative model of systemic toxicity was developed using no effect levels (NEL) from ToxRefDB for 633 chemicals with HT...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-17
... Evaluation and Research (CBER) and suggestions for further development. The public workshop will include... Evaluation and Research (HFM-210), Food and Drug Administration, 1401 Rockville Pike, suite 200N, Rockville... models to generate quantitative estimates of the benefits and risks of influenza vaccination. The public...
Integration of Social Sciences in Terrorism Modelling: Issues, Problems and Recommendations
2007-02-01
qualitative social research: empirical data, patterns, regularities and case studies; terrorism emergence: causes...quantitative and qualitative methods in studies of terrorism, mass violence and conflicts, suggested models of human behaviour response to the threat of...epistemology of social research, demographics, quantitative sociological research, qualitative social research, cultural studies, etc.) can contribute
2013-06-01
measuring numerical risk to the government (Galway, 2004). However, quantitative risk analysis is rarely utilized in DoD acquisition programs because the...quantitative assessment of the EVMS itself. Galway (2004) practically linked project quantitative risk assessment to EVM by focusing on cost...Kindle version]. Retrieved from Amazon.com. Galway, L. (2004, February). Quantitative risk analysis for project management: A critical review
Li, Lin; Xu, Shuo; An, Xin; Zhang, Lu-Da
2011-10-01
In quantitative near infrared spectral analysis, the precision of the samples' measured chemical values sets the theoretical limit on the precision of quantitative analysis with mathematical models. However, the number of samples whose chemical values can be obtained accurately is small. Many models exclude samples without chemical values, and consider only samples with chemical values when modeling sample compositions' contents. To address this problem, a semi-supervised LS-SVR (S2LS-SVR) model is proposed on the basis of LS-SVR, which can utilize samples without chemical values as well as those with chemical values. As with LS-SVR, training this model is equivalent to solving a linear system. Finally, samples of flue-cured tobacco were taken as experimental material, and corresponding quantitative analysis models were constructed for four compositions' contents (total sugar, reducing sugar, total nitrogen and nicotine) with PLS regression, LS-SVR and S2LS-SVR. For the S2LS-SVR model, the average relative errors between actual and predicted values for the four compositions' contents are 6.62%, 7.56%, 6.11% and 8.20%, respectively, and the correlation coefficients are 0.9741, 0.9733, 0.9230 and 0.9486, respectively. Experimental results show the S2LS-SVR model outperforms the other two, which verifies the feasibility and efficiency of the S2LS-SVR model.
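As the abstract notes, LS-SVR training reduces to a single linear solve; the following numpy sketch shows that step for a plain (supervised) LS-SVR with an RBF kernel, with illustrative data and hyper-parameters, and without the semi-supervised extension.

```python
# LS-SVR as one linear system: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
import numpy as np

def lssvr_fit(X, y, gamma=10.0, sigma=1.0):
    n = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))        # RBF kernel matrix
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                    # bias b, dual weights alpha

def lssvr_predict(Xnew, Xtr, b, alpha, sigma=1.0):
    d2 = ((Xnew[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2)) @ alpha + b

Xtr, ytr = np.random.rand(30, 4), np.random.rand(30)
b, alpha = lssvr_fit(Xtr, ytr)
print(lssvr_predict(Xtr[:3], Xtr, b, alpha))
```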
Modeling noisy resonant system response
NASA Astrophysics Data System (ADS)
Weber, Patrick Thomas; Walrath, David Edwin
2017-02-01
In this paper, a theory-based model replicating empirical acoustic resonant signals is presented and studied to understand sources of noise present in acoustic signals. Statistical properties of empirical signals are quantified and a noise amplitude parameter, which models frequency and amplitude-based noise, is created, defined, and presented. This theory-driven model isolates each phenomenon and allows for parameters to be independently studied. Using seven independent degrees of freedom, this model will accurately reproduce qualitative and quantitative properties measured from laboratory data. Results are presented and demonstrate success in replicating qualitative and quantitative properties of experimental data.
A Quantitative Geochemical Target for Modeling the Formation of the Earth and Moon
NASA Technical Reports Server (NTRS)
Boyce, Jeremy W.; Barnes, Jessica J.; McCubbin, Francis M.
2017-01-01
The past decade has been one of geochemical, isotopic, and computational advances that are bringing the laboratory measurements and computational modeling neighborhoods of the Earth-Moon community to ever closer proximity. We are now, however, in a position to become even better neighbors: modelers can generate testable hypotheses for geochemists, and geochemists can provide quantitative targets for modelers. Here we present a robust example of the latter based on Cl isotope measurements of mare basalts.
NASA Astrophysics Data System (ADS)
Ingraham, Patrick Jon
This thesis determines the capability of detecting faint companions in the presence of speckle noise when performing space-based high-contrast imaging through spectral differential imagery (SDI) using a low-order Fabry-Perot etalon as a tunable filter. The performance of such a tunable filter is illustrated through the Tunable Filter Imager (TFI), an instrument designed for the James Webb Space Telescope (JWST). Using a TFI prototype etalon and a custom designed test bed, the etalon's ability to perform speckle-suppression through SDI is demonstrated experimentally. Improvements in contrast vary with separation, ranging from a factor of ˜10 at working angles greater than 11 lambda/D and increasing up to a factor of ˜60 at 5 lambda/D. These measurements are consistent with a Fresnel optical propagation model which shows the speckle suppression capability is limited by the test bed and not the etalon. This result demonstrates that a tunable filter is an attractive option to perform high-contrast imaging through SDI. To explore the capability of space-based SDI using an etalon, we perform an end-to-end Fresnel propagation of JWST and TFI. Using this simulation, a contrast improvement ranging from a factor of ˜7 to ˜100 is predicted, depending on the instrument's configuration. The performance of roll-subtraction is simulated and compared to that of SDI. The SDI capability of the Near-Infrared Imager and Slitless Spectrograph (NIRISS), the science instrument module to replace TFI in the JWST Fine Guidance Sensor is also determined. Using low resolution, multi-band (0.85-2.4 microm) multi-object spectroscopy, 104 objects towards the central region of the Orion Nebular Cluster have been assigned spectral types including 7 new brown dwarfs, and 4 new planetary mass candidates. These objects are useful for determining the substellar initial mass function and for testing evolutionary and atmospheric models of young stellar and substellar objects. Using the measured H band magnitudes, combined with our determined extinction values, the classified objects are used to create an Hertzsprung-Russell diagram for the cluster. Our results indicate a single epoch of star formation beginning ˜1 Myr ago. The initial mass function of the cluster is derived and found to be consistent with the values determined for other young clusters and the galactic disk.
Electrical characterization of composite materials for aircraft fuselages
NASA Astrophysics Data System (ADS)
Tse, William
2011-12-01
Over the last decade or so, the rise in oil prices has been felt all over the world. As one of the most heavily exploited primary energy sources, oil plays a major role in today's world economy, especially in the transport sector. To remain competitive, companies in this sector therefore need to modify their approach to the design of new or improved products. In the aerospace industry, for example, weight reduction in aircraft structures has become a key consideration in the design of new models, making them lighter and more efficient. Within the framework of this project, the research concerns new weight-reducing structural materials used in aircraft. To date, considerable research effort has been devoted to finding good substitutes for the materials presently used (aluminum). Several materials, such as aluminum-lithium and carbon fibre composites, are of great interest as substitutes. The latter offers mechanical properties superior to aluminum, such as low weight and high rigidity, but its electrical properties remain ambiguous. The objective of this project, proposed by Bombardier Core EMC, is to find a conventional way to characterize the composite that allows extraction of its electrical properties (permittivity (epsilonr), conductivity (sigma), etc.). In this Master's thesis, existing studies and characterization approaches for the composite material are presented and discussed. These approaches help anticipate the electrical behaviour of the composite material under test. A comparison between known materials (e.g., aluminum) and the composite is also carried out to gauge its conductivity level, from low frequencies (≈ MHz) up to high frequencies (≈ 12 GHz). Finally, some tests were simulated with electromagnetic modelling software in order to reproduce and validate the experimental results. The thesis ends with a discussion/conclusion presenting the results and validating their integrity. The results allow an estimation of the composite's conductivity and show its attenuation properties as a function of frequency. The tests were made on composite laminated panels without wire mesh, the wire mesh being a copper matrix integrated at the exterior surface of the composite for added electromagnetic protection.
Quantitative prediction of drug side effects based on drug-related features.
Niu, Yanqing; Zhang, Wen
2017-09-01
Unexpected side effects of drugs are a great concern in drug development, and the identification of side effects is an important task. Recently, machine learning methods have been proposed to predict the presence or absence of side effects of interest for drugs, but it is difficult to make accurate predictions for all of them. In this paper, we transform the side effect profiles of drugs into quantitative scores by summing up their side effects with weights. The quantitative scores may measure the dangers of drugs, and thus help to compare the risk of different drugs. Here, we attempt to predict the quantitative scores of drugs, namely quantitative prediction. Specifically, we explore a variety of drug-related features and evaluate their discriminative power for quantitative prediction. Then, we consider several feature combination strategies (direct combination, average scoring ensemble combination) to integrate three informative features: chemical substructures, targets, and treatment indications. Finally, the average scoring ensemble model, which produces the better performance, is used as the final quantitative prediction model. Since weights for side effects are empirical values, we randomly generate different weights in the simulation experiments. The experimental results show that the quantitative method is robust to different weights and produces satisfying results. Although other state-of-the-art methods cannot make quantitative predictions directly, their prediction results can be transformed into quantitative scores. By indirect comparison, the proposed method produces much better results than benchmark methods in quantitative prediction. In conclusion, the proposed method is promising for the quantitative prediction of side effects, and may work cooperatively with existing state-of-the-art methods to reveal the dangers of drugs.
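A minimal sketch of the average scoring ensemble idea: each feature type yields its own prediction of a drug's quantitative side-effect score, and the ensemble averages them; the feature matrices and scores below are random placeholders.

```python
# Average scoring ensemble sketch over three drug feature types
# (substructures, targets, indications), with placeholder data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
feature_sets = [rng.normal(size=(200, 50)) for _ in range(3)]   # 3 feature types
scores = rng.normal(loc=10.0, size=200)                         # weighted side-effect sums

idx_train, idx_test = train_test_split(np.arange(200), test_size=0.25, random_state=0)
preds = []
for X in feature_sets:
    model = Ridge(alpha=1.0).fit(X[idx_train], scores[idx_train])
    preds.append(model.predict(X[idx_test]))

ensemble = np.mean(preds, axis=0)    # average scoring ensemble prediction
print(np.corrcoef(ensemble, scores[idx_test])[0, 1])
```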
Quantiprot - a Python package for quantitative analysis of protein sequences.
Konopka, Bogumił M; Marciniak, Marta; Dyrka, Witold
2017-07-17
The field of protein sequence analysis is dominated by tools rooted in substitution matrices and alignments. A complementary approach is provided by methods of quantitative characterization. A major advantage of the approach is that quantitative properties define a multidimensional solution space, where sequences can be related to each other and differences can be meaningfully interpreted. Quantiprot is a software package in Python, which provides a simple and consistent interface to multiple methods for quantitative characterization of protein sequences. The package can be used to calculate dozens of characteristics directly from sequences or using physico-chemical properties of amino acids. Besides basic measures, Quantiprot performs quantitative analysis of recurrence and determinism in the sequence, calculates the distribution of n-grams and computes the Zipf's law coefficient. We propose three main fields of application of the Quantiprot package. First, quantitative characteristics can be used in alignment-free similarity searches, and in clustering of large and/or divergent sequence sets. Second, a feature space defined by quantitative properties can be used in comparative studies of protein families and organisms. Third, the feature space can be used for evaluating generative models, where a large number of sequences generated by the model can be compared to actually observed sequences.
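A generic illustration (deliberately not using the Quantiprot API, whose exact function names are not reproduced here) of one of the listed characteristics: estimating the Zipf's-law exponent of a sequence's n-gram frequency distribution.

```python
# Zipf's-law exponent of a protein sequence's 2-gram frequency distribution,
# estimated by a log-log linear fit; the sequence is an arbitrary example.
import numpy as np
from collections import Counter

seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQVKVKALPDAQ"
n = 2
ngrams = Counter(seq[i:i + n] for i in range(len(seq) - n + 1))

freqs = np.array(sorted(ngrams.values(), reverse=True), dtype=float)
ranks = np.arange(1, len(freqs) + 1)
slope, intercept = np.polyfit(np.log(ranks), np.log(freqs), 1)
print(-slope)   # Zipf coefficient for this sequence
```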
Creasy, John M; Midya, Abhishek; Chakraborty, Jayasree; Adams, Lauryn B; Gomes, Camilla; Gonen, Mithat; Seastedt, Kenneth P; Sutton, Elizabeth J; Cercek, Andrea; Kemeny, Nancy E; Shia, Jinru; Balachandran, Vinod P; Kingham, T Peter; Allen, Peter J; DeMatteo, Ronald P; Jarnagin, William R; D'Angelica, Michael I; Do, Richard K G; Simpson, Amber L
2018-06-19
This study investigates whether quantitative image analysis of pretreatment CT scans can predict volumetric response to chemotherapy for patients with colorectal liver metastases (CRLM). Patients treated with chemotherapy for CRLM (hepatic artery infusion (HAI) combined with systemic or systemic alone) were included in the study. Patients were imaged at baseline and approximately 8 weeks after treatment. Response was measured as the percentage change in tumour volume from baseline. Quantitative imaging features were derived from the index hepatic tumour on pretreatment CT, and features statistically significant on univariate analysis were included in a linear regression model to predict volumetric response. The regression model was constructed from 70% of the data, while 30% was reserved for testing. Test data were input into the trained model. Model performance was evaluated with mean absolute prediction error (MAPE) and R2. Clinicopathologic factors were assessed for correlation with response. 157 patients were included, split into training (n = 110) and validation (n = 47) sets. MAPE from the multivariate linear regression model was 16.5% (R2 = 0.774) and 21.5% in the training and validation sets, respectively. Stratified by HAI utilisation, MAPE in the validation set was 19.6% for HAI and 25.1% for systemic chemotherapy alone. Clinical factors associated with differences in median tumour response were treatment strategy, systemic chemotherapy regimen, age and KRAS mutation status (p < 0.05). Quantitative imaging features extracted from pretreatment CT are promising predictors of volumetric response to chemotherapy in patients with CRLM. Pretreatment predictors of response have the potential to better select patients for specific therapies. • Colorectal liver metastases (CRLM) are downsized with chemotherapy, but predicting which patients will respond to chemotherapy is currently not possible. • Heterogeneity and enhancement patterns of CRLM can be measured with quantitative imaging. • A prediction model was constructed that predicts volumetric response with approximately 20% error, suggesting that quantitative imaging holds promise to better select patients for specific treatments.
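A hedged sketch of the validation logic described (70/30 split, linear regression on pretreatment features, mean absolute prediction error of volumetric response); the feature values below are placeholders, not the study's data.

```python
# Train/test split, linear regression and MAPE-style evaluation, with
# synthetic placeholder imaging features and volumetric responses.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
X = rng.normal(size=(157, 6))                              # texture/enhancement features
y = -30 + 15 * X[:, 0] + rng.normal(scale=10, size=157)    # % change in tumour volume

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
mape = np.mean(np.abs(model.predict(X_te) - y_te))          # mean absolute error, % points
print(mape, model.score(X_tr, y_tr))                        # cf. reported ~20% and R2
```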
Modelling the co-evolution of indirect genetic effects and inherited variability.
Marjanovic, Jovana; Mulder, Han A; Rönnegård, Lars; Bijma, Piter
2018-03-28
When individuals interact, their phenotypes may be affected not only by their own genes but also by genes in their social partners. This phenomenon is known as Indirect Genetic Effects (IGEs). In aquaculture species and some plants, however, competition not only affects trait levels of individuals, but also inflates variability of trait values among individuals. In the field of quantitative genetics, the variability of trait values has been studied as a quantitative trait in itself, and is often referred to as inherited variability. Such studies, however, consider only the genetic effect of the focal individual on trait variability and do not make a connection to competition. Although the observed phenotypic relationship between competition and variability suggests an underlying genetic relationship, the current quantitative genetic models of IGE and inherited variability do not allow for such a relationship. The lack of quantitative genetic models that connect IGEs to inherited variability limits our understanding of the potential of variability to respond to selection, both in nature and agriculture. Models of trait levels, for example, show that IGEs may considerably change heritable variation in trait values. Currently, we lack the tools to investigate whether this result extends to variability of trait values. Here we present a model that integrates IGEs and inherited variability. In this model, the target phenotype, say growth rate, is a function of the genetic and environmental effects of the focal individual and of the difference in trait value between the social partner and the focal individual, multiplied by a regression coefficient. The regression coefficient is a genetic trait, which is a measure of cooperation; a negative value indicates competition, a positive value cooperation, and an increasing value due to selection indicates the evolution of cooperation. In contrast to the existing quantitative genetic models, our model allows for co-evolution of IGEs and variability, as the regression coefficient can respond to selection. Our simulations show that the model results in increased variability of body weight with increasing competition. When competition decreases, i.e., cooperation evolves, variability becomes significantly smaller. Hence, our model facilitates quantitative genetic studies on the relationship between IGEs and inherited variability. Moreover, our findings suggest that we may have been overlooking an entire level of genetic variation in variability, the one due to IGEs.
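A minimal formalization consistent with the verbal description above (the notation is ours, not necessarily the authors'): for a focal individual i interacting with social partner j,

```latex
P_i \;=\; \mu + A_i + E_i + \psi_i\,(P_j - P_i)
\quad\Longleftrightarrow\quad
P_i \;=\; \frac{\mu + A_i + E_i + \psi_i P_j}{1 + \psi_i},
```

where A_i and E_i are the focal individual's additive genetic and environmental effects and ψ_i is the heritable regression coefficient (negative under competition, positive under cooperation).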
Walker, Martin; Basáñez, María-Gloria; Ouédraogo, André Lin; Hermsen, Cornelus; Bousema, Teun; Churcher, Thomas S
2015-01-16
Quantitative molecular methods (QMMs) such as quantitative real-time polymerase chain reaction (q-PCR), reverse-transcriptase PCR (qRT-PCR) and quantitative nucleic acid sequence-based amplification (QT-NASBA) are increasingly used to estimate pathogen density in a variety of clinical and epidemiological contexts. These methods are often classified as semi-quantitative, yet estimates of reliability or sensitivity are seldom reported. Here, a statistical framework is developed for assessing the reliability (uncertainty) of pathogen densities estimated using QMMs and the associated diagnostic sensitivity. The method is illustrated with quantification of Plasmodium falciparum gametocytaemia by QT-NASBA. The reliability of pathogen (e.g. gametocyte) densities, and the accompanying diagnostic sensitivity, estimated by two contrasting statistical calibration techniques, are compared; a traditional method and a mixed model Bayesian approach. The latter accounts for statistical dependence of QMM assays run under identical laboratory protocols and permits structural modelling of experimental measurements, allowing precision to vary with pathogen density. Traditional calibration cannot account for inter-assay variability arising from imperfect QMMs and generates estimates of pathogen density that have poor reliability, are variable among assays and inaccurately reflect diagnostic sensitivity. The Bayesian mixed model approach assimilates information from replica QMM assays, improving reliability and inter-assay homogeneity, providing an accurate appraisal of quantitative and diagnostic performance. Bayesian mixed model statistical calibration supersedes traditional techniques in the context of QMM-derived estimates of pathogen density, offering the potential to improve substantially the depth and quality of clinical and epidemiological inference for a wide variety of pathogens.
A review of presented mathematical models in Parkinson's disease: black- and gray-box models.
Sarbaz, Yashar; Pourakbari, Hakimeh
2016-06-01
Parkinson's disease (PD), one of the most common movement disorders, is caused by damage to the central nervous system. Despite all of the studies on PD, the mechanism by which its symptoms arise remains unknown. It is still not obvious why damage confined to the substantia nigra pars compacta, a small part of the brain, causes such a wide range of symptoms. Moreover, the causes of the brain damage itself remain to be fully elucidated. An exact understanding of brain function is currently out of reach; engineering tools, on the other hand, seek to describe the behavior and performance of complex systems, and modeling is one of the most important such tools. The development of quantitative models for this disease began in recent decades. These models are effective not only for better understanding the disease, proposing new therapies, and predicting and controlling its course, but also for early diagnosis. Modeling studies fall into two main groups: black-box models and gray-box models. In black-box modeling, the internal structure of the system is disregarded and only the symptom is treated as the output. Besides supporting quantitative analysis, such models increase our knowledge of the disorder's behavior and symptoms. Gray-box models, in contrast, represent the structures involved in the appearance of the symptoms as well as the final disease symptoms. These models can save researchers time and cost and help them select appropriate treatment mechanisms among the possible options. This review first surveys studies on the quantitative analysis of PD, then reviews quantitative models of PD, and finally summarizes, to some extent, the results obtained with such models.
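As a toy illustration of the black-box idea (an assumption-laden example, not drawn from any specific PD model), the sketch below fits a purely data-driven autoregressive model to a simulated tremor-like symptom series, using only the observed output and no physiological structure.

```python
# Toy illustration of black-box modeling: fit an autoregressive (AR) model to a
# simulated "tremor-like" symptom series, using only the observed output. The
# signal and AR order are assumptions made for the example.
import numpy as np

rng = np.random.default_rng(42)
n, fs = 2000, 100.0
t = np.arange(n) / fs
signal = np.sin(2 * np.pi * 5.0 * t) + 0.3 * rng.normal(size=n)   # ~5 Hz tremor + noise

p = 6                                                              # AR model order
X = np.column_stack([signal[p - k - 1 : n - k - 1] for k in range(p)])
y = signal[p:]
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)                     # least-squares AR fit

pred = X @ coeffs
print("AR(6) coefficients:", np.round(coeffs, 3))
print("one-step-ahead prediction R^2:", round(1 - np.var(y - pred) / np.var(y), 3))
```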
NASA Astrophysics Data System (ADS)
Noh, S. J.; Lee, J. H.; Lee, S.; Zhang, Y.; Seo, D. J.
2017-12-01
Hurricane Harvey was one of the most extreme weather events in Texas history and left significant damage in the Houston and adjoining coastal areas. To better understand the relative contributions to urban flooding of the extreme amount and spatial extent of rainfall, the unique geography, land use and storm surge, high-resolution water modeling that fully resolves natural and man-made components is necessary. In this presentation, we reconstruct the spatiotemporal evolution of inundation during Hurricane Harvey using hyper-resolution modeling and quantitative image reanalysis. The two-dimensional urban flood model used is based on the dynamic wave approximation and 10 m-resolution terrain data, and is forced by radar-based multisensor quantitative precipitation estimates. The model domain includes Buffalo, Brays, Greens and White Oak Bayous in Houston. The model is run using hybrid parallel computing. To evaluate the dynamic inundation mapping, we combine various qualitative crowdsourced images and video footage with LiDAR-based terrain data.
Quantitative risk assessment system (QRAS)
NASA Technical Reports Server (NTRS)
Tan, Zhibin (Inventor); Mosleh, Ali (Inventor); Weinstock, Robert M (Inventor); Smidts, Carol S (Inventor); Chang, Yung-Hsien (Inventor); Groen, Francisco J (Inventor); Swaminathan, Sankaran (Inventor)
2001-01-01
A quantitative risk assessment system (QRAS) builds a risk model of a system for which risk of failure is being assessed, then analyzes the risk of the system corresponding to the risk model. The QRAS performs sensitivity analysis of the risk model by altering fundamental components and quantifications built into the risk model, then re-analyzes the risk of the system using the modifications. More particularly, the risk model is built by building a hierarchy, creating a mission timeline, quantifying failure modes, and building/editing event sequence diagrams. Multiplicities, dependencies, and redundancies of the system are included in the risk model. For analysis runs, a fixed baseline is first constructed and stored. This baseline contains the lowest level scenarios, preserved in event tree structure. The analysis runs, at any level of the hierarchy and below, access this baseline for risk quantitative computation as well as ranking of particular risks. A standalone Tool Box capability exists, allowing the user to store application programs within QRAS.
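The sketch below illustrates, with a hypothetical three-branch event tree, the kind of computation such a tool performs when quantifying an event sequence diagram: end-state probabilities are products of branch probabilities along each path and can then be ranked. The structure and numbers are not from QRAS.

```python
# Hedged sketch of event-tree quantification: each end state's probability is the
# product of branch probabilities along its path, and scenarios are ranked by
# probability. The tree and numbers are hypothetical.
from itertools import product

p_init = 1e-3                        # initiating event frequency (per mission)
branches = {                         # probability that each mitigation succeeds
    "detection": 0.95,
    "isolation": 0.90,
    "backup":    0.80,
}

end_states = []
for outcome in product([True, False], repeat=len(branches)):
    p = p_init
    for (name, p_success), ok in zip(branches.items(), outcome):
        p *= p_success if ok else (1.0 - p_success)
    severity = "OK" if all(outcome) else ("LOSS" if not outcome[-1] else "DEGRADED")
    end_states.append((outcome, p, severity))

# Rank scenarios by probability, as a baseline for later sensitivity studies.
for outcome, p, sev in sorted(end_states, key=lambda s: -s[1]):
    print(outcome, f"{p:.2e}", sev)
print("total loss probability:", sum(p for _, p, s in end_states if s == "LOSS"))
```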
Mid-Frequency Reverberation Measurements with Full Companion Environmental Support
2014-12-30
acoustic modeling is based on measured stratification and observed wave amplitudes on the New Jersey shelf during the SWARM experiment.3 Ray tracing is...wave model then gives quantitative results for the clutter. 2. Swarm NLIW model and ray tracing Nonlinear internal waves are very common on the...receiver in order to give quantitative clutter to reverberation. To picture the mechanism, a set of rays was launched from a source at range zero and
ERIC Educational Resources Information Center
Hannan, Michael T.
This document is part of a series of chapters described in SO 011 759. Addressing the problems of studying change and the change process, the report argues that sociologists should study coupled changes in qualitative and quantitative outcomes (e.g., marital status and earnings). The author presents a model for sociological studies of change in…
2013-06-30
QUANTITATIVE RISK ANALYSIS The use of quantitative cost risk analysis tools can be valuable in measuring numerical risk to the government (Galway, 2004...assessment of the EVMS itself. Galway (2004) practically linked project quantitative risk assessment to EVM by focusing on cost, schedule, and...www.amazon.com Galway, L. (2004, February). Quantitative risk analysis for project management: A critical review (RAND Working Paper WR-112-RC
Fielding-Miller, Rebecca; Dunkle, Kristin L; Cooper, Hannah L F; Windle, Michael; Hadley, Craig
2016-01-01
Transactional sex is associated with increased risk of HIV and gender-based violence in southern Africa and around the world. However, the typical quantitative operationalization, "the exchange of gifts or money for sex," can be at odds with the wide array of relationship types and motivations described in qualitative explorations. To build on the strengths of both qualitative and quantitative research streams, we used cultural consensus models to identify distinct models of transactional sex in Swaziland. The process allowed us to build and validate emic scales of transactional sex, while identifying key informants for qualitative interviews within each model to contextualize women's experiences and risk perceptions. We used logistic and multinomial logistic regression models to measure associations with condom use and social status outcomes. Fieldwork was conducted between November 2013 and December 2014 in the Hhohho and Manzini regions. We identified three distinct models of transactional sex in Swaziland based on 124 Swazi women's emic valuation of what they hoped to receive in exchange for sex with their partners. In a clinic-based survey (n = 406), consensus model scales were more sensitive to condom use than the etic definition. Model consonance had distinct effects on social status for the three different models. Transactional sex is better measured as an emic spectrum of expectations within a relationship, rather than as an etic binary relationship type. Cultural consensus models allowed us to blend qualitative and quantitative approaches to create an emically valid quantitative scale grounded in qualitative context. Copyright © 2015 Elsevier Ltd. All rights reserved.
Quantitative Procedures for the Assessment of Quality in Higher Education Institutions.
ERIC Educational Resources Information Center
Moran, Tom; Rowse, Glenwood
The development of procedures designed to provide quantitative assessments of quality in higher education institutions are reviewed. These procedures employ a systems framework and utilize quantitative data to compare institutions or programs of similar types with one another. Three major elements essential in the development of models focusing on…
There are a number of risk management decisions, which range from prioritization for testing to quantitative risk assessments. The utility of in vitro studies in these decisions depends on how well the results of such data can be qualitatively and quantitatively extrapolated to i...
SSBD: a database of quantitative data of spatiotemporal dynamics of biological phenomena
Tohsato, Yukako; Ho, Kenneth H. L.; Kyoda, Koji; Onami, Shuichi
2016-01-01
Motivation: Rapid advances in live-cell imaging analysis and mathematical modeling have produced a large amount of quantitative data on spatiotemporal dynamics of biological objects ranging from molecules to organisms. There is now a crucial need to bring these large amounts of quantitative biological dynamics data together centrally in a coherent and systematic manner. This will facilitate the reuse of this data for further analysis. Results: We have developed the Systems Science of Biological Dynamics database (SSBD) to store and share quantitative biological dynamics data. SSBD currently provides 311 sets of quantitative data for single molecules, nuclei and whole organisms in a wide variety of model organisms from Escherichia coli to Mus musculus. The data are provided in Biological Dynamics Markup Language format and also through a REST API. In addition, SSBD provides 188 sets of time-lapse microscopy images from which the quantitative data were obtained and software tools for data visualization and analysis. Availability and Implementation: SSBD is accessible at http://ssbd.qbic.riken.jp. Contact: sonami@riken.jp PMID:27412095
The new AP Physics exams: Integrating qualitative and quantitative reasoning
NASA Astrophysics Data System (ADS)
Elby, Andrew
2015-04-01
When physics instructors and education researchers emphasize the importance of integrating qualitative and quantitative reasoning in problem solving, they usually mean using those types of reasoning serially and separately: first students should analyze the physical situation qualitatively/conceptually to figure out the relevant equations, then they should process those equations quantitatively to generate a solution, and finally they should use qualitative reasoning to check that answer for plausibility (Heller, Keith, & Anderson, 1992). The new AP Physics 1 and 2 exams will, of course, reward this approach to problem solving. But one kind of free response question will demand and reward a further integration of qualitative and quantitative reasoning, namely mathematical modeling and sense-making--inventing new equations to capture a physical situation and focusing on proportionalities, inverse proportionalities, and other functional relations to infer what the equation "says" about the physical world. In this talk, I discuss examples of these qualitative-quantitative translation questions, highlighting how they differ from both standard quantitative and standard qualitative questions. I then discuss the kinds of modeling activities that can help AP and college students develop these skills and habits of mind.
SSBD: a database of quantitative data of spatiotemporal dynamics of biological phenomena.
Tohsato, Yukako; Ho, Kenneth H L; Kyoda, Koji; Onami, Shuichi
2016-11-15
Rapid advances in live-cell imaging analysis and mathematical modeling have produced a large amount of quantitative data on spatiotemporal dynamics of biological objects ranging from molecules to organisms. There is now a crucial need to bring these large amounts of quantitative biological dynamics data together centrally in a coherent and systematic manner. This will facilitate the reuse of this data for further analysis. We have developed the Systems Science of Biological Dynamics database (SSBD) to store and share quantitative biological dynamics data. SSBD currently provides 311 sets of quantitative data for single molecules, nuclei and whole organisms in a wide variety of model organisms from Escherichia coli to Mus musculus. The data are provided in Biological Dynamics Markup Language format and also through a REST API. In addition, SSBD provides 188 sets of time-lapse microscopy images from which the quantitative data were obtained and software tools for data visualization and analysis. SSBD is accessible at http://ssbd.qbic.riken.jp. Contact: sonami@riken.jp. © The Author 2016. Published by Oxford University Press.
Jin, Yan; Huang, Jing-feng; Peng, Dai-liang
2009-01-01
Ecological compensation is becoming one of the key, multidisciplinary issues in the field of resource and environmental management. Considering the changing relationship between gross domestic product (GDP) and ecological capital (EC) estimated from remote sensing, we construct a new quantitative estimation model for ecological compensation, using the county as the study unit, and determine a standard value to evaluate ecological compensation from 2001 to 2004 in Zhejiang Province, China. Spatial differences in ecological compensation were significant among the counties and districts. This model fills a gap in the field of quantitative evaluation of regional ecological compensation and provides a feasible way to reconcile conflicting benefits among the economic, social, and ecological sectors. PMID:19353749
Quantifying the vascular response to ischemia with speckle variance optical coherence tomography
Poole, Kristin M.; McCormack, Devin R.; Patil, Chetan A.; Duvall, Craig L.; Skala, Melissa C.
2014-01-01
Longitudinal monitoring techniques for preclinical models of vascular remodeling are critical to the development of new therapies for pathological conditions such as ischemia and cancer. In models of skeletal muscle ischemia in particular, there is a lack of quantitative, non-invasive and long-term assessment of vessel morphology. Here, we have applied speckle variance optical coherence tomography (OCT) methods to quantitatively assess vascular remodeling and growth in a mouse model of peripheral arterial disease. This approach was validated on two different mouse strains known to have disparate rates and abilities of recovering following induction of hind limb ischemia. These results establish the potential for speckle variance OCT as a tool for quantitative, preclinical screening of pro- and anti-angiogenic therapies. PMID:25574425
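A minimal sketch of the speckle variance computation that underlies this kind of OCT angiography is given below, using synthetic frames: the inter-frame intensity variance is high where flowing scatterers decorrelate the speckle and low in static tissue. The data and threshold are purely illustrative.

```python
# Minimal sketch of speckle variance OCT: the variance of the intensity across N
# repeated B-scans is high where moving blood decorrelates the speckle and low in
# static tissue. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
n_frames, ny, nx = 8, 64, 64
static = rng.rayleigh(1.0, size=(ny, nx))                    # static speckle pattern
frames = np.repeat(static[None], n_frames, axis=0) + rng.normal(0, 0.05, (n_frames, ny, nx))

vessel = np.zeros((ny, nx), bool)
vessel[30:34, :] = True                                      # a horizontal "vessel"
frames[:, vessel] = rng.rayleigh(1.0, size=(n_frames, vessel.sum()))  # decorrelated speckle

sv = frames.var(axis=0)                                      # speckle variance image
threshold = sv.mean() + 2 * sv.std()
print("mean SV inside vessel :", round(float(sv[vessel].mean()), 4))
print("mean SV outside vessel:", round(float(sv[~vessel].mean()), 4))
print("vessel pixels detected:", int((sv > threshold)[vessel].sum()), "/", int(vessel.sum()))
```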
Monakhova, Yulia B; Mushtakova, Svetlana P
2017-05-01
A fast and reliable spectroscopic method for multicomponent quantitative analysis of targeted compounds with overlapping signals in complex mixtures has been established. The innovative analytical approach is based on the preliminary chemometric extraction of qualitative and quantitative information from UV-vis and IR spectral profiles of a calibration system using independent component analysis (ICA). Using this quantitative model and the ICA resolution results from spectral profiling of "unknown" model mixtures, the absolute analyte concentrations in multicomponent mixtures and authentic samples were then calculated without reference solutions. Good recoveries, generally between 95% and 105%, were obtained. The method can be applied to any spectroscopic data that obey the Beer-Lambert-Bouguer law. The proposed method was tested on the analysis of vitamins and caffeine in energy drinks and of aromatic hydrocarbons in motor fuel, with errors within 10%. The results demonstrated that the proposed method is a promising tool for rapid simultaneous multicomponent analysis in the case of spectral overlap and the absence or inaccessibility of reference materials.
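The sketch below illustrates the general ICA-then-calibrate idea on synthetic overlapping spectra (it is not the authors' exact procedure): FastICA resolves the calibration spectra into independent components, and each analyte is calibrated against the best-matching component weights. Spectra and concentrations are invented.

```python
# Illustrative ICA-based resolution of overlapping spectra, then calibration of
# each analyte against the best-matching independent component. Synthetic data;
# not the authors' exact procedure.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
wl = np.linspace(400, 700, 300)                                  # "wavelength" axis
pure_a = np.exp(-((wl - 500) / 20.0) ** 2)                       # overlapping bands
pure_b = np.exp(-((wl - 530) / 35.0) ** 2)

conc = rng.uniform(0.1, 1.0, size=(40, 2))                       # calibration set
spectra = conc @ np.vstack([pure_a, pure_b]) + rng.normal(0, 0.005, (40, wl.size))

ica = FastICA(n_components=2, random_state=0)
weights = ica.fit_transform(spectra)                             # per-sample component weights

# Simple univariate calibration of each analyte against the best-matching component.
for analyte in range(2):
    r = [abs(np.corrcoef(weights[:, c], conc[:, analyte])[0, 1]) for c in range(2)]
    c = int(np.argmax(r))
    slope, intercept = np.polyfit(weights[:, c], conc[:, analyte], 1)
    pred = slope * weights[:, c] + intercept
    rmse = np.sqrt(np.mean((pred - conc[:, analyte]) ** 2))
    print(f"analyte {analyte}: matched component {c}, |r| = {max(r):.3f}, RMSE = {rmse:.3f}")
```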
NASA Astrophysics Data System (ADS)
Girard-Lauriault, Pierre-Luc
Nitrogen (N)-containing polymer surfaces are attractive in numerous technological contexts, for example in biomedical applications. Here, we have used an atmospheric-pressure dielectric barrier discharge (DBD) apparatus to deposit novel families of N-rich plasma polymers, designated PP:N, using mixtures of three different hydrocarbon precursors (methane, ethylene, and acetylene) in nitrogen at varying respective gas flow ratios, typically parts per thousand. In preparation for subsequent cell-surface interaction studies, the first part of this research focuses on the chemical mapping of those materials, with specific attention to (semi-)quantitative analyses of functional groups. Well-established and some lesser-known analytical techniques have been combined to provide the best possible chemical and structural characterisations of these three families of PP:N thin films; namely, X-ray photoelectron spectroscopy (XPS), near-edge X-ray absorption fine structure (NEXAFS), Fourier transform infrared spectroscopy (FTIR), contact angle goniometry (CAG), and elemental analysis (EA). High, "tunable" total nitrogen content was measured by both XPS and EA (between 6% and 25% by EA, or between 10% and 40% by XPS, which cannot detect hydrogen). Chemical derivatisation with 4-trifluoromethylbenzaldehyde (TFBA) enabled measurements of primary amine concentrations, the functionality of greatest bio-technological interest, which were found to account for 5% to 20% of the total bound nitrogen. By combining the above-mentioned complementary methods, we were further able to determine the complete chemical formulae, the degrees of unsaturation, and other major chemical functionalities in PP:N film structures. Several of these features are believed to be without precedent in the literature on hydrocarbon plasma polymers, for example measurements of absolute compositions (including hydrogen) and of unsaturation. It was shown that besides amines, nitriles, isonitriles and imines are the main nitrogenated functional groups in those materials. In the second part of this work, we have studied the interaction of these well-characterised surfaces with living cells. We first demonstrated the adhesion, on both uniformly coated and micro-patterned PP:N deposits on BOPP, of three different cell types, namely growth plate and articular chondrocytes, as well as U937 monocytes, the latter of which do not adhere at all to the synthetic polymers used in tissue culture. In an effort to gain insight into cell adhesion mechanisms, we conducted a series of experiments in which we cultured U937 monocytes on PP:N, as well as on two other families of chemically well-characterised N-rich thin films, the latter deposited by low-pressure RF plasma and by vacuum ultra-violet (VUV) photo-polymerisation ("PVP:N" films). It was first shown that there exist sharply defined ("critical") surface-chemical conditions that are necessary to induce cell adhesion. By comparing the extensively characterised film chemistries at the "critical" conditions, we have clearly demonstrated the dominant role of primary amines in the cell adhesion mechanism. In the final aspect of this work, quantitative real-time reverse transcription-polymerase chain reaction (real-time RT-PCR) experiments were conducted using U937 cells that had been made to adhere on PP:N and PVP:N materials for up to 24 h.
We have shown that the adhesion of U937 monocytes to PP:N and PVP:N surfaces induced a transient expression of cytokines, markers of macrophage activation, as well as a sustained expression of PPARgamma and ICAM-1, implicated in the adhesion and retention of monocytes. Keywords: biomaterials; dielectric barrier discharges (DBD); deposition; plasma polymerisation; ESCA/XPS; NEXAFS; FTIR; primary amines; cell adhesion; gene expression.
[A new method of processing quantitative PCR data].
Ke, Bing-Shen; Li, Guang-Yun; Chen, Shi-Min; Huang, Xiang-Yan; Chen, Ying-Jian; Xu, Jun
2003-05-01
Standard PCR can no longer satisfy the needs of biotechnology development and clinical research. After extensive kinetic studies, the PE company found that there is a linear relation between the initial template number and the cycle number at which the accumulating fluorescent product becomes detectable; they therefore developed a quantitative PCR technique for use in the PE7700 and PE5700. However, the error of this technique is too large to satisfy the needs of biotechnology development and clinical research, and a better quantitative PCR technique is needed. The mathematical model presented here draws on results from related fields and is based on the PCR principle and a careful analysis of the molecular relationships among the main components of the PCR reaction system. The model describes the functional relation between product quantity (or fluorescence intensity) and the initial template number and other reaction conditions, and accurately reflects how PCR product molecules accumulate. Accurate quantitative PCR analysis can be performed using this functional relation, and the accumulated PCR product quantity can be obtained from the initial template number. When this model is used for quantitative PCR analysis, the result error depends only on the accuracy of the fluorescence intensity measurement, that is, on the instrument used. For example, when the fluorescence intensity is accurate to 6 digits and the template size is between 100 and 1,000,000, the accuracy of the quantitative result exceeds 99%. Under the same conditions and on the same instrument, the result error differs markedly between analysis methods; processing the data with the proposed quantitative analysis system gives results about 80 times more accurate than the CT method.
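The exponential relation the abstract builds on can be written down in a few lines. The sketch below uses F_n = F_0(1 + E)^n with an assumed efficiency and threshold to show how the initial template number maps to a threshold cycle (Ct) and back; the numbers are placeholders for illustration only.

```python
# Minimal sketch of the standard exponential qPCR relation: fluorescence after n
# cycles is roughly F_n = F_0 * (1 + E)^n, so the initial template amount can be
# recovered from the cycle at which a fixed threshold is crossed. Efficiency and
# threshold values are assumptions for illustration.
import numpy as np

efficiency = 0.95              # amplification efficiency E (1.0 = perfect doubling)
threshold = 1e9                # detection threshold in arbitrary fluorescence units

def ct(n0, e=efficiency, thr=threshold):
    """Fractional cycle number at which n0 starting copies reach the threshold."""
    return np.log(thr / n0) / np.log(1.0 + e)

def n0_from_ct(ct_value, e=efficiency, thr=threshold):
    """Invert the model: estimate starting copies from an observed Ct."""
    return thr / (1.0 + e) ** ct_value

for copies in (1e2, 1e4, 1e6):
    c = ct(copies)
    print(f"N0 = {copies:>9.0f}  Ct = {c:6.2f}  recovered N0 = {n0_from_ct(c):9.0f}")
```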
Doshi, Ankur M; Ream, Justin M; Kierans, Andrea S; Bilbily, Matthew; Rusinek, Henry; Huang, William C; Chandarana, Hersh
2016-03-01
The purpose of this study was to determine whether qualitative and quantitative MRI feature analysis is useful for differentiating type 1 from type 2 papillary renal cell carcinoma (PRCC). This retrospective study included 21 type 1 and 17 type 2 PRCCs evaluated with preoperative MRI. Two radiologists independently evaluated various qualitative features, including signal intensity, heterogeneity, and margin. For the quantitative analysis, a radiology fellow and a medical student independently drew 3D volumes of interest over the entire tumor on T2-weighted HASTE images, apparent diffusion coefficient parametric maps, and nephrographic phase contrast-enhanced MR images to derive first-order texture metrics. Qualitative and quantitative features were compared between the groups. For both readers, qualitative features with greater frequency in type 2 PRCC included heterogeneous enhancement, indistinct margin, and T2 heterogeneity (all, p < 0.035). Indistinct margins and heterogeneous enhancement were independent predictors (AUC, 0.822). Quantitative analysis revealed that apparent diffusion coefficient, HASTE, and contrast-enhanced entropy were greater in type 2 PRCC (p < 0.05; AUC, 0.682-0.716). A combined quantitative and qualitative model had an AUC of 0.859. Qualitative features within the model had interreader concordance of 84-95%, and the quantitative data had intraclass coefficients of 0.873-0.961. Qualitative and quantitative features can help discriminate between type 1 and type 2 PRCC. Quantitative analysis may capture useful information that complements the qualitative appearance while benefiting from high interobserver agreement.
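As an illustration of the first-order texture metrics mentioned here, the sketch below computes the Shannon entropy of the intensity histogram in two synthetic volumes of interest, one homogeneous and one heterogeneous; the data are invented and not derived from MRI.

```python
# Sketch of a first-order texture metric: Shannon entropy of the voxel-intensity
# histogram within a volume of interest. Higher entropy corresponds to more
# heterogeneous signal. Synthetic data only.
import numpy as np

def first_order_entropy(values, bins=64):
    hist, _ = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(7)
homogeneous_voi = rng.normal(100, 5, size=5000)                  # "type 1-like" VOI
heterogeneous_voi = np.concatenate([rng.normal(80, 15, 2500),    # "type 2-like" VOI
                                    rng.normal(140, 20, 2500)])

print("entropy (homogeneous)  :", round(first_order_entropy(homogeneous_voi), 2))
print("entropy (heterogeneous):", round(first_order_entropy(heterogeneous_voi), 2))
```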
Making predictions of mangrove deforestation: a comparison of two methods in Kenya.
Rideout, Alasdair J R; Joshi, Neha P; Viergever, Karin M; Huxham, Mark; Briers, Robert A
2013-11-01
Deforestation of mangroves is of global concern given their importance for carbon storage, biogeochemical cycling and the provision of other ecosystem services, but the links between rates of loss and potential drivers or risk factors are rarely evaluated. Here, we identified key drivers of mangrove loss in Kenya and compared two different approaches to predicting risk. Risk factors tested included various possible predictors of anthropogenic deforestation, related to population, suitability for land use change and accessibility. Two approaches were taken to modelling risk; a quantitative statistical approach and a qualitative categorical ranking approach. A quantitative model linking rates of loss to risk factors was constructed based on generalized least squares regression and using mangrove loss data from 1992 to 2000. Population density, soil type and proximity to roads were the most important predictors. In order to validate this model it was used to generate a map of losses of Kenyan mangroves predicted to have occurred between 2000 and 2010. The qualitative categorical model was constructed using data from the same selection of variables, with the coincidence of different risk factors in particular mangrove areas used in an additive manner to create a relative risk index which was then mapped. Quantitative predictions of loss were significantly correlated with the actual loss of mangroves between 2000 and 2010 and the categorical risk index values were also highly correlated with the quantitative predictions. Hence, in this case the relatively simple categorical modelling approach was of similar predictive value to the more complex quantitative model of mangrove deforestation. The advantages and disadvantages of each approach are discussed, and the implications for mangroves are outlined. © 2013 Blackwell Publishing Ltd.
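The sketch below mimics the comparison described, on simulated data: a quantitative linear model of loss rate on three invented risk factors versus an additive categorical index built from tercile scores, with both compared to "observed" loss by rank correlation. The real study used generalized least squares on actual predictors.

```python
# Hedged sketch of the two approaches compared in the study: (1) a quantitative
# regression of loss rate on risk factors and (2) an additive categorical risk
# index built by scoring each factor. Data and coefficients are simulated.
import numpy as np

rng = np.random.default_rng(11)
n = 200
pop_density = rng.lognormal(3, 1, n)          # invented risk factors
road_dist = rng.uniform(0, 20, n)             # km to nearest road
soil_suit = rng.uniform(0, 1, n)              # suitability for conversion
loss = 0.02 * np.log(pop_density) - 0.01 * road_dist + 0.3 * soil_suit \
       + rng.normal(0, 0.05, n)

# (1) Quantitative model: ordinary least squares on the three predictors.
X = np.column_stack([np.ones(n), np.log(pop_density), road_dist, soil_suit])
beta, *_ = np.linalg.lstsq(X, loss, rcond=None)
pred_quant = X @ beta

# (2) Categorical index: score each factor 0/1/2 by tercile and add the scores.
def tercile_score(x, reverse=False):
    s = np.digitize(x, np.quantile(x, [1 / 3, 2 / 3]))
    return (2 - s) if reverse else s

risk_index = tercile_score(np.log(pop_density)) + tercile_score(road_dist, reverse=True) \
             + tercile_score(soil_suit)

def spearman(a, b):
    ra, rb = np.argsort(np.argsort(a)), np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

print("quantitative prediction vs loss (Spearman):", round(spearman(pred_quant, loss), 2))
print("categorical index vs loss (Spearman)      :", round(spearman(risk_index, loss), 2))
```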
Deployment of e-health services - a business model engineering strategy.
Kijl, Björn; Nieuwenhuis, Lambert J M; Huis in 't Veld, Rianne M H A; Hermens, Hermie J; Vollenbroek-Hutten, Miriam M R
2010-01-01
We designed a business model for deploying a myofeedback-based teletreatment service. An iterative and combined qualitative and quantitative action design approach was used for developing the business model and the related value network. Insights from surveys, desk research, expert interviews, workshops and quantitative modelling were combined to produce the first business model and then to refine it in three design cycles. The business model engineering strategy provided important insights which led to an improved, more viable and feasible business model and related value network design. Based on this experience, we conclude that the process of early stage business model engineering reduces risk and produces substantial savings in costs and resources related to service deployment.
Hallow, K M; Gebremichael, Y
2017-06-01
Renal function plays a central role in cardiovascular, kidney, and multiple other diseases, and many existing and novel therapies act through renal mechanisms. Even with decades of accumulated knowledge of renal physiology, pathophysiology, and pharmacology, the dynamics of renal function remain difficult to understand and predict, often resulting in unexpected or counterintuitive therapy responses. Quantitative systems pharmacology modeling of renal function integrates this accumulated knowledge into a quantitative framework, allowing evaluation of competing hypotheses, identification of knowledge gaps, and generation of new experimentally testable hypotheses. Here we present a model of renal physiology and control mechanisms involved in maintaining sodium and water homeostasis. This model represents the core renal physiological processes involved in many research questions in drug development. The model runs in R and the code is made available. In a companion article, we present a case study using the model to explore mechanisms and pharmacology of salt-sensitive hypertension. © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
Clark, Michelle M; Blangero, John; Dyer, Thomas D; Sobel, Eric M; Sinsheimer, Janet S
2016-01-01
Maternal-offspring gene interactions, aka maternal-fetal genotype (MFG) incompatibilities, are neglected in complex diseases and quantitative trait studies. They are implicated in birth to adult onset diseases but there are limited ways to investigate their influence on quantitative traits. We present the quantitative-MFG (QMFG) test, a linear mixed model where maternal and offspring genotypes are fixed effects and residual correlations between family members are random effects. The QMFG handles families of any size, common or general scenarios of MFG incompatibility, and additional covariates. We develop likelihood ratio tests (LRTs) and rapid score tests and show they provide correct inference. In addition, the LRT's alternative model provides unbiased parameter estimates. We show that testing the association of SNPs by fitting a standard model, which only considers the offspring genotypes, has very low power or can lead to incorrect conclusions. We also show that offspring genetic effects are missed if the MFG modeling assumptions are too restrictive. With genome-wide association study data from the San Antonio Family Heart Study, we demonstrate that the QMFG score test is an effective and rapid screening tool. The QMFG test therefore has important potential to identify pathways of complex diseases for which the genetic etiology remains to be discovered. © 2015 John Wiley & Sons Ltd/University College London.
Chen, Ming; Wu, Si; Lu, Haidong D.; Roe, Anna W.
2013-01-01
Interpreting population responses in the primary visual cortex (V1) remains a challenge especially with the advent of techniques measuring activations of large cortical areas simultaneously with high precision. For successful interpretation, a quantitatively precise model prediction is of great importance. In this study, we investigate how accurate a spatiotemporal filter (STF) model predicts average response profiles to coherently drifting random dot motion obtained by optical imaging of intrinsic signals in V1 of anesthetized macaques. We establish that orientation difference maps, obtained by subtracting orthogonal axis-of-motion, invert with increasing drift speeds, consistent with the motion streak effect. Consistent with perception, the speed at which the map inverts (the critical speed) depends on cortical eccentricity and systematically increases from foveal to parafoveal. We report that critical speeds and response maps to drifting motion are excellently reproduced by the STF model. Our study thus suggests that the STF model is quantitatively accurate enough to be used as a first model of choice for interpreting responses obtained with intrinsic imaging methods in V1. We show further that this good quantitative correspondence opens the possibility to infer otherwise not easily accessible population receptive field properties from responses to complex stimuli, such as drifting random dot motions. PMID:23197457
Li, Wen-xia; Li, Feng; Zhao, Guo-liang; Tang, Shi-jun; Liu, Xiao-ying
2014-12-01
A series of 376 cotton-polyester (PET) blend fabrics was studied with a portable near-infrared (NIR) spectrometer. A NIR semi-quantitative-qualitative calibration model was established by the Partial Least Squares (PLS) method combined with a qualitative identification coefficient. The PLS method served as the quantitative correction method, and the qualitative identification coefficient was set according to the cotton and polyester content of the blend fabrics. The model identifies cotton-polyester blend fabrics qualitatively and gives their relative contents quantitatively, so it can be used for semi-quantitative identification analysis. In building the model, the noise and baseline drift of the spectra were eliminated by the Savitzky-Golay (S-G) derivative, and the influence of waveband selection and of different pre-processing methods on the qualitative calibration model was also studied. The major absorption bands of 100% cotton samples lay in the 1400~1600 nm region and those of 100% polyester around 1600~1800 nm, with the absorption intensity increasing with increasing cotton or polyester content. The cotton-polyester major absorption region was therefore selected as the base waveband, and the optimal waveband (1100~2500 nm) was found by expanding the waveband in both directions (correlation coefficient 0.6, 934 wavelength points). The validation samples were predicted with the calibration model; the model evaluation parameters were optimal in the 1100~2500 nm region, using the combination of the S-G derivative, multiplicative scatter correction (MSC) and mean centering as the pre-processing method. The RC (correlation coefficient of calibration) was 0.978, the RP (correlation coefficient of prediction) was 0.940, the SEC (standard error of calibration) was 1.264, the SEP (standard error of prediction) was 1.590, and the recognition accuracy for the samples reached 93.4%. These results show that cotton-polyester blend fabrics can be predicted with the semi-quantitative-qualitative calibration model.
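A schematic version of such a PLS calibration is sketched below on synthetic blend spectra (band positions, noise and the content threshold playing the role of the qualitative identification coefficient are all assumptions): PLS predicts cotton content, and a threshold on the prediction assigns the qualitative class.

```python
# Illustrative PLS calibration for blend fabrics: synthetic "NIR spectra" are
# mixtures of two component spectra, PLS predicts cotton content, and a simple
# threshold on the prediction plays the role of the qualitative identification
# coefficient. Band positions and noise levels are assumptions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)
wl = np.linspace(1100, 2500, 500)
cotton_band = np.exp(-((wl - 1500) / 60.0) ** 2)          # cotton-dominated band
pet_band = np.exp(-((wl - 1700) / 60.0) ** 2)             # polyester-dominated band

def spectra(frac_cotton):
    base = np.outer(frac_cotton, cotton_band) + np.outer(1 - frac_cotton, pet_band)
    return base + rng.normal(0, 0.01, base.shape)

frac_train = rng.uniform(0, 1, 200)
frac_test = rng.uniform(0, 1, 50)
pls = PLSRegression(n_components=4)
pls.fit(spectra(frac_train), frac_train * 100.0)          # cotton content in percent

pred = pls.predict(spectra(frac_test)).ravel()
rmsep = np.sqrt(np.mean((pred - frac_test * 100.0) ** 2))
label = np.where(pred >= 97, "cotton", np.where(pred <= 3, "polyester", "blend"))
print("RMSEP (% cotton):", round(rmsep, 2))
print("example predictions:", np.round(pred[:5], 1), label[:5])
```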
Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models
Anderson, Ryan; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott M.; Morris, Richard V.; Ehlmann, Bethany L.; Dyar, M. Darby
2017-01-01
Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibrations methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
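The sub-model idea can be illustrated with a one-feature toy problem, sketched below: separate regressions are trained on low- and high-concentration subsets and their predictions are blended using a first-pass full-range estimate. The actual ChemCam implementation uses PLS on LIBS spectra; this is only a conceptual analogue with invented data.

```python
# Conceptual sketch of sub-model blending: train separate regressions on restricted
# composition ranges, then blend their predictions via weights derived from a
# first-pass full-range estimate. Toy one-feature data, not LIBS spectra.
import numpy as np

rng = np.random.default_rng(2)
comp = rng.uniform(0, 100, 600)                              # "true" element concentration
feature = np.sqrt(comp) + 0.02 * comp ** 1.3 + rng.normal(0, 0.3, comp.size)  # nonlinear response

full = np.polyfit(feature, comp, 1)                          # single "full-range" model
low = np.polyfit(feature[comp < 40], comp[comp < 40], 1)     # low-concentration sub-model
high = np.polyfit(feature[comp >= 40], comp[comp >= 40], 1)  # high-concentration sub-model

def blended_predict(f):
    first = np.polyval(full, f)                              # first-pass estimate
    w_high = np.clip((first - 30.0) / 20.0, 0.0, 1.0)        # smooth blend near the boundary
    return (1 - w_high) * np.polyval(low, f) + w_high * np.polyval(high, f)

test = rng.uniform(0, 100, 200)
f_test = np.sqrt(test) + 0.02 * test ** 1.3 + rng.normal(0, 0.3, test.size)
rmse_full = np.sqrt(np.mean((np.polyval(full, f_test) - test) ** 2))
rmse_blend = np.sqrt(np.mean((blended_predict(f_test) - test) ** 2))
print(f"RMSEP full-range model: {rmse_full:.2f}   blended sub-models: {rmse_blend:.2f}")
```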
Quantitative modeling of soil genesis processes
NASA Technical Reports Server (NTRS)
Levine, E. R.; Knox, R. G.; Kerber, A. G.
1992-01-01
For fine spatial scale simulation, a model is being developed to predict changes in properties over short-, meso-, and long-term time scales within horizons of a given soil profile. Processes that control these changes can be grouped into five major process clusters: (1) abiotic chemical reactions; (2) activities of organisms; (3) energy balance and water phase transitions; (4) hydrologic flows; and (5) particle redistribution. Landscape modeling of soil development is possible using digitized soil maps associated with quantitative soil attribute data in a geographic information system (GIS) framework to which simulation models are applied.
ERIC Educational Resources Information Center
Flanagan, K. M.; Einarson, J.
2017-01-01
In a world filled with big data, mathematical models, and statistics, the development of strong quantitative skills is becoming increasingly critical for modern biologists. Teachers in this field must understand how students acquire quantitative skills and explore barriers experienced by students when developing these skills. In this study, we…
Code of Federal Regulations, 2014 CFR
2014-07-01
... PM2.5 violations”) must be based on quantitative analysis using the applicable air quality models... either: (i) Quantitative methods that represent reasonable and common professional practice; or (ii) A...) The hot-spot demonstration required by § 93.116 must be based on quantitative analysis methods for the...
A statistical framework for protein quantitation in bottom-up MS-based proteomics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karpievitch, Yuliya; Stanley, Jeffrey R.; Taverner, Thomas
2009-08-15
Motivation: Quantitative mass spectrometry-based proteomics requires protein-level estimates and confidence measures. Challenges include the presence of low-quality or incorrectly identified peptides and widespread, informative, missing data. Furthermore, models are required for rolling peptide-level information up to the protein level. Results: We present a statistical model for protein abundance in terms of peptide peak intensities, applicable to both label-based and label-free quantitation experiments. The model allows for both random and censoring missingness mechanisms and provides naturally for protein-level estimates and confidence measures. The model is also used to derive automated filtering and imputation routines. Three LC-MS datasets are used to illustrate the methods. Availability: The software has been made available in the open-source proteomics platform DAnTE (Polpitiya et al. (2008)) (http://omics.pnl.gov/software/). Contact: adabney@stat.tamu.edu
Models of volcanic eruption hazards
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wohletz, K.H.
1992-01-01
Volcanic eruptions pose an ever present but poorly constrained hazard to life and property for geothermal installations in volcanic areas. Because eruptions occur sporadically and may limit field access, quantitative and systematic field studies of eruptions are difficult to complete. Circumventing this difficulty, laboratory models and numerical simulations are pivotal in building our understanding of eruptions. For example, the results of fuel-coolant interaction experiments show that magma-water interaction controls many eruption styles. Applying these results, increasing numbers of field studies now document and interpret the role of external water eruptions. Similarly, numerical simulations solve the fundamental physics of high-speed fluid flow and give quantitative predictions that elucidate the complexities of pyroclastic flows and surges. A primary goal of these models is to guide geologists in searching for critical field relationships and making their interpretations. Coupled with field work, modeling is beginning to allow more quantitative and predictive volcanic hazard assessments.
Quantitative validation of an air-coupled ultrasonic probe model by Interferometric laser tomography
NASA Astrophysics Data System (ADS)
Revel, G. M.; Pandarese, G.; Cavuto, A.
2012-06-01
The present paper describes the quantitative validation of a finite element (FE) model of the ultrasound beam generated by an air-coupled non-contact ultrasound transducer. The model boundary conditions are given by vibration velocities measured by laser vibrometry on the probe membrane. The proposed validation method is based on the comparison between the simulated 3D pressure field and the pressure data measured with the interferometric laser tomography technique. The model details and the experimental techniques are described in the paper. The analysis of the results shows the effectiveness of the proposed approach and the possibility to quantitatively assess and predict the generated acoustic pressure field, with maximum discrepancies on the order of 20% due to uncertainty effects. This step is important for determining the real applicability of air-coupled probes in complex problems and for simulating the whole inspection procedure, also when the component is still being designed, so as to verify its inspectability virtually.
Models of volcanic eruption hazards
NASA Astrophysics Data System (ADS)
Wohletz, K. H.
Volcanic eruptions pose an ever present but poorly constrained hazard to life and property for geothermal installations in volcanic areas. Because eruptions occur sporadically and may limit field access, quantitative and systematic field studies of eruptions are difficult to complete. Circumventing this difficulty, laboratory models and numerical simulations are pivotal in building our understanding of eruptions. For example, the results of fuel-coolant interaction experiments show that magma-water interaction controls many eruption styles. Applying these results, increasing numbers of field studies now document and interpret the role of external water eruptions. Similarly, numerical simulations solve the fundamental physics of high-speed fluid flow and give quantitative predictions that elucidate the complexities of pyroclastic flows and surges. A primary goal of these models is to guide geologists in searching for critical field relationships and making their interpretations. Coupled with field work, modeling is beginning to allow more quantitative and predictive volcanic hazard assessments.
A Method for Label-Free, Differential Top-Down Proteomics.
Ntai, Ioanna; Toby, Timothy K; LeDuc, Richard D; Kelleher, Neil L
2016-01-01
Biomarker discovery in translational research has relied heavily on labeled and label-free quantitative bottom-up proteomics. Here, we describe a new approach to biomarker studies that utilizes high-throughput top-down proteomics and is the first to offer whole protein characterization and relative quantitation within the same experiment. Using yeast as a model, we report procedures for a label-free approach to quantify the relative abundance of intact proteins ranging from 0 to 30 kDa in two different states. In this chapter, we describe the integrated methodology for the large-scale profiling and quantitation of the intact proteome by liquid chromatography-mass spectrometry (LC-MS) without the need for metabolic or chemical labeling. This recent advance for quantitative top-down proteomics is best implemented with a robust and highly controlled sample preparation workflow before data acquisition on a high-resolution mass spectrometer, and the application of a hierarchical linear statistical model to account for the multiple levels of variance contained in quantitative proteomic comparisons of samples for basic and clinical research.
A quantification model for the structure of clay materials.
Tang, Liansheng; Sang, Haitao; Chen, Haokun; Sun, Yinlei; Zhang, Longjian
2016-07-04
In this paper, the quantification of clay structure is explicitly explained, and the approach to and goals of quantification are discussed. The authors consider that the purpose of quantifying clay structure is to determine parameters that can be used to quantitatively characterize the impact of clay structure on macro-mechanical behaviour. According to system theory and the law of energy conservation, a quantification model for the structural characteristics of clay materials is established and three quantitative parameters (i.e., deformation structure potential, strength structure potential and comprehensive structure potential) are proposed, and the corresponding tests are conducted. The experimental results show that these quantitative parameters can accurately reflect the influence of clay structure on the deformation behaviour, the strength behaviour, and the relative magnitude of the structural influence on the above two quantitative parameters, respectively. These quantitative parameters have explicit mechanical meanings and can be used to characterize the structural influence of clay on its mechanical behaviour.
An approach to the development of quantitative models to assess the effects of exposure to environmentally relevant levels of endocrine disruptors on homeostasis in adults.
Ben-Jonathan N, Cooper RL, Foster P, Hughes CL, Hoyer PB, Klotz D, Kohn M, Lamb DJ, Stancel GM.
<...
ERIC Educational Resources Information Center
Wolusky, G. Anthony
2016-01-01
This quantitative study used a web-based questionnaire to assess the attitudes and perceptions of online and hybrid faculty towards student-centered asynchronous virtual teamwork (AVT) using the technology acceptance model (TAM) of Davis (1989). AVT is online student participation in a team approach to problem-solving culminating in a written…
ERIC Educational Resources Information Center
Ulu, Mustafa
2017-01-01
This study aims to identify errors made by primary school students when modelling word problems and to eliminate those errors through scaffolding. A 10-question problem-solving achievement test was used in the research. The qualitative and quantitative designs were utilized together. The study group of the quantitative design comprises 248…
Barnett, Carolina; Merkies, Ingemar S J; Katzberg, Hans; Bril, Vera
2015-09-02
The Quantitative Myasthenia Gravis Score and the Myasthenia Gravis Composite are two commonly used outcome measures in myasthenia gravis. So far, their measurement properties have not been compared, so we aimed to study their psychometric properties using the Rasch model. 251 patients with stable myasthenia gravis were assessed with both scales, and 211 patients returned for a second assessment. We studied fit to the Rasch model at the first visit, and compared item fit, thresholds, differential item functioning, local dependence, person separation index, and tests for unidimensionality. We also assessed test-retest reliability and estimated the minimal detectable change. Neither scale fit the Rasch model (χ² p < 0.05). The Myasthenia Gravis Composite had lower discrimination properties than the Quantitative Myasthenia Gravis Score (Person Separation Index: 0.14 and 0.7). There was local dependence in both scales, as well as differential item functioning for ocular and generalized disease. Disordered thresholds were found in 6 (60%) items of the Myasthenia Gravis Composite and in 4 (31%) of the Quantitative Myasthenia Gravis Score. Both tools had adequate test-retest reliability (ICCs > 0.8). The minimal detectable change was 4.9 points for the Myasthenia Gravis Composite and 4.3 points for the Quantitative Myasthenia Gravis Score. Neither scale fulfilled Rasch model expectations. The Quantitative Myasthenia Gravis Score has higher discrimination than the Myasthenia Gravis Composite. Both tools have items with disordered thresholds, differential item functioning and local dependency. There was evidence of multidimensionality in the QMGS. The minimal detectable change values are higher than those reported in previous studies of the minimal significant change. These findings might inform future modifications of these tools.
Chirumbolo, Antonio; Urbini, Flavio; Callea, Antonino; Lo Presti, Alessandro; Talamo, Alessandra
2017-01-01
One of the more visible effects of societal change is an increased feeling of uncertainty in the workforce. In fact, job insecurity represents a crucial occupational risk factor and a major job stressor that has negative consequences for both organizational well-being and individual health. Many studies have focused on the consequences of the fear and perception of losing the job as a whole (called quantitative job insecurity), while more recently research has begun to examine more extensively the worries about and perceptions of losing valued job features (called qualitative job insecurity). The vast majority of studies, however, have investigated the effects of quantitative and qualitative job insecurity separately. In this paper, we propose the Job Insecurity Integrated Model, aimed at examining the effects of quantitative job insecurity and qualitative job insecurity on their short-term and long-term outcomes. This model was empirically tested in two independent studies, hypothesizing that qualitative job insecurity mediated the effects of quantitative job insecurity on different outcomes, such as work engagement and organizational identification (Study 1), and job satisfaction, commitment, psychological stress and turnover intention (Study 2). Study 1 was conducted on 329 employees in private firms, while Study 2 was conducted on 278 employees in both the public sector and private firms. Results robustly showed that qualitative job insecurity totally mediated the effects of quantitative job insecurity on all the considered outcomes. By showing that the effects of quantitative job insecurity on its outcomes pass through qualitative job insecurity, the Job Insecurity Integrated Model contributes to clarifying previous findings in job insecurity research and puts forward a framework that could profitably generate new investigations with important theoretical and practical implications. PMID:29250013
SYN-JEM: A Quantitative Job-Exposure Matrix for Five Lung Carcinogens.
Peters, Susan; Vermeulen, Roel; Portengen, Lützen; Olsson, Ann; Kendzia, Benjamin; Vincent, Raymond; Savary, Barbara; Lavoué, Jérôme; Cavallo, Domenico; Cattaneo, Andrea; Mirabelli, Dario; Plato, Nils; Fevotte, Joelle; Pesch, Beate; Brüning, Thomas; Straif, Kurt; Kromhout, Hans
2016-08-01
The use of measurement data in occupational exposure assessment allows more quantitative analyses of possible exposure-response relations. We describe a quantitative exposure assessment approach for five lung carcinogens (i.e. asbestos, chromium-VI, nickel, polycyclic aromatic hydrocarbons (by its proxy benzo(a)pyrene (BaP)) and respirable crystalline silica). A quantitative job-exposure matrix (JEM) was developed based on statistical modeling of large quantities of personal measurements. Empirical linear models were developed using personal occupational exposure measurements (n = 102306) from Europe and Canada, as well as auxiliary information like job (industry), year of sampling, region, an a priori exposure rating of each job (none, low, and high exposed), sampling and analytical methods, and sampling duration. The model outcomes were used to create a JEM with a quantitative estimate of the level of exposure by job, year, and region. Decreasing time trends were observed for all agents between the 1970s and 2009, ranging from -1.2% per year for personal BaP and nickel exposures to -10.7% for asbestos (in the time period before an asbestos ban was implemented). Regional differences in exposure concentrations (adjusted for measured jobs, years of measurement, and sampling method and duration) varied by agent, ranging from a factor 3.3 for chromium-VI up to a factor 10.5 for asbestos. We estimated time-, job-, and region-specific exposure levels for four (asbestos, chromium-VI, nickel, and RCS) out of five considered lung carcinogens. Through statistical modeling of large amounts of personal occupational exposure measurement data we were able to derive a quantitative JEM to be used in community-based studies. © The Author 2016. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
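The sketch below is a toy analogue of the empirical modelling behind such a JEM: log-transformed "measurements" are regressed on job, region and centred calendar year, and the fitted coefficients yield an annual time trend and job-, region- and year-specific exposure estimates. Data and effect sizes are invented.

```python
# Toy analogue of a quantitative job-exposure matrix: regress log exposure on job,
# region and calendar year, then read JEM cells and the annual time trend off the
# fitted coefficients. Simulated data and effect sizes only.
import numpy as np

rng = np.random.default_rng(9)
n = 5000
jobs = rng.integers(0, 3, n)                 # 3 job codes
regions = rng.integers(0, 2, n)              # 2 regions
years = rng.integers(1975, 2010, n)

true_trend = -0.05                           # about -5% per year on the log scale
log_exposure = (0.5 * (jobs == 1) + 1.2 * (jobs == 2) + 0.7 * regions
                + true_trend * (years - 1990) + rng.normal(0, 0.8, n))

X = np.column_stack([np.ones(n), (jobs == 1), (jobs == 2), regions, years - 1990]).astype(float)
beta, *_ = np.linalg.lstsq(X, log_exposure, rcond=None)

trend_pct = 100 * (np.exp(beta[4]) - 1)
print(f"estimated time trend: {trend_pct:.1f}% per year")

def jem_estimate(job, region, year):
    x = np.array([1.0, job == 1, job == 2, region, year - 1990], dtype=float)
    return float(np.exp(x @ beta))           # geometric-mean exposure level

print("JEM cell (job 2, region 1, 1985):", round(jem_estimate(2, 1, 1985), 2))
```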
Highly Reproducible Label Free Quantitative Proteomic Analysis of RNA Polymerase Complexes*
Mosley, Amber L.; Sardiu, Mihaela E.; Pattenden, Samantha G.; Workman, Jerry L.; Florens, Laurence; Washburn, Michael P.
2011-01-01
The use of quantitative proteomics methods to study protein complexes has the potential to provide in-depth information on the abundance of different protein components as well as their modification state in various cellular conditions. To interrogate protein complex quantitation using shotgun proteomic methods, we have focused on the analysis of protein complexes using label-free multidimensional protein identification technology and studied the reproducibility of biological replicates. For these studies, we focused on three highly related and essential multi-protein enzymes, RNA polymerase I, II, and III from Saccharomyces cerevisiae. We found that label-free quantitation using spectral counting is highly reproducible at the protein and peptide level when analyzing RNA polymerase I, II, and III. In addition, we show that peptide sampling does not follow a random sampling model, and we show the need for advanced computational models to predict peptide detection probabilities. In order to address these issues, we used the APEX protocol to model the expected peptide detectability based on whole cell lysate acquired using the same multidimensional protein identification technology analysis used for the protein complexes. Neither method was able to predict the peptide sampling levels that we observed using replicate multidimensional protein identification technology analyses. In addition to the analysis of the RNA polymerase complexes, our analysis provides quantitative information about several RNAP associated proteins including the RNAPII elongation factor complexes DSIF and TFIIF. Our data shows that DSIF and TFIIF are the most highly enriched RNAP accessory factors in Rpb3-TAP purifications and demonstrate our ability to measure low level associated protein abundance across biological replicates. In addition, our quantitative data supports a model in which DSIF and TFIIF interact with RNAPII in a dynamic fashion in agreement with previously published reports. PMID:21048197
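As a small illustration of label-free quantitation by spectral counting, the sketch below normalizes made-up spectral counts by protein length and run total (an NSAF-style measure) and reports the between-replicate coefficient of variation; the subunit names, counts and lengths are placeholders, not data from this study.

```python
# Small sketch of label-free quantitation by spectral counting: counts are
# normalized by protein length and by the run total (NSAF) so abundances can be
# compared across biological replicates. Counts and lengths are made up.
import numpy as np

proteins = ["RPB1", "RPB2", "RPB3", "SPT5 (DSIF)", "TFG1 (TFIIF)"]
lengths = np.array([1733, 1224, 318, 1063, 735])          # residues (illustrative)
counts = np.array([[420, 260, 95, 60, 55],                # replicate 1 spectral counts
                   [455, 248, 88, 66, 49],                # replicate 2
                   [431, 271, 101, 58, 61]])              # replicate 3

saf = counts / lengths                                     # spectral abundance factor
nsaf = saf / saf.sum(axis=1, keepdims=True)                # normalized per run

mean_nsaf, cv = nsaf.mean(axis=0), nsaf.std(axis=0) / nsaf.mean(axis=0)
for name, m, c in zip(proteins, mean_nsaf, cv):
    print(f"{name:14s} mean NSAF = {m:.3f}  CV across replicates = {100 * c:.1f}%")
```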
A Statistical Framework for Protein Quantitation in Bottom-Up MS-Based Proteomics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karpievitch, Yuliya; Stanley, Jeffrey R.; Taverner, Thomas
2009-08-15
Motivation: Quantitative mass spectrometry-based proteomics requires protein-level estimates and associated confidence measures. Challenges include the presence of low quality or incorrectly identified peptides and informative missingness. Furthermore, models are required for rolling peptide-level information up to the protein level. Results: We present a statistical model that carefully accounts for informative missingness in peak intensities and allows unbiased, model-based, protein-level estimation and inference. The model is applicable to both label-based and label-free quantitation experiments. We also provide automated, model-based, algorithms for filtering of proteins and peptides as well as imputation of missing values. Two LC/MS datasets are used to illustrate the methods. In simulation studies, our methods are shown to achieve substantially more discoveries than standard alternatives. Availability: The software has been made available in the opensource proteomics platform DAnTE (http://omics.pnl.gov/software/). Contact: adabney@stat.tamu.edu Supplementary information: Supplementary data are available at Bioinformatics online.
Asynchronous adaptive time step in quantitative cellular automata modeling
Zhu, Hao; Pang, Peter YH; Sun, Yan; Dhar, Pawan
2004-01-01
Background The behaviors of cells in metazoans are context dependent; thus, large-scale multi-cellular modeling is often necessary, and cellular automata are natural candidates. Two related issues are involved in cellular automata based multi-cellular modeling: how to introduce differential equation based quantitative computing to precisely describe cellular activity, and, building on that, how to solve the heavy time consumption issue in simulation. Results Based on a modified, language-based cellular automata system that we extended to allow ordinary differential equations in models, we introduce a method implementing an asynchronous adaptive time step in simulation that considerably improves efficiency without a significant sacrifice of accuracy. An average speedup rate of 4–5 is achieved in the given example. Conclusions Strategies for reducing time consumption in simulation are indispensable for large-scale, quantitative multi-cellular models, because even a small 100 × 100 × 100 tissue slab contains one million cells. A distributed, adaptive time step is a practical solution in a cellular automata environment. PMID:15222901
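As a rough illustration of the asynchronous adaptive time-step idea, the sketch below (not the authors' implementation) lets each cell choose its own explicit-Euler step from its local rate of change, so slowly changing cells are updated with larger steps.

```python
# Per-cell adaptive time stepping (illustrative, not the authors' implementation):
# each cell advances its own ODE with an explicit-Euler step sized from its local
# rate of change, so slowly changing cells take larger steps.
import numpy as np

def step_cell(x, rate, dt_max=1.0, tol=0.01):
    """One adaptive explicit-Euler step for dx/dt = rate(x); returns (new_x, dt)."""
    dxdt = rate(x)
    dt = dt_max if dxdt == 0 else min(dt_max, tol / abs(dxdt))
    return x + dt * dxdt, dt

cells = np.random.default_rng(0).random(100)   # hypothetical per-cell state
decay = lambda x: -0.5 * x                     # toy intracellular dynamics
states, steps = zip(*(step_cell(x, decay) for x in cells))
print("mean adaptive step size:", np.mean(steps))
```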
[A quantitative risk assessment model of salmonella on carcass in poultry slaughterhouse].
Zhang, Yu; Chen, Yuzhen; Hu, Chunguang; Zhang, Huaning; Bi, Zhenwang; Bi, Zhenqiang
2015-05-01
To construct a quantitative risk assessment model of salmonella on carcasses in a poultry slaughterhouse and to identify effective interventions to reduce salmonella contamination. We constructed a modular process risk model (MPRM) from evisceration to chilling in an Excel spreadsheet, using process parameter data for poultry and the Salmonella concentration surveillance data of Jinan in 2012. The MPRM was simulated with the @Risk software. The concentration of salmonella on carcasses after chilling, as calculated by the model, was 1.96 MPN/g. The sensitivity analysis indicated that the correlation coefficients for the concentration of salmonella after defeathering and in the chilling pool were 0.84 and 0.34, respectively, identifying these as the primary factors determining the concentration of salmonella on carcasses after chilling. The study provides a quantitative assessment model structure for salmonella on carcasses in poultry slaughterhouses. Risk managers could control salmonella contamination on carcasses after chilling by reducing the concentration of salmonella after defeathering and in the chilling pool.
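The chilling step of such a modular process risk model can be sketched as a Monte Carlo propagation of concentration distributions, in the spirit of the @Risk simulation described above; all distributions and parameters below are hypothetical placeholders.

```python
# Monte Carlo propagation of a Salmonella concentration distribution from
# defeathering through chilling; all parameters are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
log_conc_defeather = rng.normal(loc=1.0, scale=0.5, size=n)   # log10 MPN/g, assumed
log_reduction_chill = rng.uniform(0.2, 0.8, size=n)           # assumed chilling effect
log_conc_final = log_conc_defeather - log_reduction_chill
print("median concentration after chilling: %.2f MPN/g" % 10 ** np.median(log_conc_final))
```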
NASA Astrophysics Data System (ADS)
Zhang, Chao; Qin, Ting Xin; Huang, Shuai; Wu, Jian Song; Meng, Xin Yan
2018-06-01
Some factors can affect the consequences of an oil pipeline accident, and their effects should be analyzed to improve emergency preparation and emergency response. Although there are some qualitative models of risk factors' effects, quantitative analysis models still need to be developed. In this study, we introduce a Bayesian network (BN) model for analyzing risk factors' effects in an oil pipeline accident case that happened in China. The incident evolution diagram is built to identify the risk factors, and the BN model is built based on the deployment rule for factor nodes in the BN and on expert knowledge combined through Dempster-Shafer evidence theory. The probabilities of incident consequences and of risk factors' effects can then be calculated. The most likely consequences given by this model are consistent with the case. Meanwhile, the quantitative estimates of risk factors' effects may provide a theoretical basis for taking optimal risk treatment measures in oil pipeline management, which can be used in emergency preparation and emergency response.
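A minimal two-node example with hypothetical probabilities shows the kind of marginalization such a BN performs when estimating consequence probabilities from risk factors.

```python
# Two-node Bayesian-network style calculation with hypothetical probabilities:
# marginalize the consequence over one risk factor (e.g. delayed emergency response).
p_delay = 0.3                               # P(risk factor present), assumed
p_severe_given = {True: 0.6, False: 0.1}    # P(severe consequence | delay), assumed CPT

p_severe = sum(p_severe_given[d] * (p_delay if d else 1 - p_delay) for d in (True, False))
print(f"P(severe consequence) = {p_severe:.2f}")
```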
Quantitative petri net model of gene regulated metabolic networks in the cell.
Chen, Ming; Hofestädt, Ralf
2011-01-01
A method to exploit hybrid Petri nets (HPN) for quantitatively modeling and simulating gene regulated metabolic networks is demonstrated. A global kinetic modeling strategy and Petri net modeling algorithm are applied to perform the bioprocess functioning and model analysis. With the model, the interrelations between pathway analysis and metabolic control mechanism are outlined. Diagrammatical results of the dynamics of metabolites are simulated and observed by implementing a HPN tool, Visual Object Net ++. An explanation of the observed behavior of the urea cycle is proposed to indicate possibilities for metabolic engineering and medical care. Finally, the perspective of Petri nets on modeling and simulation of metabolic networks is discussed.
Comprehensive, Quantitative Risk Assessment of CO{sub 2} Geologic Sequestration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lepinski, James
2013-09-30
A Quantitative Failure Modes and Effects Analysis (QFMEA) was developed to conduct comprehensive, quantitative risk assessments on CO{sub 2} capture, transportation, and sequestration or use in deep saline aquifers, enhanced oil recovery operations, or enhanced coal bed methane operations. The model identifies and characterizes potential risks; identifies the likely failure modes, causes, effects and methods of detection; lists possible risk prevention and risk mitigation steps; estimates potential damage recovery costs, mitigation costs and costs savings resulting from mitigation; and ranks (prioritizes) risks according to the probability of failure, the severity of failure, the difficulty of early failure detection and the potential for fatalities. The QFMEA model generates the necessary information needed for effective project risk management. Diverse project information can be integrated into a concise, common format that allows comprehensive, quantitative analysis, by a cross-functional team of experts, to determine: What can possibly go wrong? How much will damage recovery cost? How can it be prevented or mitigated? What is the cost savings or benefit of prevention or mitigation? Which risks should be given highest priority for resolution? The QFMEA model can be tailored to specific projects and is applicable to new projects as well as mature projects. The model can be revised and updated as new information comes available. It accepts input from multiple sources, such as literature searches, site characterization, field data, computer simulations, analogues, process influence diagrams, probability density functions, financial analysis models, cost factors, and heuristic best practices manuals, and converts the information into a standardized format in an Excel spreadsheet. Process influence diagrams, geologic models, financial models, cost factors and an insurance schedule were developed to support the QFMEA model. Comprehensive, quantitative risk assessments were conducted on three (3) sites using the QFMEA model: (1) SACROC Northern Platform CO{sub 2}-EOR Site in the Permian Basin, Scurry County, TX, (2) Pump Canyon CO{sub 2}-ECBM Site in the San Juan Basin, San Juan County, NM, and (3) Farnsworth Unit CO{sub 2}-EOR Site in the Anadarko Basin, Ochiltree County, TX. The sites were sufficiently different from each other to test the robustness of the QFMEA model.
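The ranking step of a quantitative FMEA is often expressed as a risk priority number combining probability, severity, and difficulty of early detection; the sketch below uses hypothetical failure modes and scales and is not drawn from the QFMEA model itself.

```python
# Ranking failure modes by a risk priority number (probability x severity x
# difficulty of detection). Scales and entries are hypothetical, not from the model.
failure_modes = {
    "wellbore leakage":   {"probability": 3, "severity": 9,  "detection": 6},
    "caprock fracture":   {"probability": 2, "severity": 10, "detection": 8},
    "pipeline corrosion": {"probability": 5, "severity": 6,  "detection": 3},
}

def rpn(f):
    return f["probability"] * f["severity"] * f["detection"]

for name, f in sorted(failure_modes.items(), key=lambda kv: rpn(kv[1]), reverse=True):
    print(f"{name}: RPN = {rpn(f)}")
```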
NASA Technical Reports Server (NTRS)
Carpenter, Paul; Curreri, Peter A. (Technical Monitor)
2002-01-01
This course will cover practical applications of the energy-dispersive spectrometer (EDS) to x-ray microanalysis. Topics covered will include detector technology, advances in pulse processing, resolution and performance monitoring, detector modeling, peak deconvolution and fitting, qualitative and quantitative analysis, compositional mapping, and standards. An emphasis will be placed on use of the EDS for quantitative analysis, with discussion of typical problems encountered in the analysis of a wide range of materials and sample geometries.
de Croon, E M; Blonk, R; de Zwart, B C H; Frings-Dresen, M; Broersen, J
2002-01-01
Objectives: Building on Karasek's model of job demands and control (JD-C model), this study examined the effects of job control, quantitative workload, and two occupation specific job demands (physical demands and supervisor demands) on fatigue and job dissatisfaction in Dutch lorry drivers. Methods: From 1181 lorry drivers (adjusted response 63%) self reported information was gathered by questionnaire on the independent variables (job control, quantitative workload, physical demands, and supervisor demands) and the dependent variables (fatigue and job dissatisfaction). Stepwise multiple regression analyses were performed to examine the main effects of job demands and job control and the interaction effect between job control and job demands on fatigue and job dissatisfaction. Results: The inclusion of physical and supervisor demands in the JD-C model explained a significant amount of variance in fatigue (3%) and job dissatisfaction (7%) over and above job control and quantitative workload. Moreover, in accordance with Karasek's interaction hypothesis, job control buffered the positive relation between quantitative workload and job dissatisfaction. Conclusions: Despite methodological limitations, the results suggest that the inclusion of (occupation) specific job control and job demand measures is a fruitful elaboration of the JD-C model. The occupation specific JD-C model gives occupational stress researchers better insight into the relation between the psychosocial work environment and wellbeing. Moreover, the occupation specific JD-C model may give practitioners more concrete and useful information about risk factors in the psychosocial work environment. Therefore, this model may provide points of departure for effective stress reducing interventions at work. PMID:12040108
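The buffering (interaction) hypothesis tested above can be sketched as a regression of dissatisfaction on workload, control, and their product term; the data and variable names below are simulated stand-ins, not the survey data.

```python
# Regression of job dissatisfaction on workload, control, and their interaction;
# a negative interaction term corresponds to the buffering effect. Simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({"workload": rng.normal(size=n), "control": rng.normal(size=n)})
df["dissatisfaction"] = (0.4 * df["workload"] - 0.3 * df["control"]
                         - 0.2 * df["workload"] * df["control"] + rng.normal(size=n))

model = smf.ols("dissatisfaction ~ workload * control", data=df).fit()
print(model.params)   # the workload:control term estimates the buffering effect
```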
Murumkar, Prashant R; Giridhar, Rajani; Yadav, Mange Ram
2008-04-01
A set of 29 benzothiadiazepine hydroxamates having selective tumor necrosis factor-alpha converting enzyme inhibitory activity were used to compare the quality and predictive power of 3D-quantitative structure-activity relationship, comparative molecular field analysis, and comparative molecular similarity indices models for the atom-based, centroid/atom-based, data-based, and docked conformer-based alignment. Removal of two outliers from the initial training set of molecules improved the predictivity of models. Among the 3D-quantitative structure-activity relationship models developed using the above four alignments, the database alignment provided the optimal predictive comparative molecular field analysis model for the training set with cross-validated r(2) (q(2)) = 0.510, non-cross-validated r(2) = 0.972, standard error of estimates (s) = 0.098, and F = 215.44 and the optimal comparative molecular similarity indices model with cross-validated r(2) (q(2)) = 0.556, non-cross-validated r(2) = 0.946, standard error of estimates (s) = 0.163, and F = 99.785. These models also showed the best test set prediction for six compounds with predictive r(2) values of 0.460 and 0.535, respectively. The contour maps obtained from 3D-quantitative structure-activity relationship studies were appraised for activity trends for the molecules analyzed. The comparative molecular similarity indices models exhibited good external predictivity as compared with that of comparative molecular field analysis models. The data generated from the present study helped us to further design and report some novel and potent tumor necrosis factor-alpha converting enzyme inhibitors.
de Croon, E M; Blonk, R W B; de Zwart, B C H; Frings-Dresen, M H W; Broersen, J P J
2002-06-01
Building on Karasek's model of job demands and control (JD-C model), this study examined the effects of job control, quantitative workload, and two occupation specific job demands (physical demands and supervisor demands) on fatigue and job dissatisfaction in Dutch lorry drivers. From 1181 lorry drivers (adjusted response 63%) self reported information was gathered by questionnaire on the independent variables (job control, quantitative workload, physical demands, and supervisor demands) and the dependent variables (fatigue and job dissatisfaction). Stepwise multiple regression analyses were performed to examine the main effects of job demands and job control and the interaction effect between job control and job demands on fatigue and job dissatisfaction. The inclusion of physical and supervisor demands in the JD-C model explained a significant amount of variance in fatigue (3%) and job dissatisfaction (7%) over and above job control and quantitative workload. Moreover, in accordance with Karasek's interaction hypothesis, job control buffered the positive relation between quantitative workload and job dissatisfaction. Despite methodological limitations, the results suggest that the inclusion of (occupation) specific job control and job demand measures is a fruitful elaboration of the JD-C model. The occupation specific JD-C model gives occupational stress researchers better insight into the relation between the psychosocial work environment and wellbeing. Moreover, the occupation specific JD-C model may give practitioners more concrete and useful information about risk factors in the psychosocial work environment. Therefore, this model may provide points of departure for effective stress reducing interventions at work.
Julkunen, Petro; Kiviranta, Panu; Wilson, Wouter; Jurvelin, Jukka S; Korhonen, Rami K
2007-01-01
Load-bearing characteristics of articular cartilage are impaired during tissue degeneration. Quantitative microscopy enables in vitro investigation of cartilage structure but determination of tissue functional properties necessitates experimental mechanical testing. The fibril-reinforced poroviscoelastic (FRPVE) model has been used successfully for estimation of cartilage mechanical properties. The model includes realistic collagen network architecture, as shown by microscopic imaging techniques. The aim of the present study was to investigate the relationships between the cartilage proteoglycan (PG) and collagen content as assessed by quantitative microscopic findings, and model-based mechanical parameters of the tissue. Site-specific variation of the collagen network moduli, PG matrix modulus and permeability was analyzed. Cylindrical cartilage samples (n=22) were harvested from various sites of the bovine knee and shoulder joints. Collagen orientation, as quantitated by polarized light microscopy, was incorporated into the finite-element model. Stepwise stress-relaxation experiments in unconfined compression were conducted for the samples, and sample-specific models were fitted to the experimental data in order to determine values of the model parameters. For comparison, Fourier transform infrared imaging and digital densitometry were used for the determination of collagen and PG content in the same samples, respectively. The initial and strain-dependent fibril network moduli as well as the initial permeability correlated significantly with the tissue collagen content. The equilibrium Young's modulus of the nonfibrillar matrix and the strain dependency of permeability were significantly associated with the tissue PG content. The present study demonstrates that modern quantitative microscopic methods in combination with the FRPVE model are feasible methods to characterize the structure-function relationships of articular cartilage.
Quantitative Systems Pharmacology: A Case for Disease Models.
Musante, C J; Ramanujan, S; Schmidt, B J; Ghobrial, O G; Lu, J; Heatherington, A C
2017-01-01
Quantitative systems pharmacology (QSP) has emerged as an innovative approach in model-informed drug discovery and development, supporting program decisions from exploratory research through late-stage clinical trials. In this commentary, we discuss the unique value of disease-scale "platform" QSP models that are amenable to reuse and repurposing to support diverse clinical decisions in ways distinct from other pharmacometrics strategies. © 2016 The Authors Clinical Pharmacology & Therapeutics published by Wiley Periodicals, Inc. on behalf of The American Society for Clinical Pharmacology and Therapeutics.
Kerkhofs, Johan; Geris, Liesbet
2015-01-01
Boolean models have been instrumental in predicting general features of gene networks and, more recently, have served as explorative tools in specific biological applications. In this study we introduce a basic quantitative and a limited time resolution to a discrete (Boolean) framework. Quantitative resolution is improved through the use of normalized variables together with an additive approach. Increased time resolution stems from the introduction of two distinct priority classes. Through the implementation of a previously published chondrocyte network and T helper cell network, we show that this addition of quantitative and time resolution broadens the scope of biological behaviour that can be captured by the models. Specifically, the quantitative resolution readily allows models to discern qualitative differences in dosage response to growth factors. The limited time resolution, in turn, can influence the reachability of attractors, delineating the likely long term system behaviour. Importantly, the information required for implementation of these features, such as the nature of an interaction, is typically obtainable from the literature. Nonetheless, a trade-off is always present between the additional computational cost of this approach and the likelihood of extending the model's scope. Indeed, in some cases the inclusion of these features does not yield additional insight. This framework, incorporating increased and readily available time and semi-quantitative resolution, can help in substantiating the litmus test of dynamics for gene networks, firstly by excluding unlikely dynamics and secondly by refining falsifiable predictions on qualitative behaviour. PMID:26067297
Dissecting Embryonic Stem Cell Self-Renewal and Differentiation Commitment from Quantitative Models.
Hu, Rong; Dai, Xianhua; Dai, Zhiming; Xiang, Qian; Cai, Yanning
2016-10-01
To model quantitatively embryonic stem cell (ESC) self-renewal and differentiation by computational approaches, we developed a unified mathematical model for gene expression involved in cell fate choices. Our quantitative model comprised ESC master regulators and lineage-specific pivotal genes. It took the factors of multiple pathways as input and computed expression as a function of intrinsic transcription factors, extrinsic cues, epigenetic modifications, and antagonism between ESC master regulators and lineage-specific pivotal genes. In the model, the differential equations for the expression of genes involved in cell fate choices were established from the regulatory relationships, according to the transcription and degradation rates. We applied this model to murine ESC self-renewal and differentiation commitment and found that it modeled the expression patterns with good accuracy. Our model analysis revealed that the murine ESC state was an attractor state in culture and that differentiation was predominantly caused by antagonism between ESC master regulators and lineage-specific pivotal genes. Moreover, antagonism among lineages played a critical role in lineage reprogramming. Our results also uncovered that the ordered expression alteration of ESC master regulators over time had a central role in ESC differentiation fates. Our computational framework was generally applicable to most cell-type maintenance and lineage reprogramming.
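A toy version of the antagonism mechanism described above, assuming mutual repression with transcription and degradation terms, could look like the following; parameters and initial conditions are illustrative only, not the paper's values.

```python
# Toy ODE model of mutual antagonism between an ESC master regulator and a
# lineage-specific gene, with transcription and degradation terms.
# Parameters and initial conditions are illustrative, not taken from the paper.
from scipy.integrate import solve_ivp

def mutual_antagonism(t, y, k=1.0, n=4, d=0.5):
    esc, lineage = y
    d_esc = k / (1 + lineage ** n) - d * esc       # transcription repressed by lineage gene
    d_lin = k / (1 + esc ** n) - d * lineage       # transcription repressed by ESC regulator
    return [d_esc, d_lin]

sol = solve_ivp(mutual_antagonism, (0, 50), [1.5, 0.2])
print("steady state (ESC, lineage):", sol.y[:, -1])   # settles in the ESC-high attractor
```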
NASA Astrophysics Data System (ADS)
Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia
2017-05-01
A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to find the least squares between the measured and calculated values over time, which may encounter some problems such as the overfitting of model parameters and a lack of reproducibility, especially when handling noisy data or error data. In this paper, a machine learning (ML) based kinetic modeling method is introduced, which can fully utilize a historical reference database to build a moderate kinetic model directly dealing with noisy data but not trying to smooth the noise in the image. Also, due to the database, the presented method is capable of automatically adjusting the models using a multi-thread grid parameter searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, which could find a balance between fitting to historical data and to the unseen target curve. The machine learning based method provides a robust and reproducible solution that is user-independent for VOI-based and pixel-wise quantitative analysis of dynamic PET data.
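The conventional iterative-fitting baseline that the paper compares against can be sketched as a least-squares fit of a one-tissue compartment model to a time-activity curve; the input function, noise level, and rate constants below are synthetic.

```python
# Least-squares fit of a one-tissue compartment model dCt/dt = K1*Cp - k2*Ct to a
# noisy time-activity curve (the iterative-fitting baseline). Synthetic input and rates.
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit

t = np.linspace(0.0, 60.0, 40)                   # minutes
cp = 5.0 * t * np.exp(-0.3 * t)                  # synthetic plasma input function

def tissue_curve(times, K1, k2):
    rhs = lambda c, tau: K1 * np.interp(tau, times, cp) - k2 * c
    return odeint(rhs, 0.0, times).ravel()

measured = tissue_curve(t, 0.4, 0.1) + np.random.default_rng(2).normal(0, 0.5, t.size)
(K1_fit, k2_fit), _ = curve_fit(tissue_curve, t, measured, p0=[0.2, 0.05])
print(f"fitted K1 = {K1_fit:.2f} /min, k2 = {k2_fit:.2f} /min")
```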
NASA Astrophysics Data System (ADS)
Clancy, Michael; Belli, Antonio; Davies, David; Lucas, Samuel J. E.; Su, Zhangjie; Dehghani, Hamid
2015-07-01
The subject of superficial contamination and signal origins remains a widely debated topic in the field of Near Infrared Spectroscopy (NIRS), yet the concept of using the technology to monitor an injured brain, in a clinical setting, poses additional challenges concerning the quantitative accuracy of recovered parameters. Using high density diffuse optical tomography probes, quantitatively accurate parameters from different layers (skin, bone and brain) can be recovered from subject specific reconstruction models. This study assesses the use of registered atlas models for situations where subject specific models are not available. Data simulated from subject specific models were reconstructed using the 8 registered atlas models implementing a regional (layered) parameter recovery in NIRFAST. A 3-region recovery based on the atlas model yielded recovered brain saturation values which were accurate to within 4.6% (percentage error) of the simulated values, validating the technique. The recovered saturations in the superficial regions were not quantitatively accurate. These findings highlight differences in superficial (skin and bone) layer thickness between the subject and atlas models. This layer thickness mismatch was propagated through the reconstruction process decreasing the parameter accuracy.
Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia
2017-05-07
A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to find the least squares between the measured and calculated values over time, which may encounter some problems such as the overfitting of model parameters and a lack of reproducibility, especially when handling noisy data or error data. In this paper, a machine learning (ML) based kinetic modeling method is introduced, which can fully utilize a historical reference database to build a moderate kinetic model directly dealing with noisy data but not trying to smooth the noise in the image. Also, due to the database, the presented method is capable of automatically adjusting the models using a multi-thread grid parameter searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, which could find a balance between fitting to historical data and to the unseen target curve. The machine learning based method provides a robust and reproducible solution that is user-independent for VOI-based and pixel-wise quantitative analysis of dynamic PET data.
Quantitative 3D investigation of Neuronal network in mouse spinal cord model
NASA Astrophysics Data System (ADS)
Bukreeva, I.; Campi, G.; Fratini, M.; Spanò, R.; Bucci, D.; Battaglia, G.; Giove, F.; Bravin, A.; Uccelli, A.; Venturi, C.; Mastrogiacomo, M.; Cedola, A.
2017-01-01
The investigation of the neuronal network in mouse spinal cord models represents the basis for the research on neurodegenerative diseases. In this framework, the quantitative analysis of the single elements in different districts is a crucial task. However, conventional 3D imaging techniques do not have enough spatial resolution and contrast to allow for a quantitative investigation of the neuronal network. Exploiting the high coherence and the high flux of synchrotron sources, X-ray Phase-Contrast multiscale-Tomography allows for the 3D investigation of the neuronal microanatomy without any aggressive sample preparation or sectioning. We investigated healthy-mouse neuronal architecture by imaging the 3D distribution of the neuronal-network with a spatial resolution of 640 nm. The high quality of the obtained images enables a quantitative study of the neuronal structure on a subject-by-subject basis. We developed and applied a spatial statistical analysis on the motor neurons to obtain quantitative information on their 3D arrangement in the healthy-mice spinal cord. Then, we compared the obtained results with a mouse model of multiple sclerosis. Our approach paves the way to the creation of a “database” for the characterization of the neuronal network main features for a comparative investigation of neurodegenerative diseases and therapies.
Védy, S; Garnotel, E; Koeck, J-L; Simon, F; Molinier, S; Puidupin, A
2007-11-01
To determine the origin of S. aureus acquired by hospitalised patients and to evaluate the transmission of strains between health care workers and hospitalised patients. The method chosen was a prospective study in high-risk clinical wards. Nasal swabbing of patients and health care workers was performed to isolate bacterial samples. Characterisation and comparison of bacterial strains were carried out using their antibiotic resistance profiles and a recent molecular genotyping technique named MLVA (Multiple Locus Variable Number of Tandem Repeat analysis), which had never been used in this context. One hundred and fifty-seven strains were isolated and compared, requiring 1900 PCR reactions and agar gel electrophoresis runs over 10 days. Fifteen clones were identified. One of them predominated among both nasal-carriage and acquired strains; in terms of antibiotype and agr type, it is similar to a hospital-acquired clone described in Europe with other techniques (MRSA, gentamicin-susceptible, agr type 1). This clone also appears to be transmitted between health care workers and patients, although the intensity of this transmission cannot be assessed. These results do not justify systematic screening for nasal carriage among our health care workers. This study shows that MLVA could be a reliable molecular typing method suitable for everyday practice; in our experience it performs as well as PFGE and is more didactic, faster, and easier.
NASA Astrophysics Data System (ADS)
Filali, Bilai
Graphene, as an advanced carbon nanostructure, has recently attracted intense interest because of its outstanding mechanical, electrical and thermal properties. There are several practical routes to synthesize graphene, such as mechanical exfoliation, chemical vapor deposition (CVD), and anodic arc discharge. This thesis discusses a method of graphene synthesis in plasma that is sustained by erosion of the anode material; it is one of the most practical methods and can provide a high production rate. High-purity graphene flakes have been synthesized with an anodic arc method at a pressure of about 500 torr. Raman spectroscopy, scanning electron microscopy (SEM), atomic force microscopy (AFM) and transmission electron microscopy (TEM) were used to characterize the synthesis products. Arc-produced graphene and commercially available graphene were compared with these instruments; the differences lie in the number of layers, the thickness of each layer and the shape of the structure itself. The temperature dependence of the synthesis procedure has been studied. It has been found that graphene can be produced on a copper foil substrate at temperatures near the melting point of copper; however, decreasing the substrate temperature transforms the synthesized graphene into amorphous carbon. Glow discharge was used to functionalize the graphene, and SEM and EDS observations indicated an increase in the oxygen content of the graphene after its exposure to the glow discharge.
Agapova, Maria; Devine, Emily Beth; Bresnahan, Brian W; Higashi, Mitchell K; Garrison, Louis P
2014-09-01
Health agencies making regulatory marketing-authorization decisions use qualitative and quantitative approaches to assess expected benefits and expected risks associated with medical interventions. There is, however, no universal standard approach that regulatory agencies consistently use to conduct benefit-risk assessment (BRA) for pharmaceuticals or medical devices, including for imaging technologies. Economics, health services research, and health outcomes research use quantitative approaches to elicit preferences of stakeholders, identify priorities, and model health conditions and health intervention effects. Challenges to BRA in medical devices are outlined, highlighting additional barriers in radiology. Three quantitative methods--multi-criteria decision analysis, health outcomes modeling and stated-choice survey--are assessed using criteria that are important in balancing benefits and risks of medical devices and imaging technologies. To be useful in regulatory BRA, quantitative methods need to: aggregate multiple benefits and risks, incorporate qualitative considerations, account for uncertainty, and make clear whose preferences/priorities are being used. Each quantitative method performs differently across these criteria and little is known about how BRA estimates and conclusions vary by approach. While no specific quantitative method is likely to be the strongest in all of the important areas, quantitative methods may have a place in BRA of medical devices and radiology. Quantitative BRA approaches have been more widely applied in medicines, with fewer BRAs in devices. Despite substantial differences in characteristics of pharmaceuticals and devices, BRA methods may be as applicable to medical devices and imaging technologies as they are to pharmaceuticals. Further research to guide the development and selection of quantitative BRA methods for medical devices and imaging technologies is needed. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
Designing automation for human use: empirical studies and quantitative models.
Parasuraman, R
2000-07-01
An emerging knowledge base of human performance research can provide guidelines for designing automation that can be used effectively by human operators of complex systems. Which functions should be automated and to what extent in a given system? A model for types and levels of automation that provides a framework and an objective basis for making such choices is described. The human performance consequences of particular types and levels of automation constitute primary evaluative criteria for automation design when using the model. Four human performance areas are considered--mental workload, situation awareness, complacency and skill degradation. Secondary evaluative criteria include such factors as automation reliability, the risks of decision/action consequences and the ease of systems integration. In addition to this qualitative approach, quantitative models can inform design. Several computational and formal models of human interaction with automation that have been proposed by various researchers are reviewed. An important future research need is the integration of qualitative and quantitative approaches. Application of these models provides an objective basis for designing automation for effective human use.
A quantitative model to assess Social Responsibility in Environmental Science and Technology.
Valcárcel, M; Lucena, R
2014-01-01
The awareness of the impact of human activities in society and environment is known as "Social Responsibility" (SR). It has been a topic of growing interest in many enterprises since the fifties of the past Century, and its implementation/assessment is nowadays supported by international standards. There is a tendency to amplify its scope of application to other areas of the human activities, such as Research, Development and Innovation (R + D + I). In this paper, a model of quantitative assessment of Social Responsibility in Environmental Science and Technology (SR EST) is described in detail. This model is based on well established written standards as the EFQM Excellence model and the ISO 26000:2010 Guidance on SR. The definition of five hierarchies of indicators, the transformation of qualitative information into quantitative data and the dual procedure of self-evaluation and external evaluation are the milestones of the proposed model, which can be applied to Environmental Research Centres and institutions. In addition, a simplified model that facilitates its implementation is presented in the article. © 2013 Elsevier B.V. All rights reserved.
2010-01-01
Background Quantitative models of biochemical and cellular systems are used to answer a variety of questions in the biological sciences. The number of published quantitative models is growing steadily thanks to increasing interest in the use of models as well as the development of improved software systems and the availability of better, cheaper computer hardware. To maximise the benefits of this growing body of models, the field needs centralised model repositories that will encourage, facilitate and promote model dissemination and reuse. Ideally, the models stored in these repositories should be extensively tested and encoded in community-supported and standardised formats. In addition, the models and their components should be cross-referenced with other resources in order to allow their unambiguous identification. Description BioModels Database http://www.ebi.ac.uk/biomodels/ is aimed at addressing exactly these needs. It is a freely-accessible online resource for storing, viewing, retrieving, and analysing published, peer-reviewed quantitative models of biochemical and cellular systems. The structure and behaviour of each simulation model distributed by BioModels Database are thoroughly checked; in addition, model elements are annotated with terms from controlled vocabularies as well as linked to relevant data resources. Models can be examined online or downloaded in various formats. Reaction network diagrams generated from the models are also available in several formats. BioModels Database also provides features such as online simulation and the extraction of components from large scale models into smaller submodels. Finally, the system provides a range of web services that external software systems can use to access up-to-date data from the database. Conclusions BioModels Database has become a recognised reference resource for systems biology. It is being used by the community in a variety of ways; for example, it is used to benchmark different simulation systems, and to study the clustering of models based upon their annotations. Model deposition to the database today is advised by several publishers of scientific journals. The models in BioModels Database are freely distributed and reusable; the underlying software infrastructure is also available from SourceForge https://sourceforge.net/projects/biomodels/ under the GNU General Public License. PMID:20587024
NASA Astrophysics Data System (ADS)
James, Jessica
2017-01-01
Quantitative finance is a field that has risen to prominence over the last few decades. It encompasses the complex models and calculations that value financial contracts, particularly those which reference events in the future, and apply probabilities to these events. While adding greatly to the flexibility of the market available to corporations and investors, it has also been blamed for worsening the impact of financial crises. But what exactly does quantitative finance encompass, and where did these ideas and models originate? We show that the mathematics behind finance and behind games of chance have tracked each other closely over the centuries and that many well-known physicists and mathematicians have contributed to the field.
Xu, Y.; Xia, J.; Miller, R.D.
2006-01-01
Multichannel analysis of surface waves is a developing method widely used in shallow subsurface investigations. The field procedures and related parameters are very important for successful applications. Among these parameters, the source-receiver offset range is seldom discussed in theory and normally determined by empirical or semi-quantitative methods in current practice. This paper discusses the problem from a theoretical perspective. A formula for quantitatively evaluating a layered homogenous elastic model was developed. The analytical results based on simple models and experimental data demonstrate that the formula is correct for surface wave surveys for near-surface applications. ?? 2005 Elsevier B.V. All rights reserved.
From Inverse Problems in Mathematical Physiology to Quantitative Differential Diagnoses
Zenker, Sven; Rubin, Jonathan; Clermont, Gilles
2007-01-01
The improved capacity to acquire quantitative data in a clinical setting has generally failed to improve outcomes in acutely ill patients, suggesting a need for advances in computer-supported data interpretation and decision making. In particular, the application of mathematical models of experimentally elucidated physiological mechanisms could augment the interpretation of quantitative, patient-specific information and help to better target therapy. Yet, such models are typically complex and nonlinear, a reality that often precludes the identification of unique parameters and states of the model that best represent available data. Hypothesizing that this non-uniqueness can convey useful information, we implemented a simplified simulation of a common differential diagnostic process (hypotension in an acute care setting), using a combination of a mathematical model of the cardiovascular system, a stochastic measurement model, and Bayesian inference techniques to quantify parameter and state uncertainty. The output of this procedure is a probability density function on the space of model parameters and initial conditions for a particular patient, based on prior population information together with patient-specific clinical observations. We show that multimodal posterior probability density functions arise naturally, even when unimodal and uninformative priors are used. The peaks of these densities correspond to clinically relevant differential diagnoses and can, in the simplified simulation setting, be constrained to a single diagnosis by assimilating additional observations from dynamical interventions (e.g., fluid challenge). We conclude that the ill-posedness of the inverse problem in quantitative physiology is not merely a technical obstacle, but rather reflects clinical reality and, when addressed adequately in the solution process, provides a novel link between mathematically described physiological knowledge and the clinical concept of differential diagnoses. We outline possible steps toward translating this computational approach to the bedside, to supplement today's evidence-based medicine with a quantitatively founded model-based medicine that integrates mechanistic knowledge with patient-specific information. PMID:17997590
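The non-uniqueness discussed above can be illustrated with a toy grid posterior over two parameters of a drastically simplified "pressure = resistance x volume" model; the ridge of compatible parameter combinations mimics a set of competing differential diagnoses. All numbers are synthetic.

```python
# Grid posterior over two parameters of a toy "pressure = resistance * volume" model
# given one noisy observation; the ridge of compatible (R, V) pairs plays the role
# of competing differential diagnoses. All values are synthetic.
import numpy as np

resistance = np.linspace(0.5, 2.0, 200)
volume = np.linspace(0.5, 2.0, 200)
R, V = np.meshgrid(resistance, volume)

predicted = R * V                                            # toy forward model
observed, noise_sd = 1.0, 0.05
log_post = -0.5 * ((predicted - observed) / noise_sd) ** 2   # flat prior assumed
post = np.exp(log_post - log_post.max())
post /= post.sum()

# mass splits between "high resistance / low volume" and "low resistance / high volume"
print("posterior mass with R > V:", round(float(post[R > V].sum()), 2))
```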
Li, Yuanpeng; Li, Fucui; Yang, Xinhao; Guo, Liu; Huang, Furong; Chen, Zhenqiang; Chen, Xingdan; Zheng, Shifu
2018-08-05
A rapid quantitative analysis model for determining the glycated albumin (GA) content, based on attenuated total reflectance Fourier transform infrared spectroscopy (ATR-FTIR) combined with linear SiPLS and nonlinear SVM, has been developed. Firstly, the true GA content in human serum was determined by the GA enzymatic method, and the ATR-FTIR spectra of serum samples from a health-examination population were obtained. The spectral data of the whole mid-infrared region (4000-600 cm⁻¹) and of GA's characteristic region (1800-800 cm⁻¹) were used as the basis of the quantitative analysis. Secondly, several preprocessing steps, including first derivative, second derivative, variable standardization and spectral normalization, were performed. Lastly, quantitative regression models were established using SiPLS and SVM, respectively. The SiPLS modeling results were: root mean square error of cross validation (RMSECV_T) = 0.523 g/L, calibration coefficient (R_C) = 0.937, root mean square error of prediction (RMSEP_T) = 0.787 g/L, and prediction coefficient (R_P) = 0.938. The SVM modeling results were: RMSECV_T = 0.0048 g/L, R_C = 0.998, RMSEP_T = 0.442 g/L, and R_P = 0.916. The results indicated that model performance improved significantly after preprocessing and optimization of the characteristic regions, and that the modeling performance of the nonlinear SVM was considerably better than that of the linear SiPLS. Hence, the quantitative analysis model for GA in human serum based on ATR-FTIR combined with SiPLS and SVM is effective; it requires no sample pretreatment, is simple to operate and time-efficient, and provides a rapid and accurate method for GA content determination. Copyright © 2018 Elsevier B.V. All rights reserved.
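A hedged sketch of the two regression approaches compared above, on synthetic "spectra": interval-based SiPLS is approximated here by a plain PLS regression on the full matrix, and the nonlinear model by an RBF support vector regressor; data, component counts, and hyperparameters are illustrative.

```python
# PLS vs. RBF-SVM regression on synthetic "spectra", scored by cross-validated RMSE.
# SiPLS is approximated here by ordinary PLS on the full spectral matrix.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)
X = rng.normal(size=(80, 300))                                   # 80 spectra, 300 variables
y = X[:, 50] + 0.5 * X[:, 120] ** 2 + rng.normal(0, 0.1, 80)     # synthetic GA content

for name, model in [("PLS", PLSRegression(n_components=5)), ("SVM", SVR(kernel="rbf", C=10.0))]:
    pred = cross_val_predict(model, X, y, cv=5)
    rmsecv = np.sqrt(mean_squared_error(y, np.ravel(pred)))
    print(f"{name}: RMSECV = {rmsecv:.3f}")
```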
Quantitative Systems Pharmacology: A Case for Disease Models
Ramanujan, S; Schmidt, BJ; Ghobrial, OG; Lu, J; Heatherington, AC
2016-01-01
Quantitative systems pharmacology (QSP) has emerged as an innovative approach in model‐informed drug discovery and development, supporting program decisions from exploratory research through late‐stage clinical trials. In this commentary, we discuss the unique value of disease‐scale “platform” QSP models that are amenable to reuse and repurposing to support diverse clinical decisions in ways distinct from other pharmacometrics strategies. PMID:27709613
Translational PK/PD of Anti-Infective Therapeutics
Rathi, Chetan; Lee, Richard E.; Meibohm, Bernd
2016-01-01
Translational PK/PD modeling has emerged as a critical technique for quantitative analysis of the relationship between dose, exposure and response of antibiotics. By combining model components for pharmacokinetics, bacterial growth kinetics and concentration-dependent drug effects, these models are able to quantitatively capture and simulate the complex interplay between antibiotic, bacterium and host organism. Fine-tuning of these basic model structures allows to further account for complicating factors such as resistance development, combination therapy, or host responses. With this tool set at hand, mechanism-based PK/PD modeling and simulation allows to develop optimal dosing regimens for novel and established antibiotics for maximum efficacy and minimal resistance development. PMID:27978987
NASA Astrophysics Data System (ADS)
Ştefan, Bilaşco; Sanda, Roşca; Ioan, Fodorean; Iuliu, Vescan; Sorin, Filip; Dănuţ, Petrea
2017-12-01
Maramureş Land is mostly characterized by agricultural and forestry land use due to its specific configuration of topography and its specific pedoclimatic conditions. Taking into consideration the trend of the last century from the perspective of land management, a decrease in the surface of agricultural lands to the advantage of built-up and grass lands, as well as an accelerated decrease in the forest cover due to uncontrolled and irrational forest exploitation, has become obvious. The field analysis performed on the territory of Maramureş Land has highlighted a high frequency of two geomorphologic processes — landslides and soil erosion — which have a major negative impact on land use due to their rate of occurrence. The main aim of the present study is the GIS modeling of the two geomorphologic processes, determining a state of vulnerability (the USLE model for soil erosion and a quantitative model based on the morphometric characteristics of the territory, derived from the HG. 447/2003) and their integration in a complex model of cumulated vulnerability identification. The modeling of the risk exposure was performed using a quantitative approach based on models and equations of spatial analysis, which were developed with modeled raster data structures and primary vector data, through a matrix highlighting the correspondence between vulnerability and land use classes. The quantitative analysis of the risk was performed by taking into consideration the exposure classes as modeled databases and the land price as a primary alphanumeric database using spatial analysis techniques for each class by means of the attribute table. The spatial results highlight the territories with a high risk to present geomorphologic processes that have a high degree of occurrence and represent a useful tool in the process of spatial planning.
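The USLE component mentioned above multiplies factor rasters, A = R x K x LS x C x P (soil loss = rainfall erosivity x soil erodibility x slope length-steepness x cover x support practice); a minimal per-cell sketch with hypothetical factor values follows.

```python
# Per-cell USLE soil loss: A = R * K * LS * C * P. Factor rasters are hypothetical.
import numpy as np

rng = np.random.default_rng(4)
R  = np.full((4, 4), 80.0)                 # rainfall erosivity (assumed units)
K  = rng.uniform(0.2, 0.4, (4, 4))         # soil erodibility
LS = rng.uniform(0.5, 3.0, (4, 4))         # slope length-steepness factor
C, P = 0.3, 1.0                            # cover and support-practice factors, assumed

soil_loss = R * K * LS * C * P             # modeled soil loss per raster cell
print("mean modeled soil loss:", round(float(soil_loss.mean()), 1))
```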
NASA Astrophysics Data System (ADS)
Ştefan, Bilaşco; Sanda, Roşca; Ioan, Fodorean; Iuliu, Vescan; Sorin, Filip; Dănuţ, Petrea
2018-06-01
Maramureş Land is mostly characterized by agricultural and forestry land use due to its specific configuration of topography and its specific pedoclimatic conditions. Taking into consideration the trend of the last century from the perspective of land management, a decrease in the surface of agricultural lands to the advantage of built-up and grass lands, as well as an accelerated decrease in the forest cover due to uncontrolled and irrational forest exploitation, has become obvious. The field analysis performed on the territory of Maramureş Land has highlighted a high frequency of two geomorphologic processes — landslides and soil erosion — which have a major negative impact on land use due to their rate of occurrence. The main aim of the present study is the GIS modeling of the two geomorphologic processes, determining a state of vulnerability (the USLE model for soil erosion and a quantitative model based on the morphometric characteristics of the territory, derived from the HG. 447/2003) and their integration in a complex model of cumulated vulnerability identification. The modeling of the risk exposure was performed using a quantitative approach based on models and equations of spatial analysis, which were developed with modeled raster data structures and primary vector data, through a matrix highlighting the correspondence between vulnerability and land use classes. The quantitative analysis of the risk was performed by taking into consideration the exposure classes as modeled databases and the land price as a primary alphanumeric database using spatial analysis techniques for each class by means of the attribute table. The spatial results highlight the territories with a high risk to present geomorphologic processes that have a high degree of occurrence and represent a useful tool in the process of spatial planning.
Electromagnetic braking: A simple quantitative model
NASA Astrophysics Data System (ADS)
Levin, Yan; da Silveira, Fernando L.; Rizzato, Felipe B.
2006-09-01
A calculation is presented that quantitatively accounts for the terminal velocity of a cylindrical magnet falling through a long copper or aluminum pipe. The experiment and the theory are a dramatic illustration of Faraday's and Lenz's laws.
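The core of such a model is a drag force proportional to velocity, so the terminal velocity follows from the balance m g = k v; the damping coefficient used below is purely illustrative, not the paper's derived expression.

```python
# Terminal velocity from the balance of gravity against a linear magnetic drag F = k*v.
# The damping coefficient below is illustrative, not the paper's derived expression.
m = 0.01        # magnet mass in kg (assumed)
g = 9.81        # gravitational acceleration, m/s^2
k = 0.35        # effective damping coefficient in kg/s (assumed)

v_terminal = m * g / k
print(f"terminal velocity ~ {v_terminal:.2f} m/s")
```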
Hao, Yong; Sun, Xu-Dong; Yang, Qiang
2012-12-01
A variable selection strategy combined with local linear embedding (LLE) was introduced for the analysis of complex samples using near infrared spectroscopy (NIRS). Three methods, Monte Carlo uninformative variable elimination (MCUVE), the successive projections algorithm (SPA), and MCUVE combined with SPA, were used to eliminate redundant spectral variables. Partial least squares regression (PLSR) and LLE-PLSR were used to model the complex samples. The results showed that MCUVE can both extract effective informative variables and improve the precision of the models. Compared with PLSR models, LLE-PLSR models achieve more accurate analysis results. MCUVE combined with LLE-PLSR is an effective modeling method for NIRS quantitative analysis.
Complexity-aware simple modeling.
Gómez-Schiavon, Mariana; El-Samad, Hana
2018-02-26
Mathematical models continue to be essential for deepening our understanding of biology. On one extreme, simple or small-scale models help delineate general biological principles. However, the parsimony of detail in these models as well as their assumption of modularity and insulation make them inaccurate for describing quantitative features. On the other extreme, large-scale and detailed models can quantitatively recapitulate a phenotype of interest, but have to rely on many unknown parameters, making them often difficult to parse mechanistically and to use for extracting general principles. We discuss some examples of a new approach-complexity-aware simple modeling-that can bridge the gap between the small-scale and large-scale approaches. Copyright © 2018 Elsevier Ltd. All rights reserved.
Methodologies for Quantitative Systems Pharmacology (QSP) Models: Design and Estimation.
Ribba, B; Grimm, H P; Agoram, B; Davies, M R; Gadkar, K; Niederer, S; van Riel, N; Timmis, J; van der Graaf, P H
2017-08-01
With the increased interest in the application of quantitative systems pharmacology (QSP) models within medicine research and development, there is an increasing need to formalize model development and verification aspects. In February 2016, a workshop was held at Roche Pharma Research and Early Development to focus discussions on two critical methodological aspects of QSP model development: optimal structural granularity and parameter estimation. We here report in a perspective article a summary of presentations and discussions. © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
De Benedetti, Pier G; Fanelli, Francesca
2018-03-21
Simple comparative correlation analyses and quantitative structure-kinetics relationship (QSKR) models highlight the interplay of kinetic rates and binding affinity as an essential feature in drug design and discovery. The choice of the molecular series, and their structural variations, used in QSKR modeling is fundamental to understanding the mechanistic implications of ligand and/or drug-target binding and/or unbinding processes. Here, we discuss the implications of linear correlations between kinetic rates and binding affinity constants and the relevance of the computational approaches to QSKR modeling. Copyright © 2018 Elsevier Ltd. All rights reserved.
Predictive value of EEG in postanoxic encephalopathy: A quantitative model-based approach.
Efthymiou, Evdokia; Renzel, Roland; Baumann, Christian R; Poryazova, Rositsa; Imbach, Lukas L
2017-10-01
The majority of comatose patients after cardiac arrest do not regain consciousness due to severe postanoxic encephalopathy. Early and accurate outcome prediction is therefore essential in determining further therapeutic interventions. The electroencephalogram is a standardized and commonly available tool used to estimate prognosis in postanoxic patients. The identification of pathological EEG patterns with poor prognosis relies however primarily on visual EEG scoring by experts. We introduced a model-based approach of EEG analysis (state space model) that allows for an objective and quantitative description of spectral EEG variability. We retrospectively analyzed standard EEG recordings in 83 comatose patients after cardiac arrest between 2005 and 2013 in the intensive care unit of the University Hospital Zürich. Neurological outcome was assessed one month after cardiac arrest using the Cerebral Performance Category. For a dynamic and quantitative EEG analysis, we implemented a model-based approach (state space analysis) to quantify EEG background variability independent from visual scoring of EEG epochs. Spectral variability was compared between groups and correlated with clinical outcome parameters and visual EEG patterns. Quantitative assessment of spectral EEG variability (state space velocity) revealed significant differences between patients with poor and good outcome after cardiac arrest: Lower mean velocity in temporal electrodes (T4 and T5) was significantly associated with poor prognostic outcome (p<0.005) and correlated with independently identified visual EEG patterns such as generalized periodic discharges (p<0.02). Receiver operating characteristic (ROC) analysis confirmed the predictive value of lower state space velocity for poor clinical outcome after cardiac arrest (AUC 80.8, 70% sensitivity, 15% false positive rate). Model-based quantitative EEG analysis (state space analysis) provides a novel, complementary marker for prognosis in postanoxic encephalopathy. Copyright © 2017 Elsevier B.V. All rights reserved.
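The ROC step reported above can be sketched as follows, evaluating how well a scalar EEG feature (a simulated stand-in for state space velocity) separates poor from good outcome; labels and feature values are synthetic.

```python
# ROC analysis of a scalar feature (stand-in for state space velocity) against
# dichotomized outcome; labels and feature values are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
outcome_poor = np.r_[np.ones(40), np.zeros(43)]                        # hypothetical labels
velocity = np.r_[rng.normal(0.8, 0.3, 40), rng.normal(1.4, 0.3, 43)]   # lower in poor outcome

# lower velocity predicts poor outcome, so score with the negated feature
auc = roc_auc_score(outcome_poor, -velocity)
print(f"AUC = {auc:.2f}")
```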
Attiyeh, Marc A; Chakraborty, Jayasree; Doussot, Alexandre; Langdon-Embry, Liana; Mainarich, Shiana; Gönen, Mithat; Balachandran, Vinod P; D'Angelica, Michael I; DeMatteo, Ronald P; Jarnagin, William R; Kingham, T Peter; Allen, Peter J; Simpson, Amber L; Do, Richard K
2018-04-01
Pancreatic cancer is a highly lethal cancer with no established a priori markers of survival. Existing nomograms rely mainly on post-resection data and are of limited utility in directing surgical management. This study investigated the use of quantitative computed tomography (CT) features to preoperatively assess survival for pancreatic ductal adenocarcinoma (PDAC) patients. A prospectively maintained database identified consecutive chemotherapy-naive patients with CT angiography and resected PDAC between 2009 and 2012. Variation in CT enhancement patterns was extracted from the tumor region using texture analysis, a quantitative image analysis tool previously described in the literature. Two continuous survival models were constructed, with 70% of the data (training set) using Cox regression, first based only on preoperative serum cancer antigen (CA) 19-9 levels and image features (model A), and then on CA19-9, image features, and the Brennan score (composite pathology score; model B). The remaining 30% of the data (test set) were reserved for independent validation. A total of 161 patients were included in the analysis. Training and test sets contained 113 and 48 patients, respectively. Quantitative image features combined with CA19-9 achieved a c-index of 0.69 [integrated Brier score (IBS) 0.224] on the test data, while combining CA19-9, imaging, and the Brennan score achieved a c-index of 0.74 (IBS 0.200) on the test data. We present two continuous survival prediction models for resected PDAC patients. Quantitative analysis of CT texture features is associated with overall survival. Further work includes applying the model to an external dataset to increase the sample size for training and to determine its applicability.
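A minimal sketch of the modeling approach, assuming a Cox proportional-hazards fit of overall survival on CA19-9 plus one CT texture feature evaluated by concordance index; the data frame and column names are simulated stand-ins, not the study data.

```python
# Cox proportional-hazards model of overall survival on CA19-9 and one CT texture
# feature, evaluated by concordance index. Data and column names are simulated.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 161
df = pd.DataFrame({
    "ca19_9": rng.lognormal(3.0, 1.0, n),
    "texture_entropy": rng.normal(0.0, 1.0, n),
    "months": rng.exponential(20.0, n),        # follow-up time
    "event": rng.integers(0, 2, n),            # 1 = death observed, 0 = censored
})

cph = CoxPHFitter().fit(df, duration_col="months", event_col="event")
print(f"c-index = {cph.concordance_index_:.2f}")
```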
From themes to hypotheses: following up with quantitative methods.
Morgan, David L
2015-06-01
One important category of mixed-methods research designs consists of quantitative studies that follow up on qualitative research. In this case, the themes that serve as the results from the qualitative methods generate hypotheses for testing through the quantitative methods. That process requires operationalization to translate the concepts from the qualitative themes into quantitative variables. This article illustrates these procedures with examples that range from simple operationalization to the evaluation of complex models. It concludes with an argument for not only following up qualitative work with quantitative studies but also the reverse, and doing so by going beyond integrating methods within single projects to include broader mutual attention from qualitative and quantitative researchers who work in the same field. © The Author(s) 2015.
Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Ryan B.; Clegg, Samuel M.; Frydenvang, Jens
We report that accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibrations methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “submodel” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. Lastly, the sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
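A conceptual sketch of the sub-model idea: train separate PLS regressions on low- and high-concentration subsets and blend their predictions according to a first-pass estimate from a full-range model; the data, threshold, and blending rule below are illustrative, not the ChemCam implementation.

```python
# Sub-model blending with PLS: fit low- and high-range regressions, then blend their
# predictions using a first-pass estimate from a full-range model. Synthetic data;
# the threshold and blending rule are illustrative, not the ChemCam implementation.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(8)
X = rng.normal(size=(200, 100))                    # synthetic LIBS spectra
y = 50 + 10 * X[:, 3] + rng.normal(0, 1, 200)      # synthetic composition (wt.%)

full = PLSRegression(n_components=5).fit(X, y)
low  = PLSRegression(n_components=5).fit(X[y < 50], y[y < 50])
high = PLSRegression(n_components=5).fit(X[y >= 50], y[y >= 50])

x_new = X[:1]
estimate = full.predict(x_new).item()
w = float(np.clip((estimate - 45) / 10, 0, 1))     # weight ramps across the overlap region
blended = (1 - w) * low.predict(x_new).item() + w * high.predict(x_new).item()
print(f"full-model estimate {estimate:.1f}, blended estimate {blended:.1f}")
```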
Caballero-Lima, David; Kaneva, Iliyana N.; Watton, Simon P.
2013-01-01
In the hyphal tip of Candida albicans we have made detailed quantitative measurements of (i) exocyst components, (ii) Rho1, the regulatory subunit of (1,3)-β-glucan synthase, (iii) Rom2, the specialized guanine-nucleotide exchange factor (GEF) of Rho1, and (iv) actin cortical patches, the sites of endocytosis. We use the resulting data to construct and test a quantitative 3-dimensional model of fungal hyphal growth based on the proposition that vesicles fuse with the hyphal tip at a rate determined by the local density of exocyst components. Enzymes such as (1,3)-β-glucan synthase thus embedded in the plasma membrane continue to synthesize the cell wall until they are removed by endocytosis. The model successfully predicts the shape and dimensions of the hyphae, provided that endocytosis acts to remove cell wall-synthesizing enzymes at the subapical bands of actin patches. Moreover, a key prediction of the model is that the distribution of the synthase is substantially broader than the area occupied by the exocyst. This prediction is borne out by our quantitative measurements. Thus, although the model highlights detailed issues that require further investigation, in general terms the pattern of tip growth of fungal hyphae can be satisfactorily explained by a simple but quantitative model rooted within the known molecular processes of polarized growth. Moreover, the methodology can be readily adapted to model other forms of polarized growth, such as that which occurs in plant pollen tubes. PMID:23666623
Semiconductor devices for photonic biodetection and hyperspectral imaging
NASA Astrophysics Data System (ADS)
Lepage, Dominic
Creating a biochemical analysis microsystem capable of delivering preliminary diagnostics on the quantification of pathogenic agents is a multidisciplinary challenge with a potentially large impact on most human activities in health and safety. Indeed, an integrated, inexpensive device delivering easily interpretable results would make biodetection capabilities broadly accessible across various societal and industrial fields of application. This document focuses on the monolithic integration of a biocharacterization method in order to produce a miniaturized, efficient transducer, the central element of a detection microsystem. The research project presented here studies the applicability of a plasmonic sensor integrated through semiconductor nanostructures with quantum and luminescent properties. The approach is global: it aims to answer the fundamental questions involving the understanding of the photonic phenomena, the development and fabrication of the devices, the possible characterization methods, and the application of an integrated SPR transducer to biodetection. In other words: under what circumstances, and how, should an integrated plasmonic transducer be built for application to the delocalized detection of pathogens? To yield an instrument that is simple at the user level, knowledge is therefore integrated at the design level. Monolithic plasmonic sensors are thus designed with the help of the theoretical models presented here. A conjugate hyperspectral measurement instrument that directly maps the dispersion relation of diffracted plasmons was built and tested, and is used to map scattering elements. Finally, a demonstration of the operation of the device, applied to the biocharacterization of simple events such as bovine serum albumin binding and the detection of a specific strain of influenza A, is delivered. This answers the question of the feasibility of a plasmonic nanosystem applicable to pathogen detection. Keywords: Biosensor; Surface plasmons; Light scattering; Quantum semiconductor; Conjugate microscopy; Influenza A virus
NASA Astrophysics Data System (ADS)
Kodjo, Apedovi
The aim of this thesis is to contribute to the non-destructive characterization of concrete damaged by alkali-silica reaction (ASR). For this purpose, several nonlinear characterization techniques were developed, along with a nonlinear resonance test device. To optimize the sensitivity of the test device, the excitation module and the signal processing were improved. The nonlinear tests were conducted on seven concrete samples damaged by ASR, three samples damaged by heat, three samples damaged mechanically, and three sound concrete samples. Since the nonlinear behaviour of a material is often attributed to the hysteretic behaviour of its micro-defects, it was first shown that concrete damaged by ASR exhibits hysteresis; to this end, an acoustoelastic test was set up, and the nonlinear resonance test device was then used to characterize sound concrete and concrete damaged by ASR. It was shown that the nonlinear technique can characterize the material without knowledge of its initial state and can detect early damage in the reactive material. The effect of moisture on the nonlinear parameters was also studied; this explained the low values of the nonlinear parameters measured on concrete samples kept in high-moisture conditions. To identify a characteristic specific to ASR damage, the viscosity of the ASR gel was exploited: a static creep analysis was performed on the material while applying the nonlinear resonance technique, and the Maxwell spring-damper model was used to interpret the results. The creep time was then analysed on samples damaged by ASR; the ASR gel appears to increase the creep time. Finally, the limitations of the nonlinear resonance technique for in situ application are explained and a new, field-applicable nonlinear technique was initiated. This technique uses an external source, such as a mass, to generate nonlinear behaviour in the material while an ultrasound wave probes the medium. Keywords: Concrete, Alkali-silica reaction, Nonlinear acoustics, Nonlinearity, Hysteresis, Damage diagnostics.
Fabrication of conical parts by flexible injection for aerospace applications
NASA Astrophysics Data System (ADS)
Shebib Loiselle, Vincent
Composite materials have been used in space-engine nozzles since the 1960s. Today, the advent of three-dimensional fabrics brings an innovative solution to the delamination problem that limited the mechanical properties of these composites. Using these fabrics, however, requires the design of better-suited manufacturing processes. A new method for manufacturing composite parts for aerospace applications was studied throughout this work. It applies the principles of flexible injection (the Polyflex process) to the manufacture of thick conical parts. The validation part to be manufactured represents a scale model of a space-engine nozzle component. It consists of a three-dimensional carbon-fibre reinforcement and a phenolic resin. The success of the project is defined by several criteria on the compaction and wrinkling of the reinforcement and on the formation of porosities in the manufactured part. A large number of steps were required before two validation parts could be manufactured. First, to meet the criterion on reinforcement compaction, a characterization tool was designed, and the compaction study was carried out to obtain the information needed to understand the deformation of an axisymmetric 3D reinforcement. Next, the injection principle for the part was defined for this new process. To validate the proposed concepts, the permeability of the fibrous reinforcement and the viscosity of the resin had to be characterized. Using these data, a series of simulations of the flow during injection of the part were performed and an approximation of the filling time was calculated. After this step, the design of the nozzle mold was undertaken, supported by a mechanical simulation of its resistance to the manufacturing conditions. In addition, several tools required for manufacturing were designed and installed in the new CGD (large-scale composites) laboratory. In parallel, several studies were carried out to understand the phenomena influencing the polymerization of the resin.
Human judgment vs. quantitative models for the management of ecological resources.
Holden, Matthew H; Ellner, Stephen P
2016-07-01
Despite major advances in quantitative approaches to natural resource management, there has been resistance to using these tools in the actual practice of managing ecological populations. Given a managed system and a set of assumptions, translated into a model, optimization methods can be used to solve for the most cost-effective management actions. However, when the underlying assumptions are not met, such methods can potentially lead to decisions that harm the environment and economy. Managers who develop decisions based on past experience and judgment, without the aid of mathematical models, can potentially learn about the system and develop flexible management strategies. However, these strategies are often based on subjective criteria and equally invalid and often unstated assumptions. Given the drawbacks of both methods, it is unclear whether simple quantitative models improve environmental decision making over expert opinion. In this study, we explore how well students, using their experience and judgment, manage simulated fishery populations in an online computer game and compare their management outcomes to the performance of model-based decisions. We consider harvest decisions generated using four different quantitative models: (1) the model used to produce the simulated population dynamics observed in the game, with the values of all parameters known (as a control), (2) the same model, but with unknown parameter values that must be estimated during the game from observed data, (3) models that are structurally different from those used to simulate the population dynamics, and (4) a model that ignores age structure. Humans on average performed much worse than the models in cases 1-3, but in a small minority of scenarios, models produced worse outcomes than those resulting from students making decisions based on experience and judgment. When the models ignored age structure, they generated poorly performing management decisions, but still outperformed students using experience and judgment 66% of the time. © 2016 by the Ecological Society of America.
Newman, M C; McCloskey, J T; Tatara, C P
1998-01-01
Ecological risk assessment can be enhanced with predictive models for metal toxicity. Modeling of published data was done under the simplifying assumption that intermetal trends in toxicity reflect relative metal-ligand complex stabilities. This idea has been invoked successfully since 1904 but has yet to be applied widely in quantitative ecotoxicology. Intermetal trends in toxicity were successfully modeled with ion characteristics reflecting metal binding to ligands for a wide range of effects. Most models were useful for predictive purposes based on an F-ratio criterion and cross-validation, but anomalous predictions did occur if speciation was ignored. In general, models for metals with the same valence (i.e., divalent metals) were better than those combining mono-, di-, and trivalent metals. The softness parameter (σp) and the absolute value of the log of the first hydrolysis constant (|log KOH|) were especially useful in model construction. Also, ΔE0 contributed substantially to several of the two-variable models. In contrast, quantitative attempts to predict metal interactions in binary mixtures based on metal-ligand complex stabilities were not successful. PMID:9860900
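As a schematic of the kind of regression the abstract describes, the following fits a two-variable linear model of an intermetal toxicity trend on the softness parameter σp and |log KOH| and computes a leave-one-out cross-validation PRESS; the numeric values are placeholders, not the published data.

```python
# Illustrative ion-characteristic regression for intermetal toxicity trends.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

# rows: metals; columns: [sigma_p, |log K_OH|]; y: log10 of a toxicity endpoint
X = np.array([[0.10, 11.7], [0.13, 10.1], [0.06, 9.0], [0.17, 7.7], [0.08, 4.0]])
y = np.array([-1.2, -0.8, -0.5, 0.3, 1.1])

model = LinearRegression().fit(X, y)
press = -cross_val_score(LinearRegression(), X, y, cv=LeaveOneOut(),
                         scoring="neg_mean_squared_error").sum()   # LOO PRESS
print(model.coef_, model.intercept_, press)
```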
A Checklist for Successful Quantitative Live Cell Imaging in Systems Biology
Sung, Myong-Hee
2013-01-01
Mathematical modeling of signaling and gene regulatory networks has provided unique insights about systems behaviors for many cell biological problems of medical importance. Quantitative single cell monitoring has a crucial role in advancing systems modeling of molecular networks. However, due to the multidisciplinary techniques that are necessary for adaptation of such systems biology approaches, dissemination to a wide research community has been relatively slow. In this essay, I focus on some technical aspects that are often under-appreciated, yet critical in harnessing live cell imaging methods to achieve single-cell-level understanding and quantitative modeling of molecular networks. The importance of these technical considerations will be elaborated with examples of successes and shortcomings. Future efforts will benefit by avoiding some pitfalls and by utilizing the lessons collectively learned from recent applications of imaging in systems biology. PMID:24709701
Modeling the Effect of Polychromatic Light in Quantitative Absorbance Spectroscopy
ERIC Educational Resources Information Center
Smith, Rachel; Cantrell, Kevin
2007-01-01
A laboratory experiment is conducted to give students practical experience with the principles of electronic absorbance spectroscopy. This straightforward approach creates a powerful tool for exploring many aspects of quantitative absorbance spectroscopy.
The linearized multistage model and the future of quantitative risk assessment.
Crump, K S
1996-10-01
The linearized multistage (LMS) model has for over 15 years been the default dose-response model used by the U.S. Environmental Protection Agency (USEPA) and other federal and state regulatory agencies in the United States for calculating quantitative estimates of low-dose carcinogenic risks from animal data. The LMS model is in essence a flexible statistical model that can describe both linear and non-linear dose-response patterns, and that produces an upper confidence bound on the linear low-dose slope of the dose-response curve. Unlike its namesake, the Armitage-Doll multistage model, the parameters of the LMS do not correspond to actual physiological phenomena. Thus the LMS is 'biological' only to the extent that the true biological dose response is linear at low dose and that low-dose slope is reflected in the experimental data. If the true dose response is non-linear the LMS upper bound may overestimate the true risk by many orders of magnitude. However, competing low-dose extrapolation models, including those derived from 'biologically-based models' that are capable of incorporating additional biological information, have not shown evidence to date of being able to produce quantitative estimates of low-dose risks that are any more accurate than those obtained from the LMS model. Further, even if these attempts were successful, the extent to which more accurate estimates of low-dose risks in a test animal species would translate into improved estimates of human risk is questionable. Thus, it does not appear possible at present to develop a quantitative approach that would be generally applicable and that would offer significant improvements upon the crude bounding estimates of the type provided by the LMS model. Draft USEPA guidelines for cancer risk assessment incorporate an approach similar to the LMS for carcinogens having a linear mode of action. However, under these guidelines quantitative estimates of low-dose risks would not be developed for carcinogens having a non-linear mode of action; instead dose-response modelling would be used in the experimental range to calculate an LED10* (a statistical lower bound on the dose corresponding to a 10% increase in risk), and safety factors would be applied to the LED10* to determine acceptable exposure levels for humans. This approach is very similar to the one presently used by USEPA for non-carcinogens. Rather than using one approach for carcinogens believed to have a linear mode of action and a different approach for all other health effects, it is suggested herein that it would be more appropriate to use an approach conceptually similar to the 'LED10*-safety factor' approach for all health effects, and not to routinely develop quantitative risk estimates from animal data.
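For reference, the multistage dose-response form underlying the LMS procedure can be written as follows; the notation is generic rather than taken from the article.

```latex
% Multistage model: probability of response at dose d, with non-negative coefficients
P(d) = 1 - \exp\!\big[-(q_0 + q_1 d + q_2 d^2 + \cdots + q_k d^k)\big], \qquad q_i \ge 0,
% extra risk over background
A(d) = \frac{P(d) - P(0)}{1 - P(0)} = 1 - \exp\!\big[-(q_1 d + \cdots + q_k d^k)\big].
% The "linearized" estimate replaces q_1 by its upper confidence limit q_1^*,
% so that at low dose A(d) \approx q_1^* d.
```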
Study On The Application Of CBERS-02B To Quantitative Soil Erosion Monitoring
NASA Astrophysics Data System (ADS)
Shi, Mingchang; Xu, Jing; Wang, Lei; Wang, Xiaoyun; Mu, Jing
2010-10-01
Currently, the reduction of soil erosion is an important prerequisite for achieving ecological security. Since real-time, quantitative evaluation of regional soil erosion plays a significant role in reducing soil erosion, soil erosion models are more and more widely used. Based on the RUSLE model, this paper carries out quantitative soil erosion monitoring in the Xi River Basin and its surrounding areas by using CBERS-02B CCD, DEM, TRMM and other data, and validates the monitoring results against the remote sensing investigation results of 2005. The monitoring results show that in 2009 the total amount of soil erosion in the study area was 1.94×10⁶ t, the erosion area was 2055.2 km² (54.06% of the total area), and the average soil erosion modulus was 509.7 t·km⁻²·a⁻¹. As a case of using CBERS-02B data for quantitative soil erosion monitoring, this study provides experience on the application of CBERS-02B data in this field and also supports local soil erosion management.
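A grid-based RUSLE calculation of this kind reduces to multiplying factor rasters; the sketch below assumes the factor grids (R from TRMM rainfall, K from soil data, LS from the DEM, C from CBERS-02B CCD imagery, P from land-use practice) have already been derived and stored as hypothetical .npy files, and that cells are 30 m.

```python
# Minimal RUSLE sketch: A = R * K * LS * C * P, then area total and mean modulus.
import numpy as np

R  = np.load("r_factor.npy")    # rainfall erosivity
K  = np.load("k_factor.npy")    # soil erodibility
LS = np.load("ls_factor.npy")   # slope length/steepness from the DEM
C  = np.load("c_factor.npy")    # cover-management from imagery
P  = np.load("p_factor.npy")    # support practice

A = R * K * LS * C * P                       # soil loss modulus per cell (t km^-2 a^-1)
cell_area_km2 = (30.0 / 1000.0) ** 2         # assumed 30 m cells
total_loss_t = np.nansum(A * cell_area_km2)  # total soil loss (t a^-1)
mean_modulus = np.nanmean(A)                 # average erosion modulus
print(total_loss_t, mean_modulus)
```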
NASA Astrophysics Data System (ADS)
Nijzink, R. C.; Samaniego, L.; Mai, J.; Kumar, R.; Thober, S.; Zink, M.; Schäfer, D.; Savenije, H. H. G.; Hrachowitz, M.
2015-12-01
Heterogeneity of landscape features like terrain, soil, and vegetation properties affects the partitioning of water and energy. However, it remains unclear to what extent an explicit representation of this heterogeneity at the sub-grid scale of distributed hydrological models can improve the hydrological consistency and the robustness of such models. In this study, hydrological process complexity arising from sub-grid topography heterogeneity was incorporated in the distributed mesoscale Hydrologic Model (mHM). Seven study catchments across Europe were used to test whether (1) the incorporation of additional sub-grid variability on the basis of landscape-derived response units improves model internal dynamics, (2) the application of semi-quantitative, expert-knowledge-based model constraints reduces model uncertainty, and (3) the combined use of sub-grid response units and model constraints improves the spatial transferability of the model. Unconstrained and constrained versions of both the original mHM and mHMtopo, which allows for topography-based sub-grid heterogeneity, were calibrated for each catchment individually following a multi-objective calibration strategy. In addition, four of the study catchments were simultaneously calibrated and their feasible parameter sets were transferred to the remaining three receiver catchments. In a post-calibration evaluation procedure the probabilities of model and transferability improvement, when accounting for sub-grid variability and/or applying expert-knowledge-based model constraints, were assessed on the basis of a set of hydrological signatures. In terms of the Euclidean distance to the optimal model, used as an overall measure of model performance with respect to the individual signatures, the model improvement achieved by introducing sub-grid heterogeneity to mHM in mHMtopo was on average 13%. The addition of semi-quantitative constraints to mHM and mHMtopo resulted in improvements of 13 and 19% respectively, compared to the base case of the unconstrained mHM. The most significant improvements in signature representation were achieved for low-flow statistics. The application of prior semi-quantitative constraints further improved the partitioning between runoff and evaporative fluxes. In addition, it was shown that suitable semi-quantitative prior constraints in combination with the transfer-function-based regularization approach of mHM can be beneficial for spatial model transferability, as the Euclidean distances for the signatures improved on average by 2%. The effect of semi-quantitative prior constraints combined with topography-guided sub-grid heterogeneity on transferability showed a more variable picture of improvements and deteriorations, but most improvements were observed for low-flow statistics.
A cascading failure model for analyzing railway accident causation
NASA Astrophysics Data System (ADS)
Liu, Jin-Tao; Li, Ke-Ping
2018-01-01
In this paper, a new cascading failure model is proposed for quantitatively analyzing the railway accident causation. In the model, the loads of nodes are redistributed according to the strength of the causal relationships between the nodes. By analyzing the actual situation of the existing prevention measures, a critical threshold of the load parameter in the model is obtained. To verify the effectiveness of the proposed cascading model, simulation experiments of a train collision accident are performed. The results show that the cascading failure model can describe the cascading process of the railway accident more accurately than the previous models, and can quantitatively analyze the sensitivities and the influence of the causes. In conclusion, this model can assist us to reveal the latent rules of accident causation to reduce the occurrence of railway accidents.
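A toy version of such a load-redistribution cascade on a causation network can be written directly; edge weights stand for causal-link strength, node capacity is a tolerance factor times the initial load, and the threshold and network below are illustrative only, not the paper's parameterization.

```python
# Toy cascading-failure sketch on a railway accident causation network.
import networkx as nx

def cascade(g, load, trigger, alpha=1.6):
    """g: DiGraph with edge attribute 'w' (causal strength); capacity = alpha * initial load."""
    load = dict(load)
    cap = {n: alpha * load[n] for n in g}
    failed, frontier = set(), [trigger]
    while frontier:
        n = frontier.pop()
        if n in failed:
            continue
        failed.add(n)
        out = [(m, d.get("w", 1.0)) for m, d in g[n].items() if m not in failed]
        total = sum(w for _, w in out) or 1.0
        for m, w in out:                      # redistribute in proportion to causal strength
            load[m] += load[n] * w / total
            if load[m] > cap[m]:              # overloaded node fails in turn
                frontier.append(m)
    return failed

g = nx.DiGraph()
g.add_weighted_edges_from([("signal fault", "driver error", 0.7),
                           ("signal fault", "dispatch error", 0.3),
                           ("driver error", "collision", 1.0),
                           ("dispatch error", "collision", 1.0)], weight="w")
print(cascade(g, {n: 1.0 for n in g}, trigger="signal fault", alpha=1.3))
```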
Quantitative model validation of manipulative robot systems
NASA Astrophysics Data System (ADS)
Kartowisastro, Iman Herwidiana
This thesis is concerned with applying the distortion quantitative validation technique to a robot manipulator system with revolute joints. Using the distortion technique to validate a model quantitatively, the model parameter uncertainties are taken into account in assessing the faithfulness of the model, and this approach is more objective than the commonly used visual comparison method. The industrial robot is represented by the TQ MA2000 robot arm. Details of the mathematical derivation of the distortion technique are given, explaining the required distortion of the constant parameters within the model and the assessment of model adequacy. Owing to the complexity of a robot model, only the first three degrees of freedom are considered and all links are assumed rigid. The modelling uses the Newton-Euler approach to obtain the dynamics model, and the Denavit-Hartenberg convention is used throughout the work. A conventional feedback control system is used in developing the model. The sensitivity of the system behaviour to parameter changes is investigated because some parameters are redundant; this analysis identifies the most important parameters to distort, leading to the notion of fundamental parameters. The transfer function approach was chosen to validate the industrial robot quantitatively against measured data because of its practicality. Initially, the assessment of the model fidelity criterion indicated that the model could not explain the transient record in terms of the model parameter uncertainties. Further investigation led to significant improvements of the model and a better understanding of its properties. After several improvements, the fidelity criterion was almost satisfied; although it remains slightly less than unity, it has been shown that the distortion technique can be applied to a robot manipulator system. Using the validated model, the importance of the friction terms was highlighted with the aid of the partition control technique. It was also shown that a conventional feedback control scheme is insufficient for a robot manipulator because of the high nonlinearity inherent in the manipulator.
Rapid Quantitative Determination of Squalene in Shark Liver Oils by Raman and IR Spectroscopy.
Hall, David W; Marshall, Susan N; Gordon, Keith C; Killeen, Daniel P
2016-01-01
Squalene is sourced predominantly from shark liver oils and to a lesser extent from plants such as olives. It is used for the production of surfactants, dyes, sunscreen, and cosmetics. The economic value of shark liver oil is directly related to the squalene content, which in turn is highly variable and species-dependent. Presented here is a validated gas chromatography-mass spectrometry analysis method for the quantitation of squalene in shark liver oils, with an accuracy of 99.0%, precision of 0.23% (standard deviation), and linearity of >0.999. The method has been used to measure the squalene concentration of 16 commercial shark liver oils. These reference squalene concentrations were related to infrared (IR) and Raman spectra of the same oils using partial least squares regression. The resultant models were suitable for the rapid quantitation of squalene in shark liver oils, with cross-validation r² values of >0.98 and root mean square errors of validation of ≤4.3% w/w. Independent test set validation of these models found mean absolute deviations of 4.9 and 1.0% w/w for the IR and Raman models, respectively. Both techniques were more accurate than results obtained by an industrial refractive index analysis method, which is used for rapid, cheap quantitation of squalene in shark liver oils. In particular, the Raman partial least squares regression was well suited to quantitative squalene analysis. The intense and highly characteristic Raman bands of squalene made quantitative analysis possible irrespective of the lipid matrix.
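The chemometric step, relating spectra to the GC-MS reference values by PLS regression with cross-validation, can be sketched as follows; the file names, component count and array shapes are assumptions.

```python
# Sketch of PLS calibration of Raman (or IR) spectra against GC-MS squalene values.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

X = np.load("raman_spectra.npy")    # (n_oils, n_wavenumbers), hypothetical file
y = np.load("squalene_gcms.npy")    # reference concentrations, % w/w

pls = PLSRegression(n_components=4)                 # component count chosen by CV in practice
y_cv = cross_val_predict(pls, X, y, cv=LeaveOneOut()).ravel()
rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
r2_cv = 1 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"RMSECV = {rmsecv:.2f} % w/w, cross-validated r^2 = {r2_cv:.3f}")
```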
NASA Astrophysics Data System (ADS)
Laborda, Eduardo; Wang, Yijun; Henstridge, Martin C.; Martínez-Ortiz, Francisco; Molina, Angela; Compton, Richard G.
2011-08-01
The Marcus-Hush and Butler-Volmer kinetic electrode models are compared experimentally by studying the reduction of 2-methyl-2-nitropropane in acetonitrile at mercury microelectrodes using Reverse Scan Square Wave Voltammetry. This technique is found to be very sensitive to the electrode kinetics and to permit critical comparison of the two models. The Butler-Volmer model satisfactorily fits the experimental data whereas Marcus-Hush does not quantitatively describe this redox system.
Chen, Y; Mao, J; Lin, J; Yu, H; Peters, S; Shebley, M
2016-01-01
This subteam under the Drug Metabolism Leadership Group (Innovation and Quality Consortium) investigated the quantitative role of circulating inhibitory metabolites in drug–drug interactions using physiologically based pharmacokinetic (PBPK) modeling. Three drugs with major circulating inhibitory metabolites (amiodarone, gemfibrozil, and sertraline) were systematically evaluated in addition to the literature review of recent examples. The application of PBPK modeling in drug interactions by inhibitory parent–metabolite pairs is described and guidance on strategic application is provided. PMID:27642087
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuldna, Piret, E-mail: piret.kuldna@seit.ee; Peterson, Kaja; Kuhi-Thalfeldt, Reeli
Strategic Environmental Assessment (SEA) serves as a platform for bringing together researchers, policy developers and other stakeholders to evaluate and communicate significant environmental and socio-economic effects of policies, plans and programmes. Quantitative computer models can facilitate knowledge exchange between various parties that strive to use scientific findings to guide policy-making decisions. The process of facilitating knowledge generation and exchange, i.e. knowledge brokerage, has been increasingly explored, but there is not much evidence in the literature on how knowledge brokerage activities are used in full cycles of SEAs which employ quantitative models. We report on the SEA process of the national energy plan with reflections on where and how the Long-range Energy Alternatives Planning (LEAP) model was used for knowledge brokerage on emissions modelling between researchers and policy developers. Our main suggestion is that applying a quantitative model not only in ex ante but also in ex post scenario modelling and associated impact assessment can facilitate a systematic and inspiring knowledge exchange process on a policy problem and capacity building of participating actors. Highlights: • We examine the knowledge brokering on emissions modelling between researchers and policy developers in a full cycle of SEA. • The knowledge exchange process can evolve at any modelling stage within SEA. • Ex post scenario modelling enables systematic knowledge exchange and learning on a policy problem.
How quantitative measures unravel design principles in multi-stage phosphorylation cascades.
Frey, Simone; Millat, Thomas; Hohmann, Stefan; Wolkenhauer, Olaf
2008-09-07
We investigate design principles of linear multi-stage phosphorylation cascades by using quantitative measures for signaling time, signal duration and signal amplitude. We compare alternative pathway structures by varying the number of phosphorylations and the length of the cascade. We show that a model for a weakly activated pathway does not reflect the biological context well, unless it is restricted to certain parameter combinations. Focusing therefore on a more general model, we compare alternative structures with respect to a multivariate optimization criterion. We test the hypothesis that the structure of a linear multi-stage phosphorylation cascade is the result of an optimization process aiming for a fast response, defined by the minimum of the product of signaling time and signal duration. It is then shown that certain pathway structures minimize this criterion. Several popular models of MAPK cascades form the basis of our study. These models represent different levels of approximation, which we compare and discuss with respect to the quantitative measures.
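The three measures can be computed directly from a simulated time course. The sketch below uses the integral definitions commonly attributed to Heinrich and co-workers (an assumption; the paper may define them differently) on a crude two-stage cascade driven by a decaying input.

```python
# Signaling time, duration and amplitude from a toy two-stage phosphorylation cascade.
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

def cascade(t, x, k=1.0, p=0.5, s0=1.0):
    signal = s0 * np.exp(-t)                  # transient upstream input
    dx0 = k * signal * (1 - x[0]) - p * x[0]  # stage 1 phosphorylated fraction
    dx1 = k * x[0] * (1 - x[1]) - p * x[1]    # stage 2 phosphorylated fraction
    return [dx0, dx1]

t = np.linspace(0, 50, 2000)
sol = solve_ivp(cascade, (0, 50), [0.0, 0.0], t_eval=t)
x_out = sol.y[-1]                             # terminal kinase time course

I = trapezoid(x_out, t)                                   # integrated signal
tau = trapezoid(t * x_out, t) / I                         # signaling time
theta = np.sqrt(trapezoid(t**2 * x_out, t) / I - tau**2)  # signal duration
S = I / (2 * theta)                                       # signal amplitude
print(tau, theta, S)
```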
Quantitative Reappraisal of the Helmholtz-Guyton Resonance Theory of Frequency Tuning in the Cochlea
Babbs, Charles F.
2011-01-01
To explore the fundamental biomechanics of sound frequency transduction in the cochlea, a two-dimensional analytical model of the basilar membrane was constructed from first principles. Quantitative analysis showed that axial forces along the membrane are negligible, condensing the problem to a set of ordered one-dimensional models in the radial dimension, for which all parameters can be specified from experimental data. Solutions of the radial models for asymmetrical boundary conditions produce realistic deformation patterns. The resulting second-order differential equations, based on the original concepts of Helmholtz and Guyton, and including viscoelastic restoring forces, predict a frequency map and amplitudes of deflections that are consistent with classical observations. They also predict the effects of an observation hole drilled in the surrounding bone, the effects of curvature of the cochlear spiral, as well as apparent traveling waves under a variety of experimental conditions. A quantitative rendition of the classical Helmholtz-Guyton model captures the essence of cochlear mechanics and unifies the competing resonance and traveling wave theories. PMID:22028708
NASA Astrophysics Data System (ADS)
Ragno, Rino; Ballante, Flavio; Pirolli, Adele; Wickersham, Richard B.; Patsilinakos, Alexandros; Hesse, Stéphanie; Perspicace, Enrico; Kirsch, Gilbert
2015-08-01
Vascular endothelial growth factor receptor-2 (VEGFR-2) is a key element in angiogenesis, the process by which new blood vessels are formed, and is thus an important pharmaceutical target. Here, 3-D quantitative structure-activity relationship (3-D QSAR) methods were used to build quantitative screening and pharmacophore models of the VEGFR-2 receptor for the design of inhibitors with improved activities. Most of the available experimental data were used as a training set to derive eight optimized and fully cross-validated mono-probe models and one multi-probe quantitative model. Notable is the use of 262 molecules, aligned following both structure-based and ligand-based protocols, as an external test set, confirming the 3-D QSAR models' predictive capability and their usefulness in designing new VEGFR-2 inhibitors. From a survey of the literature, this is the first wide-ranging computational medicinal chemistry application to VEGFR-2 inhibitors.
Modeling aeolian dune and dune field evolution
NASA Astrophysics Data System (ADS)
Diniega, Serina
Aeolian sand dune morphologies and sizes are strongly connected to the environmental context and physical processes active since dune formation. As such, the patterns and measurable features found within dunes and dune fields can be interpreted as records of environmental conditions. Using mathematical models of dune and dune field evolution, it should be possible to quantitatively predict dune field dynamics from current conditions or to determine past field conditions based on present-day observations. In this dissertation, we focus on the construction and quantitative analysis of a continuum dune evolution model. We then apply this model towards interpretation of the formative history of terrestrial and martian dunes and dune fields. Our first aim is to identify the controls for the characteristic lengthscales seen in patterned dune fields. Variations in sand flux, binary dune interactions, and topography are evaluated with respect to evolution of individual dunes. Through the use of both quantitative and qualitative multiscale models, these results are then extended to determine the role such processes may play in (de)stabilization of the dune field. We find that sand flux variations and topography generally destabilize dune fields, while dune collisions can yield more similarly-sized dunes. We construct and apply a phenomenological macroscale dune evolution model to then quantitatively demonstrate how dune collisions cause a dune field to evolve into a set of uniformly-sized dunes. Our second goal is to investigate the influence of reversing winds and polar processes in relation to dune slope and morphology. Using numerical experiments, we investigate possible causes of distinctive morphologies seen in Antarctic and martian polar dunes. Finally, we discuss possible model extensions and needed observations that will enable the inclusion of more realistic physical environments in the dune and dune field evolution models. By elucidating the qualitative and quantitative connections between environmental conditions, physical processes, and resultant dune and dune field morphologies, this research furthers our ability to interpret spacecraft images of dune fields, and to use present-day observations to improve our understanding of past terrestrial and martian environments.
Quantitative interpretation of Great Lakes remote sensing data
NASA Technical Reports Server (NTRS)
Shook, D. F.; Salzman, J.; Svehla, R. A.; Gedney, R. T.
1980-01-01
The paper discusses the quantitative interpretation of Great Lakes remote sensing water quality data. Remote sensing using color information must take into account (1) the existence of many different organic and inorganic species throughout the Great Lakes, (2) the occurrence of a mixture of species in most locations, and (3) spatial variations in types and concentration of species. The radiative transfer model provides a potential method for an orderly analysis of remote sensing data and a physical basis for developing quantitative algorithms. Predictions and field measurements of volume reflectances are presented which show the advantage of using a radiative transfer model. Spectral absorptance and backscattering coefficients for two inorganic sediments are reported.
Genetics and child psychiatry: I Advances in quantitative and molecular genetics.
Rutter, M; Silberg, J; O'Connor, T; Simonoff, E
1999-01-01
Advances in quantitative psychiatric genetics as a whole are reviewed with respect to conceptual and methodological issues in relation to statistical model fitting, new genetic designs, twin and adoptee studies, definition of the phenotype, pervasiveness of genetic influences, pervasiveness of environmental influences, shared and nonshared environmental effects, and nature-nurture interplay. Advances in molecular genetics are discussed in relation to the shifts in research strategies to investigate multifactorial disorders (affected relative linkage designs, association strategies, and quantitative trait loci studies); new techniques and identified genetic mechanisms (expansion of trinucleotide repeats, genomic imprinting, mitochondrial DNA, fluorescent in-situ hybridisation, behavioural phenotypes, and animal models); and the successful localisation of genes.
Knight, Jo; North, Bernard V; Sham, Pak C; Curtis, David
2003-01-01
This paper presents a method of performing model-free LOD-score based linkage analysis on quantitative traits. It is implemented in the QMFLINK program. The method is used to perform a genome screen on the Framingham Heart Study data. A number of markers that show some support for linkage in our study coincide substantially with those implicated in other linkage studies of hypertension. Although the new method needs further testing on additional real and simulated data sets we can already say that it is straightforward to apply and may offer a useful complementary approach to previously available methods for the linkage analysis of quantitative traits. PMID:14975142
Modelling default and likelihood reasoning as probabilistic
NASA Technical Reports Server (NTRS)
Buntine, Wray
1990-01-01
A probabilistic analysis of plausible reasoning about defaults and about likelihood is presented. 'Likely' and 'by default' are in fact treated as duals in the same sense as 'possibility' and 'necessity'. To model these four forms probabilistically, a logic QDP and its quantitative counterpart DP are derived that allow qualitative and corresponding quantitative reasoning. Consistency and consequence results for subsets of the logics are given that require at most a quadratic number of satisfiability tests in the underlying propositional logic. The quantitative logic shows how to track the propagation error inherent in these reasoning forms. The methodology and sound framework of the system highlights their approximate nature, the dualities, and the need for complementary reasoning about relevance.
NASA Astrophysics Data System (ADS)
Ito, Reika; Yoshidome, Takashi
2018-01-01
Markov state models (MSMs) are a powerful approach for analyzing the long-time behaviors of protein motion using molecular dynamics simulation data. However, their quantitative performance with respect to the physical quantities is poor. We believe that this poor performance is caused by the failure to appropriately classify protein conformations into states when constructing MSMs. Herein, we show that the quantitative performance of an order parameter is improved when a manifold-learning technique is employed for the classification in the MSM. The MSM construction using the K-center method, which has been previously used for classification, has a poor quantitative performance.
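The MSM pipeline the abstract refers to (classify frames into states, count lag-time transitions, read off slow timescales from the transition matrix) can be sketched generically; here k-means stands in for the classification step, whereas the paper compares K-center clustering with a manifold-learning embedding, and the state count and lag are arbitrary.

```python
# Generic MSM construction sketch from featurized MD frames.
import numpy as np
from sklearn.cluster import KMeans

def build_msm(features, n_states=50, lag=10):
    labels = KMeans(n_clusters=n_states, n_init=10, random_state=0).fit_predict(features)
    counts = np.zeros((n_states, n_states))
    for a, b in zip(labels[:-lag], labels[lag:]):    # transitions at the chosen lag time
        counts[a, b] += 1
    T = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)   # row-stochastic matrix
    eigvals = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
    timescales = -lag / np.log(np.clip(eigvals[1:], 1e-12, 1 - 1e-12))  # implied timescales
    return T, timescales
```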
Oxidative DNA damage background estimated by a system model of base excision repair
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sokhansanj, B A; Wilson, III, D M
Human DNA can be damaged by natural metabolism through free radical production. It has been suggested that the equilibrium between innate damage and cellular DNA repair results in an oxidative DNA damage background that potentially contributes to disease and aging. Efforts to quantitatively characterize the human oxidative DNA damage background level based on measuring 8-oxoguanine lesions as a biomarker have led to estimates varying over 3-4 orders of magnitude, depending on the method of measurement. We applied a previously developed and validated quantitative pathway model of human DNA base excision repair, integrating experimentally determined endogenous damage rates and model parameters from multiple sources. Our estimates of at most 100 8-oxoguanine lesions per cell are consistent with the low end of data from biochemical and cell biology experiments, a result robust to model limitations and parameter variation. Our results show the power of quantitative system modeling to interpret composite experimental data and make biologically and physiologically relevant predictions for complex human DNA repair pathway mechanisms and capacity.
Caballero, Julio; Fernández, Michael; Coll, Deysma
2010-12-01
Three-dimensional quantitative structure-activity relationship studies were carried out on a series of 28 organosulphur compounds as 15-lipoxygenase inhibitors using comparative molecular field analysis and comparative molecular similarity indices analysis. Quantitative information on structure-activity relationships is provided for further rational development and direction of selective synthesis. All models were built over a training set of 22 compounds. The best comparative molecular field analysis model included only the steric field and had a good Q² = 0.789. Comparative molecular similarity indices analysis outperformed the comparative molecular field analysis results: the best comparative molecular similarity indices analysis model also included only the steric field and had a Q² = 0.894. In addition, this model adequately predicted the compounds contained in the test set. Furthermore, plots of the steric comparative molecular similarity indices analysis field allowed conclusions to be drawn for the choice of suitable inhibitors. In this sense, our model should prove useful in future 15-lipoxygenase inhibitor design studies. © 2010 John Wiley & Sons A/S.
NASA Astrophysics Data System (ADS)
Cai, Tao; Guo, Songtao; Li, Yongzeng; Peng, Di; Zhao, Xiaofeng; Liu, Yingzheng
2018-04-01
The mechanoluminescent (ML) sensor is a newly developed non-invasive technique for stress/strain measurement. However, its application has been mostly restricted to qualitative measurement due to the lack of a well-defined relationship between ML intensity and stress. To achieve accurate stress measurement, an intensity ratio model was proposed in this study to establish a quantitative relationship between the stress condition and its ML intensity in elastic deformation. To verify the proposed model, experiments were carried out on a ML measurement system using resin samples mixed with the sensor material SrAl2O4:Eu2+, Dy3+. The ML intensity ratio was found to be dependent on the applied stress and strain rate, and the relationship acquired from the experimental results agreed well with the proposed model. The current study provided a physical explanation for the relationship between ML intensity and its stress condition. The proposed model was applicable in various SrAl2O4:Eu2+, Dy3+-based ML measurement in elastic deformation, and could provide a useful reference for quantitative stress measurement using the ML sensor in general.
How to make predictions about future infectious disease risks
Woolhouse, Mark
2011-01-01
Formal, quantitative approaches are now widely used to make predictions about the likelihood of an infectious disease outbreak, how the disease will spread, and how to control it. Several well-established methodologies are available, including risk factor analysis, risk modelling and dynamic modelling. Even so, predictive modelling is very much the ‘art of the possible’, which tends to drive research effort towards some areas and away from others which may be at least as important. Building on the undoubted success of quantitative modelling of the epidemiology and control of human and animal diseases such as AIDS, influenza, foot-and-mouth disease and BSE, attention needs to be paid to developing a more holistic framework that captures the role of the underlying drivers of disease risks, from demography and behaviour to land use and climate change. At the same time, there is still considerable room for improvement in how quantitative analyses and their outputs are communicated to policy makers and other stakeholders. A starting point would be generally accepted guidelines for ‘good practice’ for the development and the use of predictive models. PMID:21624924
Chen, Ran; Riviere, Jim E
2017-01-01
Quantitative analysis of the interactions between nanomaterials and their surrounding environment is crucial for safety evaluation in the application of nanotechnology as well as for its development and standardization. In this chapter, we demonstrate the importance of the adsorption of surrounding molecules onto the surface of nanomaterials, which form a biocorona and thus shape the bio-identity and fate of those materials. We illustrate the key factors, including the various physical forces, that determine the interactions occurring at bio-nano interfaces. We further discuss mathematical efforts to explain and predict these adsorption phenomena, and propose a new statistics-based surface adsorption model, the Biological Surface Adsorption Index (BSAI), to quantitatively analyze the surface adsorption profiles of a large group of small organic molecules onto nanomaterials with varying surface physicochemical properties, first employing five descriptors representing the surface energy profile of the nanomaterials and then incorporating traditional semi-empirical adsorption models to address concentration effects of solutes. These advancements in surface adsorption modelling represent a promising development in the application of quantitative predictive models in biological applications, nanomedicine, and the environmental safety assessment of nanomaterials.
EnviroLand: A Simple Computer Program for Quantitative Stream Assessment.
ERIC Educational Resources Information Center
Dunnivant, Frank; Danowski, Dan; Timmens-Haroldson, Alice; Newman, Meredith
2002-01-01
Introduces the Enviroland computer program which features lab simulations of theoretical calculations for quantitative analysis and environmental chemistry, and fate and transport models. Uses the program to demonstrate the nature of linear and nonlinear equations. (Author/YDS)
Quantitative Evaluation of a Planetary Renderer for Terrain Relative Navigation
NASA Astrophysics Data System (ADS)
Amoroso, E.; Jones, H.; Otten, N.; Wettergreen, D.; Whittaker, W.
2016-11-01
A ray-tracing computer renderer tool is presented based on LOLA and LROC elevation models and is quantitatively compared to LRO WAC and NAC images for photometric accuracy. We investigated using rendered images for terrain relative navigation.
Modeling the Learner in Computer-Assisted Instruction
ERIC Educational Resources Information Center
Fletcher, J. D.
1975-01-01
This paper briefly reviews relevant work in four areas: 1) quantitative models of memory; 2) regression models of performance; 3) automation models of performance; and 4) artificial intelligence. (Author/HB)
Li, Wen-bing; Yao, Lin-tao; Liu, Mu-hua; Huang, Lin; Yao, Ming-yin; Chen, Tian-bing; He, Xiu-wen; Yang, Ping; Hu, Hui-qin; Nie, Jiang-hui
2015-05-01
Cu in navel orange was detected rapidly by laser-induced breakdown spectroscopy (LIBS) combined with partial least squares (PLS) quantitative analysis, and the effect of different spectral data pretreatment methods on the detection accuracy of the model was explored. Spectral data for the 52 Gannan navel orange samples were pretreated by data smoothing, mean centering and standard normal variate transformation. The 319-338 nm wavelength section containing characteristic spectral lines of Cu was then selected to build PLS models, and the main evaluation indexes of the models, such as the regression coefficient (r), the root mean square error of cross validation (RMSECV) and the root mean square error of prediction (RMSEP), were compared and analyzed. The three indicators of the PLS model built after 13-point smoothing and mean centering reached 0.9928, 3.43 and 3.4 respectively, and the average relative error of the prediction model was only 5.55%; overall, this model gave the best calibration and prediction quality. The results show that selecting an appropriate data pre-processing method can effectively improve the prediction accuracy of LIBS-PLS quantitative models for fruits and vegetables, providing a new method for their fast and accurate detection by LIBS.
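The pretreatment-plus-PLS pipeline can be sketched as below; a 13-point moving average stands in for the smoothing step (the exact smoother is not specified in the abstract), and the file names and component count are assumptions.

```python
# Sketch: smooth spectra, restrict to the 319-338 nm Cu window, mean-center, fit PLS.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def smooth(spectra, window=13):
    kernel = np.ones(window) / window
    return np.apply_along_axis(lambda s: np.convolve(s, kernel, mode="same"), 1, spectra)

X = np.load("libs_spectra.npy")      # (n_samples, n_channels), hypothetical file
wl = np.load("wavelengths.npy")      # wavelength axis in nm
y = np.load("cu_reference.npy")      # reference Cu content

band = (wl >= 319) & (wl <= 338)     # window containing Cu characteristic lines
Xp = smooth(X)[:, band]
Xp -= Xp.mean(axis=0)                # mean centering
pls = PLSRegression(n_components=5).fit(Xp, y)
```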
NASA Astrophysics Data System (ADS)
Voloshin, A. E.; Prostomolotov, A. I.; Verezub, N. A.
2016-11-01
The paper deals with the analysis of the accuracy of some one-dimensional (1D) analytical models of the axial distribution of impurities in a crystal grown from a melt. The models proposed by Burton-Prim-Slichter, Ostrogorsky-Muller and Garandet with co-authors are considered and compared to the results of a two-dimensional (2D) numerical simulation. Stationary solutions as well as solutions for the initial transient regime obtained using these models are considered. The sources of error are analyzed, and a conclusion is drawn about the applicability of 1D analytical models for quantitative estimates of impurity incorporation into the crystal as well as for the solution of inverse problems.
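For orientation, the Burton-Prim-Slichter result referred to above is usually quoted in the following standard form (stated here for reference; the Ostrogorsky-Muller and Garandet models differ mainly in how the boundary-layer parameter is obtained).

```latex
% Effective segregation coefficient (V: growth rate, \delta: solute boundary-layer
% thickness, D: impurity diffusivity in the melt, k_0: equilibrium coefficient):
k_{\mathrm{eff}} = \frac{k_0}{k_0 + (1 - k_0)\, e^{-V\delta/D}}
% With complete mixing outside the boundary layer, the axial profile in the
% solidified fraction g follows the Scheil-type expression
C_s(g) = k_{\mathrm{eff}}\, C_0\, (1 - g)^{\,k_{\mathrm{eff}} - 1}.
```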
NASA Astrophysics Data System (ADS)
Qi, Pan; Shao, Wenbin; Liao, Shusheng
2016-02-01
For quantitative defect detection on heat transfer tubes in nuclear power plants (NPP), two parts of work were carried out with cracks as the main research object. (1) Optimization of calibration tube production. First, ASME, RSEM and homemade crack calibration tubes were applied to quantitatively analyze the defect depths on other designed crack test tubes, and the crack calibration tube giving the more accurate quantitative results was identified. On this basis, a weight analysis of the factors influencing crack depth quantification, such as crack orientation, length and volume, was undertaken, which will help optimize the manufacturing technology of calibration tubes. (2) Optimization of crack depth quantification. A neural network model with multiple calibration curves, adopted to optimize the depth estimates of natural cracks generated in in-service tubes, shows a preliminary ability to improve quantitative accuracy.
Guide on the Effective Block Approach for the Fatigue Life Assessment of Metallic Structures
2013-01-01
NDI: Non-Destructive Inspection; QF: Quantitative Fractography; RAAF: Royal Australian Air Force. LEFM forms the basis of most state-of-the-art crack growth (CG) models. The preferred method for obtaining crack growth rate (CGR) data is quantitative fractography (QF), which is well suited to small cracks.
Reverse engineering systems models of regulation: discovery, prediction and mechanisms.
Ashworth, Justin; Wurtmann, Elisabeth J; Baliga, Nitin S
2012-08-01
Biological systems can now be understood in comprehensive and quantitative detail using systems biology approaches. Putative genome-scale models can be built rapidly based upon biological inventories and strategic system-wide molecular measurements. Current models combine statistical associations, causative abstractions, and known molecular mechanisms to explain and predict quantitative and complex phenotypes. This top-down 'reverse engineering' approach generates useful organism-scale models despite noise and incompleteness in data and knowledge. Here we review and discuss the reverse engineering of biological systems using top-down data-driven approaches, in order to improve discovery, hypothesis generation, and the inference of biological properties. Copyright © 2011 Elsevier Ltd. All rights reserved.
Computer simulation of the metastatic progression.
Wedemann, Gero; Bethge, Anja; Haustein, Volker; Schumacher, Udo
2014-01-01
A novel computer model based on a discrete event simulation procedure describes quantitatively the processes underlying the metastatic cascade. Analytical functions describe the size of the primary tumor and the metastases, while a rate function models the intravasation events of the primary tumor and metastases. Events describe the behavior of the malignant cells until the formation of new metastases. The results of the computer simulations are in quantitative agreement with clinical data determined from a patient with hepatocellular carcinoma in the liver. The model provides a more detailed view on the process than a conventional mathematical model. In particular, the implications of interventions on metastasis formation can be calculated.
NASA Astrophysics Data System (ADS)
Kartalov, Emil P.; Scherer, Axel; Quake, Stephen R.; Taylor, Clive R.; Anderson, W. French
2007-03-01
A systematic experimental study and theoretical modeling of the device physics of polydimethylsiloxane "pushdown" microfluidic valves are presented. The phase space is charted by 1587 dimension combinations and encompasses 45-295 μm lateral dimensions, 16-39 μm membrane thickness, and 1-28 psi closing pressure. Three linear models are developed and tested against the empirical data, and then combined into a fourth-power-polynomial superposition. The experimentally validated final model offers a useful quantitative prediction for a valve's properties as a function of its dimensions. Typical valves (80-150 μm width) are shown to behave like thin springs.
NASA Astrophysics Data System (ADS)
Chen, Hui; Tan, Chao; Lin, Zan; Wu, Tong
2018-01-01
Milk is among the most popular nutrient sources worldwide and is of great interest due to its beneficial medicinal properties. The feasibility of classifying milk powder samples with respect to their brands and of determining protein concentration is investigated by NIR spectroscopy along with chemometrics. Two datasets were prepared for the experiments: one contains 179 samples of four brands for classification and the other contains 30 samples for quantitative analysis. Principal component analysis (PCA) was used for exploratory analysis. Based on an effective model-independent variable selection method, minimal-redundancy maximal-relevance (MRMR), only 18 variables were selected to construct a partial least-squares discriminant analysis (PLS-DA) model. On the test set, the PLS-DA model based on the selected variable set was compared with the full-spectrum PLS-DA model, and both achieved 100% accuracy. In quantitative analysis, the partial least-squares regression (PLSR) model constructed from the selected subset of 260 variables significantly outperforms the full-spectrum model. The combination of NIR spectroscopy, MRMR and PLS-DA or PLSR appears to be a powerful tool for classifying different brands of milk and determining the protein content.
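A rough sketch of the selection-then-classification pipeline follows; a simple mutual-information relevance/redundancy ranking stands in for MRMR, PLS-DA is implemented as PLS regression on one-hot labels, and the file names and component count are assumptions (only the 18-variable count comes from the abstract).

```python
# Sketch: MRMR-like variable selection followed by PLS-DA on NIR milk spectra.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.cross_decomposition import PLSRegression

def mrmr_like(X, y, k=18):
    relevance = mutual_info_classif(X, y, random_state=0)
    corr = np.abs(np.corrcoef(X.T))           # feature-feature correlation, computed once
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        redundancy = corr[:, selected].mean(axis=1)
        score = relevance - redundancy        # maximal relevance, minimal redundancy
        score[selected] = -np.inf
        selected.append(int(np.argmax(score)))
    return selected

X = np.load("nir_milk_spectra.npy")           # hypothetical spectra matrix
y = np.load("brand_labels.npy")               # integer brand codes 0..3

cols = mrmr_like(X, y, k=18)
Y = np.eye(int(y.max()) + 1)[y]               # one-hot labels for PLS-DA
plsda = PLSRegression(n_components=3).fit(X[:, cols], Y)
pred = plsda.predict(X[:, cols]).argmax(axis=1)   # predicted brand = largest column
```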
NASA Astrophysics Data System (ADS)
Shevade, Abhijit V.; Ryan, Margaret A.; Homer, Margie L.; Zhou, Hanying; Manfreda, Allison M.; Lara, Liana M.; Yen, Shiao-Pin S.; Jewell, April D.; Manatt, Kenneth S.; Kisor, Adam K.
We have developed a Quantitative Structure-Activity Relationships (QSAR) based approach to correlate the response of chemical sensors in an array with molecular descriptors. A novel molecular descriptor set has been developed; this set combines descriptors of sensing film-analyte interactions, representing sensor response, with a basic analyte descriptor set commonly used in QSAR studies. The descriptors are obtained using a combination of molecular modeling tools and empirical and semi-empirical Quantitative Structure-Property Relationships (QSPR) methods. The sensors under investigation are polymer-carbon sensing films which have been exposed to analyte vapors at parts-per-million (ppm) concentrations; response is measured as change in film resistance. Statistically validated QSAR models have been developed using Genetic Function Approximations (GFA) for a sensor array for a given training data set. The applicability of the sensor response models has been tested by using it to predict the sensor activities for test analytes not considered in the training set for the model development. The validated QSAR sensor response models show good predictive ability. The QSAR approach is a promising computational tool for sensing materials evaluation and selection. It can also be used to predict response of an existing sensing film to new target analytes.
Taradolsirithitikul, Panchita; Sirisomboon, Panmanas; Dachoupakan Sirisomboon, Cheewanun
2017-03-01
Ochratoxin A (OTA) contamination is highly prevalent in a variety of agricultural products including the commercially important coffee bean. As such, rapid and accurate detection methods are considered necessary for the identification of OTA in green coffee beans. The goal of this research was to apply Fourier transform near infrared spectroscopy to detect and classify OTA contamination in green coffee beans in both a quantitative and qualitative manner. PLSR models were generated using pretreated spectroscopic data to predict the OTA concentration. The best model displayed a correlation coefficient (r) of 0.814 and a standard error of prediction (SEP) and bias of 1.965 µg kg⁻¹ and 0.358 µg kg⁻¹, respectively. Additionally, a PLS-DA model was also generated, displaying a classification accuracy of 96.83% for a non-OTA contaminated model and 80.95% for an OTA contaminated model, with an overall classification accuracy of 88.89%. The results demonstrate that the developed model could be used for detecting OTA contamination in green coffee beans in either a quantitative or qualitative manner. © 2016 Society of Chemical Industry.
Naik, P K; Singh, T; Singh, H
2009-07-01
Quantitative structure-activity relationship (QSAR) analyses were performed independently on data sets belonging to two groups of insecticides, namely the organophosphates and carbamates. Several types of descriptors including topological, spatial, thermodynamic, information content, lead likeness and E-state indices were used to derive quantitative relationships between insecticide activities and structural properties of chemicals. A systematic search approach based on missing value, zero value, simple correlation and multi-collinearity tests as well as the use of a genetic algorithm allowed the optimal selection of the descriptors used to generate the models. The QSAR models developed for both organophosphate and carbamate groups revealed good predictability with r² values of 0.949 and 0.838 as well as [image omitted] values of 0.890 and 0.765, respectively. In addition, a linear correlation was observed between the predicted and experimental LD50 values for the test set data with r² of 0.871 and 0.788 for the organophosphate and carbamate groups, indicating that the prediction accuracy of the QSAR models was acceptable. The models were also tested successfully against external validation criteria. QSAR models developed in this study should help further design of novel potent insecticides.
Kaddi, Chanchala D; Niesner, Bradley; Baek, Rena; Jasper, Paul; Pappas, John; Tolsma, John; Li, Jing; van Rijn, Zachary; Tao, Mengdi; Ortemann-Renon, Catherine; Easton, Rachael; Tan, Sharon; Puga, Ana Cristina; Schuchman, Edward H; Barrett, Jeffrey S; Azer, Karim
2018-06-19
Acid sphingomyelinase deficiency (ASMD) is a rare lysosomal storage disorder with heterogeneous clinical manifestations, including hepatosplenomegaly and infiltrative pulmonary disease, and is associated with significant morbidity and mortality. Olipudase alfa (recombinant human acid sphingomyelinase) is an enzyme replacement therapy under development for the non-neurological manifestations of ASMD. We present a quantitative systems pharmacology (QSP) model supporting the clinical development of olipudase alfa. The model is multiscale and mechanistic, linking the enzymatic deficiency driving the disease to molecular-level, cellular-level, and organ-level effects. Model development was informed by natural history, and preclinical and clinical studies. By considering patient-specific pharmacokinetic (PK) profiles and indicators of disease severity, the model describes pharmacodynamic (PD) and clinical end points for individual patients. The ASMD QSP model provides a platform for quantitatively assessing systemic pharmacological effects in adult and pediatric patients, and explaining variability within and across these patient populations, thereby supporting the extrapolation of treatment response from adults to pediatrics. © 2018 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
Highlights from High Energy Neutrino Experiments at CERN
NASA Astrophysics Data System (ADS)
Schlatter, W.-D.
2015-07-01
Experiments with high energy neutrino beams at CERN provided early quantitative tests of the Standard Model. This article describes results from studies of the nucleon quark structure and of the weak current, together with the precise measurement of the weak mixing angle. These results have established a new quality for tests of the electroweak model. In addition, the measurements of the nucleon structure functions in deep inelastic neutrino scattering allowed first quantitative tests of QCD.
A Quantitative Model of Expert Transcription Typing
1993-03-08
side of pure psychology, several researchers have argued that transcription typing is a particularly good activity for the study of human skilled...phenomenon with a quantitative METT prediction. The first, quick and dirty analysis gives a good prediction of the copy span, in fact, it is even...typing, it should be demonstrated that the mechanism of the model does not get in the way of good predictions. If situations occur where the entire
Monte Carlo modeling of light-tissue interactions in narrow band imaging.
Le, Du V N; Wang, Quanzeng; Ramella-Roman, Jessica C; Pfefer, T Joshua
2013-01-01
Light-tissue interactions that influence vascular contrast enhancement in narrow band imaging (NBI) have not been the subject of extensive theoretical study. In order to elucidate relevant mechanisms in a systematic and quantitative manner, we have developed and validated a Monte Carlo model of NBI and used it to study the effect of device and tissue parameters, specifically, imaging wavelength (415 versus 540 nm) and vessel diameter and depth. Simulations provided quantitative predictions of contrast, including up to 125% improvement in small, superficial vessel contrast for 415 nm over 540 nm. Our findings indicated that absorption, rather than scattering (the mechanism often cited in prior studies), was the dominant factor behind spectral variations in vessel depth-selectivity. Narrow-band images of a tissue-simulating phantom showed good agreement with the simulations in terms of trends and quantitative values. Numerical modeling represents a powerful tool for elucidating the factors that affect the performance of spectral imaging approaches such as NBI.
Tannin structural elucidation and quantitative ³¹P NMR analysis. 1. Model compounds.
Melone, Federica; Saladino, Raffaele; Lange, Heiko; Crestini, Claudia
2013-10-02
Tannins and flavonoids are secondary metabolites of plants that display a wide array of biological activities. This peculiarity is related to the inhibition of extracellular enzymes that occurs through the complexation of peptides by tannins. Not only the nature of these interactions, but more fundamentally also the structure of these heterogeneous polyphenolic molecules are not completely clear. This first paper describes the development of a new analytical method for the structural characterization of tannins on the basis of tannin model compounds employing an in situ labeling of all labile H groups (aliphatic OH, phenolic OH, and carboxylic acids) with a phosphorus reagent. The ³¹P NMR analysis of ³¹P-labeled samples allowed the unprecedented quantitative and qualitative structural characterization of hydrolyzable tannins, proanthocyanidins, and catechin tannin model compounds, forming the foundations for the quantitative structural elucidation of a variety of actual tannin samples described in part 2 of this series.
Parkin, Jason R; Beaujean, A Alexander
2012-02-01
This study used structural equation modeling to examine the effect of Stratum III (i.e., general intelligence) and Stratum II (i.e., Comprehension-Knowledge, Fluid Reasoning, Short-Term Memory, Processing Speed, and Visual Processing) factors of the Cattell-Horn-Carroll (CHC) cognitive abilities, as operationalized by the Wechsler Intelligence Scale for Children, Fourth Edition (WISC-IV; Wechsler, 2003a) subtests, on Quantitative Knowledge, as operationalized by the Wechsler Individual Achievement Test, Second Edition (WIAT-II; Wechsler, 2002) subtests. Participants came from the WISC-IV/WIAT-II linking sample (n=550). We compared models that predicted Quantitative Knowledge using only Stratum III factors, only Stratum II factors, and both Stratum III and Stratum II factors. Results indicated that the model with only the Stratum III factor predicting Quantitative Knowledge best fit the data. Copyright © 2011 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
Getting quantitative about consequences of cross-ecosystem resource subsidies on recipient consumers
Richardson, John S.; Wipfli, Mark S.
2016-01-01
Most studies of cross-ecosystem resource subsidies have demonstrated positive effects on recipient consumer populations, often with very large effect sizes. However, it is important to move beyond these initial addition–exclusion experiments to consider the quantitative consequences for populations across gradients in the rates and quality of resource inputs. In our introduction to this special issue, we describe at least four potential models of the functional relationships between subsidy input rates and consumer responses, most of them asymptotic. Here we aim to advance our quantitative understanding of how subsidy inputs influence recipient consumers and their communities. In the papers that follow, fish were either the recipient consumers or, as carcasses of anadromous species, the subsidy itself. Advancing general, predictive models will enable us to further consider what other factors are potentially co-limiting (e.g., nutrients, other population interactions, physical habitat, etc.) and better integrate resource subsidies into consumer–resource, biophysical dynamics models.
Quantitative determination of Auramine O by terahertz spectroscopy with 2DCOS-PLSR model
NASA Astrophysics Data System (ADS)
Zhang, Huo; Li, Zhi; Chen, Tao; Qin, Binyi
2017-09-01
Residues of harmful dyes such as Auramine O (AO) in herb and food products threaten human health, so fast and sensitive detection techniques for these residues are needed. As a powerful tool for substance detection, terahertz (THz) spectroscopy was used in this paper for the quantitative determination of AO in combination with an improved partial least-squares regression (PLSR) model. Absorbance of herbal samples with different concentrations was obtained by THz-TDS in the band between 0.2 THz and 1.6 THz. We applied two-dimensional correlation spectroscopy (2DCOS) to improve the PLSR model. This method highlighted the spectral differences between concentrations, provided a clear criterion for input interval selection, and improved the accuracy of the detection results. The experimental results indicated that the combination of THz spectroscopy and 2DCOS-PLSR is an excellent quantitative analysis method.
Modeling conflict : research methods, quantitative modeling, and lessons learned.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rexroth, Paul E.; Malczynski, Leonard A.; Hendrickson, Gerald A.
2004-09-01
This study investigates the factors that lead countries into conflict. Specifically, political, social and economic factors may offer insight as to how prone a country (or set of countries) may be for inter-country or intra-country conflict. Largely methodological in scope, this study examines the literature for quantitative models that address or attempt to model conflict both in the past, and for future insight. The analysis concentrates specifically on the system dynamics paradigm, not the political science mainstream approaches of econometrics and game theory. The application of this paradigm builds upon the most sophisticated attempt at modeling conflict as a result of system-level interactions. This study presents the modeling efforts built on limited data and working literature paradigms, and recommendations for future attempts at modeling conflict.
A quantitative visual dashboard to explore exposures to consumer product ingredients
The Exposure Prioritization (Ex Priori) model features a simplified, quantitative visual dashboard to explore exposures across chemical space. Diverse data streams are integrated within the interface such that different exposure scenarios for “individual,” “pop...
Wignall, Jessica A; Muratov, Eugene; Sedykh, Alexander; Guyton, Kathryn Z; Tropsha, Alexander; Rusyn, Ivan; Chiu, Weihsueh A
2018-05-01
Human health assessments synthesize human, animal, and mechanistic data to produce toxicity values that are key inputs to risk-based decision making. Traditional assessments are data-, time-, and resource-intensive, and they cannot be developed for most environmental chemicals owing to a lack of appropriate data. As recommended by the National Research Council, we propose a solution for predicting toxicity values for data-poor chemicals through development of quantitative structure-activity relationship (QSAR) models. We used a comprehensive database of chemicals with existing regulatory toxicity values from U.S. federal and state agencies to develop quantitative QSAR models. We compared QSAR-based model predictions to those based on high-throughput screening (HTS) assays. QSAR models for noncancer threshold-based values and cancer slope factors had cross-validation-based Q² of 0.25-0.45, mean model errors of 0.70-1.11 log10 units, and applicability domains covering >80% of environmental chemicals. Toxicity values predicted from QSAR models developed in this study were more accurate and precise than those based on HTS assays or mean-based predictions. A publicly accessible web interface to make predictions for any chemical of interest is available at http://toxvalue.org. An in silico tool that can predict toxicity values with an uncertainty of an order of magnitude or less can be used to quickly and quantitatively assess risks of environmental chemicals when traditional toxicity data or human health assessments are unavailable. This tool can fill a critical gap in the risk assessment and management of data-poor chemicals. https://doi.org/10.1289/EHP2998.
Morris, Melody K.; Saez-Rodriguez, Julio; Clarke, David C.; Sorger, Peter K.; Lauffenburger, Douglas A.
2011-01-01
Predictive understanding of cell signaling network operation based on general prior knowledge but consistent with empirical data in a specific environmental context is a current challenge in computational biology. Recent work has demonstrated that Boolean logic can be used to create context-specific network models by training proteomic pathway maps to dedicated biochemical data; however, the Boolean formalism is restricted to characterizing protein species as either fully active or inactive. To advance beyond this limitation, we propose a novel form of fuzzy logic sufficiently flexible to model quantitative data but also sufficiently simple to efficiently construct models by training pathway maps on dedicated experimental measurements. Our new approach, termed constrained fuzzy logic (cFL), converts a prior knowledge network (obtained from literature or interactome databases) into a computable model that describes graded values of protein activation across multiple pathways. We train a cFL-converted network to experimental data describing hepatocytic protein activation by inflammatory cytokines and demonstrate the application of the resultant trained models for three important purposes: (a) generating experimentally testable biological hypotheses concerning pathway crosstalk, (b) establishing capability for quantitative prediction of protein activity, and (c) prediction and understanding of the cytokine release phenotypic response. Our methodology systematically and quantitatively trains a protein pathway map summarizing curated literature to context-specific biochemical data. This process generates a computable model yielding successful prediction of new test data and offering biological insight into complex datasets that are difficult to fully analyze by intuition alone. PMID:21408212
Bayram, Jamil D; Zuabi, Shawki; Subbarao, Italo
2011-06-01
Hospital surge capacity in multiple casualty events (MCE) is the core of hospital medical response, and an integral part of the total medical capacity of the community affected. To date, however, there has been no consensus regarding the definition or quantification of hospital surge capacity. The first objective of this study was to quantitatively benchmark the various components of hospital surge capacity pertaining to the care of critically and moderately injured patients in trauma-related MCE. The second objective was to illustrate the applications of those quantitative parameters in local, regional, national, and international disaster planning; in the distribution of patients to various hospitals by prehospital medical services; and in the decision-making process for ambulance diversion. A 2-step approach was adopted in the methodology of this study. First, an extensive literature search was performed, followed by mathematical modeling. Quantitative studies on hospital surge capacity for trauma injuries were used as the framework for our model. The North Atlantic Treaty Organization triage categories (T1-T4) were used in the modeling process for simplicity purposes. Hospital Acute Care Surge Capacity (HACSC) was defined as the maximum number of critical (T1) and moderate (T2) casualties a hospital can adequately care for per hour, after recruiting all possible additional medical assets. HACSC was modeled to be equal to the number of emergency department beds (#EDB), divided by the emergency department time (EDT); HACSC = #EDB/EDT. In trauma-related MCE, the EDT was quantitatively benchmarked to be 2.5 (hours). Because most of the critical and moderate casualties arrive at hospitals within a 6-hour period requiring admission (by definition), the hospital bed surge capacity must match the HACSC at 6 hours to ensure coordinated care, and it was mathematically benchmarked to be 18% of the staffed hospital bed capacity. Defining and quantitatively benchmarking the different components of hospital surge capacity is vital to hospital preparedness in MCE. Prospective studies of our mathematical model are needed to verify its applicability, generalizability, and validity.
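To make the benchmarked relationships concrete, the short sketch below applies HACSC = #EDB/EDT with the quoted EDT of 2.5 hours and the 18% bed surge benchmark to a hypothetical hospital; the bed counts are illustrative, not from the study.

```python
# Illustrative application of the surge-capacity benchmarks quoted above.
# The hospital figures (40 ED beds, 500 staffed beds) are hypothetical.
ed_beds = 40                       # #EDB: emergency department beds
edt_hours = 2.5                    # benchmarked ED time per T1/T2 casualty
staffed_beds = 500                 # staffed hospital bed capacity

hacsc = ed_beds / edt_hours        # T1 + T2 casualties manageable per hour
bed_surge = 0.18 * staffed_beds    # beds needed to absorb roughly 6 hours of admissions

print(f"HACSC: {hacsc:.0f} casualties per hour")
print(f"Bed surge capacity: {bed_surge:.0f} beds")
```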
Health impact assessment – A survey on quantifying tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fehr, Rainer, E-mail: rainer.fehr@uni-bielefeld.de; Mekel, Odile C.L., E-mail: odile.mekel@lzg.nrw.de; Fintan Hurley, J., E-mail: fintan.hurley@iom-world.org
Integrating human health into prospective impact assessments is known to be challenging. This is true for both approaches: dedicated health impact assessments (HIA) as well as inclusion of health into more general impact assessments. Acknowledging the full range of participatory, qualitative, and quantitative approaches, this study focuses on the latter, especially on computational tools for quantitative health modelling. We conducted a survey among tool developers concerning the status quo of development and availability of such tools; experiences made with model usage in real-life situations; and priorities for further development. Responding toolmaker groups described 17 such tools, most of them being maintained and reported as ready for use and covering a wide range of topics, including risk & protective factors, exposures, policies, and health outcomes. In recent years, existing models have been improved and were applied in new ways, and completely new models emerged. There was high agreement among respondents on the need to further develop methods for assessment of inequalities and uncertainty. The contribution of quantitative modeling to health foresight would benefit from building joint strategies of further tool development, improving the visibility of quantitative tools and methods, and engaging continuously with actual and potential users. - Highlights: • A survey investigated computational tools for health impact quantification. • Formal evaluation of such tools has been rare. • Handling inequalities and uncertainties are priority areas for further development. • Health foresight would benefit from tool developers and users forming a community. • Joint development strategies across computational tools are needed.
Wu, Z J; Xu, B; Jiang, H; Zheng, M; Zhang, M; Zhao, W J; Cheng, J
2016-08-20
Objective: To investigate the application of the United States Environmental Protection Agency (EPA) inhalation risk assessment model, the Singapore semi-quantitative risk assessment model, and the occupational hazards risk assessment index method to occupational health risk in enterprises using dimethylformamide (DMF) in a certain area of Jiangsu, China, and to put forward related risk control measures. Methods: The industries involving DMF exposure in Jiangsu province were chosen as the evaluation objects in 2013 and three risk assessment models were used in the evaluation. EPA inhalation risk assessment model: HQ = EC/RfC; Singapore semi-quantitative risk assessment model: Risk = (HR × ER)^(1/2); occupational hazards risk assessment index = 2^(health effect level) × 2^(exposure ratio) × operation condition level. Results: The hazard quotients (HQ > 1) from the EPA inhalation risk assessment model suggested that all the workshops (dry method, wet method and printing) and work positions (pasting, burdening, unreeling, rolling, assisting) were high risk. The results of the Singapore semi-quantitative risk assessment model indicated that the workshop risk levels of dry method, wet method and printing were 3.5 (high), 3.5 (high) and 2.8 (general), and the position risk levels of pasting, burdening, unreeling, rolling and assisting were 4 (high), 4 (high), 2.8 (general), 2.8 (general) and 2.8 (general). The results of the occupational hazards risk assessment index method demonstrated that the position risk indices of pasting, burdening, unreeling, rolling and assisting were 42 (high), 33 (high), 23 (middle), 21 (middle) and 22 (middle). The results of the Singapore semi-quantitative risk assessment model and the occupational hazards risk assessment index method were similar, while the EPA inhalation risk assessment model indicated that all the workshops and positions were high risk. Conclusion: The occupational hazards risk assessment index method fully considers health effects, exposure, and operating conditions and can comprehensively and accurately evaluate the occupational health risk caused by DMF.
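For reference, the sketch below evaluates the three indices as reconstructed above; the input ratings are illustrative placeholders rather than values from the Jiangsu DMF survey.

```python
import math

def epa_hazard_quotient(ec, rfc):
    """EPA inhalation model: HQ = EC / RfC (HQ > 1 is treated as high risk)."""
    return ec / rfc

def singapore_risk(hazard_rating, exposure_rating):
    """Singapore semi-quantitative model: Risk = (HR x ER)^(1/2)."""
    return math.sqrt(hazard_rating * exposure_rating)

def risk_index(health_effect_level, exposure_ratio_level, operation_condition_level):
    """Index method: 2^(health effect level) x 2^(exposure ratio) x operation condition level."""
    return 2 ** health_effect_level * 2 ** exposure_ratio_level * operation_condition_level

# Illustrative inputs only (not survey data):
print(epa_hazard_quotient(ec=0.5, rfc=0.1))   # 5.0  -> high risk
print(singapore_risk(4, 3))                   # ~3.5 -> high
print(risk_index(3, 2, 1.3))                  # ~42  -> high
```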
Kimura, Akatsuki; Celani, Antonio; Nagao, Hiromichi; Stasevich, Timothy; Nakamura, Kazuyuki
2015-01-01
Construction of quantitative models is a primary goal of quantitative biology, which aims to understand cellular and organismal phenomena in a quantitative manner. In this article, we introduce optimization procedures to search for parameters in a quantitative model that can reproduce experimental data. The aim of optimization is to minimize the sum of squared errors (SSE) in a prediction or to maximize likelihood. A (local) maximum of likelihood or (local) minimum of the SSE can efficiently be identified using gradient approaches. Addition of a stochastic process enables us to identify the global maximum/minimum without becoming trapped in local maxima/minima. Sampling approaches take advantage of increasing computational power to test numerous sets of parameters in order to determine the optimum set. By combining Bayesian inference with gradient or sampling approaches, we can estimate both the optimum parameters and the form of the likelihood function related to the parameters. Finally, we introduce four examples of research that utilize parameter optimization to obtain biological insights from quantified data: transcriptional regulation, bacterial chemotaxis, morphogenesis, and cell cycle regulation. With practical knowledge of parameter optimization, cell and developmental biologists can develop realistic models that reproduce their observations and thus, obtain mechanistic insights into phenomena of interest.
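The sketch below shows the simplest version of the SSE-minimization step described above, fitting an assumed exponential-decay model to synthetic data with a local optimizer; multiple restarts, stochastic search, or sampling would be added to avoid local minima, as the article discusses.

```python
import numpy as np
from scipy.optimize import minimize

def model(t, k, a):
    # Assumed illustrative model: exponential decay with rate k and amplitude a.
    return a * np.exp(-k * t)

rng = np.random.default_rng(0)
t_obs = np.linspace(0, 10, 20)
y_obs = model(t_obs, 0.4, 2.0) + rng.normal(0, 0.05, t_obs.size)  # synthetic data

def sse(params):
    k, a = params
    return np.sum((y_obs - model(t_obs, k, a)) ** 2)

# Local, gradient-style search from one starting point; restarts or sampling
# (e.g., MCMC) would be needed to find the global optimum in harder problems.
result = minimize(sse, x0=[1.0, 1.0], method="L-BFGS-B")
print(result.x)   # estimated (k, a), close to the generating values (0.4, 2.0)
```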
Quantitative sonoelastography for the in vivo assessment of skeletal muscle viscoelasticity
NASA Astrophysics Data System (ADS)
Hoyt, Kenneth; Kneezel, Timothy; Castaneda, Benjamin; Parker, Kevin J.
2008-08-01
A novel quantitative sonoelastography technique for assessing the viscoelastic properties of skeletal muscle tissue was developed. Slowly propagating shear wave interference patterns (termed crawling waves) were generated using a two-source configuration vibrating normal to the surface. Theoretical models predict crawling wave displacement fields, which were validated through phantom studies. In experiments, a viscoelastic model was fit to dispersive shear wave speed sonoelastographic data using nonlinear least-squares techniques to determine frequency-independent shear modulus and viscosity estimates. Shear modulus estimates derived using the viscoelastic model were in agreement with that obtained by mechanical testing on phantom samples. Preliminary sonoelastographic data acquired in healthy human skeletal muscles confirm that high-quality quantitative elasticity data can be acquired in vivo. Studies on relaxed muscle indicate discernible differences in both shear modulus and viscosity estimates between different skeletal muscle groups. Investigations into the dynamic viscoelastic properties of (healthy) human skeletal muscles revealed that voluntarily contracted muscles exhibit considerable increases in both shear modulus and viscosity estimates as compared to the relaxed state. Overall, preliminary results are encouraging and quantitative sonoelastography may prove clinically feasible for in vivo characterization of the dynamic viscoelastic properties of human skeletal muscle.
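A minimal sketch of the kind of dispersion fit described above is given below, assuming a Voigt (Kelvin-Voigt) viscoelastic model and an illustrative tissue density; the synthetic shear wave speeds and the exact model form are assumptions, not the authors' data.

```python
import numpy as np
from scipy.optimize import curve_fit

RHO = 1000.0   # assumed tissue density, kg/m^3

def voigt_shear_speed(omega, mu, eta):
    """Shear wave speed for a Voigt material with shear modulus mu (Pa) and
    viscosity eta (Pa*s): c = sqrt(2(mu^2 + w^2 eta^2) / (rho (mu + sqrt(mu^2 + w^2 eta^2))))."""
    mag = np.sqrt(mu ** 2 + (omega * eta) ** 2)
    return np.sqrt(2.0 * mag ** 2 / (RHO * (mu + mag)))

freqs = np.array([100.0, 150.0, 200.0, 250.0, 300.0])   # vibration frequencies, Hz
omega = 2.0 * np.pi * freqs
rng = np.random.default_rng(0)
speeds = voigt_shear_speed(omega, 12e3, 6.0) * (1 + 0.02 * rng.standard_normal(freqs.size))

# Nonlinear least-squares fit yields frequency-independent mu and eta estimates.
(mu_fit, eta_fit), _ = curve_fit(voigt_shear_speed, omega, speeds, p0=[5e3, 1.0])
print(mu_fit, eta_fit)
```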
Automated detection of arterial input function in DSC perfusion MRI in a stroke rat model
NASA Astrophysics Data System (ADS)
Yeh, M.-Y.; Lee, T.-H.; Yang, S.-T.; Kuo, H.-H.; Chyi, T.-K.; Liu, H.-L.
2009-05-01
Quantitative cerebral blood flow (CBF) estimation requires deconvolution of the tissue concentration time curves with an arterial input function (AIF). However, image-based determination of the AIF in rodents is challenging due to limited spatial resolution. We evaluated the feasibility of quantitative analysis using automated AIF detection and compared the results with the commonly applied semi-quantitative analysis. Permanent occlusion of the bilateral or unilateral common carotid artery was used to induce cerebral ischemia in rats. Imaging with the dynamic susceptibility contrast method was performed on a 3-T magnetic resonance scanner with a spin-echo echo-planar-imaging sequence (TR/TE = 700/80 ms, FOV = 41 mm, matrix = 64, 3 slices, SW = 2 mm), starting 7 s prior to contrast injection (1.2 ml/kg), at four different time points. For quantitative analysis, CBF was calculated using the AIF obtained from the 10 voxels with the greatest contrast enhancement, after deconvolution. For semi-quantitative analysis, relative CBF was estimated as the integral divided by the first moment of the relaxivity time curve. We observed that if the AIFs obtained in the three different ROIs (whole brain, hemisphere without lesion and hemisphere with lesion) were similar, the CBF ratios (lesion/normal) from the quantitative and semi-quantitative analyses showed a similar trend at the different operative time points; if the AIFs were different, the CBF ratios could differ. We concluded that, using local maxima, one can define a proper AIF without knowing the anatomical location of the arteries in a stroke rat model.
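The semi-quantitative index mentioned above (integral of the relaxivity-time curve divided by its first moment) can be written in a few lines; the gamma-variate-like bolus curve below is synthetic, not rat data.

```python
import numpy as np

t = np.linspace(0.0, 60.0, 601)           # time, s
dt = t[1] - t[0]
c = (t ** 3) * np.exp(-t / 4.0)           # synthetic relaxivity (concentration) curve
c /= c.max()

area = np.sum(c) * dt                     # integral of the curve
first_moment = np.sum(t * c) * dt / area  # first moment (a mean-transit-time estimate)
relative_cbf = area / first_moment        # semi-quantitative index proportional to CBF

print(relative_cbf)
```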
A Quantitative Approach to Assessing System Evolvability
NASA Technical Reports Server (NTRS)
Christian, John A., III
2004-01-01
When selecting a system from multiple candidates, the customer seeks the one that best meets his or her needs. Recently the desire for evolvable systems has become more important and engineers are striving to develop systems that accommodate this need. In response to this search for evolvability, we present a historical perspective on evolvability, propose a refined definition of evolvability, and develop a quantitative method for measuring this property. We address this quantitative methodology from both a theoretical and practical perspective. This quantitative model is then applied to the problem of evolving a lunar mission to a Mars mission as a case study.
Morris, Melody K; Shriver, Zachary; Sasisekharan, Ram; Lauffenburger, Douglas A
2012-03-01
Mathematical models have substantially improved our ability to predict the response of a complex biological system to perturbation, but their use is typically limited by difficulties in specifying model topology and parameter values. Additionally, incorporating entities across different biological scales ranging from molecular to organismal in the same model is not trivial. Here, we present a framework called "querying quantitative logic models" (Q2LM) for building and asking questions of constrained fuzzy logic (cFL) models. cFL is a recently developed modeling formalism that uses logic gates to describe influences among entities, with transfer functions to describe quantitative dependencies. Q2LM does not rely on dedicated data to train the parameters of the transfer functions, and it permits straightforward incorporation of entities at multiple biological scales. The Q2LM framework can be employed to ask questions such as: Which therapeutic perturbations accomplish a designated goal, and under what environmental conditions will these perturbations be effective? We demonstrate the utility of this framework for generating testable hypotheses in two examples: (i) an intracellular signaling network model; and (ii) a model for pharmacokinetics and pharmacodynamics of cell-cytokine interactions; in the latter, we validate hypotheses concerning molecular design of granulocyte colony stimulating factor. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Building a Database for a Quantitative Model
NASA Technical Reports Server (NTRS)
Kahn, C. Joseph; Kleinhammer, Roger
2014-01-01
A database can greatly benefit a quantitative analysis. The defining characteristic of a quantitative risk, or reliability, model is the use of failure estimate data. Models can easily contain a thousand Basic Events, relying on hundreds of individual data sources. Obviously, entering so much data by hand will eventually lead to errors. Less obviously, entering data this way does not aid in linking the Basic Events to the data sources. The best way to organize large amounts of data on a computer is with a database. But a model does not require a large, enterprise-level database with dedicated developers and administrators. A database built in Excel can be quite sufficient. A simple spreadsheet database can link every Basic Event to the individual data source selected for it. This database can also contain the manipulations appropriate for how the data is used in the model. These manipulations include stressing factors based on use and maintenance cycles, dormancy, unique failure modes, the modeling of multiple items as a single "Super component" Basic Event, and Bayesian Updating based on flight and testing experience. A simple, unique metadata field in both the model and database provides a link from any Basic Event in the model to its data source and all relevant calculations. The credibility of the entire model often rests on the credibility and traceability of the data.
NASA Astrophysics Data System (ADS)
Schwarz, W.; Schwub, S.; Quering, K.; Wiedmann, D.; Höppel, H. W.; Göken, M.
2011-09-01
During their operational lifetime, actively cooled liners of cryogenic combustion chambers are known to exhibit a characteristic so-called doghouse deformation, followed by the formation of axial cracks. The present work aims at developing a model that quantitatively accounts for this failure mechanism. High-temperature material behaviour is characterised in a test programme, and it is shown that stress relaxation, strain rate dependence, isotropic and kinematic hardening as well as material ageing have to be taken into account in the model formulation. From fracture surface analyses of a thrust chamber it is concluded that the failure mode of the hot wall ligament at the tip of the doghouse is related to ductile rupture. A material model is proposed that captures all stated effects. Based on the concept of continuum damage mechanics, the model is further extended to incorporate softening effects due to material degradation. The model is assessed against experimental data and quantitative agreement is established for all tests available. A 3D finite element thermo-mechanical analysis is performed on a representative thrust chamber applying the developed material-damage model. The simulation successfully captures the observed progressive thinning of the hot wall and quantitatively reproduces the doghouse deformation.
Flightdeck Automation Problems (FLAP) Model for Safety Technology Portfolio Assessment
NASA Technical Reports Server (NTRS)
Ancel, Ersin; Shih, Ann T.
2014-01-01
NASA's Aviation Safety Program (AvSP) develops and advances methodologies and technologies to improve air transportation safety. The Safety Analysis and Integration Team (SAIT) conducts a safety technology portfolio assessment (PA) to analyze the program content, to examine the benefits and risks of products with respect to program goals, and to support programmatic decision making. The PA process includes systematic identification of current and future safety risks as well as tracking several quantitative and qualitative metrics to ensure the program goals are addressing prominent safety risks accurately and effectively. One of the metrics within the PA process involves using quantitative aviation safety models to gauge the impact of the safety products. This paper demonstrates the role of aviation safety modeling by providing model outputs and evaluating a sample of portfolio elements using the Flightdeck Automation Problems (FLAP) model. The model enables not only ranking of the quantitative relative risk reduction impact of all portfolio elements, but also highlighting the areas with high potential impact via sensitivity and gap analyses in support of the program office. Although the model outputs are preliminary and products are notional, the process shown in this paper is essential to a comprehensive PA of NASA's safety products in the current program and future programs/projects.
NASA Astrophysics Data System (ADS)
Lykkegaard, Eva; Ulriksen, Lars
2016-03-01
During the past 30 years, Eccles' comprehensive social-psychological Expectancy-Value Model of Motivated Behavioural Choices (EV-MBC model) has proven suitable for studying educational choices related to Science, Technology, Engineering and/or Mathematics (STEM). The reflections of 15 students in their last year of upper-secondary school concerning their choice of tertiary education were examined using quantitative EV-MBC surveys and repeated qualitative interviews. This article presents the analyses of three cases in detail. The analytical focus was whether the factors indicated in the EV-MBC model could be used to detect significant changes in the students' educational choice processes. An important finding was that the quantitative EV-MBC surveys and the qualitative interviews gave quite different results concerning the students' considerations about the choice of tertiary education, and that significant changes in the students' reflections were not captured by the factors of the EV-MBC model. This calls into question the validity of the EV-MBC surveys. Moreover, the quantitative factors from the EV-MBC model did not sufficiently explain the students' dynamic educational choice processes, in which students considered several different potential educational trajectories in parallel. We therefore call for further studies of the EV-MBC model's use in describing longitudinal choice processes and especially in investigating significant changes.
Energy Storage Publications | Transportation Research | NREL
. 367, 1 November 2017, pp. 214-215. Quantitative Microstructure Characterization of a NMC Electrode, NREL/PR-5400-68759. Quantitative Microstructure Characterization of a NMC Electrode Presentation Source, NREL/PR-5400-68339. Microstructure Characterization and Modeling for Improved Electrode Design.
Modelling default and likelihood reasoning as probabilistic reasoning
NASA Technical Reports Server (NTRS)
Buntine, Wray
1990-01-01
A probabilistic analysis of plausible reasoning about defaults and about likelihood is presented. Likely and by default are in fact treated as duals in the same sense as possibility and necessity. To model these four forms probabilistically, a qualitative default probabilistic (QDP) logic and its quantitative counterpart DP are derived that allow qualitative and corresponding quantitative reasoning. Consistency and consequent results for subsets of the logics are given that require at most a quadratic number of satisfiability tests in the underlying propositional logic. The quantitative logic shows how to track the propagation error inherent in these reasoning forms. The methodology and sound framework of the system highlights their approximate nature, the dualities, and the need for complementary reasoning about relevance.
Morris, Jeffrey S; Baladandayuthapani, Veerabhadran; Herrick, Richard C; Sanna, Pietro; Gutstein, Howard
2011-01-01
Image data are increasingly encountered and are of growing importance in many areas of science. Much of these data are quantitative image data, which are characterized by intensities that represent some measurement of interest in the scanned images. The data typically consist of multiple images on the same domain and the goal of the research is to combine the quantitative information across images to make inference about populations or interventions. In this paper, we present a unified analysis framework for the analysis of quantitative image data using a Bayesian functional mixed model approach. This framework is flexible enough to handle complex, irregular images with many local features, and can model the simultaneous effects of multiple factors on the image intensities and account for the correlation between images induced by the design. We introduce a general isomorphic modeling approach to fitting the functional mixed model, of which the wavelet-based functional mixed model is one special case. With suitable modeling choices, this approach leads to efficient calculations and can result in flexible modeling and adaptive smoothing of the salient features in the data. The proposed method has the following advantages: it can be run automatically, it produces inferential plots indicating which regions of the image are associated with each factor, it simultaneously considers the practical and statistical significance of findings, and it controls the false discovery rate. Although the method we present is general and can be applied to quantitative image data from any application, in this paper we focus on image-based proteomic data. We apply our method to an animal study investigating the effects of opiate addiction on the brain proteome. Our image-based functional mixed model approach finds results that are missed with conventional spot-based analysis approaches. In particular, we find that the significant regions of the image identified by the proposed method frequently correspond to subregions of visible spots that may represent post-translational modifications or co-migrating proteins that cannot be visually resolved from adjacent, more abundant proteins on the gel image. Thus, it is possible that this image-based approach may actually improve the realized resolution of the gel, revealing differentially expressed proteins that would not have even been detected as spots by modern spot-based analyses.
Impact of reconstruction parameters on quantitative I-131 SPECT
NASA Astrophysics Data System (ADS)
van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.
2016-07-01
Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high-energy photons, however, render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods on these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed using the following parameters: (1) without scatter correction, (2) with triple energy window (TEW) scatter correction and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs as well as (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast-to-noise ratios and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density compared with TEW or no scatter correction. The quantification error relative to a dose calibrator derived measurement was found to be <1%, -26% and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction compared with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since this factor may be patient dependent. Monte Carlo based scatter correction including accurately simulated CDR modelling is the most robust and reliable method to reconstruct accurate quantitative iodine-131 SPECT images.
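For readers unfamiliar with the TEW correction named above, the sketch below implements the standard triple-energy-window scatter estimate; the window widths and counts are illustrative, and the weighting factor discussed in the abstract is omitted.

```python
def tew_scatter_estimate(c_lower, c_upper, w_lower, w_upper, w_peak):
    """Standard TEW estimate of scatter counts inside the photopeak window:
    S = (C_low / W_low + C_up / W_up) * W_peak / 2."""
    return (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0

# Illustrative numbers only (window widths in keV and counts are not from the study).
scatter = tew_scatter_estimate(c_lower=300, c_upper=200, w_lower=6.0, w_upper=6.0, w_peak=60.0)
primary = 10000 - scatter
print(scatter, primary)   # estimated scatter and scatter-corrected photopeak counts
```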
Torres-Mejía, Gabriela; De Stavola, Bianca; Allen, Diane S; Pérez-Gavilán, Juan J; Ferreira, Jorge M; Fentiman, Ian S; Dos Santos Silva, Isabel
2005-05-01
Mammographic features are known to be associated with breast cancer but the magnitude of the effect differs markedly from study to study. Methods to assess mammographic features range from subjective qualitative classifications to computer-automated quantitative measures. We used data from the UK Guernsey prospective studies to examine the relative value of these methods in predicting breast cancer risk. In all, 3,211 women aged ≥35 years who had a mammogram taken in 1986 to 1989 were followed up to the end of October 2003, with 111 developing breast cancer during this period. Mammograms were classified using the subjective qualitative Wolfe classification and several quantitative mammographic features measured using computer-based techniques. Breast cancer risk was positively associated with high-grade Wolfe classification, percent breast density and area of dense tissue, and negatively associated with area of lucent tissue, fractal dimension, and lacunarity. Inclusion of the quantitative measures in the same model identified area of dense tissue and lacunarity as the best predictors of breast cancer, with risk increasing by 59% [95% confidence interval (95% CI), 29-94%] per SD increase in total area of dense tissue but declining by 39% (95% CI, 53-22%) per SD increase in lacunarity, after adjusting for each other and for other confounders. Comparison of models that included both the qualitative Wolfe classification and these two quantitative measures to models that included either the qualitative or the two quantitative variables showed that they all made significant contributions to prediction of breast cancer risk. These findings indicate that breast cancer risk is affected not only by the amount of mammographic density but also by the degree of heterogeneity of the parenchymal pattern and, presumably, by other features captured by the Wolfe classification.
Effect of quantum nuclear motion on hydrogen bonding
NASA Astrophysics Data System (ADS)
McKenzie, Ross H.; Bekker, Christiaan; Athokpam, Bijyalaxmi; Ramesh, Sai G.
2014-05-01
This work considers how the properties of hydrogen bonded complexes, X-H⋯Y, are modified by the quantum motion of the shared proton. Using a simple two-diabatic state model Hamiltonian, the analysis of the symmetric case, where the donor (X) and acceptor (Y) have the same proton affinity, is carried out. For quantitative comparisons, a parametrization specific to the O-H⋯O complexes is used. The vibrational energy levels of the one-dimensional ground state adiabatic potential of the model are used to make quantitative comparisons with a vast body of condensed phase data, spanning a donor-acceptor separation (R) range of about 2.4 - 3.0 Å, i.e., from strong to weak hydrogen bonds. The position of the proton (which determines the X-H bond length) and its longitudinal vibrational frequency, along with the isotope effects in both are described quantitatively. An analysis of the secondary geometric isotope effect, using a simple extension of the two-state model, yields an improved agreement of the predicted variation with R of frequency isotope effects. The role of bending modes is also considered: their quantum effects compete with those of the stretching mode for weak to moderate H-bond strengths. In spite of the economy in the parametrization of the model used, it offers key insights into the defining features of H-bonds, and semi-quantitatively captures several trends.
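As a sketch of the kind of two-diabatic-state description referred to above (the authors' specific parametrization for the O-H⋯O complexes is not reproduced here), the model Hamiltonian and its adiabatic surfaces can be written as

\[
H(r,R)=\begin{pmatrix} V_{1}(r) & \Delta(R)\\ \Delta(R) & V_{2}(R-r)\end{pmatrix},
\qquad
\varepsilon_{\pm}(r,R)=\frac{V_{1}+V_{2}}{2}\pm\sqrt{\left(\frac{V_{1}-V_{2}}{2}\right)^{2}+\Delta(R)^{2}},
\]

where r is the proton coordinate, R the donor-acceptor separation, V1 and V2 the diabatic proton potentials of the X-H⋯Y and X⋯H-Y states, and Δ(R) their coupling; the exact functional forms here are illustrative assumptions. In the symmetric case, V1 and V2 share the same form, and the vibrational levels of the lower adiabatic surface ε₋ are the quantities compared with experiment.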
Surface plasmon resonance microscopy: achieving a quantitative optical response
Peterson, Alexander W.; Halter, Michael; Plant, Anne L.; Elliott, John T.
2016-01-01
Surface plasmon resonance (SPR) imaging allows real-time label-free imaging based on index of refraction, and changes in index of refraction at an interface. Optical parameter analysis is achieved by application of the Fresnel model to SPR data typically taken by an instrument in a prism based configuration. We carry out SPR imaging on a microscope by launching light into a sample, and collecting reflected light through a high numerical aperture microscope objective. The SPR microscope enables spatial resolution that approaches the diffraction limit, and has a dynamic range that allows detection of subnanometer to submicrometer changes in thickness of biological material at a surface. However, unambiguous quantitative interpretation of SPR changes using the microscope system could not be achieved using the Fresnel model because of polarization dependent attenuation and optical aberration that occurs in the high numerical aperture objective. To overcome this problem, we demonstrate a model to correct for polarization diattenuation and optical aberrations in the SPR data, and develop a procedure to calibrate reflectivity to index of refraction values. The calibration and correction strategy for quantitative analysis was validated by comparing the known indices of refraction of bulk materials with corrected SPR data interpreted with the Fresnel model. Subsequently, we applied our SPR microscopy method to evaluate the index of refraction for a series of polymer microspheres in aqueous media and validated the quality of the measurement with quantitative phase microscopy. PMID:27782542
Quantitative metal magnetic memory reliability modeling for welded joints
NASA Astrophysics Data System (ADS)
Xing, Haiyan; Dang, Yongbin; Wang, Ben; Leng, Jiancheng
2016-03-01
Metal magnetic memory (MMM) testing has been widely used to inspect welded joints. However, load levels, the environmental magnetic field, and measurement noise make MMM data dispersive and complicate quantitative evaluation. To promote the development of quantitative MMM reliability assessment, a new MMM model is presented for welded joints. Steel Q235 welded specimens were tested along longitudinal and horizontal lines with a TSC-2M-8 instrument in tensile fatigue experiments, and X-ray testing was carried out synchronously to verify the MMM results. MMM testing was found to detect hidden cracks earlier than X-ray testing. Moreover, the MMM gradient vector sum K_vs is sensitive to the damage degree, especially at the early and hidden damage stages. Considering the dispersion of the MMM data, the statistics of K_vs were investigated, showing that K_vs follows a Gaussian distribution; K_vs is therefore a suitable MMM parameter on which to base a reliability model of welded joints. Finally, a quantitative MMM reliability model is presented, based on improved stress-strength interference theory. The reliability degree R gradually decreases as the residual life ratio T decreases, and the maximal error between the predicted reliability degree R1 and the verified reliability degree R2 is 9.15%. The presented method provides a novel tool for reliability testing and evaluation of welded joints in practical engineering.
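As background for the interference-theory step mentioned above, the classical stress-strength reliability for Gaussian stress and strength is sketched below; the paper uses an improved variant, and the numerical parameters here are illustrative only.

```python
from math import sqrt
from statistics import NormalDist

def interference_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    """Classical result: R = P(strength > stress) = Phi(z) for independent Gaussians."""
    z = (mu_strength - mu_stress) / sqrt(sd_strength ** 2 + sd_stress ** 2)
    return NormalDist().cdf(z)

# Illustrative values (MPa), not the welded-joint data from the study.
print(interference_reliability(mu_strength=350.0, sd_strength=25.0,
                               mu_stress=280.0, sd_stress=30.0))
```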
Fu, Guifang; Dai, Xiaotian; Symanzik, Jürgen; Bushman, Shaun
2017-01-01
Leaf shape traits have long been a focus of many disciplines, but the complex genetic and environmental interactive mechanisms regulating leaf shape variation have not yet been investigated in detail. The question of the respective roles of genes and environment and how they interact to modulate leaf shape is a thorny evolutionary problem, and sophisticated methodology is needed to address it. In this study, we investigated a framework-level approach that inputs shape image photographs and genetic and environmental data, and then outputs the relative importance ranks of all variables after integrating shape feature extraction, dimension reduction, and tree-based statistical models. The power of the proposed framework was confirmed by simulation and a Populus szechuanica var. tibetica data set. This new methodology resulted in the detection of novel shape characteristics, and also confirmed some previous findings. The quantitative modeling of a combination of polygenetic, plastic, epistatic, and gene-environment interactive effects, as investigated in this study, will improve the discernment of quantitative leaf shape characteristics, and the methods are ready to be applied to other leaf morphology data sets. Unlike the majority of approaches in the quantitative leaf shape literature, this framework-level approach is data-driven, without assuming any pre-known shape attributes, landmarks, or model structures. © 2016 The Authors. New Phytologist © 2016 New Phytologist Trust.
Engelberg, Jesse A.; Giberson, Richard T.; Young, Lawrence J.T.; Hubbard, Neil E.
2014-01-01
Microwave methods of fixation can dramatically shorten fixation times while preserving tissue structure; however, it remains unclear if adequate tissue antigenicity is preserved. To assess and validate antigenicity, robust quantitative methods and animal disease models are needed. We used two mouse mammary models of human breast cancer to evaluate microwave-assisted and standard 24-hr formalin fixation. The mouse models expressed four antigens prognostic for breast cancer outcome: estrogen receptor, progesterone receptor, Ki67, and human epidermal growth factor receptor 2. Using pathologist evaluation and novel methods of quantitative image analysis, we measured and compared the quality of antigen preservation, percentage of positive cells, and line plots of cell intensity. Visual evaluations by pathologists established that the amounts and patterns of staining were similar in tissues fixed by the different methods. The results of the quantitative image analysis provided a fine-grained evaluation, demonstrating that tissue antigenicity is preserved in tissues fixed using microwave methods. Evaluation of the results demonstrated that a 1-hr, 150-W fixation is better than a 45-min, 150-W fixation followed by a 15-min, 650-W fixation. The results demonstrated that microwave-assisted formalin fixation can standardize fixation times to 1 hr and produce immunohistochemistry that is in every way commensurate with longer conventional fixation methods. PMID:24682322
NASA Astrophysics Data System (ADS)
Neubert, M.; Jurisch, M.
2015-06-01
The paper analyzes experimental compositional profiles of Vertical Bridgman (VB, VGF) grown (Cd,Zn)Te crystals reported in the literature. The origin of the observed axial ZnTe distribution profiles is attributed to dendritic growth after initial nucleation from supercooled melts. The analysis utilizes a boundary layer model, which provides a very good approximation of the experimental data. Besides the discussion of the qualitative results, a quantitative analysis of the fitted model parameters is presented, as far as the utilized model permits.
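One common boundary layer formulation for axial segregation, given here for orientation, is the Burton-Prim-Slichter effective distribution coefficient combined with a normal-freezing profile; whether this is the exact form fitted in the paper is an assumption.

\[
k_{\mathrm{eff}}=\frac{k_{0}}{k_{0}+(1-k_{0})\,e^{-V\delta/D}},
\qquad
C_{s}(g)=k_{\mathrm{eff}}\,C_{0}\,(1-g)^{\,k_{\mathrm{eff}}-1},
\]

with k0 the equilibrium distribution coefficient of ZnTe, V the growth rate, δ the solute boundary layer thickness, D the solute diffusivity in the melt, C0 the initial melt composition, and g the solidified fraction.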
Comfort and Accessibility Evaluation of Light Rail Vehicles
NASA Astrophysics Data System (ADS)
Hirasawa, Takayuki; Matsuoka, Shigeki; Suda, Yoshihiro
A quantitative method for evaluating the passenger rooms of light rail vehicles in terms of comfort and accessibility is proposed, based on physical modeling of the in-vehicle behavior of passengers following Gibson's ecological psychology approach. The model parameters are identified from experiments on real vehicles at the Kumamoto municipal transport depot and on the full-scale mockup at the University of Tokyo. The developed model enables quantitative evaluation, from the viewpoint of both passengers and operators and in comparison to commuter railway vehicles, of the effects of floor lowering through the abolition of internal steps at passenger doorways and of door usage restriction scenarios.
Mingguang, Zhang; Juncheng, Jiang
2008-10-30
Overpressure is an important cause of domino effects in accidents involving chemical process equipment. Damage probability and the corresponding threshold value are two necessary parameters in the QRA of this phenomenon. Some simple models had previously been proposed based on scarce data or oversimplified assumptions. Hence, more data about damage to chemical process equipment were gathered and analyzed, a quantitative relationship between damage probability and degree of equipment damage was built, and reliable probit models were developed for specific categories of chemical process equipment. Finally, the improvements over existing models were demonstrated through comparison with other models in the literature, taking into account parameters such as consistency between models and data and the depth of quantitativeness in QRA.
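For orientation, the sketch below shows the generic probit form used for overpressure damage in QRA, Pr = a + b ln(P), converted to a probability; the coefficients are illustrative placeholders, not the equipment-specific values fitted in the study.

```python
from math import log
from statistics import NormalDist

def damage_probability(overpressure_pa, a=-23.8, b=2.92):
    """Generic probit model Pr = a + b*ln(P); probability = Phi(Pr - 5).
    The coefficients a, b are placeholders, not the study's fitted values."""
    pr = a + b * log(overpressure_pa)
    return NormalDist().cdf(pr - 5.0)

print(damage_probability(30_000))   # e.g., damage probability at 30 kPa overpressure
```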
Dixon, Steven L; Duan, Jianxin; Smith, Ethan; Von Bargen, Christopher D; Sherman, Woody; Repasky, Matthew P
2016-10-01
We introduce AutoQSAR, an automated machine-learning application to build, validate and deploy quantitative structure-activity relationship (QSAR) models. The process of descriptor generation, feature selection and the creation of a large number of QSAR models has been automated into a single workflow within AutoQSAR. The models are built using a variety of machine-learning methods, and each model is scored using a novel approach. Effectiveness of the method is demonstrated through comparison with literature QSAR models using identical datasets for six end points: protein-ligand binding affinity, solubility, blood-brain barrier permeability, carcinogenicity, mutagenicity and bioaccumulation in fish. AutoQSAR demonstrates similar or better predictive performance as compared with published results for four of the six endpoints while requiring minimal human time and expertise.
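AutoQSAR itself is a proprietary Schrödinger application; purely as an assumed, simplified analogue of the descriptor-selection-plus-model-building workflow it automates, a scikit-learn sketch with placeholder descriptors and a placeholder endpoint:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                                 # placeholder molecular descriptors
y = X[:, 0] - 2 * X[:, 3] + rng.normal(scale=0.5, size=200)    # placeholder endpoint values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Automated feature selection followed by a regularized regression model
model = make_pipeline(SelectKBest(f_regression, k=10), Ridge(alpha=1.0))
model.fit(X_tr, y_tr)
print("external R^2:", round(r2_score(y_te, model.predict(X_te)), 3))
```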
Colored Petri net modeling and simulation of signal transduction pathways.
Lee, Dong-Yup; Zimmer, Ralf; Lee, Sang Yup; Park, Sunwon
2006-03-01
Presented herein is a methodology for quantitatively analyzing the complex signaling network by resorting to colored Petri nets (CPN). The mathematical as well as Petri net models for two basic reaction types were established, followed by the extension to a large signal transduction system stimulated by epidermal growth factor (EGF) in an application study. The CPN models based on the Petri net representation and the conservation and kinetic equations were used to examine the dynamic behavior of the EGF signaling pathway. The usefulness of Petri nets is demonstrated for the quantitative analysis of the signal transduction pathway. Moreover, the trade-offs between modeling capability and simulation efficiency of this pathway are explored, suggesting that the Petri net model can be invaluable in the initial stage of building a dynamic model.
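As a hedged sketch of the kind of "basic reaction type" kinetics that underlie such Petri-net models (reversible mass-action binding, not the authors' EGF network or their CPN formalism), the conservation and kinetic equations can be integrated directly:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Reversible binding A + B <-> AB with mass-action kinetics (hypothetical rate constants)
kf, kr = 1.0, 0.1   # forward 1/(uM*s), reverse 1/s

def rhs(t, y):
    a, b, ab = y
    v = kf * a * b - kr * ab          # net reaction rate
    return [-v, -v, v]                # conservation: d[A]=d[B]=-d[AB]

sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.5, 0.0], dense_output=True)
t = np.linspace(0, 20, 5)
print(np.round(sol.sol(t).T, 4))      # columns: [A], [B], [AB] over time
```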
Paquette, Philippe; El Khamlichi, Youssef; Lamontagne, Martin; Higgins, Johanne; Gagnon, Dany H
2017-08-01
Quantitative ultrasound imaging is gaining popularity in research and clinical settings to measure the neuromechanical properties of the peripheral nerves such as their capability to glide in response to body segment movement. Increasing evidence suggests that impaired median nerve longitudinal excursion is associated with carpal tunnel syndrome. To date, psychometric properties of longitudinal nerve excursion measurements using quantitative ultrasound imaging have not been extensively investigated. This study investigates the convergent validity of the longitudinal nerve excursion by comparing measures obtained using quantitative ultrasound imaging with those determined with a motion analysis system. A 38-cm long rigid nerve-phantom model was used to assess the longitudinal excursion in a laboratory environment. The nerve-phantom model, immersed in a 20-cm deep container filled with a gelatin-based solution, was moved 20 times using a linear forward and backward motion. Three light-emitting diodes were used to record nerve-phantom excursion with a motion analysis system, while a 5-cm linear transducer allowed simultaneous recording via ultrasound imaging. Both measurement techniques yielded excellent association ( r = 0.99) and agreement (mean absolute difference between methods = 0.85 mm; mean relative difference between methods = 7.48 %). Small discrepancies were largely found when larger excursions (i.e. > 10 mm) were performed, revealing slight underestimation of the excursion by the ultrasound imaging analysis software. Quantitative ultrasound imaging is an accurate method to assess the longitudinal excursion of an in vitro nerve-phantom model and appears relevant for future research protocols investigating the neuromechanical properties of the peripheral nerves.
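The association and agreement statistics reported above are simple to reproduce; a minimal sketch with synthetic paired excursion values (not the study's data):

```python
import numpy as np

# Paired excursion measurements (mm): ultrasound vs. motion analysis (synthetic)
us  = np.array([2.1, 4.0, 6.2, 8.1, 10.3, 12.6])
mot = np.array([2.0, 4.2, 6.0, 8.4, 11.0, 13.5])

r = np.corrcoef(us, mot)[0, 1]                  # Pearson association
mad = np.mean(np.abs(us - mot))                 # mean absolute difference
mrd = np.mean(np.abs(us - mot) / mot) * 100     # mean relative difference (%)
print(f"r = {r:.2f}, MAD = {mad:.2f} mm, MRD = {mrd:.1f} %")
```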
Quantitative study of FORC diagrams in thermally corrected Stoner-Wohlfarth nanoparticle systems
NASA Astrophysics Data System (ADS)
De Biasi, E.; Curiale, J.; Zysler, R. D.
2016-12-01
The use of FORC diagrams is becoming increasingly popular among researchers devoted to magnetism and magnetic materials. However, a thorough interpretation of this kind of diagram, in order to extract quantitative information, requires an appropriate model of the studied system. For that reason, most FORC studies are used for qualitative analysis. In magnetic systems, thermal fluctuations "blur" the signatures of the anisotropy, volume and particle interaction distributions; therefore thermal effects in nanoparticle systems conspire against a proper interpretation and analysis of these diagrams. Motivated by this fact, we have quantitatively studied the degree of accuracy of the information extracted from FORC diagrams for the special case of single-domain, thermally corrected Stoner-Wohlfarth (easy axes along the external field orientation) nanoparticle systems. In this work, the starting point is an analytical model that describes the behavior of a magnetic nanoparticle system as a function of field, anisotropy, temperature and measurement time. In order to study the quantitative degree of accuracy of our model, we built FORC diagrams for different archetypical cases of magnetic nanoparticles. Our results show that, from the quantitative information obtained from the diagrams and under the hypotheses of the proposed model, it is possible to recover the features of the original system with accuracy above 95%. This accuracy improves at low temperatures, and it is also possible to access the anisotropy distribution directly from the FORC coercive field profile. Indeed, our simulations predict that the volume distribution plays a secondary role, its mean value and deviation being the only important parameters. Therefore it is possible to obtain an accurate result for the inversion and interaction fields regardless of the features of the volume distribution.
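For readers unfamiliar with how such diagrams are built, the FORC distribution is the mixed second derivative of magnetization with respect to the reversal and applied fields; a minimal numerical sketch on a synthetic smooth M(H_r, H) surface (not the authors' analytical model):

```python
import numpy as np

# M(Hr, H) sampled on a regular grid (synthetic surface for illustration only)
H = np.linspace(-1.0, 1.0, 101)    # applied field
Hr = np.linspace(-1.0, 1.0, 101)   # reversal field
HH, RR = np.meshgrid(H, Hr)
M = np.tanh(4.0 * (HH - 0.5 * RR - 0.1))

# FORC distribution: rho = -(1/2) * d^2 M / (dH dHr)
dM_dH = np.gradient(M, H, axis=1)
rho = -0.5 * np.gradient(dM_dH, Hr, axis=0)
print(rho.shape, float(rho.max()))
```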
NASA Astrophysics Data System (ADS)
Freuchet, Florian
In the marine environment, recruitment abundance depends on processes affecting both the adults and the larval stock. Under the influence of reliable cues about habitat quality, the mother can increase (anticipatory maternal effects, AME) or reduce (selfish maternal effects, SME) the physiological condition of the offspring. In tropical zones, which are generally more oligotrophic, nutrient supply and temperature are two important components that can limit recruitment. This study tested the effects of nutritional input and thermal stress on larval production and on the maternal strategy adopted. The barnacle Chthamalus bisinuatus (Pilsbry) was chosen as the biological model because it dominates the upper intertidal zones along the rocky shores of southeastern Brazil (a tropical region). The initial hypotheses were that nutritional input allows adults to produce high-quality larvae and that thermal stress triggers early spawning, producing low-quality larvae. To test these hypotheses, populations of C. bisinuatus were reared under four experimental groups, combining levels of nutritional input (high and low) and thermal stress (stressed and unstressed). Measurements of survival and of the physiological condition of adults and larvae were used to identify parental responses that could be advantageous in a harsh tropical environment. Fatty acid profile analysis was the method used to evaluate the physiological quality of adults and larvae. The results of the feeding treatment (high or low nutrient input) show no difference in neutral lipid accumulation, nauplius size, reproductive effort, or nauplius survival time under starvation. It appears that a low nutrient supply is compensated by mothers adopting an AME model, whereby the mothers anticipate the environment in order to produce larvae with an appropriate phenotype. With the addition of thermal stress, larval production decreased by 47% and the larvae were 18 microns smaller. The mothers then appear to follow an SME model, characterized by reduced larval performance. In light of these results, we hypothesize that in subtropical zones, such as the coasts of the state of Sao Paulo, the temperature increase experienced by barnacles is not, a priori, harmful to their organism if it is combined with a sufficient nutrient supply.
Quantitative stem cell biology: the threat and the glory.
Pollard, Steven M
2016-11-15
Major technological innovations over the past decade have transformed our ability to extract quantitative data from biological systems at an unprecedented scale and resolution. These quantitative methods and associated large datasets should lead to an exciting new phase of discovery across many areas of biology. However, there is a clear threat: will we drown in these rivers of data? On 18th July 2016, stem cell biologists gathered in Cambridge for the 5th annual Cambridge Stem Cell Symposium to discuss 'Quantitative stem cell biology: from molecules to models'. This Meeting Review provides a summary of the data presented by each speaker, with a focus on quantitative techniques and the new biological insights that are emerging. © 2016. Published by The Company of Biologists Ltd.
Gupta, Shikha; Basant, Nikita; Rai, Premanjali; Singh, Kunwar P
2015-11-01
The binding affinity of chemicals to carbon is an important characteristic, as it finds vast industrial applications. Experimental determination of the adsorption capacity of diverse chemicals onto carbon is both time and resource intensive, and the development of computational approaches has widely been advocated. In this study, ten different artificial intelligence (AI)-based qualitative and quantitative structure-property relationship (QSPR) models (MLPN, RBFN, PNN/GRNN, CCN, SVM, GEP, GMDH, SDT, DTF, DTB) were established for the prediction of the adsorption capacity of structurally diverse chemicals to activated carbon, following the OECD guidelines. Structural diversity of the chemicals and nonlinear dependence in the data were evaluated using the Tanimoto similarity index and Brock-Dechert-Scheinkman statistics. The generalization and prediction abilities of the constructed models were established through rigorous internal and external validation procedures employing a wide series of statistical checks. On the complete dataset, the qualitative models rendered classification accuracies between 97.04 and 99.93%, while the quantitative models yielded correlation (R(2)) values of 0.877-0.977 between the measured and the predicted endpoint values. The quantitative prediction accuracies for the higher molecular weight (MW) compounds (class 4) were relatively better than those for the low MW compounds. In both the qualitative and quantitative models, polarizability was the most influential descriptor. Structural alerts responsible for the extreme adsorption behavior of the compounds were identified. A higher number of carbon atoms and the presence of heavier halogens in a molecule rendered higher binding affinity. The proposed QSPR models performed well and outperformed previous reports. The relatively better performance of the ensemble learning models (DTF, DTB) may be attributed to the strengths of the bagging and boosting algorithms, which enhance the predictive accuracies. The proposed AI models can be useful tools in screening chemicals for their binding affinities toward carbon for their safe management.
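The Tanimoto similarity index used above to gauge structural diversity is straightforward to compute from binary fingerprints; a minimal sketch with made-up fingerprints (not descriptors from the study):

```python
import numpy as np

def tanimoto(fp_a, fp_b):
    """Tanimoto similarity between two binary fingerprint vectors."""
    a, b = np.asarray(fp_a, bool), np.asarray(fp_b, bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

fp1 = [1, 0, 1, 1, 0, 0, 1, 0]
fp2 = [1, 0, 0, 1, 0, 1, 1, 0]
print(round(tanimoto(fp1, fp2), 3))   # 3 shared bits / 5 bits in the union = 0.6
```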
Lipiäinen, Tiina; Pessi, Jenni; Movahedi, Parisa; Koivistoinen, Juha; Kurki, Lauri; Tenhunen, Mari; Yliruusi, Jouko; Juppo, Anne M; Heikkonen, Jukka; Pahikkala, Tapio; Strachan, Clare J
2018-04-03
Raman spectroscopy is widely used for quantitative pharmaceutical analysis, but a common obstacle to its use is sample fluorescence masking the Raman signal. Time-gating provides an instrument-based method for rejecting fluorescence through temporal resolution of the spectral signal and allows Raman spectra of fluorescent materials to be obtained. An additional practical advantage is that analysis is possible in ambient lighting. This study assesses the efficacy of time-gated Raman spectroscopy for the quantitative measurement of fluorescent pharmaceuticals. Time-gated Raman spectroscopy with a 128 × (2) × 4 CMOS SPAD detector was applied for quantitative analysis of ternary mixtures of solid-state forms of the model drug, piroxicam (PRX). Partial least-squares (PLS) regression allowed quantification, with Raman-active time domain selection (based on visual inspection) improving performance. Model performance was further improved by using kernel-based regularized least-squares (RLS) regression with greedy feature selection in which the data use in both the Raman shift and time dimensions was statistically optimized. Overall, time-gated Raman spectroscopy, especially with optimized data analysis in both the spectral and time dimensions, shows potential for sensitive and relatively routine quantitative analysis of photoluminescent pharmaceuticals during drug development and manufacturing.
Quantitative trait nucleotide analysis using Bayesian model selection.
Blangero, John; Goring, Harald H H; Kent, Jack W; Williams, Jeff T; Peterson, Charles P; Almasy, Laura; Dyer, Thomas D
2005-10-01
Although much attention has been given to statistical genetic methods for the initial localization and fine mapping of quantitative trait loci (QTLs), little methodological work has been done to date on the problem of statistically identifying the most likely functional polymorphisms using sequence data. In this paper we provide a general statistical genetic framework, called Bayesian quantitative trait nucleotide (BQTN) analysis, for assessing the likely functional status of genetic variants. The approach requires the initial enumeration of all genetic variants in a set of resequenced individuals. These polymorphisms are then typed in a large number of individuals (potentially in families), and marker variation is related to quantitative phenotypic variation using Bayesian model selection and averaging. For each sequence variant a posterior probability of effect is obtained and can be used to prioritize additional molecular functional experiments. An example of this quantitative nucleotide analysis is provided using the GAW12 simulated data. The results show that the BQTN method may be useful for choosing the most likely functional variants within a gene (or set of genes). We also include instructions on how to use our computer program, SOLAR, for association analysis and BQTN analysis.
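As a hedged sketch of the kind of Bayesian model selection and averaging step described above (BIC-approximated posterior model probabilities with equal priors; illustrative numbers only, not SOLAR's actual implementation):

```python
import numpy as np

def posterior_model_probs(log_likelihoods, n_params, n_obs):
    """Approximate posterior model probabilities from BIC weights.

    BIC = -2*logL + k*ln(n); P(model) ~ exp(-BIC/2), normalized.
    Assumes equal prior probabilities across candidate models.
    """
    ll = np.asarray(log_likelihoods, float)
    k = np.asarray(n_params, float)
    bic = -2.0 * ll + k * np.log(n_obs)
    w = np.exp(-0.5 * (bic - bic.min()))
    return w / w.sum()

# Three candidate variant models fitted to the same trait (hypothetical fits)
probs = posterior_model_probs([-520.3, -518.9, -519.8], [2, 3, 3], n_obs=400)
print(np.round(probs, 3))   # posterior probability of effect for each candidate
```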
Ozaki, Yu-ichi; Uda, Shinsuke; Saito, Takeshi H; Chung, Jaehoon; Kubota, Hiroyuki; Kuroda, Shinya
2010-04-01
Modeling of cellular functions on the basis of experimental observation is increasingly common in the field of cellular signaling. However, such modeling requires a large amount of quantitative data of signaling events with high spatio-temporal resolution. A novel technique which allows us to obtain such data is needed for systems biology of cellular signaling. We developed a fully automatable assay technique, termed quantitative image cytometry (QIC), which integrates a quantitative immunostaining technique and a high precision image-processing algorithm for cell identification. With the aid of an automated sample preparation system, this device can quantify protein expression, phosphorylation and localization with subcellular resolution at one-minute intervals. The signaling activities quantified by the assay system showed good correlation with, as well as comparable reproducibility to, western blot analysis. Taking advantage of the high spatio-temporal resolution, we investigated the signaling dynamics of the ERK pathway in PC12 cells. The QIC technique appears as a highly quantitative and versatile technique, which can be a convenient replacement for the most conventional techniques including western blot, flow cytometry and live cell imaging. Thus, the QIC technique can be a powerful tool for investigating the systems biology of cellular signaling.
Behavioral Assembly Required: Particularly for Quantitative Courses
ERIC Educational Resources Information Center
Mazen, Abdelmagid
2008-01-01
This article integrates behavioral approaches into the teaching and learning of quantitative subjects with application to statistics. Focusing on the emotional component of learning, the article presents a system dynamic model that provides descriptive and prescriptive accounts of learners' anxiety. Metaphors and the metaphorizing process are…
Standardized methods are often used to assess the likelihood of a human-health effect from exposure to a specified hazard, and inform opinions and decisions about risk management and communication. A Quantitative Microbial Risk Assessment (QMRA) is specifically adapted to detail ...
Quantitative Prediction of Systemic Toxicity Points of Departure (OpenTox USA 2017)
Human health risk assessment associated with environmental chemical exposure is limited by the tens of thousands of chemicals with little or no experimental in vivo toxicity data. Data gap filling techniques, such as quantitative models based on chemical structure information, are c...
A Quantitative Model of Early Atherosclerotic Plaques Parameterized Using In Vitro Experiments.
Thon, Moritz P; Ford, Hugh Z; Gee, Michael W; Myerscough, Mary R
2018-01-01
There are a growing number of studies that model immunological processes in the artery wall that lead to the development of atherosclerotic plaques. However, few of these models use parameters that are obtained from experimental data even though data-driven models are vital if mathematical models are to become clinically relevant. We present the development and analysis of a quantitative mathematical model for the coupled inflammatory, lipid and macrophage dynamics in early atherosclerotic plaques. Our modeling approach is similar to the biologists' experimental approach where the bigger picture of atherosclerosis is put together from many smaller observations and findings from in vitro experiments. We first develop a series of three simpler submodels which are least-squares fitted to various in vitro experimental results from the literature. Subsequently, we use these three submodels to construct a quantitative model of the development of early atherosclerotic plaques. We perform a local sensitivity analysis of the model with respect to its parameters that identifies critical parameters and processes. Further, we present a systematic analysis of the long-term outcome of the model which produces a characterization of the stability of model plaques based on the rates of recruitment of low-density lipoproteins, high-density lipoproteins and macrophages. The analysis of the model suggests that further experimental work quantifying the different fates of macrophages as a function of cholesterol load and the balance between free cholesterol and cholesterol ester inside macrophages may give valuable insight into long-term atherosclerotic plaque outcomes. This model is an important step toward models applicable in a clinical setting.
Mendlinger, Sheryl; Cwikel, Julie
2008-02-01
A double helix spiral model is presented which demonstrates how to combine qualitative and quantitative methods of inquiry in an interactive fashion over time. Using findings on women's health behaviors (e.g., menstruation, breast-feeding, coping strategies), we show how qualitative and quantitative methods highlight the theory of knowledge acquisition in women's health decisions. A rich data set of 48 semistructured, in-depth ethnographic interviews with mother-daughter dyads from six ethnic groups (Israeli, European, North African, Former Soviet Union [FSU], American/Canadian, and Ethiopian), plus seven focus groups, provided the qualitative sources for analysis. This data set formed the basis of research questions used in a quantitative telephone survey of 302 Israeli women from the ages of 25 to 42 from four ethnic groups. We employed multiple cycles of data analysis from both data sets to produce a more detailed and multidimensional picture of women's health behavior decisions through a spiraling process.
A quantitative framework for the forward design of synthetic miRNA circuits.
Bloom, Ryan J; Winkler, Sally M; Smolke, Christina D
2014-11-01
Synthetic genetic circuits incorporating regulatory components based on RNA interference (RNAi) have been used in a variety of systems. A comprehensive understanding of the parameters that determine the relationship between microRNA (miRNA) and target expression levels is lacking. We describe a quantitative framework supporting the forward engineering of gene circuits that incorporate RNAi-based regulatory components in mammalian cells. We developed a model that captures the quantitative relationship between miRNA and target gene expression levels as a function of parameters, including mRNA half-life and miRNA target-site number. We extended the model to synthetic circuits that incorporate protein-responsive miRNA switches and designed an optimized miRNA-based protein concentration detector circuit that noninvasively measures small changes in the nuclear concentration of β-catenin owing to induction of the Wnt signaling pathway. Our results highlight the importance of methods for guiding the quantitative design of genetic circuits to achieve robust, reliable and predictable behaviors in mammalian cells.
A conductive grating sensor for online quantitative monitoring of fatigue crack.
Li, Peiyuan; Cheng, Li; Yan, Xiaojun; Jiao, Shengbo; Li, Yakun
2018-05-01
Online quantitative monitoring of crack damage due to fatigue is a critical challenge for structural health monitoring systems assessing structural safety. To achieve online quantitative monitoring of fatigue crack, a novel conductive grating sensor based on the principle of electrical potential difference is proposed. The sensor consists of equidistant grating channels to monitor the fatigue crack length and conductive bars to provide the circuit path. An online crack monitoring system is established to verify the sensor's capability. The experimental results prove that the sensor is suitable for online quantitative monitoring of fatigue crack. A finite element model for the sensor is also developed to optimize the sensitivity of crack monitoring, which is defined by the rate of sensor resistance change caused by the break of the first grating channel. Analysis of the model shows that the sensor sensitivity can be enhanced by reducing the number of grating channels and increasing their resistance and reducing the resistance of the conductive bar.
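A minimal sketch of the electrical idea (equidistant grating channels in parallel, fed through a conductive bar in series), using hypothetical resistance values to show how the sensitivity, defined above as the relative resistance change when the first channel breaks, depends on the channel count and on the channel/bar resistance ratio:

```python
def sensor_resistance(n_channels, r_channel, r_bar):
    """Total resistance of n identical parallel grating channels plus a series bar."""
    return r_bar + r_channel / n_channels

def first_break_sensitivity(n_channels, r_channel, r_bar):
    """Relative resistance change when the first grating channel is cut by the crack."""
    r0 = sensor_resistance(n_channels, r_channel, r_bar)
    r1 = sensor_resistance(n_channels - 1, r_channel, r_bar)
    return (r1 - r0) / r0

# Hypothetical values: fewer channels and a larger channel-to-bar resistance ratio
# both raise the sensitivity, in line with the trend described in the abstract.
for n in (20, 10, 5):
    s = first_break_sensitivity(n, r_channel=100.0, r_bar=1.0)
    print(f"{n:2d} channels -> {100 * s:.1f} % resistance change")
```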
Issues in Quantitative Analysis of Ultraviolet Imager (UV) Data: Airglow
NASA Technical Reports Server (NTRS)
Germany, G. A.; Richards, P. G.; Spann, J. F.; Brittnacher, M. J.; Parks, G. K.
1999-01-01
The GGS Ultraviolet Imager (UVI) has proven to be especially valuable in correlative substorm, auroral morphology, and extended statistical studies of the auroral regions. Such studies are based on knowledge of the location, spatial, and temporal behavior of auroral emissions. More quantitative studies, based on absolute radiometric intensities from UVI images, require a more intimate knowledge of the instrument behavior and data processing requirements and are inherently more difficult than studies based on relative knowledge of the oval location. In this study, UVI airglow observations are analyzed and compared with model predictions to illustrate issues that arise in quantitative analysis of UVI images. These issues include instrument calibration, long term changes in sensitivity, and imager flat field response as well as proper background correction. Airglow emissions are chosen for this study because of their relatively straightforward modeling requirements and because of their implications for thermospheric compositional studies. The analysis issues discussed here, however, are identical to those faced in quantitative auroral studies.
Zhang, Yu-Tian; Xiao, Mei-Feng; Deng, Kai-Wen; Yang, Yan-Tao; Zhou, Yi-Qun; Zhou, Jin; He, Fu-Yuan; Liu, Wen-Long
2018-06-01
Nowadays, in researching and formulating efficient extraction systems for Chinese herbal medicine, scientists face a great challenge in quality management; for this reason, the transitivity of Q-markers in quantitative analysis of TCM was recently proposed by Prof. Liu. In order to improve the quality of extraction from raw medicinal materials for clinical preparations, a series of integrated mathematical models for the transitivity of Q-markers in quantitative analysis of TCM were established. Buyanghuanwu decoction (BYHWD), a commonly used TCM prescription applied to prevent and treat ischemic heart and brain diseases, was selected as the experimental extraction subject to study the quantitative transitivity of TCM. Based on the theory of Fick's Rule and the Noyes-Whitney equation, novel kinetic models were established for the extraction of active components. The kinetic equations of the extraction models were then fitted, and the inherent parameters of the material pieces and the Q-marker quantitative transfer coefficients were calculated; these were taken as indexes to evaluate the transitivity of Q-markers in quantitative analysis of the BYHWD extraction process. HPLC was applied to screen and analyze the potential Q-markers in the extraction process. Fick's Rule and the Noyes-Whitney equation were adopted for mathematically modeling the extraction process. Kinetic parameters were fitted and calculated with the Statistical Program for Social Sciences 20.0 software. The transferable efficiency was described and evaluated along the potential Q-marker transfer trajectory via the transitivity availability AUC, the extraction ratio P, and the decomposition ratio D, respectively. Q-markers were identified using AUC, P, and D. Astragaloside IV, laetrile, paeoniflorin, and ferulic acid were studied as potential Q-markers from BYHWD. The relevant technological parameters were represented by the mathematical models, which could adequately illustrate the inherent properties of raw material preparation and the effect of Q-marker transitivity in equilibrium processing. AUC, P, and D for the potential Q-markers AST-IV, laetrile, paeoniflorin, and FA were obtained, with results of 289.9 mAu s, 46.24%, 22.35%; 1730 mAu s, 84.48%, 1.963%; 5600 mAu s, 70.22%, 0.4752%; and 7810 mAu s, 24.29%, 4.235%, respectively. The results showed that the suitable Q-markers in our study were laetrile and paeoniflorin, which exhibited acceptable traceability and transitivity in the extraction process of TCMs. Therefore, these novel mathematical models might be developed into a new standard for controlling the quality of the TCM process from raw medicinal materials to product manufacturing. Copyright © 2018 Elsevier GmbH. All rights reserved.
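As a hedged illustration of Noyes-Whitney-type kinetics of the kind invoked above (not the paper's fitted equations or parameters), the dissolved concentration approaches saturation exponentially, and a transfer index such as an AUC can then be taken from the fitted curve:

```python
import numpy as np
from scipy.integrate import trapezoid

def noyes_whitney(t, k, cs):
    """C(t) for dC/dt = k*(Cs - C) with C(0) = 0: first-order approach to saturation."""
    return cs * (1.0 - np.exp(-k * t))

t = np.linspace(0.0, 60.0, 61)            # extraction time, min (hypothetical)
c = noyes_whitney(t, k=0.08, cs=2.5)      # hypothetical rate constant and solubility limit
auc = trapezoid(c, t)                     # area under the concentration-time curve
print(f"C(60 min) = {c[-1]:.2f}, AUC = {auc:.1f}")
```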
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luria, Paolo; Aspinall, Peter A
2003-08-01
The aim of this paper is to describe a new approach to major industrial hazard assessment, which has been recently studied by the authors in conjunction with the Italian Environmental Protection Agency ('ARPAV'). The real opportunity for developing a different approach arose from the need of the Italian EPA to provide the Venice Port Authority with an appropriate estimation of major industrial hazards in Porto Marghera, an industrial estate near Venice (Italy). However, the standard model, the quantitative risk analysis (QRA), only provided a list of individual quantitative risk values, related to single locations. The experimental model is based on a multi-criteria approach--the Analytic Hierarchy Process--which introduces the use of expert opinions, complementary skills and expertise from different disciplines in conjunction with quantitative traditional analysis. This permitted the generation of quantitative data on risk assessment from a series of qualitative assessments, on the present situation and on three other future scenarios, and use of this information as indirect quantitative measures, which could be aggregated for obtaining the global risk rate. This approach is in line with the main concepts proposed by the last European directive on Major Hazard Accidents, which recommends increasing the participation of operators, taking the other players into account and, moreover, paying more attention to the concepts of 'urban control', 'subjective risk' (risk perception) and intangible factors (factors not directly quantifiable).
Tan, Peng; Zhang, Hai-Zhu; Zhang, Ding-Kun; Wu, Shan-Na; Niu, Ming; Wang, Jia-Bo; Xiao, Xiao-He
2017-07-01
This study attempts to evaluate the quality of Chinese formula granules by the combined use of multi-component simultaneous quantitative analysis and bioassay. Rhubarb dispensing granules were used as the model drug for this demonstrative study. The ultra-high performance liquid chromatography (UPLC) method was adopted for simultaneous quantitative determination of 10 anthraquinone derivatives (such as aloe emodin-8-O-β-D-glucoside) in rhubarb dispensing granules; the purgative biopotency of different batches of rhubarb dispensing granules was determined based on a compound diphenoxylate tablets-induced mouse constipation model; the blood activating biopotency of different batches of rhubarb dispensing granules was determined based on an in vitro rat antiplatelet aggregation model; SPSS 22.0 statistical software was used for correlation analysis between the 10 anthraquinone derivatives and the purgative and blood activating biopotencies. The results of the multi-component simultaneous quantitative analysis showed that there was a great difference in chemical characterization and certain differences in purgative biopotency and blood activating biopotency among the 10 batches of rhubarb dispensing granules. The correlation analysis showed that the intensity of purgative biopotency was significantly correlated with the content of conjugated anthraquinone glycosides (P<0.01), and the intensity of blood activating biopotency was significantly correlated with the content of free anthraquinone (P<0.01). In summary, the combined use of multi-component simultaneous quantitative analysis and bioassay can achieve objective quantification and a more comprehensive reflection of the overall quality difference among different batches of rhubarb dispensing granules. Copyright© by the Chinese Pharmaceutical Association.
Pucher, Katharina K; Candel, Math J J M; Krumeich, Anja; Boot, Nicole M W M; De Vries, Nanne K
2015-07-05
We report on the longitudinal quantitative and qualitative data resulting from a two-year trajectory (2008-2011) based on the DIagnosis of Sustainable Collaboration (DISC) model. This trajectory aimed to support regional coordinators of comprehensive school health promotion (CSHP) in systematically developing change management and project management to establish intersectoral collaboration. Multilevel analyses of quantitative data on the determinants of collaborations according to the DISC model were done, with 90 respondents (response 57 %) at pretest and 69 respondents (52 %) at posttest. Nvivo analyses of the qualitative data collected during the trajectory included minutes of monthly/bimonthly personal/telephone interviews (N = 65) with regional coordinators, and documents they produced about their activities. Quantitative data showed major improvements in change management and project management. There were also improvements in consensus development, commitment formation, formalization of the CSHP, and alignment of policies, although organizational problems within the collaboration increased. Content analyses of qualitative data identified five main management styles, including (1) facilitating active involvement of relevant parties; (2) informing collaborating parties; (3) controlling and (4) supporting their task accomplishment; and (5) coordinating the collaborative processes. We have contributed to the fundamental understanding of the development of intersectoral collaboration by combining qualitative and quantitative data. Our results support a systematic approach to intersectoral collaboration using the DISC model. They also suggest five main management styles to improve intersectoral collaboration in the initial stage. The outcomes are useful for health professionals involved in similar ventures.
Qualitative research and the epidemiological imagination: a vital relationship.
Popay, J
2003-01-01
This paper takes as its starting point the assumption that the 'Epidemiological Imagination' has a central role to play in the future development of policies and practice to improve population health and reduce health inequalities within and between states, but suggests that by neglecting the contribution that qualitative research can make, epidemiology is failing to deliver this potential. The paper briefly considers what qualitative research is, touching on epistemological questions (what type of "knowledge" is generated) and questions of methods (what approaches to data collection, analysis and interpretation are involved). Following this, the paper presents two different models of the relationship between qualitative and quantitative research. The enhancement model (which assumes that qualitative research findings add something extra to the findings of quantitative research) suggests three related "roles" for qualitative research: generating hypotheses to be tested by quantitative research, helping to construct more sophisticated measures of social phenomena, and explaining unexpected results from quantitative research. In contrast, the Epistemological Model suggests that qualitative research is equal to but different from quantitative research, making a unique contribution by: researching parts other research approaches cannot reach, increasing understanding by adding conceptual and theoretical depth to knowledge, shifting the balance of power between researchers and researched, and challenging traditional epidemiological ways of "knowing" the social world. The paper illustrates these different types of contribution with examples of qualitative research and finally discusses ways in which the "trustworthiness" of qualitative research can be assessed.
Preschool Teachers' Views about Classroom Management Models
ERIC Educational Resources Information Center
Sahin-Sak, Ikbal Tuba; Sak, Ramazan; Tezel-Sahin, Fatma
2018-01-01
This survey-based quantitative study investigates 310 Turkish preschool teachers' views about classroom management, using the following six models of disciplinary strategy: behavioral change theory, Dreikurs' social discipline model, Canter's assertive discipline model, the Glasser model of discipline, Kounin's model, and Gordon's teacher…
Experimental and numerical characterization of the flame of gaseous synthetic fuels
NASA Astrophysics Data System (ADS)
Ouimette, Pascale
The goal of this research is to characterize experimentally and numerically laminar flames of syngas fuels made of hydrogen (H2), carbon monoxide (CO), and carbon dioxide (CO2). More specifically, the secondary objectives are: 1) to understand the effects of CO2 concentration and H2/CO ratio on NOx emissions, flame temperature, visible flame height, and flame appearance; 2) to analyze the influence of H2/CO ratio on the flame structure; and 3) to compare and validate different H2/CO kinetic mechanisms used in a CFD (computational fluid dynamics) model over different H2/CO ratios. Thus, the present thesis is divided in three chapters, each one corresponding to a secondary objective. For the first part, the experiments led to the conclusion that adding CO2 diminishes flame temperature and EINOx for all equivalence ratios, while increasing the H2/CO ratio has no influence on flame temperature but increases EINOx for equivalence ratios lower than 2. Concerning flame appearance, a low CO2 concentration in the fuel or a high H2/CO ratio gives the flame an orange color, which is explained by a high level of CO in the combustion by-products. The observed constant flame temperature with the addition of CO, which has a higher adiabatic flame temperature, is mainly due to the increased heat loss through radiation by CO2. Because NOx emissions of H2/CO/CO2 flames are mainly a function of flame temperature, which is a function of the H2/CO ratio, the rest of the thesis concentrates on measuring and predicting species in the flame, as a good prediction of species and heat release will make it possible to predict NOx emissions. Thus, for the second part, different H2/CO fuels are tested and major species are measured by Raman spectroscopy. Concerning major species, the maximal measured H2O concentration decreases with the addition of CO to the fuel, while the central CO2 concentration increases, as expected. However, at 20% of the visible flame height and for all fuels tested herein, the measured CO2 concentration is lower than its stoichiometric value, while the measured H2O has already reached its stoichiometric concentration. The slow chemical reactions necessary to produce CO2 compared to the ones forming H2O could explain this difference. For the third part, a numerical model is created for a partially premixed flame of 50% H2 / 50% CO. This model compares different combustion mechanisms and shows that a reduced kinetic mechanism reduces simulation times while conserving the quality of results of more complex kinetic schemes. This numerical model, which includes radiation heat losses, is also validated for a large range of fuels going from 100% H2 to 5% H2 / 95% CO. The most important recommendation of this work is to include a NOx mechanism in the numerical model in order to eventually determine an optimal fuel. It would also be necessary to validate the model over a wide range of different parameters such as equivalence ratio, initial temperature and initial pressure.
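A hedged sketch of the kind of equilibrium calculation that sits behind such comparisons: constant-pressure adiabatic flame temperatures of H2/CO/CO2 blends in air, computed here with Cantera using the GRI-3.0 mechanism as a stand-in (not one of the H2/CO mechanisms validated in the thesis).

```python
import cantera as ct

def adiabatic_T(fuel, phi=1.0, T0=300.0, p=ct.one_atm):
    """Constant-pressure equilibrium flame temperature for a fuel mole-fraction dict."""
    gas = ct.Solution("gri30.yaml")                    # GRI-3.0 used as a stand-in mechanism
    gas.set_equivalence_ratio(phi, fuel, {"O2": 1.0, "N2": 3.76})
    gas.TP = T0, p
    gas.equilibrate("HP")                              # adiabatic, constant-pressure equilibrium
    return gas.T

for fuel in ({"H2": 0.5, "CO": 0.5},
             {"H2": 0.5, "CO": 0.3, "CO2": 0.2}):
    print(fuel, f"-> Tad = {adiabatic_T(fuel):.0f} K")
```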
Predicting the activity of drugs for a group of imidazopyridine anticoccidial compounds.
Si, Hongzong; Lian, Ning; Yuan, Shuping; Fu, Aiping; Duan, Yun-Bo; Zhang, Kejun; Yao, Xiaojun
2009-10-01
Gene expression programming (GEP) is a novel machine learning technique. The GEP is used to build nonlinear quantitative structure-activity relationship model for the prediction of the IC(50) for the imidazopyridine anticoccidial compounds. This model is based on descriptors which are calculated from the molecular structure. Four descriptors are selected from the descriptors' pool by heuristic method (HM) to build multivariable linear model. The GEP method produced a nonlinear quantitative model with a correlation coefficient and a mean error of 0.96 and 0.24 for the training set, 0.91 and 0.52 for the test set, respectively. It is shown that the GEP predicted results are in good agreement with experimental ones.
Pradeep, Prachi; Povinelli, Richard J; Merrill, Stephen J; Bozdag, Serdar; Sem, Daniel S
2015-04-01
The availability of large in vitro datasets enables better insight into the mode of action of chemicals and better identification of potential mechanism(s) of toxicity. Several studies have shown that not all in vitro assays can contribute as equal predictors of in vivo carcinogenicity for development of hybrid Quantitative Structure Activity Relationship (QSAR) models. We propose two novel approaches for the use of mechanistically relevant in vitro assay data in the identification of relevant biological descriptors and development of Quantitative Biological Activity Relationship (QBAR) models for carcinogenicity prediction. We demonstrate that in vitro assay data can be used to develop QBAR models for in vivo carcinogenicity prediction via two case studies corroborated with firm scientific rationale. The case studies demonstrate the similarities between QBAR and QSAR modeling in: (i) the selection of relevant descriptors to be used in the machine learning algorithm, and (ii) the development of a computational model that maps chemical or biological descriptors to a toxic endpoint. The results of both the case studies show: (i) improved accuracy and sensitivity which is especially desirable under regulatory requirements, and (ii) overall adherence with the OECD/REACH guidelines. Such mechanism based models can be used along with QSAR models for prediction of mechanistically complex toxic endpoints. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
RANS and potential flow coupling algorithms
NASA Astrophysics Data System (ADS)
Gallay, Sylvain
In the aircraft development process, the selected solution must satisfy numerous criteria in many domains, for example structures, aerodynamics, stability and control, performance, and safety, while respecting strict schedules and minimizing costs. Candidate geometries are numerous in the early product definition and preliminary design stages, and multidisciplinary optimization environments are being developed by the various aerospace industries. Different methods involving different levels of modeling are needed for the different phases of project development. During the definition and preliminary design phases, fast methods are needed to study the candidates efficiently. Developing methods that improve the accuracy of existing methods while keeping computational cost low provides a higher level of fidelity in the early phases of project development and thus greatly reduces the associated risks. In aerodynamics, the development of viscous/inviscid coupling algorithms makes it possible to turn linear inviscid computational methods into nonlinear methods that account for viscous effects. These methods thereby make it possible to characterize the viscous flow over configurations and to predict, among other things, stall mechanisms and the position of shock waves on lifting surfaces. This thesis focuses on the coupling between a three-dimensional potential flow method and two-dimensional viscous section data. Existing methods are implemented and their limits identified. An original method is then developed and validated. Results on an elliptic wing demonstrate the capability of the algorithm at high angles of attack and in the post-stall region. The coupling algorithm was compared with higher-fidelity data on configurations taken from the literature. A fuselage model based on empirical relations and RANS simulations was tested and validated. The lift, drag and pitching moment coefficients, as well as the pressure coefficients extracted along the span, showed good agreement with wind tunnel data and RANS models for transonic configurations. A high-lift configuration made it possible to study the modeling of high-lift surfaces in the potential flow method, demonstrating that camber can be taken into account solely through the viscous data.
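As a heavily simplified, assumed illustration of the coupling idea (a 3D inviscid wing solution corrected with 2D viscous section data), a fixed-point iteration for an elliptic wing, where lifting-line theory gives a uniform induced angle CL/(pi*AR) and the section lift comes from a hypothetical nonlinear 2D viscous polar; this is not the thesis's algorithm, only a sketch of the general alpha-coupling mechanism.

```python
import math

def cl_2d_viscous(alpha_eff_deg):
    """Hypothetical 2D viscous lift curve: linear up to stall, then gently decreasing."""
    a_stall = 14.0
    if alpha_eff_deg <= a_stall:
        return 0.11 * alpha_eff_deg
    return 0.11 * a_stall - 0.02 * (alpha_eff_deg - a_stall)

def coupled_CL(alpha_deg, aspect_ratio=8.0, relax=0.3, tol=1e-8):
    """Fixed-point viscous/inviscid coupling for an elliptic wing (illustrative only)."""
    CL = 0.0
    for _ in range(500):
        alpha_i = math.degrees(CL / (math.pi * aspect_ratio))   # induced angle, deg
        CL_new = cl_2d_viscous(alpha_deg - alpha_i)             # 2D data at the effective angle
        if abs(CL_new - CL) < tol:
            break
        CL += relax * (CL_new - CL)                             # under-relaxed update
    return CL

for a in (5.0, 10.0, 16.0, 20.0):
    print(f"alpha = {a:4.1f} deg -> CL = {coupled_CL(a):.3f}")
```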
NASA Astrophysics Data System (ADS)
Ecoffet, Robert; Maget, Vincent; Rolland, Guy; Lorfevre, Eric; Bourdarie, Sébastien; Boscher, Daniel
2016-07-01
We have developed a series of instruments for energetic particle measurements, associated with component test beds "MEX". The aim of this program is to check and improve space radiation engineering models and techniques. The first series of instruments, "ICARE", has flown on the MIR space station (SPICA mission), the ISS (SPICA-S mission) and the SAC-C low Earth polar orbiting satellite (ICARE mission 2001-2011) in cooperation with the Argentinian space agency CONAE. A second series of instruments, "ICARE-NG", was and is flown as: - CARMEN-1 mission on CONAE's SAC-D, 650 km, 98°, 2011-2015, along with three "SODAD" space micro-debris detectors - CARMEN-2 mission on the JASON-2 satellite (CNES, JPL, EUMETSAT, NOAA), 1336 km, 66°, 2008-now, along with JAXA's LPT energetic particle detector - CARMEN-3 mission on the JASON-3 satellite in the same orbit as JASON-2, launched 17 January 2016, along with a plasma detector "AMBRE", and JAXA's LPT again. The ICARE-NG is a spectrometer composed of a set of three fully depleted silicon solid state detectors used in single and coincident mode. The on-board measurements consist of accumulating energy loss spectra in the detectors over a programmable accumulation period. The spectra are generated through signal amplitude classification using 8 bit ADCs, resulting in 128/256-channel histograms. The discriminator reference levels, amplifier gain and accumulation time for the spectra are programmable to provide for possible on-board tuning optimization. Ground level calibrations have been made at ONERA-DESP using a radioactive source emitting alpha particles in order to determine the exact correspondence between channel number and particle energy. To obtain the response functions to particles, a detailed sectoring analysis of the satellite associated with GEANT-4/MCNP-X calculations has been performed to characterize the geometrical factors of each detector for p+ as well as for e- at different energies. The component test bed "MEX" is equipped with two different types of active dosimeters, P-MOS silicon dosimeters and OSL (optically stimulated luminescence). These dosimeters provide independent measurements of ionizing and displacement damage doses and consolidate the spectrometers' observations. The data sets obtained cover more than one solar cycle. Dynamics of the radiation belts and effects of solar particle events, coronal mass ejections and coronal holes were observed. Spectrometer measurements and dosimeter readings were used to evaluate current engineering models, and helped in developing improved ones, along with "space weather" radiation belt indices. The presentation will provide a comprehensive review of detector features and mission results.
NASA Astrophysics Data System (ADS)
Kamli, Emna
High-frequency (HF) radars measure surface ocean currents with a range of up to 200 kilometres and a resolution on the order of one kilometre. The aim of this study is to characterize the performance of HF radars, in terms of spatial coverage, for measuring surface currents in the partial presence of sea ice. To do so, current measurements taken during the winter of 2013 by two CODAR-type radars on the south shore of the lower St. Lawrence Estuary, and by one WERA-type radar on the north shore, were used. First, the mean daily area of the zone where currents are measured by each radar was compared with the energy of the Bragg waves computed from the raw acceleration data provided by a buoy moored within the zone covered by the radars. The coverage of the CODARs depends on the Bragg energy density, whereas the coverage of the WERA is practically insensitive to it. A fetch model called GENER was forced with the wind speed predicted by Environment Canada's GEM model to estimate the significant wave height and the modal wave period. From these parameters, the Bragg wave energy density during the winter was evaluated using the theoretical Bretschneider spectrum. These results make it possible to establish the normal coverage of each radar in the absence of sea ice. The sea ice concentration, predicted by the Canadian operational ice-ocean forecasting system, was averaged over the different wind fetches according to the daily mean direction of the waves predicted by GENER. Second, the relationship between the ratio of the daily coverages obtained during the winter of 2013 to the normal coverage of each radar, on the one hand, and the daily mean sea ice concentration, on the other, was established. The coverage ratio decreases with increasing sea ice concentration for both types of radar, but for an ice concentration of 20% the WERA coverage is reduced by 34%, whereas for the CODARs it is reduced by 67%. The empirical relationships established between HF radar coverage and the environmental parameters (wind and sea ice) will make it possible to predict the coverage that HF radars could provide if installed in other regions subject to the seasonal presence of sea ice.
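A minimal sketch (assumed formulation, hypothetical sea state) of the spectral step described above: evaluating the Bretschneider spectral density at the Bragg wave frequency of an HF radar, given a significant wave height and modal period such as those produced by the fetch model.

```python
import math

def bragg_frequency(radar_freq_hz):
    """Frequency of the deep-water Bragg wave (wavelength = half the radar wavelength)."""
    g, c = 9.81, 3.0e8
    wavelength_bragg = (c / radar_freq_hz) / 2.0
    k = 2.0 * math.pi / wavelength_bragg
    return math.sqrt(g * k) / (2.0 * math.pi)

def bretschneider(f, hs, tp):
    """Bretschneider spectral density S(f) [m^2 s] for significant height hs and modal period tp."""
    fp = 1.0 / tp
    return (5.0 / 16.0) * hs**2 * fp**4 / f**5 * math.exp(-1.25 * (fp / f) ** 4)

# Example: 12.5 MHz radar (hypothetical operating frequency) and a hypothetical sea state
fb = bragg_frequency(12.5e6)
print(f"Bragg wave frequency: {fb:.3f} Hz")
print(f"S(f_B) = {bretschneider(fb, hs=0.8, tp=5.0):.4f} m^2 s")
```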
Characterization of land use in urban areas using radar imagery
NASA Astrophysics Data System (ADS)
Codjia, Claude
This study aims to test the relevance of medium- and high-resolution SAR images for characterizing the types of land use in urban areas. To this end, we relied on textural approaches based on second-order statistics. Specifically, we look for the texture parameters most relevant for discriminating urban objects. We used, in this regard, Radarsat-1 in fine polarization mode and Radarsat-2 in HH fine mode, in dual and quad polarization, and in ultrafine mode with HH polarization. The land-use classes sought were dense building, medium-density building, low-density building, industrial and institutional buildings, low-density vegetation, dense vegetation and water. We identified nine texture parameters for analysis, grouped in a first step into families according to their mathematical definitions. The similarity/dissimilarity parameters include Homogeneity, Contrast, the Inverse Difference Moment and Dissimilarity. The disorder parameters are Entropy and the Angular Second Moment. The Standard Deviation and Correlation are the dispersion parameters, and the Average forms a separate family. It is clear from the experiments that certain combinations of texture parameters from different families yield good classification results, while others produce kappa values of very little interest. Furthermore, although the use of several texture parameters improves the classifications, performance plateaus beyond three parameters. The calculation of correlations between the textures and their principal axes confirms these results. Despite the good performance of this approach based on the complementarity of texture parameters, systematic errors due to cardinal effects remain in the classifications. To overcome this problem, a radiometric compensation model was developed based on the radar cross section (SER). A radar simulation from the digital surface model of the environment allowed us to extract the building backscatter zones and to analyze the related backscatter. Thus, we were able to devise a strategy for compensating cardinal effects based solely on the responses of the objects according to their orientation with respect to the plane of illumination of the radar beam. It appeared that a compensation algorithm based on the radar cross section was appropriate. Some examples of the application of this algorithm to HH-polarized RADARSAT-2 images are presented as well. Application of this algorithm will allow considerable gains with regard to certain forms of automation (classification and segmentation) in radar imagery, thus generating a higher level of quality for visual interpretation. Application of this algorithm to RADARSAT-1 and RADARSAT-2 images with HH, HV, VH, and VV polarizations helped achieve considerable gains and eliminate most of the classification errors due to cardinal effects.
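The second-order (grey-level co-occurrence) texture parameters named above can be computed, for instance, with scikit-image; a minimal sketch on a synthetic image patch (not the RADARSAT data used in the study):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(1)
patch = rng.integers(0, 32, size=(64, 64), dtype=np.uint8)   # synthetic 5-bit image patch

# Co-occurrence matrix for a 1-pixel offset in four directions, then texture measures
glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2, np.pi, 3 * np.pi / 2],
                    levels=32, symmetric=True, normed=True)
for prop in ("homogeneity", "contrast", "dissimilarity", "energy", "correlation"):
    print(prop, np.round(graycoprops(glcm, prop).mean(), 4))
```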
Flow assignment model for quantitative analysis of diverting bulk freight from road to railway
Liu, Chang; Wang, Jiaxi; Xiao, Jie; Liu, Siqi; Wu, Jianping; Li, Jian
2017-01-01
Since railway transport possesses the advantages of high volume and low carbon emissions, diverting some freight from road to railway will help reduce the negative environmental impacts associated with transport. This paper develops a flow assignment model for quantitative analysis of diverting truck freight to railway. First, a general network which considers road transportation, railway transportation, handling and transferring is established according to all the steps in the whole transportation process. Then, general cost functions are formulated that embody the factors shippers pay attention to when choosing mode and path. The general functions contain the congestion cost on roads and the capacity constraints of railways and freight stations. Based on the general network and general cost functions, a user equilibrium flow assignment model is developed to simulate the flow distribution on the general network under the condition that all shippers choose transportation mode and path independently. Since the model is nonlinear and challenging, we adopt a method that uses tangent lines to construct an envelope curve to linearize it. Finally, a numerical example is presented to test the model and show the method of making a quantitative analysis of bulk freight modal shift between road and railway. PMID:28771536
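A hedged toy example of the user-equilibrium principle used in the model (flows distribute so that all used paths have equal generalized cost), solved by root-finding for one origin-destination pair with a congested road route and a rail route carrying handling and capacity-style costs; the cost functions and numbers are illustrative only, not the paper's.

```python
from scipy.optimize import brentq

DEMAND = 1000.0   # total freight flow to assign (units/day, hypothetical)

def road_cost(q):          # BPR-style congestion cost on the road route
    return 10.0 * (1.0 + 0.15 * (q / 600.0) ** 4)

def rail_cost(q):          # rail route with handling/transfer cost and a soft capacity term
    return 12.0 + 6.0 * (q / 800.0) ** 2

def cost_gap(q_road):
    return road_cost(q_road) - rail_cost(DEMAND - q_road)

q_road = brentq(cost_gap, 0.0, DEMAND)    # equilibrium: equal costs on both used routes
print(f"road: {q_road:.0f}, rail: {DEMAND - q_road:.0f}, cost: {road_cost(q_road):.2f}")
```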
Yoshioka, S; Matsuhana, B; Tanaka, S; Inouye, Y; Oshima, N; Kinoshita, S
2011-01-06
The structural colour of the neon tetra is distinguishable from those of, e.g., butterfly wings and bird feathers, because it can change in response to the light intensity of the surrounding environment. This fact clearly indicates the variability of the colour-producing microstructures. It has been known that an iridophore of the neon tetra contains a few stacks of periodically arranged light-reflecting platelets, which can cause multilayer optical interference phenomena. As a mechanism of the colour variability, the Venetian blind model has been proposed, in which the light-reflecting platelets are assumed to be tilted during colour change, resulting in a variation in the spacing between the platelets. In order to quantitatively evaluate the validity of this model, we have performed a detailed optical study of a single stack of platelets inside an iridophore. In particular, we have prepared a new optical system that can simultaneously measure both the spectrum and direction of the reflected light, which are expected to be closely related to each other in the Venetian blind model. The experimental results and detailed analysis are found to quantitatively verify the model.
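A minimal sketch (ideal-multilayer assumption, hypothetical refractive indices and thicknesses) of why tilting the platelets shifts the reflected colour in such a picture: the first-order constructive-interference wavelength depends on the effective spacing along the stack normal.

```python
import math

def peak_wavelength_nm(d_platelet, d_cytoplasm, tilt_deg,
                       n_platelet=1.83, n_cytoplasm=1.33):
    """First-order reflection peak of an ideal platelet/cytoplasm multilayer.

    Tilting the platelets by tilt_deg is assumed to shrink the effective
    cytoplasm spacing along the stack normal (Venetian-blind picture).
    """
    d_eff = d_cytoplasm * math.cos(math.radians(tilt_deg))
    return 2.0 * (n_platelet * d_platelet + n_cytoplasm * d_eff)

for tilt in (0, 15, 30):   # hypothetical platelet thickness 75 nm, spacing 120 nm
    print(f"tilt {tilt:2d} deg -> peak ~ {peak_wavelength_nm(75, 120, tilt):.0f} nm")
```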
Neuroergonomics: Quantitative Modeling of Individual, Shared, and Team Neurodynamic Information.
Stevens, Ronald H; Galloway, Trysha L; Willemsen-Dunlap, Ann
2018-06-01
The aim of this study was to use the same quantitative measure and scale to directly compare the neurodynamic information/organizations of individual team members with those of the team. Team processes are difficult to separate from those of individual team members due to the lack of quantitative measures that can be applied to both process sets. Second-by-second symbolic representations were created of each team member's electroencephalographic power, and quantitative estimates of their neurodynamic organizations were calculated from the Shannon entropy of the symbolic data streams. The information in the neurodynamic data streams of health care ( n = 24), submarine navigation ( n = 12), and high school problem-solving ( n = 13) dyads was separated into the information of each team member, the information shared by team members, and the overall team information. Most of the team information was the sum of each individual's neurodynamic information. The remaining team information was shared among the team members. This shared information averaged ~15% of the individual information, with momentary levels of 1% to 80%. Continuous quantitative estimates can be made from the shared, individual, and team neurodynamic information about the contributions of different team members to the overall neurodynamic organization of a team and the neurodynamic interdependencies among the team members. Information models provide a generalizable quantitative method for separating a team's neurodynamic organization into that of individual team members and that shared among team members.
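A minimal sketch of how such information quantities can be computed from second-by-second symbol streams (synthetic symbols, not the study's EEG-derived data): individual Shannon entropies, with the information shared between two team members estimated as their mutual information.

```python
import numpy as np
from collections import Counter

def entropy(symbols):
    """Shannon entropy (bits) of a discrete symbol stream."""
    counts = np.array(list(Counter(symbols).values()), float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def mutual_information(a, b):
    """Shared information (bits) between two synchronized symbol streams."""
    joint = [(x, y) for x, y in zip(a, b)]
    return entropy(a) + entropy(b) - entropy(joint)

rng = np.random.default_rng(2)
member1 = rng.integers(0, 3, 600)                     # synthetic EEG-power symbols
member2 = np.where(rng.random(600) < 0.3, member1,    # partly coupled second stream
                   rng.integers(0, 3, 600))
print(f"H1 = {entropy(member1):.2f} bits, H2 = {entropy(member2):.2f} bits, "
      f"shared = {mutual_information(member1, member2):.2f} bits")
```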
Simon, Ted W; Simons, S Stoney; Preston, R Julian; Boobis, Alan R; Cohen, Samuel M; Doerrer, Nancy G; Fenner-Crisp, Penelope A; McMullin, Tami S; McQueen, Charlene A; Rowlands, J Craig
2014-08-01
The HESI RISK21 project formed the Dose-Response/Mode-of-Action Subteam to develop strategies for using all available data (in vitro, in vivo, and in silico) to advance the next generation of chemical risk assessments. A goal of the Subteam is to enhance the existing Mode of Action/Human Relevance Framework and Key Events/Dose Response Framework (KEDRF) to make the best use of quantitative dose-response and timing information for Key Events (KEs). The resulting Quantitative Key Events/Dose-Response Framework (Q-KEDRF) provides a structured quantitative approach for systematic examination of the dose-response and timing of KEs resulting from a dose of a bioactive agent that causes a potential adverse outcome. Two concepts are described as aids to increasing the understanding of mode of action: Associative Events and Modulating Factors. These concepts are illustrated in two case studies: (1) cholinesterase inhibition by the pesticide chlorpyrifos, which illustrates the necessity of considering quantitative dose-response information when assessing the effect of a Modulating Factor, that is, enzyme polymorphisms in humans, and (2) estrogen-induced uterotrophic responses in rodents, which demonstrate how quantitative dose-response modeling of KEs, the understanding of temporal relationships between KEs, and a counterfactual examination of hypothesized KEs can determine whether they are Associative Events or true KEs.
Hakalahti, Minna; Faustini, Marco; Boissière, Cédric; Kontturi, Eero; Tammelin, Tekla
2017-09-11
Humidity is an effective means of inducing changes in the local architecture of two-dimensional surfaces assembled from nanoscaled biomaterials. Here, complementary surface-sensitive methods are used to collect explicit and precise experimental evidence on water vapor sorption into a (2,2,6,6-tetramethylpiperidin-1-yl)oxyl (TEMPO) oxidized cellulose nanofibril (CNF) thin film over the relative humidity (RH) range from 0 to 97%. Changes in the thickness and mass of the film due to water vapor uptake are tracked using spectroscopic ellipsometry and quartz crystal microbalance with dissipation monitoring, respectively. The experimental data are evaluated with the quantitative Langmuir/Flory-Huggins/clustering model and the Brunauer-Emmett-Teller model. The isotherms coupled with the quantitative models unveil distinct regions of predominant sorption modes: specific sorption of water molecules below 10% RH, multilayer build-up between 10 and 75% RH, and clustering of water molecules above 75% RH. The study reveals the sorption mechanisms underlying the well-known water uptake behavior of TEMPO oxidized CNF directly at the gas-solid interface.
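A minimal sketch of fitting the Brunauer-Emmett-Teller (BET) sorption model referenced above to gravimetric uptake data is shown below. The data points and starting parameter values are placeholders, not the measured CNF isotherm, and the fit is restricted to the low-to-intermediate RH range where the BET form is customarily applied.

```python
import numpy as np
from scipy.optimize import curve_fit

def bet_uptake(rh, monolayer, c):
    """BET isotherm: sorbed amount vs relative humidity (rh as a fraction of saturation)."""
    return monolayer * c * rh / ((1.0 - rh) * (1.0 - rh + c * rh))

# placeholder uptake data (mass fraction of water) up to 0.75 RH
rh = np.array([0.05, 0.10, 0.20, 0.30, 0.50, 0.65, 0.75])
uptake = np.array([0.015, 0.022, 0.032, 0.041, 0.062, 0.082, 0.098])

(monolayer, c), _ = curve_fit(bet_uptake, rh, uptake, p0=[0.02, 10.0])
print(f"monolayer capacity ~ {monolayer:.3f}, BET constant C ~ {c:.1f}")
```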
de Gramatica, Martina; Massacci, Fabio; Shim, Woohyun; Turhan, Uğur; Williams, Julian
2017-02-01
We analyze the issue of agency costs in aviation security by combining results from a quantitative economic model with a qualitative study based on semi-structured interviews. Our model extends previous principal-agent models by combining the traditional fixed and varying monetary responses to physical and cognitive effort with nonmonetary welfare and potentially transferable value of employees' own human capital. To provide empirical evidence for the tradeoffs identified in the quantitative model, we have undertaken an extensive interview process with regulators, airport managers, security personnel, and those tasked with training security personnel from an airport operating in a relatively high-risk state, Turkey. Our results indicate that the effectiveness of additional training depends on the mix of "transferable skills" and "emotional" buy-in of the security agents. Principals need to identify on which side of a critical tipping point their agents are to ensure that additional training, with attached expectations of the burden of work, aligns the incentives of employees with the principals' own objectives. © 2016 Society for Risk Analysis.
NASA Astrophysics Data System (ADS)
Kishcha, P.; Alpert, P.; Shtivelman, A.; Krichak, S. O.; Joseph, J. H.; Kallos, G.; Katsafados, P.; Spyrou, C.; Gobbi, G. P.; Barnaba, F.; Nickovic, S.; Pérez, C.; Baldasano, J. M.
2007-08-01
In this study, forecast errors in dust vertical distributions were analyzed. This was carried out by using quantitative comparisons between dust vertical profiles retrieved from lidar measurements over Rome, Italy, performed from 2001 to 2003, and those predicted by models. Three models were used: the four-particle-size Dust Regional Atmospheric Model (DREAM), the older one-particle-size version of the SKIRON model from the University of Athens (UOA), and the pre-2006 one-particle-size Tel Aviv University (TAU) model. SKIRON and DREAM are initialized on a daily basis using the dust concentration from the previous forecast cycle, while the TAU model initialization is based on the Total Ozone Mapping Spectrometer aerosol index (TOMS AI). The quantitative comparison shows that (1) the use of four-particle-size bins in the dust modeling instead of only one-particle-size bins improves dust forecasts; (2) cloud presence could contribute to noticeable dust forecast errors in SKIRON and DREAM; and (3) as far as the TAU model is concerned, its forecast errors were mainly caused by technical problems with TOMS measurements from the Earth Probe satellite. As a result, dust forecast errors in the TAU model could be significant even under cloudless conditions. The DREAM versus lidar quantitative comparisons at different altitudes show that the model predictions are more accurate in the middle part of dust layers than in the top and bottom parts of dust layers.
Primdahl, Jørgen; Vesterager, Jens Peter; Finn, John A; Vlahos, George; Kristensen, Lone; Vejre, Henrik
2010-06-01
Agri-Environment Schemes (AES) to maintain or promote environmentally-friendly farming practices were implemented on about 25% of all agricultural land in the EU by 2002. This article analyses and discusses the actual and potential use of impact models in supporting the design, implementation and evaluation of AES. Impact models identify and establish the causal relationships between policy objectives and policy outcomes. We review and discuss the role of impact models at different stages in the AES policy process, and present results from a survey of impact models underlying 60 agri-environmental schemes in seven EU member states. We distinguished among three categories of impact models (quantitative, qualitative or common sense), depending on the degree of evidence in the formal scheme description, additional documents, or key person interviews. The categories of impact models used mainly depended on whether scheme objectives were related to natural resources, biodiversity or landscape. A higher proportion of schemes dealing with natural resources (primarily water) were based on quantitative impact models, compared to those concerned with biodiversity or landscape. Schemes explicitly targeted either on particular parts of individual farms or specific areas tended to be based more on quantitative impact models compared to whole-farm schemes and broad, horizontal schemes. We conclude that increased and better use of impact models has significant potential to improve efficiency and effectiveness of AES. (c) 2009 Elsevier Ltd. All rights reserved.
Lu, Yongtao; Engelke, Klaus; Glueer, Claus-C; Morlock, Michael M; Huber, Gerd
2014-11-01
Quantitative computed tomography-based finite element modeling is a promising clinical tool for the prediction of bone strength. However, quantitative computed tomography-based finite element models have been created from image datasets with different image voxel sizes. The aim of this study was to investigate whether image voxel size influences the finite element models. In all, 12 thoracolumbar vertebrae were scanned prior to autopsy (in situ) using two different quantitative computed tomography scan protocols, which resulted in image datasets with two different voxel sizes (0.29 × 0.29 × 1.3 mm³ vs 0.18 × 0.18 × 0.6 mm³). Eight of them were scanned after autopsy (in vitro) and the datasets were reconstructed with two voxel sizes (0.32 × 0.32 × 0.6 mm³ vs 0.18 × 0.18 × 0.3 mm³). Finite element models with a cuboid volume of interest extracted from the vertebral cancellous part were created, and inhomogeneous bilinear bone properties were defined. Axial compression was simulated. No effect of voxel size was detected on the apparent bone mineral density for either the in situ or the in vitro case. However, the apparent modulus and yield strength showed significant differences between the two voxel sizes in both group pairs (in situ and in vitro). In conclusion, the image voxel size may have to be considered when the finite element voxel modeling technique is used in clinical applications. © IMechE 2014.
Fang, Jiansong; Pang, Xiaocong; Wu, Ping; Yan, Rong; Gao, Li; Li, Chao; Lian, Wenwen; Wang, Qi; Liu, Ai-lin; Du, Guan-hua
2016-05-01
A dataset of 67 berberine derivatives for the inhibition of butyrylcholinesterase (BuChE) was studied based on a combination of quantitative structure-activity relationship models, molecular docking, and molecular dynamics methods. First, a series of berberine derivatives were reported, and their inhibitory activities toward butyrylcholinesterase (BuChE) were evaluated. In the 2D quantitative structure-activity relationship studies, the best model, built by partial least squares, had a conventional correlation coefficient for the training set (R²) of 0.883, a cross-validation correlation coefficient (Q²cv) of 0.777, and a conventional correlation coefficient for the test set (R²pred) of 0.775. The model was also confirmed by Y-randomization examination. In addition, molecular docking and molecular dynamics simulation were performed to better elucidate the inhibitory mechanism of three typical berberine derivatives (berberine, C2, and C55) toward BuChE. The predicted binding free energy results were consistent with the experimental data and showed that the van der Waals energy term (ΔEvdw) difference played the most important role in differentiating the activity among the three inhibitors (berberine, C2, and C55). The developed quantitative structure-activity relationship models provide details on the fine relationship linking structure and activity and offer clues for structural modifications, and the molecular simulation helps to understand the inhibitory mechanism of the three typical inhibitors. In conclusion, the results of this study provide useful clues for new drug design and discovery of BuChE inhibitors from berberine derivatives. © 2015 John Wiley & Sons A/S.
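A minimal sketch of the partial least squares workflow described above, using scikit-learn, is given below. The descriptor matrix and activity values are random placeholders rather than the 67-compound berberine dataset; the point is only to show how R², cross-validated Q², and test-set R² of a PLS model are typically computed.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(67, 20))                               # placeholder 2D descriptors
y = X[:, :3].sum(axis=1) + rng.normal(scale=0.3, size=67)   # placeholder activity values

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

pls = PLSRegression(n_components=3)
pls.fit(X_train, y_train)

r2_train = r2_score(y_train, pls.predict(X_train).ravel())   # R^2 of the training set
q2_cv = r2_score(y_train, cross_val_predict(pls, X_train, y_train, cv=5).ravel())  # Q^2
r2_pred = r2_score(y_test, pls.predict(X_test).ravel())      # R^2 of the test set
print(r2_train, q2_cv, r2_pred)
```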
Growth of wormlike micelles in nonionic surfactant solutions: Quantitative theory vs. experiment.
Danov, Krassimir D; Kralchevsky, Peter A; Stoyanov, Simeon D; Cook, Joanne L; Stott, Ian P; Pelan, Eddie G
2018-06-01
Despite the considerable advances of molecular-thermodynamic theory of micelle growth, agreement between theory and experiment has been achieved only in isolated cases. A general theory that can provide a self-consistent quantitative description of the growth of wormlike micelles in mixed surfactant solutions, including the experimentally observed high peaks in viscosity and aggregation number, is still missing. As a step toward the creation of such a theory, here we consider the simplest system: nonionic wormlike surfactant micelles from polyoxyethylene alkyl ethers, CiEj. Our goal is to construct a molecular-thermodynamic model that is in agreement with the available experimental data. For this goal, we systematized data for the micelle mean mass aggregation number, from which the micelle growth parameter was determined at various temperatures. None of the available models can give a quantitative description of these data. We constructed a new model, which is based on theoretical expressions for the interfacial-tension, headgroup-steric and chain-conformation components of micelle free energy, along with appropriate expressions for the parameters of the model, including their temperature and curvature dependencies. Special attention was paid to the surfactant chain-conformation free energy, for which a new, more general formula was derived. As a result, relatively simple theoretical expressions are obtained. All parameters that enter these expressions are known, which facilitates the theoretical modeling of micelle growth for various nonionic surfactants in excellent agreement with the experiment. The constructed model can serve as a basis that can be further upgraded to obtain a quantitative description of micelle growth in more complicated systems, including binary and ternary mixtures of nonionic, ionic and zwitterionic surfactants, which determines the viscosity and stability of various formulations in personal-care and household detergency. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Mebarki, Fouzia
This study investigates the possibility of using thermoplastic-matrix composite materials for electrical applications such as ignition-system supports in automotive engines. We focus in particular on composites based on recycled polyethylene terephthalate (PET). Conventional insulating materials such as PET cannot satisfy all requirements. Introducing reinforcements such as glass fibers and mica can improve the mechanical characteristics of these materials. However, this improvement may be accompanied by a reduction in electrical properties, especially since these materials must operate under very severe thermal and electrical stresses. To estimate the service life of these insulators, accelerated aging tests were performed at a frequency of 300 Hz over a temperature range from room temperature to 140°C. The high-temperature study will determine the service temperature of the candidate materials. Dielectric breakdown tests were carried out on a large number of samples according to the ASTM D-149 standard for dielectric strength measurements of solid insulators. These tests made it possible to detect problematic samples and to verify the quality of these solid insulators. The knowledge gained from this analysis was used to predict the in-service performance of the materials and will allow the company Groupe Lavergne to improve existing formulations and subsequently develop a material with electrical and thermal properties suited to this type of application.
NASA Astrophysics Data System (ADS)
Daran-Daneau, Cyril
In order to answer the energy needs of the future, insulation, which is the central piece of high-voltage equipment, has to be reinvented. Nanodielectrics seem to promise a major technological breakthrough. Based on nanocomposites with a linear low-density polyethylene matrix reinforced by nano-clays and manufactured from a commercial masterbatch, the present thesis aims to characterise the accuracy of measurement techniques applied to nanodielectrics as well as the dielectric properties of these materials. Thus, dielectric spectroscopy accuracy in both the frequency and time domains is analysed, with a specific emphasis on the impact of gold sputtering of the samples and on the transposition of measurements from the time domain to the frequency domain. Also, when measuring dielectric strength, the significant role of the surrounding medium and sample thickness in the variation of the alpha scale factor is shown and analysed in relation to the presence of surface partial discharges. Taking into account these limits, and for different nanoparticle compositions, complex permittivity as a function of frequency, as well as linearity and conductivity as a function of applied electric field, is studied with respect to the role that nanometric interfaces seem to play. Similarly, dielectric strength variation as a function of nano-clay content is investigated with respect to the partial discharge resistance improvement that seems to be induced by nanoparticle addition. Finally, an opening towards nanostructuration of underground cable insulation is proposed, considering on one hand the dielectric characterisation of polyethylene-matrix nanodielectrics reinforced by nano-clays or nano-silica and on the other hand a succinct cost analysis. Keywords: nanodielectric, linear low density polyethylene, nanoclays, dielectric spectroscopy, dielectric breakdown
Distribution of lod scores in oligogenic linkage analysis.
Williams, J T; North, K E; Martin, L J; Comuzzie, A G; Göring, H H; Blangero, J
2001-01-01
In variance component oligogenic linkage analysis it can happen that the residual additive genetic variance is estimated at its lower bound of zero when estimating the effect of the ith quantitative trait locus. Using quantitative trait Q1 from the Genetic Analysis Workshop 12 simulated general population data, we compare the observed lod scores from oligogenic linkage analysis with the empirical lod score distribution under a null model of no linkage. We find that zero residual additive genetic variance in the null model alters the usual distribution of the likelihood-ratio statistic.
Developmental toxicity is a relevant endpoint for the comprehensive assessment of human health risk from chemical exposure. However, animal developmental toxicity studies remain unavailable for many environmental contaminants due to the complexity and cost of these types of analy...
USDA-ARS's Scientific Manuscript database
Standardized methods are often used to assess the likelihood of a human-health effect from exposure to a specified hazard, and inform opinions and decisions about risk management and communication. A Quantitative Microbial Risk Assessment (QMRA) is specifically adapted to detail potential human-heal...
Sunderland, John J; Christian, Paul E
2015-01-01
The Clinical Trials Network (CTN) of the Society of Nuclear Medicine and Molecular Imaging (SNMMI) operates a PET/CT phantom imaging program using the CTN's oncology clinical simulator phantom, designed to validate scanners at sites that wish to participate in oncology clinical trials. Since its inception in 2008, the CTN has collected 406 well-characterized phantom datasets from 237 scanners at 170 imaging sites covering the spectrum of commercially available PET/CT systems. The combined and collated phantom data describe a global profile of quantitative performance and variability of PET/CT data used in both clinical practice and clinical trials. Individual sites filled and imaged the CTN oncology PET phantom according to detailed instructions. Standard clinical reconstructions were requested and submitted. The phantom itself contains uniform regions suitable for scanner calibration assessment, lung fields, and 6 hot spheric lesions with diameters ranging from 7 to 20 mm at a 4:1 contrast ratio with primary background. The CTN Phantom Imaging Core evaluated the quality of the phantom fill and imaging and measured background standardized uptake values to assess scanner calibration and maximum standardized uptake values of all 6 lesions to review quantitative performance. Scanner make-and-model-specific measurements were pooled and then subdivided by reconstruction to create scanner-specific quantitative profiles. Different makes and models of scanners predictably demonstrated different quantitative performance profiles including, in some cases, small calibration bias. Differences in site-specific reconstruction parameters increased the quantitative variability among similar scanners, with postreconstruction smoothing filters being the most influential parameter. Quantitative assessment of this intrascanner variability over this large collection of phantom data gives, for the first time, estimates of reconstruction variance introduced into trials from allowing trial sites to use their preferred reconstruction methodologies. Predictably, time-of-flight-enabled scanners exhibited less size-based partial-volume bias than non-time-of-flight scanners. The CTN scanner validation experience over the past 5 y has generated a rich, well-curated phantom dataset from which PET/CT make-and-model and reconstruction-dependent quantitative behaviors were characterized for the purposes of understanding and estimating scanner-based variances in clinical trials. These results should make it possible to identify and recommend make-and-model-specific reconstruction strategies to minimize measurement variability in cancer clinical trials. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
QTest: Quantitative Testing of Theories of Binary Choice.
Regenwetter, Michel; Davis-Stober, Clintin P; Lim, Shiau Hong; Guo, Ying; Popova, Anna; Zwilling, Chris; Cha, Yun-Shil; Messner, William
2014-01-01
The goal of this paper is to make modeling and quantitative testing accessible to behavioral decision researchers interested in substantive questions. We provide a novel, rigorous, yet very general, quantitative diagnostic framework for testing theories of binary choice. This permits the nontechnical scholar to proceed far beyond traditionally rather superficial methods of analysis, and it permits the quantitatively savvy scholar to triage theoretical proposals before investing effort into complex and specialized quantitative analyses. Our theoretical framework links static algebraic decision theory with observed variability in behavioral binary choice data. The paper is supplemented with a custom-designed public-domain statistical analysis package, the QTest software. We illustrate our approach with a quantitative analysis using published laboratory data, including tests of novel versions of "Random Cumulative Prospect Theory." A major asset of the approach is the potential to distinguish decision makers who have a fixed preference and commit errors in observed choices from decision makers who waver in their preferences.
Cardiff, Robert D; Hubbard, Neil E; Engelberg, Jesse A; Munn, Robert J; Miller, Claramae H; Walls, Judith E; Chen, Jane Q; Velásquez-García, Héctor A; Galvez, Jose J; Bell, Katie J; Beckett, Laurel A; Li, Yue-Ju; Borowsky, Alexander D
2013-01-01
Quantitative Image Analysis (QIA) of digitized whole slide images for morphometric parameters and immunohistochemistry of breast cancer antigens was used to evaluate the technical reproducibility, biological variability, and intratumoral heterogeneity in three transplantable mouse mammary tumor models of human breast cancer. The relative preservation of structure and immunogenicity of the three mouse models and three human breast cancers was also compared when fixed with representatives of four distinct classes of fixatives. The three mouse mammary tumor cell models were an ER+/PR+ model (SSM2), a Her2+ model (NDL), and a triple negative model (MET1). The four breast cancer antigens were ER, PR, Her2, and Ki67. The fixatives included examples of (1) strong cross-linkers, (2) weak cross-linkers, (3) coagulants, and (4) combination fixatives. Each parameter was quantitatively analyzed using modified Aperio Technologies ImageScope algorithms. Careful pre-analytical adjustments to the algorithms were required to provide accurate results. The QIA permitted rigorous statistical analysis of results and grading by rank order. The analyses suggested excellent technical reproducibility and confirmed biological heterogeneity within each tumor. The strong cross-linker fixatives, such as formalin, consistently ranked higher than weak cross-linker, coagulant and combination fixatives in both the morphometric and immunohistochemical parameters. PMID:23399853
A novel paradigm for cell and molecule interaction ontology: from the CMM model to IMGT-ONTOLOGY
2010-01-01
Background: Biology is moving fast toward the virtuous circle of other disciplines: from data to quantitative modeling and back to data. Models are usually developed by mathematicians, physicists, and computer scientists to translate qualitative or semi-quantitative biological knowledge into a quantitative approach. To eliminate semantic confusion between biology and other disciplines, it is necessary to have a list of the most important and frequently used concepts coherently defined. Results: We propose a novel paradigm for generating new concepts for an ontology, starting from a model rather than developing a database. We apply that approach to generate concepts for cell and molecule interaction starting from an agent-based model. This effort provides a solid infrastructure that is useful to overcome the semantic ambiguities that arise between biologists and mathematicians, physicists, and computer scientists when they interact in a multidisciplinary field. Conclusions: This effort represents the first attempt at linking molecule ontology with cell ontology, in IMGT-ONTOLOGY, the well-established ontology in immunogenetics and immunoinformatics, and a paradigm for life science biology. With the increasing use of models in biology and medicine, the need to link different levels, from molecules to cells to tissues and organs, is increasingly important. PMID:20167082
Quantitative systems toxicology
Bloomingdale, Peter; Housand, Conrad; Apgar, Joshua F.; Millard, Bjorn L.; Mager, Donald E.; Burke, John M.; Shah, Dhaval K.
2017-01-01
The overarching goal of modern drug development is to optimize therapeutic benefits while minimizing adverse effects. However, inadequate efficacy and safety concerns remain the major causes of drug attrition in clinical development. For the past 80 years, toxicity testing has consisted of evaluating the adverse effects of drugs in animals to predict human health risks. The U.S. Environmental Protection Agency recognized the need to develop innovative toxicity testing strategies and asked the National Research Council to develop a long-range vision and strategy for toxicity testing in the 21st century. The vision aims to reduce the use of animals and drug development costs through the integration of computational modeling and in vitro experimental methods that evaluate the perturbation of toxicity-related pathways. Towards this vision, collaborative quantitative systems pharmacology and toxicology (QSP/QST) modeling endeavors have been initiated amongst numerous organizations worldwide. In this article, we discuss how quantitative structure-activity relationship (QSAR), network-based, and pharmacokinetic/pharmacodynamic modeling approaches can be integrated into the framework of QST models. Additionally, we review the application of QST models to predict cardiotoxicity and hepatotoxicity of drugs throughout their development. Cell- and organ-specific QST models are likely to become an essential component of modern toxicity testing, and provide a solid foundation towards determining individualized therapeutic windows to improve patient safety. PMID:29308440
NASA Astrophysics Data System (ADS)
Wang, Lin; Cao, Xin; Ren, Qingyun; Chen, Xueli; He, Xiaowei
2018-05-01
Cerenkov luminescence imaging (CLI) is an imaging method that uses an optical imaging scheme to probe a radioactive tracer. Application of CLI with clinically approved radioactive tracers has opened an opportunity for translating optical imaging from preclinical to clinical applications. Such translation was further improved by developing an endoscopic CLI system. However, two-dimensional endoscopic imaging cannot identify accurate depth and obtain quantitative information. Here, we present an imaging scheme to retrieve the depth and quantitative information from endoscopic Cerenkov luminescence tomography, which can also be applied for endoscopic radio-luminescence tomography. In the scheme, we first constructed a physical model for image collection, and then a mathematical model for characterizing the luminescent light propagation from tracer to the endoscopic detector. The mathematical model is a hybrid light transport model combined with the 3rd order simplified spherical harmonics approximation, diffusion, and radiosity equations to warrant accuracy and speed. The mathematical model integrates finite element discretization, regularization, and primal-dual interior-point optimization to retrieve the depth and the quantitative information of the tracer. A heterogeneous-geometry-based numerical simulation was used to explore the feasibility of the unified scheme, which demonstrated that it can provide a satisfactory balance between imaging accuracy and computational burden.
Mapping quantitative trait loci for binary trait in the F2:3 design.
Zhu, Chengsong; Zhang, Yuan-Ming; Guo, Zhigang
2008-12-01
In the analysis of inheritance of quantitative traits with low heritability, an F2:3 design that genotypes plants in F2 and phenotypes plants in F2:3 progeny is often used in plant genetics. Although statistical approaches for mapping quantitative trait loci (QTL) in the F2:3 design have been well developed, those for binary traits of biological interest and economic importance are seldom addressed. In this study, an attempt was made to map binary trait loci (BTL) in the F2:3 design. The fundamental idea was: the F2 plants were genotyped, all phenotypic values of each F2:3 progeny were measured for the binary trait, and these binary trait values and the marker genotype information were used to detect BTL under the penetrance and liability models. The proposed method was verified by a series of Monte Carlo simulation experiments. These results showed that maximum likelihood approaches under the penetrance and liability models provide accurate estimates of the effects and locations of BTL with high statistical power, even at low heritability. Moreover, the penetrance model is as efficient as the liability model, and the F2:3 design is more efficient than the classical F2 design, even though only a single progeny is collected from each F2:3 family. With the maximum likelihood approaches under the penetrance and liability models developed in this study, we can map binary traits as we do quantitative traits in the F2:3 design.
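A minimal sketch of the penetrance-model idea is given below: assuming, for illustration, that the BTL genotype class of each F2 plant is known, penetrances are estimated by maximum likelihood from the binary phenotype counts of the F2:3 progeny and compared against a no-BTL null via a likelihood-ratio statistic. The genotype assignments and counts are synthetic, and the full method additionally infers genotypes from flanking markers.

```python
import numpy as np
from scipy.stats import chi2

# synthetic data: for each F2 plant, (BTL genotype class 0/1/2,
# number of affected F2:3 progeny, progeny family size)
families = [(0, 1, 20), (0, 2, 20), (1, 6, 20), (1, 8, 20), (2, 14, 20), (2, 15, 20)]

def loglik(penetrance_by_genotype):
    """Binomial log-likelihood of the affected counts given per-genotype penetrances."""
    ll = 0.0
    for g, k, n in families:
        f = penetrance_by_genotype[g]
        ll += k * np.log(f) + (n - k) * np.log(1.0 - f)
    return ll

# ML estimates under the penetrance model: per-genotype affected proportions
counts, sizes = np.zeros(3), np.zeros(3)
for g, k, n in families:
    counts[g] += k
    sizes[g] += n
f_hat = counts / sizes

# null model: a single common penetrance (no BTL effect)
f_null = counts.sum() / sizes.sum()

lr = 2.0 * (loglik(f_hat) - loglik([f_null] * 3))
print("penetrance estimates:", f_hat, "LR statistic:", lr, "p ~", chi2.sf(lr, df=2))
```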
Propagating Qualitative Values Through Quantitative Equations
NASA Technical Reports Server (NTRS)
Kulkarni, Deepak
1992-01-01
In most practical problems where traditional numeric simulation is not adequate, one needs to reason about a system with both qualitative and quantitative equations. In this paper, we address the problem of propagating qualitative values, represented as interval values, through quantitative equations. Previous research has produced exponential-time algorithms for approximate solution of the problem. These may not meet the stringent requirements of many real-time applications. This paper advances the state of the art by producing a linear-time algorithm that can propagate a qualitative value through a class of complex quantitative equations exactly and through arbitrary algebraic expressions approximately. The algorithm was found applicable to the Space Shuttle Reaction Control System model.
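A minimal sketch of propagating interval-valued qualitative values through a quantitative expression, using elementary interval arithmetic, is shown below. It illustrates the general idea rather than reproducing the paper's linear-time algorithm; note that when a variable appears more than once, plain interval arithmetic yields only approximate (over-wide) bounds, consistent with the exact/approximate distinction above.

```python
def i_add(a, b):
    """Interval addition."""
    return (a[0] + b[0], a[1] + b[1])

def i_mul(a, b):
    """Interval multiplication."""
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

def i_scale(c, a):
    """Multiply an interval by a constant."""
    return i_mul((c, c), a)

# qualitative values expressed as intervals, e.g. "roughly 2 to 3" and "small, nonnegative"
x = (2.0, 3.0)
y = (0.0, 0.5)

# propagate through the quantitative equation z = 4*x + x*y
z = i_add(i_scale(4.0, x), i_mul(x, y))
print(z)  # bounds on z consistent with the interval inputs
```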
Safe uses of Hill's model: an exact comparison with the Adair-Klotz model
2011-01-01
Background: The Hill function and the related Hill model are used frequently to study processes in the living cell. There are very few studies investigating the situations in which the model can be safely used. For example, it has been shown, at the mean field level, that the dose response curve obtained from a Hill model agrees well with the dose response curves obtained from a more complicated Adair-Klotz model, provided that the parameters of the Adair-Klotz model describe strongly cooperative binding. However, it has not been established whether such findings can be extended to other properties and non-mean field (stochastic) versions of the same, or other, models. Results: In this work a rather generic quantitative framework for approaching such a problem is suggested. The main idea is to focus on comparing the particle number distribution functions for Hill's and Adair-Klotz's models instead of investigating a particular property (e.g. the dose response curve). The approach is valid for any model that can be mathematically related to the Hill model. The Adair-Klotz model is used to illustrate the technique. One main and two auxiliary similarity measures were introduced to compare the distributions in a quantitative way. Both time dependent and equilibrium properties of the similarity measures were studied. Conclusions: A strongly cooperative Adair-Klotz model can be replaced by a suitable Hill model in such a way that any property computed from the two models, even one describing stochastic features, is approximately the same. The quantitative analysis showed that the boundaries of the regions in parameter space where the models behave in the same way exhibit a rather rich structure. PMID:21521501
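The mean-field comparison mentioned above can be sketched by evaluating both dose-response curves directly. The binding constants below are arbitrary illustrative values chosen so that the two-site Adair-Klotz binding is strongly cooperative; they are not taken from the paper.

```python
import numpy as np

def hill(x, K, n):
    """Fractional saturation from the Hill function."""
    return x**n / (K**n + x**n)

def adair_klotz_two_site(x, K1, K2):
    """Fractional saturation for a two-site Adair-Klotz model
    with stepwise binding constants K1 and K2."""
    return (K1 * x + 2.0 * K1 * K2 * x**2) / (2.0 * (1.0 + K1 * x + K1 * K2 * x**2))

ligand = np.logspace(-3, 2, 200)
y_ak = adair_klotz_two_site(ligand, K1=0.05, K2=50.0)            # strongly cooperative: K2 >> K1
y_hill = hill(ligand, K=1.0 / np.sqrt(0.05 * 50.0), n=1.8)        # Hill curve with matched midpoint

print("max |difference| between the two dose-response curves:",
      np.max(np.abs(y_ak - y_hill)))
```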
NASA Astrophysics Data System (ADS)
Wilson, J. P.; Fischer, W. W.
2010-12-01
Fossil plants provide useful proxies of Earth’s climate because plants are closely connected, through physiology and morphology, to the environments in which they lived. Recent advances in quantitative hydraulic models of plant water transport provide new insight into the history of climate by allowing fossils to speak directly to environmental conditions based on preserved internal anatomy. We report results of a quantitative hydraulic model applied to one of the earliest terrestrial plants preserved in three dimensions, the ~396 million-year-old vascular plant Asteroxylon mackei. This model combines equations describing the rate of fluid flow through plant tissues with detailed observations of plant anatomy; this allows quantitative estimates of two critical aspects of plant function. First and foremost, results from these models quantify the supply of water to evaporative surfaces; second, results describe the ability of plant vascular systems to resist tensile damage from extreme environmental events, such as drought or frost. This approach permits quantitative comparisons of functional aspects of Asteroxylon with other extinct and extant plants, informs the quality of plant-based environmental proxies, and provides concrete data that can be input into climate models. Results indicate that despite their small size, water transport cells in Asteroxylon could supply a large volume of water to the plant's leaves--even greater than cells from some later-evolved seed plants. The smallest Asteroxylon tracheids have conductivities exceeding 0.015 m^2 / MPa * s, whereas Paleozoic conifer tracheids do not reach this threshold until they are three times wider. However, this increase in conductivity came at the cost of little to no adaptations for transport safety, placing the plant’s vegetative organs in jeopardy during drought events. Analysis of the thickness-to-span ratio of Asteroxylon’s tracheids suggests that environmental conditions of reduced relative humidity (<20%) combined with elevated temperatures (>25°C) could cause sufficient cavitation to reduce hydraulic conductivity by 50%. This suggests that the Early Devonian environments that supported the earliest vascular plants were not subject to prolonged midseason droughts, or, alternatively, that the growing season was short. This places minimum constraints on water availability (e.g., groundwater hydration, relative humidity) in locations where Asteroxylon fossils are found; these environments must have had high relative humidities, comparable to tropical riparian environments. Given these constraints, biome-scale paleovegetation models that place early vascular plants distal to water sources can be revised to account for reduced drought tolerance. Paleoclimate proxies that treat early terrestrial plants as functionally interchangeable can incorporate physiological differences in a quantitatively meaningful way. Application of hydraulic models to fossil plants provides an additional perspective on the 475 million-year history of terrestrial photosynthetic environments and has potential to corroborate other plant-based paleoclimate proxies.
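A back-of-the-envelope check of the conductivity magnitude quoted above: the Hagen-Poiseuille lumen area-specific conductivity of an ideal cylindrical conduit is r²/(8μ). This is only the capillary-bundle part of a full hydraulic model (it ignores pit resistance and wall features), and the radius used below is an assumed illustrative value rather than a measured Asteroxylon dimension.

```python
# Hagen-Poiseuille lumen area-specific conductivity of a cylindrical conduit
viscosity_mpa_s = 1.0e-9   # water at ~20 C: 1.0e-3 Pa*s expressed as MPa*s
radius_m = 10e-6           # assumed tracheid lumen radius of 10 micrometres

k_specific = radius_m**2 / (8.0 * viscosity_mpa_s)   # units: m^2 / (MPa * s)
print(f"lumen-specific conductivity ~ {k_specific:.4f} m^2/(MPa*s)")
# ~0.0125 m^2/(MPa*s), the same order of magnitude as the 0.015 threshold quoted above
```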
Elayavilli, Ravikumar Komandur; Liu, Hongfang
2016-01-01
Computational modeling of biological cascades is of great interest to quantitative biologists. Biomedical text has been a rich source of quantitative information. Gathering quantitative parameters and values from biomedical text is one significant challenge in the early steps of computational modeling, as it involves huge manual effort. While automatically extracting such quantitative information from biomedical text may offer some relief, the lack of an ontological representation for a subdomain is an impediment to normalizing textual extractions to a standard representation. This may render textual extractions less meaningful to domain experts. In this work, we propose a rule-based approach to automatically extract relations involving quantitative data from biomedical text describing ion channel electrophysiology. We further translated the quantitative assertions extracted through text mining to a formal representation that may help in constructing an ontology for ion channel events, using a rule-based approach. We have developed the Ion Channel ElectroPhysiology Ontology (ICEPO) by integrating the information represented in closely related ontologies, such as the Cell Physiology Ontology (CPO) and the Cardiac Electro Physiology Ontology (CPEO), and the knowledge provided by domain experts. The rule-based system achieved an overall F-measure of 68.93% in extracting the quantitative data assertions on an independently annotated blind data set. We further made an initial attempt at formalizing the quantitative data assertions extracted from the biomedical text into a formal representation that offers the potential to facilitate the integration of text mining into an ontological workflow, a novel aspect of this study. This work is a case study in which we created a platform that provides formal interaction between ontology development and text mining. We have achieved partial success in extracting quantitative assertions from the biomedical text and formalizing them in an ontological framework. The ICEPO ontology is available for download at http://openbionlp.org/mutd/supplementarydata/ICEPO/ICEPO.owl.
Bayram, Jamil D; Zuabi, Shawki
2012-04-01
The interaction between the acute medical consequences of a Multiple Casualty Event (MCE) and the total medical capacity of the community affected determines if the event amounts to an acute medical disaster. There is a need for a comprehensive quantitative model in MCE that would account for both prehospital and hospital-based acute medical systems, leading to the quantification of acute medical disasters. Such a proposed model needs to be flexible enough in its application to accommodate a priori estimation as part of the decision-making process and a posteriori evaluation for total quality management purposes. The concept proposed by de Boer et al in 1989, along with the disaster metrics quantitative models proposed by Bayram et al on hospital surge capacity and prehospital medical response, were used as theoretical frameworks for a new comprehensive model, taking into account both prehospital and hospital systems, in order to quantify acute medical disasters. A quantitative model called the Acute Medical Severity Index (AMSI) was developed. AMSI is the proportion of the Acute Medical Burden (AMB) resulting from the event, compared to the Total Medical Capacity (TMC) of the community affected; AMSI = AMB/TMC. In this model, AMB is defined as the sum of critical (T1) and moderate (T2) casualties caused by the event, while TMC is a function of the Total Hospital Capacity (THC) and the medical rescue factor (R) accounting for the hospital-based and prehospital medical systems, respectively. Qualitatively, the authors define acute medical disaster as "a state after any type of Multiple Casualty Event where the Acute Medical Burden (AMB) exceeds the Total Medical Capacity (TMC) of the community affected." Quantitatively, an acute medical disaster has an AMSI value of more than one (AMB / TMC > 1). An acute medical incident has an AMSI value of less than one, without the need for medical surge. An acute medical emergency has an AMSI value of less than one with utilization of surge capacity (prehospital or hospital-based). An acute medical crisis has an AMSI value between 0.9 and 1, approaching the threshold for an actual medical disaster. A novel quantitative taxonomy in MCE has been proposed by modeling the Acute Medical Severity Index (AMSI). This model accounts for both hospital and prehospital systems, and quantifies acute medical disasters. Prospective applications of various components of this model are encouraged to further verify its applicability and validity.
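A minimal sketch of the AMSI taxonomy described above follows. The abstract defines AMB = T1 + T2 casualties and AMSI = AMB/TMC, but does not give the explicit functional form of TMC(THC, R); the product THC * R used here is a placeholder assumption for illustration only.

```python
def amsi_classification(t1, t2, thc, rescue_factor, surge_used):
    """Classify a Multiple Casualty Event with the Acute Medical Severity Index."""
    amb = t1 + t2                # Acute Medical Burden: critical (T1) + moderate (T2) casualties
    tmc = thc * rescue_factor    # placeholder form of Total Medical Capacity = f(THC, R)
    amsi = amb / tmc
    if amsi > 1.0:
        label = "acute medical disaster"
    elif amsi >= 0.9:
        label = "acute medical crisis"
    elif surge_used:
        label = "acute medical emergency"
    else:
        label = "acute medical incident"
    return amsi, label

print(amsi_classification(t1=40, t2=120, thc=200, rescue_factor=0.9, surge_used=True))
```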
Integrated urban systems model with multiple transportation supply agents.
DOT National Transportation Integrated Search
2012-10-01
This project demonstrates the feasibility of developing quantitative models that can forecast future networks under : current and alternative transportation planning processes. The current transportation planning process is modeled : based on empiric...
Parallel labeling experiments for pathway elucidation and (13)C metabolic flux analysis.
Antoniewicz, Maciek R
2015-12-01
Metabolic pathway models provide the foundation for quantitative studies of cellular physiology through the measurement of intracellular metabolic fluxes. For model organisms metabolic models are well established, with many manually curated genome-scale model reconstructions, gene knockout studies and stable-isotope tracing studies. However, for non-model organisms a similar level of knowledge is often lacking. Compartmentation of cellular metabolism in eukaryotic systems also presents significant challenges for quantitative (13)C-metabolic flux analysis ((13)C-MFA). Recently, innovative (13)C-MFA approaches have been developed based on parallel labeling experiments, the use of multiple isotopic tracers and integrated data analysis, that allow more rigorous validation of pathway models and improved quantification of metabolic fluxes. Applications of these approaches open new research directions in metabolic engineering, biotechnology and medicine. Copyright © 2015 Elsevier Ltd. All rights reserved.
Quantitative methods to direct exploration based on hydrogeologic information
Graettinger, A.J.; Lee, J.; Reeves, H.W.; Dethan, D.
2006-01-01
Quantitatively Directed Exploration (QDE) approaches based on information such as model sensitivity, input data covariance and model output covariance are presented. Seven approaches for directing exploration are developed, applied, and evaluated on a synthetic hydrogeologic site. The QDE approaches evaluate input information uncertainty, subsurface model sensitivity and, most importantly, output covariance to identify the next location to sample. Spatial input parameter values and covariances are calculated with the multivariate conditional probability calculation from a limited number of samples. A variogram structure is used during data extrapolation to describe the spatial continuity, or correlation, of subsurface information. Model sensitivity can be determined by perturbing input data and evaluating output response or, as in this work, sensitivities can be programmed directly into an analysis model. Output covariance is calculated by the First-Order Second Moment (FOSM) method, which combines the covariance of input information with model sensitivity. A groundwater flow example, modeled in MODFLOW-2000, is chosen to demonstrate the seven QDE approaches. MODFLOW-2000 is used to obtain the piezometric head and the model sensitivity simultaneously. The seven QDE approaches are evaluated based on the accuracy of the modeled piezometric head after information from a QDE sample is added. For the synthetic site used in this study, the QDE approach that identifies the location of hydraulic conductivity that contributes the most to the overall piezometric head variance proved to be the best method to quantitatively direct exploration. © IWA Publishing 2006.
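A minimal sketch of the First-Order Second Moment (FOSM) step described above: the output covariance is obtained by combining the input covariance with the model sensitivity (Jacobian). The sensitivity matrix and covariance values are illustrative placeholders, not site data.

```python
import numpy as np

# sensitivities of two model outputs (e.g. heads at two locations)
# to three uncertain inputs (e.g. hydraulic conductivities): d(output)/d(input)
jacobian = np.array([[0.8, 0.1, 0.05],
                     [0.2, 0.6, 0.30]])

# covariance of the input parameters (e.g. from the conditional-probability calculation)
input_cov = np.array([[0.50, 0.10, 0.00],
                      [0.10, 0.40, 0.05],
                      [0.00, 0.05, 0.30]])

# FOSM: output covariance = J * C_in * J^T
output_cov = jacobian @ input_cov @ jacobian.T
print("output variances:", np.diag(output_cov))
```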
Quantitative Decision Support Requires Quantitative User Guidance
NASA Astrophysics Data System (ADS)
Smith, L. A.
2009-12-01
Is it conceivable that models run on 2007 computer hardware could provide robust and credible probabilistic information for decision support and user guidance at the ZIP code level for sub-daily meteorological events in 2060? In 2090? Retrospectively, how informative would output from today’s models have proven in 2003? Or in the 1930s? Consultancies in the United Kingdom, including the Met Office, are offering services to “future-proof” their customers from climate change. How is a US- or European-based user or policy maker to determine the extent to which exciting new Bayesian methods are relevant here? Or when a commercial supplier is vastly overselling the insights of today’s climate science? How are policy makers and academic economists to make the closely related decisions facing them? How can we communicate deep uncertainty in the future at small length-scales without undermining the firm foundation established by climate science regarding global trends? Three distinct aspects of the communication of the uses of climate model output, targeting users and policy makers as well as other specialist adaptation scientists, are discussed. First, a brief scientific evaluation of the length and time scales at which climate model output is likely to become uninformative is provided, including a note on the applicability of the latest Bayesian methodology to current state-of-the-art general circulation model output. Second, a critical evaluation is offered of the language often employed in communication of climate model output, a language which accurately states that models are “better”, have “improved” and now “include” and “simulate” relevant meteorological processes, without clearly identifying where the current information is thought to be uninformative or misleading, both for the current climate and as a function of the state of each climate simulation. And thirdly, a general approach for evaluating the relevance of quantitative climate model output for a given problem is presented. Based on climate science, meteorology, and the details of the question in hand, this approach identifies necessary (never sufficient) conditions required for the rational use of climate model output in quantitative decision support tools. Inasmuch as climate forecasting is a problem of extrapolation, there will always be harsh limits on our ability to establish where a model is fit for purpose; this does not, however, prevent us from identifying model noise as such, and thereby avoiding some cases of the misapplication and over-interpretation of model output. It is suggested that failure to clearly communicate the limits of today’s climate models in providing quantitative, decision-relevant climate information to today’s users of climate information would risk the credibility of tomorrow’s climate science and science-based policy more generally.
A classical model for closed-loop diagrams of binary liquid mixtures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schnitzler, J.v.; Prausnitz, J.M.
1994-03-01
A classical lattice model for closed-loop temperature-composition phase diagrams has been developed. It considers the effect of specific interactions, such as hydrogen bonding, between dissimilar components. This van Laar-type model includes a Flory-Huggins term for the excess entropy of mixing. It is applied to several liquid-liquid equilibria of nonelectrolytes, where the molecules of the two components differ in size. The model is able to represent the observed data semi-quantitatively, but in most cases it is not flexible enough to predict all parts of the closed loop quantitatively. The ability of the model to represent different binary systems is discussed. Finally, attention is given to a correction term concerning the effect of concentration fluctuations near the upper critical solution temperature.
Allen, R J; Rieger, T R; Musante, C J
2016-03-01
Quantitative systems pharmacology models mechanistically describe a biological system and the effect of drug treatment on system behavior. Because these models rarely are identifiable from the available data, the uncertainty in physiological parameters may be sampled to create alternative parameterizations of the model, sometimes termed "virtual patients." In order to reproduce the statistics of a clinical population, virtual patients are often weighted to form a virtual population that reflects the baseline characteristics of the clinical cohort. Here we introduce a novel technique to efficiently generate virtual patients and, from this ensemble, demonstrate how to select a virtual population that matches the observed data without the need for weighting. This approach improves confidence in model predictions by mitigating the risk that spurious virtual patients become overrepresented in virtual populations.
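A minimal sketch of the selection idea described above: candidate virtual patients are generated by sampling plausible parameters, simulated, and a subset is accepted so that the accepted outputs match a target distribution without weighting. The one-parameter model, acceptance rule, and target statistics below are illustrative placeholders, not the authors' algorithm.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def simulate_output(parameter):
    """Placeholder mechanistic model mapping a physiological parameter to an observable."""
    return 2.0 * parameter + rng.normal(scale=0.1, size=np.shape(parameter))

# 1. generate candidate virtual patients from a broad plausible parameter range
candidates = rng.uniform(0.0, 5.0, size=20000)
outputs = simulate_output(candidates)

# 2. accept each candidate with probability proportional to the target density
#    of the clinical observable (here assumed normal with mean 4, sd 1)
target_density = norm.pdf(outputs, loc=4.0, scale=1.0)
accept = rng.uniform(size=outputs.size) < target_density / target_density.max()
virtual_population = candidates[accept]

print(len(virtual_population), outputs[accept].mean(), outputs[accept].std())
```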
Krueger, Robert F.; Markon, Kristian E.; Patrick, Christopher J.; Benning, Stephen D.; Kramer, Mark D.
2008-01-01
Antisocial behavior, substance use, and impulsive and aggressive personality traits often co-occur, forming a coherent spectrum of personality and psychopathology. In the current research, the authors developed a novel quantitative model of this spectrum. Over 3 waves of iterative data collection, 1,787 adult participants selected to represent a range across the externalizing spectrum provided extensive data about specific externalizing behaviors. Statistical methods such as item response theory and semiparametric factor analysis were used to model these data. The model and assessment instrument that emerged from the research shows how externalizing phenomena are organized hierarchically and cover a wide range of individual differences. The authors discuss the utility of this model for framing research on the correlates and the etiology of externalizing phenomena. PMID:18020714
Studying Biology to Understand Risk: Dosimetry Models and Quantitative Adverse Outcome Pathways
Confidence in the quantitative prediction of risk is increased when the prediction is based to as great an extent as possible on the relevant biological factors that constitute the pathway from exposure to adverse outcome. With the first examples now over 40 years old, physiologi...
New Statistical Techniques for Evaluating Longitudinal Models.
ERIC Educational Resources Information Center
Murray, James R.; Wiley, David E.
A basic methodological approach in developmental studies is the collection of longitudinal data. Behavioral data can take at least two forms, qualitative (or discrete) and quantitative. Both types are fallible. Measurement errors can occur in quantitative data and measures of these are based on error variance. Qualitative or discrete data can…
Collegiate Grading Practices and the Gender Pay Gap.
ERIC Educational Resources Information Center
Dowd, Alicia C.
2000-01-01
Presents a theoretical analysis showing that relatively low-grading quantitative fields and high-grading verbal fields create a disincentive for college women to invest in quantitative study. Extends research by R. Sabot and J. Wakeman-Linn. Models pressures on grading practices using higher education production functions. (Author/SLD)
A Transformative Model for Undergraduate Quantitative Biology Education
ERIC Educational Resources Information Center
Usher, David C.; Driscoll, Tobin A.; Dhurjati, Prasad; Pelesko, John A.; Rossi, Louis F.; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B.
2010-01-01
The "BIO2010" report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3)…
Examining Stress in Graduate Assistants: Combining Qualitative and Quantitative Survey Methods
ERIC Educational Resources Information Center
Mazzola, Joseph J.; Walker, Erin J.; Shockley, Kristen M.; Spector, Paul E.
2011-01-01
The aim of this study was to employ qualitative and quantitative survey methods in a concurrent mixed model design to assess stressors and strains in graduate assistants. The stressors most frequently reported qualitatively were work overload, interpersonal conflict, and organizational constraints; the most frequently reported psychological…
ERIC Educational Resources Information Center
Shandra, John M.; Nobles, Jenna E.; London, Bruce; Williamson, John B.
2005-01-01
This study presents quantitative, sociological models designed to account for cross-national variation in child mortality. We consider variables linked to five different theoretical perspectives that include the economic modernization, social modernization, political modernization, ecological-evolutionary, and dependency perspectives. The study is…
ERIC Educational Resources Information Center
Caglayan, Günhan
2013-01-01
This study is about prospective secondary mathematics teachers' understanding and sense making of representational quantities generated by algebra tiles, the quantitative units (linear vs. areal) inherent in the nature of these quantities, and the quantitative addition and multiplication operations--referent preserving versus referent…
Tranca, D. E.; Stanciu, S. G.; Hristu, R.; Stoichita, C.; Tofail, S. A. M.; Stanciu, G. A.
2015-01-01
A new method for high-resolution quantitative measurement of the dielectric function by using scattering scanning near-field optical microscopy (s-SNOM) is presented. The method is based on a calibration procedure that uses the s-SNOM oscillating dipole model of the probe-sample interaction and quantitative s-SNOM measurements. The nanoscale capabilities of the method have the potential to enable novel applications in various fields such as nano-electronics, nano-photonics, biology or medicine. PMID:26138665
Multigrid-based reconstruction algorithm for quantitative photoacoustic tomography
Li, Shengfu; Montcel, Bruno; Yuan, Zhen; Liu, Wanyu; Vray, Didier
2015-01-01
This paper proposes a multigrid inversion framework for quantitative photoacoustic tomography reconstruction. The forward model of optical fluence distribution and the inverse problem are solved at multiple resolutions. A fixed-point iteration scheme is formulated for each resolution and used as a cost function. The simulated and experimental results for quantitative photoacoustic tomography reconstruction show that the proposed multigrid inversion can dramatically reduce the required number of iterations for the optimization process without loss of reliability in the results. PMID:26203371
NASA Astrophysics Data System (ADS)
Setiani, C.; Waluya, S. B.; Wardono
2018-03-01
The purposes of this research are: (1) to identify learning quality in Model Eliciting Activities (MEAs) using a Metaphorical Thinking (MT) approach, both qualitatively and quantitatively; and (2) to analyze the mathematical literacy of students based on Self-Efficacy (SE). This research uses a mixed-method concurrent embedded design with qualitative research as the primary method. The quantitative part used a quasi-experimental, non-equivalent control group design. The population is grade VIII students of SMP Negeri 3 Semarang, Indonesia. Quantitative data are examined by conducting a completeness mean test, a standard completeness test, a mean differentiation test and a proportional differentiation test. Qualitative data are analyzed descriptively. The results show that MEAs learning using the MT approach meets good criteria both quantitatively and qualitatively. Students with low self-efficacy can identify problems, but they lack the ability to arrange a problem-solving strategy for mathematical literacy questions. Students with medium self-efficacy can identify the information provided in problems, but they have difficulty using mathematical symbols to construct a representation. Students with high self-efficacy are excellent at representing problems as mathematical models as well as figures, using appropriate symbols and tools, so they can easily arrange a strategy to solve mathematical literacy questions.
Erokwu, Bernadette O; Anderson, Christian E; Flask, Chris A; Dell, Katherine M
2018-05-01
Background: Autosomal recessive polycystic kidney disease (ARPKD) is associated with significant mortality and morbidity, and currently, there are no disease-specific treatments available for ARPKD patients. One major limitation in establishing new therapies for ARPKD is a lack of sensitive measures of kidney disease progression. Magnetic resonance imaging (MRI) can provide multiple quantitative assessments of the disease. Methods: We applied quantitative image analysis of high-resolution (noncontrast) T2-weighted MRI techniques to study cystic kidney disease progression and response to therapy in the PCK rat model of ARPKD. Results: Serial imaging over a 2-month period demonstrated that renal cystic burden (RCB, %) = [total cyst volume (TCV)/total kidney volume (TKV) × 100], TCV, and, to a lesser extent, TKV detected cystic kidney disease progression, as well as the therapeutic effect of octreotide, a clinically available medication shown previously to slow both kidney and liver disease progression in this model. All three MRI measures correlated significantly with histologic measures of renal cystic area, although the correlation of RCB and TCV was stronger than that of TKV. Conclusion: These preclinical MRI results provide a basis for applying these quantitative MRI techniques in clinical studies, to stage and measure progression in human ARPKD kidney disease.
Lipiäinen, Tiina; Fraser-Miller, Sara J; Gordon, Keith C; Strachan, Clare J
2018-02-05
This study considers the potential of low-frequency (terahertz) Raman spectroscopy in the quantitative analysis of ternary mixtures of solid-state forms. Direct comparison between low-frequency and mid-frequency spectral regions for quantitative analysis of crystal form mixtures, without confounding sampling and instrumental variations, is reported for the first time. Piroxicam was used as a model drug, and the low-frequency spectra of piroxicam forms β, α2 and monohydrate are presented for the first time. These forms show clear spectral differences in both the low- and mid-frequency regions. Both spectral regions provided quantitative models suitable for predicting the mixture compositions using partial least squares regression (PLSR), but the low-frequency data gave better models, based on lower errors of prediction (2.7, 3.1 and 3.2% root-mean-square errors of prediction [RMSEP] values for the β, α2 and monohydrate forms, respectively) than the mid-frequency data (6.3, 5.4 and 4.8%, for the β, α2 and monohydrate forms, respectively). The better performance of low-frequency Raman analysis was attributed to larger spectral differences between the solid-state forms, combined with a higher signal-to-noise ratio. Copyright © 2017 Elsevier B.V. All rights reserved.
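A minimal sketch of the kind of partial least squares regression (PLSR) quantitation described here is given below. The spectra, compositions, and number of latent variables are synthetic placeholders, not the piroxicam dataset or the authors' preprocessing.

```python
"""Minimal PLSR sketch for quantifying solid-state form mixtures from spectra."""
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 60, 300

# Synthetic pure-component "spectra" and random ternary compositions.
pure = rng.random((3, n_wavenumbers))
Y = rng.dirichlet(np.ones(3), size=n_samples)             # fractions sum to 1
X = Y @ pure + 0.01 * rng.standard_normal((n_samples, n_wavenumbers))

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)
pls = PLSRegression(n_components=3).fit(X_tr, Y_tr)
Y_hat = pls.predict(X_te)

rmsep = np.sqrt(((Y_hat - Y_te) ** 2).mean(axis=0)) * 100  # % per form
print("RMSEP (%):", np.round(rmsep, 2))
```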
Quantitative prediction of oral cancer risk in patients with oral leukoplakia.
Liu, Yao; Li, Yicheng; Fu, Yue; Liu, Tong; Liu, Xiaoyong; Zhang, Xinyan; Fu, Jie; Guan, Xiaobing; Chen, Tong; Chen, Xiaoxin; Sun, Zheng
2017-07-11
Exfoliative cytology has been widely used for early diagnosis of oral squamous cell carcinoma. We have developed an oral cancer risk index using DNA index value to quantitatively assess cancer risk in patients with oral leukoplakia, but with limited success. In order to improve the performance of the risk index, we collected exfoliative cytology, histopathology, and clinical follow-up data from two independent cohorts of normal, leukoplakia and cancer subjects (training set and validation set). Peaks were defined on the basis of first derivatives with positives, and modern machine learning techniques were utilized to build statistical prediction models on the reconstructed data. Random forest was found to be the best model with high sensitivity (100%) and specificity (99.2%). Using the Peaks-Random Forest model, we constructed an index (OCRI2) as a quantitative measurement of cancer risk. Among 11 leukoplakia patients with an OCRI2 over 0.5, 4 (36.4%) developed cancer during follow-up (23 ± 20 months), whereas 3 (5.3%) of 57 leukoplakia patients with an OCRI2 less than 0.5 developed cancer (32 ± 31 months). OCRI2 is better than other methods in predicting oral squamous cell carcinoma during follow-up. In conclusion, we have developed an exfoliative cytology-based method for quantitative prediction of cancer risk in patients with oral leukoplakia.
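To illustrate the general shape of a random-forest risk index like OCRI2 (this is not the authors' pipeline), the sketch below trains a classifier on a placeholder feature matrix and uses the predicted class-1 probability as the risk score; the 0.5 threshold simply mirrors the cut-off quoted in the abstract.

```python
"""Sketch of a random-forest risk index in the spirit of OCRI2 (placeholder data)."""
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 12))                  # placeholder cytology-derived features
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(200) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)
rf = RandomForestClassifier(n_estimators=500, random_state=1).fit(X_tr, y_tr)

risk_index = rf.predict_proba(X_te)[:, 1]           # class-1 probability as risk score
high_risk = risk_index > 0.5
print(f"{high_risk.sum()} of {len(risk_index)} test samples flagged as high risk")
```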
Virus replication as a phenotypic version of polynucleotide evolution.
Antoneli, Fernando; Bosco, Francisco; Castro, Diogo; Janini, Luiz Mario
2013-04-01
In this paper, we revisit and adapt to viral evolution an approach based on the theory of branching processes advanced by Demetrius et al. (Bull. Math. Biol. 46:239-262, 1985) in their study of polynucleotide evolution. By taking into account beneficial effects, we obtain a non-trivial multivariate generalization of their single-type branching process model. Perturbative techniques allow us to obtain analytical asymptotic expressions for the main global parameters of the model, which lead to the following rigorous results: (i) a new criterion for "no sure extinction", (ii) a generalization and proof, for this particular class of models, of the lethal mutagenesis criterion proposed by Bull et al. (J. Virol. 18:2930-2939, 2007), (iii) a new proposal for the notion of relaxation time with a quantitative prescription for its evaluation, (iv) the quantitative description of the evolution of the expected values in four distinct "stages": extinction threshold, lethal mutagenesis, stationary "equilibrium", and transient. Finally, based on these quantitative results, we are able to draw some qualitative conclusions.
Dynamics of cullin-RING ubiquitin ligase network revealed by systematic quantitative proteomics.
Bennett, Eric J; Rush, John; Gygi, Steven P; Harper, J Wade
2010-12-10
Dynamic reorganization of signaling systems frequently accompanies pathway perturbations, yet quantitative studies of network remodeling by pathway stimuli are lacking. Here, we report the development of a quantitative proteomics platform centered on multiplex absolute quantification (AQUA) technology to elucidate the architecture of the cullin-RING ubiquitin ligase (CRL) network and to evaluate current models of dynamic CRL remodeling. Current models suggest that CRL complexes are controlled by cycles of CRL deneddylation and CAND1 binding. Contrary to expectations, acute CRL inhibition with MLN4924, an inhibitor of the NEDD8-activating enzyme, does not result in a global reorganization of the CRL network. Examination of CRL complex stoichiometry reveals that, independent of cullin neddylation, a large fraction of cullins are assembled with adaptor modules, whereas only a small fraction are associated with CAND1. These studies suggest an alternative model of CRL dynamicity where the abundance of adaptor modules, rather than cycles of neddylation and CAND1 binding, drives CRL network organization. Copyright © 2010 Elsevier Inc. All rights reserved.
Fitness to work of astronauts in conditions of action of the extreme emotional factors
NASA Astrophysics Data System (ADS)
Prisniakova, L. M.
2004-01-01
A theoretical model for quantitatively determining the influence of the level of emotional exertion on the success of human activity is presented. The learning curves of fixed words in groups with different levels of emotional exertion are analyzed. The obtained magnitudes of the time constant T, which depend on the type of emotional exertion, provide a quantitative measure of that exertion. The time constants could also be used to predict an astronaut's fitness to work under extreme factors. A reversal of the sign of the influence on the efficiency of human activity is detected. The paper offers a mathematical model of the relation between successful activity and motivation or emotional exertion (the Yerkes-Dodson law). The proposed models can serve as a theoretical basis for quantitative characteristics used to assess astronaut performance under emotional factors at the selection phase.
Quantitative Analysis of the Efficiency of OLEDs.
Sim, Bomi; Moon, Chang-Ki; Kim, Kwon-Hyeon; Kim, Jang-Joo
2016-12-07
We present a comprehensive model for the quantitative analysis of factors influencing the efficiency of organic light-emitting diodes (OLEDs) as a function of the current density. The model takes into account the contribution made by the charge carrier imbalance, quenching processes, and optical design loss of the device arising from various optical effects including the cavity structure, location and profile of the excitons, effective radiative quantum efficiency, and out-coupling efficiency. Quantitative analysis of the efficiency can be performed with an optical simulation using material parameters and experimental measurements of the exciton profile in the emission layer and the lifetime of the exciton as a function of the current density. This method was applied to three phosphorescent OLEDs based on a single host, mixed host, and exciplex-forming cohost. The three factors (charge carrier imbalance, quenching processes, and optical design loss) were influential in different ways, depending on the device. The proposed model can potentially be used to optimize OLED configurations on the basis of an analysis of the underlying physical processes.
The Matching Relation and Situation-Specific Bias Modulation in Professional Football Play Selection
Stilling, Stephanie T; Critchfield, Thomas S
2010-01-01
The utility of a quantitative model depends on the extent to which its fitted parameters vary systematically with environmental events of interest. Professional football statistics were analyzed to determine whether play selection (passing versus rushing plays) could be accounted for with the generalized matching equation, and in particular whether variations in play selection across game situations would manifest as changes in the equation's fitted parameters. Statistically significant changes in bias were found for each of five types of game situations; no systematic changes in sensitivity were observed. Further analyses suggested relationships between play selection bias and both turnover probability (which can be described in terms of punishment) and yards-gained variance (which can be described in terms of variable-magnitude reinforcement schedules). The present investigation provides a useful demonstration of association between face-valid, situation-specific effects in a domain of everyday interest, and a theoretically important term of a quantitative model of behavior. Such associations, we argue, are an essential focus in translational extensions of quantitative models. PMID:21119855
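For concreteness, the generalized matching equation used in such analyses takes the form log(B_pass/B_rush) = a·log(R_pass/R_rush) + log(b), with sensitivity a and bias b estimated by ordinary least squares on the log ratios. The sketch below fits this relation to invented play and yardage counts, not the NFL data analyzed in the paper.

```python
"""Sketch of fitting the generalized matching equation to play selection (made-up data)."""
import numpy as np

# One entry per team (hypothetical): plays called and yards gained ("reinforcers") by type.
pass_plays = np.array([520, 580, 490, 610, 545], dtype=float)
rush_plays = np.array([430, 380, 470, 360, 415], dtype=float)
pass_yards = np.array([3900, 4400, 3500, 4700, 4100], dtype=float)
rush_yards = np.array([1800, 1500, 2000, 1400, 1700], dtype=float)

behavior_ratio = np.log10(pass_plays / rush_plays)
reinforcer_ratio = np.log10(pass_yards / rush_yards)

# Ordinary least squares on the log-log ratios: slope = sensitivity, intercept = log bias.
sensitivity, log_bias = np.polyfit(reinforcer_ratio, behavior_ratio, 1)
print(f"sensitivity a = {sensitivity:.2f}, bias b = {10 ** log_bias:.2f}")
```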
Alcaráz, Mirta R; Vera-Candioti, Luciana; Culzoni, María J; Goicoechea, Héctor C
2014-04-01
This paper presents the development of a capillary electrophoresis method with a diode array detector coupled to multivariate curve resolution-alternating least squares (MCR-ALS) for the resolution and quantitation of a mixture of six quinolones in the presence of several unexpected components. Overlapping of time profiles between analytes and water matrix interferences was mathematically resolved by data modeling with the well-known MCR-ALS algorithm. To overcome the drawback arising from two compounds with similar spectra, a special strategy was implemented to model the complete electropherogram instead of dividing the data into regions, as usually done in previous works. The method was first applied to quantitate analytes in standard mixtures randomly prepared in ultrapure water. Then, tap water samples spiked with several interferences were analyzed. Recoveries between 76.7 and 125% and limits of detection between 5 and 18 μg L(-1) were achieved.
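A bare-bones sketch of the alternating least squares step at the heart of MCR-ALS is shown below. It is not the MCR-ALS toolbox used in the paper: non-negativity is imposed only by clipping, the data matrix is synthetic, and real applications add closure/selectivity constraints and convergence diagnostics.

```python
"""Bare-bones MCR-ALS sketch: D (times x wavelengths) ~ C @ S.T under non-negativity."""
import numpy as np

def mcr_als(D, S0, n_iter=100):
    S = S0.copy()
    for _ in range(n_iter):
        C = np.clip(D @ np.linalg.pinv(S.T), 0, None)    # concentration profiles >= 0
        S = np.clip((np.linalg.pinv(C) @ D).T, 0, None)  # spectra >= 0
    return C, S

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    true_C = np.abs(rng.standard_normal((80, 3)))         # 80 time points, 3 components
    true_S = np.abs(rng.standard_normal((40, 3)))         # 40 wavelengths, 3 components
    D = true_C @ true_S.T + 0.01 * rng.standard_normal((80, 40))
    C, S = mcr_als(D, true_S + 0.1 * rng.random((40, 3))) # rough initial spectral estimates
    print("residual norm:", np.linalg.norm(D - C @ S.T))
```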
Noninvasive identification of the total peripheral resistance baroreflex
NASA Technical Reports Server (NTRS)
Mukkamala, Ramakrishna; Toska, Karin; Cohen, Richard J.
2003-01-01
We propose two identification algorithms for quantitating the total peripheral resistance (TPR) baroreflex, an important contributor to short-term arterial blood pressure (ABP) regulation. Each algorithm analyzes beat-to-beat fluctuations in ABP and cardiac output, which may both be obtained noninvasively in humans. For a theoretical evaluation, we applied both algorithms to a realistic cardiovascular model. The results contrasted, with only one of the algorithms proving to be reliable. This algorithm was able to track changes in the static gains of both the arterial and cardiopulmonary TPR baroreflex. We then applied both algorithms to a preliminary set of human data and obtained contrasting results much like those obtained from the cardiovascular model, thereby making the theoretical evaluation results more meaningful. This study suggests that, with experimental testing, the reliable identification algorithm may provide a powerful, noninvasive means for quantitating the TPR baroreflex. This study also provides an example of the role that models can play in the development and initial evaluation of algorithms aimed at quantitating important physiological mechanisms.
Crovelli, R.A.
1988-01-01
The geologic appraisal model that is selected for a petroleum resource assessment depends upon the purpose of the assessment, the basic geologic assumptions of the area, the type of available data, the time available before deadlines, available human and financial resources, available computer facilities, and, most importantly, the available quantitative methodology with corresponding computer software and any new quantitative methodology that would have to be developed. Therefore, different resource assessment projects usually require different geologic models. Also, more than one geologic model might be needed in a single project for assessing different regions of the study area or for cross-checking resource estimates. Some geologic analyses used in the past for petroleum resource appraisal involved play analysis. The corresponding quantitative methodologies of these analyses usually consisted of Monte Carlo simulation techniques. A probabilistic system of petroleum resource appraisal for play analysis has been designed to meet the following requirements: (1) it includes a variety of geologic models, (2) it uses an analytic methodology instead of Monte Carlo simulation, (3) it possesses the capacity to aggregate estimates from many areas that have been assessed by different geologic models, and (4) it runs quickly on a microcomputer. Geologic models consist of four basic types: reservoir engineering, volumetric yield, field size, and direct assessment. Several case histories and present studies by the U.S. Geological Survey are discussed. © 1988 International Association for Mathematical Geology.
Wavelet modeling and prediction of the stability of states: the Roman Empire and the European Union
NASA Astrophysics Data System (ADS)
Yaroshenko, Tatyana Y.; Krysko, Dmitri V.; Dobriyan, Vitalii; Zhigalov, Maksim V.; Vos, Hendrik; Vandenabeele, Peter; Krysko, Vadim A.
2015-09-01
How can the stability of a state be quantitatively determined and its future stability predicted? The rise and collapse of empires and states is very complex, and it is exceedingly difficult to understand and predict. Existing theories are usually formulated as verbal models and, consequently, do not yield sharply defined, quantitative predictions that can be unambiguously validated with data. Here we describe a model that determines whether a state is in a stable or chaotic condition and predicts its future condition. The central hypothesis, which we test, is that the growth and collapse of states are reflected in changes in their territories, populations and budgets. The model was applied to the historical societies of the Roman Empire (400 BC to 400 AD) and the European Union (1957-2007) by using wavelets and analysis of the sign change of the spectrum of Lyapunov exponents. The model matches the historical events well. During wars and crises, the state becomes unstable; this is reflected in the wavelet analysis by a significant increase in the frequency ω(t) and the wavelet coefficients W(ω, t), and the sign of the largest Lyapunov exponent becomes positive, indicating chaos. We successfully reconstructed and forecasted time series for the Roman Empire and the European Union by applying an artificial neural network. The proposed model helps to quantitatively determine and forecast the stability of a state.
Quantitative Study on Corrosion of Steel Strands Based on Self-Magnetic Flux Leakage.
Xia, Runchuan; Zhou, Jianting; Zhang, Hong; Liao, Leng; Zhao, Ruiqiang; Zhang, Zeyu
2018-05-02
This paper proposed a new computing method to quantitatively and non-destructively determine the corrosion of steel strands by analyzing the self-magnetic flux leakage (SMFL) signals from them. The magnetic dipole model and three growth models (Logistic model, Exponential model, and Linear model) were proposed to theoretically analyze the characteristic value of SMFL. Then, the experimental study on the corrosion detection by the magnetic sensor was carried out. The setup of the magnetic scanning device and signal collection method were also introduced. The results show that the Logistic Growth model is verified as the optimal model for calculating the magnetic field with good fitting effects. Combined with the experimental data analysis, the amplitudes of the calculated values (B_xL(x,z) curves) agree with the measured values in general. This method provides significant application prospects for the evaluation of the corrosion and the residual bearing capacity of steel strand.
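As an illustration of the curve-fitting step implied by the "Logistic Growth model" comparison (not the authors' SMFL data or fitting code), the sketch below fits a three-parameter logistic curve to invented measurements of a characteristic value versus corrosion time.

```python
"""Sketch: fitting a logistic growth curve to an SMFL-style characteristic value (invented data)."""
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    # K: saturation value, r: growth rate, t0: midpoint.
    return K / (1.0 + np.exp(-r * (t - t0)))

t = np.array([0, 2, 4, 6, 8, 10, 12, 14], dtype=float)       # e.g. corrosion time (days)
B = np.array([1.0, 1.8, 3.5, 6.9, 11.2, 13.8, 14.9, 15.3])   # characteristic value (placeholder)

params, _ = curve_fit(logistic, t, B, p0=[B.max(), 1.0, t.mean()])
print("K = {:.2f}, r = {:.2f}, t0 = {:.2f}".format(*params))
```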
Silk, Daniel; Kirk, Paul D W; Barnes, Chris P; Toni, Tina; Rose, Anna; Moon, Simon; Dallman, Margaret J; Stumpf, Michael P H
2011-10-04
Chaos and oscillations continue to capture the interest of both the scientific and public domains. Yet despite the importance of these qualitative features, most attempts at constructing mathematical models of such phenomena have taken an indirect, quantitative approach, for example, by fitting models to a finite number of data points. Here we develop a qualitative inference framework that allows us to both reverse-engineer and design systems exhibiting these and other dynamical behaviours by directly specifying the desired characteristics of the underlying dynamical attractor. This change in perspective, from quantitative to qualitative dynamics, provides fundamental new insights into the properties of dynamical systems.
Mager, P P; Rothe, H
1990-10-01
Multicollinearity of physicochemical descriptors leads to serious consequences in quantitative structure-activity relationship (QSAR) analysis, such as incorrect estimators and test statistics for the regression coefficients of the ordinary least-squares (OLS) model usually applied to QSARs. Besides diagnosing the known simple collinearity, principal component regression analysis (PCRA) also allows the diagnosis of various types of multicollinearity. Only if the absolute values of the PCRA estimators are order statistics that decrease monotonically can the effects of multicollinearity be circumvented. Otherwise, obscure phenomena may be observed, such as good data recognition but low predictive power of a QSAR model.
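A minimal principal component regression sketch along these lines is given below, using a deliberately collinear synthetic descriptor block (not a real QSAR dataset): descriptors are standardized, activities are regressed on the principal component scores, and the ordered magnitudes of the component estimators can then be inspected.

```python
"""Principal component regression sketch for collinear QSAR-style descriptors (synthetic data)."""
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n, p = 40, 6
base = rng.standard_normal((n, 2))
# Deliberately collinear descriptor block built from two underlying factors.
X = np.column_stack([base,
                     base @ rng.standard_normal((2, p - 2))
                     + 0.05 * rng.standard_normal((n, p - 2))])
y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.standard_normal(n)

Z = StandardScaler().fit_transform(X)
pca = PCA().fit(Z)
scores = pca.transform(Z)
coef = LinearRegression().fit(scores, y).coef_

print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
print("|component estimators|  :", np.round(np.abs(coef), 3))
```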
Wires in the soup: quantitative models of cell signaling
Cheong, Raymond; Levchenko, Andre
2014-01-01
Living cells are capable of extracting information from their environments and mounting appropriate responses to a variety of associated challenges. The underlying signal transduction networks enabling this can be quite complex, necessitating sophisticated computational modeling coupled with precise experimentation for their unraveling. Although we are still at the beginning of this process, some recent examples of integrative analysis of cell signaling are very encouraging. This review highlights the case of the NF-κB pathway in order to illustrate how a quantitative model of a signaling pathway can be gradually constructed through continuous experimental validation, and what lessons one might learn from such exercises. PMID:18291655
Using enterprise architecture to analyse how organisational structure impact motivation and learning
NASA Astrophysics Data System (ADS)
Närman, Pia; Johnson, Pontus; Gingnell, Liv
2016-06-01
When technology, environment, or strategies change, organisations need to adjust their structures accordingly. These structural changes do not always enhance organisational performance as intended, partly because organisational developers do not understand the consequences of structural changes for performance. This article presents a model-based framework for quantitative analysis of the effect of organisational structure on organisational performance in terms of employee motivation and learning. The model is based on Mintzberg's work on organisational structure. The quantitative analysis is formalised using the Object Constraint Language (OCL) and the Unified Modelling Language (UML) and implemented in an enterprise architecture tool.
A General Model for Estimating Macroevolutionary Landscapes.
Boucher, Florian C; Démery, Vincent; Conti, Elena; Harmon, Luke J; Uyeda, Josef
2018-03-01
The evolution of quantitative characters over long timescales is often studied using stochastic diffusion models. The current toolbox available to students of macroevolution is however limited to two main models: Brownian motion and the Ornstein-Uhlenbeck process, plus some of their extensions. Here, we present a very general model for inferring the dynamics of quantitative characters evolving under both random diffusion and deterministic forces of any possible shape and strength, which can accommodate interesting evolutionary scenarios like directional trends, disruptive selection, or macroevolutionary landscapes with multiple peaks. This model is based on a general partial differential equation widely used in statistical mechanics: the Fokker-Planck equation, also known in population genetics as the Kolmogorov forward equation. We thus call the model FPK, for Fokker-Planck-Kolmogorov. We first explain how this model can be used to describe macroevolutionary landscapes over which quantitative traits evolve and, more importantly, we detail how it can be fitted to empirical data. Using simulations, we show that the model has good behavior both in terms of discrimination from alternative models and in terms of parameter inference. We provide R code to fit the model to empirical data using either maximum-likelihood or Bayesian estimation, and illustrate the use of this code with two empirical examples of body mass evolution in mammals. FPK should greatly expand the set of macroevolutionary scenarios that can be studied since it opens the way to estimating macroevolutionary landscapes of any conceivable shape. [Adaptation; bounds; diffusion; FPK model; macroevolution; maximum-likelihood estimation; MCMC methods; phylogenetic comparative data; selection.].
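For reference, a standard one-dimensional Fokker-Planck (Kolmogorov forward) equation of the kind underlying such models, written for constant diffusion σ² and a drift derived from a potential V(x), is shown below; the paper's own landscape parameterization may use different sign and scaling conventions.

```latex
% 1-D Fokker-Planck / Kolmogorov forward equation with constant diffusion
% and drift mu(x) = -V'(x); the stationary density follows by setting the flux to zero.
\frac{\partial p(x,t)}{\partial t}
  = -\frac{\partial}{\partial x}\!\left[\mu(x)\,p(x,t)\right]
    + \frac{\sigma^{2}}{2}\,\frac{\partial^{2} p(x,t)}{\partial x^{2}},
\qquad \mu(x) = -V'(x),
\qquad p_{\infty}(x) \propto \exp\!\left(-\frac{2V(x)}{\sigma^{2}}\right).
```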
Li, Chen; Nagasaki, Masao; Ueno, Kazuko; Miyano, Satoru
2009-04-27
Model checking approaches were applied to biological pathway validations around 2003. Recently, Fisher et al. have demonstrated the importance of the model checking approach by inferring new regulation of signaling crosstalk in C. elegans and confirming the regulation with biological experiments. They took a discrete, state-based approach to explore all possible states of the system underlying vulval precursor cell (VPC) fate specification for desired properties. However, since both discrete and continuous features appear to be indispensable parts of biological processes, it is more appropriate to use quantitative models to capture the dynamics of biological systems. The key motivation of this paper is to establish a quantitative methodology for modeling and analyzing in silico models that incorporates the model checking approach. A novel method of modeling and simulating biological systems with model checking is proposed, based on hybrid functional Petri net with extension (HFPNe) as a framework dealing with both discrete and continuous events. First, we construct a quantitative VPC fate model with 1761 components by using HFPNe. Second, we apply two major biological fate-determination rules, Rule I and Rule II, to the VPC fate model. We then conduct 10,000 simulations for each of 48 sets of different genotypes, investigate variations of cell fate patterns under each genotype, and validate the two rules by comparing three simulation targets consisting of fate patterns obtained from in silico and in vivo experiments. In particular, an evaluation was successfully carried out by using our VPC fate model to investigate one target derived from biological experiments involving hybrid lineage observations. Such hybrid lineages are hard to interpret with a discrete model, because a hybrid lineage occurs when the system comes close to certain thresholds, as discussed by Sternberg and Horvitz in 1986. Our simulation results suggest that Rule I, which cannot be applied with qualitative model checking, is more reasonable than Rule II owing to its higher coverage of predicted fate patterns (except for the genotype of lin-15ko; lin-12ko double mutants). Further insights are also suggested. The quantitative simulation-based model checking approach is thus a useful means of providing valuable biological insights and a better understanding of biological systems and observation data that may be hard to capture with the qualitative approach.
NASA Astrophysics Data System (ADS)
Wilson, Robert H.; Chandra, Malavika; Scheiman, James; Simeone, Diane; McKenna, Barbara; Purdy, Julianne; Mycek, Mary-Ann
2009-02-01
Pancreatic adenocarcinoma has a five-year survival rate of only 4%, largely because an effective procedure for early detection has not been developed. In this study, mathematical modeling of reflectance and fluorescence spectra was utilized to quantitatively characterize differences between normal pancreatic tissue, pancreatitis, and pancreatic adenocarcinoma. Initial attempts at separating the spectra of different tissue types involved dividing fluorescence by reflectance, and removing absorption artifacts by applying a "reverse Beer-Lambert factor" when the absorption coefficient was modeled as a linear combination of the extinction coefficients of oxy- and deoxy-hemoglobin. These procedures demonstrated the need for a more complete mathematical model to quantitatively describe fluorescence and reflectance for minimally-invasive fiber-based optical diagnostics in the pancreas.
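The sketch below illustrates the general form of the hemoglobin-based absorption correction described in this abstract. It is not the authors' model: the extinction-coefficient values, concentrations, path length, and measured spectra are all placeholders; real analyses interpolate tabulated molar extinction spectra of oxy- and deoxy-hemoglobin at the measured wavelengths.

```python
"""Sketch of a 'reverse Beer-Lambert' style absorption correction (placeholder values)."""
import numpy as np

wavelengths = np.array([470.0, 500.0, 530.0, 560.0, 590.0])   # nm (example grid)
eps_hbo2 = np.array([33.0, 20.0, 39.0, 32.0, 16.0])           # placeholder extinction values
eps_hb = np.array([16.0, 21.0, 39.0, 54.0, 39.0])             # placeholder extinction values

c_hbo2, c_hb, L = 0.8, 0.4, 0.05   # assumed concentrations and effective path length

# Absorption coefficient as a linear combination of the two chromophores.
mu_a = c_hbo2 * eps_hbo2 + c_hb * eps_hb

fluorescence = np.array([1.0, 1.2, 0.9, 0.7, 0.8])   # measured (arbitrary units)
reflectance = np.array([0.5, 0.6, 0.4, 0.3, 0.45])   # measured (arbitrary units)

# Ratio spectrum with an exponential factor undoing the estimated attenuation.
corrected = (fluorescence / reflectance) * np.exp(mu_a * L)
print(np.round(corrected, 3))
```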
Woskie, Susan R; Bello, Dhimiter; Gore, Rebecca J; Stowe, Meredith H; Eisen, Ellen A; Liu, Youcheng; Sparer, Judy A; Redlich, Carrie A; Cullen, Mark R
2008-09-01
Because many occupational epidemiologic studies use exposure surrogates rather than quantitative exposure metrics, the UMass Lowell and Yale study of autobody shop workers provided an opportunity to evaluate the relative utility of surrogates and quantitative exposure metrics in an exposure response analysis of cross-week change in respiratory function. A task-based exposure assessment was used to develop several metrics of inhalation exposure to isocyanates. The metrics included the surrogates, job title, counts of spray painting events during the day, counts of spray and bystander exposure events, and a quantitative exposure metric that incorporated exposure determinant models based on task sampling and a personal workplace protection factor for respirator use, combined with a daily task checklist. The result of the quantitative exposure algorithm was an estimate of the daily time-weighted average respirator-corrected total NCO exposure (microg/m(3)). In general, these four metrics were found to be variable in agreement using measures such as weighted kappa and Spearman correlation. A logistic model for 10% drop in FEV(1) from Monday morning to Thursday morning was used to evaluate the utility of each exposure metric. The quantitative exposure metric was the most favorable, producing the best model fit, as well as the greatest strength and magnitude of association. This finding supports the reports of others that reducing exposure misclassification can improve risk estimates that otherwise would be biased toward the null. Although detailed and quantitative exposure assessment can be more time consuming and costly, it can improve exposure-disease evaluations and is more useful for risk assessment purposes. The task-based exposure modeling method successfully produced estimates of daily time-weighted average exposures in the complex and changing autobody shop work environment. The ambient TWA exposures of all of the office workers and technicians and 57% of the painters were found to be below the current U.K. Health and Safety Executive occupational exposure limit (OEL) for total NCO of 20 microg/m(3). When respirator use was incorporated, all personal daily exposures were below the U.K. OEL.
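A toy version of the respirator-corrected daily time-weighted average computed in such task-based assessments is sketched below. The task concentrations, durations, and workplace protection factor are invented; the study itself derived task concentrations from exposure-determinant models and a daily task checklist.

```python
"""Sketch of a respirator-corrected daily time-weighted average exposure (invented task data)."""
# (task concentration in ug/m3, duration in minutes, respirator worn?)
tasks = [
    (150.0, 30, True),    # spray painting
    (25.0, 45, True),     # bystander to spraying
    (2.0, 405, False),    # other shop work
]
PF = 10.0                 # assumed protection factor for the respirator

shift_minutes = sum(duration for _, duration, _ in tasks)
twa = sum(
    (conc / PF if respirator else conc) * duration
    for conc, duration, respirator in tasks
) / shift_minutes

print(f"respirator-corrected TWA = {twa:.1f} ug/m3 over {shift_minutes} min")
```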
A transformative model for undergraduate quantitative biology education.
Usher, David C; Driscoll, Tobin A; Dhurjati, Prasad; Pelesko, John A; Rossi, Louis F; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B
2010-01-01
The BIO2010 report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3) creating a new interdisciplinary major, quantitative biology, designed for students interested in solving complex biological problems using advanced mathematical approaches. To develop the bio-calculus sections, the Department of Mathematical Sciences revised its three-semester calculus sequence to include differential equations in the first semester and, rather than using examples traditionally drawn from application domains that are most relevant to engineers, drew models and examples heavily from the life sciences. The curriculum of the B.S. degree in Quantitative Biology was designed to provide students with a solid foundation in biology, chemistry, and mathematics, with an emphasis on preparation for research careers in life sciences. Students in the program take core courses from biology, chemistry, and physics, though mathematics, as the cornerstone of all quantitative sciences, is given particular prominence. Seminars and a capstone course stress how the interplay of mathematics and biology can be used to explain complex biological systems. To initiate these academic changes required the identification of barriers and the implementation of solutions.
Davatzikos, Christos; Rathore, Saima; Bakas, Spyridon; Pati, Sarthak; Bergman, Mark; Kalarot, Ratheesh; Sridharan, Patmaa; Gastounioti, Aimilia; Jahani, Nariman; Cohen, Eric; Akbari, Hamed; Tunc, Birkan; Doshi, Jimit; Parker, Drew; Hsieh, Michael; Sotiras, Aristeidis; Li, Hongming; Ou, Yangming; Doot, Robert K; Bilello, Michel; Fan, Yong; Shinohara, Russell T; Yushkevich, Paul; Verma, Ragini; Kontos, Despina
2018-01-01
The growth of multiparametric imaging protocols has paved the way for quantitative imaging phenotypes that predict treatment response and clinical outcome, reflect underlying cancer molecular characteristics and spatiotemporal heterogeneity, and can guide personalized treatment planning. This growth has underlined the need for efficient quantitative analytics to derive high-dimensional imaging signatures of diagnostic and predictive value in this emerging era of integrated precision diagnostics. This paper presents cancer imaging phenomics toolkit (CaPTk), a new and dynamically growing software platform for analysis of radiographic images of cancer, currently focusing on brain, breast, and lung cancer. CaPTk leverages the value of quantitative imaging analytics along with machine learning to derive phenotypic imaging signatures, based on two-level functionality. First, image analysis algorithms are used to extract comprehensive panels of diverse and complementary features, such as multiparametric intensity histogram distributions, texture, shape, kinetics, connectomics, and spatial patterns. At the second level, these quantitative imaging signatures are fed into multivariate machine learning models to produce diagnostic, prognostic, and predictive biomarkers. Results from clinical studies in three areas are shown: (i) computational neuro-oncology of brain gliomas for precision diagnostics, prediction of outcome, and treatment planning; (ii) prediction of treatment response for breast and lung cancer, and (iii) risk assessment for breast cancer.
Quantitative Analysis of the Cervical Texture by Ultrasound and Correlation with Gestational Age.
Baños, Núria; Perez-Moreno, Alvaro; Migliorelli, Federico; Triginer, Laura; Cobo, Teresa; Bonet-Carne, Elisenda; Gratacos, Eduard; Palacio, Montse
2017-01-01
Quantitative texture analysis has been proposed to extract robust features from the ultrasound image to detect subtle changes in the textures of the images. The aim of this study was to evaluate the feasibility of quantitative cervical texture analysis to assess cervical tissue changes throughout pregnancy. This was a cross-sectional study including singleton pregnancies between 20.0 and 41.6 weeks of gestation from women who delivered at term. Cervical length was measured, and a selected region of interest in the cervix was delineated. A model to predict gestational age based on features extracted from cervical images was developed following three steps: data splitting, feature transformation, and regression model computation. Seven hundred images, 30 per gestational week, were included for analysis. There was a strong correlation between the gestational age at which the images were obtained and the estimated gestational age by quantitative analysis of the cervical texture (R = 0.88). This study provides evidence that quantitative analysis of cervical texture can extract features from cervical ultrasound images which correlate with gestational age. Further research is needed to evaluate its applicability as a biomarker of the risk of spontaneous preterm birth, as well as its role in cervical assessment in other clinical situations in which cervical evaluation might be relevant. © 2016 S. Karger AG, Basel.
Crotta, Matteo; Paterlini, Franco; Rizzi, Rita; Guitian, Javier
2016-02-01
Foodborne disease as a result of raw milk consumption is an increasing concern in Western countries. Quantitative microbial risk assessment models have been used to estimate the risk of illness due to different pathogens in raw milk. In these models, the duration and temperature of storage before consumption have a critical influence on the final outcome of the simulations and are usually described and modeled as independent distributions in the consumer phase module. We hypothesize that this assumption can result in the computation, during simulations, of extreme scenarios that ultimately lead to an overestimation of the risk. In this study, a sensorial analysis was conducted to replicate consumers' behavior. The results of the analysis were used to establish, by means of a logistic model, the relationship between time-temperature combinations and the probability that a serving of raw milk is actually consumed. To assess our hypothesis, 2 recently published quantitative microbial risk assessment models quantifying the risks of listeriosis and salmonellosis related to the consumption of raw milk were implemented. First, the default settings described in the publications were kept; second, the likelihood of consumption as a function of the length and temperature of storage was included. When results were compared, the density of computed extreme scenarios decreased significantly in the modified model; consequently, the probability of illness and the expected number of cases per year also decreased. Reductions of 11.6 and 12.7% in the proportion of computed scenarios in which a contaminated milk serving was consumed were observed for the first and the second study, respectively. Our results confirm that overlooking the time-temperature dependency may yield an important overestimation of the risk. Furthermore, we provide estimates of this dependency that could easily be implemented in future quantitative microbial risk assessment models of raw milk pathogens. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
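The Monte Carlo sketch below illustrates, in the simplest possible terms, how weighting storage scenarios by a logistic consumption probability trims the extreme time-temperature combinations. The logistic coefficients and the toy growth expression are invented placeholders, standing in for the sensory-analysis model and the pathogen growth and dose-response modules of the published risk assessments.

```python
"""Monte Carlo sketch: coupling storage time/temperature to consumption probability (toy model)."""
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

time_h = rng.uniform(0, 72, n)          # storage time before consumption (h)
temp_c = rng.uniform(2, 12, n)          # storage temperature (deg C)

# Probability the serving is actually consumed, decreasing with time and temperature
# (b0, b_time, b_temp are invented coefficients).
b0, b_time, b_temp = 6.0, -0.06, -0.35
p_consume = 1.0 / (1.0 + np.exp(-(b0 + b_time * time_h + b_temp * temp_c)))
consumed = rng.random(n) < p_consume

# Toy log10 growth of the pathogen during storage (placeholder kinetics).
log10_growth = 0.002 * time_h * np.maximum(temp_c - 4.0, 0.0)

naive_mean = log10_growth.mean()                 # independence assumption: all scenarios count
weighted_mean = log10_growth[consumed].mean()    # only scenarios actually consumed
print(f"mean log10 growth: naive {naive_mean:.3f} vs consumption-weighted {weighted_mean:.3f}")
```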
Jorge, Inmaculada; Navarro, Pedro; Martínez-Acedo, Pablo; Núñez, Estefanía; Serrano, Horacio; Alfranca, Arántzazu; Redondo, Juan Miguel; Vázquez, Jesús
2009-01-01
Statistical models for the analysis of protein expression changes by stable isotope labeling are still poorly developed, particularly for data obtained by 16O/18O labeling. Besides, large-scale test experiments to validate the null hypothesis are lacking. Although the study of mechanisms underlying biological actions promoted by vascular endothelial growth factor (VEGF) on endothelial cells is of considerable interest, quantitative proteomics studies on this subject are scarce and have been performed after exposing cells to the factor for long periods of time. In this work we present the largest quantitative proteomics study to date on the short-term effects of VEGF on human umbilical vein endothelial cells by 18O/16O labeling. Current statistical models based on normality and variance homogeneity were found unsuitable to describe the null hypothesis in a large-scale test experiment performed on these cells, producing false expression changes. A random effects model was developed that includes four different sources of variance at the spectrum-fitting, scan, peptide, and protein levels. With the new model the number of outliers at the scan and peptide levels was negligible in three large-scale experiments, and only one false protein expression change was observed in the test experiment among more than 1000 proteins. The new model allowed the detection of significant protein expression changes upon VEGF stimulation for 4 and 8 h. The consistency of the changes observed at 4 h was confirmed by a replica at a smaller scale and further validated by Western blot analysis of some proteins. Most of the observed changes have not been described previously and are consistent with a pattern of protein expression that dynamically changes over time following the evolution of the angiogenic response. With this statistical model the 18O labeling approach emerges as a very promising and robust alternative for performing quantitative proteomics studies at a depth of several thousand proteins. PMID:19181660
Analytic Guided-Search Model of Human Performance Accuracy in Target- Localization Search Tasks
NASA Technical Reports Server (NTRS)
Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.
2000-01-01
Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require the use of Monte-Carlo simulations, a method that makes fitting the model's performance quantitatively to human data more computationally time consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that simplify simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.
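As background for the accuracy predictions discussed here, the sketch below numerically evaluates the standard signal-detection-theory expression for an M-alternative localization task with independent Gaussian responses, P(correct) = E[Φ(X)^(M-1)] with X ~ N(d', 1). This is the generic SDT result, not the closed-form Guided Search extension derived in the paper.

```python
"""Numerical sketch of SDT localization accuracy for an M-alternative search display."""
import numpy as np
from scipy.stats import norm

def p_correct(d_prime, m_locations, n_grid=4001):
    # P(correct) = integral of pdf(x - d') * cdf(x)^(M-1) dx, evaluated on a uniform grid.
    x = np.linspace(-8.0, 8.0 + d_prime, n_grid)
    dx = x[1] - x[0]
    integrand = norm.pdf(x - d_prime) * norm.cdf(x) ** (m_locations - 1)
    return float(np.sum(integrand) * dx)

for d in (0.0, 1.0, 2.0, 3.0):
    print(f"d' = {d:.1f}: P(correct | 8 locations) = {p_correct(d, 8):.3f}")
```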
NASA Astrophysics Data System (ADS)
Son, Seok-Woo; Han, Bo-Reum; Garfinkel, Chaim I.; Kim, Seo-Yeon; Park, Rokjin; Abraham, N. Luke; Akiyoshi, Hideharu; Archibald, Alexander T.; Butchart, N.; Chipperfield, Martyn P.; Dameris, Martin; Deushi, Makoto; Dhomse, Sandip S.; Hardiman, Steven C.; Jöckel, Patrick; Kinnison, Douglas; Michou, Martine; Morgenstern, Olaf; O’Connor, Fiona M.; Oman, Luke D.; Plummer, David A.; Pozzer, Andrea; Revell, Laura E.; Rozanov, Eugene; Stenke, Andrea; Stone, Kane; Tilmes, Simone; Yamashita, Yousuke; Zeng, Guang
2018-05-01
The Southern Hemisphere (SH) zonal-mean circulation change in response to Antarctic ozone depletion is re-visited by examining a set of the latest model simulations archived for the Chemistry-Climate Model Initiative (CCMI) project. All models reasonably well reproduce Antarctic ozone depletion in the late 20th century. The related SH-summer circulation changes, such as a poleward intensification of westerly jet and a poleward expansion of the Hadley cell, are also well captured. All experiments exhibit quantitatively the same multi-model mean trend, irrespective of whether the ocean is coupled or prescribed. Results are also quantitatively similar to those derived from the Coupled Model Intercomparison Project phase 5 (CMIP5) high-top model simulations in which the stratospheric ozone is mostly prescribed with monthly- and zonally-averaged values. These results suggest that the ozone-hole-induced SH-summer circulation changes are robust across the models irrespective of the specific chemistry-atmosphere-ocean coupling.
Magnetic Resonance-based Motion Correction for Quantitative PET in Simultaneous PET-MR Imaging.
Rakvongthai, Yothin; El Fakhri, Georges
2017-07-01
Motion degrades image quality and quantitation of PET images, and is an obstacle to quantitative PET imaging. Simultaneous PET-MR offers a tool that can be used for correcting the motion in PET images by using anatomic information from MR imaging acquired concurrently. Motion correction can be performed by transforming a set of reconstructed PET images into the same frame or by incorporating the transformation into the system model and reconstructing the motion-corrected image. Several phantom and patient studies have validated that MR-based motion correction strategies have great promise for quantitative PET imaging in simultaneous PET-MR. Copyright © 2017 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Pilten, Gulhiz
2016-01-01
The purpose of the present research is to investigate the effects of reciprocal teaching on comprehending expository texts. The research was designed with a mixed method. The quantitative dimension of the present research was designed in accordance with a pre-test/post-test control group experimental model. The quantitative dimension of the present…
ERIC Educational Resources Information Center
Whittingham, Keith L.
2006-01-01
The traditional core Masters in Business Administration (MBA) curriculum consists of a broad range of courses that can be considered as a whole, or divided into qualitative and quantitative courses. Regression models were developed with "QualGPA" and "QuantGPA" as response variables, and gender, pre-MBA academic indicators, and…
ERIC Educational Resources Information Center
Owens, Susan T.
2017-01-01
Technology is becoming an integral tool in the classroom and can make a positive impact on how students learn. This quantitative comparative research study examined gender-based differences among secondary Advanced Placement (AP) Statistics students, comparing Educational Testing Service (ETS) College Board AP Statistics examination scores…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-20
.... Please note that EPA's policy is that public comments, whether submitted electronically or in paper, will... learning to perform quantitative hot-spot analyses; new burden associated with using the MOVES model for..., adjustment for increased burden associated with quantitative hot-spot analyses, an adjustment for the...
Quantitative 13C NMR characterization of fast pyrolysis oils
Happs, Renee M.; Lisa, Kristina; Ferrell, III, Jack R.
2016-10-20
Quantitative 13C NMR analysis of model catalytic fast pyrolysis (CFP) oils following literature procedures showed poor agreement for aromatic hydrocarbons between NMR measured concentrations and actual composition. Furthermore, modifying integration regions based on DEPT analysis for aromatic carbons resulted in better agreement. Solvent effects were also investigated for hydrotreated CFP oil.
NASA Astrophysics Data System (ADS)
Reineker, P.; Kenkre, V. M.; Kühne, R.
1981-08-01
A quantitative comparison of a simple theoretical prediction for the drift mobility of photo-electrons in organic molecular crystals, calculated within the model of coupled band-like and hopping motion, with experiments on naphthalene by Schein et al. and Karl et al. is given.
ERIC Educational Resources Information Center
Brown, Aaron D.
2016-01-01
The intent of this research is to offer a quantitative analysis of self-determined faculty motivation within the current corporate model of higher education across public and private research universities. With such a heightened integration of accountability structures, external reward systems, and the ongoing drive for more money and…
ERIC Educational Resources Information Center
Castillo, Alan F.
2014-01-01
The purpose of this quantitative correlational cross-sectional research study was to examine a theoretical model consisting of leadership practice, attitudes of business process outsourcing, and strategic intentions of leaders to use cloud computing and to examine the relationships between each of the variables respectively. This study…
Simulating the Effects of Alternative Forest Management Strategies on Landscape Structure
Eric J. Gustafson; Thomas Crow
1996-01-01
Quantitative, spatial tools are needed to assess the long-term spatial consequences of alternative management strategies for land use planning and resource management. We constructed a timber harvest allocation model (HARVEST) that provides a visual and quantitative means to predict the spatial pattern of forest openings produced by alternative harvest strategies....
Implementation of the Moodle System into EFL Classes
ERIC Educational Resources Information Center
Gunduz, Nuket; Ozcan, Deniz
2017-01-01
This study aims to examine students' perception on using the Moodle system in secondary school in English as a foreign language lessons. A mixed method approach was used in this study with qualitative and quantitative research models. The study group consisted of 333 students and 12 English language teachers. The quantitative data were collected…
Economic analysis of light brown apple moth using GIS and quantitative modeling
Glenn Fowler; Lynn Garrett; Alison Neeley; Roger Magarey; Dan Borchert; Brian Spears
2011-01-01
We conducted an economic analysis of the light brown apple moth (LBAM) (Epiphyas postvittana (Walker)), whose presence in California has resulted in a regulatory program. Our objective was to quantitatively characterize the economic costs to apple, grape, orange, and pear crops that would result from LBAM's introduction into the continental...
Strengthening Student Engagement with Quantitative Subjects in a Business Faculty
ERIC Educational Resources Information Center
Warwick, Jon; Howard, Anna
2014-01-01
This paper reflects on the results of research undertaken at a large UK university relating to the teaching of quantitative subjects within a Business Faculty. It builds on a simple model of student engagement and, through the description of three case studies, describes research undertaken and developments implemented to strengthen aspects of the…
NASA Astrophysics Data System (ADS)
Wang, Pin; Bista, Rajan K.; Khalbuss, Walid E.; Qiu, Wei; Uttam, Shikhar; Staton, Kevin; Zhang, Lin; Brentnall, Teresa A.; Brand, Randall E.; Liu, Yang
2010-11-01
Definitive diagnosis of malignancy is often challenging due to the limited availability of human cell or tissue samples and morphological similarity with certain benign conditions. Our recently developed technology, spatial-domain low-coherence quantitative phase microscopy (SL-QPM), overcomes these technical difficulties and enables us to obtain quantitative information about cell nuclear architectural characteristics with nanoscale sensitivity. We explore its ability to improve the identification of malignancy, especially in cytopathologically non-cancerous-appearing cells. We perform proof-of-concept experiments with an animal model of colorectal carcinogenesis, the APCMin mouse model, and with human cytology specimens of colorectal cancer. We show the ability of in situ nanoscale nuclear architectural characteristics to identify cancerous cells, especially those labeled as "indeterminate or normal" by expert cytopathologists. Our approach is based on quantitative analysis of the cell nucleus on the original cytology slides without additional processing, which can be readily applied in a conventional clinical setting. Our simple and practical optical microscopy technique may lead to the development of novel methods for early detection of cancer.