Science.gov

Sample records for achieve maximum sensitivity

  1. Achieving Maximum Power in Thermoelectric Generation with Simple Power Electronics

    NASA Astrophysics Data System (ADS)

    Youn, Nari; Lee, Hohyun; Wee, Daehyun; Gomez, Miguel; Reid, Rachel; Ohara, Brandon

    2014-06-01

    A thermoelectric generator typically delivers a relatively low power output, and hence it is of great practical importance to determine a design and operating condition close to those which can provide the maximum attainable power. To maintain a favorable condition for the maximum power output, power electronics circuits are usually applied. One of the simplest methods is to control the operating voltage at half the open-circuit voltage, assuming that the typical impedance-matching condition, in which the load and internal resistances are matched, yields the maximum power output. However, recent investigations have shown that, when external thermal resistances exist between the thermoelectric modules and thermal reservoirs, the impedance-matching condition is not identical to the condition for the maximum power output. In this article, it is argued that, although the impedance-matching condition is not the condition for maximum power output, the maximum power is still achievable when the operating voltage is kept at half the open-circuit voltage. More precisely, it is shown that the typical V-I curve for thermoelectric generators must show approximately linear behavior, which justifies the use of a simple strategy in thermoelectric power generation applications. The conditions for the validity of the approximation are mathematically discussed, supported by a few examples. Experimental evidence at room temperature is also provided.
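
    A minimal numerical sketch of the argument above, assuming an ideally linear V-I characteristic with hypothetical values for the open-circuit voltage and internal resistance: with I = (Voc - V)/Rint, the delivered power P = V(Voc - V)/Rint peaks exactly at V = Voc/2.

        # Sketch: power versus operating voltage for a linear V-I curve (hypothetical values).
        import numpy as np

        V_oc = 4.0    # open-circuit voltage [V] (assumed)
        R_int = 2.0   # internal resistance [ohm] (assumed)

        V = np.linspace(0.0, V_oc, 1001)   # operating voltage sweep
        P = V * (V_oc - V) / R_int         # P = V * I with I = (V_oc - V) / R_int

        print(V[np.argmax(P)], V_oc / 2)   # both ~2.0 V: maximum power at half V_oc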

  2. The optimal polarizations for achieving maximum contrast in radar images

    NASA Technical Reports Server (NTRS)

    Swartz, A. A.; Yueh, H. A.; Kong, J. A.; Novak, L. M.; Shin, R. T.

    1988-01-01

    There is considerable interest in determining the optimal polarizations that maximize contrast between two scattering classes in polarimetric radar images. A systematic approach is presented for obtaining the optimal polarimetric matched filter, i.e., that filter which produces maximum contrast between two scattering classes. The maximization procedure involves solving an eigenvalue problem where the eigenvector corresponding to the maximum contrast ratio is an optimal polarimetric matched filter. To exhibit the physical significance of this filter, it is transformed into its associated transmitting and receiving polarization states, written in terms of horizontal and vertical vector components. For the special case where the transmitting polarization is fixed, the receiving polarization which maximizes the contrast ratio is also obtained. Polarimetric filtering is then applied to synthetic aperture radar images obtained from the Jet Propulsion Laboratory. It is shown, both numerically and through the use of radar imagery, that maximum image contrast can be realized when data is processed with the optimal polarimetric matched filter.
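
    A hedged sketch of the eigenvalue formulation described above (illustrative covariance matrices, not the authors' data or code): the contrast between two scattering classes is a generalized Rayleigh quotient, and the optimal filter is the principal generalized eigenvector.

        # Sketch: optimal matched filter w maximizing the contrast w^H A w / w^H B w.
        import numpy as np
        from scipy.linalg import eigh

        A = np.array([[3.0, 0.5, 0.1],   # class-1 polarimetric covariance (assumed)
                      [0.5, 2.0, 0.2],
                      [0.1, 0.2, 1.0]])
        B = np.array([[1.0, 0.1, 0.0],   # class-2 polarimetric covariance (assumed)
                      [0.1, 1.5, 0.1],
                      [0.0, 0.1, 2.0]])

        vals, vecs = eigh(A, B)          # generalized eigenproblem A w = lambda B w
        w_opt = vecs[:, -1]              # eigenvector of the largest eigenvalue
        print(vals[-1], w_opt)           # maximum contrast ratio and the matched filter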

  3. Convective gas flow development and the maximum depths achieved by helophyte vegetation in lakes

    PubMed Central

    Sorrell, Brian K.; Hawes, Ian

    2010-01-01

    Background and Aims Convective gas flow in helophytes (emergent aquatic plants) is thought to be an important adaptation for the ability to colonize deep water. In this study, the maximum depths achieved by seven helophytes were compared in 17 lakes differing in nutrient enrichment, light attenuation, shoreline exposure and sediment characteristics to establish the importance of convective flow for their ability to form the deepest helophyte vegetation in different environments. Methods Convective gas flow development was compared amongst the seven species, and species were allocated to ‘flow absent’, ‘low flow’ and ‘high flow’ categories. Regression tree analysis and quantile regression analysis were used to determine the roles of flow category, lake water quality, light attenuation and shoreline exposure on maximum helophyte depths. Key Results Two ‘flow absent’ species were restricted to very shallow water in all lakes and their depths were not affected by any environmental parameters. Three ‘low flow’ and two ‘high flow’ species had wide depth ranges, but ‘high flow’ species formed the deepest vegetation far more frequently than ‘low flow’ species. The ‘low flow’ species formed the deepest vegetation most commonly in oligotrophic lakes where oxygen demands in sediments were low, especially on exposed shorelines. The ‘high flow’ species were almost always those forming the deepest vegetation in eutrophic lakes, with Eleocharis sphacelata predominant when light attenuation was low, and Typha orientalis when light attenuation was high. Depths achieved by all five species with convective flow were limited by shoreline exposure, but T. orientalis was the least exposure-sensitive species. Conclusions Development of convective flow appears to be essential for dominance of helophyte species in >0·5 m depth, especially under eutrophic conditions. Exposure, sediment characteristics and light attenuation frequently constrain them

  4. Netest: A Tool to Measure the Maximum Burst Size, Available Bandwidth and Achievable Throughput

    SciTech Connect

    Jin, Guojun; Tierney, Brian

    2003-01-31

    Distinguishing available bandwidth and achievable throughput is essential for improving network applications' performance. Achievable throughput is the throughput considering a number of factors such as network protocol, host speed, network path, and TCP buffer space, whereas available bandwidth only considers the network path. Without understanding this difference, trying to improve network applications' performance is like "blind men feeling the elephant" [4]. In this paper, we define and distinguish bandwidth and throughput, and debate which part of each is achievable and which is available. Also, we introduce and discuss a new concept - Maximum Burst Size - that is crucial to network performance and bandwidth sharing. A tool, netest, is introduced to help users determine the available bandwidth and to provide information for achieving better throughput with fair sharing of the available bandwidth, thus reducing misuse of the network.
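
    A rough sketch of the distinction drawn above, with hypothetical numbers (this is not the netest algorithm): achievable throughput is bounded both by the available bandwidth of the path and by host and protocol limits such as the TCP buffer divided by the round-trip time.

        # Sketch: achievable throughput versus available bandwidth (illustrative values only).
        def achievable_throughput(avail_bw_mbps, tcp_buffer_bytes, rtt_s):
            """Upper bound on TCP throughput: limited by the path and by window/RTT."""
            window_limit_mbps = tcp_buffer_bytes * 8 / rtt_s / 1e6
            return min(avail_bw_mbps, window_limit_mbps)

        # 1 Gb/s of available bandwidth is not achievable with a 64 KB buffer at 50 ms RTT.
        print(achievable_throughput(1000.0, 64 * 1024, 0.050))   # ~10.5 Mb/s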

  5. Evaluating the Instructional Sensitivity of Four States' Student Achievement Tests

    ERIC Educational Resources Information Center

    Polikoff, Morgan S.

    2016-01-01

    As state tests of student achievement are used for an increasingly wide array of high- and low-stakes purposes, evaluating their instructional sensitivity is essential. This article uses data from the Bill and Melinda Gates Foundation's Measures of Effective Teaching Project to examine the instructional sensitivity of 4 states' mathematics and English…

  6. Component Prioritization Schema for Achieving Maximum Time and Cost Benefits from Software Testing

    NASA Astrophysics Data System (ADS)

    Srivastava, Praveen Ranjan; Pareek, Deepak

    Software testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. Defining the end of software testing is a crucial aspect of any software development project. A premature release will involve risks like undetected bugs, cost of fixing faults later, and discontented customers. Any software organization would want to achieve maximum possible benefits from software testing with minimum resources. Testing time and cost need to be optimized for achieving a competitive edge in the market. In this paper, we propose a schema, called the Component Prioritization Schema (CPS), to achieve an effective and uniform prioritization of the software components. This schema serves as an extension to the Non-Homogeneous Poisson Process-based Cumulative Priority Model. We also introduce an approach for handling time-intensive versus cost-intensive projects.
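
    For context, a common instance of a Non-Homogeneous Poisson Process reliability-growth model is the Goel-Okumoto mean-value function, which estimates the faults found by testing time t; the sketch below uses assumed parameters and is not the cumulative priority model itself.

        # Sketch: expected faults detected by time t under the Goel-Okumoto NHPP model.
        import math

        def expected_faults(t, a=100.0, b=0.05):
            """a = total expected faults, b = detection rate (assumed values)."""
            return a * (1.0 - math.exp(-b * t))

        for t in (10, 40, 80):
            print(t, round(100.0 - expected_faults(t), 1))   # remaining faults shrink with test time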

  7. Achieving maximum plant yield in a weightless, bioregenerative system for a space craft.

    PubMed

    Salisbury, F B

    1984-01-01

    Limitations to maximum plant yield are photosynthesis, respiration, and harvest index (edible/total biomass). Our best results with wheat equal 97.5 g total biomass m-2 day-1. Theoretical maximums for our continuous 900 micromoles photons m-2 s-1 = 175 g carbohydrate, so our life-cycle efficiency is about 56%. Mineral nutrition has posed problems, but these are now nearly solved. CO2 levels are about 80 micromoles m-3 (1700 ppm; ambient = 330 ppm). We have grown wheat plants successfully under low-pressure sodium lamps. The main factor promising increased yields is canopy development. About half the life cycle is required to develop a canopy that uses light efficiently. At that point, we achieve 89% of maximum theoretical growth, suggesting that most parameters are nearly optimal. The next important frontier concerns application of these techniques to the microgravity environment of a space craft. There are engineering problems connected with circulation of nutrient solutions, for example. Plant responses to microgravity could decrease or increase yields. Leaves become epinastic, grass nodes elongate, and roots grow out of their medium. We are proposing space experiments to study these problems.
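
    The quoted life-cycle efficiency follows directly from the two figures above (values as stated in the abstract):

        # Sketch: life-cycle efficiency from measured yield and the stated theoretical maximum.
        best_yield = 97.5        # g total biomass m-2 day-1 (measured)
        theoretical_max = 175.0  # g carbohydrate m-2 day-1 (theoretical maximum)
        print(best_yield / theoretical_max)   # ~0.557, i.e. about 56%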

  8. Achieving Maximum Power from Thermoelectric Generators with Maximum-Power-Point-Tracking Circuits Composed of a Boost-Cascaded-with-Buck Converter

    NASA Astrophysics Data System (ADS)

    Park, Hyunbin; Sim, Minseob; Kim, Shiho

    2015-06-01

    We propose a way of achieving maximum power and power-transfer efficiency from thermoelectric generators by optimized selection of maximum-power-point-tracking (MPPT) circuits composed of a boost-cascaded-with-buck converter. We investigated the effect of switch resistance on the MPPT performance of thermoelectric generators. The on-resistances of the switches affect the decrease in the conversion gain and reduce the maximum output power obtainable. Although the incremental values of the switch resistances are small, the resulting difference in the maximum duty ratio between the input and output powers is significant. For an MPPT controller composed of a boost converter with a practical nonideal switch, we need to monitor the output power instead of the input power to track the maximum power point of the thermoelectric generator. We provide a design strategy for MPPT controllers by considering the compromise in which a decrease in switch resistance causes an increase in the parasitic capacitance of the switch.
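
    A hedged sketch of an output-power-based perturb-and-observe loop, the generic form of the strategy discussed above (toy power curve and step size are assumptions, not the paper's boost-cascaded-with-buck controller):

        # Sketch: perturb-and-observe MPPT stepping the duty ratio on measured output power.
        def mppt_step(duty, prev_duty, p_out, prev_p_out, step=0.01):
            """Keep moving the duty ratio while output power rises; reverse otherwise."""
            direction = 1.0 if duty >= prev_duty else -1.0
            if p_out < prev_p_out:
                direction = -direction
            return min(max(duty + direction * step, 0.0), 0.95)

        p_out_of = lambda d: 10.0 - 40.0 * (d - 0.6) ** 2   # toy output power, peak at d = 0.6

        duty_prev, duty = 0.30, 0.31
        for _ in range(80):
            duty_next = mppt_step(duty, duty_prev, p_out_of(duty), p_out_of(duty_prev))
            duty_prev, duty = duty, duty_next

        print(round(duty, 2))   # settles near 0.6, the maximum power point of the toy curve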

  9. States of maximum polarization for a quantum light field and states of a maximum sensitivity in quantum interferometry

    NASA Astrophysics Data System (ADS)

    Peřinová, Vlasta; Lukš, Antonín

    2015-06-01

    The SU(2) group is used in two different fields of quantum optics, quantum polarization and quantum interferometry. Quantum degrees of polarization may be based on distances of a polarization state from the set of unpolarized states. The maximum polarization is achieved in the case where the state is pure and then the distribution of the photon-number sums is optimized. In quantum interferometry, the SU(2) intelligent states also have the property that the Fisher measure of information is equal to the inverse minimum detectable phase shift under the usual simplifying condition. Previously, the optimization of the Fisher information under a constraint was studied. Now, in the framework of constraint optimization, states similar to the SU(2) intelligent states are treated.

  10. Curating NASA's future extraterrestrial sample collections: How do we achieve maximum proficiency?

    NASA Astrophysics Data System (ADS)

    McCubbin, Francis; Evans, Cynthia; Allton, Judith; Fries, Marc; Righter, Kevin; Zolensky, Michael; Zeigler, Ryan

    2016-07-01

    Introduction: The Astromaterials Acquisition and Curation Office (henceforth referred to herein as NASA Curation Office) at NASA Johnson Space Center (JSC) is responsible for curating all of NASA's extraterrestrial samples. Under the governing document, NASA Policy Directive (NPD) 7100.10E "Curation of Extraterrestrial Materials", JSC is charged with "The curation of all extraterrestrial material under NASA control, including future NASA missions." The Directive goes on to define Curation as including "…documentation, preservation, preparation, and distribution of samples for research, education, and public outreach." Here we describe some of the ongoing efforts to ensure that the future activities of the NASA Curation Office are working towards a state of maximum proficiency. Founding Principle: Curatorial activities began at JSC (Manned Spacecraft Center before 1973) as soon as design and construction planning for the Lunar Receiving Laboratory (LRL) began in 1964 [1], not with the return of the Apollo samples in 1969, nor with the completion of the LRL in 1967. This practice has since proven that curation begins as soon as a sample return mission is conceived, and this founding principle continues to return dividends today [e.g., 2]. The Next Decade: Part of the curation process is planning for the future, and we refer to these planning efforts as "advanced curation" [3]. Advanced Curation is tasked with developing procedures, technology, and data sets necessary for curating new types of collections as envisioned by NASA exploration goals. We are (and have been) planning for future curation, including cold curation, extended curation of ices and volatiles, curation of samples with special chemical considerations such as perchlorate-rich samples, curation of organically- and biologically-sensitive samples, and the use of minimally invasive analytical techniques (e.g., micro-CT, [4]) to characterize samples. These efforts will be useful for Mars Sample Return

  11. Mind the bubbles: achieving stable measurements of maximum hydraulic conductivity through woody plant samples

    PubMed Central

    Espino, Susana; Schenk, H. Jochen

    2011-01-01

    The maximum specific hydraulic conductivity (kmax) of a plant sample is a measure of the ability of a plant's vascular system to transport water and dissolved nutrients under optimum conditions. Precise measurements of kmax are needed in comparative studies of hydraulic conductivity, as well as for measuring the formation and repair of xylem embolisms. Unstable measurements of kmax are a common problem when measuring woody plant samples and it is commonly observed that kmax declines from initially high values, especially when positive water pressure is used to flush out embolisms. This study was designed to test five hypotheses that could potentially explain declines in kmax under positive pressure: (i) non-steady-state flow; (ii) swelling of pectin hydrogels in inter-vessel pit membranes; (iii) nucleation and coalescence of bubbles at constrictions in the xylem; (iv) physiological wounding responses; and (v) passive wounding responses, such as clogging of the xylem by debris. Prehydrated woody stems from Laurus nobilis (Lauraceae) and Encelia farinosa (Asteraceae), collected from plants grown in the Fullerton Arboretum in Southern California, were used to test these hypotheses using a xylem embolism meter (XYL'EM). Treatments included simultaneous measurements of stem inflow and outflow, enzyme inhibitors, stem-debarking, low water temperatures, different water degassing techniques, and varied concentrations of calcium, potassium, magnesium, and copper salts in aqueous measurement solutions. Stable measurements of kmax were observed at concentrations of calcium, potassium, and magnesium salts high enough to suppress bubble coalescence, as well as with deionized water that was degassed using a membrane contactor under strong vacuum. Bubble formation and coalescence under positive pressure in the xylem therefore appear to be the main cause for declining kmax values. Our findings suggest that degassing of water is essential for achieving stable and precise
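
    For reference, specific hydraulic conductivity is conventionally computed from flow rate, segment length, driving pressure and xylem cross-sectional area; the numbers below are hypothetical and are not data from this study.

        # Sketch: specific hydraulic conductivity k = (Q * L) / (dP * A), illustrative values.
        Q = 2.0e-7    # flow rate through the segment [kg s-1] (assumed)
        L = 0.10      # segment length [m] (assumed)
        dP = 0.005    # driving pressure difference [MPa] (assumed)
        A = 2.0e-5    # xylem cross-sectional area [m2] (assumed)

        k_spec = (Q * L) / (dP * A)
        print(k_spec)   # [kg m-1 s-1 MPa-1]; kmax is this value once embolisms are removed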

  12. Mind the bubbles: achieving stable measurements of maximum hydraulic conductivity through woody plant samples.

    PubMed

    Espino, Susana; Schenk, H Jochen

    2011-01-01

    The maximum specific hydraulic conductivity (k(max)) of a plant sample is a measure of the ability of a plant's vascular system to transport water and dissolved nutrients under optimum conditions. Precise measurements of k(max) are needed in comparative studies of hydraulic conductivity, as well as for measuring the formation and repair of xylem embolisms. Unstable measurements of k(max) are a common problem when measuring woody plant samples and it is commonly observed that k(max) declines from initially high values, especially when positive water pressure is used to flush out embolisms. This study was designed to test five hypotheses that could potentially explain declines in k(max) under positive pressure: (i) non-steady-state flow; (ii) swelling of pectin hydrogels in inter-vessel pit membranes; (iii) nucleation and coalescence of bubbles at constrictions in the xylem; (iv) physiological wounding responses; and (v) passive wounding responses, such as clogging of the xylem by debris. Prehydrated woody stems from Laurus nobilis (Lauraceae) and Encelia farinosa (Asteraceae), collected from plants grown in the Fullerton Arboretum in Southern California, were used to test these hypotheses using a xylem embolism meter (XYL'EM). Treatments included simultaneous measurements of stem inflow and outflow, enzyme inhibitors, stem-debarking, low water temperatures, different water degassing techniques, and varied concentrations of calcium, potassium, magnesium, and copper salts in aqueous measurement solutions. Stable measurements of k(max) were observed at concentrations of calcium, potassium, and magnesium salts high enough to suppress bubble coalescence, as well as with deionized water that was degassed using a membrane contactor under strong vacuum. Bubble formation and coalescence under positive pressure in the xylem therefore appear to be the main cause for declining k(max) values. Our findings suggest that degassing of water is essential for achieving stable and

  13. Computational aspects of maximum likelihood estimation and reduction in sensitivity function calculations

    NASA Technical Reports Server (NTRS)

    Gupta, N. K.; Mehra, R. K.

    1974-01-01

    This paper discusses numerical aspects of computing maximum likelihood estimates for linear dynamical systems in state-vector form. Different gradient-based nonlinear programming methods are discussed in a unified framework and their applicability to maximum likelihood estimation is examined. The problems due to singular Hessian or singular information matrix that are common in practice are discussed in detail and methods for their solution are proposed. New results on the calculation of state sensitivity functions via reduced order models are given. Several methods for speeding convergence and reducing computation time are also discussed.
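
    One standard remedy for the singular-information-matrix problem mentioned above is to damp the Newton-type update, Levenberg-Marquardt style; the sketch below uses an illustrative rank-deficient matrix and is not the paper's algorithm.

        # Sketch: damping a singular information matrix before solving for the parameter update.
        import numpy as np

        def damped_step(information, gradient, damping=1e-3):
            """Solve (M + damping * I) delta = gradient for the update delta."""
            n = information.shape[0]
            return np.linalg.solve(information + damping * np.eye(n), gradient)

        M = np.array([[4.0, 2.0], [2.0, 1.0]])   # singular information matrix (rank 1)
        g = np.array([1.0, 0.5])
        print(damped_step(M, g))                 # finite update despite the singular M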

  14. TEACH (Train to Enable/Achieve Culturally Sensitive Healthcare)

    NASA Technical Reports Server (NTRS)

    Maulitz, Russell; Santarelli, Thomas; Barnieu, Joanne; Rosenzweig, Larry; Yi, Na Yi; Zachary, Wayne; OConnor, Bonnie

    2010-01-01

    Personnel from diverse ethnic and demographic backgrounds come together in both civilian and military healthcare systems, facing diagnoses that at one level are equalizers: coronary disease is coronary disease, breast cancer is breast cancer. Yet the expression of disease in individuals from different backgrounds, individual patient experience of disease as a particular illness, and interactions between patients and providers occurring in any given disease scenario, all vary enormously depending on the fortuity of the equation of "which patient happens to arrive in whose exam room." Previously, providers' absorption of lessons-learned depended on learning as an apprentice would when exposed over time to multiple populations. As a result, and because providers are often thrown into situations where communications falter through inadequate direct patient experience, diversity in medicine remains a training challenge. The questions then become: Can simulation and virtual training environments (VTEs) be deployed to short-track and standardize this sort of random-walk problem? Can we overcome the unevenness of training caused by some providers obtaining the valuable exposure to diverse populations, whereas others are left to "sink or swim"? This paper summarizes the development of a computer-based VTE called TEACH (Training to Enable/Achieve Culturally Sensitive Healthcare). TEACH was developed to enhance healthcare providers' skills in delivering culturally sensitive care to African-American women with breast cancer. With an authoring system under development to ensure extensibility, TEACH allows users to role-play in clinical oncology settings with virtual characters who interact on the basis of different combinations of African American sub-cultural beliefs regarding breast cancer. The paper reports on the roll-out and evaluation of the degree to which these interactions allow providers to acquire, practice, and refine culturally appropriate communication skills and to

  16. Effect of Date and Location on Maximum Achievable Altitude for a Solar Powered Aircraft

    NASA Technical Reports Server (NTRS)

    Colozza, Anthony J.

    1997-01-01

    The maximum altitude attainable for a solar powered aircraft without any energy storage capability is examined. Mission profiles for a solar powered aircraft were generated over a range of latitudes and dates. These profiles were used to determine which latitude-date combinations produced the highest achievable altitude. Based on the presented analysis, the results show that for a given time of year lower latitudes produce higher maximum altitudes. For all the cases examined, the time and date which produced the highest altitude was around March at the equator.
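
    One geometric ingredient behind the latitude and date trends reported above is the noon solar elevation, sketched here with simplified geometry (hypothetical inputs; day length, atmosphere and aircraft performance are ignored):

        # Sketch: solar elevation at local noon for a given latitude and solar declination.
        def noon_elevation_deg(latitude_deg, declination_deg):
            """Noon elevation = 90 - |latitude - declination| (degrees)."""
            return 90.0 - abs(latitude_deg - declination_deg)

        # Near the March equinox the declination is ~0, so the sun is overhead at the equator.
        print(noon_elevation_deg(0.0, 0.0))    # 90 degrees at the equator
        print(noon_elevation_deg(45.0, 0.0))   # 45 degrees at 45 N on the same date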

  17. Curating NASA's Future Extraterrestrial Sample Collections: How Do We Achieve Maximum Proficiency?

    NASA Technical Reports Server (NTRS)

    McCubbin, Francis; Evans, Cynthia; Zeigler, Ryan; Allton, Judith; Fries, Marc; Righter, Kevin; Zolensky, Michael

    2016-01-01

    The Astromaterials Acquisition and Curation Office (henceforth referred to herein as NASA Curation Office) at NASA Johnson Space Center (JSC) is responsible for curating all of NASA's extraterrestrial samples. Under the governing document, NASA Policy Directive (NPD) 7100.10E "Curation of Extraterrestrial Materials", JSC is charged with "The curation of all extraterrestrial material under NASA control, including future NASA missions." The Directive goes on to define Curation as including "... documentation, preservation, preparation, and distribution of samples for research, education, and public outreach." Here we describe some of the ongoing efforts to ensure that the future activities of the NASA Curation Office are working towards a state of maximum proficiency.

  18. 40 CFR 63.43 - Maximum achievable control technology (MACT) determinations for constructed and reconstructed...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... for Major Sources in Accordance With Clean Air Act Sections, Sections 112(g) and 112(j) § 63.43... achieving such emission reduction and any non-air quality health and environmental impacts and energy..., and analysis of cost and non-air quality health environmental impacts or energy requirements for...

  19. Slip resistance of winter footwear on snow and ice measured using maximum achievable incline.

    PubMed

    Hsu, Jennifer; Shaw, Robert; Novak, Alison; Li, Yue; Ormerod, Marcus; Newton, Rita; Dutta, Tilak; Fernie, Geoff

    2016-05-01

    Protective footwear is necessary for preventing injurious slips and falls in winter conditions. Valid methods for assessing footwear slip resistance on winter surfaces are needed in order to evaluate footwear and outsole designs. The purpose of this study was to utilise a method of testing winter footwear that was ecologically valid in terms of involving actual human testers walking on realistic winter surfaces to produce objective measures of slip resistance. During the experiment, eight participants tested six styles of footwear on wet ice, on dry ice, and on dry ice after walking over soft snow. Slip resistance was measured by determining the maximum incline angles participants were able to walk up and down in each footwear-surface combination. The results indicated that testing on a variety of surfaces is necessary for establishing winter footwear performance and that standard mechanical bench tests for footwear slip resistance do not adequately reflect actual performance. Practitioner Summary: Existing standardised methods for measuring footwear slip resistance lack validation on winter surfaces. By determining the maximum inclines participants could walk up and down slopes of wet ice, dry ice, and ice with snow, in a range of footwear, an ecologically valid test for measuring winter footwear performance was established. PMID:26555738
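
    The maximum achievable incline maps onto an equivalent minimum coefficient of friction through the standard relation mu = tan(theta); the angles below are illustrative, not results from the study.

        # Sketch: converting a maximum achievable incline into a required coefficient of friction.
        import math

        for incline_deg in (5, 10, 15):
            mu = math.tan(math.radians(incline_deg))
            print(incline_deg, round(mu, 3))   # e.g. a 10 degree incline implies mu of about 0.18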

  1. Slip resistance of winter footwear on snow and ice measured using maximum achievable incline

    PubMed Central

    Hsu, Jennifer; Shaw, Robert; Novak, Alison; Li, Yue; Ormerod, Marcus; Newton, Rita; Dutta, Tilak; Fernie, Geoff

    2016-01-01

    Abstract Protective footwear is necessary for preventing injurious slips and falls in winter conditions. Valid methods for assessing footwear slip resistance on winter surfaces are needed in order to evaluate footwear and outsole designs. The purpose of this study was to utilise a method of testing winter footwear that was ecologically valid in terms of involving actual human testers walking on realistic winter surfaces to produce objective measures of slip resistance. During the experiment, eight participants tested six styles of footwear on wet ice, on dry ice, and on dry ice after walking over soft snow. Slip resistance was measured by determining the maximum incline angles participants were able to walk up and down in each footwear–surface combination. The results indicated that testing on a variety of surfaces is necessary for establishing winter footwear performance and that standard mechanical bench tests for footwear slip resistance do not adequately reflect actual performance. Practitioner Summary: Existing standardised methods for measuring footwear slip resistance lack validation on winter surfaces. By determining the maximum inclines participants could walk up and down slopes of wet ice, dry ice, and ice with snow, in a range of footwear, an ecologically valid test for measuring winter footwear performance was established. PMID:26555738

  2. Maximum likelihood algorithm using an efficient scheme for computing sensitivities and parameter confidence intervals

    NASA Technical Reports Server (NTRS)

    Murphy, P. C.; Klein, V.

    1984-01-01

    Improved techniques for estimating airplane stability and control derivatives and their standard errors are presented. A maximum likelihood estimation algorithm is developed which relies on an optimization scheme referred to as a modified Newton-Raphson scheme with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort compared to integrating the analytically-determined sensitivity equations or using a finite difference scheme. An aircraft estimation problem is solved using real flight data to compare MNRES with the commonly used modified Newton-Raphson technique; MNRES is found to be faster and more generally applicable. Parameter standard errors are determined using a random search technique. The confidence intervals obtained are compared with Cramer-Rao lower bounds at the same confidence level. It is observed that the nonlinearity of the cost function is an important factor in the relationship between Cramer-Rao bounds and the error bounds determined by the search technique.
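
    A hedged sketch of the idea of estimating sensitivities from a local surface fit over previously evaluated parameter points (toy model output; not the MNRES implementation):

        # Sketch: local linear fit of model output in parameter space; slopes approximate sensitivities.
        import numpy as np

        def model_output(theta):
            """Toy scalar output of a two-parameter model (assumed)."""
            return 3.0 * theta[0] - 2.0 * theta[1] + 0.1 * theta[0] * theta[1]

        thetas = np.array([[1.0, 1.0], [1.1, 1.0], [1.0, 1.1], [1.05, 1.05]])
        y = np.array([model_output(t) for t in thetas])

        X = np.column_stack([np.ones(len(thetas)), thetas])   # fit y ~ c0 + c1*t1 + c2*t2
        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
        print(coeffs[1:])   # approximately [3.1, -1.9] near theta = (1, 1): the local sensitivities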

  3. Local landscape predictors of maximum stream temperature and thermal sensitivity in the Columbia River Basin, USA.

    PubMed

    Chang, Heejun; Psaris, Mike

    2013-09-01

    Stream temperature regimes are important determinants of the health of lotic ecosystems, and a proper understanding of the landscape factors affecting stream temperatures is needed for water managers to make informed decisions. We analyzed spatial patterns of thermal sensitivity (response of stream temperature to changes in air temperature) and maximum stream temperature for 74 stations in the Columbia River basin, to identify landscape factors affecting these two indices of stream temperature regimes. Thermal sensitivity (TS) is largely controlled by distance to the Pacific Coast, base flow index, and contributing area. Maximum stream temperature (Tmax) is mainly controlled by base flow index, percent forest land cover, and stream order. The analysis of four different spatial scales--relative contributing area (RCA) scale, RCA buffered scale, 1 km upstream RCA scale, and 1 km upstream buffer scale--yields different significant factors, with topographic factors such as slope becoming more important at the buffer scale analysis for TS. Geographically weighted regression (GWR), which takes into account spatially non-stationary processes, better predicts the spatial variations of TS and Tmax with higher R(2) and lower residual values than ordinary least squares (OLS) estimates. With different coefficient values over space, GWR models explain up to approximately 62% of the variation in TS and Tmax. Percent forest land cover coefficients had both positive and negative values, suggesting that the relative importance of forest changes over space. Such spatially varying GWR coefficients are associated with land cover, hydroclimate, and topographic variables. OLS estimated regression residuals are positively autocorrelated over space at the RCA scale, while the GWR residuals exhibit no spatial autocorrelation at all scales. GWR models provide useful additional information on the spatial processes generating the variations of TS and Tmax, potentially serving as a useful tool
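
    A hedged sketch of the core of geographically weighted regression, a locally weighted least-squares fit at each site with a Gaussian distance kernel (synthetic data; not the Columbia River Basin analysis):

        # Sketch: locally weighted regression of Tmax on forest cover at each station.
        import numpy as np

        rng = np.random.default_rng(0)
        coords = rng.uniform(0, 100, size=(74, 2))   # station locations (synthetic)
        forest = rng.uniform(0, 1, size=74)          # percent forest cover (synthetic)
        tmax = 25 - 8 * forest * (coords[:, 0] / 100) + rng.normal(0, 0.5, 74)

        def gwr_coeff(i, bandwidth=30.0):
            """Local slope of tmax on forest cover at station i."""
            d = np.linalg.norm(coords - coords[i], axis=1)
            w = np.exp(-(d / bandwidth) ** 2)        # Gaussian kernel weights
            X = np.column_stack([np.ones(74), forest])
            W = np.diag(w)
            beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ tmax)
            return beta[1]

        print(gwr_coeff(0), gwr_coeff(40))           # local coefficients vary across space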

  5. Sensitivity of depth of maximum and absorption depth of EAS to hadron production mechanism

    NASA Technical Reports Server (NTRS)

    Antonov, R. A.; Galkin, V. I.; Hein, L. A.; Ivanenko, I. P.; Kanevsky, B. L.; Kuzmin, V. A.

    1985-01-01

    Comparison of experimental data on the depth of extensive air shower (EAS) development maximum in the atmosphere, T_M, and the path of absorption, lambda, in the lower atmosphere of EAS with fixed particle number in the energy region eV with the results of calculation shows that these parameters are sensitive mainly to the inelastic interaction cross section and scaling violation in the fragmentation and pionization regions. The data are explained in a unified manner within the framework of a model in which scaling is violated slightly in the fragmentation region and strongly in the pionization region, with a primary cosmic-ray composition close to the normal one and a permanent increase of the inelastic interaction cross section. It is shown that, while interpreting the experimental data, disregard of two methodical points causes a systematic shift in T_M: (1) the shower selection system; and (2) the EAS electron lateral distribution when performing the calculations on the basis of which the transfer is made from the Cerenkov pulse FWHM to the depth of shower maximum, T_M.

  6. Sensitivity of Last Glacial Maximum climate to uncertainties in tropical and subtropical ocean temperatures

    USGS Publications Warehouse

    Hostetler, S.; Pisias, N.; Mix, A.

    2006-01-01

    The faunal and floral gradients that underlie the CLIMAP (1981) sea-surface temperature (SST) reconstructions for the Last Glacial Maximum (LGM) reflect ocean temperature gradients and frontal positions. The transfer functions used to reconstruct SSTs from biologic gradients are biased, however, because at the warmest sites they display inherently low sensitivity in translating fauna to SST and they underestimate SST within the euphotic zones where the pycnocline is strong. Here we assemble available data and apply a statistical approach to adjust for hypothetical biases in the faunal-based SST estimates of LGM temperature. The largest bias adjustments are distributed in the tropics (to address low sensitivity) and subtropics (to address underestimation in the euphotic zones). The resulting SSTs are generally in better agreement than CLIMAP with recent geochemical estimates of glacial-interglacial temperature changes. We conducted a series of model experiments using the GENESIS general atmospheric circulation model to assess the sensitivity of the climate system to our bias-adjusted SSTs. Globally, the new SST field results in a modeled LGM surface-air cooling relative to present of 6.4 °C (1.9 °C cooler than that of CLIMAP). Relative to the simulation with CLIMAP SSTs, modeled precipitation over the oceans is reduced by 0.4 mm d-1 (an anomaly -0.4 versus 0.0 mm d-1 for CLIMAP) and increased over land (an anomaly -0.2 versus -0.5 mm d-1 for CLIMAP). Regionally strong responses are induced by changes in SST gradients. Data-model comparisons indicate improvement in agreement relative to CLIMAP, but differences among terrestrial data inferences and simulated moisture and temperature remain. Our SSTs result in positive mass balance over the northern hemisphere ice sheets (primarily through reduced summer ablation), supporting the hypothesis that tropical and subtropical ocean temperatures may have played a role in triggering glacial changes at higher latitudes.

  7. Sensitivity of the Palaeocene-Eocene Thermal Maximum climate to cloud properties.

    PubMed

    Kiehl, Jeffrey T; Shields, Christine A

    2013-10-28

    The Palaeocene-Eocene Thermal Maximum (PETM) was a significant global warming event in the Earth's history (approx. 55 Ma). The cause for this warming event has been linked to increases in greenhouse gases, specifically carbon dioxide and methane. This rapid warming took place in the presence of the existing Early Eocene warm climate. Given that projected business-as-usual levels of atmospheric carbon dioxide reach concentrations of 800-1100 ppmv by 2100, it is of interest to study past climates where atmospheric carbon dioxide was higher than present. This is especially the case given the difficulty of climate models in simulating past warm climates. This study explores the sensitivity of the simulated pre-PETM and PETM periods to change in cloud condensation nuclei (CCN) and microphysical properties of liquid water clouds. Assuming lower levels of CCN for both of these periods leads to significant warming, especially at high latitudes. The study indicates that past differences in cloud properties may be an important factor in accurately simulating past warm climates. Importantly, additional shortwave warming from such a mechanism would imply lower required atmospheric CO2 concentrations for simulated surface temperatures to be in reasonable agreement with proxy data for the Eocene.

  9. Influence of MoOx interlayer on the maximum achievable open-circuit voltage in organic photovoltaic cells

    NASA Astrophysics Data System (ADS)

    Zou, Yunlong; Holmes, Russell

    2013-03-01

    Transition metal oxides including molybdenum oxide (MoOx) are characterized by large work functions and deep energy levels relative to the organic semiconductors used in photovoltaic cells (OPVs). These materials have been used in OPVs as interlayers between the indium-tin-oxide anode and the active layers to increase the open-circuit voltage (VOC) and power conversion efficiency. We examine the role of MoOx in determining the maximum achievable VOC in planar heterojunction OPVs based on the donor-acceptor pairing of boron subphthalocyanine chloride (SubPc) and C60. While causing minor changes in VOC at room temperature, the inclusion of MoOx significantly changes the temperature dependence of VOC. Devices containing no interlayer show a maximum VOC of 1.2 V, while devices containing MoOx show no saturation in VOC, reaching a value of >1.4 V at 110 K. We propose that the MoOx-SubPc interface forms a dissociating Schottky junction that provides an additional contribution to VOC at low temperature. Separate measurements of photoluminescence confirm that excitons in SubPc can be quenched by MoOx. Charge transfer at this interface is by hole extraction from SubPc to MoOx, and this mechanism favors donors with a deep highest occupied molecular orbital (HOMO) energy level. Consistent with this expectation, the temperature dependence of VOC for devices constructed using a donor with a shallower HOMO level, e.g. copper phthalocyanine, is independent of the presence of MoOx.

  10. Effects of Airfoil Thickness and Maximum Lift Coefficient on Roughness Sensitivity: 1997--1998

    SciTech Connect

    Somers, D. M.

    2005-01-01

    A matrix of airfoils has been developed to determine the effects of airfoil thickness and maximum lift coefficient on the sensitivity of the maximum lift to leading-edge roughness. The matrix consists of three natural-laminar-flow airfoils, the S901, S902, and S903, for wind turbine applications. The airfoils have been designed and analyzed theoretically and verified experimentally in the Pennsylvania State University low-speed, low-turbulence wind tunnel. The effect of roughness on the maximum lift increases with increasing airfoil thickness and decreases slightly with increasing maximum lift. Comparisons of the theoretical and experimental results generally show good agreement.

  11. Dye-sensitized solar cells with 13% efficiency achieved through the molecular engineering of porphyrin sensitizers.

    PubMed

    Mathew, Simon; Yella, Aswani; Gao, Peng; Humphry-Baker, Robin; Curchod, Basile F E; Ashari-Astani, Negar; Tavernelli, Ivano; Rothlisberger, Ursula; Nazeeruddin, Md Khaja; Grätzel, Michael

    2014-03-01

    Dye-sensitized solar cells have gained widespread attention in recent years because of their low production costs, ease of fabrication and tunable optical properties, such as colour and transparency. Here, we report a molecularly engineered porphyrin dye, coded SM315, which features the prototypical structure of a donor-π-bridge-acceptor and both maximizes electrolyte compatibility and improves light-harvesting properties. Linear-response, time-dependent density functional theory was used to investigate the perturbations in the electronic structure that lead to improved light harvesting. Using SM315 with the cobalt(II/III) redox shuttle resulted in dye-sensitized solar cells that exhibit a high open-circuit voltage VOC of 0.91 V, short-circuit current density JSC of 18.1 mA cm(-2), fill factor of 0.78 and a power conversion efficiency of 13%.
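
    The reported efficiency is consistent with the listed cell parameters, assuming the standard AM1.5 illumination of 100 mW cm-2:

        # Sketch: power conversion efficiency from Voc, Jsc and fill factor.
        V_oc, J_sc, FF, P_in = 0.91, 18.1, 0.78, 100.0   # V, mA cm-2, -, mW cm-2
        print(V_oc * J_sc * FF / P_in)                   # ~0.128, i.e. ~13% efficiency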

  12. Dye-sensitized solar cells with 13% efficiency achieved through the molecular engineering of porphyrin sensitizers

    NASA Astrophysics Data System (ADS)

    Mathew, Simon; Yella, Aswani; Gao, Peng; Humphry-Baker, Robin; Curchod, Basile F. E.; Ashari-Astani, Negar; Tavernelli, Ivano; Rothlisberger, Ursula; Nazeeruddin, Md. Khaja; Grätzel, Michael

    2014-03-01

    Dye-sensitized solar cells have gained widespread attention in recent years because of their low production costs, ease of fabrication and tunable optical properties, such as colour and transparency. Here, we report a molecularly engineered porphyrin dye, coded SM315, which features the prototypical structure of a donor-π-bridge-acceptor and both maximizes electrolyte compatibility and improves light-harvesting properties. Linear-response, time-dependent density functional theory was used to investigate the perturbations in the electronic structure that lead to improved light harvesting. Using SM315 with the cobalt(II/III) redox shuttle resulted in dye-sensitized solar cells that exhibit a high open-circuit voltage VOC of 0.91 V, short-circuit current density JSC of 18.1 mA cm-2, fill factor of 0.78 and a power conversion efficiency of 13%.

  13. Sensitivity to general and specific numerical features in typical achievers and children with mathematics learning disability.

    PubMed

    Rotem, Avital; Henik, Avishai

    2015-01-01

    We examined the development of sensitivity to general and specific numerical features in typical achievers and in 6th and 8th graders with mathematics learning disability (MLD), using two effects in mental multiplication: operand-relatedness (i.e., difficulty in avoiding errors that are related to the operands via a shared multiplication row) and decade-consistency (i.e., difficulty in avoiding errors that are operand related and also share a decade with the true result). Responses to decade-consistent products were quick but erroneous. In line with the processing sequence in adults, children first became sensitive to the general numerical feature of operand-relatedness (typical achievers--from 3rd grade; children with MLD in 8th grade) and only later to the specific feature of decade-consistency (typical achievers--from 4th grade, but only from 6th grade in a mature pattern). Implications of the numerical sensitivity in children with MLD are discussed.

  14. Optimization of NANOGrav's time allocation for maximum sensitivity to single sources

    SciTech Connect

    Christy, Brian; Anella, Ryan; Lommen, Andrea; Camuccio, Richard; Handzo, Emma; Finn, Lee Samuel

    2014-10-20

    Pulsar timing arrays (PTAs) are a collection of precisely timed millisecond pulsars (MSPs) that can search for gravitational waves (GWs) in the nanohertz frequency range by observing characteristic signatures in the timing residuals. The sensitivity of a PTA depends on the direction of the propagating GW source, the timing accuracy of the pulsars, and the allocation of the available observing time. The goal of this paper is to determine the optimal time allocation strategy among the MSPs in the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) for a single source of GW under a particular set of assumptions. We consider both an isotropic distribution of sources across the sky and a specific source in the Virgo cluster. This work improves on previous efforts by modeling the effect of intrinsic spin noise for each pulsar. We find that, in general, the array is optimized by maximizing time spent on the best-timed pulsars, with sensitivity improvements typically ranging from a factor of 1.5 to 4.
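
    A toy allocation exercise in the spirit of the optimization above, under an assumed figure of merit in which each pulsar contributes 1/(white_noise^2/T + red_noise^2); this is not the NANOGrav sensitivity model, only an illustration of how noise floors push observing time toward the best-timed pulsars.

        # Sketch: greedy allocation of observing time under a toy per-pulsar weight.
        white = [0.1, 0.3, 1.0]      # per-epoch timing noise for three pulsars (assumed)
        red = [0.05, 0.05, 0.05]     # irreducible spin-noise floor (assumed)
        T = [1e-6, 1e-6, 1e-6]       # allocated time, starting near zero

        def gain(i, dt=1.0):
            before = 1.0 / (white[i] ** 2 / T[i] + red[i] ** 2)
            after = 1.0 / (white[i] ** 2 / (T[i] + dt) + red[i] ** 2)
            return after - before

        for _ in range(100):                      # distribute 100 time units greedily
            T[max(range(3), key=gain)] += 1.0

        print([round(t) for t in T])              # time concentrates on the better-timed pulsars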

  15. Sensitivity of estuarine turbidity maximum to settling velocity, tidal mixing, and sediment supply

    USGS Publications Warehouse

    Warner, J.C.; Sherwood, C.R.; Geyer, W.R.; ,

    2007-01-01

    Keywords: estuarine turbidity maximum, numerical modeling, settling velocity, stratification. The spatial and temporal distribution of suspended material in an Estuarine Turbidity Maximum (ETM) is primarily controlled by particle settling velocity, tidal mixing, shear-stress thresholds for resuspension, and sediment supply. We vary these parameters in numerical experiments of an idealized two-dimensional (x-z) estuary to demonstrate their effects on the development and retention of particles in an ETM. Parameters varied are the settling velocity (0.01, 0.1, and 0.5 mm/s), tidal amplitude (a 0.4 m 12-hour tide and a 0.3 to 0.6 m 14-day spring-neap cycle), and sediment availability (spatial supply limited or unlimited; and temporal supply as a riverine pulse during spring vs. neap tide). Results indicate that particles with a low settling velocity are advected out of the estuary and particles with a high settling velocity provide little material transport to an ETM. Particles with an intermediate settling velocity develop an ETM with the greatest amount of material retained. For an unlimited supply of sediment the ETM and the limit of salt intrusion co-vary during the spring-neap cycle. The ETM migrates landward of the salt intrusion during spring tides and seaward during neap tides. For limited sediment supply the ETM does not respond as an erodible pool of sediment that advects landward and seaward with the salt front. The ETM is maintained seaward of the salt intrusion and controlled by the locus of sediment convergence in the bed. For temporal variability of sediment supplied from a riverine pulse, the ETM traps more sediment if the pulse encounters the salt intrusion at neap tides than during spring tides. © 2007 Elsevier B.V. All rights reserved.

  16. Response to Marie Paz Morales' "Influence of Culture and Language Sensitive Physics on Science Attitude Achievement"

    ERIC Educational Resources Information Center

    Cole, Mikel Walker

    2015-01-01

    This response to Marie Paz Morales' "Influence of culture and language sensitive physics on science attitude achievement" explores the ideas of culturally responsive pedagogy and critical literacy to examine some implications for culturally responsive science instruction implicit in the original manuscript. [For "Influence of…

  17. Cross-National Estimates of the Effects of Family Background on Student Achievement: A Sensitivity Analysis

    ERIC Educational Resources Information Center

    Nonoyama-Tarumi, Yuko

    2008-01-01

    This article uses the data from the Programme for International Student Assessment (PISA) 2000 to examine whether the influence of family background on educational achievement is sensitive to different measures of the family's socio-economic status (SES). The study finds that, when a multidimensional measure of SES is used, the family background…

  18. Does Sensitivity to Criticism Mediate the Relationship between Theory of Mind and Academic Achievement?

    ERIC Educational Resources Information Center

    Lecce, Serena; Caputi, Marcella; Hughes, Claire

    2011-01-01

    This study adds to the growing research on school outcomes associated with individual differences in preschoolers' theory of mind skills by considering whether "costs" of theory of mind (e.g., sensitivity to criticism) actually help to foster children's academic achievement. A group of 60 Italian children was tested during the last year of…

  19. The Development of Product Parity Sensitivity in Children with Mathematics Learning Disability and in Typical Achievers

    ERIC Educational Resources Information Center

    Rotem, Avital; Henik, Avishai

    2013-01-01

    Parity helps us determine whether an arithmetic equation is true or false. The current research examines the development of sensitivity to parity cues in multiplication in typically achieving (TA) children (grades 2, 3, 4 and 6) and in children with mathematics learning disabilities (MLD, grades 6 and 8), via a verification task. In TA children…

  20. Response to Marie Paz Morales' ``Influence of culture and language sensitive physics on science attitude achievement''

    NASA Astrophysics Data System (ADS)

    Cole, Mikel Walker

    2015-12-01

    This response to Marie Paz Morales' "Influence of culture and language sensitive physics on science attitude achievement" explores the ideas of culturally responsive pedagogy and critical literacy to examine some implications for culturally responsive science instruction implicit in the original manuscript.

  1. Relations between shyness-sensitivity and internalizing problems in Chinese children: moderating effects of academic achievement.

    PubMed

    Chen, Xinyin; Yang, Fan; Wang, Li

    2013-07-01

    Shy-sensitive children are likely to develop adjustment problems in today's urban China as the country has evolved into an increasingly competitive, market-oriented society. The main purpose of this one-year longitudinal study was to examine the moderating effects of academic achievement on relations between shyness-sensitivity and later internalizing problems in Chinese children. A sample of 1171 school-age children (591 boys, 580 girls) in China, initially at the age of 9 years, participated in the study. Data on shyness, academic achievement, and internalizing problems were collected from multiple sources including peer evaluations, teacher ratings, self-reports, and school records. It was found that shyness positively and uniquely predicted later loneliness, depression, and teacher-rated internalizing problems, with the stability effect controlled, for low-achieving children, but not for high-achieving children. The results indicate that, consistent with the stress buffering model, academic achievement may be a buffering factor that serves to protect shy-sensitive children from developing psychological problems. PMID:23318940
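
    A hedged sketch of how a moderating effect is typically tested, by adding a shyness-by-achievement interaction term to a regression (synthetic data; not the study's longitudinal models):

        # Sketch: moderation as an interaction term in an ordinary regression (synthetic data).
        import numpy as np

        rng = np.random.default_rng(1)
        n = 500
        shy = rng.normal(0, 1, n)
        ach = rng.normal(0, 1, n)
        # Simulated outcome: shyness predicts problems mainly when achievement is low.
        internalizing = 0.4 * shy - 0.2 * ach - 0.3 * shy * ach + rng.normal(0, 1, n)

        X = np.column_stack([np.ones(n), shy, ach, shy * ach])
        beta, *_ = np.linalg.lstsq(X, internalizing, rcond=None)
        print(beta)   # the negative interaction coefficient (~-0.3) reflects the moderation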

  2. Improved PID controller design for unstable time delay processes based on direct synthesis method and maximum sensitivity

    NASA Astrophysics Data System (ADS)

    Vanavil, B.; Krishna Chaitanya, K.; Seshagiri Rao, A.

    2015-06-01

    In this paper, a proportional-integral-derivative (PID) controller in series with a lead-lag filter is designed for control of open-loop unstable processes with time delay, based on the direct synthesis method. The performance of the designed controllers is studied on various unstable processes. Set-point weighting is considered to reduce undesirable overshoot. The proposed scheme has only one tuning parameter, and systematic guidelines are provided for selecting it based on the peak value of the sensitivity function (Ms). Robustness analysis has been carried out based on the sensitivity and complementary sensitivity functions. Nominal and robust control performance is achieved with the proposed method, and improved closed-loop performance is obtained compared with recently reported methods in the literature.
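
    As a rough illustration of the tuning guideline summarised above, the sketch below (not taken from the paper) evaluates the peak of the sensitivity function, Ms = max|1/(1 + L(jw))|, on a frequency grid for an assumed unstable first-order-plus-dead-time process under an assumed PID controller; all numerical values are placeholders, and Ms is only meaningful once closed-loop stability has been confirmed.

      import numpy as np

      # Assumed unstable first-order-plus-dead-time process and PID settings;
      # these numbers are illustrative placeholders, not taken from the paper.
      K, tau, theta = 1.0, 5.0, 0.5        # gain, unstable time constant, dead time
      Kc, Ti, Td = 4.0, 6.0, 0.4           # PID gains (assumed)

      w = np.logspace(-3, 2, 2000)         # frequency grid [rad/s]
      s = 1j * w

      P = K * np.exp(-theta * s) / (tau * s - 1.0)     # process model
      C = Kc * (1.0 + 1.0 / (Ti * s) + Td * s)         # ideal PID controller
      L = P * C                                        # loop transfer function

      S = 1.0 / (1.0 + L)                  # sensitivity function
      T = L / (1.0 + L)                    # complementary sensitivity function

      Ms, Mt = np.abs(S).max(), np.abs(T).max()
      # Ms in roughly the 1.2-2.0 range is a common robustness target; note that
      # Ms is only meaningful if the closed loop is stable, which should be
      # confirmed separately (e.g. with a Nyquist or step-response check).
      print(f"Ms = {Ms:.2f}, Mt = {Mt:.2f}")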

  3. 40 CFR 63.55 - Maximum achievable control technology (MACT) determinations for affected sources subject to case...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Requirements for Control Technology Determinations for Major Sources in Accordance With Clean Air Act Sections... quality health and environmental impacts and energy requirements, determines is achievable by affected... and any non-air quality health and environmental impacts and energy requirements, determines...

  4. Optimization Correction Strength Using Contra Bending Technique without Anterior Release Procedure to Achieve Maximum Correction on Severe Adult Idiopathic Scoliosis

    PubMed Central

    Rahyussalim, Ahmad Jabir; Saleh, Ifran; Purnaning, Dyah; Kurniawati, Tri

    2016-01-01

    Adult scoliosis is defined as a spinal deformity in a skeletally mature patient with a Cobb angle of more than 10 degrees in the coronal plane. A posterior-only approach with rod and screw corrective manipulation, reinforced by contra bending manipulation, achieves correction similar to that obtained by the conventional combined anterior release and posterior approach, while avoiding the complications related to the thoracic approach. We report a case of a 25-year-old male with adult idiopathic scoliosis and a double curve, consisting of a main thoracic curve of 150 degrees and a lumbar curve of 89 degrees. The curve was treated with a direct contra bending posterior approach using the rod and screw corrective manipulation technique to achieve optimal correction. After surgery the main thoracic Cobb angle was 83 degrees and the lumbar Cobb angle 40 degrees, with a 5-day length of stay and less than 800 mL of blood loss during surgery. At two months after surgery the patient had no complaints and had returned to normal activity with good functional status. PMID:27064801

  5. Which Tibial Tray Design Achieves Maximum Coverage and Ideal Rotation: Anatomic, Symmetric, or Asymmetric? An MRI-based study.

    PubMed

    Stulberg, S David; Goyal, Nitin

    2015-10-01

    Two goals of tibial tray placement in TKA are to maximize coverage and establish proper rotation. Our purpose was to utilize MRI information obtained as part of PSI planning to determine the impact of tibial tray design on the relationship between coverage and rotation. MR images for 100 consecutive knees were uploaded into PSI software. Preoperative planning software was used to evaluate 3 different tray designs: anatomic, symmetric, and asymmetric. Approximately equally good coverage was achieved with all three trays. However, the anatomic tray required less malrotation than the symmetric/asymmetric trays (0.3° vs 3.0°/2.4°; P < 0.001), with a higher proportion of cases within 5° of neutral (97% vs 73%/77%; P < 0.001). In this study, the anatomic tibial tray optimized the relationship between coverage and rotation.

  6. [ADVANCE-ON Trial; How to Achieve Maximum Reduction of Mortality in Patients With Type 2 Diabetes].

    PubMed

    Kanorskiĭ, S G

    2015-01-01

    Of 10,261 patients with type 2 diabetes who survived to the end of the randomized ADVANCE trial, 83% were included in the ADVANCE-ON project for observation for 6 years. The between-group difference in blood pressure achieved during 4.5 years of in-trial treatment with the fixed perindopril/indapamide combination quickly vanished, but the significant decrease in total and cardiovascular mortality in the group treated with this combination for 4.5 years was sustained during the 6 years of post-trial follow-up. The results may be related to a gradually weakening protective effect of the perindopril/indapamide combination on the cardiovascular system, and they indicate the expedience of long-term use of this antihypertensive therapy for maximal lowering of mortality in patients with diabetes. PMID:26164995

  7. A luciferin analogue generating near-infrared bioluminescence achieves highly sensitive deep-tissue imaging.

    PubMed

    Kuchimaru, Takahiro; Iwano, Satoshi; Kiyama, Masahiro; Mitsumata, Shun; Kadonosono, Tetsuya; Niwa, Haruki; Maki, Shojiro; Kizaka-Kondoh, Shinae

    2016-01-01

    In preclinical cancer research, bioluminescence imaging with firefly luciferase and D-luciferin has become a standard to monitor biological processes both in vitro and in vivo. However, the emission maximum (λmax) of bioluminescence produced by D-luciferin is 562 nm where light is not highly penetrable in biological tissues. This emphasizes a need for developing a red-shifted bioluminescence imaging system to improve detection sensitivity of targets in deep tissue. Here we characterize the bioluminescent properties of the newly synthesized luciferin analogue, AkaLumine-HCl. The bioluminescence produced by AkaLumine-HCl in reactions with native firefly luciferase is in the near-infrared wavelength ranges (λmax=677 nm), and yields significantly increased target-detection sensitivity from deep tissues with maximal signals attained at very low concentrations, as compared with D-luciferin and emerging synthetic luciferin CycLuc1. These characteristics offer a more sensitive and accurate method for non-invasive bioluminescence imaging with native firefly luciferase in various animal models. PMID:27297211

  8. A luciferin analogue generating near-infrared bioluminescence achieves highly sensitive deep-tissue imaging.

    PubMed

    Kuchimaru, Takahiro; Iwano, Satoshi; Kiyama, Masahiro; Mitsumata, Shun; Kadonosono, Tetsuya; Niwa, Haruki; Maki, Shojiro; Kizaka-Kondoh, Shinae

    2016-06-14

    In preclinical cancer research, bioluminescence imaging with firefly luciferase and D-luciferin has become a standard to monitor biological processes both in vitro and in vivo. However, the emission maximum (λmax) of bioluminescence produced by D-luciferin is 562 nm where light is not highly penetrable in biological tissues. This emphasizes a need for developing a red-shifted bioluminescence imaging system to improve detection sensitivity of targets in deep tissue. Here we characterize the bioluminescent properties of the newly synthesized luciferin analogue, AkaLumine-HCl. The bioluminescence produced by AkaLumine-HCl in reactions with native firefly luciferase is in the near-infrared wavelength ranges (λmax=677 nm), and yields significantly increased target-detection sensitivity from deep tissues with maximal signals attained at very low concentrations, as compared with D-luciferin and emerging synthetic luciferin CycLuc1. These characteristics offer a more sensitive and accurate method for non-invasive bioluminescence imaging with native firefly luciferase in various animal models.

  9. A luciferin analogue generating near-infrared bioluminescence achieves highly sensitive deep-tissue imaging

    PubMed Central

    Kuchimaru, Takahiro; Iwano, Satoshi; Kiyama, Masahiro; Mitsumata, Shun; Kadonosono, Tetsuya; Niwa, Haruki; Maki, Shojiro; Kizaka-Kondoh, Shinae

    2016-01-01

    In preclinical cancer research, bioluminescence imaging with firefly luciferase and D-luciferin has become a standard to monitor biological processes both in vitro and in vivo. However, the emission maximum (λmax) of bioluminescence produced by D-luciferin is 562 nm where light is not highly penetrable in biological tissues. This emphasizes a need for developing a red-shifted bioluminescence imaging system to improve detection sensitivity of targets in deep tissue. Here we characterize the bioluminescent properties of the newly synthesized luciferin analogue, AkaLumine-HCl. The bioluminescence produced by AkaLumine-HCl in reactions with native firefly luciferase is in the near-infrared wavelength ranges (λmax=677 nm), and yields significantly increased target-detection sensitivity from deep tissues with maximal signals attained at very low concentrations, as compared with D-luciferin and emerging synthetic luciferin CycLuc1. These characteristics offer a more sensitive and accurate method for non-invasive bioluminescence imaging with native firefly luciferase in various animal models. PMID:27297211

  10. Radiometric Sensitivity to Soil Moisture at 1.4 GHz Through a Corn Crop at Maximum Biomass

    NASA Astrophysics Data System (ADS)

    Hornbuckle, B. K.; England, A. W.

    2004-05-01

    It is generally assumed that brightness at 1.4 GHz is usefully sensitive to soil moisture only up to a certain level of canopy biomass. We have found that this is not true for a corn canopy. We accomplished this by analyzing time-series measurements of 1.4 GHz brightness, soil moisture, and relevant micrometeorology, all of high temporal resolution, made on the plot-scale for a corn canopy at maximum biomass. Our approach was unique: remote sensing studies typically replicate satellite measurements, in which discrete measurements are made at isolated points in time. Our method of integrating nearly continuous observations of brightness, micrometeorology, and soil state revealed how these variables change together as a result of their interdependencies. This method can be used to identify subtle physical processes that might otherwise be hard to find, much like using the context of a sentence to decipher an unknown word as opposed to only examining the word itself. One of the notable features of our experiment was our measurement of surface soil moisture. Buried TDR instruments, calibrated in-situ with periodic, hand-held impedance probe measurements, produced continuous observations of 0-3 cm and 3-6 cm soil water content. The impedance probe measurements themselves were calibrated with gravimetric samples and bulk density measurements. Two different temperature corrections were applied to the TDR measurements. Each method fit the data well and it was impossible to determine the true temperature correction. Both temperature corrections were considered in the final analysis. We found that there is useful radiometric sensitivity to soil moisture at 1.4 GHz through a corn canopy of vegetation column density 8 kg m-2 (water column density of 6.3 kg m-2). The magnitude of the measured sensitivity of horizontally-polarized brightness temperature to the 0-3 cm volumetric soil water content was at least 1.5 K per 0.01 m3 m-3, and could have been as high as 2.5 K per 0.01 m3 m-3.

  11. Sensitivity of palaeotidal models of the northwest European shelf seas to glacial isostatic adjustment since the Last Glacial Maximum

    NASA Astrophysics Data System (ADS)

    Ward, Sophie L.; Neill, Simon P.; Scourse, James D.; Bradley, Sarah L.; Uehara, Katsuto

    2016-11-01

    The spatial and temporal distribution of relative sea-level change over the northwest European shelf seas has varied considerably since the Last Glacial Maximum, due to eustatic sea-level rise and a complex isostatic response to deglaciation of both near- and far-field ice sheets. Because of the complex pattern of relative sea level changes, the region is an ideal focus for modelling the impact of significant sea-level change on shelf sea tidal dynamics. Changes in tidal dynamics influence tidal range, the location of tidal mixing fronts, dissipation of tidal energy, shelf sea biogeochemistry and sediment transport pathways. Significant advancements in glacial isostatic adjustment (GIA) modelling of the region have been made in recent years, and earlier palaeotidal models of the northwest European shelf seas were developed using output from less well-constrained GIA models as input to generate palaeobathymetric grids. We use the most up-to-date and well-constrained GIA model for the region as palaeotopographic input for a new high resolution, three-dimensional tidal model (ROMS) of the northwest European shelf seas. With focus on model output for 1 ka time slices from the Last Glacial Maximum (taken as being 21 ka BP) to present day, we demonstrate that spatial and temporal changes in simulated tidal dynamics are very sensitive to relative sea-level distribution. The new high resolution palaeotidal model is considered a significant improvement on previous depth-averaged palaeotidal models, in particular where the outputs are to be used in sediment transport studies, where consideration of the near-bed stress is critical, and for constraining sea level index points.

  12. TiO2 dye sensitized solar cell (DSSC): linear relationship of maximum power point and anthocyanin concentration

    NASA Astrophysics Data System (ADS)

    Ahmadian, Radin

    2010-09-01

    This study investigated the relationship between anthocyanin concentration from different organic fruit species and the output voltage and current of a TiO2 dye-sensitized solar cell (DSSC), and hypothesized that fruits with greater anthocyanin concentration produce a higher maximum power point (MPP), which would lead to higher current and voltage. Anthocyanin dye solutions were made by crushing a group of fresh fruits with different anthocyanin contents in 2 mL of de-ionized water and filtering. Using these test fruit dyes, multiple DSSCs were assembled such that light enters through the TiO2 side of the cell. The full current-voltage (I-V) co-variations were measured using a 500 Ω potentiometer as a variable load. Point-by-point current and voltage data pairs were measured at various incremental resistance values. The maximum power point (MPP) generated by the solar cell was defined as the dependent variable and the anthocyanin concentration in the fruit used in the DSSC as the independent variable. A regression model was used to investigate the linear relationship between the study variables. Regression analysis showed a significant linear relationship between MPP and anthocyanin concentration with a p-value of 0.007. Fruits like blueberry and black raspberry with the highest anthocyanin content generated higher MPP. In a DSSC, a linear model may predict MPP based on the anthocyanin concentration. This model is a first step toward finding organic anthocyanin sources in nature with the highest dye concentration to generate energy.
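
    The two computational steps the abstract describes, locating the maximum power point from point-by-point I-V data and fitting a straight line of MPP against anthocyanin concentration, can be sketched as below; the I-V curve and the concentration/MPP values are invented for illustration and are not the study's measurements.

      import numpy as np

      def max_power_point(voltage, current):
          """Return (V, I, P) at the maximum of P = V*I from point-by-point I-V data."""
          voltage, current = np.asarray(voltage), np.asarray(current)
          power = voltage * current
          k = int(np.argmax(power))
          return voltage[k], current[k], power[k]

      # Toy I-V sweep for one cell (V in volts, I in mA); diode-like shape, invented.
      v = np.linspace(0.0, 0.45, 10)
      i = 1.2 * (1.0 - np.exp((v - 0.47) / 0.05))
      v_mpp, i_mpp, p_mpp = max_power_point(v, i)

      # Invented concentration/MPP pairs for several fruit dyes (not the study's data).
      anthocyanin = np.array([25.0, 60.0, 120.0, 245.0, 365.0])   # mg per 100 g
      mpp = np.array([0.11, 0.19, 0.33, 0.58, 0.86])              # mW

      # Ordinary least-squares line: MPP = a * concentration + b
      a, b = np.polyfit(anthocyanin, mpp, deg=1)
      r = np.corrcoef(anthocyanin, mpp)[0, 1]
      print(f"single-cell MPP = {p_mpp:.3f} mW at {v_mpp:.2f} V")
      print(f"slope = {a:.4f} mW per unit concentration, intercept = {b:.3f} mW, r = {r:.3f}")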

  13. Using ultrahigh sensitive optical microangiography to achieve comprehensive depth resolved microvasculature mapping for human retina

    NASA Astrophysics Data System (ADS)

    An, Lin; Shen, Tueng T.; Wang, Ruikang K.

    2011-10-01

    This paper presents comprehensive, depth-resolved retinal microvasculature images of the human retina achieved by a newly developed ultrahigh sensitive optical microangiography (UHS-OMAG) system. Because of its high flow sensitivity, UHS-OMAG is much more susceptible than the traditional OMAG system to tissue motion caused by involuntary movement of the human eye and head. To mitigate these motion artifacts in the final imaging results, we propose a new phase compensation algorithm in which the traditional phase-compensation algorithm is applied repeatedly to efficiently minimize the motion artifacts. Comparatively, this new algorithm demonstrates at least 8 to 25 times higher motion tolerance, critical for the UHS-OMAG system to achieve retinal microvasculature images of high quality. Furthermore, the new UHS-OMAG system employs a high-speed line-scan CMOS camera (240 kHz A-line scan rate) to capture 500 A-lines for one B-frame at a 400 Hz frame rate. With this system, we performed a series of in vivo experiments to visualize the retinal microvasculature in humans. Two featured imaging protocols are utilized. The first has a low lateral resolution (16 μm) and a wide field of view (4 × 3 mm2 with a single scan and 7 × 8 mm2 for multiple scans), while the second has a high lateral resolution (5 μm) and a narrow field of view (1.5 × 1.2 mm2 with a single scan). The excellent imaging performance delivered by our system suggests that UHS-OMAG can be a promising noninvasive alternative to current clinical retinal microvasculature imaging techniques for the diagnosis of eye diseases with significant vascular involvement, such as diabetic retinopathy and age-related macular degeneration.

  14. Modified surface loading process for achieving improved performance of the quantum dot-sensitized solar cells

    NASA Astrophysics Data System (ADS)

    Liu, Feng; Jin, Zhongxiu; Zhu, Jun; Xu, Yafeng; Zhou, Li; Dai, Songyuan

    2016-06-01

    Achieving high surface coverage of colloidal quantum dots (QDs) on TiO2 films has been challenging for quantum dot-sensitized solar cells (QDSCs). Herein, a general surface engineering approach is proposed to increase the loading of these QDs. It was found that an S2- treatment/QD re-uptake process can significantly improve the attachment of the QDs to TiO2 films. The surface concentration of the QDs was improved by ∼60%, which in turn greatly enhances light absorption and decreases carrier recombination in QDSCs. The resulting QDSCs with optimized QD loading exhibit a power conversion efficiency of 3.66%, 83% higher than those fabricated with standard procedures.

  15. A Study to Assess the Achievement Motivation of Higher Secondary Students in Relation to Their Noise Sensitivity

    ERIC Educational Resources Information Center

    Latha, Prema

    2014-01-01

    Disturbing sounds are often referred to as noise and, if extreme enough in degree, intensity, or frequency, as noise pollution. Achievement here refers to a change in study behavior in relation to noise sensitivity, and to learning in the educational sense through changed responses to certain types of stimuli like…

  16. Relationship of Gender and Academic Achievement to Finnish Students' Intercultural Sensitivity

    ERIC Educational Resources Information Center

    Holm, Kristiina; Nokelainen, Petri; Tirri, Kirsi

    2009-01-01

    This study examined the intercultural sensitivity of Finnish 12-16-year-old secondary school students (N=549) with a 23-item Intercultural Sensitivity Scale Questionnaire (ICSSQ). The ICSSQ is based on Bennett's (1993) Developmental Model of Intercultural Sensitivity (DMIS), which is a conceptual tool to situate certain reactions towards cultural…

  17. Social class and academic achievement in college: the interplay of rejection sensitivity and entity beliefs.

    PubMed

    Rheinschmidt, Michelle L; Mendoza-Denton, Rodolfo

    2014-07-01

    Undergraduates, especially those from lower income backgrounds, may perceive their social class background as different or disadvantaged relative to that of peers and worry about negative social treatment. We hypothesized that concerns about discrimination based on one's social class (i.e., class-based rejection sensitivity or RS-class) would be damaging to undergraduates' achievement outcomes particularly among entity theorists, who perceive their personal characteristics as fixed. We reasoned that a perceived capacity for personal growth and change, characteristic of incremental theorists, would make the pursuit of a college degree and upward mobility seem more worthwhile and attainable. We found evidence across 3 studies that dispositionally held and experimentally primed entity (vs. incremental) beliefs predicted college academic performance as a function of RS-class. Studies 1a and 1b documented that high levels of both entity beliefs and RS-class predicted lower self-reported and official grades, respectively, among undergraduates from socioeconomically diverse backgrounds. In Study 2, high entity beliefs and RS-class at matriculation predicted decreased year-end official grades among lower class Latino students. Study 3 established the causal relationship of entity (vs. incremental) beliefs on academic test performance as a function of RS-class. We observed worse test performance with higher RS-class levels following an entity (vs. incremental) prime, an effect driven by lower income students. Findings from a 4th study suggest that entity theorists with RS-class concerns tend to believe less in upward mobility and, following academic setbacks, are prone to personal attributions of failure, as well as hopelessness. Implications for education and intervention are discussed.

  18. Probabilistic tsunami hazard assessment for Makran considering recently suggested larger maximum magnitudes and sensitivity analysis for GNSS-based early warning

    NASA Astrophysics Data System (ADS)

    Zamora, N.; Hoechner, A.; Babeyko, A. Y.

    2014-12-01

    Iran and Pakistan are countries frequently affected by destructive earthquakes: for instance, the magnitude 6.6 Bam earthquake in 2003 in Iran with about 30,000 casualties, or the magnitude 7.6 Kashmir earthquake in 2005 in Pakistan with about 80,000 casualties. Both events took place inland, but in terms of magnitude, even significantly larger events can be expected to happen offshore, at the Makran subduction zone. This small subduction zone is seismically rather quiescent, yet a tsunami caused by a thrust event in 1945 (Balochistan earthquake) led to about 4000 casualties. Nowadays, the coastal regions are more densely populated and vulnerable to similar events. Furthermore, some recent publications discuss the possibility of rather rare, huge magnitude-9 events at the Makran subduction zone. We analyze the seismicity at the subduction plate interface and generate various synthetic earthquake catalogs spanning 100,000 years. All the events are projected onto the plate interface using scaling relations, and a tsunami model is run for every scenario. The tsunami hazard along the coast is computed and presented in the form of annual probability of exceedance, probabilistic tsunami height for different time periods, and other measures. We show how the hazard responds to variation of the Gutenberg-Richter parameters and maximum magnitudes. We model the historic Balochistan event and its effect in terms of coastal wave heights. Finally, we show how effective tsunami early warning could be achieved with an array of high-precision real-time GNSS (Global Navigation Satellite System) receivers along the coast, by applying this approach to the 1945 event and performing a sensitivity analysis.
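
    A minimal sketch of the catalog-to-hazard chain described above is given below; the Gutenberg-Richter parameters, the event rate and the magnitude-to-wave-height scaling are all assumed placeholders (the study runs a numerical tsunami simulation for every scenario), so the numbers only illustrate how an annual probability of exceedance is formed from a long synthetic catalog.

      import numpy as np

      rng = np.random.default_rng(0)

      # Assumed Gutenberg-Richter parameters for the plate interface (placeholders).
      b = 1.0                  # b-value
      Mmin, Mmax = 7.0, 9.0    # magnitude range considered tsunamigenic
      rate_min = 0.05          # assumed annual rate of events with M >= Mmin
      years = 100_000          # synthetic catalog length, as in the abstract

      # Poissonian event count and magnitudes from a truncated G-R distribution.
      n_events = rng.poisson(rate_min * years)
      beta = b * np.log(10.0)
      u = rng.random(n_events)
      M = Mmin - np.log(1.0 - u * (1.0 - np.exp(-beta * (Mmax - Mmin)))) / beta

      # Placeholder for the tsunami model: an assumed scaling of coastal wave
      # height with magnitude plus scatter stands in for the per-scenario simulation.
      height = 10.0 ** (0.5 * (M - 7.0) - 0.3 + 0.2 * rng.standard_normal(n_events))

      # Annual probability of exceeding a set of coastal wave heights.
      thresholds = np.array([0.5, 1.0, 2.0, 5.0])   # metres
      annual_rate = np.array([(height > h).sum() / years for h in thresholds])
      annual_prob = 1.0 - np.exp(-annual_rate)      # Poisson exceedance probability
      for h, p in zip(thresholds, annual_prob):
          print(f"P(height > {h:.1f} m in any one year) ~ {p:.4f}")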

  19. Systematic approach to determination of maximum achievable capture capacity via leaching and carbonation processes for alkaline steelmaking wastes in a rotating packed bed.

    PubMed

    Pan, Shu-Yuan; Chiang, Pen-Chi; Chen, Yi-Hung; Chen, Chun-Da; Lin, Hsun-Yu; Chang, E-E

    2013-01-01

    Accelerated carbonation of basic oxygen furnace slag (BOFS) coupled with cold-rolling wastewater (CRW) was performed in a rotating packed bed (RPB) as a promising process for both CO2 fixation and wastewater treatment. The maximum achievable capture capacity (MACC) via leaching and carbonation processes for BOFS in an RPB was systematically determined throughout this study. The leaching behavior of various metal ions from the BOFS into the CRW was investigated by a kinetic model. In addition, quantitative X-ray diffraction (QXRD) using the Rietveld method was carried out to determine the process chemistry of carbonation of BOFS with CRW in an RPB. According to the QXRD results, the major mineral phases reacting with CO2 in BOFS were Ca(OH)2, Ca2(HSiO4)(OH), CaSiO3, and Ca2Fe1.04Al0.986O5. Meanwhile, the carbonation product was identified as calcite according to the observations of SEM, XEDS, and mappings. Furthermore, the MACC of the lab-scale RPB process was determined by balancing the carbonation conversion and energy consumption. In that case, the overall energy consumption, including grinding, pumping, stirring, and rotating processes, was estimated to be 707 kWh/t-CO2. It was thus concluded that CO2 capture by accelerated carbonation of BOFS could be effectively and efficiently performed by coutilizing with CRW in an RPB. PMID:24236803

  20. Systematic approach to determination of maximum achievable capture capacity via leaching and carbonation processes for alkaline steelmaking wastes in a rotating packed bed.

    PubMed

    Pan, Shu-Yuan; Chiang, Pen-Chi; Chen, Yi-Hung; Chen, Chun-Da; Lin, Hsun-Yu; Chang, E-E

    2013-01-01

    Accelerated carbonation of basic oxygen furnace slag (BOFS) coupled with cold-rolling wastewater (CRW) was performed in a rotating packed bed (RPB) as a promising process for both CO2 fixation and wastewater treatment. The maximum achievable capture capacity (MACC) via leaching and carbonation processes for BOFS in an RPB was systematically determined throughout this study. The leaching behavior of various metal ions from the BOFS into the CRW was investigated by a kinetic model. In addition, quantitative X-ray diffraction (QXRD) using the Rietveld method was carried out to determine the process chemistry of carbonation of BOFS with CRW in an RPB. According to the QXRD results, the major mineral phases reacting with CO2 in BOFS were Ca(OH)2, Ca2(HSiO4)(OH), CaSiO3, and Ca2Fe1.04Al0.986O5. Meanwhile, the carbonation product was identified as calcite according to the observations of SEM, XEDS, and mappings. Furthermore, the MACC of the lab-scale RPB process was determined by balancing the carbonation conversion and energy consumption. In that case, the overall energy consumption, including grinding, pumping, stirring, and rotating processes, was estimated to be 707 kWh/t-CO2. It was thus concluded that CO2 capture by accelerated carbonation of BOFS could be effectively and efficiently performed by coutilizing with CRW in an RPB.

  1. Rice yields in tropical/subtropical Asia exhibit large but opposing sensitivities to minimum and maximum temperatures

    PubMed Central

    Welch, Jarrod R.; Vincent, Jeffrey R.; Auffhammer, Maximilian; Moya, Piedad F.; Dobermann, Achim; Dawe, David

    2010-01-01

    Data from farmer-managed fields have not been used previously to disentangle the impacts of daily minimum and maximum temperatures and solar radiation on rice yields in tropical/subtropical Asia. We used a multiple regression model to analyze data from 227 intensively managed irrigated rice farms in six important rice-producing countries. The farm-level detail, observed over multiple growing seasons, enabled us to construct farm-specific weather variables, control for unobserved factors that either were unique to each farm but did not vary over time or were common to all farms at a given site but varied by season and year, and obtain more precise estimates by including farm- and site-specific economic variables. Temperature and radiation had statistically significant impacts during both the vegetative and ripening phases of the rice plant. Higher minimum temperature reduced yield, whereas higher maximum temperature raised it; radiation impact varied by growth phase. Combined, these effects imply that yield at most sites would have grown more rapidly during the high-yielding season but less rapidly during the low-yielding season if observed temperature and radiation trends at the end of the 20th century had not occurred, with temperature trends being more influential. Looking ahead, they imply a net negative impact on yield from moderate warming in coming decades. Beyond that, the impact would likely become more negative, because prior research indicates that the impact of maximum temperature becomes negative at higher levels. Diurnal temperature variation must be considered when investigating the impacts of climate change on irrigated rice in Asia. PMID:20696908

  2. Experimental study of the maximum resolution and packing density achievable in sintered and non-sintered binder-jet 3D printed steel microchannels

    SciTech Connect

    Elliott, Amy M; Mehdizadeh Momen, Ayyoub; Benedict, Michael; Kiggans Jr, James O

    2015-01-01

    Developing high-resolution 3D-printed metallic microchannels is a challenge, especially when there is an essential need for high packing density of the primary material. While high packing density can be achieved by heating the structure to the sintering temperature, some heat-sensitive applications require other strategies to improve the packing density of primary materials. In this study, the goal is to develop microchannels with high green or packing density on the scale of 2-300 microns that have a robust mechanical structure. Binder-jet 3D printing is an additive manufacturing process in which droplets of binder are deposited via inkjet into a bed of powder. By repeatedly spreading thin layers of powder and depositing binder into the appropriate 2D profiles, complex 3D objects can be created one layer at a time. Microchannels with features on the order of 500 microns were fabricated via binder jetting of steel powder and then sintered and/or infiltrated with a secondary material. The average particle size of the steel powder was varied along with the droplet volume of the inkjet-deposited binder. The resolution of the process, the packing density of the primary material, the resulting feature sizes of the microchannels, and the overall microchannel quality were characterized as a function of particle size distribution, droplet size and heat treatment temperature.

  3. Optimizing Bi2O3 and TiO2 to achieve the maximum non-linear electrical property of ZnO low voltage varistor

    PubMed Central

    2013-01-01

    Background In the fabrication of ZnO-based low voltage varistors, Bi2O3 and TiO2 have been used as the varistor former and the grain-growth enhancer, respectively. The molar ratio of these additives is therefore quite important in the fabrication. In this paper, modeling and optimization of Bi2O3 and TiO2 were carried out by response surface methodology to achieve maximized electrical properties. The fabrication was planned by central composite design using two variables and one response. To obtain actual responses, the design was performed in the laboratory using conventional ceramic fabrication methods. The actual responses were fitted to a valid second-order algebraic polynomial equation, and a quadratic model was suggested by response surface methodology. The model was validated by analysis of variance, which provided several pieces of evidence such as a high F-value (153.6), a very low P-value (<0.0001), an adjusted R-squared of 0.985 and a predicted R-squared of 0.947. Moreover, the lack of fit was not significant, which indicates that the model fit the data adequately. Results The model located the optimum of the additives in the design space using three-dimensional surface plots. At the optimum condition, the molar ratios of Bi2O3 and TiO2 were both around 1.25, which maximized the nonlinear coefficient at around 20. Moreover, the model predicted the optimum amount of the additives under the desired conditions of minimum standard error (0.35) and maximum nonlinearity (20.03), with the molar ratios of Bi2O3 (1.24 mol%) and TiO2 (1.27 mol%) in range. This condition was tested by further experiments for confirmation. As the experimental results showed, the obtained value of the non-linearity, 21.6, was quite close to the model prediction. Conclusion Response surface methodology was successful for modeling and optimizing additives such as Bi2O3 and TiO2 in a ZnO-based low voltage varistor to achieve maximized non-linearity. PMID
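
    The response-surface step described above, fitting a full second-order model in the two molar ratios and reading off the maximizing condition, can be sketched as follows; the design points and nonlinear-coefficient values are invented for illustration and do not reproduce the paper's data or its statistical software.

      import numpy as np

      # Invented design points (mol% Bi2O3, mol% TiO2) and nonlinear-coefficient
      # responses; purely illustrative, not the paper's measurements.
      X = np.array([[0.8, 0.8], [0.8, 1.6], [1.6, 0.8], [1.6, 1.6],
                    [0.6, 1.2], [1.8, 1.2], [1.2, 0.6], [1.2, 1.8], [1.2, 1.2]])
      y = np.array([14.1, 15.8, 15.2, 16.0, 13.5, 15.1, 14.4, 16.2, 19.8])

      def design_matrix(x1, x2):
          # Full second-order (quadratic) response-surface model in two factors.
          return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1*x2])

      A = design_matrix(X[:, 0], X[:, 1])
      coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares fit

      # Locate the predicted optimum on a fine grid over the studied region.
      g1, g2 = np.meshgrid(np.linspace(0.6, 1.8, 241), np.linspace(0.6, 1.8, 241))
      pred = design_matrix(g1.ravel(), g2.ravel()) @ coef
      k = int(np.argmax(pred))
      print(f"predicted optimum: Bi2O3 = {g1.ravel()[k]:.2f} mol%, "
            f"TiO2 = {g2.ravel()[k]:.2f} mol%, nonlinear coefficient ~ {pred[k]:.1f}")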

  4. Reward sensitivity: issues of measurement, and achieving consilience between human and animal phenotypes.

    PubMed

    Stephens, David N; Duka, Theodora; Crombag, Hans S; Cunningham, Christopher L; Heilig, Markus; Crabbe, John C

    2010-04-01

    Reward is a concept fundamental to discussions of drug abuse and addiction. The idea that altered sensitivity to either drug-reward, or to rewards in general, contributes to, or results from, drug-taking is a common theme in several theories of addiction. However, the concept of reward is problematic in that it is used to refer to apparently different behavioural phenomena, and even to diverse neurobiological processes (reward pathways). Whether these different phenomena are different behavioural expressions of a common underlying process is not established, and much research suggests that there may be only loose relationships among different aspects of reward. Measures of rewarding effects of drugs in humans often depend upon subjective reports. In animal studies, such insights are not available, and behavioural measures must be relied upon to infer rewarding effects of drugs or other events. In such animal studies, but also in many human methods established to objectify measures of reward, many other factors contribute to the behaviour being studied. For that reason, studying the biological (including genetic) bases of performance of tasks that ostensibly measure reward cannot provide unequivocal answers. The current overview outlines the strengths and weaknesses of current approaches that hinder the conciliation of cross-species studies of the genetics of reward sensitivity and the dysregulation of reward processes by drugs of abuse. Some suggestions are made as to how human and animal studies may be made to address more closely homologous behaviours, even if those processes are only partly able to isolate 'reward' from other factors contributing to behavioural output.

  5. Sensitivity and noise in GC-MS: Achieving low limits of detection for difficult analytes

    NASA Astrophysics Data System (ADS)

    Fialkov, Alexander B.; Steiner, Urs; Lehotay, Steven J.; Amirav, Aviv

    2007-01-01

    Gas chromatography-mass spectrometry (GC-MS) instrument limit of detection (LOD) is typically listed by major vendors as that of octafluoronaphthalene (OFN). Most current GC-MS instruments can achieve LODs in the low femtogram range. However, GC-MS LODs for realistic analytes in actual samples are often a few orders of magnitude higher than OFN's. Users seldom encounter 1 pg LOD in the single ion monitoring mode in their applications. We define this detectability difference as the "OFN gap." In this paper, we demonstrate and discuss how the OFN gap can be significantly reduced by the use of GC-MS with supersonic molecular beams (SMB). Experimental results were obtained with a recently developed GC-MS with SMB named 1200-SMB, that is based on the conversion of the Varian 1200 system into a GC-MS-MS with SMB. With this 1200-SMB system, the LOD of all types of analytes, including OFN, in real samples is significantly improved through the combination of: (a) enhanced molecular ion; (b) elimination of vacuum background noise; (c) elimination of mass independent noise; (d) elimination of ion source peak tailing and degradation; (e) significantly increased range of thermally labile and low volatility compounds that are amenable for analysis through lower sample elution temperatures; (f) reduced column bleed and ghost peaks through sample elution at lower temperatures; (g) improved compatibility with large volume injections; and (h) reduced matrix interferences through the combination of enhanced molecular ion and MS-MS. As a result, the 1200-SMB LODs of common and/or difficult compounds are much closer to its OFN LOD, even in complex matrices. We crossed the <1 fg OFN LOD milestone to achieve the lowest LOD to date using GC-MS, but more importantly, we attained LOD of 2 fg for diazinon, a common pesticide analyte. In another example, we achieved an LOD of 10 fg for underivatized testosterone, which is not amenable in traditional GC-MS analysis, and conducted many analyses

  6. Effects of low levels of road traffic noise during the night: a laboratory study on number of events, maximum noise levels and noise sensitivity

    NASA Astrophysics Data System (ADS)

    Öhrström, E.

    1995-01-01

    The objective of the laboratory study presented here was to elucidate the importance of the number of noise events of a relatively low maximum noise level for sleep disturbance effects (body movements, subjective sleep quality, mood and performance). Twelve test persons slept eight nights under home-like laboratory settings. During four of these nights, each test person was exposed to 16, 32, 64 and 128 noise events respectively from recorded road traffic noise at a maximum noise level of 45 dB(A). All test persons (aged 20-42 years) considered themselves rather or very sensitive towards noise. The results show a significant decrease in subjective sleep quality at 32 noise events per night. At 64 noise events, 50% of the test persons experienced difficulties in falling asleep and, as compared with quiet nights, the time required to fall asleep was on average 12 minutes longer. The occurrence of body movements was significantly related to the reported number of awakenings, and the number of body movements was three times higher during the noisy periods of the night as compared with the quiet periods, indicating acute noise effects. The results of a vigilance test indicate that noise during the night might prolong the time needed to solve the test. Finally, and regardless of number of noise events, a significant increase in tiredness during the day was found after nights with noise exposure. In the paper comparisons are also made with earlier experiments using maximum noise levels of 50 and 60 dB(A).

  7. Achieving effective terminal exciton delivery in quantum dot antenna-sensitized multistep DNA photonic wires.

    PubMed

    Spillmann, Christopher M; Ancona, Mario G; Buckhout-White, Susan; Algar, W Russ; Stewart, Michael H; Susumu, Kimihiro; Huston, Alan L; Goldman, Ellen R; Medintz, Igor L

    2013-08-27

    Assembling DNA-based photonic wires around semiconductor quantum dots (QDs) creates optically active hybrid architectures that exploit the unique properties of both components. DNA hybridization allows positioning of multiple, carefully arranged fluorophores that can engage in sequential energy transfer steps while the QDs provide a superior energy harvesting antenna capacity that drives a Förster resonance energy transfer (FRET) cascade through the structures. Although the first generation of these composites demonstrated four-sequential energy transfer steps across a distance >150 Å, the exciton transfer efficiency reaching the final, terminal dye was estimated to be only ~0.7% with no concomitant sensitized emission observed. Had the terminal Cy7 dye utilized in that construct provided a sensitized emission, we estimate that this would have equated to an overall end-to-end ET efficiency of ≤ 0.1%. In this report, we demonstrate that overall energy flow through a second generation hybrid architecture can be significantly improved by reengineering four key aspects of the composite structure: (1) making the initial DNA modification chemistry smaller and more facile to implement, (2) optimizing donor-acceptor dye pairings, (3) varying donor-acceptor dye spacing as a function of the Förster distance R0, and (4) increasing the number of DNA wires displayed around each central QD donor. These cumulative changes lead to a 2 orders of magnitude improvement in the exciton transfer efficiency to the final terminal dye in comparison to the first-generation construct. The overall end-to-end efficiency through the optimized, five-fluorophore/four-step cascaded energy transfer system now approaches 10%. The results are analyzed using Förster theory with various sources of randomness accounted for by averaging over ensembles of modeled constructs. Fits to the spectra suggest near-ideal behavior when the photonic wires have two sequential acceptor dyes (Cy3 and Cy3.5) and
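
    For orientation, the standard single-step Förster relation and the idealised product form for an n-step cascade are written out below; this is textbook FRET bookkeeping, not the ensemble-averaged model fitted in the paper, which additionally accounts for structural randomness in the assembled wires.

      % Single-step Foerster efficiency at donor-acceptor separation r, and the
      % idealised end-to-end efficiency of an n-step cascade (the paper's analysis
      % averages over ensembles of structurally randomised constructs).
      E(r) = \frac{1}{1 + (r/R_0)^6},
      \qquad
      E_{\text{end-to-end}} \approx \prod_{i=1}^{n} E_i(r_i)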

  8. Maximum Jailbreak

    NASA Astrophysics Data System (ADS)

    Singleton, B.

    First formulated one hundred and fifty years ago by the heretical scholar Nikolai Federov, the doctrine of cosmism begins with an absolute refusal to treat the most basic factors conditioning life on Earth, gravity and death, as necessary constraints on action. As manifest through the intoxicated cheers of its early advocates that humans should storm the heavens and conquer death, cosmism's foundational gesture was to conceive of the earth as a trap. It therefore understood the duty of philosophy, economics and design to be the creation of means to escape it. This could be regarded as a jailbreak at the maximum possible scale, a heist in which the human species could steal itself from the vault of the Earth. After several decades of relative disinterest, new space ventures are again inspiring scientific, technological and popular imaginations, and this essay explores what kind of cosmism might be constructed today. In this paper cosmism's position as a means of escape is reviewed and evaluated both by reflecting on the potential of technology that can actually help achieve its aims and through the lens of the state-of-the-art philosophy of accelerationism, which seeks to outrun modern tropes by intensifying them.

  9. On enforcing maximum principles and achieving element-wise species balance for advection-diffusion-reaction equations under the finite element method

    NASA Astrophysics Data System (ADS)

    Mudunuru, M. K.; Nakshatrala, K. B.

    2016-01-01

    We present a robust computational framework for advective-diffusive-reactive systems that satisfies maximum principles, the non-negative constraint, and element-wise species balance property. The proposed methodology is valid on general computational grids, can handle heterogeneous anisotropic media, and provides accurate numerical solutions even for very high Péclet numbers. The significant contribution of this paper is to incorporate advection (which makes the spatial part of the differential operator non-self-adjoint) into the non-negative computational framework, and overcome numerical challenges associated with advection. We employ low-order mixed finite element formulations based on least-squares formalism, and enforce explicit constraints on the discrete problem to meet the desired properties. The resulting constrained discrete problem belongs to convex quadratic programming for which a unique solution exists. Maximum principles and the non-negative constraint give rise to bound constraints while element-wise species balance gives rise to equality constraints. The resulting convex quadratic programming problems are solved using an interior-point algorithm. Several numerical results pertaining to advection-dominated problems are presented to illustrate the robustness, convergence, and the overall performance of the proposed computational framework.
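
    The constrained step described above can be illustrated with a toy bound-constrained convex quadratic program; the sketch below uses a simple projected-gradient iteration on an assumed symmetric positive-definite matrix, whereas the paper assembles the system from a mixed finite element discretisation, adds element-wise species-balance equality constraints, and solves the resulting problem with an interior-point algorithm.

      import numpy as np

      n = 20
      # Toy symmetric positive-definite system standing in for the symmetrised
      # (least-squares) discrete operator; the paper assembles K and f from a
      # mixed finite element discretisation instead.
      K = (np.diag(2.0 * np.ones(n))
           + np.diag(-1.0 * np.ones(n - 1), 1)
           + np.diag(-1.0 * np.ones(n - 1), -1))
      f = np.linspace(-0.5, 1.0, n)        # forcing chosen so the unconstrained
                                           # solution violates the physical bounds
      lo, hi = np.zeros(n), np.ones(n)     # discrete maximum principle: 0 <= c <= 1

      # Projected gradient descent for: minimise 0.5*c^T K c - f^T c, s.t. lo <= c <= hi.
      c = np.clip(np.linalg.solve(K, f), lo, hi)   # start from the clipped unconstrained solution
      step = 1.0 / np.linalg.norm(K, 2)            # safe step: 1 / largest eigenvalue
      for _ in range(5000):
          grad = K @ c - f
          c = np.clip(c - step * grad, lo, hi)     # gradient step + projection onto the box

      print(f"min(c) = {c.min():.3f}, max(c) = {c.max():.3f}")   # bounds hold by construction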

  10. Maximum likelihood.

    PubMed

    Yang, Shuying; De Angelis, Daniela

    2013-01-01

    The maximum likelihood method is a popular statistical inferential procedure widely used in many areas to obtain the estimates of the unknown parameters of a population of interest. This chapter gives a brief description of the important concepts underlying the maximum likelihood method, the definition of the key components, the basic theory of the method, and the properties of the resulting estimates. Confidence interval and likelihood ratio test are also introduced. Finally, a few examples of applications are given to illustrate how to derive maximum likelihood estimates in practice. A list of references to relevant papers and software for a further understanding of the method and its implementation is provided.
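
    A minimal worked example of the ideas summarised above, the maximum likelihood estimate of an exponential rate, a grid-based numerical check and an approximate Wald confidence interval, is sketched below on simulated data; it is illustrative only and is not tied to any application in the chapter.

      import numpy as np

      rng = np.random.default_rng(1)
      x = rng.exponential(scale=2.0, size=200)   # simulated data, true rate = 0.5

      def neg_log_lik(lam, data):
          # Exponential log-likelihood: sum_i (log lam - lam * x_i), negated.
          return -(len(data) * np.log(lam) - lam * data.sum())

      # Closed-form MLE for the exponential rate: lambda_hat = 1 / mean(x).
      lam_hat = 1.0 / x.mean()

      # Numerical check on a grid that the closed form sits at the minimum.
      grid = np.linspace(0.1, 2.0, 1901)
      lam_grid = grid[np.argmin([neg_log_lik(l, x) for l in grid])]

      # Approximate 95% Wald interval from the Fisher information I(lambda) = n / lambda^2.
      se = lam_hat / np.sqrt(len(x))
      print(f"MLE = {lam_hat:.3f} (grid check {lam_grid:.3f}), "
            f"95% CI ~ ({lam_hat - 1.96 * se:.3f}, {lam_hat + 1.96 * se:.3f})")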

  11. Pre-Type 1 Diabetes Dysmetabolism: Maximal sensitivity achieved with Both Oral and Intravenous Glucose Tolerance Testing

    PubMed Central

    Barker, Jennifer M.; McFann, Kim; Harrison, Leonard C.; Fourlanos, Spiros; Krischer, Jeffrey; Cuthbertson, David; Chase, H. Peter; Eisenbarth, George S.; Group, the DPT-1 Study

    2007-01-01

    Objective To determine the relationship of intravenous (IVGTT) and oral (OGTT) glucose tolerance tests abnormalities to diabetes development in a high-risk pre-diabetic cohort and identify an optimal testing strategy for detecting pre-clinical diabetes. Study design Diabetes Prevention Trial Type 1 randomized subjects to oral (n=372) and parenteral (n=339) insulin prevention trials. Subjects were followed with IVGTTs and OGTTs. Factors associated with progression to diabetes were evaluated. Results Survival analysis revealed that higher quartiles of 2-hour glucose and lower quartiles of FPIR at baseline were associated with decreased diabetes-free survival. Cox proportional hazards modeling showed that baseline BMI, FPIR and 2-hour glucose levels were significantly associated with an increased hazard for diabetes. On testing performed within 6 months of diabetes diagnosis, 3% (1/32) had normal first phase insulin response (FPIR) and normal 2-hour glucose on OGTT. The sensitivities for impaired glucose tolerance (IGT) and low FPIR performed within 6 months of diabetes diagnosis were equivalent (76% vs. 73%). Conclusions Most (97%) subjects had abnormal IVGTTs and/or OGTTs prior to the development of diabetes. The highest sensitivity is achieved using both tests. PMID:17188609

  12. The Achievement of Therapeutic Objectives Scale: Interrater Reliability and Sensitivity to Change in Short-Term Dynamic Psychotherapy and Cognitive Therapy

    ERIC Educational Resources Information Center

    Valen, Jakob; Ryum, Truls; Svartberg, Martin; Stiles, Tore C.; McCullough, Leigh

    2011-01-01

    This study examined interrater reliability and sensitivity to change of the Achievement of Therapeutic Objectives Scale (ATOS; McCullough, Larsen, et al., 2003) in short-term dynamic psychotherapy (STDP) and cognitive therapy (CT). The ATOS is a process scale originally developed to assess patients' achievements of treatment objectives in STDP,…

  13. A new coupled ice sheet-climate model: description and sensitivity to model physics under Eemian, Last Glacial Maximum, late Holocene and modern climate conditions

    NASA Astrophysics Data System (ADS)

    Fyke, J. G.; Weaver, A. J.; Pollard, D.; Eby, M.; Carter, L.; Mackintosh, A.

    2010-08-01

    The need to better understand long-term climate/ice sheet feedback loops is motivating efforts to couple ice sheet models into Earth System models which are capable of long-timescale simulations. In this paper we describe a coupled model, that consists of the University of Victoria Earth System Climate Model (UVic ESCM) and the Pennsylvania State University Ice model (PSUI). The climate model generates a surface mass balance (SMB) field via a sub-gridded surface energy/moisture balance model that resolves narrow ice sheet ablation zones. The ice model returns revised elevation, surface albedo and ice area fields, plus coastal fluxes of heat and moisture. An arbitrary number of ice sheets can be simulated, each on their own high-resolution grid and each capable of synchronous or asynchronous coupling with the overlying climate model. The model is designed to conserve global heat and moisture. In the process of improving model performance we developed a procedure to account for modelled surface air temperature (SAT) biases within the energy/moisture balance surface model and improved the UVic ESCM snow surface scheme through addition of variable albedos and refreezing over the ice sheet. A number of simulations for late Holocene, Last Glacial Maximum (LGM), and Eemian climate boundary conditions were carried out to explore the sensitivity of the coupled model and identify model configurations that best represented these climate states. The modelled SAT bias was found to play a significant role in long-term ice sheet evolution, as was the effect of refreezing meltwater and surface albedo. The bias-corrected model was able to reasonably capture important aspects of the Antarctic and Greenland ice sheets, including modern SMB and ice distribution. The simulated northern Greenland ice sheet was found to be prone to ice margin retreat at radiative forcings corresponding closely to those of the Eemian or the present-day.

  14. A new coupled ice sheet/climate model: description and sensitivity to model physics under Eemian, Last Glacial Maximum, late Holocene and modern climate conditions

    NASA Astrophysics Data System (ADS)

    Fyke, J. G.; Weaver, A. J.; Pollard, D.; Eby, M.; Carter, L.; Mackintosh, A.

    2011-03-01

    The need to better understand long-term climate/ice sheet feedback loops is motivating efforts to couple ice sheet models into Earth System models which are capable of long-timescale simulations. In this paper we describe a coupled model that consists of the University of Victoria Earth System Climate Model (UVic ESCM) and the Pennsylvania State University Ice model (PSUI). The climate model generates a surface mass balance (SMB) field via a sub-gridded surface energy/moisture balance model that resolves narrow ice sheet ablation zones. The ice model returns revised elevation, surface albedo and ice area fields, plus coastal fluxes of heat and moisture. An arbitrary number of ice sheets can be simulated, each on their own high-resolution grid and each capable of synchronous or asynchronous coupling with the overlying climate model. The model is designed to conserve global heat and moisture. In the process of improving model performance we developed a procedure to account for modelled surface air temperature (SAT) biases within the energy/moisture balance surface model and improved the UVic ESCM snow surface scheme through addition of variable albedos and refreezing over the ice sheet. A number of simulations for late Holocene, Last Glacial Maximum (LGM), and Eemian climate boundary conditions were carried out to explore the sensitivity of the coupled model and identify model configurations that best represented these climate states. The modelled SAT bias was found to play a significant role in long-term ice sheet evolution, as was the effect of refreezing meltwater and surface albedo. The bias-corrected model was able to reasonably capture important aspects of the Antarctic and Greenland ice sheets, including modern SMB and ice distribution. The simulated northern Greenland ice sheet was found to be prone to ice margin retreat at radiative forcings corresponding closely to those of the Eemian or the present-day.

  15. The Application of a New Maximum Color Contrast Sensitivity Test to the Early Prediction of Chiasma Damage in Cases of Pituitary Adenoma: The Pilot Study

    PubMed Central

    Liutkeviciene, Rasa; Glebauskiene, Brigita; Zaliuniene, Dalia; Kriauciuniene, Loresa; Bernotas, Giedrimantas; Tamasauskas, Arimantas

    2016-01-01

    Purpose Our objective was to estimate maximum color contrast sensitivity (MCCS) thresholds in individuals with chiasma opticum damage. Methods The pilot study tested 41 people with pituitary adenoma (PA) and 100 age- and gender-matched controls. Patients were divided into two groups according to PA size, PA ≤1 cm or PA >1 cm. A new MCCS test program was used for color discrimination. Results The mean total error score (TES) of MCCS was 1.8 in the PA ≤1 cm group (standard deviation [SD], 0.38), 3.5 in the PA >1 cm group (SD, 0.96), and 1.4 in the control group (SD, 0.31; p < 0.001). There was a positive correlation between tumor size and MCCS result (r = 0.648, p < 0.01). In the group with hormone-producing PA, the TES was 2.5 (SD, 1.09), compared with 4.2 in the group with non-functioning PA, i.e., patients without clinically significant hormone excess (SD, 3.16; p < 0.01). In patients with normal visual acuity (VA) or visual field, the MCCS TES was 3.3 (SD, 1.8), while that in patients with VA <0.00 was 4.6 (SD, 2.9). Conclusions The MCCS TES was 1.9 times better in patients with PA ≤1 cm than in patients with PA >1 cm (p < 0.01). In PA patients with normal VA, the TES was 2.35 times worse than that of healthy persons (p < 0.01). PMID:27478357

  16. Achieving Maximum Integration Utilizing Requirements Flow Down

    NASA Technical Reports Server (NTRS)

    Archiable, Wes; Askins, Bruce

    2011-01-01

    A robust and experienced systems engineering team is essential for a successful program. It is often a challenge to build a core systems engineering team early enough in a program to maximize integration and assure a common path for all supporting teams in a project; Ares I was no exception. During the planning of IVGVT, the team faced many challenges, including a lack of early identification of stakeholders, of team training in NASA's systems engineering practices, of solid requirements flow down, and of a top-down documentation strategy. The IVGVT team started test planning early in the program, before the systems engineering framework had matured, because of an aggressive schedule. The IVGVT team therefore increased their involvement in the Constellation systems engineering effort. Program-level requirements were established that flowed down to IVGVT, aligning all stakeholders to a common set of goals. The IVGVT team utilized the APPEL REQ Development Management course, which provided the team a NASA-focused model to follow. The IVGVT team engaged directly with the model verification and validation process to assure that a solid set of requirements drove the need for the test event. The IVGVT team looked at the initial planning state, analyzed the current state, and then produced recommendations for the ideal future state of a wide range of systems engineering functions and processes. Based on this analysis, the IVGVT team was able to produce a set of lessons learned and to provide suggestions for future programs or tests to use in their initial planning phase.

  17. Estimating insulin sensitivity from glucose levels only: Use of a non-linear mixed effects approach and maximum a posteriori (MAP) estimation.

    PubMed

    Yates, James W T; Watson, Edmund M

    2013-02-01

    Insulin sensitivity is an important parameter for the management of diabetes. It can be derived for a particular patient from data collected during glucose challenge tests, using glucose and insulin levels measured at various times. While this is a useful approach, deriving insulin sensitivities to inform insulin dosing in other settings such as intensive care units can be more challenging, especially as insulin levels have to be assayed in a laboratory rather than at the bedside. This paper investigates an approach to estimating insulin sensitivity from glucose levels only. Estimates of the population mean and between-individual parameter variances are used to derive conditional estimates of insulin sensitivity. The method is demonstrated to perform reasonably well, with the conditional estimates comparing well with estimates derived from insulin data as well. PMID:22244505

  18. Estimating insulin sensitivity from glucose levels only: Use of a non-linear mixed effects approach and maximum a posteriori (MAP) estimation.

    PubMed

    Yates, James W T; Watson, Edmund M

    2013-02-01

    Insulin sensitivity is an important parameter for the management of diabetes. It can be derived for a particular patient from data collected during glucose challenge tests, using glucose and insulin levels measured at various times. While this is a useful approach, deriving insulin sensitivities to inform insulin dosing in other settings such as intensive care units can be more challenging, especially as insulin levels have to be assayed in a laboratory rather than at the bedside. This paper investigates an approach to estimating insulin sensitivity from glucose levels only. Estimates of the population mean and between-individual parameter variances are used to derive conditional estimates of insulin sensitivity. The method is demonstrated to perform reasonably well, with the conditional estimates comparing well with estimates derived from insulin data as well.
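
    The conditional (MAP) step described in the two records above can be sketched with a deliberately simplified one-parameter glucose model; the decay model, the population statistics and the data below are assumptions for illustration only, whereas the study uses a full non-linear mixed-effects glucose-insulin model.

      import numpy as np

      # Deliberately simplified one-parameter glucose decay model used only to
      # illustrate the MAP idea; not the model used in the study.
      def glucose(t, si, g0=10.0, gb=5.0):
          return gb + (g0 - gb) * np.exp(-si * t)

      t = np.array([0.0, 10.0, 20.0, 30.0, 45.0, 60.0, 90.0, 120.0])   # minutes
      y = np.array([10.0, 8.9, 8.1, 7.4, 6.6, 6.1, 5.5, 5.2])          # mmol/L, invented

      # Assumed population statistics (the prior) and residual error.
      mu_si, omega = 0.02, 0.5    # log-normal prior: log(SI) ~ N(log(mu_si), omega^2)
      sigma = 0.2                 # residual standard deviation on glucose

      def neg_log_posterior(si):
          resid = (y - glucose(t, si)) / sigma
          prior = (np.log(si) - np.log(mu_si)) / omega
          return 0.5 * np.sum(resid**2) + 0.5 * prior**2

      grid = np.linspace(1e-3, 0.1, 2000)
      si_map = grid[np.argmin([neg_log_posterior(s) for s in grid])]
      print(f"MAP (conditional) estimate of insulin sensitivity: {si_map:.4f} 1/min")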

  19. Predictive models of lameness in dairy cows achieve high sensitivity and specificity with force measurements in three dimensions.

    PubMed

    Dunthorn, Jason; Dyer, Robert M; Neerchal, Nagaraj K; McHenry, Jonathan S; Rajkondawar, Parimal G; Steingraber, Gary; Tasch, Uri

    2015-11-01

    Lameness remains a significant cause of production losses and a growing welfare concern, and it may be a greater economic burden than clinical mastitis. The need for accurate, continuous automated detection systems continues to grow because the US prevalence of lameness is 12.5%, while individual herds may experience prevalences of 27.8-50.8%. To that end, the first force-plate system, restricted to the vertical dimension, identified lame cows with 85% specificity and 52% sensitivity. These results led to the hypothesis that the addition of transverse and longitudinal dimensions could improve the sensitivity of lameness detection. To address the hypothesis we upgraded the original force-plate system to measure ground reaction forces (GRFs) across three directions. GRFs and locomotion scores were generated from randomly selected cows, and logistic regression was used to develop a model that characterised the relationships of locomotion scores to the GRFs. This preliminary study showed that 76 variables across 3 dimensions produced a model with greater than 90% sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). The result was a marked improvement on the 52% sensitivity and 85% specificity previously observed with the one-dimensional model, and on the 45% sensitivities reported for visual observations. Validation of model accuracy continues, with the goal to finalise accurate automated methods of lameness detection. PMID:26278403
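
    The modelling step described above, logistic regression of lameness status on force-derived features followed by sensitivity, specificity and AUC, can be sketched as below on simulated data; the features, labels and plain gradient-ascent fit are stand-ins for the study's 76 GRF variables and its statistical software, and the metrics are computed in-sample for brevity.

      import numpy as np

      rng = np.random.default_rng(42)

      # Simulated stand-ins for GRF-derived features (the study used 76 variables
      # across three force dimensions) and for the visual locomotion-score labels.
      n, p = 400, 6
      X = rng.standard_normal((n, p))
      true_w = np.array([1.2, -0.8, 0.6, 0.0, 0.9, -0.4])
      y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(X @ true_w - 0.3)))).astype(float)

      Xb = np.column_stack([np.ones(n), X])     # add an intercept column
      w = np.zeros(p + 1)
      for _ in range(3000):                     # plain gradient ascent on the log-likelihood
          pr = 1.0 / (1.0 + np.exp(-(Xb @ w)))
          w += 0.01 * Xb.T @ (y - pr) / n

      pr = 1.0 / (1.0 + np.exp(-(Xb @ w)))
      pred = (pr >= 0.5).astype(float)
      tp = np.sum((pred == 1) & (y == 1)); fn = np.sum((pred == 0) & (y == 1))
      tn = np.sum((pred == 0) & (y == 0)); fp = np.sum((pred == 1) & (y == 0))
      sens, spec = tp / (tp + fn), tn / (tn + fp)

      # AUC via the rank-sum (Mann-Whitney) identity.
      order = np.argsort(pr)
      ranks = np.empty(n)
      ranks[order] = np.arange(1, n + 1)
      n1 = y.sum(); n0 = n - n1
      auc = (ranks[y == 1].sum() - n1 * (n1 + 1) / 2.0) / (n1 * n0)
      print(f"sensitivity {sens:.2f}, specificity {spec:.2f}, AUC {auc:.2f}")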

  20. Enhanced Conversion Efficiencies in Dye-Sensitized Solar Cells Achieved through Self-Assembled Platinum(II) Metallacages

    NASA Astrophysics Data System (ADS)

    He, Zuoli; Hou, Zhiqiang; Xing, Yonglei; Liu, Xiaobin; Yin, Xingtian; Que, Meidan; Shao, Jinyou; Que, Wenxiu; Stang, Peter J.

    2016-07-01

    Two-component self-assembled supramolecular coordination complexes with particular photophysical properties, wherein unique donors are combined with a single metal acceptor, can be utilized for many applications, including in photo-devices. In this communication, we describe the synthesis and characterization of two-component self-assembled supramolecular coordination complexes (SCCs) bearing triazine and porphyrin faces with promising light-harvesting properties. These complexes were obtained from the self-assembly of a 90° Pt(II) acceptor with 2,4,6-tris(4-pyridyl)-1,3,5-triazine (TPyT) or 5,10,15,20-tetra(4-pyridyl)-21H,23H-porphine (TPyP). The conversion efficiencies of the dye-sensitized TiO2 solar cells were greatly improved, to 6.79 and 6.08 respectively, when these SCCs were introduced into the TiO2 nanoparticle film photoanodes. In addition, the open-circuit voltage (Voc) of the dye-sensitized solar cells was also increased, to 0.769 and 0.768 V, which could be ascribed to inhibited interfacial charge recombination due to the addition of the SCCs.

  1. Sensitivity improvement of an electrical sensor achieved by control of biomolecules based on the negative dielectrophoretic force.

    PubMed

    Kim, Hye Jin; Kim, Jinsik; Yoo, Yong Kyoung; Lee, Jeong Hoon; Park, Jung Ho; Hwang, Kyo Seon

    2016-11-15

    Effective control of nano-scale biomolecules can enhance the sensitivity and limit of detection of an interdigitated microelectrode (IME) sensor. Manipulating the biomolecules by dielectrophoresis (DEP), and especially by the negative DEP (nDEP) force, so that they are trapped between electrodes (the sensing regions) was predicted to increase the binding efficiency of the antibody and target molecules, leading to a more effective reaction. To prove this concept, amyloid beta 42 (Aβ42) and prostate-specific antigen (PSA) protein were trapped in the sensing regions by the nDEP force under 5 V and 0.05 V, respectively, which was verified with COMSOL simulation. Using the simulation values, the resistance change (ΔR/Rb) of the IME sensor from the specific antibody-antigen reaction of the two biomolecules and the change in fluorescence intensity were compared under the reference (pDEP) and nDEP conditions. The ΔR/Rb value improved by about 2-fold and 1.66-fold with nDEP compared to the reference condition across various protein concentrations, and these increases were confirmed with fluorescence imaging. Overall, nDEP enhanced the detection sensitivity for Aβ42 and PSA by 128% and 258%, respectively, and the limit of detection improved by up to two orders of magnitude. These results prove that DEP can improve the biosensor's performance. PMID:27449966

  2. Enhanced Conversion Efficiencies in Dye-Sensitized Solar Cells Achieved through Self-Assembled Platinum(II) Metallacages

    PubMed Central

    He, Zuoli; Hou, Zhiqiang; Xing, Yonglei; Liu, Xiaobin; Yin, Xingtian; Que, Meidan; Shao, Jinyou; Que, Wenxiu; Stang, Peter J.

    2016-01-01

    Two-component self-assembled supramolecular coordination complexes with particular photophysical properties, wherein unique donors are combined with a single metal acceptor, can be utilized for many applications, including in photo-devices. In this communication, we describe the synthesis and characterization of two-component self-assembled supramolecular coordination complexes (SCCs) bearing triazine and porphyrin faces with promising light-harvesting properties. These complexes were obtained from the self-assembly of a 90° Pt(II) acceptor with 2,4,6-tris(4-pyridyl)-1,3,5-triazine (TPyT) or 5,10,15,20-tetra(4-pyridyl)-21H,23H-porphine (TPyP). The conversion efficiencies of the dye-sensitized TiO2 solar cells were greatly improved, to 6.79 and 6.08 respectively, when these SCCs were introduced into the TiO2 nanoparticle film photoanodes. In addition, the open-circuit voltage (Voc) of the dye-sensitized solar cells was also increased, to 0.769 and 0.768 V, which could be ascribed to inhibited interfacial charge recombination due to the addition of the SCCs. PMID:27404912

  3. NAD(P)H vs. Schiff base fluorescence by spectroscopy, imaging, and maximum sensitivity micrographs at the convergence of cellular detoxification, senescence, and transformation

    NASA Astrophysics Data System (ADS)

    Kohen, Elli; Hirschberg, Joseph G.; Kohen, Cahide; Monti, Marco

    1999-05-01

    Two intracellular fluorochromes, NAD(P)H and Schiff bases, provide monitoring of energy metabolism and photoperoxidations. Fluorochrome spectra and topographic distribution are measured in a microspectrofluorometer, pixel by pixel, using a CCD. The mitochondrial arrangement of Saccharomyces cerevisiae and metabolic activity at nuclear kidney epithelial sites are revealed. A kind of accelerated photoaging results in the accumulation of Schiff pigment. Schiff base emission is red-shifted, and it may be preceded by photo-oxidation of NAD(P)H. UVA production of oxygen radicals and peroxides may influence detoxification, senescence and/or transformation. Besides lysosomes, mitochondrial energy metabolism and ER and Golgi detoxification are open to study as multi-organelle complexes with fluorescent xenobiotics and probes. Melanocytes vs. melanoma cells in culture will be investigated using a new compact interferometer for Fourier coding of both emission and excitation spectra. Surprisingly, the photographic method, using the highest sensitivity films, may sometimes produce excellent structural detail. However, for kinetic studies, the CCD, or equivalent, is required. There is good potential for applications in diagnostics and prognostics plus the evaluation of new biopharmaceuticals.

  4. Development and Performance of Detectors for the Cryogenic Dark Matter Search Experiment with an Increased Sensitivity Based on a Maximum Likelihood Analysis of Beta Contamination

    SciTech Connect

    Driscoll, Donald D.; /Case Western Reserve U.

    2004-01-01

    first use of a beta-eliminating cut based on a maximum-likelihood characterization described above.

  5. Direct Coupling of Solid-Phase Microextraction with Mass Spectrometry: Sub-pg/g Sensitivity Achieved Using a Dielectric Barrier Discharge Ionization Source.

    PubMed

    Mirabelli, Mario F; Wolf, Jan-Christoph; Zenobi, Renato

    2016-07-19

    We report a new strategy for the direct coupling of Solid-Phase Microextraction (SPME) with mass spectrometry, based on thermal desorption of analytes extracted on the fibers, followed by ionization by a dielectric barrier discharge ionization (DBDI) source. Limits of detection as low as 0.3 pg/mL and a linear dynamic range of ≥3 orders of magnitude were achieved, with a very simple and reproducible approach. Different from direct analysis in real time (DART), desorption electrospray ionization (DESI), or low-temperature plasma (LTP), the desorption of the analytes from the SPME devices in our setup is completely separated from the ionization event. This enhances the reproducibility of the method and minimizes ion suppression phenomena. The analytes were quantitatively transferred from the SPME device to the DBDI source, and the use of an active capillary ionization embodiment of the DBDI source greatly enhanced ion transmission to the MS. This, together with the extraordinary sensitivity of DBDI, allowed sub-pg/mL sensitivities to be reached while skipping conventional, time-consuming chromatographic separation. PMID:27332082

  6. Efficient dye regeneration at low driving force achieved in triphenylamine dye LEG4 and TEMPO redox mediator based dye-sensitized solar cells.

    PubMed

    Yang, Wenxing; Vlachopoulos, Nick; Hao, Yan; Hagfeldt, Anders; Boschloo, Gerrit

    2015-06-28

    Minimizing the driving force required for the regeneration of oxidized dyes using redox mediators in an electrolyte is essential to further improve the open-circuit voltage and efficiency of dye-sensitized solar cells (DSSCs). Appropriate combinations of redox mediators and dye molecules should be explored to achieve this goal. Herein, we present a triphenylamine dye, LEG4, in combination with a TEMPO-based electrolyte in acetonitrile (E(0) = 0.89 V vs. NHE), reaching an efficiency of up to 5.4% under one sun illumination and a 40% performance improvement compared to the previously and widely used indoline dye D149. The origin of this improvement was found to be the increased dye regeneration efficiency of LEG4 using the TEMPO redox mediator, which regenerated more than 80% of the oxidized dye with a driving force of only ∼0.2 eV. Detailed mechanistic studies further revealed that, in addition to electron recombination to oxidized dyes, recombination of electrons from the conducting substrate and the mesoporous TiO2 film to the TEMPO(+) redox species in the electrolyte accounts for the reduced short-circuit current, compared to the state-of-the-art cobalt tris(bipyridine) electrolyte system. The diffusion length of the TEMPO-electrolyte based DSSCs was determined to be ∼0.5 μm, which is smaller than the ∼2.8 μm found for cobalt-electrolyte based DSSCs. These results show the advantages of using LEG4 as a sensitizer, compared to the previous record indoline dyes, in combination with a TEMPO-based electrolyte. The low driving force for efficient dye regeneration presented by these results shows the potential to further improve the power conversion efficiency (PCE) of DSSCs by utilizing redox couples and dyes with a minimal need of driving force for high regeneration yields.

  7. Efficient dye regeneration at low driving force achieved in triphenylamine dye LEG4 and TEMPO redox mediator based dye-sensitized solar cells.

    PubMed

    Yang, Wenxing; Vlachopoulos, Nick; Hao, Yan; Hagfeldt, Anders; Boschloo, Gerrit

    2015-06-28

    Minimizing the driving force required for the regeneration of oxidized dyes using redox mediators in an electrolyte is essential to further improve the open-circuit voltage and efficiency of dye-sensitized solar cells (DSSCs). Appropriate combinations of redox mediators and dye molecules should be explored to achieve this goal. Herein, we present a triphenylamine dye, LEG4, in combination with a TEMPO-based electrolyte in acetonitrile (E(0) = 0.89 V vs. NHE), reaching an efficiency of up to 5.4% under one sun illumination and a 40% performance improvement compared to the previously and widely used indoline dye D149. The origin of this improvement was found to be the increased dye regeneration efficiency of LEG4 using the TEMPO redox mediator, which regenerated more than 80% of the oxidized dye with a driving force of only ∼0.2 eV. Detailed mechanistic studies further revealed that, in addition to electron recombination to oxidized dyes, recombination of electrons from the conducting substrate and the mesoporous TiO2 film to the TEMPO(+) redox species in the electrolyte accounts for the reduced short-circuit current, compared to the state-of-the-art cobalt tris(bipyridine) electrolyte system. The diffusion length of the TEMPO-electrolyte based DSSCs was determined to be ∼0.5 μm, which is smaller than the ∼2.8 μm found for cobalt-electrolyte based DSSCs. These results show the advantages of using LEG4 as a sensitizer, compared to the previous record indoline dyes, in combination with a TEMPO-based electrolyte. The low driving force for efficient dye regeneration presented by these results shows the potential to further improve the power conversion efficiency (PCE) of DSSCs by utilizing redox couples and dyes with a minimal need of driving force for high regeneration yields. PMID:26016854

  8. Central sensitization does not identify patients with carpal tunnel syndrome who are likely to achieve short-term success with physical therapy.

    PubMed

    Fernández-de-Las-Peñas, César; Cleland, Joshua A; Ortega-Santiago, Ricardo; de-la-Llave-Rincon, Ana Isabel; Martínez-Perez, Almudena; Pareja, Juan A

    2010-11-01

    widespread central sensitization may not be present in women with CTS who are likely to achieve a successful outcome with physical therapy. Future studies are now necessary to validate these findings.

  9. Assessment of disease activity in treated acromegalic patients using a sensitive GH assay: should we achieve strict normal GH levels for a biochemical cure?

    PubMed

    Costa, Augusto C F; Rossi, Adriana; Martinelli, Carlos E; Machado, Hélio R; Moreira, Ayrton C

    2002-07-01

    The definition of a cure for acromegaly is controversial in the absence of a well-defined clinical end-point. Therefore, cure in acromegaly may be arbitrarily defined as a normalization of biochemical parameters. The accepted normal GH levels have been modified over time with the improved sensitivity of GH assays. The objective of the present study was to investigate the suppression of GH levels in the oral glucose tolerance test (oGTT) using a sensitive GH immunoassay in a large group of normal adult subjects and treated acromegalic patients. We evaluated these results in conjunction with IGF-I and IGF binding protein 3 (IGFBP-3) levels. Nadir GH levels after the ingestion of 75 g of glucose, as well as baseline IGF-I and IGFBP-3 levels, were evaluated in 56 normal adult subjects and 32 previously treated acromegalic patients. GH was assayed by an immunofluorometric assay. Normal controls had a mean GH nadir of 0.07 +/- 0.09 microg/liter. Their mean basal IGF-I and IGFBP-3 levels were 160 +/- 58 microg/liter and 1926 +/- 497 microg/liter, respectively. Acromegalic patients had mean GH nadir, IGF-I, and IGFBP-3 levels higher than those of normal subjects (2.6 +/- 7.6 microg/liter, 313 +/- 246 microg/liter, and 2625 +/- 1154 microg/liter, respectively). Considering a GH cut-off value of 0.25 microg/liter, as the normalized postglucose GH upper limit (mean + 2 SD) and, therefore, the target for treated patients, only five patients (15.6%) would have been considered cured. These results suggest that the strict physiological normalization of GH levels after oGTT is not often achieved as a therapeutic endpoint in acromegaly. In addition to the refinement of GH assays, epidemiological studies have suggested that the mean basal GH levels (<2.5 microg/liter) or oGTT-derived GH levels < 2 microg/liter (RIA), or the normalization of IGF-I levels, appear to reduce morbidity and mortality in treated acromegaly. Using this epidemiologically based definition of cure for
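
    The 0.25 microg/liter cut-off quoted above is simply the controls' mean nadir plus two standard deviations; a two-line check:

```python
# Post-glucose GH nadir in normal controls (from the abstract): mean 0.07, SD 0.09 microg/L.
mean_nadir, sd_nadir = 0.07, 0.09
cutoff = mean_nadir + 2 * sd_nadir
print(f"upper limit of normal (mean + 2 SD): {cutoff:.2f} microg/L")  # 0.25, as used in the study
```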

  10. The Testability of Maximum Magnitude

    NASA Astrophysics Data System (ADS)

    Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.

    2012-12-01

    Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future, we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or find a way to decrease their influence on the estimated hazard.
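
    The kind of extreme-value argument sketched in the abstract can be illustrated as follows, assuming a doubly truncated Gutenberg-Richter magnitude distribution and Poisson occurrence; the rate, b-value and magnitude values are illustrative, not those of the study.

```python
import numpy as np

# Assumed truncated Gutenberg-Richter model: magnitudes above m0 occur as a
# Poisson process with annual rate lam0; b is the GR b-value; m_max is the
# (hypothesised) maximum possible magnitude. All values are illustrative.
m0, b, lam0, m_max, T = 5.0, 1.0, 10.0, 9.0, 50.0
beta = b * np.log(10.0)

def cdf_magnitude(m):
    """Doubly truncated exponential (GR) CDF on [m0, m_max]."""
    num = 1.0 - np.exp(-beta * (m - m0))
    den = 1.0 - np.exp(-beta * (m_max - m0))
    return np.clip(num / den, 0.0, 1.0)

def prob_max_below(m, T):
    """P(largest magnitude observed in T years is below m) under the Poisson model."""
    return np.exp(-lam0 * T * (1.0 - cdf_magnitude(m)))

# Events near m_max are so rare that testable periods barely constrain it.
for m in (7.0, 8.0, 8.5, 9.0):
    print(f"P(max magnitude in {T:.0f} yr < {m}) = {prob_max_below(m, T):.3f}")
```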

  11. Maximum thrust mode evaluation

    NASA Technical Reports Server (NTRS)

    Orme, John S.; Nobbs, Steven G.

    1995-01-01

    Measured reductions in acceleration times resulting from the application of the F-15 performance seeking control (PSC) maximum thrust mode during the dual-engine test phase are presented as a function of power setting and flight condition. Data were collected at altitudes of 30,000 and 45,000 feet at military and maximum afterburning power settings. The time savings for the supersonic accelerations are less than at subsonic Mach numbers because of the increased modeling and control complexity. In addition, the propulsion system was designed to be optimized at the mid supersonic Mach number range. Recall that even though the engine is at maximum afterburner, PSC does not trim the afterburner for the maximum thrust mode. Subsonically at military power, the time to accelerate from Mach 0.6 to 0.95 was cut by between 6 and 8 percent with a single-engine application of PSC, and by over 14 percent when both engines were optimized. At maximum afterburner, the thrust increases were similar in magnitude to the military power results, but because of the higher thrust levels at maximum afterburner and the higher aircraft drag at supersonic Mach numbers, the percentage thrust increase and the reduction in time to accelerate were smaller than for the subsonic accelerations. Savings in time to accelerate supersonically at maximum afterburner ranged from 4 to 7 percent. In general, the maximum thrust mode has performed well, demonstrating significant thrust increases at military and maximum afterburner power. Increases of up to 15 percent at typical combat-type flight conditions were identified. Thrust increases of this magnitude could be useful in a combat situation.

  12. Maximum efficiency of an autophase TWT

    NASA Astrophysics Data System (ADS)

    Bondarenko, B. N.; Dimashko, Iu. A.; Kryzhanovskii, V. G.

    1985-10-01

    Formulas are presented for the maximum efficiency of an autophase TWT. It is shown that the maximum efficiency is determined by the ohmic-loss coefficient and is achieved through a successive application of the isoadiabatic-amplification mode and the isoacceptance mode. The efficiency can reach a value of 75-80 percent; further increases may be achieved through an improvement of the capture quality.

  13. EPA Maximum Achievable Contraction of Technocrats Act of 2013

    THOMAS, 113th Congress

    Rep. Griffith, H. Morgan [R-VA-9]

    2013-12-03

    12/16/2013 Referred to the Subcommittee on Horticulture, Research, Biotechnology, and Foreign Agriculture. Status: Introduced.

  14. Lunar Farming: Achieving Maximum Yield for the Exploration of Space

    NASA Technical Reports Server (NTRS)

    Salisbury, Frank B.

    1991-01-01

    A look at what it might be like on a lunar farm in the year 2020 is provided from the point of view of the farmer. Of necessity, the farm would be a Controlled Ecological (or Environment) Life-Support System (CELSS) or a bioregenerative life-support system. Topics covered in the imaginary trip through the farm are the light, water, gasses, crops, the medium used for plantings, and the required engineering. The CELSS is designed with four functioning parts: (1) A plant-production facility with higher plants and algae; (2) food technology kitchens; (3) waste processing and recycling facilities; and (4) control systems. In many cases there is not yet enough information to be sure about matters discussed, but the exercise in imagination pinpoints a number of areas that still need considerable research to resolve the problems perceived.

  15. Radiation engineering of optical antennas for maximum field enhancement.

    PubMed

    Seok, Tae Joon; Jamshidi, Arash; Kim, Myungki; Dhuey, Scott; Lakhani, Amit; Choo, Hyuck; Schuck, Peter James; Cabrini, Stefano; Schwartzberg, Adam M; Bokor, Jeffrey; Yablonovitch, Eli; Wu, Ming C

    2011-07-13

    Optical antennas have generated much interest in recent years due to their ability to focus optical energy beyond the diffraction limit, benefiting a broad range of applications such as sensitive photodetection, magnetic storage, and surface-enhanced Raman spectroscopy. To achieve the maximum field enhancement for an optical antenna, parameters such as the antenna dimensions, loading conditions, and coupling efficiency have been previously studied. Here, we present a framework, based on coupled-mode theory, to achieve maximum field enhancement in optical antennas through optimization of optical antennas' radiation characteristics. We demonstrate that the optimum condition is achieved when the radiation quality factor (Q(rad)) of optical antennas is matched to their absorption quality factor (Q(abs)). We achieve this condition experimentally by fabricating the optical antennas on a dielectric (SiO(2)) coated ground plane (metal substrate) and controlling the antenna radiation through optimizing the dielectric thickness. The dielectric thickness at which the matching condition occurs is approximately half of the quarter-wavelength thickness, typically used to achieve constructive interference, and leads to ∼20% higher field enhancement relative to a quarter-wavelength thick dielectric layer.
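
    The matching condition can be illustrated with a short temporal coupled-mode-theory sketch: for a fixed absorption rate, the on-resonance stored energy is maximised when the radiative and absorptive decay rates are equal. The scaling used below is the standard single-mode CMT result; the numbers themselves are arbitrary.

```python
import numpy as np

# Temporal coupled-mode theory sketch: an antenna resonance with radiative decay
# rate g_rad and absorption decay rate g_abs, driven on resonance by a fixed
# incident wave. The stored energy (hence field enhancement squared) scales as
#   U ~ g_rad / (g_rad + g_abs)**2,
# which is maximised when g_rad = g_abs, i.e. Q_rad = Q_abs.
g_abs = 1.0                                    # absorption rate (arbitrary units)
ratios = np.array([0.25, 0.5, 1.0, 2.0, 4.0])  # g_rad / g_abs

for r in ratios:
    g_rad = r * g_abs
    stored = g_rad / (g_rad + g_abs) ** 2
    print(f"g_rad/g_abs = {r:>4}: relative stored energy = {stored:.3f}")
# The printed values peak at g_rad/g_abs = 1, illustrating the matching condition.
```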

  16. Maximum life spur gear design

    NASA Technical Reports Server (NTRS)

    Savage, M.; Mackulin, B. J.; Coe, H. H.; Coy, J. J.

    1991-01-01

    Optimization procedures allow one to design a spur gear reduction for maximum life and other end use criteria. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial guess values. The optimization algorithm is described, and the models for gear life and performance are presented. The algorithm is compact and has been programmed for execution on a desk top computer. Two examples are presented to illustrate the method and its application.

  17. Teaching for maximum learning: The Philippine experience

    NASA Astrophysics Data System (ADS)

    Sutaria, Minda C.

    1990-06-01

    The author describes how the achievement level of Filipino grade school children is being improved through teaching for maximum learning. To promote teaching for maximum learning, it was imperative to identify minimum learning competencies in the new curriculum for each grade level, retrain teachers for teaching for maximum learning, develop appropriate instructional materials, improve the quality of supervision of instruction, install a multi-level (national to school) testing system, and redress inequities in the distribution of human and material resources. This systematic approach to solving the problem of low-quality educational outcomes has resulted in a modest but steady improvement in the achievement levels of school children.

  18. The Effects of Head Start on Children's Kindergarten Retention, Reading and Math Achievement in Fall Kindergarten--An Application of Propensity Score Method and Sensitivity Analysis

    ERIC Educational Resources Information Center

    Dong, Nianbo

    2009-01-01

    Using data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 (ECLS-K), this paper applied an optimal propensity score matching method to evaluate the effects of Head Start on children's kindergarten retention, reading and math achievement in fall kindergarten, compared with center-based care. Both parametric and nonparametric…
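
    The general propensity-score workflow referred to above (estimate scores, match treated and comparison children, compare outcomes) can be sketched as follows with simulated data; the covariates, outcome and effect size are invented, and the ECLS-K variables are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)

# Simulated covariates, a "Head Start vs centre-based care" indicator that depends
# on them, and an outcome (e.g. a reading score) -- all purely illustrative.
n = 2000
X = rng.normal(size=(n, 4))
p_treat = 1.0 / (1.0 + np.exp(-(0.6 * X[:, 0] - 0.4 * X[:, 2])))
treated = rng.binomial(1, p_treat)
outcome = 2.0 * X[:, 0] + X[:, 1] + 0.5 * treated + rng.normal(size=n)

# Step 1: estimate propensity scores with logistic regression.
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# Step 2: one-to-one nearest-neighbour matching on the propensity score.
nn = NearestNeighbors(n_neighbors=1).fit(ps[treated == 0].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated == 1].reshape(-1, 1))

# Step 3: compare outcomes of treated units with their matched controls.
att = outcome[treated == 1].mean() - outcome[treated == 0][idx.ravel()].mean()
print("matched estimate of the treatment effect:", round(att, 3))
```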

  19. Dye-sensitized solar cells employing a single film of mesoporous TiO2 beads achieve power conversion efficiencies over 10%.

    PubMed

    Sauvage, Frédéric; Chen, Dehong; Comte, Pascal; Huang, Fuzhi; Heiniger, Leo-Philipp; Cheng, Yi-Bing; Caruso, Rachel A; Graetzel, Michael

    2010-08-24

    Dye-sensitized solar cells employing mesoporous TiO(2) beads have demonstrated longer electron diffusion lengths and extended electron lifetimes compared with Degussa P25 titania electrodes, due to the well interconnected, densely packed nanocrystalline TiO(2) particles inside the beads. Careful selection of the dye, to match the dye's photon absorption characteristics with the light-scattering properties of the beads, has improved the light harvesting and conversion efficiency of the bead electrode in the dye-sensitized solar cell. This has resulted in a solar-to-electric power conversion efficiency (PCE) of greater than 10% (10.6% for the Ru(II)-based dye C101 and 10.7% using C106) for the first time using a single screen-printed titania layer cell construction (that is, without an additional scattering layer).
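
    For reference, power conversion efficiency is computed from the short-circuit current density, open-circuit voltage and fill factor; the sketch below uses hypothetical values, not the parameters reported for the bead cells.

```python
# Standard definition of power conversion efficiency for a solar cell:
#   PCE = Jsc * Voc * FF / Pin
jsc = 17.5e-3   # short-circuit current density, A/cm^2 (hypothetical)
voc = 0.78      # open-circuit voltage, V (hypothetical)
ff = 0.75       # fill factor (hypothetical)
p_in = 100e-3   # AM1.5G input power density, W/cm^2

pce = jsc * voc * ff / p_in
print(f"PCE = {pce:.1%}")   # ~10.2% with these illustrative numbers
```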

  20. Achievement of over 1.4 V photovoltage in a dye-sensitized solar cell by the application of a silyl-anchor coumarin dye

    PubMed Central

    Kakiage, Kenji; Osada, Hiroyuki; Aoyama, Yohei; Yano, Toru; Oya, Keiji; Iwamoto, Shinji; Fujisawa, Jun-ichi; Hanaya, Minoru

    2016-01-01

    A dye-sensitized solar cell (DSSC) fabricated by using a novel silyl-anchor coumarin dye with alkyl-chain substituents, a Br3−/Br− redox electrolyte solution containing water, and a Mg2+-doped anatase-TiO2 electrode with twofold surface modification by MgO and Al2O3 exhibited an open-circuit photovoltage of over 1.4 V, demonstrating the potential of DSSCs as practical photovoltaic devices. PMID:27762401

  1. Synthesis of grafted phosphorylcholine polymer layers as specific recognition ligands for C-reactive protein focused on grafting density and thickness to achieve highly sensitive detection.

    PubMed

    Kamon, Yuri; Kitayama, Yukiya; Itakura, Akiko N; Fukazawa, Kyoko; Ishihara, Kazuhiko; Takeuchi, Toshifumi

    2015-04-21

    We studied the effects of layer thickness and grafting density of poly(2-methacryloyloxyethyl phosphorylcholine) (PMPC) thin layers as specific ligands for the highly sensitive binding of C-reactive protein (CRP). PMPC layer thickness was controlled by surface-initiated activators generated by electron transfer for atom transfer radical polymerization (AGET ATRP). PMPC grafting density was controlled by utilizing mixed self-assembled monolayers with different incorporation ratios of the bis[2-(2-bromoisobutyryloxy)undecyl] disulfide ATRP initiator, as modulated by altering the feed molar ratio with (11-mercaptoundecyl)tetra(ethylene glycol). X-ray photoelectron spectroscopy and ellipsometry measurements were used to characterize the modified surfaces. PMPC grafting densities were estimated from the polymer thickness and the molecular weight obtained from a sacrificial initiator during surface-initiated AGET ATRP. The effects of thickness and grafting density of the obtained PMPC layers on CRP binding performance were investigated using surface plasmon resonance employing a 10 mM Tris-HCl running buffer containing 140 mM NaCl and 2 mM CaCl2 (pH 7.4). Furthermore, the non-specific binding properties of the obtained layers were investigated using human serum albumin (HSA) as a reference protein. The PMPC layer with a thickness of 4.6 nm and a grafting density of 1.27 chains per nm(2) showed highly sensitive CRP detection (limit of detection: 4.4 ng mL(-1)) with low non-specific HSA adsorption, a 10-fold improvement over our previously reported limit of detection of 50 ng mL(-1). PMID:25783194

  2. Synthesis of grafted phosphorylcholine polymer layers as specific recognition ligands for C-reactive protein focused on grafting density and thickness to achieve highly sensitive detection.

    PubMed

    Kamon, Yuri; Kitayama, Yukiya; Itakura, Akiko N; Fukazawa, Kyoko; Ishihara, Kazuhiko; Takeuchi, Toshifumi

    2015-04-21

    We studied the effects of layer thickness and grafting density of poly(2-methacryloyloxyethyl phosphorylcholine) (PMPC) thin layers as specific ligands for the highly sensitive binding of C-reactive protein (CRP). PMPC layer thickness was controlled by surface-initiated activators generated by electron transfer for atom transfer radical polymerization (AGET ATRP). PMPC grafting density was controlled by utilizing mixed self-assembled monolayers with different incorporation ratios of the bis[2-(2-bromoisobutyryloxy)undecyl] disulfide ATRP initiator, as modulated by altering the feed molar ratio with (11-mercaptoundecyl)tetra(ethylene glycol). X-ray photoelectron spectroscopy and ellipsometry measurements were used to characterize the modified surfaces. PMPC grafting densities were estimated from the polymer thickness and the molecular weight obtained from a sacrificial initiator during surface-initiated AGET ATRP. The effects of thickness and grafting density of the obtained PMPC layers on CRP binding performance were investigated using surface plasmon resonance employing a 10 mM Tris-HCl running buffer containing 140 mM NaCl and 2 mM CaCl2 (pH 7.4). Furthermore, the non-specific binding properties of the obtained layers were investigated using human serum albumin (HSA) as a reference protein. The PMPC layer with a thickness of 4.6 nm and a grafting density of 1.27 chains per nm(2) showed highly sensitive CRP detection (limit of detection: 4.4 ng mL(-1)) with low non-specific HSA adsorption, a 10-fold improvement over our previously reported limit of detection of 50 ng mL(-1).

  3. These Shoes Are Made for Walking: Sensitivity Performance Evaluation of Commercial Activity Monitors under the Expected Conditions and Circumstances Required to Achieve the International Daily Step Goal of 10,000 Steps

    PubMed Central

    O’Connell, Sandra; ÓLaighin, Gearóid; Kelly, Lisa; Murphy, Elaine; Beirne, Sorcha; Burke, Niall; Kilgannon, Orlaith; Quinlan, Leo R.

    2016-01-01

    Introduction: Physical activity is a vitally important part of a healthy lifestyle, and is of major benefit to both physical and mental health. A daily step count of 10,000 steps is recommended globally to achieve an appropriate level of physical activity. Accurate quantification of physical activity during conditions reflecting those needed to achieve the recommended daily step count of 10,000 steps is essential. As such, we aimed to assess four commercial activity monitors for their sensitivity/accuracy in a prescribed walking route that reflects a range of surfaces that would typically be used to achieve the recommended daily step count, in two types of footwear expected to be used throughout the day when aiming to achieve the recommended daily step count, and in a timeframe required to do so. Methods: Four commercial activity monitors were worn simultaneously by participants (n = 15) during a prescribed walking route reflective of surfaces typically encountered while achieving the daily recommended 10,000 steps. Activity monitors tested were the Garmin Vivofit™, New Lifestyles’ NL-2000™ pedometer, Withings Smart Activity Monitor Tracker (Pulse O2)™, and Fitbit One™. Results: All activity monitors tested were accurate in their step detection over the variety of different surfaces tested (natural lawn grass, gravel, ceramic tile, tarmacadam/asphalt, linoleum), when wearing both running shoes and hard-soled dress shoes. Conclusion: All activity monitors tested were accurate in their step detection sensitivity and are valid monitors for physical activity quantification over the variety of different surfaces tested, when wearing both running shoes and hard-soled dress shoes, and over a timeframe necessary for accumulating the recommended daily step count of 10,000 steps. However, it is important to consider the accuracy of activity monitors, particularly when physical activity in the form of stepping activities is prescribed as an intervention in the

  4. The last glacial maximum

    USGS Publications Warehouse

    Clark, P.U.; Dyke, A.S.; Shakun, J.D.; Carlson, A.E.; Clark, J.; Wohlfarth, B.; Mitrovica, J.X.; Hostetler, S.W.; McCabe, A.M.

    2009-01-01

    We used 5704 14C, 10Be, and 3He ages that span the interval from 10,000 to 50,000 years ago (10 to 50 ka) to constrain the timing of the Last Glacial Maximum (LGM) in terms of global ice-sheet and mountain-glacier extent. Growth of the ice sheets to their maximum positions occurred between 33.0 and 26.5 ka in response to climate forcing from decreases in northern summer insolation, tropical Pacific sea surface temperatures, and atmospheric CO2. Nearly all ice sheets were at their LGM positions from 26.5 ka to 19 to 20 ka, corresponding to minima in these forcings. The onset of Northern Hemisphere deglaciation 19 to 20 ka was induced by an increase in northern summer insolation, providing the source for an abrupt rise in sea level. The onset of deglaciation of the West Antarctic Ice Sheet occurred between 14 and 15 ka, consistent with evidence that this was the primary source for an abrupt rise in sea level approximately 14.5 ka.

  5. The Last Glacial Maximum.

    PubMed

    Clark, Peter U; Dyke, Arthur S; Shakun, Jeremy D; Carlson, Anders E; Clark, Jorie; Wohlfarth, Barbara; Mitrovica, Jerry X; Hostetler, Steven W; McCabe, A Marshall

    2009-08-01

    We used 5704 14C, 10Be, and 3He ages that span the interval from 10,000 to 50,000 years ago (10 to 50 ka) to constrain the timing of the Last Glacial Maximum (LGM) in terms of global ice-sheet and mountain-glacier extent. Growth of the ice sheets to their maximum positions occurred between 33.0 and 26.5 ka in response to climate forcing from decreases in northern summer insolation, tropical Pacific sea surface temperatures, and atmospheric CO2. Nearly all ice sheets were at their LGM positions from 26.5 ka to 19 to 20 ka, corresponding to minima in these forcings. The onset of Northern Hemisphere deglaciation 19 to 20 ka was induced by an increase in northern summer insolation, providing the source for an abrupt rise in sea level. The onset of deglaciation of the West Antarctic Ice Sheet occurred between 14 and 15 ka, consistent with evidence that this was the primary source for an abrupt rise in sea level approximately 14.5 ka.

  6. Maximum predictive power and the superposition principle

    NASA Technical Reports Server (NTRS)

    Summhammer, Johann

    1994-01-01

    In quantum physics the direct observables are probabilities of events. We ask how observed probabilities must be combined to achieve what we call maximum predictive power. According to this concept the accuracy of a prediction must only depend on the number of runs whose data serve as input for the prediction. We transform each probability to an associated variable whose uncertainty interval depends only on the amount of data and strictly decreases with it. We find that for a probability which is a function of two other probabilities maximum predictive power is achieved when linearly summing their associated variables and transforming back to a probability. This recovers the quantum mechanical superposition principle.
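
    The idea of an associated variable whose uncertainty depends only on the number of runs can be illustrated with the familiar variance-stabilising arcsine transform of an estimated probability; whether this is exactly the transformation used in the paper is not claimed here.

```python
import numpy as np

# Illustration of a variable whose uncertainty depends only on the number of runs:
# for a probability estimate p_hat from N trials, chi = arcsin(sqrt(p_hat)) has
# standard deviation ~ 1/(2*sqrt(N)) regardless of p (variance-stabilising transform).
rng = np.random.default_rng(2)
N = 10_000
for p in (0.1, 0.3, 0.5, 0.9):
    p_hat = rng.binomial(N, p, size=20_000) / N
    chi = np.arcsin(np.sqrt(p_hat))
    print(f"p = {p}: std of chi = {chi.std():.5f}  (theory ~ {1/(2*np.sqrt(N)):.5f})")
```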

  7. Maximum Entropy Fundamentals

    NASA Astrophysics Data System (ADS)

    Harremoeës, P.; Topsøe, F.

    2001-09-01

    In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over the development of natural
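
    The Mean Energy Model mentioned above can be made concrete with a small sketch: maximising the entropy subject to a fixed mean "energy" yields the Gibbs form p_i proportional to exp(-beta*E_i), with beta fixed by the constraint. The energy levels and target mean below are arbitrary.

```python
import numpy as np
from scipy.optimize import brentq

# Mean Energy Model: maximise S = -sum p_i log p_i subject to sum p_i E_i = E_target.
# The maximiser is the Gibbs form p_i ~ exp(-beta * E_i); beta is fixed by the constraint.
E = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # "energy" levels (illustrative)
E_target = 1.2                            # required mean energy

def mean_energy(beta):
    w = np.exp(-beta * E)
    p = w / w.sum()
    return p @ E

beta = brentq(lambda b: mean_energy(b) - E_target, -10.0, 10.0)
p = np.exp(-beta * E)
p /= p.sum()
print("beta =", round(beta, 4))
print("maximum-entropy distribution:", np.round(p, 4))
print("entropy:", round(-np.sum(p * np.log(p)), 4))
```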

  8. The Solar Maximum observatory

    NASA Technical Reports Server (NTRS)

    Rust, D. M.

    1984-01-01

    The successful retrieval and repair of the Solar Maximum Mission (SMM) satellite by Shuttle astronauts in April 1984 permitted continuance of solar flare observations that began in 1980. The SMM carries a soft X ray polychromator, gamma ray, UV and hard X ray imaging spectrometers, a coronagraph/polarimeter and particle counters. The data gathered thus far indicated that electrical potentials of 25 MeV develop in flares within 2 sec of onset. X ray data show that flares are composed of compressed magnetic loops that have come too close together. Other data have been taken on mass ejection, impacts of electron beams and conduction fronts with the chromosphere and changes in the solar radiant flux due to sunspots.

  9. Generalized Maximum Entropy

    NASA Technical Reports Server (NTRS)

    Cheeseman, Peter; Stutz, John

    2005-01-01

    A long standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].

  10. Maximum bow force revisited.

    PubMed

    Mores, Robert

    2016-08-01

    Schelleng [J. Acoust. Soc. Am. 53, 26-41 (1973)], Askenfelt [J. Acoust. Soc. Am. 86, 503-516 (1989)], Schumacher [J. Acoust. Soc. Am. 96, 1985-1998 (1994)], and Schoonderwaldt, Guettler, and Askenfelt [Acta Acust. Acust. 94, 604-622 (2008)] formulated, in different ways, how the maximum bow force relates to bow velocity, bow-bridge distance, string impedance, and friction coefficients. Issues of uncertainty are how to account for friction or for the rotational admittance of the strings. Related measurements at the respective transitions between regimes of Helmholtz motion and non-Helmholtz motion employ a variety of bowing machines and stringed instruments. The related findings include all necessary parameters except the friction coefficients, leaving the underlying models unconfirmed. Here, a bowing pendulum has been constructed which allows precise measurement of relevant bowing parameters, including the friction coefficients. Two cellos are measured across all strings for three different bow-bridge distances. The empirical data suggest that, taking the diverse elements of existing models as options, Schelleng's model combined with Schumacher's velocity term yields the best fit. Furthermore, the pendulum employs a bow driving mechanism with adaptive impedance which discloses that mentioned regimes are stable and transitions between them sometimes require a hysteresis on related parameters. PMID:27586745
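
    A commonly quoted form of Schelleng's maximum bow force relates bow force to string impedance, bow velocity, relative bow-bridge distance and the friction coefficients; the snippet below evaluates that form with illustrative cello-like numbers. It is offered as context, not as the model the paper's measurements ultimately favour.

```python
# Commonly quoted form of Schelleng's maximum bow force (an assumption of this
# sketch, not a claim about the paper's fitted model):
#   F_max = 2 * Z * v_b / (beta * (mu_s - mu_d))
# Z          : characteristic impedance of the string
# v_b        : bow velocity
# beta       : bow-bridge distance as a fraction of string length
# mu_s, mu_d : static and dynamic friction coefficients
Z, v_b, beta, mu_s, mu_d = 0.55, 0.1, 0.04, 0.8, 0.3   # illustrative values

f_max = 2.0 * Z * v_b / (beta * (mu_s - mu_d))
print(f"maximum bow force ~ {f_max:.2f} N")
```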

  11. Maximum entropy production in daisyworld

    NASA Astrophysics Data System (ADS)

    Maunu, Haley A.; Knuth, Kevin H.

    2012-05-01

    Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
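
    A minimal Watson-Lovelock-style daisyworld, with the usual linearised local-temperature simplification and illustrative constants, shows the self-regulation the abstract refers to; it does not address the MEP question itself.

```python
import numpy as np

# Minimal daisyworld sketch: fixed luminosity, illustrative constants,
# linearised local-temperature coupling, simple Euler time stepping.
SIGMA, S, L = 5.67e-8, 917.0, 1.0          # Stefan-Boltzmann, insolation, luminosity
A_G, A_W, A_B = 0.5, 0.75, 0.25            # albedos: bare ground, white, black daisies
Q, GAMMA = 20.0, 0.3                       # local-heating parameter, death rate

def growth(T):
    """Parabolic growth response, optimal near 22.5 C, zero outside roughly 5-40 C."""
    return np.maximum(0.0, 1.0 - 0.003265 * (295.65 - T) ** 2)

x_w, x_b = 0.01, 0.01                      # initial daisy cover fractions
for _ in range(500):
    x_g = 1.0 - x_w - x_b                  # bare ground fraction
    albedo = x_g * A_G + x_w * A_W + x_b * A_B
    T_e = (S * L * (1.0 - albedo) / SIGMA) ** 0.25   # planetary temperature
    T_w = Q * (albedo - A_W) + T_e                   # local temperatures
    T_b = Q * (albedo - A_B) + T_e
    x_w += 0.05 * x_w * (x_g * growth(T_w) - GAMMA)  # Euler step for daisy cover
    x_b += 0.05 * x_b * (x_g * growth(T_b) - GAMMA)

print(f"white cover {x_w:.2f}, black cover {x_b:.2f}, planetary T {T_e:.1f} K")
```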

  12. Selection of the Maximum Spatial Cluster Size of the Spatial Scan Statistic by Using the Maximum Clustering Set-Proportion Statistic

    PubMed Central

    Ma, Yue; Yin, Fei; Zhang, Tao; Zhou, Xiaohua Andrew; Li, Xiaosong

    2016-01-01

    Spatial scan statistics are widely used in various fields. The performance of these statistics is influenced by parameters, such as maximum spatial cluster size, and can be improved by parameter selection using performance measures. Current performance measures are based on the presence of clusters and are thus inapplicable to data sets without known clusters. In this work, we propose a novel overall performance measure called maximum clustering set–proportion (MCS-P), which is based on the likelihood of the union of detected clusters and the applied dataset. MCS-P was compared with existing performance measures in a simulation study to select the maximum spatial cluster size. Results of other performance measures, such as sensitivity and misclassification, suggest that the spatial scan statistic achieves accurate results in most scenarios with the maximum spatial cluster sizes selected using MCS-P. Given that previously known clusters are not required in the proposed strategy, selection of the optimal maximum cluster size with MCS-P can improve the performance of the scan statistic in applications without identified clusters. PMID:26820646

  13. Selection of the Maximum Spatial Cluster Size of the Spatial Scan Statistic by Using the Maximum Clustering Set-Proportion Statistic.

    PubMed

    Ma, Yue; Yin, Fei; Zhang, Tao; Zhou, Xiaohua Andrew; Li, Xiaosong

    2016-01-01

    Spatial scan statistics are widely used in various fields. The performance of these statistics is influenced by parameters, such as maximum spatial cluster size, and can be improved by parameter selection using performance measures. Current performance measures are based on the presence of clusters and are thus inapplicable to data sets without known clusters. In this work, we propose a novel overall performance measure called maximum clustering set-proportion (MCS-P), which is based on the likelihood of the union of detected clusters and the applied dataset. MCS-P was compared with existing performance measures in a simulation study to select the maximum spatial cluster size. Results of other performance measures, such as sensitivity and misclassification, suggest that the spatial scan statistic achieves accurate results in most scenarios with the maximum spatial cluster sizes selected using MCS-P. Given that previously known clusters are not required in the proposed strategy, selection of the optimal maximum cluster size with MCS-P can improve the performance of the scan statistic in applications without identified clusters.

  14. Selection of the Maximum Spatial Cluster Size of the Spatial Scan Statistic by Using the Maximum Clustering Set-Proportion Statistic.

    PubMed

    Ma, Yue; Yin, Fei; Zhang, Tao; Zhou, Xiaohua Andrew; Li, Xiaosong

    2016-01-01

    Spatial scan statistics are widely used in various fields. The performance of these statistics is influenced by parameters, such as maximum spatial cluster size, and can be improved by parameter selection using performance measures. Current performance measures are based on the presence of clusters and are thus inapplicable to data sets without known clusters. In this work, we propose a novel overall performance measure called maximum clustering set-proportion (MCS-P), which is based on the likelihood of the union of detected clusters and the applied dataset. MCS-P was compared with existing performance measures in a simulation study to select the maximum spatial cluster size. Results of other performance measures, such as sensitivity and misclassification, suggest that the spatial scan statistic achieves accurate results in most scenarios with the maximum spatial cluster sizes selected using MCS-P. Given that previously known clusters are not required in the proposed strategy, selection of the optimal maximum cluster size with MCS-P can improve the performance of the scan statistic in applications without identified clusters. PMID:26820646

  15. Minimizing the probable maximum flood

    SciTech Connect

    Woodbury, M.S.; Pansic, N.; Eberlein, D.T.

    1994-06-01

    This article examines Wisconsin Electric Power Company's efforts to determine an economical way to comply with Federal Energy Regulatory Commission requirements at two hydroelectric developments on the Michigamme River. Their efforts included refinement of the area's probable maximum flood model based, in part, on a newly developed probable maximum precipitation estimate.

  16. Achieving yield gains in wheat.

    PubMed

    Reynolds, Matthew; Foulkes, John; Furbank, Robert; Griffiths, Simon; King, Julie; Murchie, Erik; Parry, Martin; Slafer, Gustavo

    2012-10-01

    Wheat provides 20% of the calories and protein consumed by humans. Recent genetic gains are <1% per annum (p.a.), insufficient to meet future demand. The Wheat Yield Consortium brings expertise in photosynthesis, crop adaptation and genetics to a common breeding platform. Theory suggests that the radiation use efficiency (RUE) of wheat could be increased by ~50%; strategies include modifying the specificity, catalytic rate and regulation of Rubisco, up-regulating Calvin cycle enzymes, introducing chloroplast CO(2) concentrating mechanisms, optimizing light and N distribution of canopies while minimizing photoinhibition, and increasing spike photosynthesis. Maximum yield expression will also require dynamic optimization of the source:sink balance so that dry matter partitioning to reproductive structures is not at the cost of the roots, stems and leaves needed to maintain physiological and structural integrity. Crop development should favour spike fertility to maximize harvest index, so phenology must be tailored to different photoperiods, and sensitivity to unpredictable weather must be modulated to reduce conservative responses that reduce harvest index. Strategic crossing of complementary physiological traits will be augmented with wide crossing, while genome-wide selection and high-throughput phenotyping and genotyping will increase the efficiency of progeny screening. To ensure that investment in breeding achieves agronomic impact, sustainable crop management must also be promoted through crop improvement networks.

  17. Evolving Sensitivity Balances Boolean Networks

    PubMed Central

    Luo, Jamie X.; Turner, Matthew S.

    2012-01-01

    We investigate the sensitivity of Boolean Networks (BNs) to mutations. We are interested in Boolean Networks as a model of Gene Regulatory Networks (GRNs). We adopt Ribeiro and Kauffman's Ergodic Set and use it to study the long-term dynamics of a BN. We define the sensitivity of a BN to be the mean change in its Ergodic Set structure under all possible loss-of-interaction mutations. In silico experiments were used to selectively evolve BNs for sensitivity to losing interactions. We find that maximum sensitivity was often achievable and resulted in the BNs becoming topologically balanced, i.e. they evolve towards network structures in which they have a similar number of inhibitory and excitatory interactions. In terms of the dynamics, the dominant sensitivity strategy that evolved was to build BNs with Ergodic Sets dominated by a single long limit cycle which is easily destabilised by mutations. We discuss the relevance of our findings in the context of Stem Cell Differentiation and propose a relationship between pluripotent stem cells and our evolved sensitive networks. PMID:22586459

  18. Finding maximum colorful subtrees in practice.

    PubMed

    Rauf, Imran; Rasche, Florian; Nicolas, François; Böcker, Sebastian

    2013-04-01

    In metabolomics and other fields dealing with small compounds, mass spectrometry is applied as a sensitive high-throughput technique. Recently, fragmentation trees have been proposed to automatically analyze the fragmentation mass spectra recorded by such instruments. Computationally, this leads to the problem of finding a maximum-weight subtree in an edge-weighted and vertex-colored graph, such that every color appears at most once in the solution. We introduce new heuristics and an exact algorithm for this Maximum Colorful Subtree problem and evaluate them against existing algorithms on real-world and artificial datasets. Our tree completion heuristic consistently scores better than other heuristics, while the integer programming-based algorithm produces optimal trees with modest running times. Our fast and accurate heuristic can help determine molecular formulas based on fragmentation trees. On the other hand, optimal trees from the integer linear program are useful if structure is relevant, for example for tree alignments.

  19. Arctic Sea Ice Maximum 2011

    NASA Video Gallery

    AMSR-E Arctic Sea Ice: September 2010 to March 2011: Scientists tracking the annual maximum extent of Arctic sea ice said that 2011 was among the lowest ice extents measured since satellites began ...

  20. Principles of maximum entropy and maximum caliber in statistical physics

    NASA Astrophysics Data System (ADS)

    Pressé, Steve; Ghosh, Kingshuk; Lee, Julian; Dill, Ken A.

    2013-07-01

    The variational principles called maximum entropy (MaxEnt) and maximum caliber (MaxCal) are reviewed. MaxEnt originated in the statistical physics of Boltzmann and Gibbs, as a theoretical tool for predicting the equilibrium states of thermal systems. Later, entropy maximization was also applied to matters of information, signal transmission, and image reconstruction. Recently, since the work of Shore and Johnson, MaxEnt has been regarded as a principle that is broader than either physics or information alone. MaxEnt is a procedure that ensures that inferences drawn from stochastic data satisfy basic self-consistency requirements. The different historical justifications for the entropy S = -∑_i p_i log p_i and its corresponding variational principles are reviewed. As an illustration of the broadening purview of maximum entropy principles, maximum caliber, which is path entropy maximization applied to the trajectories of dynamical systems, is also reviewed. Examples are given in which maximum caliber is used to interpret dynamical fluctuations in biology and on the nanoscale, in single-molecule and few-particle systems such as molecular motors, chemical reactions, biological feedback circuits, and diffusion in microfluidics devices.

  1. Convex accelerated maximum entropy reconstruction

    NASA Astrophysics Data System (ADS)

    Worley, Bradley

    2016-04-01

    Maximum entropy (MaxEnt) spectral reconstruction methods provide a powerful framework for spectral estimation of nonuniformly sampled datasets. Many methods exist within this framework, usually defined based on the magnitude of a Lagrange multiplier in the MaxEnt objective function. An algorithm is presented here that utilizes accelerated first-order convex optimization techniques to rapidly and reliably reconstruct nonuniformly sampled NMR datasets using the principle of maximum entropy. This algorithm - called CAMERA for Convex Accelerated Maximum Entropy Reconstruction Algorithm - is a new approach to spectral reconstruction that exhibits fast, tunable convergence in both constant-aim and constant-lambda modes. A high-performance, open source NMR data processing tool is described that implements CAMERA, and brief comparisons to existing reconstruction methods are made on several example spectra.
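
    The two ingredients named in the abstract, an entropy regulariser and accelerated first-order optimisation, can be combined in a toy reconstruction sketch; this is not the CAMERA algorithm, and the measurement model, entropy functional and step sizes are assumptions made for the example.

```python
import numpy as np

# Toy accelerated projected-gradient reconstruction with an entropy-type penalty.
rng = np.random.default_rng(0)

n, m = 128, 48                                  # spectrum length, number of (under)samples
x_true = np.zeros(n)
x_true[[20, 45, 90]] = [1.0, 0.6, 0.8]          # sparse nonnegative "spectrum"
A = rng.normal(size=(m, n)) / np.sqrt(m)        # generic undersampling operator
y = A @ x_true + 0.01 * rng.normal(size=m)

mu = 0.01                                       # entropy regularisation weight
L = np.linalg.norm(A, 2) ** 2 + mu * 1e3        # crude Lipschitz bound after clipping at 1e-3
step = 1.0 / L

def grad(x):
    # gradient of 0.5*||Ax - y||^2 + mu * sum(x log x - x)
    return A.T @ (A @ x - y) + mu * np.log(x)

x = np.full(n, 0.1)
z, t = x.copy(), 1.0
for _ in range(2000):                           # Nesterov-style accelerated iteration
    x_new = np.clip(z - step * grad(z), 1e-3, None)
    t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    z = np.clip(x_new + ((t - 1.0) / t_new) * (x_new - x), 1e-3, None)
    x, t = x_new, t_new

print("data misfit:", np.linalg.norm(A @ x - y))
print("entropy term:", -np.sum(x * np.log(x) - x))
```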

  2. The Maximum Density of Water.

    ERIC Educational Resources Information Center

    Greenslade, Thomas B., Jr.

    1985-01-01

    Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)

  3. Abolishing the maximum tension principle

    NASA Astrophysics Data System (ADS)

    Dąbrowski, Mariusz P.; Gohar, H.

    2015-09-01

    We find a series of example theories for which the relativistic limit of maximum tension, F_max = c^4/(4G), represented by the entropic force, can be abolished. Among them are the varying-constants theories, some generalized entropy models applied to both cosmological and black hole horizons, and some generalized uncertainty principle models.
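
    Numerically, the conjectured maximum force is easy to evaluate:

```python
from scipy.constants import c, G

# Numerical value of the conjectured maximum force (maximum tension) in general
# relativity, F_max = c^4 / (4G), discussed in the abstract above.
f_max = c ** 4 / (4 * G)
print(f"F_max = {f_max:.3e} N")   # on the order of 3e43 N
```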

  4. Maximum entropy beam diagnostic tomography

    SciTech Connect

    Mottershead, C.T.

    1985-01-01

    This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore. 11 refs., 4 figs.

  5. Maximum cooling and maximum efficiency of thermoacoustic refrigerators

    NASA Astrophysics Data System (ADS)

    Tartibu, L. K.

    2016-01-01

    This work provides valid experimental evidence on the difference between design for maximum cooling and maximum efficiency for thermoacoustic refrigerators. In addition, the influence of the geometry of the honeycomb ceramic stack on the performance of thermoacoustic refrigerators is presented as it affects the cooling power. Sixteen cordierite honeycomb ceramic stacks with square cross sections having four different lengths of 26, 48, 70 and 100 mm are considered. Measurements are taken at six different locations of the stack hot ends from the pressure antinode, namely 100, 200, 300, 400, 500 and 600 mm respectively. Measurement of temperature difference across the stack ends at steady state for different stack geometries are used to compute the cooling load and the coefficient of performance. The results obtained with atmospheric air showed that there is a distinct optimum depending on the design goal.

  6. The maximum rate of mammal evolution.

    PubMed

    Evans, Alistair R; Jones, David; Boyer, Alison G; Brown, James H; Costa, Daniel P; Ernest, S K Morgan; Fitzgerald, Erich M G; Fortelius, Mikael; Gittleman, John L; Hamilton, Marcus J; Harding, Larisa E; Lintulaakso, Kari; Lyons, S Kathleen; Okie, Jordan G; Saarinen, Juha J; Sibly, Richard M; Smith, Felisa A; Stephens, Patrick R; Theodor, Jessica M; Uhen, Mark D

    2012-03-13

    How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, and 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.

  7. The maximum rate of mammal evolution

    NASA Astrophysics Data System (ADS)

    Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.

    2012-03-01

    How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, and 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.

  9. The maximum rate of mammal evolution

    PubMed Central

    Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.

    2012-01-01

    How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, and 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous–Paleogene (K–Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes. PMID:22308461

  10. Maximum life spiral bevel reduction design

    NASA Technical Reports Server (NTRS)

    Savage, M.; Prasanna, M. G.; Coe, H. H.

    1992-01-01

    Optimization is applied to the design of a spiral bevel gear reduction for maximum life at a given size. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial values. Gear tooth bending strength and minimum contact ratio under load are included in the active constraints. The optimal design of the spiral bevel gear reduction includes the selection of bearing and shaft proportions in addition to gear mesh parameters. System life is maximized subject to a fixed back-cone distance of the spiral bevel gear set for a specified speed reduction, shaft angle, input torque, and power. Design examples show the influence of the bearing lives on the gear parameters in the optimal configurations. For a fixed back-cone distance, optimal designs with larger shaft angles have larger service lives.

  11. Middle Holocene thermal maximum in eastern Beringia

    NASA Astrophysics Data System (ADS)

    Kaufman, D. S.; Bartlein, P. J.

    2015-12-01

    A new systematic review of diverse Holocene paleoenvironmental records (Kaufman et al., Quat. Sci. Rev., in revision) has clarified the primary multi-centennial- to millennial-scale trends across eastern Beringia (Alaska, westernmost Canada and adjacent seas). Composite time series from midges, pollen, and biogeochemical indicators are compared with new summaries of mountain-glacier and lake-level fluctuations, terrestrial water-isotope records, sea-ice and sea-surface-temperature analyses, and peatland and thaw-lake initiation frequencies. The paleo observations are also compared with recently published simulations (Bartlein et al., Clim. Past Discuss., 2015) that used a regional climate model to simulate the effects of global and regional-scale forcings at 11 and 6 ka. During the early Holocene (11.5-8 ka), rather than a prominent thermal maximum as suggested previously, the newly compiled paleo evidence (mostly sensitive to summer conditions) indicates that temperatures were highly variable, at times both higher and lower than present, although the overall lowest average temperatures occurred during the earliest Holocene. During the middle Holocene (8-4 ka), glaciers retreated as the regional average temperature increased to a maximum between 7 and 5 ka, as reflected in most proxy types. The paleo evidence for low and variable temperatures during the early Holocene contrasts with more uniformly high temperatures during the middle Holocene and agrees with the climate simulations, which show that temperature in eastern Beringia was on average lower at 11 ka and higher at 6 ka than at present (pre-industrial). Low temperatures during the early Holocene can be attributed in part to the summer chilling caused by flooding the continental shelves, whereas the mid-Holocene thermal maximum was likely driven by the loss of the Laurentide ice sheet, rise in greenhouse gases, higher-than-present summer insolation, and expansion of forest over tundra.

  12. Highly sensitive magnetic field sensor based on microfiber coupler with magnetic fluid

    SciTech Connect

    Luo, Longfeng; Pu, Shengli; Tang, Jiali; Zeng, Xianglong; Lahoubi, Mahieddine

    2015-05-11

    A magnetic field sensor using a microfiber coupler (MFC) surrounded by magnetic fluid (MF) is proposed and experimentally demonstrated. As the MFC is strongly sensitive to the surrounding refractive index (RI) and the RI of the MF is sensitive to magnetic field, the proposed structure functions as a magnetic field sensor. Interrogation of magnetic field strength is achieved by measuring the dip-wavelength shift and the transmission-loss change of the transmission spectrum. The experimental results show that the sensitivity of the sensor is wavelength-dependent. A maximum sensitivity of 191.8 pm/Oe is achieved at a wavelength of around 1537 nm in this work. In addition, a sensitivity of −0.037 dB/Oe is achieved by monitoring the variation of the fringe visibility. These results suggest potential applications of the proposed structure in tunable all-in-fiber photonic devices such as magneto-optical modulators, filters, and sensors.

  13. Physically constrained maximum likelihood mode filtering.

    PubMed

    Papp, Joseph C; Preisig, James C; Morozov, Andrey K

    2010-04-01

    Mode filtering is most commonly implemented using the sampled mode shapes or pseudoinverse algorithms. Buck et al. [J. Acoust. Soc. Am. 103, 1813-1824 (1998)] placed these techniques in the context of a broader maximum a posteriori (MAP) framework. However, the MAP algorithm requires that the signal and noise statistics be known a priori. Adaptive array processing algorithms are candidates for improving performance without the need for a priori signal and noise statistics. A variant of the physically constrained, maximum likelihood (PCML) algorithm [A. L. Kraay and A. B. Baggeroer, IEEE Trans. Signal Process. 55, 4048-4063 (2007)] is developed for mode filtering that achieves the same performance as the MAP mode filter yet does not need a priori knowledge of the signal and noise statistics. The central innovation of this adaptive mode filter is that the received signal's sample covariance matrix, as estimated by the algorithm, is constrained to be that which can be physically realized given a modal propagation model and an appropriate noise model. Shallow water simulation results are presented showing the benefit of using the PCML method in adaptive mode filtering.
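
    For reference, the two conventional mode filters mentioned above can be sketched in a few lines of Python; the mode-shape matrix, array geometry and noise level below are illustrative assumptions, not the paper's simulation setup.

        import numpy as np

        # Minimal sketch of the sampled-mode-shape and pseudoinverse mode filters
        # (not the PCML algorithm itself).  E holds hypothetical mode shapes sampled
        # at the array depths (n_sensors x n_modes); p is the pressure vector at one frequency.
        rng = np.random.default_rng(1)
        n_sensors, n_modes = 16, 4
        depths = np.linspace(5.0, 80.0, n_sensors)                  # m, illustrative vertical array
        E = np.stack([np.sin((m + 1) * np.pi * depths / 100.0)      # toy "waveguide" mode shapes
                      for m in range(n_modes)], axis=1)

        a_true = np.array([1.0, 0.5, 0.25, 0.1]) * np.exp(1j * rng.uniform(0, 2 * np.pi, n_modes))
        p = E @ a_true + 0.05 * (rng.standard_normal(n_sensors)
                                 + 1j * rng.standard_normal(n_sensors))

        a_sampled = E.T @ p / np.sum(E**2, axis=0)   # sampled-mode-shape filter (mode-by-mode projection)
        a_pinv = np.linalg.pinv(E) @ p               # pseudoinverse filter (joint least squares)

        print("true      :", np.round(np.abs(a_true), 3))
        print("sampled   :", np.round(np.abs(a_sampled), 3))
        print("pinv      :", np.round(np.abs(a_pinv), 3))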

  14. Nitrogen-sensitive thermionic detection in microcolumn liquid chromatography.

    PubMed

    Gluckman, J C; Novotny, M

    1985-10-01

    The dual-flame thermionic detector for microcolumn liquid chromatography has been improved and optimized for nitrogen sensitivity. The total column effluent is concentrically nebulized and aspirated directly into an air-hydrogen diffusion flame, and detection limits of 1.4 x 10^-11 g nitrogen/sec at the maximum of a Gaussian peak are achieved. Detection linearity spans three orders of magnitude. An example of the analysis of underivatized barbiturate standards is provided.

  15. Discrimination networks for maximum selection.

    PubMed

    Jain, Brijnesh J; Wysotzki, Fritz

    2004-01-01

    We construct a novel discrimination network using differentiating units for maximum selection. In contrast to traditional competitive architectures like MAXNET, the discrimination network not only signals the winning unit but also provides information about its evidence. In particular, we show that a discrimination network converges to a stable state within finite time and derive three characteristics: intensity normalization (P1), contrast enhancement (P2), and evidential response (P3). In order to improve the accuracy of the evidential response we incorporate distributed redundancy into the network. This leads to a system which is not only robust against failure of single units and noisy data, but also enables us to sharpen the focus on the given problem in terms of a more accurate evidential response. The proposed discrimination network can be regarded as a connectionist model for competitive learning by evidence.
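
    For comparison, the classical MAXNET winner-take-all iteration referred to above can be sketched as follows (a minimal illustration, not the proposed discrimination network; the inhibition weight and activations are assumed values).

        import numpy as np

        # Classical MAXNET: each unit inhibits all others until only the unit with the
        # largest initial activation remains positive.  Requires 0 < eps < 1/(n-1).
        def maxnet(activations, eps=None, max_iter=1000):
            x = np.asarray(activations, dtype=float).copy()
            n = x.size
            eps = 0.9 / (n - 1) if eps is None else eps
            for _ in range(max_iter):
                x = np.maximum(0.0, x - eps * (x.sum() - x))   # subtract lateral inhibition
                if np.count_nonzero(x) <= 1:
                    break
            return x

        print(maxnet([0.3, 0.7, 0.65, 0.1]))   # only the unit that started at 0.7 stays positive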

  16. Graded Achievement, Tested Achievement, and Validity

    ERIC Educational Resources Information Center

    Brookhart, Susan M.

    2015-01-01

    Twenty-eight studies of grades, over a century, were reviewed using the argument-based approach to validity suggested by Kane as a theoretical framework. The review draws conclusions about the meaning of graded achievement, its relation to tested achievement, and changes in the construct of graded achievement over time. "Graded…

  17. Sensitivity Test Analysis

    1992-02-20

    SENSIT,MUSIG,COMSEN is a set of three related programs for sensitivity test analysis. SENSIT conducts sensitivity tests. These tests are also known as threshold tests, LD50 tests, gap tests, drop weight tests, etc. SENSIT interactively instructs the experimenter on the proper level at which to stress the next specimen, based on the results of previous responses. MUSIG analyzes the results of a sensitivity test to determine the mean and standard deviation of the underlying population by computing maximum likelihood estimates of these parameters. MUSIG also computes likelihood ratio joint confidence regions and individual confidence intervals. COMSEN compares the results of two sensitivity tests to see if the underlying populations are significantly different. COMSEN provides an unbiased method of distinguishing between statistical variation of the estimates of the parameters of the population and true population difference.
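
    A minimal sketch of the kind of maximum likelihood fit MUSIG performs, assuming a probit (normal threshold) response model and made-up go/no-go data; it is not the SENSIT/MUSIG code itself.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        # Fit the mean and standard deviation of a latent normal threshold distribution
        # to go/no-go sensitivity-test data (levels and responses are illustrative values).
        levels    = np.array([1.0, 1.2, 1.4, 1.4, 1.6, 1.6, 1.8, 2.0, 2.0, 2.2])
        responses = np.array([0,   0,   0,   1,   0,   1,   1,   1,   1,   1  ])  # 1 = responded

        def neg_log_lik(theta):
            mu, log_sigma = theta
            sigma = np.exp(log_sigma)                 # keep sigma positive
            p = norm.cdf((levels - mu) / sigma)       # probability of response at each level
            p = np.clip(p, 1e-12, 1 - 1e-12)
            return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

        fit = minimize(neg_log_lik, x0=np.array([levels.mean(), np.log(levels.std())]),
                       method="Nelder-Mead")
        mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
        print(f"mu = {mu_hat:.3f}, sigma = {sigma_hat:.3f}")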

  18. Multiple Early Eocene Thermal Maximums

    NASA Astrophysics Data System (ADS)

    Roehl, U.; Zachos, J. C.; Thomas, E.; Kelly, D. C.; Donner, B.; Westerhold, T.

    2004-12-01

    Periodic dissolution horizons signifying abrupt shoaling of the lysocline and CCD are characteristic features of deep-sea sections and often attributed to Milankovitch forcing via their diagnostic frequencies. Prominent dissolution horizons also correspond to abrupt climate events, such as the Paleocene-Eocene thermal maximum (PETM), as a result of input of significant CH4 - CO2 into the ocean-atmosphere system. The question arises whether other significant dissolution horizons identified in sediments of late Paleocene and early Eocene age, similar to the recently identified ELMO (Lourens et al., 2004), were formed as a result of greenhouse gas input, or whether they were related to cumulative effects of periodic changes in ocean chemistry and circulation. Here we report the discovery of a 3rd thermal maximum in early Eocene (about 52 Ma) sediments recovered from the South Atlantic during ODP Leg 208. The prominent clay layer was named the "X" event and was identified within planktonic foraminifer zone P7 and calcareous nannofossil zone CP10 at four Walvis Ridge Transect sites with a water depth range of 2000 m (Sites 1262 to 1267). Benthic assemblages are composed of small individuals and have low diversity and high dominance. Dominant taxa are Nuttallides truempyi and various abyssaminids, resembling the post-PETM extinction assemblages. High-resolution bulk carbonate δ13C measurements at one of the shallower sites, Site 1265, reveal a rapid drop of about 0.6 per mill in δ13C and δ18O followed by an exponential recovery to pre-excursion δ13C values, as is well known for the PETM and also observed for the ELMO. The planktonic foraminiferal δ13C records of Morozovella subbotina and Acarinina soldadoensis at the deepest Site 1262 show a 0.8 to 0.9 per mill drop, whereas the δ13C drop of the benthic foraminifer Nuttallides truempyi is slightly larger (about 1 per mill). We are evaluating mechanisms for the widespread change in deep-water chemistry, its

  19. The maximum drag reduction asymptote

    NASA Astrophysics Data System (ADS)

    Choueiri, George H.; Hof, Bjorn

    2015-11-01

    Addition of long-chain polymers is one of the most efficient ways to reduce the drag of turbulent flows. Already very low concentrations of polymers can lead to a substantial drag reduction, and upon further increase of the concentration the drag decreases until it reaches an empirically found limit, the so-called maximum drag reduction (MDR) asymptote, which is independent of the type of polymer used. Here we carry out a detailed experimental study of the approach to this asymptote for pipe flow. Particular attention is paid to the recently observed state of elasto-inertial turbulence (EIT), which has been reported to occur in polymer solutions at sufficiently high shear. Our results show that upon the approach to MDR, Newtonian turbulence becomes marginalized (hibernation) and eventually completely disappears and is replaced by EIT. In particular, spectra of high Reynolds number MDR flows are compared to flows at high shear rates in small diameter tubes where EIT is found at Re < 100. The research leading to these results has received funding from the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme (FP7/2007-2013) under REA grant agreement n° [291734].

  20. Objects of Maximum Electromagnetic Chirality

    NASA Astrophysics Data System (ADS)

    Fernandez-Corbaton, Ivan; Fruhnert, Martin; Rockstuhl, Carsten

    2016-07-01

    We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. Reciprocal objects attain the upper bound if and only if they are transparent for all the fields of one polarization handedness (helicity). Additionally, electromagnetic duality symmetry, i.e., helicity preservation upon interaction, turns out to be a necessary condition for reciprocal objects to attain the upper bound. We use these results to provide requirements for the design of such extremal objects. The requirements can be formulated as constraints on the polarizability tensors for dipolar objects or on the material constitutive relations for continuous media. We also outline two applications for objects of maximum electromagnetic chirality: a twofold resonantly enhanced and background-free circular dichroism measurement setup, and angle-independent helicity filtering glasses. Finally, we use the theoretically obtained requirements to guide the design of a specific structure, which we then analyze numerically and discuss its performance with respect to maximal electromagnetic chirality.

  1. The Principle of Maximum Conformality

    SciTech Connect

    Brodsky, Stanley J; Giustino, Di; /SLAC

    2011-04-05

    A key problem in making precise perturbative QCD predictions is the uncertainty in determining the renormalization scale of the running coupling α_s(μ²). It is common practice to guess a physical scale μ = Q which is of order of a typical momentum transfer Q in the process, and then vary the scale over a range from Q/2 to 2Q. This procedure is clearly problematic since the resulting fixed-order pQCD prediction will depend on the renormalization scheme, and it can even predict negative QCD cross sections at next-to-leading order. Other heuristic methods to set the renormalization scale, such as the 'principle of minimal sensitivity', give unphysical results for jet physics, sum physics into the running coupling not associated with renormalization, and violate the transitivity property of the renormalization group. Such scale-setting methods also give incorrect results when applied to Abelian QED. Note that the factorization scale in QCD is introduced to match nonperturbative and perturbative aspects of the parton distributions in hadrons; it is present even in conformal theory and thus is a completely separate issue from renormalization scale setting. The PMC provides a consistent method for determining the renormalization scale in pQCD. The PMC scale-fixed prediction is independent of the choice of renormalization scheme, a key requirement of renormalization group invariance. The results avoid renormalon resummation and agree with QED scale-setting in the Abelian limit. The PMC global scale can be derived efficiently at NLO from basic properties of the pQCD cross section. The elimination of the renormalization scheme ambiguity using the PMC will not only increase the precision of QCD tests, but will also increase the sensitivity of colliders to new physics beyond the Standard Model.

  2. 20 CFR 228.14 - Family maximum.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Family maximum. 228.14 Section 228.14... SURVIVOR ANNUITIES The Tier I Annuity Component § 228.14 Family maximum. (a) Family maximum defined. Under... person's earnings record is limited. This limited amount is called the family maximum. The family...

  3. 20 CFR 228.14 - Family maximum.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Family maximum. 228.14 Section 228.14... SURVIVOR ANNUITIES The Tier I Annuity Component § 228.14 Family maximum. (a) Family maximum defined. Under... person's earnings record is limited. This limited amount is called the family maximum. The family...

  4. 20 CFR 228.14 - Family maximum.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Family maximum. 228.14 Section 228.14... SURVIVOR ANNUITIES The Tier I Annuity Component § 228.14 Family maximum. (a) Family maximum defined. Under... person's earnings record is limited. This limited amount is called the family maximum. The family...

  5. 20 CFR 228.14 - Family maximum.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Family maximum. 228.14 Section 228.14... SURVIVOR ANNUITIES The Tier I Annuity Component § 228.14 Family maximum. (a) Family maximum defined. Under... person's earnings record is limited. This limited amount is called the family maximum. The family...

  6. 25 CFR 273.4 - Policy of maximum Indian participation.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 25 Indians 1 2013-04-01 2013-04-01 false Policy of maximum Indian participation. 273.4 Section 273.4 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR INDIAN SELF-DETERMINATION AND... achievement and satisfaction which education can and should provide. Consistent with this concept,...

  7. 25 CFR 273.4 - Policy of maximum Indian participation.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 25 Indians 1 2012-04-01 2011-04-01 true Policy of maximum Indian participation. 273.4 Section 273.4 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR INDIAN SELF-DETERMINATION AND... achievement and satisfaction which education can and should provide. Consistent with this concept,...

  8. 25 CFR 273.4 - Policy of maximum Indian participation.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 25 Indians 1 2014-04-01 2014-04-01 false Policy of maximum Indian participation. 273.4 Section 273.4 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR INDIAN SELF-DETERMINATION AND... achievement and satisfaction which education can and should provide. Consistent with this concept,...

  9. 76 FR 1504 - Pipeline Safety: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-10

    ...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating Pressure (MOP), and to utilize these risk analyses in the identification of appropriate assessment...

  10. Maximum entropy principle for transportation

    SciTech Connect

    Bilich, F.; Da Silva, R.

    2008-11-06

    In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
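
    For orientation, the standard constrained entropy-maximizing trip-distribution model that the dependence formulation is compared against can be sketched as follows; the trip totals, cost matrix and cost-sensitivity parameter are invented illustrative values.

        import numpy as np

        # Doubly constrained entropy-maximizing gravity model solved by Furness /
        # iterative proportional fitting (the "standard formulation with constraints").
        O = np.array([400., 300., 300.])           # trips produced at each origin
        D = np.array([250., 450., 300.])           # trips attracted to each destination
        c = np.array([[3., 7., 5.],
                      [6., 2., 8.],
                      [4., 6., 3.]])               # travel cost (e.g. minutes)
        beta = 0.3                                 # cost-sensitivity parameter

        T = np.exp(-beta * c)                      # seed matrix from the entropy solution
        for _ in range(200):
            T *= (O / T.sum(axis=1))[:, None]      # match row totals (origins)
            T *= (D / T.sum(axis=0))[None, :]      # match column totals (destinations)

        print(np.round(T, 1), "total trips:", round(T.sum(), 1))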

  11. Spacecraft Maximum Allowable Concentrations for Airborne Contaminants

    NASA Technical Reports Server (NTRS)

    James, John T.

    2008-01-01

    The enclosed table lists official spacecraft maximum allowable concentrations (SMACs), which are guideline values set by the NASA/JSC Toxicology Group in cooperation with the National Research Council Committee on Toxicology (NRCCOT). These values should not be used for situations other than human space flight without careful consideration of the criteria used to set each value. The SMACs take into account a number of unique factors such as the effect of space-flight stress on human physiology, the uniform good health of the astronauts, and the absence of pregnant or very young individuals. Documentation of the values is given in a 5-volume series of books entitled "Spacecraft Maximum Allowable Concentrations for Selected Airborne Contaminants" published by the National Academy Press, Washington, D.C. These books can be viewed electronically at http://books.nap.edu/openbook.php?record_id=9786&page=3. Short-term (1 and 24 hour) SMACs are set to manage accidental releases aboard a spacecraft and permit risk of minor, reversible effects such as mild mucosal irritation. In contrast, the long-term SMACs are set to fully protect healthy crewmembers from adverse effects resulting from continuous exposure to specific air pollutants for up to 1000 days. Crewmembers with allergies or unusual sensitivity to trace pollutants may not be afforded complete protection, even when long-term SMACs are not exceeded. Crewmember exposures involve a mixture of contaminants, each at a specific concentration (C_n). These contaminants could interact to elicit symptoms of toxicity even though individual contaminants do not exceed their respective SMACs. The air quality is considered acceptable when the toxicity index (T_grp) for each toxicological group of compounds is less than 1, where T_grp is calculated as follows: T_grp = C_1/SMAC_1 + C_2/SMAC_2 + ... + C_n/SMAC_n.
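
    A minimal sketch of applying the toxicity-index rule quoted above, with hypothetical compounds, concentrations and SMAC values.

        # Toxicity index T_grp = sum of concentration/SMAC ratios within one toxicological group.
        # Compound names, concentrations and SMACs below are made-up illustrative values (mg/m^3).
        concentrations = {"compound_A": 0.4, "compound_B": 1.2, "compound_C": 0.05}
        smac           = {"compound_A": 2.0, "compound_B": 5.0, "compound_C": 0.5}

        t_grp = sum(concentrations[k] / smac[k] for k in concentrations)
        print(f"T_grp = {t_grp:.2f} ->", "acceptable" if t_grp < 1 else "exceeds group limit")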

  12. Mentoring Emotionally Sensitive Individuals.

    ERIC Educational Resources Information Center

    Shaughnessy, Michael F.; Self, Elizabeth

    Mentoring individuals who are gifted, talented, and creative, but somewhat emotionally sensitive is a challenging and provocative arena. Several reasons individuals experience heightened sensitivity include: lack of nurturing, abuse, alcoholism in the family, low self-esteem, unrealistic parental expectations, and parental pressure to achieve.…

  13. Tuned cavity magnetometer sensitivity.

    SciTech Connect

    Okandan, Murat; Schwindt, Peter

    2009-09-01

    We have developed a high sensitivity (achieve similar sensitivity levels.

  14. Maximum Parsimony on Phylogenetic networks

    PubMed Central

    2012-01-01

    Background Phylogenetic networks are generalizations of phylogenetic trees that are used to model evolutionary events in various contexts. Several different methods and criteria have been introduced for reconstructing phylogenetic trees. Maximum Parsimony is a character-based approach that infers a phylogenetic tree by minimizing the total number of evolutionary steps required to explain a given set of data assigned on the leaves. Exact solutions for optimizing parsimony scores on phylogenetic trees have been introduced in the past. Results In this paper, we define the parsimony score on networks as the sum of the substitution costs along all the edges of the network, and show that certain well-known algorithms that calculate the optimum parsimony score on trees, such as the Sankoff and Fitch algorithms, extend naturally to networks, barring conflicting assignments at the reticulate vertices. We provide heuristics for finding the optimum parsimony scores on networks. Our algorithms can be applied for any cost matrix that may contain unequal substitution costs of transforming between different characters along different edges of the network. We analyzed this for experimental data on 10 leaves or fewer with at most 2 reticulations and found that, for almost all networks, the bounds returned by the heuristics matched the exhaustively determined optimum parsimony scores. Conclusion The parsimony score we define here does not directly reflect the cost of the best tree in the network that displays the evolution of the character. However, when searching for the most parsimonious network that describes a collection of characters, it becomes necessary to add additional cost considerations to prefer simpler structures, such as trees over networks. The parsimony score on a network that we describe here takes into account the substitution costs along the additional edges incident on each reticulate vertex, in addition to the substitution costs along the other edges which are
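
    As background, the Fitch small-parsimony algorithm on a rooted binary tree (the tree case that the paper extends to networks) can be sketched as follows; the tree and character states are invented for illustration.

        # Fitch small-parsimony on a rooted binary tree: leaves are taxon names (strings),
        # internal nodes are pairs of children.  Returns the candidate state set at the
        # root and the minimum number of substitutions for this character.
        def fitch(node, leaf_states):
            if isinstance(node, str):                       # leaf
                return {leaf_states[node]}, 0
            left, right = node                              # internal node
            s_l, cost_l = fitch(left, leaf_states)
            s_r, cost_r = fitch(right, leaf_states)
            inter = s_l & s_r
            if inter:                                       # agreement: no extra substitution
                return inter, cost_l + cost_r
            return s_l | s_r, cost_l + cost_r + 1           # disagreement: one substitution

        tree = (("human", "chimp"), ("mouse", ("rat", "dog")))
        states = {"human": "A", "chimp": "A", "mouse": "G", "rat": "G", "dog": "A"}
        print(fitch(tree, states))                          # parsimony score 2 for this character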

  15. Raman-Enhanced Phase-Sensitive Fibre Optical Parametric Amplifier.

    PubMed

    Fu, Xuelei; Guo, Xiaojie; Shu, Chester

    2016-01-01

    Phase-sensitive amplification is of great research interest owing to its potential in noiseless amplification. One key feature in a phase-sensitive amplifier is the gain extinction ratio defined as the ratio of the maximum to the minimum gains. It quantifies the capability of the amplifier in performing low-noise amplification for high phase-sensitive gain. Considering a phase-sensitive fibre optical parametric amplifier for linear amplification, the gain extinction ratio increases with the phase-insensitive parametric gain achieved from the same pump. In this work, we use backward Raman amplification to increase the phase-insensitive parametric gain, which in turn improves the phase-sensitive operation. Using a 955 mW Raman pump, the gain extinction ratio is increased by 9.2 dB. The improvement in the maximum phase-sensitive gain is 18.7 dB. This scheme can significantly boost the performance of phase-sensitive amplification in a spectral range where the parametric pump is not sufficiently strong but broadband Raman amplification is available.
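
    To illustrate why the gain extinction ratio grows with the phase-insensitive gain, the sketch below evaluates the textbook relations for an ideal degenerate phase-sensitive amplifier, G_max = (sqrt(G) + sqrt(G-1))^2 and G_min = (sqrt(G) - sqrt(G-1))^2; this idealization ignores the Raman and loss effects measured in the paper.

        import numpy as np

        # Gain extinction ratio of an ideal degenerate PSA as a function of the
        # phase-insensitive parametric gain G obtained from the same pump.
        def extinction_ratio_db(G_pi):
            g_max = (np.sqrt(G_pi) + np.sqrt(G_pi - 1.0)) ** 2
            g_min = (np.sqrt(G_pi) - np.sqrt(G_pi - 1.0)) ** 2
            return 10 * np.log10(g_max / g_min)

        for G_db in (5.0, 10.0, 15.0):                      # phase-insensitive gain in dB
            G = 10 ** (G_db / 10)
            print(f"G_PI = {G_db:4.1f} dB -> extinction ratio = {extinction_ratio_db(G):5.1f} dB")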

  16. Raman-Enhanced Phase-Sensitive Fibre Optical Parametric Amplifier

    PubMed Central

    Fu, Xuelei; Guo, Xiaojie; Shu, Chester

    2016-01-01

    Phase-sensitive amplification is of great research interest owing to its potential in noiseless amplification. One key feature in a phase-sensitive amplifier is the gain extinction ratio defined as the ratio of the maximum to the minimum gains. It quantifies the capability of the amplifier in performing low-noise amplification for high phase-sensitive gain. Considering a phase-sensitive fibre optical parametric amplifier for linear amplification, the gain extinction ratio increases with the phase-insensitive parametric gain achieved from the same pump. In this work, we use backward Raman amplification to increase the phase-insensitive parametric gain, which in turn improves the phase-sensitive operation. Using a 955 mW Raman pump, the gain extinction ratio is increased by 9.2 dB. The improvement in the maximum phase-sensitive gain is 18.7 dB. This scheme can significantly boost the performance of phase-sensitive amplification in a spectral range where the parametric pump is not sufficiently strong but broadband Raman amplification is available. PMID:26830136

  17. Hydraulic Limits on Maximum Plant Transpiration

    NASA Astrophysics Data System (ADS)

    Manzoni, S.; Vico, G.; Katul, G. G.; Palmroth, S.; Jackson, R. B.; Porporato, A. M.

    2011-12-01

    Photosynthesis occurs at the expense of water losses through transpiration. As a consequence of this basic carbon-water interaction at the leaf level, plant growth and ecosystem carbon exchanges are tightly coupled to transpiration. In this contribution, the hydraulic constraints that limit transpiration rates under well-watered conditions are examined across plant functional types and climates. The potential water flow through plants is proportional to both xylem hydraulic conductivity (which depends on plant carbon economy) and the difference in water potential between the soil and the atmosphere (the driving force that pulls water from the soil). Differently from previous works, we study how this potential flux changes with the amplitude of the driving force (i.e., we focus on xylem properties and not on stomatal regulation). Xylem hydraulic conductivity decreases as the driving force increases due to cavitation of the tissues. As a result of this negative feedback, more negative leaf (and xylem) water potentials would provide a stronger driving force for water transport, while at the same time limiting xylem hydraulic conductivity due to cavitation. Here, the leaf water potential value that allows an optimum balance between driving force and xylem conductivity is quantified, thus defining the maximum transpiration rate that can be sustained by the soil-to-leaf hydraulic system. To apply the proposed framework at the global scale, a novel database of xylem conductivity and cavitation vulnerability across plant types and biomes is developed. Conductivity and water potential at 50% cavitation are shown to be complementary (in particular between angiosperms and conifers), suggesting a tradeoff between transport efficiency and hydraulic safety. Plants from warmer and drier biomes tend to achieve larger maximum transpiration than plants growing in environments with lower atmospheric water demand. The predicted maximum transpiration and the corresponding leaf water
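
    A minimal sketch of the optimum described above, assuming a single-segment supply function E(psi_L) = K(psi_L) * (psi_S - psi_L) with a Weibull vulnerability curve; the parameter values are illustrative, not taken from the study.

        import numpy as np

        # Maximum sustainable transpiration: a more negative leaf water potential increases
        # the driving force but reduces conductivity through cavitation (Weibull curve).
        K_max   = 5.0      # maximum soil-to-leaf hydraulic conductance (mmol m-2 s-1 MPa-1)
        psi_S   = -0.3     # soil water potential (MPa), well-watered
        psi_50  = -2.0     # water potential at 50% loss of conductivity (MPa)
        c_shape = 3.0      # Weibull shape parameter

        def conductance(psi):                                  # vulnerability curve K(psi)
            b = -psi_50 / np.log(2.0) ** (1.0 / c_shape)
            return K_max * np.exp(-(-psi / b) ** c_shape)

        psi_L = np.linspace(psi_S, -6.0, 2000)                 # candidate leaf water potentials
        E = conductance(psi_L) * (psi_S - psi_L)               # transpiration supply
        i = np.argmax(E)
        print(f"E_max = {E[i]:.2f} at psi_L = {psi_L[i]:.2f} MPa")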

  18. Maximum life spiral bevel reduction design

    NASA Technical Reports Server (NTRS)

    Savage, M.; Prasanna, M. G.; Coe, H. H.

    1992-01-01

    Optimization is applied to the design of a spiral bevel gear reduction for maximum life at a given size. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial values. Gear tooth bending strength and minimum contact ratio under load are included in the active constraints. The optimal design of the spiral bevel gear reduction includes the selection of bearing and shaft proportions in addition to gear mesh parameters. System life is maximized subject to a fixed back-cone distance of the spiral bevel gear set for a specified speed ratio, shaft angle, input torque, and power. Significant parameters in the design are: the spiral angle, the pressure angle, the numbers of teeth on the pinion and gear, and the location and size of the four support bearings. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for gradient optimization. After finding the continuous optimum, a designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearing lives on the gear parameters in the optimal configurations. For a fixed back-cone distance, optimal designs with larger shaft angles have larger service lives.

  19. 14 CFR 1261.102 - Maximum amount.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 5 2011-01-01 2010-01-01 true Maximum amount. 1261.102 Section 1261.102...) Employees' Personal Property Claims § 1261.102 Maximum amount. From October 1, 1982, to October 30, 1988, the maximum amount that may be paid under the Military Personnel and Civilian Employees' Claim Act...

  20. 14 CFR 1261.102 - Maximum amount.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Maximum amount. 1261.102 Section 1261.102...) Employees' Personal Property Claims § 1261.102 Maximum amount. From October 1, 1982, to October 30, 1988, the maximum amount that may be paid under the Military Personnel and Civilian Employees' Claim Act...

  1. 49 CFR 107.329 - Maximum penalties.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... maximum civil penalty is $110,000 if the violation results in death, serious illness or severe injury to... 49 Transportation 2 2010-10-01 2010-10-01 false Maximum penalties. 107.329 Section 107.329... PROGRAM PROCEDURES Enforcement Compliance Orders and Civil Penalties § 107.329 Maximum penalties. (a)...

  2. 49 CFR 107.329 - Maximum penalties.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... maximum civil penalty is $110,000 if the violation results in death, serious illness or severe injury to... 49 Transportation 2 2011-10-01 2011-10-01 false Maximum penalties. 107.329 Section 107.329... PROGRAM PROCEDURES Enforcement Compliance Orders and Civil Penalties § 107.329 Maximum penalties. (a)...

  3. 20 CFR 228.14 - Family maximum.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... person's earnings record is limited. This limited amount is called the family maximum. The family maximum... the persons entitled to benefits on the insured individual's compensation would, except for the.... The maximum is computed as follows: (i) 150 percent of the first $230 of the individual's...

  4. 49 CFR 107.329 - Maximum penalties.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... maximum civil penalty is $110,000 if the violation results in death, serious illness or severe injury to... 49 Transportation 2 2012-10-01 2012-10-01 false Maximum penalties. 107.329 Section 107.329... PROGRAM PROCEDURES Enforcement Compliance Orders and Civil Penalties § 107.329 Maximum penalties. (a)...

  5. 40 CFR 63.43 - Maximum achievable control technology (MACT) determinations for constructed and reconstructed...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS...) of the Act. (4) If the Administrator has either proposed a relevant emission standard pursuant to... the MACT emission limitation or standard as determined according to the principles set forth...

  6. 40 CFR 63.43 - Maximum achievable control technology (MACT) determinations for constructed and reconstructed...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS...) of the Act. (4) If the Administrator has either proposed a relevant emission standard pursuant to... the MACT emission limitation or standard as determined according to the principles set forth...

  7. 40 CFR 63.43 - Maximum achievable control technology (MACT) determinations for constructed and reconstructed...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS...) of the Act. (4) If the Administrator has either proposed a relevant emission standard pursuant to... the MACT emission limitation or standard as determined according to the principles set forth...

  8. 40 CFR 63.43 - Maximum achievable control technology (MACT) determinations for constructed and reconstructed...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS...) of the Act. (4) If the Administrator has either proposed a relevant emission standard pursuant to... the MACT emission limitation or standard as determined according to the principles set forth...

  9. Optimal thickness of silicon membranes to achieve maximum thermoelectric efficiency: A first principles study

    NASA Astrophysics Data System (ADS)

    Mangold, Claudia; Neogi, Sanghamitra; Donadio, Davide

    2016-08-01

    Silicon nanostructures with reduced dimensionality, such as nanowires, membranes, and thin films, are promising thermoelectric materials, as they exhibit considerably reduced thermal conductivity. Here, we utilize density functional theory and the Boltzmann transport equation to compute the electronic properties of ultra-thin crystalline silicon membranes with thicknesses between 1 and 12 nm. We predict that an optimal thickness of ~7 nm maximizes the thermoelectric figure of merit of membranes with native oxide surface layers. Further thinning of the membranes, although attainable in experiments, reduces the electrical conductivity and worsens the thermoelectric efficiency.

  10. Maximum patch method for directional dark matter detection

    SciTech Connect

    Henderson, Shawn; Monroe, Jocelyn; Fisher, Peter

    2008-07-01

    Present and planned dark matter detection experiments search for WIMP-induced nuclear recoils in poorly known background conditions. In this environment, the maximum gap statistical method provides a way of setting more sensitive cross section upper limits by incorporating known signal information. We give a recipe for the numerical calculation of upper limits for planned directional dark matter detection experiments, that will measure both recoil energy and angle, based on the gaps between events in two-dimensional phase space.
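
    For reference, the one-dimensional maximum-gap statistic that the maximum patch method generalizes can be computed as in the sketch below; the recoil spectrum, energy window and observed events are invented illustrative values, and setting a limit would additionally require comparing the statistic with Yellin's C0 distribution.

        import numpy as np

        # Maximum gap: the largest expected number of signal events in any interval of the
        # energy window that contains no observed events.
        def maximum_gap(event_energies, expected_rate, e_min, e_max, mu, n_grid=10000):
            grid = np.linspace(e_min, e_max, n_grid)
            cdf = np.cumsum(expected_rate(grid))
            cdf = cdf / cdf[-1] * mu                            # cumulative expected counts, total = mu
            edges = np.concatenate(([e_min], np.sort(event_energies), [e_max]))
            return np.max(np.diff(np.interp(edges, grid, cdf)))

        rate = lambda e: np.exp(-e / 15.0)                      # toy falling recoil spectrum (unnormalized)
        events = np.array([4.2, 7.9, 22.5, 31.0])               # observed recoil energies (keV)
        x_max = maximum_gap(events, rate, e_min=2.0, e_max=50.0, mu=8.0)
        print(f"maximum gap = {x_max:.2f} expected events")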

  11. Comparing Science Achievement Constructs: Targeted and Achieved

    ERIC Educational Resources Information Center

    Ferrara, Steve; Duncan, Teresa

    2011-01-01

    This article illustrates how test specifications based solely on academic content standards, without attention to other cognitive skills and item response demands, can fall short of their targeted constructs. First, the authors inductively describe the science achievement construct represented by a statewide sixth-grade science proficiency test.…

  12. Mobility and Reading Achievement.

    ERIC Educational Resources Information Center

    Waters, Theresa Z.

    A study examined the effect of geographic mobility on elementary school students' achievement. Although such mobility, which requires students to make multiple moves among schools, can have a negative impact on academic achievement, the hypothesis for the study was that it was not a determining factor in reading achievement test scores. Subjects…

  13. 50 CFR 259.34 - Minimum and maximum deposits; maximum time to deposit.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 50 Wildlife and Fisheries 9 2011-10-01 2011-10-01 false Minimum and maximum deposits; maximum time... Capital Construction Fund Agreement § 259.34 Minimum and maximum deposits; maximum time to deposit. (a... than prescribed herein: Provided, The party demonstrates to the Secretary's satisfaction...

  14. Interplanetary monitoring platform engineering history and achievements

    NASA Technical Reports Server (NTRS)

    Butler, P. M.

    1980-01-01

    In the fall of 1979, the last of ten Interplanetary Monitoring Platform (IMP) satellite missions ended a ten-year series of flights dedicated to obtaining new knowledge of radiation effects in outer space and of solar phenomena during a period of maximum solar flare activity. The technological achievements and scientific accomplishments of the IMP program are described.

  15. 5 CFR 1600.22 - Maximum contributions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 3 2010-01-01 2010-01-01 false Maximum contributions. 1600.22 Section 1600.22 Administrative Personnel FEDERAL RETIREMENT THRIFT INVESTMENT BOARD EMPLOYEE CONTRIBUTION ELECTIONS AND CONTRIBUTION ALLOCATIONS Program of Contributions § 1600.22 Maximum contributions. (a)...

  16. 20 CFR 229.48 - Family maximum.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... total wages (see 20 CFR 404.203(m)) for the second year before the individual dies or becomes eligible... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Family maximum. 229.48 Section 229.48... OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.48 Family maximum. (a)...

  17. 20 CFR 229.48 - Family maximum.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... total wages (see 20 CFR 404.203(m)) for the second year before the individual dies or becomes eligible... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Family maximum. 229.48 Section 229.48... OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.48 Family maximum. (a)...

  18. 20 CFR 229.48 - Family maximum.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... total wages (see 20 CFR 404.203(m)) for the second year before the individual dies or becomes eligible... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Family maximum. 229.48 Section 229.48... OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.48 Family maximum. (a)...

  19. Maximum entropy image reconstruction from projections

    NASA Astrophysics Data System (ADS)

    Bara, N.; Murata, K.

    1981-07-01

    The maximum entropy method is applied to image reconstruction from projections, of which angular view is restricted. The relaxation parameters are introduced to the maximum entropy reconstruction and after iteration the median filtering is implemented. These procedures improve the quality of the reconstructed image from noisy projections

  20. 20 CFR 229.48 - Family maximum.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... total wages (see 20 CFR 404.203(m)) for the second year before the individual dies or becomes eligible... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Family maximum. 229.48 Section 229.48... OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.48 Family maximum. (a)...

  1. 7 CFR 1778.11 - Maximum grants.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 12 2013-01-01 2013-01-01 false Maximum grants. 1778.11 Section 1778.11 Agriculture... (CONTINUED) EMERGENCY AND IMMINENT COMMUNITY WATER ASSISTANCE GRANTS § 1778.11 Maximum grants. (a) Grants not... the filing of an application. (b) Grants made for repairs, partial replacement, or...

  2. 7 CFR 1778.11 - Maximum grants.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 12 2012-01-01 2012-01-01 false Maximum grants. 1778.11 Section 1778.11 Agriculture... (CONTINUED) EMERGENCY AND IMMINENT COMMUNITY WATER ASSISTANCE GRANTS § 1778.11 Maximum grants. (a) Grants not... the filing of an application. (b) Grants made for repairs, partial replacement, or...

  3. 13 CFR 130.440 - Maximum grant.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 13 Business Credit and Assistance 1 2012-01-01 2012-01-01 false Maximum grant. 130.440 Section 130... § 130.440 Maximum grant. No recipient shall receive an SBDC grant exceeding the greater of the minimum statutory amount, or its pro rata share of all SBDC grants as determined by the statutory formula set...

  4. 13 CFR 130.440 - Maximum grant.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Maximum grant. 130.440 Section 130... § 130.440 Maximum grant. No recipient shall receive an SBDC grant exceeding the greater of the minimum statutory amount, or its pro rata share of all SBDC grants as determined by the statutory formula set...

  5. 13 CFR 130.440 - Maximum grant.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 13 Business Credit and Assistance 1 2013-01-01 2013-01-01 false Maximum grant. 130.440 Section 130... § 130.440 Maximum grant. No recipient shall receive an SBDC grant exceeding the greater of the minimum statutory amount, or its pro rata share of all SBDC grants as determined by the statutory formula set...

  6. 20 CFR 229.48 - Family maximum.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...

  7. 13 CFR 130.440 - Maximum grant.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 13 Business Credit and Assistance 1 2011-01-01 2011-01-01 false Maximum grant. 130.440 Section 130.440 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION SMALL BUSINESS DEVELOPMENT CENTERS § 130.440 Maximum grant. No recipient shall receive an SBDC grant exceeding the greater of the...

  8. 49 CFR 107.329 - Maximum penalties.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... violation, except the maximum civil penalty is $175,000 if the violation results in death, serious illness... civil penalty is $175,000 if the violation results in death, serious illness or severe injury to any... 49 Transportation 2 2013-10-01 2013-10-01 false Maximum penalties. 107.329 Section...

  9. 49 CFR 107.329 - Maximum penalties.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... violation, except the maximum civil penalty is $175,000 if the violation results in death, serious illness... civil penalty is $175,000 if the violation results in death, serious illness or severe injury to any... 49 Transportation 2 2014-10-01 2014-10-01 false Maximum penalties. 107.329 Section...

  10. Co-sensitization of natural dyes for improved efficiency in dye-sensitized solar cell application

    NASA Astrophysics Data System (ADS)

    Kumar, K. Ashok; Subalakshmi, K.; Senthilselvan, J.

    2016-05-01

    In this paper, a new approach to co-sensitized DSSCs based on natural dyes is investigated to explore a possible way to improve the power conversion efficiency. To this end, 10 DSSC devices were fabricated using mono-sensitization and co-sensitization of ethanolic extracts of natural dye sensitizers obtained from Cactus fruit, Jambolana fruit, Curcumin and Bermuda grass. The optical absorption of the mono and hybrid dye extracts was studied by UV-Visible absorption spectroscopy. It shows characteristic absorption peaks in the visible region corresponding to the presence of the natural pigments anthocyanin, betacyanin and chlorophyll. The absorption spectrum of the hybrid dyes reveals a wide absorption band in the visible region with an improved extinction coefficient, which favors increased light harvesting. The power conversion efficiency of the DSSC devices was calculated from the J-V curve, and the maximum efficiency achieved in the present work is ~0.61% for the Cactus-Bermuda co-sensitized DSSC.
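
    A minimal sketch of extracting the power conversion efficiency from a J-V curve, as done for the devices above; the J-V points and illumination power are made-up illustrative values.

        import numpy as np

        # Efficiency and fill factor from a measured J-V curve under AM1.5 illumination.
        V = np.linspace(0.0, 0.45, 10)                                           # volts
        J = np.array([1.9, 1.88, 1.85, 1.8, 1.72, 1.6, 1.4, 1.05, 0.55, 0.0])    # mA/cm^2

        P = V * J                                                 # power density, mW/cm^2
        p_max = P.max()
        j_sc, v_oc = J[0], V[-1]                                  # short-circuit current, open-circuit voltage
        ff = p_max / (j_sc * v_oc)                                # fill factor
        p_in = 100.0                                              # AM1.5 illumination, mW/cm^2
        print(f"Jsc={j_sc} mA/cm^2, Voc={v_oc} V, FF={ff:.2f}, efficiency={100*p_max/p_in:.2f}%")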

  11. Estimating the maximum potential revenue for grid connected electricity storage :

    SciTech Connect

    Byrne, Raymond Harry; Silva Monroy, Cesar Augusto.

    2012-12-01

    maximum potential revenue benchmark. We conclude with a sensitivity analysis with respect to key parameters.

  12. General Achievement Trends: Oklahoma

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  13. General Achievement Trends: Georgia

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  14. General Achievement Trends: Nebraska

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  15. General Achievement Trends: Arkansas

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  16. General Achievement Trends: Maryland

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  17. General Achievement Trends: Maine

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  18. General Achievement Trends: Iowa

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  19. General Achievement Trends: Texas

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  20. General Achievement Trends: Hawaii

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  1. General Achievement Trends: Kansas

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  2. General Achievement Trends: Florida

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  3. General Achievement Trends: Massachusetts

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  4. General Achievement Trends: Tennessee

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  5. General Achievement Trends: Alabama

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  6. General Achievement Trends: Virginia

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  7. General Achievement Trends: Michigan

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  8. General Achievement Trends: Colorado

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  9. Inverting the Achievement Pyramid

    ERIC Educational Resources Information Center

    White-Hood, Marian; Shindel, Melissa

    2006-01-01

    Attempting to invert the pyramid to improve student achievement and increase all students' chances for success is not a new endeavor. For decades, educators have strategized, formed think tanks, and developed school improvement teams to find better ways to improve the achievement of all students. Currently, the No Child Left Behind Act (NCLB) is…

  10. Achievement Test Program.

    ERIC Educational Resources Information Center

    Ohio State Dept. of Education, Columbus. Trade and Industrial Education Service.

    The Ohio Trade and Industrial Education Achievement Test battery is comprised of seven basic achievement tests: Machine Trades, Automotive Mechanics, Basic Electricity, Basic Electronics, Mechanical Drafting, Printing, and Sheet Metal. The tests were developed by subject matter committees and specialists in testing and research. The Ohio Trade and…

  11. School Effects on Achievement.

    ERIC Educational Resources Information Center

    Nichols, Robert C.

    The New York State Education Department conducts a Pupil Evaluation Program (PEP) in which each year all third, sixth, and ninth grade students in the state are given a series of achievement tests in reading and mathematics. The data accumulated by the department includes achievement test scores, teacher characteristics, building and curriculum…

  12. Heritability of Creative Achievement

    ERIC Educational Resources Information Center

    Piffer, Davide; Hur, Yoon-Mi

    2014-01-01

    Although creative achievement is a subject of much attention to lay people, the origins of individual differences in creative accomplishments remain poorly understood. This study examined genetic and environmental influences on creative achievement in an adult sample of 338 twins (mean age = 26.3 years; SD = 6.6 years). Twins completed the Creative…

  13. Confronting the Achievement Gap

    ERIC Educational Resources Information Center

    Gardner, David

    2007-01-01

    This article talks about the large achievement gap between children of color and their white peers. The reasons for the achievement gap are varied. First, many urban minorities come from a background of poverty. One of the detrimental effects of growing up in poverty is receiving inadequate nourishment at a time when bodies and brains are rapidly…

  14. Achieving Public Schools

    ERIC Educational Resources Information Center

    Abowitz, Kathleen Knight

    2011-01-01

    Public schools are functionally provided through structural arrangements such as government funding, but public schools are achieved in substance, in part, through local governance. In this essay, Kathleen Knight Abowitz explains the bifocal nature of achieving public schools; that is, that schools are both subject to the unitary Public compact of…

  15. States that give the maximum signal-to-quantum noise ratio for a fixed energy

    NASA Technical Reports Server (NTRS)

    Yuen, H. P.

    1976-01-01

    Under a radiation power constraint, the maximum signal-to-quantum noise ratio obtainable for any state of a radiation field is found. This maximum value is achieved by the two-photon coherent states introduced previously to describe two-photon lasers.

  16. Sensitivity of Achievement Estimation to Conditioning Model Misclassification

    ERIC Educational Resources Information Center

    Rutkowski, Leslie

    2014-01-01

    Large-scale assessment programs such as the National Assessment of Educational Progress (NAEP), Trends in International Mathematics and Science Study (TIMSS), and Programme for International Student Assessment (PISA) use a sophisticated assessment administration design called matrix sampling that minimizes the testing burden on individual…

  17. A maximum entropy method for MEG source imaging

    SciTech Connect

    Khosla, D.; Singh, M.

    1996-12-31

    The estimation of three-dimensional dipole current sources on the cortical surface from the measured magnetoencephalogram (MEG) is a highly underdetermined inverse problem, as there are many "feasible" images which are consistent with the MEG data. Previous approaches to this problem have concentrated on the use of weighted minimum-norm inverse methods. While these methods ensure a unique solution, they often produce overly smoothed solutions and exhibit severe sensitivity to noise. In this paper we explore the maximum entropy approach to obtain better solutions to the problem. This estimation technique selects, from the set of feasible images, the image that has the maximum entropy permitted by the information available to us. In order to account for the presence of noise in the data, we have also incorporated a noise rejection or likelihood term into our maximum entropy method. This makes our approach mirror a Bayesian maximum a posteriori (MAP) formulation. Additional information from other functional techniques such as functional magnetic resonance imaging (fMRI) can be incorporated in the proposed method in the form of a prior bias function to improve solutions. We demonstrate the method with experimental phantom data from a clinical 122-channel MEG system.
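
    The entropy-plus-likelihood trade-off described above can be sketched generically. The toy example below is a minimal illustration under assumed dimensions, lead-field matrix and weighting, not the authors' implementation or data.

```python
# Minimal sketch of a maximum-entropy (MAP-style) linear inverse solver: maximize
# entropy of a nonnegative source image while penalizing misfit to the sensor data.
# All quantities here (lead-field L, data b, weight lam) are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_sensors, n_sources = 20, 100
L = rng.normal(size=(n_sensors, n_sources))          # assumed lead-field (forward) matrix
x_true = np.abs(rng.normal(size=n_sources))          # nonnegative "source image"
b = L @ x_true + 0.01 * rng.normal(size=n_sensors)   # noisy MEG-like measurements
lam = 10.0                                           # weight of the likelihood (data-fit) term

def objective(x):
    x = np.clip(x, 1e-12, None)                      # entropy requires positive intensities
    p = x / x.sum()
    neg_entropy = np.sum(p * np.log(p))              # minimizing -H(x) maximizes entropy
    misfit = np.sum((L @ x - b) ** 2)                # Gaussian-noise likelihood term
    return neg_entropy + lam * misfit

x0 = np.full(n_sources, 1.0 / n_sources)
res = minimize(objective, x0, method="L-BFGS-B",
               bounds=[(1e-9, None)] * n_sources)
print("data misfit of the MaxEnt/MAP estimate:", np.sum((L @ res.x - b) ** 2))
```

    Increasing lam moves the solution from the maximally flat (pure maximum entropy) image toward one that fits the data more tightly, mirroring the role of the noise-rejection term described in the abstract.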

  18. Modeling maximum daily temperature using a varying coefficient regression model

    NASA Astrophysics Data System (ADS)

    Li, Han; Deng, Xinwei; Kim, Dong-Yun; Smith, Eric P.

    2014-04-01

    Relationships between stream water and air temperatures are often modeled using linear or nonlinear regression methods. Despite a strong relationship between water and air temperatures and a variety of models that are effective for data summarized on a weekly basis, such models did not yield consistently good predictions for summaries such as daily maximum temperature. A good predictive model for daily maximum temperature is required because daily maximum temperature is an important measure for predicting survival of temperature-sensitive fish. To appropriately model the strong relationship between water and air temperatures at a daily time step, it is important to incorporate information related to the time of year into the modeling. In this work, a time-varying coefficient model is used to study the relationship between air temperature and water temperature. The time-varying coefficient model enables dynamic modeling of the relationship and can be used to understand how the air-water temperature relationship varies over time. The proposed model is applied to 10 streams in Maryland, West Virginia, Virginia, North Carolina, and Georgia using daily maximum temperatures. It provides a better fit and better predictions than those produced by a simple linear regression model or a nonlinear logistic model.
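
    As a generic illustration of the time-varying coefficient idea (an intercept and an air-temperature slope that change smoothly with day of year), here is a hedged sketch using a low-order Fourier basis and ordinary least squares; the synthetic data and the basis choice are assumptions for illustration, not the authors' model or estimation method.

```python
# Sketch: water_max(t) ~ a(t) + b(t) * air_max(t), with a(t), b(t) expanded in a
# three-term Fourier basis of day-of-year t. Synthetic data for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n = 365
t = np.arange(n)                                   # day of year
air = 15 + 10 * np.sin(2 * np.pi * (t - 100) / 365) + rng.normal(0, 2, n)
water = 5 + 0.6 * air + 2 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 1, n)

def basis(t):
    w = 2 * np.pi * t / 365
    return np.column_stack([np.ones_like(w), np.sin(w), np.cos(w)])

B = basis(t)                                       # basis for the varying coefficients
X = np.hstack([B, B * air[:, None]])               # columns for a(t) and b(t)*air
coef, *_ = np.linalg.lstsq(X, water, rcond=None)   # ordinary least-squares fit

a_t = B @ coef[:3]                                 # intercept varying over the year
b_t = B @ coef[3:]                                 # air-temperature slope varying over the year
print("fitted slope range over the year:", b_t.min(), b_t.max())
```

    The fitted b_t plays the role of the time-varying air-to-water coefficient; in the authors' setting it would be estimated with their own smoothing machinery rather than a fixed three-term basis.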

  19. Student Achievement Factors

    ERIC Educational Resources Information Center

    Bertolini, Katherine; Stremmel, Andrew; Thorngren, Jill

    2012-01-01

    Effective practices for education are essential to ensure that public investment in our schools provides the maximum yield for our students, communities, states, and nation. The challenge has been defining and measuring terms such as effective, proficient, and sufficient when we examine instructional practice, student outcomes and funding equity. This…

  20. On the efficiency at maximum cooling power

    NASA Astrophysics Data System (ADS)

    Apertet, Y.; Ouerdane, H.; Michot, A.; Goupil, C.; Lecoeur, Ph.

    2013-08-01

    The efficiency at maximum power (EMP) of heat engines operating as generators is one cornerstone of finite-time thermodynamics, the Curzon-Ahlborn efficiency $\eta_{CA}$ being considered a universal upper bound. Yet, no valid counterpart to $\eta_{CA}$ has been derived for the efficiency at maximum cooling power (EMCP) of heat engines operating as refrigerators. In this letter we analyse the reasons for the failure to obtain such a bound and demonstrate that, despite the introduction of several optimisation criteria, the maximum cooling power condition should be considered the genuine equivalent of the maximum power condition in the finite-time thermodynamics frame. We then propose and discuss an analytic expression for the EMCP in the specific case of exoreversible refrigerators.
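
    For context, the Curzon-Ahlborn efficiency referred to above is the standard finite-time result for an engine operating between reservoirs at temperatures $T_c < T_h$ (quoted here as background, not taken from the record itself):

```latex
% Carnot efficiency and Curzon-Ahlborn efficiency at maximum power:
\eta_C = 1 - \frac{T_c}{T_h},
\qquad
\eta_{CA} = 1 - \sqrt{\frac{T_c}{T_h}} \;\le\; \eta_C
```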

  1. 14 CFR 65.47 - Maximum hours.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... CERTIFICATION: AIRMEN OTHER THAN FLIGHT CREWMEMBERS Air Traffic Control Tower Operators § 65.47 Maximum hours. Except in an emergency, a certificated air traffic control tower operator must be relieved of all...

  2. 14 CFR 65.47 - Maximum hours.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... CERTIFICATION: AIRMEN OTHER THAN FLIGHT CREWMEMBERS Air Traffic Control Tower Operators § 65.47 Maximum hours. Except in an emergency, a certificated air traffic control tower operator must be relieved of all...

  3. 14 CFR 65.47 - Maximum hours.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... CERTIFICATION: AIRMEN OTHER THAN FLIGHT CREWMEMBERS Air Traffic Control Tower Operators § 65.47 Maximum hours. Except in an emergency, a certificated air traffic control tower operator must be relieved of all...

  4. Maximum-Likelihood Detection Of Noncoherent CPM

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.

  5. Maximum forces and deflections from orthodontic appliances.

    PubMed

    Burstone, C J; Goldberg, A J

    1983-08-01

    The maximum bending moment of an orthodontic wire is an important parameter in the design and use of an orthodontic appliance. It is the wire property that determines how much force an appliance can deliver. A bending test which allows direct measurement of the maximum bending moment was developed. Data produced from this test are independent of wire length and configuration. The maximum bending moment, percent recovery, and maximum springback were determined for round and rectangular cross sections of stainless steel, nickel-titanium, and beta-titanium wires. The data suggest the need for more specifically defining maximum moment and maximum springback. Three maximum bending moments are described: M_e, M_y, and M_ult. M_y and M_ult are clinically the most significant. Appliances that are required to have no permanent deformation must operate below M_y. Appliances that exhibit marked permanent deformation may be used in some applications and, if so, higher bending moments can be produced. In order of magnitude, the maximum bending moment at yield is largest in stainless steel, beta-titanium, and nickel-titanium for a given cross section. Nickel-titanium and beta-titanium have significantly larger springback than stainless steel determined at the moment at yield. Nickel-titanium did not follow the theoretical ratio between ultimate bending moment and the bending moment at yield, exhibiting a very large ratio. The study supports the hypothesis that most orthodontic appliances are activated in a range where both plastic and elastic behavior occurs; therefore, the use of yield strengths for calculation of force magnitude can lead to a significant error in predicting the forces delivered. PMID:6576645

  6. Limitations to maximum running speed on flat curves.

    PubMed

    Chang, Young-Hui; Kram, Rodger

    2007-03-01

    Why is maximal running speed reduced on curved paths? The leading explanation proposes that an increase in lateral ground reaction force necessitates a decrease in peak vertical ground reaction force, assuming that maximum leg extension force is the limiting factor. Yet, no studies have directly measured these forces or tested this critical assumption. We measured maximum sprint velocities and ground reaction forces for five male humans sprinting along a straight track and compared them to sprints along circular tracks of 1, 2, 3, 4 and 6 m radii. Circular track sprint trials were performed either with or without a tether that applied centripetal force to the center of mass. Sprinters generated significantly smaller peak resultant ground reaction forces during normal curve sprinting compared to straight sprinting. This provides direct evidence against the idea that maximum leg extension force is always achieved and is the limiting factor. Use of the tether increased sprint speed, but not to expected values. During curve sprinting, the inside leg consistently generated smaller peak forces compared to the outside leg. Several competing biomechanical constraints placed on the stance leg during curve sprinting likely make the inside leg particularly ineffective at generating the ground reaction forces necessary to attain maximum velocities comparable to straight path sprinting. The ability of quadrupeds to redistribute function across multiple stance legs and decouple these multiple constraints may provide a distinct advantage for turning performance. PMID:17337710
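
    The force decomposition underlying this argument can be written explicitly (a standard mechanics relation added here for illustration, not quoted from the paper): for a runner of mass m at speed v on a curve of radius r, the time-averaged lateral demand and the resultant limb force are roughly

```latex
% Duty-factor corrections ignored; if the resultant is capped by leg strength,
% a larger lateral (centripetal) demand at small r forces v and/or the vertical
% component down.
F_{lat} \approx \frac{m v^{2}}{r},
\qquad
F_{res} = \sqrt{F_{vert}^{2} + F_{lat}^{2}} \;\le\; F_{limit}
```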

  8. Geometric analysis of influence of fringe directions on phase sensitivities in fringe projection profilometry.

    PubMed

    Zhang, Ruihua; Guo, Hongwei; Asundi, Anand K

    2016-09-20

    In fringe projection profilometry, phase sensitivity is one of the important factors affecting measurement accuracy. A typical fringe projection system consists of one camera and one projector. To gain insight into its phase sensitivity, we perform in this paper a strict theoretical analysis of the dependence of phase sensitivities on fringe directions. We use epipolar geometry as a tool to derive the relationship between fringe distortions and depth variations of the measured surface, and further formulate phase sensitivity as a function of the angle between the fringe direction and the epipolar line. The results reveal that using fringes perpendicular to the epipolar lines enables us to achieve the maximum phase sensitivities, whereas if the fringes have directions along the epipolar lines, the phase sensitivities decline to zero. Based on these results, we suggest optimal fringes that are circular-arc-shaped and centered at the epipole, which give the best phase sensitivities over the whole fringe pattern, and quasi-optimal fringes, straight and perpendicular to the connecting line between the fringe pattern center and the epipole, which achieve satisfactorily high phase sensitivities over the whole fringe pattern when the epipole is located far away from the fringe pattern center. The experimental results demonstrate that our analyses are practical and correct, and that our optimized fringes are effective in improving the phase sensitivities and, further, the measurement accuracies. PMID:27661597
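
    The geometric dependence described above can be summarized compactly. The functional form below is the simplest one consistent with the stated extremes and is introduced here only as an illustration, with θ taken as the angle between the local fringe direction and the epipolar line:

```latex
% S = phase sensitivity; the exact expression is derived in the paper, but the
% stated extremes are reproduced by a |sin(theta)|-type dependence:
S(\theta) \propto |\sin\theta|,
\qquad
S \to S_{\max}\ \text{at}\ \theta = 90^{\circ}\ (\text{fringes} \perp \text{epipolar line}),
\qquad
S \to 0\ \text{at}\ \theta = 0^{\circ}
```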

  9. Student Achievement and Motivation

    ERIC Educational Resources Information Center

    Flammer, Gordon H.; Mecham, Robert C.

    1974-01-01

    Compares the lecture and self-paced methods of instruction on the basis of student motivation and achievement, comparing motivating and demotivating factors in each, and their potential for motivation and achievement. (Authors/JR)

  10. North Atlantic Deep Water Production during the Last Glacial Maximum

    NASA Astrophysics Data System (ADS)

    Howe, Jacob N. W.; Piotrowski, Alexander M.; Noble, Taryn L.; Mulitza, Stefan; Chiessi, Cristiano M.; Bayon, Germain

    2016-06-01

    Changes in deep ocean ventilation are commonly invoked as the primary cause of lower glacial atmospheric CO2. The water mass structure of the glacial deep Atlantic Ocean and the mechanism by which it may have sequestered carbon remain elusive. Here we present neodymium isotope measurements from cores throughout the Atlantic that reveal glacial-interglacial changes in water mass distributions. These results demonstrate the sustained production of North Atlantic Deep Water under glacial conditions, indicating that southern-sourced waters were not as spatially extensive during the Last Glacial Maximum as previously believed. We demonstrate that the depleted glacial δ13C values in the deep Atlantic Ocean cannot be explained solely by water mass source changes. A greater amount of respired carbon, therefore, must have been stored in the abyssal Atlantic during the Last Glacial Maximum. We infer that this was achieved by a sluggish deep overturning cell, comprised of well-mixed northern- and southern-sourced waters.

  11. North Atlantic Deep Water Production during the Last Glacial Maximum.

    PubMed

    Howe, Jacob N W; Piotrowski, Alexander M; Noble, Taryn L; Mulitza, Stefan; Chiessi, Cristiano M; Bayon, Germain

    2016-01-01

    Changes in deep ocean ventilation are commonly invoked as the primary cause of lower glacial atmospheric CO2. The water mass structure of the glacial deep Atlantic Ocean and the mechanism by which it may have sequestered carbon remain elusive. Here we present neodymium isotope measurements from cores throughout the Atlantic that reveal glacial-interglacial changes in water mass distributions. These results demonstrate the sustained production of North Atlantic Deep Water under glacial conditions, indicating that southern-sourced waters were not as spatially extensive during the Last Glacial Maximum as previously believed. We demonstrate that the depleted glacial δ(13)C values in the deep Atlantic Ocean cannot be explained solely by water mass source changes. A greater amount of respired carbon, therefore, must have been stored in the abyssal Atlantic during the Last Glacial Maximum. We infer that this was achieved by a sluggish deep overturning cell, comprised of well-mixed northern- and southern-sourced waters. PMID:27256826

  12. North Atlantic Deep Water Production during the Last Glacial Maximum

    PubMed Central

    Howe, Jacob N. W.; Piotrowski, Alexander M.; Noble, Taryn L.; Mulitza, Stefan; Chiessi, Cristiano M.; Bayon, Germain

    2016-01-01

    Changes in deep ocean ventilation are commonly invoked as the primary cause of lower glacial atmospheric CO2. The water mass structure of the glacial deep Atlantic Ocean and the mechanism by which it may have sequestered carbon remain elusive. Here we present neodymium isotope measurements from cores throughout the Atlantic that reveal glacial–interglacial changes in water mass distributions. These results demonstrate the sustained production of North Atlantic Deep Water under glacial conditions, indicating that southern-sourced waters were not as spatially extensive during the Last Glacial Maximum as previously believed. We demonstrate that the depleted glacial δ13C values in the deep Atlantic Ocean cannot be explained solely by water mass source changes. A greater amount of respired carbon, therefore, must have been stored in the abyssal Atlantic during the Last Glacial Maximum. We infer that this was achieved by a sluggish deep overturning cell, comprised of well-mixed northern- and southern-sourced waters. PMID:27256826

  13. The analysis of spectra of novae taken near maximum

    NASA Technical Reports Server (NTRS)

    Stryker, L. L.; Hestand, J.; Starrfield, S.; Wehrse, R.; Hauschildt, P.; Spies, W.; Baschek, B.; Shaviv, G.

    1988-01-01

    A project to analyze ultraviolet spectra of novae obtained at or near maximum optical light is presented. These spectra are characterized by a relatively cool continuum with superimposed permitted emission lines from ions such as Fe II, Mg II, and Si II. Spectra obtained late in the outburst show only emission lines from highly ionized species and in many cases these are forbidden lines. The ultraviolet data will be used with calculations of spherical, expanding, stellar atmospheres for novae to determine elemental abundances by spectral line synthesis. This method is extremely sensitive to the abundances and completely independent of the nebular analyses usually used to obtain novae abundances.

  14. A novel optimal sensitivity design scheme for yarn tension sensor using surface acoustic wave device.

    PubMed

    Lei, Bingbing; Lu, Wenke; Zhu, Changchun; Liu, Qinghong; Zhang, Haoxin

    2014-08-01

    In this paper, we propose a novel optimal sensitivity design scheme for a yarn tension sensor using a surface acoustic wave (SAW) device. In order to obtain the best sensitivity, a regression model between the size of the SAW yarn tension sensor substrate and the sensitivity of the sensor was established using the least squares method, and the model was validated. By analyzing the correspondence between the monotonicity of the regression function and the sign of its partial derivatives, the effect of the substrate size on the sensitivity of the sensor was investigated. Based on the regression model, a linear programming model was established to obtain the optimal sensitivity of the SAW yarn tension sensor. The linear programming result shows that the maximum sensitivity is achieved when the substrate length is 15 mm and its width is 3 mm within a fixed interval of substrate sizes. An experiment on a SAW yarn tension sensor about 15 mm long and 3 mm wide is presented. Experimental results show that a maximum sensitivity of 1982.39 Hz/g was accomplished, which confirms that the optimal sensitivity design scheme is useful and effective.
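
    The two-step procedure described above (least-squares regression followed by linear programming over a fixed size interval) can be sketched generically. The data, bounds and fitted coefficients below are invented for illustration and are not the paper's measurements or model.

```python
# Sketch: (1) least-squares fit of sensitivity vs. substrate length L and width W,
# (2) maximize the fitted linear model over a box of allowed sizes via linear
# programming. All numbers are assumed for illustration only.
import numpy as np
from scipy.optimize import linprog

# Assumed measurements: columns are [L (mm), W (mm)]; response is sensitivity (Hz/g).
sizes = np.array([[10, 2], [10, 3], [12, 2], [12, 3], [15, 2], [15, 3]], float)
sens = np.array([1500, 1600, 1650, 1750, 1850, 1950], float)

X = np.column_stack([np.ones(len(sizes)), sizes])      # design matrix [1, L, W]
beta, *_ = np.linalg.lstsq(X, sens, rcond=None)        # least-squares regression

# Maximize beta0 + beta1*L + beta2*W  <=>  minimize the negated linear objective,
# subject to 10 <= L <= 15 and 2 <= W <= 3 (assumed feasible interval).
res = linprog(c=-beta[1:], bounds=[(10, 15), (2, 3)], method="highs")
L_opt, W_opt = res.x
print("optimal size (mm):", L_opt, W_opt)
print("predicted sensitivity (Hz/g):", beta[0] + beta[1] * L_opt + beta[2] * W_opt)
```

    Because the fitted model is linear, the optimum necessarily sits at a corner of the allowed size interval, which is the same qualitative conclusion the abstract reports for the 15 mm by 3 mm substrate.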

  15. A Novel Comb Architecture for Enhancing the Sensitivity of Bulk Mode Gyroscopes

    PubMed Central

    Elsayed, Mohannad Y.; Nabki, Frederic; El-Gamal, Mourad N.

    2013-01-01

    This work introduces a novel architecture for increasing the sensitivity of bulk mode gyroscopes. It is based on adding parallel plate comb drives to the points of maximum vibration amplitude, and tuning the stiffness of the combs. This increases the drive strength and results in a significant sensitivity improvement. The architecture is targeted for technologies with ∼100 nm transducer gaps in order to achieve very high performance devices. In this work, this sensitivity enhancement concept was implemented in SOIMUMPs, a commercial relatively large gap technology. Prototypes were measured to operate at frequencies of ∼1.5 MHz, with quality factors of ∼33,000, at a 10 mTorr vacuum level. Measurements using discrete electronics show a rate sensitivity of 0.31 μV/°/s, corresponding to a capacitance sensitivity of 0.43 aF/°/s/electrode, two orders of magnitude higher than a similar design without combs, fabricated in the same technology.

  16. Smart and Bored: Are We Failing Our High Achievers?

    ERIC Educational Resources Information Center

    Cleaver, Samantha

    2008-01-01

    Some high achievers are not as easy to engage. Sometimes motivating high achievers is "a matter of being more sensitive to what they are interested in," says Don Ambrose, a professor of education at Rider University in New Jersey. But too often classrooms are not set up for that kind of sensitivity. Research shows that schools are consistently…

  17. 44 CFR 321.4 - Achieving production readiness.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... achieve a capability for maximum production of “urgent” items during the initial phase of war, the... power, fuel, and water, or on long-distance communications; with spare replacements for...

  18. 44 CFR 321.4 - Achieving production readiness.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... achieve a capability for maximum production of “urgent” items during the initial phase of war, the... power, fuel, and water, or on long-distance communications; with spare replacements for...

  19. 44 CFR 321.4 - Achieving production readiness.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... achieve a capability for maximum production of “urgent” items during the initial phase of war, the... power, fuel, and water, or on long-distance communications; with spare replacements for...

  20. Maximum permissible voltage of YBCO coated conductors

    NASA Astrophysics Data System (ADS)

    Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z.; Hong, Z.; Wang, D.; Zhou, H.; Shen, X.; Shen, C.

    2014-06-01

    Superconducting fault current limiters (SFCL) can reduce short-circuit currents in electrical power systems. One of the most important steps in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the results for these samples, the total length of CC used in the design of an SFCL can be determined.

  1. Cell Development obeys Maximum Fisher Information

    PubMed Central

    Frieden, B. Roy; Gatenby, Robert A.

    2014-01-01

    Eukaryotic cell development has been optimized by natural selection to obey maximal intracellular flux of messenger proteins. This, in turn, implies maximum Fisher information on angular position about a target nuclear pore complex (NPC). The cell is simply modeled as spherical, with cell membrane (CM) diameter 10 μm and concentric nuclear membrane (NM) diameter 6 μm. The NM contains ≈ 3000 nuclear pore complexes (NPCs). Development requires messenger ligands to travel through the CM-NPC-DNA channel to target binding sites. Ligands acquire negative charge by phosphorylation, passing through the cytoplasm over Newtonian trajectories toward positively charged NPCs (utilizing positive nuclear localization sequences). The CM-NPC channel obeys maximized mean protein flux F and Fisher information I at the NPC, with first-order δI = 0 and approximate second-order δ²I ≈ 0 stability to environmental perturbations. Many of its predictions are confirmed, including the dominance of protein pathways of 1-4 proteins, a 4 nm size for the EGFR protein and the flux value F ≈ 10¹⁶ proteins/(m²·s). After entering the nucleus, each protein ultimately delivers its ligand information to a DNA target site with maximum probability, i.e. maximum Kullback-Leibler entropy H_KL. In a smoothness limit H_KL → I_DNA/2, so that the total CM-NPC-DNA channel obeys maximum Fisher I. Thus maximum information → non-equilibrium, one condition for life. PMID:23747917

  2. Maximum magnitude earthquakes induced by fluid injection

    NASA Astrophysics Data System (ADS)

    McGarr, A.

    2014-02-01

    Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
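
    The bound described above can be turned into a quick order-of-magnitude estimate using the standard moment-to-magnitude conversion; the shear modulus and injected volume in the comments are illustrative assumptions, not values from a specific case history.

```latex
% Upper bound on induced seismic moment, plus the standard moment magnitude:
M_{0,\max} = G\,\Delta V,
\qquad
M_{\max} = \tfrac{2}{3}\left(\log_{10} M_{0,\max} - 9.1\right)
% Example (assumed numbers): G \approx 3\times10^{10}\ \mathrm{Pa},\ \Delta V = 10^{5}\ \mathrm{m^{3}}
% \Rightarrow M_{0,\max} \approx 3\times10^{15}\ \mathrm{N\,m},\ M_{\max} \approx 4.3.
```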

  3. Mapping the MPM maximum flow algorithm on GPUs

    NASA Astrophysics Data System (ADS)

    Solomon, Steven; Thulasiraman, Parimala

    2010-11-01

    The GPU offers a high degree of parallelism and computational power that developers can exploit for general purpose parallel applications. As a result, a significant level of interest has been directed towards GPUs in recent years. Regular applications, however, have traditionally been the focus of work on the GPU. Only very recently has there been a growing number of works exploring the potential of irregular applications on the GPU. We present a work that investigates the feasibility of Malhotra, Pramodh Kumar and Maheshwari's "MPM" maximum flow algorithm on the GPU that achieves an average speedup of 8 when compared to a sequential CPU implementation.

  4. Sensitive skin.

    PubMed

    Misery, L; Loser, K; Ständer, S

    2016-02-01

    Sensitive skin is a clinical condition defined by the self-reported facial presence of different sensory perceptions, including tightness, stinging, burning, tingling, pain and pruritus. Sensitive skin may occur in individuals with normal skin, with skin barrier disturbance, or as a part of the symptoms associated with facial dermatoses such as rosacea, atopic dermatitis and psoriasis. Although experimental studies are still pending, the symptoms of sensitive skin suggest the involvement of cutaneous nerve fibres and neuronal, as well as epidermal, thermochannels. Many individuals with sensitive skin report worsening symptoms due to environmental factors. It is thought that this might be attributed to the thermochannel TRPV1, as it typically responds to exogenous, endogenous, physical and chemical stimuli. Barrier disruptions and immune mechanisms may also be involved. This review summarizes current knowledge on the epidemiology, potential mechanisms, clinics and therapy of sensitive skin. PMID:26805416

  5. The Maximum Mass of Rotating Strange Stars

    NASA Astrophysics Data System (ADS)

    Szkudlarek, M.; Gondek-Rosińska, D.; Villain, L.; Ansorg, M.

    2012-12-01

    Strange quark stars are considered as a possible alternative to neutron stars as compact objects (e.g. Weber 2003). A hot compact star (a proto-neutron star or a strange star) born in a supernova explosion, or the remnant of a binary neutron star merger, is expected to rotate differentially and to be an important source of gravitational waves. We present results of the first relativistic calculations of differentially rotating strange quark stars for broad ranges of the degree of differential rotation and maximum density. Using a highly accurate relativistic code, we show that rotation may cause a significant increase in the maximum allowed mass of strange stars, much larger than in the case of neutron stars with the same degree of differential rotation. Depending on the maximum allowed mass, a massive neutron star (strange star) can be temporarily stabilized by differential rotation or collapse to a black hole.

  6. Surface tension maximum of liquid 3He

    NASA Astrophysics Data System (ADS)

    Matsumoto, Koichi; Hasegawa, Syuichi; Suzuki, Masaru; Okuda, Yuichi

    2000-07-01

    The surface tension of liquid 3He was measured using the capillary-rise method. Suzuki et al. have reported that its temperature dependence was almost quenched below 120 mK. Here we have examined it with higher precision and found that it has a small maximum around 100 mK. The amount of the maximum is about 3×10⁻⁴ as a fraction of the surface tension at 0 K. The density of liquid 3He increases with temperature by about 5×10⁻⁴ in Δρ/ρ between 0 and 100 mK. This density change could be one of the reasons for the surface tension maximum around 100 mK.

  7. Maximum likelihood clustering with dependent feature trees

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B. (Principal Investigator)

    1981-01-01

    The decomposition of the mixture density of the data into its normal component densities is considered. The densities are approximated with first-order dependent feature trees using criteria of mutual information and distance measures. Expressions are presented for the criteria when the densities are Gaussian. By defining different types of nodes in a general dependent feature tree, maximum likelihood equations are developed for the estimation of parameters using fixed point iterations. The field structure of the data is also taken into account in developing maximum likelihood equations. Experimental results from the processing of remotely sensed multispectral scanner imagery data are included.

  8. Enhanced sensitivity of surface plasmon resonance phase-interrogation biosensor by using oblique deposited silver nanorods

    NASA Astrophysics Data System (ADS)

    Chung, Hung-Yi; Chen, Chih-Chia; Wu, Pin Chieh; Tseng, Ming Lun; Lin, Wen-Chi; Chen, Chih-Wei; Chiang, Hai-Pang

    2014-09-01

    Sensitivity of a surface plasmon resonance phase-interrogation biosensor is demonstrated to be enhanced by obliquely deposited silver nanorods. Silver nanorods are thermally deposited on a silver nanothin film by oblique angle deposition (OAD). The length of the nanorods can be tuned by controlling the deposition parameters of thermal deposition. By measuring the phase difference between the p and s waves of a surface plasmon resonance heterodyne interferometer with different wavelengths of incident light, we have demonstrated that a maximum sensitivity of glucose detection down to 7.1 × 10⁻⁸ refractive index units could be achieved with optimal deposition parameters of the silver nanorods.

  9. Enhanced sensitivity of surface plasmon resonance phase-interrogation biosensor by using oblique deposited silver nanorods.

    PubMed

    Chung, Hung-Yi; Chen, Chih-Chia; Wu, Pin Chieh; Tseng, Ming Lun; Lin, Wen-Chi; Chen, Chih-Wei; Chiang, Hai-Pang

    2014-01-01

    Sensitivity of a surface plasmon resonance phase-interrogation biosensor is demonstrated to be enhanced by obliquely deposited silver nanorods. Silver nanorods are thermally deposited on a silver nanothin film by oblique angle deposition (OAD). The length of the nanorods can be tuned by controlling the deposition parameters of thermal deposition. By measuring the phase difference between the p and s waves of a surface plasmon resonance heterodyne interferometer with different wavelengths of incident light, we have demonstrated that a maximum sensitivity of glucose detection down to 7.1 × 10⁻⁸ refractive index units could be achieved with optimal deposition parameters of the silver nanorods.

  10. Climate Sensitivity

    SciTech Connect

    Lindzen, Richard

    2011-11-09

    Warming observed thus far is entirely consistent with low climate sensitivity. However, the result is ambiguous because the sources of climate change are numerous and poorly specified. Model predictions of substantial warming are dependent on positive feedbacks associated with upper-level water vapor and clouds, but models are notably inadequate in dealing with clouds, and the impacts of clouds and water vapor are intimately intertwined. Various approaches to measuring sensitivity based on the physics of the feedbacks will be described. The results thus far point to negative feedbacks. Problems with these approaches as well as problems with the concept of climate sensitivity will be described.

  11. Iowa Women of Achievement.

    ERIC Educational Resources Information Center

    Ohrn, Deborah Gore, Ed.

    1993-01-01

    This issue of the Goldfinch highlights some of Iowa's 20th century women of achievement. These women have devoted their lives to working for human rights, education, equality, and individual rights. They come from the worlds of politics, art, music, education, sports, business, entertainment, and social work. They represent Native Americans,…

  12. Achieving Peace through Education.

    ERIC Educational Resources Information Center

    Clarken, Rodney H.

    While it is generally agreed that peace is desirable, there are barriers to achieving a peaceful world. These barriers are classified into three major areas: (1) an erroneous view of human nature; (2) injustice; and (3) fear of world unity. In a discussion of these barriers, it is noted that although the consciousness and conscience of the world…

  13. Increasing Male Academic Achievement

    ERIC Educational Resources Information Center

    Jackson, Barbara Talbert

    2008-01-01

    The No Child Left Behind legislation has brought greater attention to the academic performance of American youth. Its emphasis on student achievement requires a closer analysis of assessment data by school districts. To address the findings, educators must seek strategies to remedy failing results. In a mid-Atlantic district of the United States,…

  14. Leadership Issues: Raising Achievement.

    ERIC Educational Resources Information Center

    Horsfall, Chris, Ed.

    This document contains five papers examining the meaning and operation of leadership as a variable affecting student achievement in further education colleges in the United Kingdom. "Introduction" (Chris Horsfall) discusses school effectiveness studies' findings regarding the relationship between leadership and effective schools, distinguishes…

  15. Achievements or Disasters?

    ERIC Educational Resources Information Center

    Goodwin, MacArthur

    2000-01-01

    Focuses on policy issues that have affected arts education in the twentieth century, such as: interest in discipline-based arts education, influence of national arts associations, and national standards and coordinated assessment. States that whether the policy decisions are viewed as achievements or disasters are for future determination. (CMK)

  16. Achieving True Consensus.

    ERIC Educational Resources Information Center

    Napier, Rod; Sanaghan, Patrick

    2002-01-01

    Uses the example of Vermont's Middlebury College to explore the challenges and possibilities of achieving consensus about institutional change. Discusses why, unlike in this example, consensus usually fails, and presents four demands of an effective consensus process. Includes a list of "test" questions on successful collaboration. (EV)

  17. School Students' Science Achievement

    ERIC Educational Resources Information Center

    Shymansky, James; Wang, Tzu-Ling; Annetta, Leonard; Everett, Susan; Yore, Larry D.

    2013-01-01

    This paper is a report of the impact of an externally funded, multiyear systemic reform project on students' science achievement on a modified version of the Third International Mathematics and Science Study (TIMSS) test in 33 small, rural school districts in two Midwest states. The systemic reform effort utilized a cascading leadership strategy…

  18. Essays on Educational Achievement

    ERIC Educational Resources Information Center

    Ampaabeng, Samuel Kofi

    2013-01-01

    This dissertation examines the determinants of student outcomes--achievement, attainment, occupational choices and earnings--in three different contexts. The first two chapters focus on Ghana while the final chapter focuses on the US state of Massachusetts. In the first chapter, I exploit the incidence of famine and malnutrition that resulted to…

  19. Assessing Handwriting Achievement.

    ERIC Educational Resources Information Center

    Ediger, Marlow

    Teachers in the school setting need to emphasize quality handwriting across the curriculum. Quality handwriting means that the written content is easy to read in either manuscript or cursive form. Handwriting achievement can be assessed, but not compared to the precision of assessing basic addition, subtraction, multiplication, and division facts.…

  20. Intelligence and Educational Achievement

    ERIC Educational Resources Information Center

    Deary, Ian J.; Strand, Steve; Smith, Pauline; Fernandes, Cres

    2007-01-01

    This 5-year prospective longitudinal study of 70,000+ English children examined the association between psychometric intelligence at age 11 years and educational achievement in national examinations in 25 academic subjects at age 16. The correlation between a latent intelligence trait (Spearman's "g" from CAT2E) and a latent trait of educational…

  1. Explorations in achievement motivation

    NASA Technical Reports Server (NTRS)

    Helmreich, Robert L.

    1982-01-01

    Recent research on the nature of achievement motivation is reviewed. A three-factor model of intrinsic motives is presented and related to various criteria of performance, job satisfaction and leisure activities. The relationships between intrinsic and extrinsic motives are discussed. Needed areas for future research are described.

  2. NCLB: Achievement Robin Hood?

    ERIC Educational Resources Information Center

    Bracey, Gerald W.

    2008-01-01

    In his "Wall Street Journal" op-ed on the 25th of anniversary of "A Nation At Risk", former assistant secretary of education Chester E. Finn Jr. applauded the report for turning U.S. education away from equality and toward achievement. It was not surprising, then, that in mid-2008, Finn arranged a conference to examine the potential "Robin Hood…

  3. Achieving All Our Ambitions

    ERIC Educational Resources Information Center

    Hartley, Tricia

    2009-01-01

    National learning and skills policy aims both to build economic prosperity and to achieve social justice. Participation in higher education (HE) has the potential to contribute substantially to both aims. That is why the Campaign for Learning has supported the ambition to increase the proportion of the working-age population with a Level 4…

  4. INTELLIGENCE, PERSONALITY AND ACHIEVEMENT.

    ERIC Educational Resources Information Center

    MUIR, R.C.; AND OTHERS

    A LONGITUDINAL DEVELOPMENTAL STUDY OF A GROUP OF MIDDLE CLASS CHILDREN IS DESCRIBED, WITH EMPHASIS ON A SEGMENT OF THE RESEARCH INVESTIGATING THE RELATIONSHIP OF ACHIEVEMENT, INTELLIGENCE, AND EMOTIONAL DISTURBANCE. THE SUBJECTS WERE 105 CHILDREN AGED FIVE TO 6.3 ATTENDING TWO SCHOOLS IN MONTREAL. EACH CHILD WAS ASSESSED IN THE AREAS OF…

  5. SALT and Spelling Achievement.

    ERIC Educational Resources Information Center

    Nelson, Joan

    A study investigated the effects of suggestopedic accelerative learning and teaching (SALT) on the spelling achievement, attitudes toward school, and memory skills of fourth-grade students. Subjects were 20 male and 28 female students from two self-contained classrooms at Kennedy Elementary School in Rexburg, Idaho. The control classroom and the…

  6. Appraising Reading Achievement.

    ERIC Educational Resources Information Center

    Ediger, Marlow

    To determine quality sequence in pupil progress, evaluation approaches need to be used which guide the teacher to assist learners to attain optimally. Teachers must use a variety of procedures to appraise student achievement in reading, because no one approach is adequate. Appraisal approaches might include: (1) observation and subsequent…

  7. 5 CFR 9701.312 - Maximum rates.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 3 2010-01-01 2010-01-01 false Maximum rates. 9701.312 Section 9701.312 Administrative Personnel DEPARTMENT OF HOMELAND SECURITY HUMAN RESOURCES MANAGEMENT SYSTEM (DEPARTMENT OF HOMELAND SECURITY-OFFICE OF PERSONNEL MANAGEMENT) DEPARTMENT OF HOMELAND SECURITY HUMAN...

  8. Universality of efficiency at maximum power.

    PubMed

    Esposito, Massimiliano; Lindenberg, Katja; Van den Broeck, Christian

    2009-04-01

    We investigate the efficiency of power generation by thermochemical engines. For strong coupling between the particle and heat flows and in the presence of a left-right symmetry in the system, we demonstrate that the efficiency at maximum power displays universality up to quadratic order in the deviation from equilibrium. A maser model is presented to illustrate our argument.
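
    The universality referred to above is usually written as an expansion of the efficiency at maximum power in the Carnot efficiency η_C = 1 − T_c/T_h; the expression below is quoted from the finite-time thermodynamics literature for context, not from the record itself.

```latex
% Efficiency at maximum power, universal through quadratic order:
\eta^{*} = \frac{\eta_C}{2} + \frac{\eta_C^{2}}{8} + O\!\left(\eta_C^{3}\right)
```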

  9. Teaching Media Studies in Maximum Security Prisons.

    ERIC Educational Resources Information Center

    Corcoran, Farrel

    Some of the difficulties involved in teaching inside maximum security prisons, and ways a media studies teacher met these challenges, are described in this paper. The first section of the paper deals with the prison security system and the stresses it can cause for both teacher and student, while the second section discusses the influence of the…

  10. Maximum phonation time: variability and reliability.

    PubMed

    Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W

    2010-05-01

    The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of maximally 6 weeks, three video recordings were made of subjects' five maximum phonation time trials. A panel of five experts was responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times compared with healthy controls (on average, 6.6 seconds shorter). The averaged intraclass correlation coefficient (ICC) over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939. When using five trials, the reliability increased to 0.987. The reliability over five trials for a single day was 0.836; for 2 days, 0.911; and for 3 days, 0.935. To conclude, the maximum phonation time has proven to be a highly reliable measure in voice assessment. A single rater is sufficient to provide highly reliable measurements.
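
    An intraclass correlation of the kind used for such trial-to-trial reliability can be computed from variance components. The sketch below uses the standard one-way random-effects formulas on synthetic phonation times; the data are assumptions for illustration and are not the study's measurements or its full rater-by-day design.

```python
# One-way random-effects ICC from k repeated trials per subject.
# Synthetic data for illustration only.
import numpy as np

rng = np.random.default_rng(2)
n_subjects, k_trials = 30, 5
subject_effect = rng.normal(10, 4, size=(n_subjects, 1))              # between-subject spread
data = subject_effect + rng.normal(0, 1, size=(n_subjects, k_trials)) # phonation times (s)

grand_mean = data.mean()
subj_means = data.mean(axis=1, keepdims=True)
ms_between = k_trials * np.sum((subj_means - grand_mean) ** 2) / (n_subjects - 1)
ms_within = np.sum((data - subj_means) ** 2) / (n_subjects * (k_trials - 1))

icc_single = (ms_between - ms_within) / (ms_between + (k_trials - 1) * ms_within)
icc_average = (ms_between - ms_within) / ms_between    # reliability of the k-trial mean
print(f"ICC, single trial: {icc_single:.3f}; ICC, mean of {k_trials} trials: {icc_average:.3f}")
```

    As in the abstract, averaging over more trials raises the reliability of the resulting measure relative to a single trial.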

  11. 5 CFR 9701.312 - Maximum rates.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 5 Administrative Personnel 3 2013-01-01 2013-01-01 false Maximum rates. 9701.312 Section 9701.312 Administrative Personnel DEPARTMENT OF HOMELAND SECURITY HUMAN RESOURCES MANAGEMENT SYSTEM (DEPARTMENT OF HOMELAND SECURITY-OFFICE OF PERSONNEL MANAGEMENT) DEPARTMENT OF HOMELAND SECURITY HUMAN...

  12. 5 CFR 9701.312 - Maximum rates.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 3 2014-01-01 2014-01-01 false Maximum rates. 9701.312 Section 9701.312 Administrative Personnel DEPARTMENT OF HOMELAND SECURITY HUMAN RESOURCES MANAGEMENT SYSTEM (DEPARTMENT OF HOMELAND SECURITY-OFFICE OF PERSONNEL MANAGEMENT) DEPARTMENT OF HOMELAND SECURITY HUMAN...

  13. Weak scale from the maximum entropy principle

    NASA Astrophysics Data System (ADS)

    Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu

    2015-03-01

    The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the $S^3$ universe at the final stage, $S_{rad}$, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_h$, and show that it becomes maximum around $v_h = \mathcal{O}(300\ \mathrm{GeV})$ when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by $v_h \sim T_{BBN}^2/(M_{pl}\, y_e^5)$, where $y_e$ is the Yukawa coupling of the electron, $T_{BBN}$ is the temperature at which Big Bang nucleosynthesis starts, and $M_{pl}$ is the Planck mass.

  14. Maximum entropy analysis of hydraulic pipe networks

    NASA Astrophysics Data System (ADS)

    Waldrip, Steven H.; Niven, Robert K.; Abel, Markus; Schlegel, Michael

    2014-12-01

    A Maximum Entropy (MaxEnt) method is developed to infer mean external and internal flow rates and mean pressure gradients (potential differences) in hydraulic pipe networks, with or without sufficient constraints to render the system deterministic. The proposed method substantially extends existing methods for the analysis of flow networks (e.g. Hardy-Cross), which are applicable only to deterministic networks.

  15. 5 CFR 9701.312 - Maximum rates.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Administrative Personnel DEPARTMENT OF HOMELAND SECURITY HUMAN RESOURCES MANAGEMENT SYSTEM (DEPARTMENT OF HOMELAND SECURITY-OFFICE OF PERSONNEL MANAGEMENT) DEPARTMENT OF HOMELAND SECURITY HUMAN RESOURCES MANAGEMENT SYSTEM Pay and Pay Administration Overview of Pay System § 9701.312 Maximum rates. (a) DHS may...

  16. Comparing maximum pressures in internal combustion engines

    NASA Technical Reports Server (NTRS)

    Sparrow, Stanwood W; Lee, Stephen M

    1922-01-01

    Thin metal diaphragms form a satisfactory means for comparing maximum pressures in internal combustion engines. The diaphragm is clamped between two metal washers in a spark plug shell and its thickness is chosen such that, when subjected to explosion pressure, the exposed portion will be sheared from the rim in a short time.

  17. Minimal length, Friedmann equations and maximum density

    NASA Astrophysics Data System (ADS)

    Awad, Adel; Ali, Ahmed Farag

    2014-06-01

    Inspired by Jacobson's thermodynamic approach [4], Cai et al. [5, 6] have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation [6] of the Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure p(ρ, a) leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature k. As an example we study the evolution of the equation of state p = ωρ through its phase-space diagram to show the existence of a maximum energy which is reachable in a finite time.

  18. 7 CFR 1778.11 - Maximum grants.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... quantity of potable water, or an anticipated acute shortage or significant decline, cannot exceed $150,000... (CONTINUED) EMERGENCY AND IMMINENT COMMUNITY WATER ASSISTANCE GRANTS § 1778.11 Maximum grants. (a) Grants not to exceed $500,000 may be made to alleviate a significant decline in quantity or quality of...

  19. 7 CFR 1778.11 - Maximum grants.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... quantity of potable water, or an anticipated acute shortage or significant decline, cannot exceed $150,000... (CONTINUED) EMERGENCY AND IMMINENT COMMUNITY WATER ASSISTANCE GRANTS § 1778.11 Maximum grants. (a) Grants not to exceed $500,000 may be made to alleviate a significant decline in quantity or quality of...

  20. 7 CFR 1778.11 - Maximum grants.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... quantity of potable water, or an anticipated acute shortage or significant decline, cannot exceed $150,000... (CONTINUED) EMERGENCY AND IMMINENT COMMUNITY WATER ASSISTANCE GRANTS § 1778.11 Maximum grants. (a) Grants not to exceed $500,000 may be made to alleviate a significant decline in quantity or quality of...

  1. Maximum Possible Transverse Velocity in Special Relativity.

    ERIC Educational Resources Information Center

    Medhekar, Sarang

    1991-01-01

    Using a physical picture, an expression is derived for the maximum possible transverse velocity of a linear emitter in the special theory of relativity, together with the orientation required to attain it. A differential calculus method is also used to derive the expression. (Author/KR)
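    For orientation, the sketch below checks the textbook apparent-transverse-velocity result numerically; this is the standard treatment of the problem and may differ in detail from the author's setup. For a source moving at speed beta (in units of c) at angle theta to the line of sight, the apparent transverse speed is beta*sin(theta)/(1 - beta*cos(theta)), and its maximum over theta is gamma*beta, attained at cos(theta) = beta.

        # Numerical check that max over theta of beta*sin(theta)/(1 - beta*cos(theta))
        # equals gamma*beta (the standard apparent-superluminal-motion result).
        import numpy as np

        beta = 0.9
        theta = np.linspace(1e-4, np.pi - 1e-4, 200000)
        beta_T = beta * np.sin(theta) / (1 - beta * np.cos(theta))

        gamma = 1.0 / np.sqrt(1.0 - beta**2)
        print(beta_T.max())   # ~2.065 (the apparent speed can exceed c)
        print(gamma * beta)   # 2.065..., the analytic maximum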

  2. 24 CFR 200.15 - Maximum mortgage.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Maximum mortgage. 200.15 Section 200.15 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued) OFFICE OF ASSISTANT SECRETARY FOR HOUSING-FEDERAL HOUSING COMMISSIONER, DEPARTMENT OF HOUSING AND...

  3. Maximum rotation frequency of strange stars

    SciTech Connect

    Zdunik, J.L.; Haensel, P. )

    1990-07-15

    Using the MIT bag model of strange-quark matter, we calculate the maximum angular frequency of the uniform rotation of strange stars. After studying a broad range of the MIT bag-model parameters, we obtain an upper bound of 12.3 kHz.

  4. Gluten Sensitivity

    MedlinePlus

    Gluten is a protein found in wheat, rye, and barley. It is found mainly in foods but ... products like medicines, vitamins, and supplements. People with gluten sensitivity have problems with gluten. It is different ...

  5. Climate Sensitivity

    NASA Astrophysics Data System (ADS)

    Hansen, J.

    2007-12-01

    Discussion of climate sensitivity requires careful definition of forcings, feedbacks and response times; indeed, foggy definitions have produced flawed assessments of climate sensitivity. The best information available on climate sensitivity comes from insightful interpretation of the Earth's history, aided by quantitative information from climate models and understanding of climate processes. Climate sensitivity is a strong function of time scale, in part because of the nature of climate feedbacks. Unfortunately for humanity, the preponderance of feedbacks on the century time scale appears to be positive. The chief implication is the need for a sharp reversal in the trend of human-made climate forcing if we are to avoid creating a planet that is dramatically different from the one on which civilization developed.

  6. Influence of maximum bite force on jaw movement during gummy jelly mastication.

    PubMed

    Kuninori, T; Tomonari, H; Uehara, S; Kitashima, F; Yagi, T; Miyawaki, S

    2014-05-01

    It is known that maximum bite force has various influences on chewing function; however, the relationship between maximum bite force and masticatory jaw movement has not been clarified. The aim of this study was to investigate the effect of maximum bite force on masticatory jaw movement in subjects with normal occlusion. Thirty young adults (22 men and 8 women; mean age, 22.6 years) with good occlusion were divided into two groups based on whether they had a relatively high or low maximum bite force according to the median. The maximum bite force was determined with the Dental Prescale System using pressure-sensitive sheets. Jaw movement during mastication of hard gummy jelly (each piece 5.5 g) on the preferred chewing side was recorded using a six-degrees-of-freedom jaw movement recording system. The motion of the lower incisal point of the mandible was computed, and the mean values of 10 cycles (cycles 2-11) were calculated. A masticatory performance test was conducted using gummy jelly. Subjects with a lower maximum bite force showed increased maximum lateral amplitude, closing distance, width and closing angle; wider masticatory jaw movement; and significantly lower masticatory performance. However, no differences in the maximum vertical or maximum anteroposterior amplitudes were observed between the groups. Although other factors, such as individual morphology, may influence masticatory jaw movement, our results suggest that subjects with a lower maximum bite force show increased lateral jaw motion during mastication.

  7. Attribution of Annual Maximum Sea Levels to Tropical Cyclones

    NASA Astrophysics Data System (ADS)

    Khouakhi, A.; Villarini, G.

    2015-12-01

    Tropical cyclones (TCs) can cause catastrophic storm surges with major social, economic, and ecological impacts in coastal areas. Understanding the contribution of TCs to extreme sea levels is therefore essential. In this work we examine the contribution of TCs to annual maximum sea levels at the global scale, including potential climate controls and temporal changes. Complete global coverage (1842-2014) of historical 6-hour best track TC records are obtained from the International Best Track Archive for Climate Stewardship (IBTrACS) data set. Hourly tide gauge data are obtained from the Joint Archive for Sea Level Research Quality Data Set. There are 177 tide gauge stations with at least 25 complete years of data between 1970 and 2014 (a complete year is defined as having more than 90% of all the hourly measurements in a year). We associate an annual maximum sea level at a given station with a TC if the center of circulation of the storm passed within a certain distance from the station within a given time window. Spatial and temporal sensitivity analyses are performed with varying time windows (6h, 12h) and buffer zones (200km and 500km) around the tide gauge stations. Results highlight large regional differences, with some locations experiencing almost ¾ of their annual maxima during the passage of a TC. The attribution of annual maximum sea level to TCs is particularly high along the coastal areas of the eastern United States, the Gulf of Mexico, China, Japan, Taiwan and Western Australia. Further analyses will examine the role played by El Niño - Southern Oscillation and the potential temporal changes in TC contributions to annual maximum sea levels.
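    The attribution rule described above (a distance buffer plus a time window around each annual maximum) reduces to a simple geometric test. The sketch below is illustrative only; the data structures, the haversine distance, the made-up example values, and the default 200 km / 6 h thresholds are assumptions mirroring the sensitivity ranges quoted in the abstract.

        # Attribute an annual-maximum sea level to a TC if any best-track fix falls
        # within a distance buffer and a time window of the maximum (illustrative sketch).
        from datetime import datetime
        from math import radians, sin, cos, asin, sqrt

        def haversine_km(lat1, lon1, lat2, lon2):
            """Great-circle distance in kilometres."""
            lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
            a = sin((lat2 - lat1) / 2)**2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2)**2
            return 2 * 6371.0 * asin(sqrt(a))

        def attributed_to_tc(max_time, gauge_lat, gauge_lon, tc_fixes,
                             buffer_km=200.0, window_h=6.0):
            """tc_fixes: iterable of (datetime, lat, lon) best-track positions."""
            for t, lat, lon in tc_fixes:
                close_in_time = abs((t - max_time).total_seconds()) <= window_h * 3600
                if close_in_time and haversine_km(gauge_lat, gauge_lon, lat, lon) <= buffer_km:
                    return True
            return False

        # Example with made-up coordinates and times (not real storm data):
        gauge = (24.55, -81.81)
        fixes = [(datetime(2005, 10, 24, 12), 24.8, -81.5),
                 (datetime(2005, 10, 24, 18), 25.6, -80.9)]
        print(attributed_to_tc(datetime(2005, 10, 24, 10), gauge[0], gauge[1], fixes))  # True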

  8. Project ACHIEVE final report

    SciTech Connect

    1997-06-13

    Project ACHIEVE was a math/science academic enhancement program aimed at first-year high school Hispanic American students. Four high schools -- two in El Paso, Texas and two in Bakersfield, California -- participated in this Department of Energy-funded program during the spring and summer of 1996. Over 50 students, many of whom felt they were facing a nightmare future, were given the opportunity to work closely with personal computers and software, sophisticated calculators, and computer-based laboratories -- an experience which their regular academic curriculum did not provide. Math and science projects, exercises, and experiments were completed that emphasized independent and creative applications of scientific and mathematical theories to real-world problems. The most important outcome was the exposure Project ACHIEVE provided to students concerning the college and technical-field career possibilities available to them.

  9. Theoretical Analysis of Maximum Flow Declination Rate versus Maximum Area Declination Rate in Phonation

    ERIC Educational Resources Information Center

    Titze, Ingo R.

    2006-01-01

    Purpose: Maximum flow declination rate (MFDR) in the glottis is known to correlate strongly with vocal intensity in voicing. This declination, or negative slope on the glottal airflow waveform, is in part attributable to the maximum area declination rate (MADR) and in part to the overall inertia of the air column of the vocal tract (lungs to…

  10. 50 CFR 259.34 - Minimum and maximum deposits; maximum time to deposit.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 50 Wildlife and Fisheries 11 2013-10-01 2013-10-01 false Minimum and maximum deposits; maximum time to deposit. 259.34 Section 259.34 Wildlife and Fisheries NATIONAL MARINE FISHERIES SERVICE, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE AID TO FISHERIES...

  11. Developability assessment of clinical drug products with maximum absorbable doses.

    PubMed

    Ding, Xuan; Rose, John P; Van Gelder, Jan

    2012-05-10

    Maximum absorbable dose refers to the maximum amount of an orally administered drug that can be absorbed in the gastrointestinal tract. Maximum absorbable dose, or D(abs), has proved to be an important parameter for quantifying the absorption potential of drug candidates. The purpose of this work is to validate the use of D(abs) in a developability assessment context, and to establish an appropriate protocol and interpretation criteria for this application. Three methods for calculating D(abs) were compared by assessing how well the methods predicted the absorption limit for a set of real clinical candidates. D(abs) was calculated for these clinical candidates by means of a simple equation and two computer simulation programs, GastroPlus and a program developed at Eli Lilly and Company. Results from single dose escalation studies in Phase I clinical trials were analyzed to identify the maximum absorbable doses for these compounds. Compared with the clinical results, the equation and both simulation programs provide conservative estimates of D(abs), but in general the D(abs) values from the computer simulations are more accurate, which gives the simulations a clear advantage in developability assessment. Computer simulations also revealed the complex behavior associated with absorption saturation and suggested that, in most cases, the D(abs) limit is not likely to be reached in a typical clinical dose range. On the basis of the validation findings, an approach is proposed for assessing absorption potential, and best practices are discussed for the use of D(abs) estimates to inform clinical formulation development strategies.
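    The abstract does not state which "simple equation" was used; one widely cited closed-form estimate of maximum absorbable dose is D_abs = S * ka * SIWV * SITT (solubility times transintestinal absorption rate constant times small-intestinal water volume times transit time). The sketch below works through that assumed form with purely hypothetical inputs.

        # Hedged illustration of a common closed-form maximum-absorbable-dose estimate;
        # all numerical inputs are hypothetical.
        S = 0.05       # mg/mL, aqueous solubility (hypothetical)
        ka = 0.05      # 1/min, transintestinal absorption rate constant (hypothetical)
        SIWV = 250.0   # mL, small-intestinal water volume (commonly assumed value)
        SITT = 270.0   # min, small-intestinal transit time (commonly assumed value)

        D_abs = S * ka * SIWV * SITT
        print(f"D_abs ~ {D_abs:.0f} mg")  # ~169 mg for these illustrative inputs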

  12. The maximum intelligible range of the human voice

    NASA Astrophysics Data System (ADS)

    Boren, Braxton

    This dissertation examines the acoustics of the spoken voice at high levels and the maximum number of people that could hear such a voice unamplified in the open air. In particular, it examines an early auditory experiment by Benjamin Franklin which sought to determine the maximum intelligible crowd for the Anglican preacher George Whitefield in the eighteenth century. Using Franklin's description of the experiment and a noise source on Front Street, the geometry and diffraction effects of such a noise source are examined to more precisely pinpoint Franklin's position when Whitefield's voice ceased to be intelligible. Based on historical maps, drawings, and prints, the geometry and material of Market Street is constructed as a computer model which is then used to construct an acoustic cone tracing model. Based on minimal values of the Speech Transmission Index (STI) at Franklin's position, Whitefield's on-axis Sound Pressure Level (SPL) at 1 m is determined, leading to estimates centering around 90 dBA. Recordings are carried out on trained actors and singers to determine their maximum time-averaged SPL at 1 m. This suggests that the greatest average SPL achievable by the human voice is 90-91 dBA, similar to the median estimates for Whitefield's voice. The sites of Whitefield's largest crowds are acoustically modeled based on historical evidence and maps. Based on Whitefield's SPL, the minimal STI value, and the crowd's background noise, this allows a prediction of the minimally intelligible area for each site. These yield maximum crowd estimates of 50,000 under ideal conditions, while crowds of 20,000 to 30,000 seem more reasonable when the crowd was reasonably quiet and Whitefield's voice was near 90 dBA.
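    A crude free-field calculation shows how a ~90 dBA source level translates into crowd sizes of this order. Everything below is an assumption for illustration (inverse-square spreading, a required listener level of 45 dBA, one listener per 2 m^2); the dissertation's own estimates rest on STI thresholds, diffraction, and cone-tracing models rather than this back-of-the-envelope version.

        # Inverse-square (free-field) sketch: level at distance r is L1 - 20*log10(r / 1 m).
        import math

        L1 = 90.0          # dBA at 1 m on-axis (matches the estimate quoted above)
        L_required = 45.0  # dBA assumed necessary for intelligibility over a quiet crowd
        density = 0.5      # listeners per square metre (assumed)

        r_max = 10 ** ((L1 - L_required) / 20.0)  # distance at which the level falls to L_required
        crowd = density * math.pi * r_max**2      # listeners inside a full circle of that radius
        print(f"r_max ~ {r_max:.0f} m, crowd ~ {crowd:.0f}")  # ~178 m and ~50,000 listeners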

  13. Maximum independent set on diluted triangular lattices

    NASA Astrophysics Data System (ADS)

    Fay, C. W., IV; Liu, J. W.; Duxbury, P. M.

    2006-05-01

    Core percolation and maximum independent set on random graphs have recently been characterized using the methods of statistical physics. Here we present a statistical physics study of these problems on bond diluted triangular lattices. Core percolation critical behavior is found to be consistent with the standard percolation values, though there are strong finite size effects. A transfer matrix method is developed and applied to find accurate values of the density and degeneracy of the maximum independent set on lattices of limited width but large length. An extrapolation of these results to the infinite lattice limit yields high precision results, which are tabulated. These results are compared to results found using both vertex based and edge based local probability recursion algorithms, which have proven useful in the analysis of hard computational problems, such as the satisfiability problem.

  14. Maximum-entropy description of animal movement.

    PubMed

    Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M

    2015-03-01

    We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
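    As a concrete member of the class described above, the sketch below simulates a one-dimensional Ornstein-Uhlenbeck position process (home-range-like movement) with the Euler-Maruyama scheme. Parameters are arbitrary; this is an illustration, not the authors' framework.

        # 1-D Ornstein-Uhlenbeck position process: dx = -(x - mu)/tau dt + sigma dW.
        import numpy as np

        rng = np.random.default_rng(0)
        mu, tau, sigma = 0.0, 10.0, 1.0   # home-range centre, relaxation timescale, noise intensity
        dt, n = 0.1, 5000

        x = np.empty(n)
        x[0] = mu
        for t in range(1, n):
            x[t] = x[t-1] - (x[t-1] - mu) / tau * dt + sigma * np.sqrt(dt) * rng.standard_normal()

        # The stationary variance should approach sigma^2 * tau / 2 = 5 for these parameters.
        print(x[1000:].var())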

  15. Maximum constrained sparse coding for image representation

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Zhao, Danpei; Jiang, Zhiguo

    2015-12-01

    Sparse coding exhibits good performance in many computer vision applications by finding bases which capture high-level semantics of the data and learning sparse coefficients in terms of the bases. However, due to the fact that the bases are non-orthogonal, sparse coding can hardly preserve the samples' similarity, which is important for discrimination. In this paper, a new image representation method called maximum constrained sparse coding (MCSC) is proposed. Sparse representation with more active coefficients means more similarity information, and the infinity norm is added to the solution for this purpose. We solve the optimization by constraining the codes' maximum and releasing the residual to other dictionary atoms. Experimental results on image clustering show that our method can preserve the similarity of adjacent samples and maintain the sparsity of the code simultaneously.

  16. Zipf's law, power laws and maximum entropy

    NASA Astrophysics Data System (ADS)

    Visser, Matt

    2013-04-01

    Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
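    The argument sketched above fits in a few lines. Maximizing the Shannon entropy subject to normalization and a fixed mean logarithm of the observable (a standard Lagrange-multiplier manipulation, written here for orientation) gives

        \max_{p}\; H = -\sum_k p_k \ln p_k
        \quad \text{subject to} \quad \sum_k p_k = 1, \qquad \sum_k p_k \ln k = \mu,

        L = -\sum_k p_k \ln p_k + \alpha\Big(\sum_k p_k - 1\Big) - \lambda\Big(\sum_k p_k \ln k - \mu\Big),

        \frac{\partial L}{\partial p_k} = -\ln p_k - 1 + \alpha - \lambda \ln k = 0
        \quad\Longrightarrow\quad p_k = e^{\alpha - 1}\, k^{-\lambda} \propto k^{-\lambda},

    i.e. a pure power law whose exponent λ is fixed by the value of the single constraint μ.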

  17. Model Fit after Pairwise Maximum Likelihood

    PubMed Central

    Barendse, M. T.; Ligtvoet, R.; Timmerman, M. E.; Oort, F. J.

    2016-01-01

    Maximum likelihood factor analysis of discrete data within the structural equation modeling framework rests on the assumption that the observed discrete responses are manifestations of underlying continuous scores that are normally distributed. As maximizing the likelihood of multivariate response patterns is computationally very intensive, the sum of the log-likelihoods of the bivariate response patterns is maximized instead. Little is yet known about how to assess model fit when the analysis is based on such a pairwise maximum likelihood (PML) of two-way contingency tables. We propose new fit criteria for the PML method and conduct a simulation study to evaluate their performance in model selection. With large sample sizes (500 or more), PML performs as well as the robust weighted least squares analysis of polychoric correlations. PMID:27148136

  18. Pareto versus lognormal: A maximum entropy test

    NASA Astrophysics Data System (ADS)

    Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano

    2011-08-01

    It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.

  19. A Maximum Radius for Habitable Planets.

    PubMed

    Alibert, Yann

    2015-09-01

    We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: (1) a surface temperature and pressure compatible with the existence of liquid water, and (2) no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios, and when taking into account irradiation effects on the structure of the gas envelope. PMID:26159097

  20. Evaluation of the Maximum Allowable Cost Program

    PubMed Central

    Lee, A. James; Hefner, Dennis; Dobson, Allen; Hardy, Ralph

    1983-01-01

    This article summarizes an evaluation of the Maximum Allowable Cost (MAC)-Estimated Acquisition Cost (EAC) program, the Federal Government's cost-containment program for prescription drugs. The MAC-EAC regulations, which became effective on August 26, 1976, have four major components: (1) Maximum Allowable Cost reimbursement limits for selected multisource or generically available drugs; (2) Estimated Acquisition Cost reimbursement limits for all drugs; (3) “usual and customary” reimbursement limits for all drugs; and (4) a directive that professional fee studies be performed by each State. The study examines the benefits and costs of the MAC reimbursement limits for 15 dosage forms of five multisource drugs and EAC reimbursement limits for all drugs for five selected States as of 1979. PMID:10309857

  1. Pareto versus lognormal: a maximum entropy test.

    PubMed

    Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano

    2011-08-01

    It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.

  2. Maximum hydrocarbon window determination in South Louisiana

    SciTech Connect

    Leach, W.G. )

    1993-03-29

    This is the third and final part of a three-part article about the distribution of hydrocarbons in the Tertiary sands of South Louisiana. Based on many individual plots, it was found that hydrocarbon distribution varies according to the depth of abnormal pressure and lithology. The relation of maximum hydrocarbon distribution to formation fracture strength or depth opens the door to the use of a maximum hydrocarbon window (MHW) technique. This MHW technique can be used as a decision-making tool for how deep to drill a well, particularly how deep to drill below the top of abnormal pressure. The paper describes the benefits of the MHW technique and its future potential for exploration and development operations.

  3. A Maximum Radius for Habitable Planets.

    PubMed

    Alibert, Yann

    2015-09-01

    We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: (1) a surface temperature and pressure compatible with the existence of liquid water, and (2) no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios, and when taking into account irradiation effects on the structure of the gas envelope.

  4. MAXIMUM LIKELIHOOD ESTIMATION FOR SOCIAL NETWORK DYNAMICS

    PubMed Central

    Snijders, Tom A.B.; Koskinen, Johan; Schweinberger, Michael

    2014-01-01

    A model for network panel data is discussed, based on the assumption that the observed data are discrete observations of a continuous-time Markov process on the space of all directed graphs on a given node set, in which changes in tie variables are independent conditional on the current graph. The model for tie changes is parametric and designed for applications to social network analysis, where the network dynamics can be interpreted as being generated by choices made by the social actors represented by the nodes of the graph. An algorithm for calculating the Maximum Likelihood estimator is presented, based on data augmentation and stochastic approximation. An application to an evolving friendship network is given and a small simulation study is presented which suggests that for small data sets the Maximum Likelihood estimator is more efficient than the earlier proposed Method of Moments estimator. PMID:25419259

  5. 5 CFR 534.203 - Maximum stipends.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... training program Maximums by grade and step 1 L-A Below high school graduation GS-1-1 (minus 3 steps). L-1... year postgraduate predoctoral GS-7-1 (minus 3 steps). L-6 Third year medical school GS-7-1 (minus 3 steps). L-7 Third year postgraduate predoctoral GS-9-1 (minus 3 steps). L-7 Fourth year medical...

  6. 5 CFR 534.203 - Maximum stipends.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... training program Maximums by grade and step 1 L-A Below high school graduation GS-1-1 (minus 3 steps). L-1... year postgraduate predoctoral GS-7-1 (minus 3 steps). L-6 Third year medical school GS-7-1 (minus 3 steps). L-7 Third year postgraduate predoctoral GS-9-1 (minus 3 steps). L-7 Fourth year medical...

  7. 5 CFR 534.203 - Maximum stipends.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... training program Maximums by grade and step 1 L-A Below high school graduation GS-1-1 (minus 3 steps). L-1... year postgraduate predoctoral GS-7-1 (minus 3 steps). L-6 Third year medical school GS-7-1 (minus 3 steps). L-7 Third year postgraduate predoctoral GS-9-1 (minus 3 steps). L-7 Fourth year medical...

  8. 5 CFR 534.203 - Maximum stipends.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... training program Maximums by grade and step 1 L-A Below high school graduation GS-1-1 (minus 3 steps). L-1... year postgraduate predoctoral GS-7-1 (minus 3 steps). L-6 Third year medical school GS-7-1 (minus 3 steps). L-7 Third year postgraduate predoctoral GS-9-1 (minus 3 steps). L-7 Fourth year medical...

  9. Tissue radiation response with maximum Tsallis entropy.

    PubMed

    Sotolongo-Grau, O; Rodríguez-Pérez, D; Antoranz, J C; Sotolongo-Costa, Oscar

    2010-10-01

    The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature. PMID:21230944
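    For orientation, the kind of expression such a maximum-entropy argument produces (a sketch; the precise form and the relation between the exponent and the Tsallis index q should be taken from the paper itself): the Boltzmann-Gibbs entropy with a mean-effect constraint yields an exponential survival fraction, F(D) ∝ exp(-αD), while the Tsallis entropy combined with a cutoff dose D_0 yields a bounded power-law form

        F(D) = \left(1 - \frac{D}{D_0}\right)^{\gamma}, \qquad 0 \le D < D_0, \qquad F(D) = 0 \ \ \text{for}\ D \ge D_0,

    with γ determined by the Tsallis index q, and the exponential law recovered in the limit q → 1.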

  10. Maximum privacy without coherence, zero-error

    NASA Astrophysics Data System (ADS)

    Leung, Debbie; Yu, Nengkun

    2016-09-01

    We study the possible difference between the quantum and the private capacities of a quantum channel in the zero-error setting. For a family of channels introduced by Leung et al. [Phys. Rev. Lett. 113, 030512 (2014)], we demonstrate an extreme difference: the zero-error quantum capacity is zero, whereas the zero-error private capacity is maximum given the quantum output dimension.

  11. Tissue Radiation Response with Maximum Tsallis Entropy

    SciTech Connect

    Sotolongo-Grau, O.; Rodriguez-Perez, D.; Antoranz, J. C.; Sotolongo-Costa, Oscar

    2010-10-08

    The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.

  12. Maximum entropy and Bayesian methods. Proceedings.

    NASA Astrophysics Data System (ADS)

    Grandy, W. T., Jr.; Schick, L. H.

    This volume contains a selection of papers presented at the Tenth Annual Workshop on Maximum Entropy and Bayesian Methods. The thirty-six papers included cover a wide range of applications in areas such as economics and econometrics, astronomy and astrophysics, general physics, complex systems, image reconstruction, and probability and mathematics. Together they give an excellent state-of-the-art overview of fundamental methods of data analysis.

  13. Tissue radiation response with maximum Tsallis entropy.

    PubMed

    Sotolongo-Grau, O; Rodríguez-Pérez, D; Antoranz, J C; Sotolongo-Costa, Oscar

    2010-10-01

    The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.

  14. Maximum-biomass prediction of homofermentative Lactobacillus.

    PubMed

    Cui, Shumao; Zhao, Jianxin; Liu, Xiaoming; Chen, Yong Q; Zhang, Hao; Chen, Wei

    2016-07-01

    Fed-batch and pH-controlled cultures have been widely used for industrial production of probiotics. The aim of this study was to systematically investigate the relationship between the maximum biomass of different homofermentative Lactobacillus and lactate accumulation, and to develop a prediction equation for the maximum biomass concentration in such cultures. The accumulation of the end products and the depletion of nutrients by various strains were evaluated. In addition, the minimum inhibitory concentrations (MICs) of acid anions for various strains at pH 7.0 were examined. The lactate concentration at the point of complete inhibition was not significantly different from the MIC of lactate for all of the strains, although the inhibition mechanism of lactate and acetate on Lactobacillus rhamnosus was different from the other strains which were inhibited by the osmotic pressure caused by acid anions at pH 7.0. When the lactate concentration accumulated to the MIC, the strains stopped growing. The maximum biomass was closely related to the biomass yield per unit of lactate produced (Y_X/P) and the MIC (C) of lactate for different homofermentative Lactobacillus. Based on the experimental data obtained using different homofermentative Lactobacillus, a prediction equation was established as follows: X_max - X_0 = (0.59 ± 0.02)·Y_X/P·C. PMID:26896862
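    A worked example of the quoted prediction equation. The coefficient 0.59 comes from the abstract; the yield, MIC, and inoculum values below are purely hypothetical.

        # X_max - X_0 = (0.59 +/- 0.02) * Y_X/P * C, evaluated with hypothetical inputs.
        Y_XP = 0.12   # g biomass per g lactate (hypothetical yield)
        C = 160.0     # g/L, hypothetical MIC of lactate at pH 7.0
        X0 = 0.1      # g/L, hypothetical inoculum biomass

        X_max = X0 + 0.59 * Y_XP * C
        print(f"predicted maximum biomass ~ {X_max:.1f} g/L")  # ~11.4 g/L for these inputs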

  15. Maximum saliency bias in binocular fusion

    NASA Astrophysics Data System (ADS)

    Lu, Yuhao; Stafford, Tom; Fox, Charles

    2016-07-01

    Subjective experience at any instant consists of a single ("unitary"), coherent interpretation of sense data rather than a "Bayesian blur" of alternatives. However, computation of Bayes-optimal actions has no role for unitary perception, instead being required to integrate over every possible action-percept pair to maximise expected utility. So what is the role of unitary coherent percepts, and how are they computed? Recent work provided objective evidence for non-Bayes-optimal, unitary coherent, perception and action in humans; and further suggested that the percept selected is not the maximum a posteriori percept but is instead affected by utility. The present study uses a binocular fusion task first to reproduce the same effect in a new domain, and second, to test multiple hypotheses about exactly how utility may affect the percept. After accounting for high experimental noise, it finds that both Bayes optimality (maximise expected utility) and the previously proposed maximum-utility hypothesis are outperformed in fitting the data by a modified maximum-salience hypothesis, using unsigned utility magnitudes in place of signed utilities in the bias function.

  16. "SPURS" in the North Atlantic Salinity Maximum

    NASA Astrophysics Data System (ADS)

    Schmitt, Raymond

    2014-05-01

    The North Atlantic Salinity Maximum is the world's saltiest open-ocean salinity maximum and was the focus of the recent Salinity Processes Upper-ocean Regional Study (SPURS) program. SPURS was a joint venture between US, French, Irish, and Spanish investigators. Three US and two EU cruises were involved from August 2012 to October 2013, as well as surface moorings, glider, drifter and float deployments. Shipboard operations included underway meteorological and oceanic data, hydrographic surveys and turbulence profiling. The goal is to improve our understanding of how the salinity maximum is maintained and how it may be changing. It is formed by an excess of evaporation over precipitation and the wind-driven convergence of the subtropical gyre. Such salty areas are getting saltier with global warming (a record high SSS was observed in SPURS), and it is imperative to determine the relative roles of surface water fluxes and oceanic processes in such trends. The combination of accurate surface flux estimates with new assessments of vertical and horizontal mixing in the ocean will help elucidate the utility of ocean salinity in quantifying the changing global water cycle.

  17. Does achievement motivation mediate the semantic achievement priming effect?

    PubMed

    Engeser, Stefan; Baumann, Nicola

    2014-10-01

    The aim of our research was to understand the processes of the prime-to-behavior effects with semantic achievement primes. We extended existing models with a perspective from achievement motivation theory and additionally used achievement primes embedded in the running text of excerpts of school textbooks to simulate a more natural priming condition. Specifically, we proposed that achievement primes affect implicit achievement motivation and conducted pilot experiments and 3 main experiments to explore this proposition. We found no reliable positive effect of achievement primes on implicit achievement motivation. In light of these findings, we tested whether explicit (instead of implicit) achievement motivation is affected by achievement primes and found this to be the case. In the final experiment, we found support for the assumption that higher explicit achievement motivation implies that achievement priming affects the outcome expectations. The implications of the results are discussed, and we conclude that primes affect achievement behavior by heightening explicit achievement motivation and outcome expectancies. PMID:24820250

  18. On the existence of maximum likelihood estimates for presence-only data

    USGS Publications Warehouse

    Hefley, Trevor J.; Hooten, Mevin B.

    2015-01-01

    It is important to identify conditions for which maximum likelihood estimates are unlikely to be identifiable from presence-only data. In data sets where the maximum likelihood estimates do not exist, penalized likelihood and Bayesian methods will produce coefficient estimates, but these are sensitive to the choice of estimation procedure and prior or penalty term. When sample size is small or it is thought that habitat preferences are strong, we propose a suite of estimation procedures researchers can consider using.

  19. Spin-torque diode with tunable sensitivity and bandwidth by out-of-plane magnetic field

    NASA Astrophysics Data System (ADS)

    Li, X.; Zheng, C.; Zhou, Y.; Kubota, H.; Yuasa, S.; Pong, Philip W. T.

    2016-06-01

    Spin-torque diodes based on nanosized magnetic tunnel junctions are novel microwave detectors with high sensitivity and wide frequency bandwidth. While previous reports mainly focus on improving the sensitivity, approaches to extend the bandwidth are limited. This work experimentally demonstrates that, by optimizing the orientation of the external magnetic field, a wide bandwidth can be achieved while maintaining high sensitivity. The mechanism of the frequency- and sensitivity-tuning is investigated by analyzing the dependence of the resonant frequency and DC voltage on the magnitude and tilt angle of the hard-plane magnetic field. The frequency dependence is qualitatively explained by Kittel's ferromagnetic resonance model. The asymmetric resonant frequency at positive and negative magnetic field is verified by numerical simulation considering the in-plane anisotropy. The DC voltage dependence is interpreted by evaluating the misalignment angle between the magnetization of the free layer and the reference layer. The tunability of the detector performance by the magnetic field angle is evaluated by characterizing the sensitivity and bandwidth under a 3D magnetic field. A frequency bandwidth up to 9.8 GHz or a maximum sensitivity up to 154 mV/mW (after impedance mismatch correction) can be achieved by tuning the angle of the applied magnetic field. The results show that the bandwidth and sensitivity can be controlled and adjusted by optimizing the orientation of the magnetic field for various applications and requirements.
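    For context, the textbook Kittel relation for a thin film magnetized in plane (quoted here in CGS form as background; the analysis above additionally accounts for the field tilt and in-plane anisotropy) is

        f = \frac{\gamma}{2\pi}\,\sqrt{H\,\left(H + 4\pi M_{\mathrm{s}}\right)},

    so the resonant frequency grows roughly as the square root of the in-plane field, while tilting the field toward the hard plane reduces the effective in-plane component and modifies this dependence, which is why the full free-energy treatment is needed for the asymmetries reported above.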

  20. Estimating landscape carrying capacity through maximum clique analysis.

    PubMed

    Donovan, Therese M; Warrington, Gregory S; Schwenk, W Scott; Dinitz, Jeffrey H

    2012-12-01

    Habitat suitability (HS) maps are widely used tools in wildlife science and establish a link between wildlife populations and landscape pattern. Although HS maps spatially depict the distribution of optimal resources for a species, they do not reveal the population size a landscape is capable of supporting--information that is often crucial for decision makers and managers. We used a new approach, "maximum clique analysis," to demonstrate how HS maps for territorial species can be used to estimate the carrying capacity, N(k), of a given landscape. We estimated the N(k) of Ovenbirds (Seiurus aurocapillus) and bobcats (Lynx rufus) in an 1153-km2 study area in Vermont, USA. These two species were selected to highlight different approaches in building an HS map as well as computational challenges that can arise in a maximum clique analysis. We derived 30-m2 HS maps for each species via occupancy modeling (Ovenbird) and by resource utilization modeling (bobcats). For each species, we then identified all pixel locations on the map (points) that had sufficient resources in the surrounding area to maintain a home range (termed a "pseudo-home range"). These locations were converted to a mathematical graph, where any two points were linked if two pseudo-home ranges could exist on the landscape without violating territory boundaries. We used the program Cliquer to find the maximum clique of each graph. The resulting estimates of N(k) = 236 Ovenbirds and N(k) = 42 female bobcats were sensitive to different assumptions and model inputs. Estimates of N(k) via alternative, ad hoc methods were 1.4 to > 30 times greater than the maximum clique estimate, suggesting that the alternative results may be upwardly biased. The maximum clique analysis was computationally intensive but could handle problems with < 1500 total pseudo-home ranges (points). Given present computational constraints, it is best suited for species that occur in clustered distributions (where the problem can be
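    The maximum-clique step itself can be sketched with an off-the-shelf exact solver. The compatibility test below (centres at least some minimum distance apart) is a hypothetical stand-in for the territory-overlap rule derived from the HS maps; networkx's max_weight_clique with unit weights returns a maximum clique.

        # Illustrative carrying-capacity estimate via maximum clique (not the authors' code).
        import itertools
        import networkx as nx
        from networkx.algorithms.clique import max_weight_clique

        def carrying_capacity(points, compatible):
            """points: candidate pseudo-home-range centres; compatible(p, q) -> bool."""
            G = nx.Graph()
            G.add_nodes_from(range(len(points)))
            for i, j in itertools.combinations(range(len(points)), 2):
                if compatible(points[i], points[j]):
                    G.add_edge(i, j)
            clique, size = max_weight_clique(G, weight=None)  # unit weights -> maximum clique
            return size, [points[i] for i in clique]

        # Hypothetical rule: two centres can coexist if they are at least 2 km apart.
        pts = [(0.0, 0.0), (1.0, 0.0), (2.5, 0.0), (5.0, 0.0)]
        far_enough = lambda p, q: ((p[0] - q[0])**2 + (p[1] - q[1])**2) ** 0.5 >= 2.0
        print(carrying_capacity(pts, far_enough))  # size 3: three of the four centres can coexist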

  1. Climatic controls of western U.S. glaciers at the last glacial maximum

    USGS Publications Warehouse

    Hostetler, S.W.; Clark, P.U.

    1997-01-01

    We use a nested atmospheric modeling strategy to simulate precipitation and temperature of the western United States 18,000 years ago (18 ka). The high resolution of the nested model allows us to isolate the regional structure of summer temperature and winter precipitation that is crucial to determination of the net mass balance of late-Pleistocene mountain glaciers in this region of diverse topography and climate. Modeling results suggest that climatic controls of these glaciers varied significantly over the western U.S. Glaciers in the northern Rocky Mountains existed under relatively cold July temperatures and low winter accumulation, reflecting anticyclonic, easterly wind flow off the Laurentide Ice Sheet. In contrast, glaciers that existed under relatively warmer and wetter conditions are located along the Pacific coast south of Oregon, where enhanced westerlies delivered higher precipitation than at present. Between these two groupings lie glaciers that were controlled by a mix of cold and wet conditions attributed to the convergence of cold air from the ice sheet and moisture derived from the westerlies. Sensitivity tests suggest that, for our simulated 18 ka climate, many of the glaciers exhibit a variable response to climate but were generally more sensitive to changes in temperature than to changes in precipitation, particularly those glaciers in central Idaho and the Yellowstone Plateau. Our results support arguments that temperature depression generally played a larger role in lowering equilibrium line altitudes in the western U.S. during the last glacial maximum than did increased precipitation, although the magnitude of temperature depression required for steady-state mass balance varied from 8-18°C. Only the Sierra Nevada glaciers required a substantial increase in precipitation to achieve steady-state mass balance, while glaciers in the Cascade Range existed with decreased precipitation.

  2. Climatic controls of Western U.S. Glaciers at the last glacial maximum

    NASA Astrophysics Data System (ADS)

    Hostetler, Steven W.; Clark, Peter U.

    We use a nested atmospheric modeling strategy to simulate precipitation and temperature of the western United States 18,000 years ago (18 ka). The high resolution of the nested model allows us to isolate the regional structure of summer temperature and winter precipitation that is crucial to determination of the net mass balance of late-Pleistocene mountain glaciers in this region of diverse topography and climate. Modeling results suggest that climatic controls of these glaciers varied significantly over the western U.S. Glaciers in the northern Rocky Mountains existed under relatively cold July temperatures and low winter accumulation, reflecting anticyclonic, easterly wind flow off the Laurentide Ice Sheet. In contrast, glaciers that existed under relatively warmer and wetter conditions are located along the Pacific coast south of Oregon, where enhanced westerlies delivered higher precipitation than at present. Between these two groupings lie glaciers that were controlled by a mix of cold and wet conditions attributed to the convergence of cold air from the ice sheet and moisture derived from the westerlies. Sensitivity tests suggest that, for our simulated 18 ka climate, many of the glaciers exhibit a variable response to climate but were generally more sensitive to changes in temperature than to changes in precipitation, particularly those glaciers in central Idaho and the Yellowstone Plateau. Our results support arguments that temperature depression generally played a larger role in lowering equilibrium line altitudes in the western U.S. during the last glacial maximum than did increased precipitation, although the magnitude of temperature depression required for steady-state mass balance varied from 8-18°C. Only the Sierra Nevada glaciers required a substantial increase in precipitation to achieve steady-state mass balance, while glaciers in the Cascade Range existed with decreased precipitation.

  3. Maximum aposteriori joint source/channel coding

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Gibson, Jerry D.

    1991-01-01

    A maximum aposteriori probability (MAP) approach to joint source/channel coder design is presented in this paper. This method attempts to explore a technique for designing joint source/channel codes, rather than ways of distributing bits between source coders and channel coders. For a nonideal source coder, MAP arguments are used to design a decoder which takes advantage of redundancy in the source coder output to perform error correction. Once the decoder is obtained, it is analyzed with the purpose of obtaining 'desirable properties' of the channel input sequence for improving overall system performance. Finally, an encoder design which incorporates these properties is proposed.
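    A minimal sketch of the MAP idea described above: choose the codeword maximizing log p(r | c) + log p(c), so that residual redundancy left by a non-ideal source coder (a non-uniform prior over codewords) contributes to error correction. The binary symmetric channel, the two-codeword codebook, and the prior below are illustrative assumptions, not the paper's system.

        # Generic MAP decoding over a binary symmetric channel with crossover eps.
        import math

        def map_decode(received, codebook, prior, eps=0.1):
            """received: tuple of bits; codebook: dict name -> tuple of bits;
            prior: dict name -> probability (redundancy left by the source coder)."""
            def log_likelihood(c_bits):
                flips = sum(r != c for r, c in zip(received, c_bits))
                return flips * math.log(eps) + (len(received) - flips) * math.log(1 - eps)
            return max(codebook, key=lambda c: log_likelihood(codebook[c]) + math.log(prior[c]))

        codebook = {"A": (0, 0, 0, 0), "B": (1, 1, 1, 1)}
        prior = {"A": 0.9, "B": 0.1}
        print(map_decode((1, 1, 0, 0), codebook, prior))  # equal likelihoods; the prior picks "A"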

  4. Dynamical maximum entropy approach to flocking

    NASA Astrophysics Data System (ADS)

    Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.

    2014-04-01

    We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.

  5. Multiperiod Maximum Loss is time unit invariant.

    PubMed

    Kovacevic, Raimund M; Breuer, Thomas

    2016-01-01

    Time unit invariance is introduced as an additional requirement for multiperiod risk measures: for a constant portfolio under an i.i.d. risk factor process, the multiperiod risk should equal the one period risk of the aggregated loss, for an appropriate choice of parameters and independent of the portfolio and its distribution. Multiperiod Maximum Loss over a sequence of Kullback-Leibler balls is time unit invariant. This is also the case for the entropic risk measure. On the other hand, multiperiod Value at Risk and multiperiod Expected Shortfall are not time unit invariant. PMID:27563531

  6. Maximum profit performance of an absorption refrigerator

    SciTech Connect

    Chen, L.; Sun, F.; Wu, C.

    1996-12-01

    The operation of an absorption refrigerator is viewed as a production process with exergy as its output. The relations between the optimal profit and COP (coefficient of performance), and the COP bound at the maximum profit of the refrigerator are derived based on a general heat transfer law. The results provide a theoretical basis for developing and utilizing a variety of absorption refrigerators. The focus of this paper is to search the compromise optimization between economics (profit) and the utilization factor (COP) for finite-time endoreversible thermodynamic cycles.

  7. Maximum Temperature Detection System for Integrated Circuits

    NASA Astrophysics Data System (ADS)

    Frankiewicz, Maciej; Kos, Andrzej

    2015-03-01

    The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path, and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.

  8. Maximum a posteriori decoder for digital communications

    NASA Technical Reports Server (NTRS)

    Altes, Richard A. (Inventor)

    1997-01-01

    A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.

  9. Maximum likelihood identification of aircraft parameters with unsteady aerodynamic modelling

    NASA Technical Reports Server (NTRS)

    Keskar, D. A.; Wells, W. R.

    1979-01-01

    A simplified aerodynamic force model based on the physical principle of Prandtl's lifting line theory and the trailing vortex concept has been developed to account for unsteady aerodynamic effects in aircraft dynamics. The longitudinal equations of motion have been modified to include these effects. The presence of convolution integrals in the modified equations of motion led to a frequency domain analysis utilizing Fourier transforms. This reduces the integro-differential equations to relatively simple algebraic equations, thereby reducing computation time significantly. A parameter extraction program based on the maximum likelihood estimation technique is developed in the frequency domain. The extraction algorithm contains a new scheme for obtaining sensitivity functions by numerical differentiation. The paper concludes with examples using computer-generated and real flight data.

  10. Test images for the maximum entropy image restoration method

    NASA Technical Reports Server (NTRS)

    Mackey, James E.

    1990-01-01

    One of the major activities of any experimentalist is data analysis and reduction. In solar physics, remote observations are made of the sun in a variety of wavelengths and circumstances. In no case is the data collected free from the influence of the design and operation of the data gathering instrument as well as the ever present problem of noise. The presence of significant noise invalidates the simple inversion procedure regardless of the range of known correlation functions. The Maximum Entropy Method (MEM) attempts to perform this inversion by making minimal assumptions about the data. To provide a means of testing the MEM and characterizing its sensitivity to noise, choice of point spread function, type of data, etc., one would like to have test images of known characteristics that can represent the type of data being analyzed. A means of reconstructing these images is presented.
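    A sketch of generating such a test image of known characteristics: a point source and an extended source, blurred with a Gaussian point-spread function and corrupted with additive Gaussian noise, so that any restoration method (MEM included) can be scored against the known truth. Image size, PSF width, and noise level are arbitrary choices.

        # Synthetic test image: known truth, Gaussian PSF blur, additive Gaussian noise.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(42)

        truth = np.zeros((128, 128))
        truth[40, 40] = 100.0        # point source
        truth[80:90, 60:70] = 5.0    # extended source

        psf_sigma = 2.0              # "instrument" blur, in pixels
        noise_rms = 0.5

        observed = gaussian_filter(truth, psf_sigma) + noise_rms * rng.standard_normal(truth.shape)
        print(observed.shape, observed.max())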

  11. Achieving closure at Fernald

    SciTech Connect

    Bradburne, John; Patton, Tisha C.

    2001-02-25

    When Fluor Fernald took over the management of the Fernald Environmental Management Project in 1992, the estimated closure date of the site was more than 25 years into the future. Fluor Fernald, in conjunction with DOE-Fernald, introduced the Accelerated Cleanup Plan, which was designed to substantially shorten that schedule and save taxpayers more than $3 billion. The management of Fluor Fernald believes there are three fundamental concerns that must be addressed by any contractor hoping to achieve closure of a site within the DOE complex. They are relationship management, resource management and contract management. Relationship management refers to the interaction between the site and local residents, regulators, union leadership, the workforce at large, the media, and any other interested stakeholder groups. Resource management is of course related to the effective administration of the site knowledge base and the skills of the workforce, the attraction and retention of qualified and competent technical personnel, and the best recognition and use of appropriate new technologies. Perhaps most importantly, resource management must also include a plan for survival in a flat-funding environment. Lastly, creative and disciplined contract management will be essential to effecting the closure of any DOE site. Fluor Fernald, together with DOE-Fernald, is breaking new ground in the closure arena, and "business as usual" has become a thing of the past. How Fluor Fernald has managed its work at the site over the last eight years, and how it will manage the new site closure contract in the future, will be an integral part of achieving successful closure at Fernald.

  12. Coexistence of positive and negative refractive index sensitivity in the liquid-core photonic crystal fiber based plasmonic sensor.

    PubMed

    Shuai, Binbin; Xia, Li; Liu, Deming

    2012-11-01

    We present and numerically characterize a liquid-core photonic crystal fiber based plasmonic sensor. The coupling properties and sensing performance are investigated by the finite element method. It is found that not only the plasmonic mode dispersion relation but also the fundamental mode dispersion relation is rather sensitive to the analyte refractive index (RI). Positive and negative RI sensitivity coexist in the proposed design. The sensor features a positive RI sensitivity when the increment of the SPP mode effective index is larger than that of the fundamental mode, but a negative RI sensitivity once the increment of the fundamental mode becomes larger. A maximum negative RI sensitivity of -5500 nm/RIU (Refractive Index Unit) is achieved in the sensing range of 1.50-1.53. The effects of the structural parameters on the plasmonic excitations are also studied, with a view to tuning and optimizing the resonant spectrum. PMID:23187403
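    In concrete terms (a simple arithmetic illustration; the sensitivity value is the one quoted above, while the 0.001 RIU step is an arbitrary example):

        # Spectral sensitivity S = d(lambda_res)/dn: at -5500 nm/RIU, a 0.001 RIU increase
        # in analyte index shifts the resonance wavelength by about -5.5 nm (a blue shift).
        sensitivity = -5500.0  # nm per refractive-index unit (from the abstract)
        delta_n = 1e-3         # RIU, arbitrary example step
        print(sensitivity * delta_n)  # -5.5 nm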

  13. Spectral optimization simulation of white light based on the photopic eye-sensitivity curve

    NASA Astrophysics Data System (ADS)

    Dai, Qi; Hao, Luoxi; Lin, Yi; Cui, Zhe

    2016-02-01

    Spectral optimization simulation of white light is studied to boost maximum attainable luminous efficacy of radiation at high color-rendering index (CRI) and various color temperatures. The photopic eye-sensitivity curve V(λ) is utilized as the dominant portion of white light spectra. Emission spectra of a blue InGaN light-emitting diode (LED) and a red AlInGaP LED are added to the spectrum of V(λ) to match white color coordinates. It is demonstrated that at the condition of color temperature from 2500 K to 6500 K and CRI above 90, such white sources can achieve spectral efficacy of 330-390 lm/W, which is higher than the previously reported theoretical maximum values. We show that this eye-sensitivity-based approach also has advantages on component energy conversion efficiency compared with previously reported optimization solutions.
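
    As a rough numerical companion to the efficacy figures quoted above, the sketch below (an assumption-laden illustration, not the paper's optimization) approximates V(λ) with a Gaussian centred at 555 nm, adds Gaussian stand-ins for blue and red LED bands, and evaluates the luminous efficacy of radiation as 683 lm/W times the V(λ)-weighted fraction of the radiant power.

        # Assumptions: Gaussian V(lambda) and Gaussian LED bands; weights are illustrative.
        import numpy as np

        wl = np.linspace(380.0, 780.0, 2001)                 # wavelength grid, nm

        def band(center, fwhm):
            sigma = fwhm / 2.3548
            return np.exp(-0.5 * ((wl - center) / sigma) ** 2)

        V = band(555.0, 105.0)          # crude stand-in for the photopic curve V(lambda)
        blue_led = band(450.0, 20.0)    # hypothetical InGaN blue LED emission
        red_led = band(630.0, 17.0)     # hypothetical AlInGaP red LED emission

        # Composite white spectrum: V(lambda)-shaped core plus blue/red additions.
        S = 1.0 * V + 0.25 * blue_led + 0.35 * red_led

        ler = 683.0 * np.sum(V * S) / np.sum(S)              # lm per optical watt
        print(f"luminous efficacy of radiation: {ler:.0f} lm/W")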

  14. Maximum neighborhood margin criterion in face recognition

    NASA Astrophysics Data System (ADS)

    Han, Pang Ying; Teoh, Andrew Beng Jin

    2009-04-01

    Feature extraction is a data analysis technique devoted to removing redundancy and extracting the most discriminative information. In face recognition, feature extractors are normally plagued with small sample size problems, in which the total number of training images is much smaller than the image dimensionality. Recently, an optimized facial feature extractor, maximum marginal criterion (MMC), was proposed. MMC computes an optimized projection by solving the generalized eigenvalue problem in a standard form that is free from inverse matrix operation, and thus it does not suffer from the small sample size problem. However, MMC is essentially a linear projection technique that relies on facial image pixel intensity to compute within- and between-class scatters. The nonlinear nature of faces restricts the discrimination of MMC. Hence, we propose an improved MMC, namely maximum neighborhood margin criterion (MNMC). Unlike MMC, which preserves global geometric structures that do not perfectly describe the underlying face manifold, MNMC seeks a projection that preserves local geometric structures via neighborhood preservation. This objective function leads to the enhancement of classification capability, and this is testified by experimental results. MNMC shows its performance superiority compared to MMC, especially in pose, illumination, and expression (PIE) and face recognition grand challenge (FRGC) databases.
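
    The sketch below illustrates the MMC-style projection that the abstract builds on: maximizing tr(W'(Sb - Sw)W) by taking the leading eigenvectors of Sb - Sw, a symmetric eigenproblem that needs no matrix inversion and so sidesteps the small sample size problem. It is a generic illustration, not the authors' MNMC code; an MNMC variant would assemble the scatters from local neighborhoods rather than global class means.

        import numpy as np

        def mmc_projection(X, y, n_components):
            """X: (n_samples, n_features) image vectors; y: class labels."""
            classes = np.unique(y)
            mean_all = X.mean(axis=0)
            d = X.shape[1]
            Sb = np.zeros((d, d))
            Sw = np.zeros((d, d))
            for c in classes:
                Xc = X[y == c]
                mc = Xc.mean(axis=0)
                Sb += len(Xc) * np.outer(mc - mean_all, mc - mean_all)  # between-class scatter
                Sw += (Xc - mc).T @ (Xc - mc)                           # within-class scatter
            evals, evecs = np.linalg.eigh(Sb - Sw)                      # symmetric eigenproblem
            order = np.argsort(evals)[::-1]                             # largest margins first
            return evecs[:, order[:n_components]]

        # Usage sketch: W = mmc_projection(train_vectors, train_labels, 50); feats = X @ W
        # For full-size images, dimensionality would typically be reduced first (e.g. PCA).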

  15. Maximum Likelihood Analysis in the PEN Experiment

    NASA Astrophysics Data System (ADS)

    Lehman, Martin

    2013-10-01

    The experimental determination of the π+ -->e+ ν (γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3 . 3 ×10-3 to 5 ×10-4 using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2 ×107 πe 2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ -->e+ ν , π+ -->μ+ ν , decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
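
    The following schematic sketch (toy data and PDFs, not the PEN collaboration's analysis) shows the general shape of such a fit: every event carries a probability under each process hypothesis, evaluated from pre-built PDFs of an observable, and the process fractions are obtained by maximizing the summed log-likelihood.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        # Toy observable drawn from a three-process mixture with known shapes;
        # the true fractions are 0.6 / 0.3 / 0.1.
        rng = np.random.default_rng(1)
        comps = [norm(70.0, 2.0), norm(30.0, 8.0), norm(50.0, 15.0)]   # hypothetical process PDFs
        which = rng.choice(3, size=20000, p=[0.6, 0.3, 0.1])
        draws = np.stack([c.rvs(size=20000, random_state=rng) for c in comps])
        energy = draws[which, np.arange(20000)]

        pdf_values = np.column_stack([c.pdf(energy) for c in comps])   # p_k(x_i) per event

        def neg_log_likelihood(raw, pdf_values):
            f = np.abs(raw) / np.abs(raw).sum()        # fractions constrained to sum to one
            return -np.sum(np.log(pdf_values @ f + 1e-300))

        res = minimize(neg_log_likelihood, x0=[1.0, 1.0, 1.0], args=(pdf_values,),
                       method="Nelder-Mead")
        print(np.round(np.abs(res.x) / np.abs(res.x).sum(), 3))        # ~ [0.6, 0.3, 0.1]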

  16. Probable maximum flood of the Sava River

    NASA Astrophysics Data System (ADS)

    Brilly, Mitja; Vidmar, Andrej; Šraj, Mojca

    2010-05-01

    The Nuclear Power Plant Krško (NEK) is situated on the left bank of the Sava River close to the border with Croatia. The Probable Maximum Flood (PMF) at the location of the NEK could result from a combination of probable maximum precipitation (PMP), a sequential storm before the PMP, or snowmelt on the Sava River watershed. The Mediterranean climate is characterised by very high precipitation and a temporarily high snow pack. The HBV-96 model, as implemented in the Integrated Hydrological Modelling System (IHMS), was used for the modelling. The model was first calibrated and verified at a daily time step for the period 1990-2006; calibration and verification at an hourly time step were done for the period 1998-1999. The stream routing parameters were calibrated for the flood events of 1998 and 2007 and then verified for the flood event of 1990. Analysis of the discharge routing data showed that possible inundation of the Ljubljana and Savinja valleys was not properly estimated: these areas are protected by levees, and water did not spread over the floodplains in the events used for calibration, so the model could not properly simulate inundation under PMF conditions. We therefore recalibrated the parameters controlling inundation in those areas for the worst-case scenario. The calculated PMF values drop considerably after recalibration.

  17. Maximum Correntropy Criterion for Robust Face Recognition.

    PubMed

    He, Ran; Zheng, Wei-Shi; Hu, Bao-Gang

    2011-08-01

    In this paper, we present a sparse correntropy framework for computing robust sparse representations of face images for recognition. Compared with the state-of-the-art ℓ1-norm-based sparse representation classifier (SRC), which assumes that noise also has a sparse representation, our sparse algorithm is developed based on the maximum correntropy criterion, which is much less sensitive to outliers. In order to develop a more tractable and practical approach, we impose a nonnegativity constraint on the variables in the maximum correntropy criterion and develop a half-quadratic optimization technique to approximately maximize the objective function in an alternating way, so that the complex optimization problem is reduced to learning a sparse representation through a weighted linear least squares problem with a nonnegativity constraint at each iteration. Our extensive experiments demonstrate that the proposed method is more robust and efficient in dealing with the occlusion and corruption problems in face recognition than the related state-of-the-art methods. In particular, it shows that the proposed method can improve both recognition accuracy and receiver operating characteristic (ROC) curves, while the computational cost is much lower than that of the SRC algorithms.
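
    A minimal half-quadratic sketch of the idea (illustrative only, not the authors' implementation): residuals are weighted with a Gaussian kernel so grossly corrupted entries are down-weighted, and each iteration reduces to a weighted nonnegative least-squares problem.

        import numpy as np
        from scipy.optimize import nnls

        def correntropy_nnls(A, b, sigma=0.5, n_iter=20):
            x = nnls(A, b)[0]                                 # plain NNLS start
            for _ in range(n_iter):
                r = b - A @ x
                w = np.exp(-r**2 / (2.0 * sigma**2))          # outliers get small weight
                sw = np.sqrt(w)
                x = nnls(A * sw[:, None], b * sw)[0]          # weighted NNLS subproblem
            return x

        # Toy usage: a few grossly corrupted entries in b barely affect the recovered x.
        rng = np.random.default_rng(0)
        A = rng.random((200, 20))
        x_true = np.zeros(20)
        x_true[[3, 7]] = [1.0, 2.0]
        b = A @ x_true + 0.01 * rng.standard_normal(200)
        b[:10] += 5.0                                         # simulated occlusion/corruption
        print(np.round(correntropy_nnls(A, b), 2))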

  18. Effect of fiber and matrix maximum strain on the energy absorption of composite materials

    NASA Technical Reports Server (NTRS)

    Farley, G. L.

    1985-01-01

    Static crushing tests were conducted on graphite composite tubes to examine the influence of fiber and matrix maximum strain at failure on the energy absorption capability of graphite-reinforced composite material. Fiber and matrix maximum strain at failure were found to significantly affect energy absorption. The higher-strain-at-failure composite material system, AS-4/5245, exhibited superior energy absorption capability compared to the AS-4/934, T300/5245, and T300/934 composite materials. The results of this investigation suggest that, to achieve maximum energy absorption from a composite material, a matrix material with a higher strain at failure than the fiber reinforcement should be used.

  19. Achievement Goals and Achievement Emotions: A Meta-Analysis

    ERIC Educational Resources Information Center

    Huang, Chiungjung

    2011-01-01

    This meta-analysis synthesized 93 independent samples (N = 30,003) in 77 studies that reported in 78 articles examining correlations between achievement goals and achievement emotions. Achievement goals were meaningfully associated with different achievement emotions. The correlations of mastery and mastery approach goals with positive achievement…

  20. Maximum-Likelihood Continuity Mapping (MALCOM): An Alternative to HMMs

    SciTech Connect

    Nix, D.A.; Hogden, J.E.

    1998-12-01

    The authors describe Maximum-Likelihood Continuity Mapping (MALCOM) as an alternative to hidden Markov models (HMMs) for processing sequence data such as speech. While HMMs have a discrete ''hidden'' space constrained by a fixed finite-automata architecture, MALCOM has a continuous hidden space (a continuity map) that is constrained only by a smoothness requirement on paths through the space. MALCOM fits into the same probabilistic framework for speech recognition as HMMs, but it represents a far more realistic model of the speech production process. The authors support this claim by generating continuity maps for three speakers and using the resulting MALCOM paths to predict measured speech articulator data. The correlations between the MALCOM paths (obtained from only the speech acoustics) and the actual articulator movements average 0.77 on an independent test set not used to train MALCOM nor the predictor. On average, this unsupervised model achieves 92% of performance obtained using the corresponding supervised method.

  1. Energy and maximum norm estimates for nonlinear conservation laws

    NASA Technical Reports Server (NTRS)

    Olsson, Pelle; Oliger, Joseph

    1994-01-01

    We have devised a technique that makes it possible to obtain energy estimates for initial-boundary value problems for nonlinear conservation laws. The two major tools to achieve the energy estimates are a certain splitting of the flux vector derivative f(u)(sub x), and a structural hypothesis, referred to as a cone condition, on the flux vector f(u). These hypotheses are fulfilled for many equations that occur in practice, such as the Euler equations of gas dynamics. It should be noted that the energy estimates are obtained without any assumptions on the gradient of the solution u. The results extend to weak solutions that are obtained as point wise limits of vanishing viscosity solutions. As a byproduct we obtain explicit expressions for the entropy function and the entropy flux of symmetrizable systems of conservation laws. Under certain circumstances the proposed technique can be applied repeatedly so as to yield estimates in the maximum norm.
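
    As a hedged illustration of the kind of flux splitting referred to above (the paper's particular weights and hypotheses may differ), the flux derivative can be written as a convex combination of its conservative and quasi-linear forms, an identity for smooth solutions by the chain rule:

        % Illustrative splitting; the choice of weight together with the cone
        % condition on f(u) is what lets the integration-by-parts energy
        % argument close with boundary terms only.
        \[
          f(u)_x \;=\; \alpha\, f(u)_x \;+\; (1-\alpha)\, A(u)\, u_x,
          \qquad A(u) \equiv \frac{\partial f}{\partial u}, \qquad 0 \le \alpha \le 1 .
        \]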

  2. A maximum likelihood approach to estimating correlation functions

    SciTech Connect

    Baxter, Eric Jones; Rozo, Eduardo

    2013-12-10

    We define a maximum likelihood (ML for short) estimator for the correlation function, ξ, that uses the same pair counting observables (D, R, DD, DR, RR) as the standard Landy and Szalay (LS for short) estimator. The ML estimator outperforms the LS estimator in that it results in smaller measurement errors at any fixed random point density. Put another way, the ML estimator can reach the same precision as the LS estimator with a significantly smaller random point catalog. Moreover, these gains are achieved without significantly increasing the computational requirements for estimating ξ. We quantify the relative improvement of the ML estimator over the LS estimator and discuss the regimes under which these improvements are most significant. We present a short guide on how to implement the ML estimator and emphasize that the code alterations required to switch from an LS to an ML estimator are minimal.
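
    For concreteness, the sketch below evaluates the standard Landy-Szalay estimator from the same pair-count observables listed above (toy numbers, not the paper's data); the ML estimator discussed in the abstract uses these counts through a likelihood rather than this closed-form combination.

        import numpy as np

        def landy_szalay(dd, dr, rr, n_data, n_random):
            """Pair counts dd, dr, rr (raw); n_data, n_random are the catalog sizes."""
            dd_n = dd / (n_data * (n_data - 1) / 2.0)      # normalize by number of pairs
            rr_n = rr / (n_random * (n_random - 1) / 2.0)
            dr_n = dr / (n_data * n_random)
            return (dd_n - 2.0 * dr_n + rr_n) / rr_n

        # Toy usage with made-up counts in a single separation bin.
        print(landy_szalay(dd=1250.0, dr=11800.0, rr=29500.0, n_data=1000, n_random=5000))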

  3. Approximate maximum likelihood estimation of scanning observer templates

    NASA Astrophysics Data System (ADS)

    Abbey, Craig K.; Samuelson, Frank W.; Wunderlich, Adam; Popescu, Lucretiu M.; Eckstein, Miguel P.; Boone, John M.

    2015-03-01

    In localization tasks, an observer is asked to give the location of some target or feature of interest in an image. Scanning linear observer models incorporate the search implicit in this task through convolution of an observer template with the image being evaluated. Such models are becoming increasingly popular as predictors of human performance for validating medical imaging methodology. In addition to convolution, scanning models may utilize internal noise components to model inconsistencies in human observer responses. In this work, we build a probabilistic mathematical model of this process and show how it can, in principle, be used to obtain estimates of the observer template using maximum likelihood methods. The main difficulty of this approach is that a closed form probability distribution for a maximal location response is not generally available in the presence of internal noise. However, for a given image we can generate an empirical distribution of maximal locations using Monte-Carlo sampling. We show that this probability is well approximated by applying an exponential function to the scanning template output. We also evaluate log-likelihood functions on the basis of this approximate distribution. Using 1,000 trials of simulated data as a validation test set, we find that a plot of the approximate log-likelihood function along a single parameter related to the template profile achieves its maximum value near the true value used in the simulation. This finding holds regardless of whether the trials are correctly localized or not. In a second validation study evaluating a parameter related to the relative magnitude of internal noise, only the incorrect localization images produces a maximum in the approximate log-likelihood function that is near the true value of the parameter.
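
    A compact sketch of the approximation described above (illustrative shapes and a hypothetical reported location, not the authors' code): the probability that the maximal response falls at a given location is modeled as an exponential, softmax-like function of the scanning template output, with a scale parameter standing in for the relative magnitude of internal noise.

        import numpy as np
        from scipy.signal import fftconvolve

        def template_response(image, template):
            """Cross-correlation of the template with the image (valid region)."""
            return fftconvolve(image, template[::-1, ::-1], mode="valid")

        def location_log_likelihood(image, template, reported_loc, beta=1.0):
            """Approximate log-probability of the reported maximal location."""
            r = beta * template_response(image, template)   # beta ~ internal-noise scale
            log_probs = r - r.max() - np.log(np.sum(np.exp(r - r.max())))   # log-softmax
            return log_probs[reported_loc]

        # Toy usage: random image, small Gaussian template, hypothetical reported location.
        rng = np.random.default_rng(0)
        img = rng.standard_normal((64, 64))
        yy, xx = np.mgrid[-3:4, -3:4]
        tpl = np.exp(-(yy**2 + xx**2) / 4.0)
        print(location_log_likelihood(img, tpl, reported_loc=(20, 30)))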

  4. 14 CFR 23.1524 - Maximum passenger seating configuration.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Maximum passenger seating configuration. 23... Operating Limitations and Information § 23.1524 Maximum passenger seating configuration. The maximum passenger seating configuration must be established....

  5. 14 CFR 23.1524 - Maximum passenger seating configuration.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Maximum passenger seating configuration. 23... Operating Limitations and Information § 23.1524 Maximum passenger seating configuration. The maximum passenger seating configuration must be established....

  6. 14 CFR 23.1524 - Maximum passenger seating configuration.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Maximum passenger seating configuration. 23... Operating Limitations and Information § 23.1524 Maximum passenger seating configuration. The maximum passenger seating configuration must be established....

  7. 14 CFR 23.1524 - Maximum passenger seating configuration.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Maximum passenger seating configuration. 23... Operating Limitations and Information § 23.1524 Maximum passenger seating configuration. The maximum passenger seating configuration must be established....

  8. 14 CFR 23.1524 - Maximum passenger seating configuration.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Maximum passenger seating configuration. 23... Operating Limitations and Information § 23.1524 Maximum passenger seating configuration. The maximum passenger seating configuration must be established....

  9. Measurement and relevance of maximum metabolic rate in fishes.

    PubMed

    Norin, T; Clark, T D

    2016-01-01

    Maximum (aerobic) metabolic rate (MMR) is defined here as the maximum rate of oxygen consumption (ṀO2max) that a fish can achieve at a given temperature under any ecologically relevant circumstance. Different techniques exist for eliciting MMR of fishes, of which swim-flume respirometry (critical swimming speed tests and burst-swimming protocols) and exhaustive chases are the most common. Available data suggest that the most suitable method for eliciting MMR varies with species and ecotype, and depends on the propensity of the fish to sustain swimming for extended durations as well as its capacity to simultaneously exercise and digest food. MMR varies substantially (>10 fold) between species with different lifestyles (i.e. interspecific variation), and to a lesser extent (

  10. Possible dynamical explanations for Paltridge's principle of maximum entropy production

    SciTech Connect

    Virgo, Nathaniel; Ikegami, Takashi

    2014-12-05

    Throughout the history of non-equilibrium thermodynamics a number of theories have been proposed in which complex, far from equilibrium flow systems are hypothesised to reach a steady state that maximises some quantity. Perhaps the most celebrated is Paltridge's principle of maximum entropy production for the horizontal heat flux in Earth's atmosphere, for which there is some empirical support. There have been a number of attempts to derive such a principle from maximum entropy considerations. However, we currently lack a more mechanistic explanation of how any particular system might self-organise into a state that maximises some quantity. This is in contrast to equilibrium thermodynamics, in which models such as the Ising model have been a great help in understanding the relationship between the predictions of MaxEnt and the dynamics of physical systems. In this paper we show that, unlike in the equilibrium case, Paltridge-type maximisation in non-equilibrium systems cannot be achieved by a simple dynamical feedback mechanism. Nevertheless, we propose several possible mechanisms by which maximisation could occur. Showing that these occur in any real system is a task for future work. The possibilities presented here may not be the only ones. We hope that by presenting them we can provoke further discussion about the possible dynamical mechanisms behind extremum principles for non-equilibrium systems, and their relationship to predictions obtained through MaxEnt.

  11. Entrepreneur achievement. Liaoning province.

    PubMed

    Zhao, R

    1994-03-01

    This paper reports the successful entrepreneurial endeavors of members of a 20-person women's group in Liaoning Province, China. Jing Yuhong, a member of the Family Planning Association at Shileizi Village, Dalian City, provided the basis for their achievements by first building an entertainment/study room in her home to encourage married women to learn family planning. Once stocked with books, magazines, pamphlets, and other materials on family planning and agricultural technology, dozens of married women in the neighborhood flocked voluntarily to the room. Yuhong also set out to give these women a way to earn their own income as a means of helping them gain greater equality with their husbands and exert greater control over their personal reproductive and social lives. She gave a section of her farming land to the women's group, loaned approximately US$5200 to group members to help them generate income from small business initiatives, built a livestock shed in her garden for the group to raise marmots, and erected an awning behind her house under which mushrooms could be grown. The investment yielded $12,000 in the first year, allowing each woman to keep more than $520 in dividends. Members soon began going to fairs in the capital and other places to learn about the outside world, and have successfully ventured out on their own to generate individual incomes. Ten out of twenty women engaged in these income-generating activities asked for and got the one-child certificate.

  12. Characterizing Local Optima for Maximum Parsimony.

    PubMed

    Urheim, Ellen; Ford, Eric; St John, Katherine

    2016-05-01

    Finding the best phylogenetic tree under the maximum parsimony optimality criterion is computationally difficult. We quantify the occurrence of such optima for well-behaved sets of data. When nearest neighbor interchange operations are used, multiple local optima can occur even for "perfect" sequence data, which results in hill-climbing searches that never reach a global optimum. In contrast, we show that when neighbors are defined via the subtree prune and regraft metric, there is a single local optimum for perfect sequence data, and thus, every such search finds a global optimum quickly. We further characterize conditions for which sequences simulated under the Cavender-Farris-Neyman and Jukes-Cantor models of evolution yield well-behaved search spaces. PMID:27234257

  13. The 1989 Solar Maximum Mission event list

    NASA Technical Reports Server (NTRS)

    Dennis, B. R.; Licata, J. P.; Tolbert, A. K.

    1992-01-01

    This document contains information on solar burst and transient activity observed by the Solar Maximum Mission (SMM) during 1989 pointed observations. Data from the following SMM experiments are included: (1) Gamma Ray Spectrometer, (2) Hard X-Ray Burst Spectrometer, (3) Flat Crystal Spectrometer, (4) Bent Crystal Spectrometer, (5) Ultraviolet Spectrometer Polarimeter, and (6) Coronagraph/Polarimeter. Correlative optical, radio, and Geostationary Operational Environmental Satellite (GOES) X-ray data are also presented. Where possible, bursts or transients observed in the various wavelengths were grouped into discrete flare events identified by unique event numbers. Each event carries a qualifier denoting the quality or completeness of the observations. Spacecraft pointing coordinates and flare site angular displacement values from sun center are also included.

  14. The 1980 solar maximum mission event listing

    SciTech Connect

    Speich, D.M.; Nelson, J.J.; Licata, J.P.; Tolbert, A.K.

    1991-06-01

    Information is contained on solar burst and transient activity observed by the Solar Maximum Mission (SMM) during 1980 pointed observations. Data from the following SMM experiments are included: (1) Gamma Ray Spectrometer, (2) Hard X-Ray Burst Spectrometer, (3) Hard X-Ray Imaging Spectrometer, (4) Flat Crystal Spectrometer, (5) Bent Crystal Spectrometer, (6) Ultraviolet Spectrometer and Polarimeter, and (7) Coronagraph/Polarimeter. Correlative optical, radio, and Geostationary Operational Environmental Satellite (GOES) x-ray data are also presented. Where possible, bursts or transients observed in the various wavelengths were grouped into discrete flare events identified by unique event numbers. Each event carries a qualifier denoting the quality or completeness of the observations. Spacecraft pointing coordinates and flare site angular displacement values from Sun center are also included.

  15. Maximum terminal velocity turns at constant altitude

    NASA Astrophysics Data System (ADS)

    Eisler, G. Richard; Hull, David G.

    An optimal control problem is formulated for a maneuvering reentry vehicle to execute a maximum terminal velocity turn at constant altitude to a fixed final position. A control solution technique is devised which uses a Newton scheme to repetitively solve a nonlinear algebraic system for two parameters to provide the on-line guidance. The turn control takes advantage of the high dynamic pressure at the beginning of the flight path; the lift solution acts to null deviations from the prescribed altitude. Control solutions are compared for a continuously updated, approximate physical model, for a simulation of the approximate optimal guidance in a true physical model, and for a parameter optimization solution for the true model. End constraint satisfaction is excellent. Overall trajectory agreement is good, if the assumed atmospheric model is reasonably accurate.

  16. Maximum terminal velocity turns at constant altitude

    SciTech Connect

    Eisler, G.R.; Hull, D.G.

    1987-01-01

    An optimal control problem is formulated for a maneuvering reentry vehicle to execute a maximum terminal velocity turn at constant altitude to a fixed final position. A control solution technique is devised which uses a Newton scheme to repetitively solve a nonlinear algebraic system for two parameters to provide the on-line guidance. The turn control takes advantage of the high dynamic pressure at the beginning of the flight path; the lift solution acts to null deviations from the prescribed altitude. Control solutions are compared for a continuously updated, approximate physical model, for a simulation of the approximate optimal guidance in a true physical model, and for a parameter optimization solution for the true model. End constraint satisfaction is excellent. Overall trajectory agreement is good, if the assumed atmospheric model is reasonably accurate.

  17. The 1988 Solar Maximum Mission event list

    NASA Technical Reports Server (NTRS)

    Dennis, B. R.; Licata, J. P.; Tolbert, A. K.

    1992-01-01

    Information on solar burst and transient activity observed by the Solar Maximum Mission (SMM) during 1988 pointed observations is presented. Data from the following SMM experiments are included: (1) gamma ray spectrometer; (2) hard x ray burst spectrometer; (3) flat crystal spectrometers; (4) bent crystal spectrometer; (5) ultraviolet spectrometer polarimeter; and (6) coronagraph/polarimeter. Correlative optical, radio, and Geostationary Operational Environmental Satellite (GOES) x ray data are also presented. Where possible, bursts, or transients observed in the various wavelengths were grouped into discrete flare events identified by unique event numbers. Each event carries a qualifier denoting the quality or completeness of the observation. Spacecraft pointing coordinates and flare site angular displacement values from sun center are also included.

  18. Experimental shock metamorphism of maximum microcline

    NASA Technical Reports Server (NTRS)

    Robertson, P. B.

    1975-01-01

    A series of recovery experiments was conducted to study the behavior of single-crystal perthitic maximum microcline shock-loaded to a peak pressure of 417 kbar. Microcline is found to deform in a manner similar to quartz and other alkali feldspars. It is observed that shock-induced cleavages occur initially at or slightly below the Hugoniot elastic limit (60-85 kbar), that shock-induced rather than thermal disordering begins above the Hugoniot elastic limit, and that all types of planar elements form parallel to crystallographic planes of low Miller indices. With increasing pressure, the bulk density, refractive indices, and birefringence of the recovered material decrease and approach diaplectic glass values, whereas the disappearance and weakening of reflections in Debye-Scherrer patterns are due to disordering of the feldspar lattice.

  19. Quantum gravity momentum representation and maximum energy

    NASA Astrophysics Data System (ADS)

    Moffat, J. W.

    2016-11-01

    We use the idea of the symmetry between the spacetime coordinates xμ and the energy-momentum pμ in quantum theory to construct a momentum space quantum gravity geometry with a metric sμν and a curvature tensor Pλμνρ. For a closed maximally symmetric momentum space with a constant 3-curvature, the volume of the p-space admits a cutoff with an invariant maximum momentum a. A Wheeler-DeWitt-type wave equation is obtained in the momentum space representation. The vacuum energy density and the self-energy of a charged particle are shown to be finite, and modifications of the electromagnetic radiation density and the entropy density of a system of particles occur for high frequencies.

  20. Maximum entropy principle and relativistic hydrodynamics

    NASA Astrophysics Data System (ADS)

    van Weert, Ch. G.

    1982-04-01

    A relativistic theory of hydrodynamics applicable beyond the hydrodynamic regime is developed on the basis of the maximum entropy principle. This allows the construction of a unique statistical operator representing the state of the system as specified by the values of the hydrodynamical densities. Special attention is paid to the thermodynamic limit and the virial theorem, which leads to an expression for the pressure in terms of the field-theoretic energy-momentum tensor of Coleman and Jackiw. It is argued that outside the hydrodynamic regime the notion of a local Gibbs relation, as usually postulated, must be abandoned in general. In the context of the linear approximation, the memory-retaining and non-local generalizations of the relativistic Navier-Stokes equations are derived from the underlying Heisenberg equations of motion. The formal similarity to the Zwanzig-Mori description of non-relativistic fluids is expounded.

  1. Maximum efficiency of the collisional Penrose process

    NASA Astrophysics Data System (ADS)

    Zaslavskii, O. B.

    2016-09-01

    We consider the collision of two particles that move in the equatorial plane near a general stationary rotating axially symmetric extremal black hole. One of the particles is critical (with fine-tuned parameters) and moves in the outward direction. The second particle (usual, not fine-tuned) comes from infinity. We examine the efficiency η of the collisional Penrose process. There are two relevant cases here: a particle falling into a black hole after collision (i) is heavy or (ii) has a finite mass. We show that the maximum of η in case (ii) is less than or equal to that in case (i). It is argued that for superheavy particles, the bound applies to nonequatorial motion as well. As an example, we analyze collision in the Kerr-Newman background. When the bound is the same for processes (i) and (ii), η =3 for this metric. For the Kerr black hole, recent results in the literature are reproduced.

  2. A general optimization for maximum terminal velocity

    NASA Astrophysics Data System (ADS)

    Vulpetti, G.

    1982-09-01

    A numerical model is developed to determine the maximum velocity which can be attained by a rocket propulsion system. Particular attention is given to the ratio of active mass, that which can be converted to propulsive energy, to inert mass, which remains after the propulsive energy is expended. Calculations are based on the law of conservation of energy applied to a spaceship with chemical, laser-sail, interstellar ramjet, and annihilation engines. Limits on the exhaust velocity of the thrust system are neglected. Specific attention is given to relativistic calculations involving the annihilation reactions, noting that classical propulsion systems have critical mass values significantly lower than the propulsion required by extra-solar system flight. Numerical results are presented of critical values of propellant which produce an optimal jet speed, which is determined to be a constant.

  3. Maximum entropy model for business cycle synchronization

    NASA Astrophysics Data System (ADS)

    Xi, Ning; Muneepeerakul, Rachata; Azaele, Sandro; Wang, Yougui

    2014-11-01

    The global economy is a complex dynamical system, whose cyclical fluctuations can mainly be characterized by simultaneous recessions or expansions of major economies. Thus, research on the synchronization phenomenon is key to understanding and controlling the dynamics of the global economy. Based on a pairwise maximum entropy model, we analyze the business cycle synchronization of the G7 economic system. We obtain a pairwise-interaction network, which exhibits a certain clustering structure and accounts for 45% of the entire structure of the interactions within the G7 system. We also find that the pairwise interactions become increasingly inadequate in capturing the synchronization as the size of the economic system grows. Thus, higher-order interactions must be taken into account when investigating behaviors of large economic systems.
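
    The sketch below shows the mechanics of a pairwise maximum entropy (Ising-like) fit on toy binary expansion/recession indicators (synthetic data, not the G7 series used in the paper). With only seven units, all 2^7 states can be enumerated, so the moment-matching gradient of the log-likelihood is exact.

        import numpy as np
        from itertools import product

        def fit_pairwise_maxent(X, lr=0.1, n_iter=2000):
            """X: (n_samples, n_units) array of +/-1 states. Returns fields h and couplings J."""
            n, d = X.shape
            states = np.array(list(product([-1, 1], repeat=d)))        # all 2^d configurations
            mean_s = X.mean(axis=0)
            mean_ss = (X.T @ X) / n
            h = np.zeros(d)
            J = np.zeros((d, d))
            for _ in range(n_iter):
                energies = states @ h + 0.5 * np.einsum("si,ij,sj->s", states, J, states)
                p = np.exp(energies - energies.max())
                p /= p.sum()
                model_s = p @ states                                    # model <s_i>
                model_ss = np.einsum("s,si,sj->ij", p, states, states)  # model <s_i s_j>
                h += lr * (mean_s - model_s)                            # match first moments
                J += lr * (mean_ss - model_ss)                          # match second moments
                np.fill_diagonal(J, 0.0)
            return h, J

        # Toy usage: 300 periods of correlated +/-1 indicators for 7 "economies".
        rng = np.random.default_rng(0)
        common = np.sign(rng.standard_normal((300, 1)))
        X = np.sign(0.6 * common + rng.standard_normal((300, 7)))
        h, J = fit_pairwise_maxent(X)
        print(np.round(J, 2))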

  4. Diffusivity Maximum in a Reentrant Nematic Phase

    PubMed Central

    Stieger, Tillmann; Mazza, Marco G.; Schoen, Martin

    2012-01-01

    We report molecular dynamics simulations of confined liquid crystals using the Gay–Berne–Kihara model. Upon isobaric cooling, the standard sequence of isotropic–nematic–smectic A phase transitions is found. Upon further cooling a reentrant nematic phase occurs. We investigate the temperature dependence of the self-diffusion coefficient of the fluid in the nematic, smectic and reentrant nematic phases. We find a maximum in diffusivity upon isobaric cooling. Diffusion increases dramatically in the reentrant phase due to the high orientational molecular order. As the temperature is lowered, the diffusion coefficient follows an Arrhenius behavior. The activation energy of the reentrant phase is found in reasonable agreement with the reported experimental data. We discuss how repulsive interactions may be the underlying mechanism that could explain the occurrence of reentrant nematic behavior for polar and non-polar molecules. PMID:22837730

  5. Relative azimuth inversion by way of damped maximum correlation estimates

    USGS Publications Warehouse

    Ringler, A.T.; Edwards, J.D.; Hutt, C.R.; Shelly, F.

    2012-01-01

    Horizontal seismic data are utilized in a large number of Earth studies. Such work depends on the published orientations of the sensitive axes of seismic sensors relative to true North. These orientations can be estimated using a number of different techniques: SensOrLoc (Sensitivity, Orientation and Location), comparison to synthetics (Ekstrom and Busby, 2008), or by way of magnetic compass. Current methods for finding relative station azimuths are unable to do so with arbitrary precision quickly because of limitations in the algorithms (e.g. grid search methods). Furthermore, in order to determine instrument orientations during station visits, it is critical that any analysis software be easily run on a large number of different computer platforms and the results be obtained quickly while on site. We developed a new technique for estimating relative sensor azimuths by inverting for the orientation with the maximum correlation to a reference instrument, using a non-linear parameter estimation routine. By making use of overlapping windows, we are able to make multiple azimuth estimates, which helps to identify the confidence of our azimuth estimate, even when the signal-to-noise ratio (SNR) is low. Finally, our algorithm has been written as a stand-alone, platform independent, Java software package with a graphical user interface for reading and selecting data segments to be analyzed.
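
    A minimal sketch of the underlying idea (not the USGS Java package): for each data window, rotate the test sensor's horizontal components and take the angle that best aligns them with the reference north component; here the covariance-maximizing angle is used, which has a closed form, whereas maximizing the correlation itself generally calls for a numerical search. Repeating the estimate over overlapping windows gives a spread from which confidence can be judged even at low SNR.

        import numpy as np

        def window_azimuth(ref_n, test_n, test_e):
            """Angle (degrees) maximizing cov(ref_n, cos(t)*test_n + sin(t)*test_e)."""
            c_n = np.dot(ref_n - ref_n.mean(), test_n - test_n.mean())
            c_e = np.dot(ref_n - ref_n.mean(), test_e - test_e.mean())
            return np.degrees(np.arctan2(c_e, c_n))

        def azimuth_estimates(ref_n, test_n, test_e, win=5000, step=2500):
            """One azimuth estimate per overlapping window."""
            return np.array([window_azimuth(ref_n[i:i + win], test_n[i:i + win], test_e[i:i + win])
                             for i in range(0, len(ref_n) - win + 1, step)])

        # Toy usage: a test sensor rotated by 25 degrees relative to the reference.
        rng = np.random.default_rng(0)
        N, E = rng.standard_normal(60000), rng.standard_normal(60000)
        a = np.radians(25.0)
        test_n = np.cos(a) * N - np.sin(a) * E + 0.05 * rng.standard_normal(60000)
        test_e = np.sin(a) * N + np.cos(a) * E + 0.05 * rng.standard_normal(60000)
        est = azimuth_estimates(N, test_n, test_e)
        print(round(est.mean(), 2), round(est.std(), 3))      # mean near 25 degrees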

  6. Pulmonary haemodynamics during recovery from maximum incremental cycling exercise.

    PubMed

    Oliveira, Rudolf K F; Waxman, Aaron B; Agarwal, Manyoo; Badr Eslam, Roza; Systrom, David M

    2016-07-01

    Assessment of cardiac function during exercise can be technically demanding, making the recovery period a potentially attractive diagnostic window. However, the validity of this approach for exercise pulmonary haemodynamics has not been established. The present study, therefore, evaluated directly measured pulmonary haemodynamics during 2-min recovery after maximum invasive cardiopulmonary exercise testing in patients evaluated for unexplained exertional intolerance. Based on peak exercise criteria, patients with exercise pulmonary hypertension (ePH; n=36), exercise pulmonary venous hypertension (ePVH; n=28) and age-matched controls (n=31) were analysed. By 2-min recovery, 83% (n=30) of ePH patients had a mean pulmonary artery pressure (mPAP) <30 mmHg and 96% (n=27) of ePVH patients had a pulmonary arterial wedge pressure (PAWP) <20 mmHg. Sensitivity of pulmonary hypertension-related haemodynamic measurements during recovery for ePH and ePVH diagnosis was ≤25%. In ePVH, pulmonary vascular compliance (PVC) returned to its resting value by 1-min recovery, while in ePH, elevated pulmonary vascular resistance (PVR) and decreased PVC persisted throughout recovery. In conclusion, we observed that mPAP and PAWP decay quickly during recovery in ePH and ePVH, compromising the sensitivity of recovery haemodynamic measurements in diagnosing pulmonary hypertension. ePH and ePVH had different PVR and PVC recovery patterns, suggesting differences in the underlying pulmonary hypertension pathophysiology. PMID:27126692

  7. Optimizing the design and analysis of cryogenic semiconductor dark matter detectors for maximum sensitivity

    SciTech Connect

    Pyle, Matt Christopher

    2012-01-01

    In this thesis, we illustrate how the complex E-field geometry produced by interdigitated electrodes at alternating voltage biases naturally encodes 3D fiducial volume information into the charge and phonon signals and thus is a natural geometry for our next generation dark matter detectors. Secondly, we study in depth the physics of import to our devices, including transition edge sensor dynamics, quasiparticle dynamics in our Al collection fins, and phonon physics in the crystal itself, so that we can both understand the performance of our previous CDMS II device and optimize the design of our future devices. Of interest to the broader physics community is the derivation of the ideal athermal phonon detector resolution and its Tc^3 scaling behavior, which suggests that the athermal phonon detector technology developed by CDMS could also be used to discover coherent neutrino scattering and to search for non-standard neutrino interactions and sterile neutrinos. These proposed resolution-optimized devices can also be used in searches for exotic MeV-GeV dark matter as well as in novel background-free searches for 8 GeV light WIMPs.

  8. Climate sensitivity estimated from temperature reconstructions of the Last Glacial Maximum

    NASA Astrophysics Data System (ADS)

    Schmittner, A.; Urban, N.; Shakun, J. D.; Mahowald, N. M.; Clark, P. U.; Bartlein, P. J.; Mix, A. C.; Rosell-Melé, A.

    2011-12-01

    In 1959 I. J. Good published the discussion "Kinds of Probability" in Science. Good identified (at least) five kinds. The need for (at least) a sixth kind of probability when quantifying uncertainty in the context of climate science is discussed. This discussion brings out the differences between weather-like forecasting tasks and climate-like tasks, with a focus on the effective use both of science and of modelling in support of decision making. Good also introduced the idea of a "dynamic probability", a probability one expects to change without any additional empirical evidence; the probabilities assigned by a chess-playing program when it is only halfway through its analysis being an example. This case is contrasted with the case of "mature probabilities", where a forecast algorithm (or model) has converged on its asymptotic probabilities and the question hinges on whether or not those probabilities are expected to change significantly before the event in question occurs, even in the absence of new empirical evidence. If so, then how might one report and deploy such immature probabilities rationally in scientific support of decision-making? Mature probability is suggested as a useful sixth kind; although Good would doubtless argue that we can get by with just one, effective communication with decision makers may be enhanced by speaking as if the others existed. This again highlights the distinction between weather-like contexts and climate-like contexts. In the former context one has access to a relevant climatology (a relevant, arguably informative distribution prior to any model simulations); in the latter context that information is not available, although one can fall back on the scientific basis upon which the model itself rests and estimate the probability that the model output is in fact misinformative. This subjective "probability of a big surprise" is one way to communicate the probability that the information on which the model-based probability is conditioned holds in practice. It is argued that no model-based climate-like probability forecast is complete without a quantitative estimate of its own irrelevance, and that the clear identification of model-based probability forecasts as mature or immature is a critical element for maintaining the credibility of science-based decision support and can shape uncertainty quantification more widely.

  9. HEPEX - achievements and challenges!

    NASA Astrophysics Data System (ADS)

    Pappenberger, Florian; Ramos, Maria-Helena; Thielen, Jutta; Wood, Andy; Wang, Qj; Duan, Qingyun; Collischonn, Walter; Verkade, Jan; Voisin, Nathalie; Wetterhall, Fredrik; Vuillaume, Jean-Francois Emmanuel; Lucatero Villasenor, Diana; Cloke, Hannah L.; Schaake, John; van Andel, Schalk-Jan

    2014-05-01

    HEPEX is an international initiative bringing together hydrologists, meteorologists, researchers and end-users to develop advanced probabilistic hydrological forecast techniques for improved flood, drought and water management. HEPEX was launched in 2004 as an independent, cooperative international scientific activity. During the first meeting, the overarching goal was defined as: "to develop and test procedures to produce reliable hydrological ensemble forecasts, and to demonstrate their utility in decision making related to the water, environmental and emergency management sectors." The applications of hydrological ensemble predictions span large spatio-temporal scales, ranging from short-term and localized predictions to global climate change and regional modeling. Within the HEPEX community, information is shared through its blog (www.hepex.org), meetings, testbeds and intercomparison experiments, as well as project reports. Key questions of HEPEX are: * What adaptations are required for meteorological ensemble systems to be coupled with hydrological ensemble systems? * How should the existing hydrological ensemble prediction systems be modified to account for all sources of uncertainty within a forecast? * What is the best way for the user community to take advantage of ensemble forecasts and to make better decisions based on them? This year HEPEX celebrates its 10th anniversary, and this poster presents a review of the main operational and research achievements and challenges prepared by HEPEX contributors on data assimilation, post-processing of hydrologic predictions, forecast verification, communication and use of probabilistic forecasts in decision-making. Additionally, we will present the most recent activities implemented by HEPEX and illustrate how everyone can join the community and participate in the development of new approaches in hydrologic ensemble prediction.

  10. The Homogeneity of School Achievement.

    ERIC Educational Resources Information Center

    Cahan, Sorel

    Since the measurement of school achievement involves the administration of achievement tests to various grades on various subjects, both grade level and subject matter contribute to within-school achievement variations. To determine whether achievement test scores vary most among different fields within a grade level, or within fields among…

  11. High Sensitivity MEMS Strain Sensor: Design and Simulation

    PubMed Central

    Mohammed, Ahmed A. S.; Moussa, Walied A.; Lou, Edmond

    2008-01-01

    In this article, we report on the new design of a miniaturized strain microsensor. The proposed sensor utilizes the piezoresistive properties of doped single crystal silicon. By employing Micro Electro Mechanical Systems (MEMS) technology, high sensor sensitivities and resolutions have been achieved. The current sensor design employs different levels of signal amplification: geometric, material and electronic. The sensor and the electronic circuits can be integrated on a single chip and packaged as a small functional unit. The sensor converts input strain to a resistance change, which can be transformed into a bridge imbalance voltage. An analog output demonstrating high sensitivity (0.03 mV/με), high absolute resolution (1 με) and low power consumption (100 μA) over a maximum range of ±4000 με has been reported. These performance characteristics have been achieved with high signal stability over a wide temperature range (±50°C), which makes the proposed MEMS strain sensor a strong candidate for wireless strain sensing applications under harsh environmental conditions. Moreover, this sensor has been designed and verified, and can be easily modified to measure other quantities such as force and torque. In this work, the sensor design is achieved using the Finite Element Method (FEM) with the application of piezoresistivity theory. The design process and the microfabrication process flow to prototype the design are presented.
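
    As a generic illustration of the strain-to-voltage conversion mentioned above (assumed gauge factor and excitation voltage, not the parameters of this particular sensor), a quarter Wheatstone bridge with one piezoresistive element gives, to first order, Vout ≈ Vexc · GF · ε / 4.

        # Hypothetical values: GF = 100 (doped-silicon piezoresistor), 3 V excitation.
        def bridge_output_mV(strain, gauge_factor=100.0, v_excitation=3.0):
            """Quarter Wheatstone bridge small-signal output, in millivolts."""
            return v_excitation * gauge_factor * strain / 4.0 * 1e3

        print(bridge_output_mV(1e-6))   # ~0.075 mV per microstrain under these assumptions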

  12. Mothers' Maximum Drinks Ever Consumed in 24 Hours Predicts Mental Health Problems in Adolescent Offspring

    ERIC Educational Resources Information Center

    Malone, Stephen M.; McGue, Matt; Iacono, William G.

    2010-01-01

    Background: The maximum number of alcoholic drinks consumed in a single 24-hr period is an alcoholism-related phenotype with both face and empirical validity. It has been associated with severity of withdrawal symptoms and sensitivity to alcohol, genes implicated in alcohol metabolism, and amplitude of a measure of brain activity associated with…

  13. Sensitivity testing and analysis

    SciTech Connect

    Neyer, B.T.

    1991-01-01

    New methods of sensitivity testing and analysis are proposed. The new test method utilizes Maximum Likelihood Estimates to pick the next test level in order to maximize knowledge of both the mean, μ, and the standard deviation, σ, of the population. Simulation results demonstrate that this new test provides better estimators (less bias and smaller variance) of both μ and σ than the other commonly used tests (Probit, Bruceton, Robbins-Monro, Langlie). A new method of analyzing sensitivity tests is also proposed. It uses the Likelihood Ratio Test to compute regions of arbitrary confidence. It can calculate confidence regions for μ, σ, and arbitrary percentiles. Unlike presently used methods, such as the program ASENT, which is based on the Cramer-Rao theorem, it can analyze the results of all sensitivity tests, and it does not significantly underestimate the size of the confidence regions. The new test and analysis methods are explained and compared to the presently used methods. 19 refs., 12 figs.
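
    The sketch below shows the maximum likelihood core on which such sensitivity analyses rest (hypothetical go/no-go data, not Neyer's adaptive test-level selection): binary outcomes at known stimulus levels are fit with a normal response model to obtain MLEs of μ and σ, after which likelihood-ratio confidence regions can be formed.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        def neg_log_likelihood(params, levels, responses):
            mu, log_sigma = params
            sigma = np.exp(log_sigma)                  # keep sigma positive
            p = norm.cdf((levels - mu) / sigma)        # probability of a "go" at each level
            p = np.clip(p, 1e-12, 1 - 1e-12)
            return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

        # Hypothetical test data: stimulus levels and 1 = fire / 0 = no-fire outcomes.
        levels = np.array([1.0, 1.2, 1.4, 1.4, 1.6, 1.6, 1.8, 2.0, 2.2, 2.4])
        responses = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1])
        res = minimize(neg_log_likelihood, x0=[1.5, np.log(0.3)], args=(levels, responses))
        mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
        print(f"mu = {mu_hat:.3f}, sigma = {sigma_hat:.3f}")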

  14. All-fiber high-sensitivity pressure sensor with SiO2 diaphragm.

    PubMed

    Donlagic, Denis; Cibula, Edvard

    2005-08-15

    The design and fabrication of a miniature fiber Fabry-Perot pressure sensor with a diameter of 125 μm are presented. The essential element in the process is a thin SiO2 diaphragm that is fusion spliced at the hollow end of an optical fiber. Good repeatability and high sensitivity of the sensor are achieved by on-line tuning of the diaphragm thickness during the sensor fabrication process. Various sensor prototypes were fabricated, demonstrating pressure ranges from 0-40 kPa to 0-1 MPa. The maximum achieved sensitivity was 1.1 rad/40 kPa at 1550 nm, and a pressure resolution of 300 Pa was demonstrated in practice. The presented design and fabrication technique offers a means of simple, low-cost, disposable pressure sensor production. PMID:16127913

  15. Thermal lens microscope sensitivity enhancement using a passive Fabry–Perot-type optical cavity

    NASA Astrophysics Data System (ADS)

    Cabrera, H.; Cedeño, E.; Grima, P.; Marín, E.; Calderón, A.; Delgado, O.

    2016-05-01

    We developed a thermal lens microscope equipped with a passive optical cavity, which provides optical feedback for multiple passes of the probe laser beam to enhance sensitivity. Considering the maximum absorption peak for Fe(II) at the 532 nm wavelength, we achieved a 6.6-fold decrease in the limit of detection (LOD), to a level of 0.077 μg·l⁻¹, compared with the configuration without a cavity. The possibility of using thermal lens detection combined with an optical resonator was thus demonstrated, and a drastic thermal lens signal enhancement was achieved using very low excitation power. The corresponding LOD for Fe(II) was further decreased to the level of 0.006 μg·l⁻¹, which represents an 85-fold decrease of the LOD value. This setup is a promising device that can be applied as a sensitive tool for detecting chemical traces in small volumes of solution.

  16. Uncertainty analysis for Probable Maximum Precipitation estimates

    NASA Astrophysics Data System (ADS)

    Micovic, Zoran; Schaefer, Melvin G.; Taylor, George H.

    2015-02-01

    An analysis of uncertainty associated with Probable Maximum Precipitation (PMP) estimates is presented. The focus of the study is firmly on PMP estimates derived through meteorological analyses and not on statistically derived PMPs. Theoretical PMP cannot be computed directly and operational PMP estimates are developed through a stepwise procedure using a significant degree of subjective professional judgment. This paper presents a methodology for portraying the uncertain nature of PMP estimation by analyzing individual steps within the PMP derivation procedure whereby for each parameter requiring judgment, a set of possible values is specified and accompanied by expected probabilities. The resulting range of possible PMP values can be compared with the previously derived operational single-value PMP, providing measures of the conservatism and variability of the original estimate. To our knowledge, this is the first uncertainty analysis conducted for a PMP derived through meteorological analyses. The methodology was tested on the La Joie Dam watershed in British Columbia. The results indicate that the commonly used single-value PMP estimate could be more than 40% higher when possible changes in various meteorological variables used to derive the PMP are considered. The findings of this study imply that PMP estimates should always be characterized as a range of values recognizing the significant uncertainties involved in PMP estimation. In fact, we do not know at this time whether precipitation is actually upper-bounded, and if precipitation is upper-bounded, how closely PMP estimates approach the theoretical limit.

  17. Maximum likelihood inference of reticulate evolutionary histories.

    PubMed

    Yu, Yun; Dong, Jianrong; Liu, Kevin J; Nakhleh, Luay

    2014-11-18

    Hybridization plays an important role in the evolution of certain groups of organisms, adaptation to their environments, and diversification of their genomes. The evolutionary histories of such groups are reticulate, and methods for reconstructing them are still in their infancy and have limited applicability. We present a maximum likelihood method for inferring reticulate evolutionary histories while accounting simultaneously for incomplete lineage sorting. Additionally, we propose methods for assessing confidence in the amount of reticulation and the topology of the inferred evolutionary history. Our method obtains accurate estimates of reticulate evolutionary histories on simulated datasets. Furthermore, our method provides support for a hypothesis of a reticulate evolutionary history inferred from a set of house mouse (Mus musculus) genomes. As evidence of hybridization in eukaryotic groups accumulates, it is essential to have methods that infer reticulate evolutionary histories. The work we present here allows for such inference and provides a significant step toward putting phylogenetic networks on par with phylogenetic trees as a model of capturing evolutionary relationships. PMID:25368173

  18. CORA: Emission Line Fitting with Maximum Likelihood

    NASA Astrophysics Data System (ADS)

    Ness, Jan-Uwe; Wichmann, Rainer

    2011-12-01

    The advent of pipeline-processed data both from space- and ground-based observatories often disposes of the need of full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.
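
    A minimal sketch of the Poisson-likelihood line fitting idea described above (not CORA itself; toy counts and a single Gaussian line on a flat background are assumed):

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import gammaln

        def neg_log_like(params, wave, counts):
            """Poisson log-likelihood of a Gaussian line plus flat background."""
            amp, center, sigma, bkg = params
            model = bkg + amp * np.exp(-0.5 * ((wave - center) / sigma) ** 2)
            model = np.clip(model, 1e-12, None)
            return -np.sum(counts * np.log(model) - model - gammaln(counts + 1.0))

        # Toy low-count spectrum around a hypothetical 13.5 Angstrom line.
        rng = np.random.default_rng(2)
        wave = np.linspace(13.3, 13.7, 200)
        truth = 0.5 + 8.0 * np.exp(-0.5 * ((wave - 13.5) / 0.01) ** 2)
        counts = rng.poisson(truth)
        res = minimize(neg_log_like, x0=[5.0, 13.49, 0.02, 1.0], args=(wave, counts),
                       method="Nelder-Mead")
        print(np.round(res.x, 3))   # fitted amplitude, line center, width, background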

  19. CORA - emission line fitting with Maximum Likelihood

    NASA Astrophysics Data System (ADS)

    Ness, J.-U.; Wichmann, R.

    2002-07-01

    The advent of pipeline-processed data both from space- and ground-based observatories often disposes of the need of full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.

  20. Maximum windmill efficiency in finite time

    NASA Astrophysics Data System (ADS)

    Huleihil, Mahmoud

    2009-05-01

    The fraction of the kinetic energy of the wind impinging on the rotor-swept area that a wind turbine can convert to useful power has been shown by Betz, in an idealized laminar-flow model, to have an upper limit of 16/27, or approximately 59% [I. H. Shames, Mechanics of Fluids, 2nd ed. (McGraw-Hill, New York, 1982), pp. A26-A31]. This figure is known as the Betz number. Other studies [A. Rauh and W. Seelret, Appl. Energy 17, 15 (1984)] suggested that this figure should be considered a guideline. In this paper, a new model is introduced and its efficiency at maximum power output is derived. The derived value is shown to be a function of the Betz number B and is given by the formula ηmp = 1 - √(1 - B). This value is 36.2%, which agrees well with the efficiencies of actually operating wind turbines. As a guideline, the wind turbine efficiency can be considered to be within the range of the two numbers of merit, the Betz number and ηmp.
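
    A quick arithmetic check of the quoted result, assuming nothing beyond the formula itself:

        import math

        B = 16.0 / 27.0                      # Betz number
        eta_mp = 1.0 - math.sqrt(1.0 - B)    # efficiency at maximum power output
        print(f"Betz number = {B:.3f}, eta_mp = {eta_mp:.3f}")   # about 0.593 and 0.362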

  1. Theoretical Estimate of Maximum Possible Nuclear Explosion

    DOE R&D Accomplishments Database

    Bethe, H. A.

    1950-01-31

    The maximum nuclear accident which could occur in a Na-cooled, Be-moderated, Pu- and power-producing reactor is estimated theoretically. (T.R.H.) Results of nuclear calculations for a variety of compositions of fast, heterogeneous, sodium-cooled, U-235-fueled, plutonium- and power-producing reactors are reported. Core compositions typical of plate-, pin-, or wire-type fuel elements and with uranium as metal, alloy, and oxide were considered. These compositions included atom ratios in the following ranges: U-238 to U-235 from 2 to 8; sodium to U-235 from 1.5 to 12; iron to U-235 from 5 to 18; and vanadium to U-235 from 11 to 33. Calculations were performed to determine the effect of lead and iron reflectors between the core and blanket. Both natural and depleted uranium were evaluated as the blanket fertile material. Reactors were compared on a basis of conversion ratio, specific power, and the product of both. The calculated results are in general agreement with the experimental results from fast reactor assemblies. An analysis of the effect of new cross-section values as they became available is included. (auth)

  2. TRENDS IN ESTIMATED MIXING DEPTH DAILY MAXIMUMS

    SciTech Connect

    Buckley, R; Amy DuPont, A; Robert Kurzeja, R; Matt Parker, M

    2007-11-12

    Mixing depth is an important quantity in the determination of air pollution concentrations. Fire weather forecasts depend strongly on estimates of the mixing depth as a means of determining the altitude and dilution (ventilation rates) of smoke plumes. The Savannah River United States Forest Service (USFS) routinely conducts prescribed fires at the Savannah River Site (SRS), a heavily wooded Department of Energy (DOE) facility located in southwest South Carolina. For many years, the Savannah River National Laboratory (SRNL) has provided forecasts of weather conditions in support of the fire program, including an estimated mixing depth using potential temperature and turbulence change with height at a given location. This paper examines trends in the average estimated mixing depth daily maximum at the SRS over an extended period of time (4.75 years) derived from numerical atmospheric simulations using two versions of the Regional Atmospheric Modeling System (RAMS). This allows for differences to be seen between the model versions, as well as trends on a multi-year time frame. In addition, comparisons of predicted mixing depth for individual days in which special balloon soundings were released are also discussed.

  3. PAML 4: phylogenetic analysis by maximum likelihood.

    PubMed

    Yang, Ziheng

    2007-08-01

    PAML, currently in version 4, is a package of programs for phylogenetic analyses of DNA and protein sequences using maximum likelihood (ML). The programs may be used to compare and test phylogenetic trees, but their main strengths lie in the rich repertoire of evolutionary models implemented, which can be used to estimate parameters in models of sequence evolution and to test interesting biological hypotheses. Uses of the programs include estimation of synonymous and nonsynonymous rates (d(N) and d(S)) between two protein-coding DNA sequences, inference of positive Darwinian selection through phylogenetic comparison of protein-coding genes, reconstruction of ancestral genes and proteins for molecular restoration studies of extinct life forms, combined analysis of heterogeneous data sets from multiple gene loci, and estimation of species divergence times incorporating uncertainties in fossil calibrations. This note discusses some of the major applications of the package, which includes example data sets to demonstrate their use. The package is written in ANSI C and runs under Windows, Mac OS X, and UNIX systems. It is available at http://abacus.gene.ucl.ac.uk/software/paml.html.

  4. Visual tracking by separability-maximum boosting

    NASA Astrophysics Data System (ADS)

    Hou, Jie; Mao, Yao-bin; Sun, Jin-sheng

    2013-10-01

    Recently, visual tracking has been formulated as a classification problem whose task is to detect the object from the scene with a binary classifier. Boosting-based online feature selection methods, which adapt the classifier to appearance changes by choosing the most discriminative features, have been demonstrated to be effective for visual tracking. A major problem of such online feature selection methods is that an inaccurate classifier may give imprecise tracking windows. Tracking error accumulates when the tracker trains the classifier with misaligned samples and finally leads to drifting. Separability-maximum boosting (SMBoost), an alternative form of AdaBoost which characterizes the separability between the object and the scene by their means and covariance matrices, is proposed. SMBoost needs only the means and covariance matrices during training and can easily be adapted to online learning problems by estimating these statistics incrementally. Experiments on UCI machine learning datasets show that SMBoost is as accurate as offline AdaBoost and significantly outperforms Oza's online boosting. The more accurate classifier stabilizes the tracker on challenging video sequences. Empirical results also demonstrate improvements in tracking precision and speed over state-of-the-art trackers.
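
    The abstract notes that SMBoost needs only the class means and covariance matrices, estimated incrementally for online learning. Below is a minimal sketch of such an incremental update (a standard Welford-style recursion with assumed array shapes, not the authors' implementation):

```python
import numpy as np

class RunningStats:
    """Incrementally tracks the mean and covariance of feature vectors,
    so class statistics can be updated online as samples arrive."""

    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.M2 = np.zeros((dim, dim))   # running sum of outer products of deviations

    def update(self, x):
        x = np.asarray(x, dtype=float)
        self.n += 1
        delta = x - self.mean            # deviation from the old mean
        self.mean += delta / self.n
        self.M2 += np.outer(delta, x - self.mean)

    def covariance(self):
        return self.M2 / (self.n - 1) if self.n > 1 else np.zeros_like(self.M2)

# Usage: feed object (or background) feature vectors as they are observed.
stats = RunningStats(dim=3)
for sample in np.random.default_rng(0).standard_normal((100, 3)):
    stats.update(sample)
print(stats.mean, stats.covariance())
```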

  5. Approach trajectory planning system for maximum concealment

    NASA Technical Reports Server (NTRS)

    Warner, David N., Jr.

    1986-01-01

    A computer-simulation study was undertaken to investigate a maximum concealment guidance technique (pop-up maneuver), which military aircraft may use to capture a glide path from masked, low-altitude flight typical of terrain following/terrain avoidance flight enroute. The guidance system applied to this problem is the Fuel Conservative Guidance System. Previous studies using this system have concentrated on the saving of fuel in basically conventional land and ship-based operations. Because this system is based on energy-management concepts, it also has direct application to the pop-up approach which exploits aircraft performance. Although the algorithm was initially designed to reduce fuel consumption, the commanded deceleration is at its upper limit during the pop-up and, therefore, is a good approximation of a minimum-time solution. Using the model of a powered-lift aircraft, the results of the study demonstrated that guidance commands generated by the system are well within the capability of an automatic flight-control system. Results for several initial approach conditions are presented.

  6. Maximum likelihood continuity mapping for fraud detection

    SciTech Connect

    Hogden, J.

    1997-05-01

    The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of the sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction--important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" means the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.
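
    The abstract positions MALCOM against N-gram analysis for scoring sequences of categorical data. A minimal sketch of that N-gram baseline idea follows (an add-one-smoothed bigram log-likelihood used as an anomaly score; the procedure tokens and helper names are invented for illustration, and this is not the MALCOM model):

```python
import math
from collections import Counter

def bigram_log_likelihood(sequence, bigram_counts, unigram_counts, vocab_size):
    """Score a categorical sequence by its smoothed bigram log-likelihood;
    unusually low scores flag candidate anomalies."""
    score = 0.0
    for prev, cur in zip(sequence, sequence[1:]):
        num = bigram_counts[(prev, cur)] + 1          # add-one smoothing
        den = unigram_counts[prev] + vocab_size
        score += math.log(num / den)
    return score

# Build counts from "typical" training histories, then score new ones.
train = [["exam", "xray", "cast"], ["exam", "lab", "rx"]]
bigrams, unigrams = Counter(), Counter()
for seq in train:
    unigrams.update(seq[:-1])
    bigrams.update(zip(seq, seq[1:]))
vocab_size = len({tok for seq in train for tok in seq})

print(bigram_log_likelihood(["exam", "xray", "cast"], bigrams, unigrams, vocab_size))
print(bigram_log_likelihood(["exam", "cast", "cast"], bigrams, unigrams, vocab_size))  # lower score
```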

  7. Airframe structural optimization for maximum fatigue life

    NASA Technical Reports Server (NTRS)

    Schrage, D. P.; Sareen, A. K.

    1990-01-01

    A methodology is outlined for optimization of airframe structures under dynamic constraints to maximize service life of specified fatigue-critical components. For practical airframe structures, this methodology describes the development of sensitivity analysis and computational procedures for constraints on the steady-state dynamic response displacements and stresses. Strain energy consideration is used for selection of structural members for modification. Development of a design model and its relation to an analysis model, as well as ways to reduce the dimensionality of the problem via approximation concepts, are described. This methodology is demonstrated using an elastic stick model for the MH-53J helicopter to show service life improvements of the hinge fold region.

  8. 46 CFR 154.556 - Cargo hose: Maximum working pressure.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 5 2012-10-01 2012-10-01 false Cargo hose: Maximum working pressure. 154.556 Section... Equipment Cargo Hose § 154.556 Cargo hose: Maximum working pressure. A cargo hose must have a maximum working pressure not less than the maximum pressure to which it may be subjected and at least 1034...

  9. 46 CFR 154.556 - Cargo hose: Maximum working pressure.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 5 2010-10-01 2010-10-01 false Cargo hose: Maximum working pressure. 154.556 Section... Equipment Cargo Hose § 154.556 Cargo hose: Maximum working pressure. A cargo hose must have a maximum working pressure not less than the maximum pressure to which it may be subjected and at least 1034...

  10. 14 CFR 23.1527 - Maximum operating altitude.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Maximum operating altitude. 23.1527 Section... Information § 23.1527 Maximum operating altitude. (a) The maximum altitude up to which operation is allowed... established. (b) A maximum operating altitude limitation of not more than 25,000 feet must be established...

  11. 14 CFR 23.1527 - Maximum operating altitude.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Maximum operating altitude. 23.1527 Section... Information § 23.1527 Maximum operating altitude. (a) The maximum altitude up to which operation is allowed... established. (b) A maximum operating altitude limitation of not more than 25,000 feet must be established...

  12. 14 CFR 23.1527 - Maximum operating altitude.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Maximum operating altitude. 23.1527 Section... Information § 23.1527 Maximum operating altitude. (a) The maximum altitude up to which operation is allowed... established. (b) A maximum operating altitude limitation of not more than 25,000 feet must be established...

  13. 14 CFR 23.1527 - Maximum operating altitude.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Maximum operating altitude. 23.1527 Section... Information § 23.1527 Maximum operating altitude. (a) The maximum altitude up to which operation is allowed... established. (b) A maximum operating altitude limitation of not more than 25,000 feet must be established...

  14. 21 CFR 17.2 - Maximum penalty amounts.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 1 2011-04-01 2011-04-01 false Maximum penalty amounts. 17.2 Section 17.2 Food... PENALTIES HEARINGS § 17.2 Maximum penalty amounts. The following table shows maximum civil monetary... Penalty Amounts U.S.C. Section Former Maximum Penalty Amount (in dollars) Assessment Method Date of...

  15. 20 CFR 211.14 - Maximum creditable compensation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Maximum creditable compensation. 211.14... CREDITABLE RAILROAD COMPENSATION § 211.14 Maximum creditable compensation. Maximum creditable compensation... Employment Accounts shall notify each employer of the amount of maximum creditable compensation applicable...

  16. 24 CFR 941.306 - Maximum project cost.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 4 2011-04-01 2011-04-01 false Maximum project cost. 941.306... DEVELOPMENT PUBLIC HOUSING DEVELOPMENT Application and Proposal § 941.306 Maximum project cost. (a) Calculation of maximum project cost. The maximum project cost represents the total amount of public...

  17. 24 CFR 941.306 - Maximum project cost.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Maximum project cost. 941.306... DEVELOPMENT PUBLIC HOUSING DEVELOPMENT Application and Proposal § 941.306 Maximum project cost. (a) Calculation of maximum project cost. The maximum project cost represents the total amount of public...

  18. 49 CFR 230.24 - Maximum allowable stress.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 4 2013-10-01 2013-10-01 false Maximum allowable stress. 230.24 Section 230.24... Allowable Stress § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the...

  19. 49 CFR 230.24 - Maximum allowable stress.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 4 2011-10-01 2011-10-01 false Maximum allowable stress. 230.24 Section 230.24... Allowable Stress § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the...

  20. 49 CFR 230.24 - Maximum allowable stress.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false Maximum allowable stress. 230.24 Section 230.24... Allowable Stress § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the...

  1. 49 CFR 230.24 - Maximum allowable stress.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 4 2012-10-01 2012-10-01 false Maximum allowable stress. 230.24 Section 230.24... Allowable Stress § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the...

  2. Gluten Sensitivity.

    PubMed

    Catassi, Carlo

    2015-01-01

    Non-celiac gluten sensitivity (NCGS) is a syndrome characterized by intestinal and extraintestinal symptoms related to the ingestion of gluten-containing food in subjects who are not affected by either celiac disease (CD) or wheat allergy (WA). The prevalence of NCGS is not clearly defined yet. Indirect evidence suggests that NCGS is slightly more common than CD, the latter affecting around 1% of the general population. NCGS has been mostly described in adults, particularly in females in the age group of 30-50 years; however, pediatric case series have also been reported. Since NCGS may be transient, gluten tolerance needs to be reassessed over time in patients with NCGS. NCGS is characterized by symptoms that usually occur soon after gluten ingestion, disappear with gluten withdrawal, and relapse following gluten challenge within hours/days. The 'classical' presentation of NCGS is a combination of irritable bowel syndrome-like symptoms, including abdominal pain, bloating, bowel habit abnormalities (either diarrhea or constipation), and systemic manifestations such as 'foggy mind', headache, fatigue, joint and muscle pain, leg or arm numbness, dermatitis (eczema or skin rash), depression, and anemia. In recent years, several studies explored the relationship between the ingestion of gluten-containing food and the appearance of neurological and psychiatric disorders/symptoms like ataxia, peripheral neuropathy, schizophrenia, autism, depression, anxiety, and hallucinations (so-called gluten psychosis). The diagnosis of NCGS should be considered in patients with persistent intestinal and/or extraintestinal complaints showing a normal result of the CD and WA serological markers on a gluten-containing diet, usually reporting worsening of symptoms after eating gluten-rich food. NCGS should not be an exclusion diagnosis only. Unfortunately, no biomarker is sensitive and specific enough for diagnostic purposes; therefore, the diagnosis of NCGS is currently based on

  3. Gluten Sensitivity.

    PubMed

    Catassi, Carlo

    2015-01-01

    Non-celiac gluten sensitivity (NCGS) is a syndrome characterized by intestinal and extraintestinal symptoms related to the ingestion of gluten-containing food in subjects who are not affected by either celiac disease (CD) or wheat allergy (WA). The prevalence of NCGS is not clearly defined yet. Indirect evidence suggests that NCGS is slightly more common than CD, the latter affecting around 1% of the general population. NCGS has been mostly described in adults, particularly in females in the age group of 30-50 years; however, pediatric case series have also been reported. Since NCGS may be transient, gluten tolerance needs to be reassessed over time in patients with NCGS. NCGS is characterized by symptoms that usually occur soon after gluten ingestion, disappear with gluten withdrawal, and relapse following gluten challenge within hours/days. The 'classical' presentation of NCGS is a combination of irritable bowel syndrome-like symptoms, including abdominal pain, bloating, bowel habit abnormalities (either diarrhea or constipation), and systemic manifestations such as 'foggy mind', headache, fatigue, joint and muscle pain, leg or arm numbness, dermatitis (eczema or skin rash), depression, and anemia. In recent years, several studies explored the relationship between the ingestion of gluten-containing food and the appearance of neurological and psychiatric disorders/symptoms like ataxia, peripheral neuropathy, schizophrenia, autism, depression, anxiety, and hallucinations (so-called gluten psychosis). The diagnosis of NCGS should be considered in patients with persistent intestinal and/or extraintestinal complaints showing a normal result of the CD and WA serological markers on a gluten-containing diet, usually reporting worsening of symptoms after eating gluten-rich food. NCGS should not be an exclusion diagnosis only. Unfortunately, no biomarker is sensitive and specific enough for diagnostic purposes; therefore, the diagnosis of NCGS is currently based on

  4. Attitude Towards Physics and Additional Mathematics Achievement Towards Physics Achievement

    ERIC Educational Resources Information Center

    Veloo, Arsaythamby; Nor, Rahimah; Khalid, Rozalina

    2015-01-01

    The purpose of this research is to identify the difference in students' attitude towards Physics and Additional Mathematics achievement based on gender and relationship between attitudinal variables towards Physics and Additional Mathematics achievement with achievement in Physics. This research focused on six variables, which is attitude towards…

  5. The Impact of Reading Achievement on Overall Academic Achievement

    ERIC Educational Resources Information Center

    Churchwell, Dawn Earheart

    2009-01-01

    This study examined the relationship between reading achievement and achievement in other subject areas. The purpose of this study was to determine if there was a correlation between reading scores as measured by the Standardized Test for the Assessment of Reading (STAR) and academic achievement in language arts, math, science, and social studies…

  6. Highly sensitive multi-core flat fiber surface plasmon resonance refractive index sensor.

    PubMed

    Rifat, Ahmmed A; Mahdiraji, G A; Sua, Yong Meng; Ahmed, Rajib; Shee, Y G; Adikan, F R Mahamd

    2016-02-01

    A simple multi-core flat fiber (MCFF) based surface plasmon resonance (SPR) sensor operating at telecommunication wavelengths is proposed for refractive index sensing. Chemically stable gold (Au) and titanium dioxide (TiO(2)) layers are used outside the fiber structure to realize a simple detection mechanism. The modeled sensor shows an average wavelength interrogation sensitivity of 9,600 nm/RIU (Refractive Index Unit) and a maximum sensitivity of 23,000 nm/RIU in the sensing ranges of 1.46-1.485 and 1.47-1.475, respectively. Moreover, a refractive index resolution of 4.35 × 10(-6) is demonstrated. Additionally, the proposed sensor shows a maximum amplitude interrogation sensitivity of 820 RIU(-1), with a sensor resolution of 1.22 × 10(-5) RIU. To the best of our knowledge, the proposed sensor achieves the highest wavelength interrogation sensitivity among the reported fiber-based SPR sensors. Finally, we anticipate that this novel and highly sensitive MCFF SPR sensor will find potential applications in real-time remote sensing and monitoring, ultimately enabling inexpensive and accurate detection of chemical and biochemical analytes. PMID:26906823
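
    A quick check of how the quoted resolutions follow from the quoted sensitivities (assuming a 0.1 nm detectable wavelength shift and a 1% detectable amplitude change, neither of which is stated explicitly in the abstract):

```python
# resolution = smallest detectable instrument change / sensitivity
wl_sensitivity = 23_000          # nm/RIU, maximum wavelength interrogation sensitivity
amp_sensitivity = 820            # RIU^-1, maximum amplitude interrogation sensitivity

wl_resolution = 0.1 / wl_sensitivity      # assumes 0.1 nm wavelength resolution
amp_resolution = 0.01 / amp_sensitivity   # assumes 1% amplitude resolution

print(f"{wl_resolution:.2e} RIU")    # 4.35e-06, matching the abstract
print(f"{amp_resolution:.2e} RIU")   # 1.22e-05, matching the abstract
```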

  7. Highly sensitive multi-core flat fiber surface plasmon resonance refractive index sensor.

    PubMed

    Rifat, Ahmmed A; Mahdiraji, G A; Sua, Yong Meng; Ahmed, Rajib; Shee, Y G; Adikan, F R Mahamd

    2016-02-01

    A simple multi-core flat fiber (MCFF) based surface plasmon resonance (SPR) sensor operating at telecommunication wavelengths is proposed for refractive index sensing. Chemically stable gold (Au) and titanium dioxide (TiO(2)) layers are used outside the fiber structure to realize a simple detection mechanism. The modeled sensor shows an average wavelength interrogation sensitivity of 9,600 nm/RIU (Refractive Index Unit) and a maximum sensitivity of 23,000 nm/RIU in the sensing ranges of 1.46-1.485 and 1.47-1.475, respectively. Moreover, a refractive index resolution of 4.35 × 10(-6) is demonstrated. Additionally, the proposed sensor shows a maximum amplitude interrogation sensitivity of 820 RIU(-1), with a sensor resolution of 1.22 × 10(-5) RIU. To the best of our knowledge, the proposed sensor achieves the highest wavelength interrogation sensitivity among the reported fiber-based SPR sensors. Finally, we anticipate that this novel and highly sensitive MCFF SPR sensor will find potential applications in real-time remote sensing and monitoring, ultimately enabling inexpensive and accurate detection of chemical and biochemical analytes.

  8. On the Achievable Throughput Over TVWS Sensor Networks.

    PubMed

    Caleffi, Marcello; Cacciapuoti, Angela Sara

    2016-01-01

    In this letter, we study the throughput achievable by an unlicensed sensor network operating over TV white space spectrum in the presence of coexistence interference. We first analytically derive the achievable throughput as a function of the channel ordering. Then, we show that the problem of deriving the maximum expected throughput through exhaustive search is computationally unfeasible. Finally, we derive a computationally efficient algorithm with polynomial-time complexity to compute the channel set maximizing the expected throughput and, stemming from this, we derive a closed-form expression for the maximum expected throughput. Numerical simulations validate the theoretical analysis. PMID:27043565

  9. On the Achievable Throughput Over TVWS Sensor Networks.

    PubMed

    Caleffi, Marcello; Cacciapuoti, Angela Sara

    2016-03-30

    In this letter, we study the throughput achievable by an unlicensed sensor network operating over TV white space spectrum in the presence of coexistence interference. We first analytically derive the achievable throughput as a function of the channel ordering. Then, we show that the problem of deriving the maximum expected throughput through exhaustive search is computationally unfeasible. Finally, we derive a computationally efficient algorithm with polynomial-time complexity to compute the channel set maximizing the expected throughput and, stemming from this, we derive a closed-form expression for the maximum expected throughput. Numerical simulations validate the theoretical analysis.

  10. On the Achievable Throughput Over TVWS Sensor Networks

    PubMed Central

    Caleffi, Marcello; Cacciapuoti, Angela Sara

    2016-01-01

    In this letter, we study the throughput achievable by an unlicensed sensor network operating over TV white space spectrum in the presence of coexistence interference. We first analytically derive the achievable throughput as a function of the channel ordering. Then, we show that the problem of deriving the maximum expected throughput through exhaustive search is computationally unfeasible. Finally, we derive a computationally efficient algorithm with polynomial-time complexity to compute the channel set maximizing the expected throughput and, stemming from this, we derive a closed-form expression for the maximum expected throughput. Numerical simulations validate the theoretical analysis. PMID:27043565

  11. The maximum efficiency of nano heat engines depends on more than temperature

    NASA Astrophysics Data System (ADS)

    Woods, Mischa; Ng, Nelly; Wehner, Stephanie

    Sadi Carnot's theorem regarding the maximum efficiency of heat engines is considered to be of fundamental importance in the theory of heat engines and thermodynamics. Here, we show that at the nano and quantum scale this law needs to be revised, in the sense that more information about the bath than its temperature is required to decide whether maximum efficiency can be achieved. In particular, we derive new fundamental limitations on the efficiency of heat engines at the nano and quantum scale which show that the Carnot efficiency can only be achieved under special circumstances, and we derive a new maximum efficiency for the others. A preprint can be found at arXiv:1506.02322 [quant-ph]. Funding: Singapore's MOE Tier 3A Grant and STW, Netherlands.
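
    For context, the classical Carnot bound that the abstract revisits is the textbook result below (the nanoscale corrections derived by the authors are not reproduced here):

```latex
% Classical Carnot limit for a heat engine operating between a hot bath at
% temperature T_h and a cold bath at temperature T_c:
\[
  \eta \;\le\; \eta_{\mathrm{Carnot}} = 1 - \frac{T_c}{T_h}
\]
```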

  12. Paleofield of early space, maximum estimates

    NASA Astrophysics Data System (ADS)

    Kletetschka, G.; Wasilewski, P. J.

    2001-12-01

    Magnetic records in meteorites have been used by many to estimate the paleofield in the early history of our solar system. Modified Thellier-Thellier analyses have provided paleofield intensities in a broad range of values (10,000-200,000 nT). However, in meteorites we can never assume that the NRM is a TRM. In fact, any meteorite with plessite or grains with the "M"-shaped diffusion profiles will contain anisotropic and interacting mineralogies for which we do not have a physical model. The carbonaceous chondrites, which are likely to have their useful NRMs associated with hydrothermal events against a thermochemical remanence, are very difficult to deal with, since even if a non-thermal paleofield is applied there is no calibration basis. We are essentially addressing these issues, since the ultimate goal of meteorite studies is to provide confident estimates of early solar system magnetic fields. Modes of remanence other than thermal are considered to drive the estimated paleofield values even higher, and thus the paleofield value estimate commonly serves as a minimum estimate. Using the strict assumptions of the Thellier-Thellier method and considering the presence of multiple kinds of remanence, we show that the estimated paleofield values are not minima but maxima. In the key equation for paleofield estimates, M_t / H_u = M_tlab / H_lab, M_t is the natural remanence of thermal origin and is contained within the natural remanent magnetization M_nrm, which also contains additional modes of remanence (M_1 + ... + M_n); H_u is the unknown paleofield intensity, and M_tlab is the thermoremanence acquired in the laboratory field H_lab. Thus M_t = M_nrm - (M_1 + ... + M_n), and the unknown field takes the form H_u = (M_nrm - (M_1 + ... + M_n)) H_lab / M_tlab. This equation clearly shows that if none of the remanence is thermal, H_u approaches zero. Thus the paleofield values derived using the Thellier-Thellier approach are not minimum but maximum estimates.
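
    A minimal numerical illustration of the relation above (the magnetization values are invented purely to show the algebra; as the non-thermal components grow, the inferred field shrinks toward zero):

```python
# H_u = (M_nrm - (M_1 + ... + M_n)) * H_lab / M_tlab
M_nrm   = 5.0e-6              # total natural remanent magnetization (arbitrary units)
M_other = [1.5e-6, 2.0e-6]    # non-thermal remanence components M_1..M_n
M_tlab  = 4.0e-6              # thermoremanence acquired in the laboratory field
H_lab   = 50_000              # laboratory field, nT

M_t = M_nrm - sum(M_other)    # thermal part of the NRM
H_u = M_t * H_lab / M_tlab    # inferred paleofield intensity
print(f"H_u = {H_u:.0f} nT")  # 18750 nT here; goes to 0 if none of the NRM is thermal
```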

  13. Maximum likelihood molecular clock comb: analytic solutions.

    PubMed

    Chor, Benny; Khetan, Amit; Snir, Sagi

    2006-04-01

    Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model--three taxa, two-state characters, under a molecular clock. Four-taxon rooted trees have two topologies--the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). In a previous work, we devised a closed-form analytic solution for the ML molecular clock fork. In this work, we extend the state of the art in analytic solutions for ML trees to the family of all four-taxon trees under the molecular clock assumption. The change from the fork topology to the comb incurs a major increase in the complexity of the underlying algebraic system and requires novel techniques and approaches. We combine the ultrametric properties of molecular clock trees with the Hadamard conjugation to derive a number of topology-dependent identities. Employing these identities, we substantially simplify the system of polynomial equations. We finally use tools from algebraic geometry (e.g., Gröbner bases, ideal saturation, resultants) and employ symbolic algebra software to obtain analytic solutions for the comb. We show that, in contrast to the fork, the comb has no closed-form solutions (expressed by radicals in the input data). In general, four-taxon trees can have multiple ML points. In contrast, we can now prove that under the molecular clock assumption the comb has a unique (local and global) ML point. (Such uniqueness was previously shown for the fork.)

  14. Photopolymer material sensitized by xanthene dyes for holographic recording using forbidden singlet–triplet electronic transitions

    NASA Astrophysics Data System (ADS)

    Shelkovnikov, Vladimir; Vasiljev, Evgeny; Russkih, Vladimlen; Berezhnaya, Viktoria

    2016-07-01

    A new holographic photopolymer material is developed. The photopolymer material is sensitized by dyes of the xanthene and thioxanthene series which contain iodine and bromine heavy atoms. Holographic recording was carried out during excitation of forbidden singlet–triplet electronic transitions of the dyes. Thioerythrosin triethylammonium was identified as the most effective sensitizer among the tested dyes. The spectral absorption band of the singlet–triplet electronic transition of the dye lies in the red spectral range from 600 to 700 nm. The sensitivity of the photopolymer material to radiation at 633 nm wavelength is 180 mJ cm-2. The concentrations of the main components of the photopolymer compositions were optimized in order to achieve maximum efficiency of holographic recording.

  15. Cherokee Culture and School Achievement.

    ERIC Educational Resources Information Center

    Brown, Anthony D.

    1980-01-01

    Compares the effect of cooperative and competitive behaviors of Cherokee and Anglo American elementary school students on academic achievement. Suggests changes in teaching techniques and lesson organization that might raise academic achievement while taking into consideration tribal traditions that limit scholastic achievement in an…

  16. Tuberculin sensitivity.

    PubMed

    Eason, R J

    1987-06-01

    A prospective study of tuberculin sensitivity has been conducted among 3610 subjects under 20 years old in the Western Province of the Solomon Islands. Mantoux positivity (greater than or equal to mm induration after 5 TU) fell from 81% during the 6 months following birth BCG vaccination to 13% for children aged 1-8 years, among whom it was not significantly higher than the rate of 9% noted for unvaccinated subjects. Birth BCG does not, therefore, hinder the diagnostic usefulness of tuberculin testing for such children. For the study population as a whole, BCG-induced Mantoux positivity was restricted to induration under 15 mm diameter. Stronger responses were considered specific for tuberculous infection and indicated a prevalence rate that rose from 2% to 16% with age. Accelerated BCG reactions recorded among 45% of 162 tuberculin non-reactors under 8 years old indicated that the waning of tuberculin responsiveness at this time could not be equated with loss of clinical protection against tuberculosis. PMID:2441657

  17. Maximum entropy models of ecosystem functioning

    NASA Astrophysics Data System (ADS)

    Bertram, Jason

    2014-12-01

    Using organism-level traits to deduce community-level relationships is a fundamental problem in theoretical ecology. This problem parallels the physical one of using particle properties to deduce macroscopic thermodynamic laws, which was successfully achieved with the development of statistical physics. Drawing on this parallel, theoretical ecologists from Lotka onwards have attempted to construct statistical mechanistic theories of ecosystem functioning. Jaynes' broader interpretation of statistical mechanics, which hinges on the entropy maximisation algorithm (MaxEnt), is of central importance here because the classical foundations of statistical physics do not have clear ecological analogues (e.g. phase space, dynamical invariants). However, models based on the information theoretic interpretation of MaxEnt are difficult to interpret ecologically. Here I give a broad discussion of statistical mechanical models of ecosystem functioning and the application of MaxEnt in these models. Emphasising the sample frequency interpretation of MaxEnt, I show that MaxEnt can be used to construct models of ecosystem functioning which are statistical mechanical in the traditional sense using a savanna plant ecology model as an example.
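
    A minimal sketch of the MaxEnt algorithm referred to above: find the discrete distribution that maximizes entropy subject to a fixed mean, which takes the familiar exponential form p_i ∝ exp(-λ x_i). The states and constraint value are invented for illustration and are not the savanna model:

```python
import numpy as np
from scipy.optimize import brentq

x = np.array([0.0, 1.0, 2.0, 3.0])   # hypothetical trait values (states)
mu = 1.2                              # imposed mean constraint <x> = mu

def mean_given_lam(lam):
    w = np.exp(-lam * x)              # unnormalized MaxEnt weights
    p = w / w.sum()
    return p @ x

# Solve the constraint equation for the Lagrange multiplier lambda.
lam = brentq(lambda l: mean_given_lam(l) - mu, -50.0, 50.0)
p = np.exp(-lam * x)
p /= p.sum()
print("MaxEnt distribution:", np.round(p, 4), " mean:", round(float(p @ x), 3))
```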

  18. Maximum entropy models of ecosystem functioning

    SciTech Connect

    Bertram, Jason

    2014-12-05

    Using organism-level traits to deduce community-level relationships is a fundamental problem in theoretical ecology. This problem parallels the physical one of using particle properties to deduce macroscopic thermodynamic laws, which was successfully achieved with the development of statistical physics. Drawing on this parallel, theoretical ecologists from Lotka onwards have attempted to construct statistical mechanistic theories of ecosystem functioning. Jaynes’ broader interpretation of statistical mechanics, which hinges on the entropy maximisation algorithm (MaxEnt), is of central importance here because the classical foundations of statistical physics do not have clear ecological analogues (e.g. phase space, dynamical invariants). However, models based on the information theoretic interpretation of MaxEnt are difficult to interpret ecologically. Here I give a broad discussion of statistical mechanical models of ecosystem functioning and the application of MaxEnt in these models. Emphasising the sample frequency interpretation of MaxEnt, I show that MaxEnt can be used to construct models of ecosystem functioning which are statistical mechanical in the traditional sense using a savanna plant ecology model as an example.

  19. Ring cavity fiber laser based on Fabry-Pérot interferometer for high-sensitive micro-displacement sensing

    NASA Astrophysics Data System (ADS)

    Bai, Yan; Yan, Feng-ping; Liu, Shuo; Tan, Si-yu; Wen, Xiao-dong

    2015-11-01

    A ring cavity fiber laser based on a Fabry-Pérot interferometer (FPI) is proposed and demonstrated experimentally for micro-displacement sensing. Simulation results show that the dips of the FPI transmission spectrum are sensitive to the cavity length of the FPI. With this characteristic, the relationship between wavelength shift and cavity length change can be established by means of an FPI formed by two aligned fiber end tips. A maximum sensitivity of 39.6 nm/μm is achieved experimentally, which is approximately 25 times higher than those in previous reports. The corresponding ring cavity fiber laser, with a displacement-measurement sensitivity of about 6 nm/μm, is implemented by applying the FPI as the filter. The proposed fiber laser has the advantages of simple structure, low cost and high sensitivity.

  20. Gauging the Nearness and Size of Cycle Maximum

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.; Hathaway, David H.

    2003-01-01

    A simple method for monitoring the nearness and size of the conventional cycle maximum for an ongoing sunspot cycle is examined. The method uses the observed maximum daily value and the maximum monthly mean value of international sunspot number, together with the maximum value of the 2-mo moving average of monthly mean sunspot number, to effect the estimation. For cycle 23, a maximum daily value of 246, a maximum monthly mean of 170.1, and a maximum 2-mo moving average of 148.9 were each observed in July 2000. Taken together, these values strongly suggest that the conventional maximum amplitude for cycle 23 would be approximately 124.5, occurring near July 2002 +/- 5 mo, very close to the now well-established conventional maximum amplitude and occurrence date for cycle 23: 120.8 in April 2000.

  1. Students’ Achievement Goals, Learning-Related Emotions and Academic Achievement

    PubMed Central

    Lüftenegger, Marko; Klug, Julia; Harrer, Katharina; Langer, Marie; Spiel, Christiane; Schober, Barbara

    2016-01-01

    In the present research, the recently proposed 3 × 2 model of achievement goals is tested and associations with achievement emotions and their joint influence on academic achievement are investigated. The study was conducted with 388 students using the 3 × 2 Achievement Goal Questionnaire including the six proposed goal constructs (task-approach, task-avoidance, self-approach, self-avoidance, other-approach, other-avoidance) and the enjoyment and boredom scales from the Achievement Emotion Questionnaire. Exam grades were used as an indicator of academic achievement. Findings from CFAs provided strong support for the proposed structure of the 3 × 2 achievement goal model. Self-based goals, other-based goals and task-approach goals predicted enjoyment. Task-approach goals negatively predicted boredom. Task-approach and other-approach predicted achievement. The indirect effects of achievement goals through emotion variables on achievement were assessed using bias-corrected bootstrapping. No mediation effects were found. Implications for educational practice are discussed. PMID:27199836

  2. High-Sensitivity Microwave Optics.

    ERIC Educational Resources Information Center

    Nunn, W. M., Jr.

    1981-01-01

    Describes a 3.33-cm wavelength (9 GHz) microwave system that achieves a high overall signal sensitivity and a well-collimated beam with moderate-size equipment. The system has been used to develop microwave versions of the Michelson interferometer, Bragg reflector, Brewster's law and total internal reflection, and Young's interference experiment.…

  3. Density sensitive hashing.

    PubMed

    Jin, Zhongming; Li, Cheng; Lin, Yue; Cai, Deng

    2014-08-01

    Nearest neighbor search is a fundamental problem in various research fields like machine learning, data mining and pattern recognition. Recently, hashing-based approaches, for example locality sensitive hashing (LSH), have been proved effective for scalable high-dimensional nearest neighbor search. Many hashing algorithms have their theoretical roots in random projection. Since these algorithms generate the hash tables (projections) randomly, a large number of hash tables (i.e., long codewords) is required in order to achieve both high precision and recall. To address this limitation, we propose a novel hashing algorithm called density sensitive hashing (DSH) in this paper. DSH can be regarded as an extension of LSH. By exploring the geometric structure of the data, DSH avoids purely random projection selection and uses those projection functions which best agree with the distribution of the data. Extensive experimental results on real-world data sets have shown that the proposed method achieves better performance than state-of-the-art hashing approaches.
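
    A minimal sketch of the random-projection LSH baseline that DSH extends (each hash bit is the sign of a random projection, so nearby points tend to share codes; the hyperplanes and data are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_bits = 16, 8
hyperplanes = rng.standard_normal((n_bits, dim))   # random projection directions

def lsh_code(x):
    """Binary hash code: one bit per random hyperplane (sign of the projection)."""
    return tuple((hyperplanes @ x > 0).astype(int))

x = rng.standard_normal(dim)
y = x + 0.05 * rng.standard_normal(dim)   # a near neighbour of x
z = rng.standard_normal(dim)              # an unrelated point

print(lsh_code(x) == lsh_code(y))   # typically True: near neighbours tend to collide
print(lsh_code(x) == lsh_code(z))   # typically False
```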

  4. Sensitivity of locally naturalized Panicum species to HPPD- and ALS-inhibiting herbicides in maize.

    PubMed

    De Cauwer, B; Geeroms, T; Claerhout, S; Reheul, D; Bulcke, R

    2012-01-01

    Until recently, the Panicum species Panicum schinzii Hack. (Transvaal millet), Panicum dichotomiflorum Michx. (Fall panicum) and Panicum capillare L. (Witchgrass) were completely overlooked in Belgium. Since 1970, these species have gradually spread and are now locally naturalized and abundant in and along maize fields. One of the possible reasons for their expansion in maize fields might be a lower sensitivity to postemergence herbicides acting against panicoid grasses, in particular those inhibiting 4-hydroxyphenyl pyruvate dioxygenase (HPPD) and acetolactate synthase (ALS). A dose-response pot experiment was conducted in the greenhouse to evaluate the effectiveness of five HPPD-inhibiting herbicides (sulcotrione, mesotrione, isoxaflutole, topramezone, tembotrione) and two ALS-inhibiting herbicides (nicosulfuron, foramsulfuron) for controlling Belgian populations of P. schinzii, P. dichotomiflorum and P. capillare. Shortly after sowing, half of all pots were covered with a film of activated charcoal to evaluate foliar activity of the applied herbicides. In another dose-response pot experiment, sensitivity of five local P. dichotomiflorum populations to HPPD-inhibitors and nicosulfuron was investigated. Finally, the influence of leaf stage at time of herbicide application on efficacy of topramezone and nicosulfuron for Panicum control was evaluated. Large interspecific differences in sensitivity to HPPD-inhibiting herbicides were observed. Panicum schinzii was sensitive (i.e., required a dose lower than the maximum authorized field dose to achieve 90% reduction in biomass) to tembotrione but moderately sensitive (i.e., required the maximum field dose) to topramezone and poorly sensitive (i.e., required a three-fold higher dose than the maximum field dose) to mesotrione and sulcotrione. However, P. dichotomiflorum, a species that morphologically closely resembles P. schinzii, was sensitive to mesotrione and topramezone but moderately sensitive to tembotrione. All Panicum

  5. Pattern formation, logistics, and maximum path probability

    NASA Astrophysics Data System (ADS)

    Kirkaldy, J. S.

    1985-05-01

    The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are

  6. Achievement as Resistance: The Development of a Critical Race Achievement Ideology among Black Achievers

    ERIC Educational Resources Information Center

    Carter, Dorinda J.

    2008-01-01

    In this article, Dorinda Carter examines the embodiment of a critical race achievement ideology in high-achieving black students. She conducted a yearlong qualitative investigation of the adaptive behaviors that nine high-achieving black students developed and employed to navigate the process of schooling at an upper-class, predominantly white,…

  7. 40 CFR 63.55 - Maximum achievable control technology (MACT) determinations for affected sources subject to case...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... (MACT) determinations for affected sources subject to case-by-case determination of equivalent emission... sources subject to case-by-case determination of equivalent emission limitations. (a) Requirements for... PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE...

  8. 40 CFR 63.55 - Maximum achievable control technology (MACT) determinations for affected sources subject to case...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... (MACT) determinations for affected sources subject to case-by-case determination of equivalent emission... sources subject to case-by-case determination of equivalent emission limitations. (a) Requirements for... PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE...

  9. 40 CFR 63.55 - Maximum achievable control technology (MACT) determinations for affected sources subject to case...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... (MACT) determinations for affected sources subject to case-by-case determination of equivalent emission... sources subject to case-by-case determination of equivalent emission limitations. (a) Requirements for... PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE...

  10. 40 CFR 63.55 - Maximum achievable control technology (MACT) determinations for affected sources subject to case...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (MACT) determinations for affected sources subject to case-by-case determination of equivalent emission... sources subject to case-by-case determination of equivalent emission limitations. (a) Requirements for... PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE...

  11. Optimizing the configuration of a superconducting photonic band gap accelerator cavity to increase the maximum achievable gradients

    NASA Astrophysics Data System (ADS)

    Simakov, Evgenya I.; Kurennoy, Sergey S.; O'Hara, James F.; Olivas, Eric R.; Shchegolkov, Dmitry Yu.

    2014-02-01

    We present a design of a superconducting rf photonic band gap (SRF PBG) accelerator cell with specially shaped rods in order to reduce peak surface magnetic fields and improve the effectiveness of the PBG structure for suppression of higher order modes (HOMs). The ability of PBG structures to suppress long-range wakefields is especially beneficial for superconducting electron accelerators for high power free-electron lasers (FELs), which are designed to provide high current continuous duty electron beams. Using PBG structures to reduce the prominent beam-breakup phenomena due to HOMs will allow significantly increased beam-breakup thresholds. As a result, there will be possibilities for increasing the operation frequency of SRF accelerators and for the development of novel compact high-current accelerator modules for the FELs.

  12. CDF's Higgs sensitivity status

    SciTech Connect

    Junk, Tom; /Illinois U., Urbana

    2005-10-01

    The combined sensitivity of CDF's current Standard Model Higgs boson searches is presented. The expected 95% CL limits on the production cross section times the relevant Higgs boson branching ratios are computed for the W{sup {+-}}H {yields} {ell}{sup {+-}}{nu}b{bar b}, ZH {yields} {nu}{bar {nu}}b{bar b}, gg {yields} H {yields} W{sup +}W{sup -}, and W{sup {+-}}H {yields} W{sup {+-}}W{sup +}W{sup -} channels as they stand as of October 2005, using results which were prepared for Summer 2005 conferences and a newer result from the gg {yields} H {yields} W{sup +}W{sup -} channel. Correlated and uncorrelated systematic uncertainties are taken into account, and the luminosity requirements for 95% CL exclusion, 3{sigma} evidence, and 5{sigma} discovery are computed for median experimental outcomes. A list of improvements required to achieve the sensitivity to a SM Higgs boson as quantified in the Higgs Sensitivity Working Group's report is provided.

  13. Multithreaded Algorithms for Maximum Matching in Bipartite Graphs

    SciTech Connect

    Azad, Md Ariful; Halappanavar, Mahantesh; Rajamanickam, Siva; Boman, Erik G.; Khan, Arif; Pothen, Alex

    2012-05-31

    We design, implement, and evaluate algorithms for computing a matching of maximum cardinality in a bipartite graph on multi-core and massively multithreaded computers. As computers with larger numbers of slower cores dominate the commodity processor market, the design of multithreaded algorithms to solve large matching problems becomes a necessity. Recent work on serial algorithms based on searching for augmenting paths for this problem has shown that their performance is sensitive to the order in which the vertices are processed for matching. In a multithreaded environment, imposing a serial order in which vertices are considered for matching would lead to loss of concurrency and performance. But this raises the question: would parallel matching algorithms on multithreaded machines improve performance over a serial algorithm? We answer this question in the affirmative. We report efficient multithreaded implementations of two key algorithms (Hopcroft-Karp, based on breadth-first search, and Pothen-Fan, based on depth-first search) and their variants, combined with the Karp-Sipser initialization algorithm. We report extensive results and insights using three shared-memory platforms (a 48-core AMD Opteron, a 32-core Intel Nehalem, and a 128-processor Cray XMT) on a representative set of real-world and synthetic graphs. To the best of our knowledge, this is the first extensive study of augmentation-based parallel algorithms for bipartite cardinality matching.
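
    A minimal serial sketch of the augmenting-path idea underlying the algorithms named above (a plain DFS augmentation in the spirit of Pothen-Fan, without the multithreading or Karp-Sipser initialization; the small graph is an invented example):

```python
def max_bipartite_matching(adj, n_right):
    """adj[u] lists the right-side vertices adjacent to left vertex u.
    Returns the matching cardinality and the right->left match array."""
    match_right = [-1] * n_right              # right vertex -> matched left vertex

    def try_augment(u, visited):
        for v in adj[u]:
            if v in visited:
                continue
            visited.add(v)
            # v is free, or its current partner can be re-matched elsewhere
            if match_right[v] == -1 or try_augment(match_right[v], visited):
                match_right[v] = u
                return True
        return False

    matched = sum(try_augment(u, set()) for u in range(len(adj)))
    return matched, match_right

adj = [[0, 1], [0], [1, 2]]               # 3 left vertices, 3 right vertices
print(max_bipartite_matching(adj, 3))     # (3, [1, 0, 2]) -- a perfect matching
```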

  14. Continental warming preceding the Palaeocene-Eocene thermal maximum.

    PubMed

    Secord, Ross; Gingerich, Philip D; Lohmann, Kyger C; Macleod, Kenneth G

    2010-10-21

    Marine and continental records show an abrupt negative shift in carbon isotope values at ∼55.8 Myr ago. This carbon isotope excursion (CIE) is consistent with the release of a massive amount of isotopically light carbon into the atmosphere and was associated with a dramatic rise in global temperatures termed the Palaeocene-Eocene thermal maximum (PETM). Greenhouse gases released during the CIE, probably including methane, have often been considered the main cause of PETM warming. However, some evidence from the marine record suggests that warming directly preceded the CIE, raising the possibility that the CIE and PETM may have been linked to earlier warming with different origins. Yet pre-CIE warming is still uncertain. Disentangling the sequence of events before and during the CIE and PETM is important for understanding the causes of, and Earth system responses to, abrupt climate change. Here we show that continental warming of about 5 °C preceded the CIE in the Bighorn Basin, Wyoming. Our evidence, based on oxygen isotopes in mammal teeth (which reflect temperature-sensitive fractionation processes) and other proxies, reveals a marked temperature increase directly below the CIE, and again in the CIE. Pre-CIE warming is also supported by a negative amplification of δ(13)C values in soil carbonates below the CIE. Our results suggest that at least two sources of warming-the earlier of which is unlikely to have been methane-contributed to the PETM.

  15. How Community College African American Students with or without a Father or Male Surrogate Presence at Home Develop Their Personal Identity, Academic Self-Concept, Race Theory, Social Sensitivity, Resiliency, and Vision of Their Own Success and the Influence on Their Academic Achievement

    ERIC Educational Resources Information Center

    Holliday, A'lon Michael

    2011-01-01

    Despite the growing body of research on African American students' academic achievement and the role mothers play in their child's academic development, few studies (Carter, 2008; Fordham, 1988) examined the role fathers play in the development of their child's academic achievement. The primary aim of this study was to examine how…

  16. The Mechanics of Human Achievement

    PubMed Central

    Duckworth, Angela L.; Eichstaedt, Johannes C.; Ungar, Lyle H.

    2015-01-01

    Countless studies have addressed why some individuals achieve more than others. Nevertheless, the psychology of achievement lacks a unifying conceptual framework for synthesizing these empirical insights. We propose organizing achievement-related traits by two possible mechanisms of action: Traits that determine the rate at which an individual learns a skill are talent variables and can be distinguished conceptually from traits that determine the effort an individual puts forth. This approach takes inspiration from Newtonian mechanics: achievement is akin to distance traveled, effort to time, skill to speed, and talent to acceleration. A novel prediction from this model is that individual differences in effort (but not talent) influence achievement (but not skill) more substantially over longer (rather than shorter) time intervals. Conceptualizing skill as the multiplicative product of talent and effort, and achievement as the multiplicative product of skill and effort, advances similar, but less formal, propositions by several important earlier thinkers. PMID:26236393

  17. Mathematics Achievement in High- and Low-Achieving Secondary Schools

    ERIC Educational Resources Information Center

    Mohammadpour, Ebrahim; Shekarchizadeh, Ahmadreza

    2015-01-01

    This paper identifies the amount of variance in mathematics achievement in high- and low-achieving schools that can be explained by school-level factors, while controlling for student-level factors. The data were obtained from 2679 Iranian eighth graders who participated in the 2007 Trends in International Mathematics and Science Study. Of the…

  18. Investigation of Maximum Power Point Tracking for Thermoelectric Generators

    NASA Astrophysics Data System (ADS)

    Phillip, Navneesh; Maganga, Othman; Burnham, Keith J.; Ellis, Mark A.; Robinson, Simon; Dunn, Julian; Rouaud, Cedric

    2013-07-01

    In this paper, a thermoelectric generator (TEG) model is developed as a tool for investigating optimized maximum power point tracking (MPPT) algorithms for TEG systems within automotive exhaust heat energy recovery applications. The model comprises the three main subsystems that make up the TEG system: the heat exchanger, the thermoelectric material, and the power conditioning unit (PCU). In this study, two MPPT algorithms, known as the perturb and observe (P&O) algorithm and extremum seeking control (ESC), are investigated. A synchronous buck-boost converter is implemented as the preferred DC-DC converter topology and, together with the MPPT algorithm, completes the PCU architecture. The process of developing the subsystems is discussed, and the advantage of using the MPPT controller is demonstrated. The simulation results demonstrate that the ESC algorithm, implemented in combination with a synchronous buck-boost converter, achieves favorable power outputs for TEG systems. This advantage stems from the ESC algorithm's greater responsiveness to changes in the system's thermal conditions, and hence in the generated electrical potential difference, compared with the P&O algorithm. The MATLAB/Simulink environment is used for simulation of the TEG system and comparison of the investigated control strategies.
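
    A minimal sketch of the perturb and observe (P&O) algorithm named above: nudge the operating voltage, keep the perturbation direction if power rose, and reverse it if power fell. The linear V-I source below is a stand-in for the TEG, not the paper's Simulink model:

```python
def teg_power(v, v_oc=8.0, r_int=2.0):
    """Power delivered at operating voltage v by an idealized source with
    open-circuit voltage v_oc and internal resistance r_int."""
    i = (v_oc - v) / r_int
    return v * i

def perturb_and_observe(v=1.0, step=0.05, iterations=200):
    p_prev = teg_power(v)
    direction = +1
    for _ in range(iterations):
        v += direction * step
        p = teg_power(v)
        if p < p_prev:          # power dropped: reverse the perturbation direction
            direction = -direction
        p_prev = p
    return v

print(round(perturb_and_observe(), 2))   # settles near v_oc / 2 = 4.0 V
```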

  19. Skeleton Graph Matching vs. Maximum Weight Cliques aorta registration techniques.

    PubMed

    Czajkowska, Joanna; Feinen, C; Grzegorzek, M; Raspe, M; Wickenhöfer, R

    2015-12-01

    Vascular diseases are one of the most challenging health problems in developed countries. Past as well as ongoing research activities often focus on efficient, robust and fast aorta segmentation and registration techniques. In line with these needs, our study targets an abdominal aorta registration method. The investigated algorithms make it possible to efficiently segment and register the abdominal aorta in pre- and post-operative Computed Tomography (CT) data. In more detail, a registration technique using Path Similarity Skeleton Graph Matching (PSSGM), as well as Maximum Weight Cliques (MWCs), is employed to realise the matching based on Computed Tomography data. The presented approaches make it possible to match characteristic voxels belonging to the aorta from different Computed Tomography (CT) series. This is particularly useful in the assessment of abdominal aortic aneurysm treatment, by visualising the correspondence between the pre- and post-operative CT data. The registration results have been tested on a database of 18 contrast-enhanced CT series, where cross-registration analysis has been performed, producing 153 matching examples. All the registration results achieved with our system have been verified by an expert. The analysis carried out has highlighted the advantage of the MWCs technique over the PSSGM method. The verification phase proves the efficiency of the MWCs approach and encourages further development of this method.

  20. Maximum-likelihood estimation of circle parameters via convolution.

    PubMed

    Zelniker, Emanuel E; Clarkson, I Vaughan L

    2006-04-01

    The accurate fitting of a circle to noisy measurements of circumferential points is a much studied problem in the literature. In this paper, we present an interpretation of the maximum-likelihood estimator (MLE) and the Delogne-Kåsa estimator (DKE) for circle-center and radius estimation in terms of convolution on an image which is ideal in a certain sense. We use our convolution-based MLE approach to find good estimates for the parameters of a circle in digital images. These estimates can then be used as preliminary inputs to various other numerical techniques that refine them further to achieve subpixel accuracy. We also investigate the relationship between the convolution of an ideal image with a "phase-coded kernel" (PCK) and the MLE. This is related to the "phase-coded annulus" introduced by Atherton and Kerbyson, who proposed it as one of a number of new convolution kernels for estimating circle center and radius. We show that the PCK is an approximate MLE (AMLE). We compare our AMLE method to the MLE and the DKE, as well as to the Cramér-Rao Lower Bound, in ideal images and in both real and synthetic digital images. PMID:16579374
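
    The Delogne-Kåsa estimator used as a baseline above has a closed form: it is the linear least-squares solution of the algebraic circle equation x^2 + y^2 + a*x + b*y + c = 0. A minimal NumPy sketch, with synthetic data purely for illustration:

      import numpy as np

      def delogne_kasa_fit(x, y):
          """Delogne-Kåsa circle fit: linear least squares on
          x^2 + y^2 + a*x + b*y + c = 0, then recover centre and radius."""
          x = np.asarray(x, dtype=float)
          y = np.asarray(y, dtype=float)
          A = np.column_stack([x, y, np.ones_like(x)])
          rhs = -(x**2 + y**2)
          (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
          cx, cy = -a / 2.0, -b / 2.0
          radius = np.sqrt(cx**2 + cy**2 - c)
          return cx, cy, radius

      # Synthetic example: noisy samples of a circle of radius 5 centred at (2, -1).
      rng = np.random.default_rng(0)
      theta = rng.uniform(0.0, 2.0 * np.pi, 200)
      xs = 2.0 + 5.0 * np.cos(theta) + rng.normal(scale=0.1, size=theta.size)
      ys = -1.0 + 5.0 * np.sin(theta) + rng.normal(scale=0.1, size=theta.size)
      print(delogne_kasa_fit(xs, ys))  # approximately (2.0, -1.0, 5.0)

    Because the DKE minimises an algebraic rather than geometric error, it is generally biased for noisy or partial arcs, which is why estimates of this kind serve as preliminary inputs that the MLE then refines.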

  1. Enhancing quantum sensing sensitivity by a quantum memory

    PubMed Central

    Zaiser, Sebastian; Rendler, Torsten; Jakobi, Ingmar; Wolf, Thomas; Lee, Sang-Yun; Wagner, Samuel; Bergholm, Ville; Schulte-Herbrüggen, Thomas; Neumann, Philipp; Wrachtrup, Jörg

    2016-01-01

    In quantum sensing, precision is typically limited by the maximum time interval over which phase can be accumulated. Memories have been used to enhance this time interval beyond the coherence lifetime and thus gain precision. Here, we demonstrate that by using a quantum memory an increased sensitivity can also be achieved. To this end, we use entanglement in a hybrid spin system comprising a sensing and a memory qubit associated with a single nitrogen-vacancy centre in diamond. With the memory we retain the full quantum state even after coherence decay of the sensor, which enables coherent interaction with distinct weakly coupled nuclear spin qubits. We benchmark the performance of our hybrid quantum system against use of the sensing qubit alone by gradually increasing the entanglement of sensor and memory. We further apply this quantum sensor-memory pair for high-resolution NMR spectroscopy of single 13C nuclear spins. PMID:27506596

  2. Resistor-less charge sensitive amplifier for semiconductor detectors

    NASA Astrophysics Data System (ADS)

    Pelczar, K.; Panas, K.; Zuzel, G.

    2016-11-01

    A new concept of a Charge Sensitive Amplifier without a high-value resistor in the feedback loop is presented. Basic spectroscopic parameters of the amplifier coupled to a coaxial High Purity Germanium (HPGe) detector are discussed. The amplifier signal input is realized with an n-channel J-FET transistor. The feedback capacitor is discharged continuously by a second, forward-biased n-channel J-FET, driven by an RC low-pass filter. Both the analog signal readout (with a standard spectroscopy amplifier and a multi-channel analyzer) and the digital readout (with a flash analog-to-digital converter) were tested. The resolution achieved with the analog and digital readouts was 0.17% and 0.21% Full Width at Half Maximum, respectively, for the registered 60Co 1332.5 keV gamma line.
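
    For reference, the quoted relative resolutions convert directly to absolute energy resolution at the 1332.5 keV line (assuming, as stated, that the percentages are FWHM values relative to the line energy); a short Python check:

      line_kev = 1332.5  # 60Co gamma line energy
      for label, rel_fwhm in [("analog", 0.0017), ("digital", 0.0021)]:
          print(f"{label}: FWHM = {rel_fwhm * line_kev:.2f} keV")
      # analog: FWHM = 2.27 keV
      # digital: FWHM = 2.80 keV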

  3. Enhancing quantum sensing sensitivity by a quantum memory

    NASA Astrophysics Data System (ADS)

    Zaiser, Sebastian; Rendler, Torsten; Jakobi, Ingmar; Wolf, Thomas; Lee, Sang-Yun; Wagner, Samuel; Bergholm, Ville; Schulte-Herbrüggen, Thomas; Neumann, Philipp; Wrachtrup, Jörg

    2016-08-01

    In quantum sensing, precision is typically limited by the maximum time interval over which phase can be accumulated. Memories have been used to enhance this time interval beyond the coherence lifetime and thus gain precision. Here, we demonstrate that by using a quantum memory an increased sensitivity can also be achieved. To this end, we use entanglement in a hybrid spin system comprising a sensing and a memory qubit associated with a single nitrogen-vacancy centre in diamond. With the memory we retain the full quantum state even after coherence decay of the sensor, which enables coherent interaction with distinct weakly coupled nuclear spin qubits. We benchmark the performance of our hybrid quantum system against use of the sensing qubit alone by gradually increasing the entanglement of sensor and memory. We further apply this quantum sensor-memory pair for high-resolution NMR spectroscopy of single 13C nuclear spins.

  4. Maximum Power Training and Plyometrics for Cross-Country Running.

    ERIC Educational Resources Information Center

    Ebben, William P.

    2001-01-01

    Provides a rationale for maximum power training and plyometrics as conditioning strategies for cross-country runners, examining: an evaluation of training methods (strength training and maximum power training and plyometrics); biomechanical and velocity specificity (role in preventing injury); and practical application of maximum power training and…

  5. 14 CFR 27.1527 - Maximum operating altitude.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Maximum operating altitude. 27.1527 Section 27.1527 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AIRCRAFT... § 27.1527 Maximum operating altitude. The maximum altitude up to which operation is allowed, as...

  6. 14 CFR 29.1527 - Maximum operating altitude.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Maximum operating altitude. 29.1527 Section 29.1527 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AIRCRAFT... Limitations § 29.1527 Maximum operating altitude. The maximum altitude up to which operation is allowed,...

  7. 14 CFR 29.1527 - Maximum operating altitude.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Maximum operating altitude. 29.1527 Section 29.1527 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AIRCRAFT... Limitations § 29.1527 Maximum operating altitude. The maximum altitude up to which operation is allowed,...

  8. 14 CFR 27.1527 - Maximum operating altitude.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Maximum operating altitude. 27.1527 Section 27.1527 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AIRCRAFT... § 27.1527 Maximum operating altitude. The maximum altitude up to which operation is allowed, as...

  9. 14 CFR 29.1527 - Maximum operating altitude.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Maximum operating altitude. 29.1527 Section 29.1527 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AIRCRAFT... Limitations § 29.1527 Maximum operating altitude. The maximum altitude up to which operation is allowed,...

  10. 14 CFR 27.1527 - Maximum operating altitude.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Maximum operating altitude. 27.1527 Section 27.1527 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AIRCRAFT... § 27.1527 Maximum operating altitude. The maximum altitude up to which operation is allowed, as...

  11. 14 CFR 29.1527 - Maximum operating altitude.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Maximum operating altitude. 29.1527 Section 29.1527 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AIRCRAFT... Limitations § 29.1527 Maximum operating altitude. The maximum altitude up to which operation is allowed,...

  12. 14 CFR 27.1527 - Maximum operating altitude.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Maximum operating altitude. 27.1527 Section 27.1527 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AIRCRAFT... § 27.1527 Maximum operating altitude. The maximum altitude up to which operation is allowed, as...

  13. 76 FR 71554 - Civil Penalties; Notice of Adjusted Maximum Amounts

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-18

    ... COMMISSION Civil Penalties; Notice of Adjusted Maximum Amounts AGENCY: Consumer Product Safety Commission. ACTION: Notice of adjusted maximum civil penalty amounts. SUMMARY: In 1990, Congress enacted statutory amendments that provided for periodic adjustments to the maximum civil penalty amounts authorized under...

  14. 40 CFR 94.107 - Determination of maximum test speed.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... specified in 40 CFR 1065.510. These data points form the lug curve. It is not necessary to generate the... 40 Protection of Environment 21 2012-07-01 2012-07-01 false Determination of maximum test speed... Determination of maximum test speed. (a) Overview. This section specifies how to determine maximum test...

  15. 14 CFR 25.1505 - Maximum operating limit speed.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Maximum operating limit speed. 25.1505... Operating Limitations § 25.1505 Maximum operating limit speed. The maximum operating limit speed (V MO/M MO airspeed or Mach Number, whichever is critical at a particular altitude) is a speed that may not...

  16. 14 CFR 25.1505 - Maximum operating limit speed.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Maximum operating limit speed. 25.1505... Operating Limitations § 25.1505 Maximum operating limit speed. The maximum operating limit speed (V MO/M MO airspeed or Mach Number, whichever is critical at a particular altitude) is a speed that may not...

  17. 40 CFR 94.107 - Determination of maximum test speed.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... specified in 40 CFR 1065.510. These data points form the lug curve. It is not necessary to generate the... 40 Protection of Environment 20 2014-07-01 2013-07-01 true Determination of maximum test speed. 94... Determination of maximum test speed. (a) Overview. This section specifies how to determine maximum test...

  18. 14 CFR 25.1505 - Maximum operating limit speed.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Maximum operating limit speed. 25.1505... Operating Limitations § 25.1505 Maximum operating limit speed. The maximum operating limit speed (V MO/M MO airspeed or Mach Number, whichever is critical at a particular altitude) is a speed that may not...

  19. 40 CFR 94.107 - Determination of maximum test speed.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... specified in 40 CFR 1065.510. These data points form the lug curve. It is not necessary to generate the... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Determination of maximum test speed... Determination of maximum test speed. (a) Overview. This section specifies how to determine maximum test...

  20. 40 CFR 94.107 - Determination of maximum test speed.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... specified in 40 CFR 1065.510. These data points form the lug curve. It is not necessary to generate the... 40 Protection of Environment 20 2011-07-01 2011-07-01 false Determination of maximum test speed... Determination of maximum test speed. (a) Overview. This section specifies how to determine maximum test...

  1. 14 CFR 25.1505 - Maximum operating limit speed.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Maximum operating limit speed. 25.1505... Operating Limitations § 25.1505 Maximum operating limit speed. The maximum operating limit speed (V MO/M MO airspeed or Mach Number, whichever is critical at a particular altitude) is a speed that may not...

  2. 40 CFR 94.107 - Determination of maximum test speed.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... specified in 40 CFR 1065.510. These data points form the lug curve. It is not necessary to generate the... 40 Protection of Environment 21 2013-07-01 2013-07-01 false Determination of maximum test speed... Determination of maximum test speed. (a) Overview. This section specifies how to determine maximum test...

  3. 14 CFR 25.1505 - Maximum operating limit speed.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Maximum operating limit speed. 25.1505... Operating Limitations § 25.1505 Maximum operating limit speed. The maximum operating limit speed (V MO/M MO airspeed or Mach Number, whichever is critical at a particular altitude) is a speed that may not...

  4. 31 CFR 149.3 - Maximum obligation limitation.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 31 Money and Finance: Treasury 1 2013-07-01 2013-07-01 false Maximum obligation limitation. 149.3 Section 149.3 Money and Finance: Treasury Regulations Relating to Money and Finance MONETARY OFFICES, DEPARTMENT OF THE TREASURY CALCULATION OF MAXIMUM OBLIGATION LIMITATION § 149.3 Maximum obligation...

  5. 13 CFR 107.840 - Maximum term of Financing.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 13 Business Credit and Assistance 1 2011-01-01 2011-01-01 false Maximum term of Financing. 107.840... COMPANIES Financing of Small Businesses by Licensees Structuring Licensee's Financing of An Eligible Small Business: Terms and Conditions of Financing § 107.840 Maximum term of Financing. The maximum term of...

  6. 7 CFR 4290.840 - Maximum term of Financing.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 15 2011-01-01 2011-01-01 false Maximum term of Financing. 4290.840 Section 4290.840... Financing of Enterprises by RBICs Structuring Rbic Financing of Eligible Enterprises-Types of Financings § 4290.840 Maximum term of Financing. The maximum term of any Debt Security must be no longer than...

  7. Maximum principles for second order dynamic equations on time scales

    NASA Astrophysics Data System (ADS)

    Stehlik, Petr; Thompson, Bevan

    2007-07-01

    This paper establishes some new maximum principles for second order dynamic equations on time scales, including: a strong maximum principle; a generalized maximum principle; and a boundary point lemma. The new results include, as special cases, well-known ideas for ordinary differential equations and difference equations.
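
    For orientation, the following is a minimal LaTeX sketch of the classical continuous special case (the time scale $\mathbb{T}=\mathbb{R}$) that such results generalize; this is the textbook one-dimensional strong maximum principle, not the paper's time-scale statement, and it assumes an amsthm theorem environment.

      % Classical strong maximum principle (ODE special case), stated for reference.
      \begin{theorem}[Strong maximum principle, $\mathbb{T}=\mathbb{R}$]
      Let $u \in C^2(a,c)$ satisfy $u''(t) + b(t)\,u'(t) \ge 0$ on $(a,c)$,
      with $b$ locally bounded. If $u$ attains its maximum $M$ at an interior
      point of $(a,c)$, then $u \equiv M$ on $(a,c)$.
      \end{theorem}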

  8. Maximum Likelihood Estimation of Multivariate Polyserial and Polychoric Correlation Coefficients.

    ERIC Educational Resources Information Center

    Poon, Wai-Yin; Lee, Sik-Yum

    1987-01-01

    Reparameterization is used to find the maximum likelihood estimates of parameters in a multivariate model having some component variable observable only in polychotomous form. Maximum likelihood estimates are found by a Fletcher Powell algorithm. In addition, the partition maximum likelihood method is proposed and illustrated. (Author/GDC)
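
    The abstract is terse, so for orientation only: a common simplified relative of the full multivariate MLE is the two-step polychoric estimate for a single pair of ordinal variables, in which thresholds are fixed from the marginal proportions and the likelihood is then maximized over the one correlation parameter. The Python sketch below illustrates that two-step scheme; it is not the reparameterized joint MLE, the Fletcher-Powell optimization, or the partition maximum likelihood method described in the paper.

      import numpy as np
      from scipy.stats import norm, multivariate_normal
      from scipy.optimize import minimize_scalar

      def polychoric_two_step(table):
          """Two-step polychoric correlation sketch for an r x c contingency table
          of two ordinal variables: thresholds from marginals, then 1-D MLE in rho."""
          table = np.asarray(table, dtype=float)
          n = table.sum()
          # Step 1: thresholds from cumulative marginal proportions (+-8 ~ infinity).
          row_cum = np.concatenate(([1e-12], np.cumsum(table.sum(axis=1)) / n))
          col_cum = np.concatenate(([1e-12], np.cumsum(table.sum(axis=0)) / n))
          a = np.clip(norm.ppf(row_cum), -8.0, 8.0)
          b = np.clip(norm.ppf(col_cum), -8.0, 8.0)

          # Step 2: maximize the bivariate-normal cell likelihood over rho.
          def neg_loglik(rho):
              cov = [[1.0, rho], [rho, 1.0]]
              cdf = lambda x, y: multivariate_normal.cdf([x, y], mean=[0.0, 0.0], cov=cov)
              ll = 0.0
              for i in range(table.shape[0]):
                  for j in range(table.shape[1]):
                      p = (cdf(a[i + 1], b[j + 1]) - cdf(a[i], b[j + 1])
                           - cdf(a[i + 1], b[j]) + cdf(a[i], b[j]))
                      ll += table[i, j] * np.log(max(p, 1e-12))
              return -ll

          res = minimize_scalar(neg_loglik, bounds=(-0.99, 0.99), method="bounded")
          return res.x

      # Example usage with a small 3x3 table of observed counts:
      # rho_hat = polychoric_two_step([[20, 10, 5], [8, 15, 12], [2, 9, 19]])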

  9. 16 CFR 1505.8 - Maximum acceptable material temperatures.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Maximum acceptable material temperatures... ARTICLES INTENDED FOR USE BY CHILDREN Regulations § 1505.8 Maximum acceptable material temperatures. The maximum acceptable material temperatures for electrically operated toys shall be as follows (Classes...

  10. 16 CFR 1505.7 - Maximum acceptable surface temperatures.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 16 Commercial Practices 2 2014-01-01 2014-01-01 false Maximum acceptable surface temperatures... ARTICLES INTENDED FOR USE BY CHILDREN Regulations § 1505.7 Maximum acceptable surface temperatures. The maximum acceptable surface temperatures for electrically operated toys shall be as follows: Surface...

  11. 16 CFR 1505.8 - Maximum acceptable material temperatures.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 16 Commercial Practices 2 2013-01-01 2013-01-01 false Maximum acceptable material temperatures... ARTICLES INTENDED FOR USE BY CHILDREN Regulations § 1505.8 Maximum acceptable material temperatures. The maximum acceptable material temperatures for electrically operated toys shall be as follows (Classes...

  12. 16 CFR 1505.7 - Maximum acceptable surface temperatures.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 16 Commercial Practices 2 2011-01-01 2011-01-01 false Maximum acceptable surface temperatures... ARTICLES INTENDED FOR USE BY CHILDREN Regulations § 1505.7 Maximum acceptable surface temperatures. The maximum acceptable surface temperatures for electrically operated toys shall be as follows: Surface...

  13. 16 CFR 1505.8 - Maximum acceptable material temperatures.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 16 Commercial Practices 2 2012-01-01 2012-01-01 false Maximum acceptable material temperatures... ARTICLES INTENDED FOR USE BY CHILDREN Regulations § 1505.8 Maximum acceptable material temperatures. The maximum acceptable material temperatures for electrically operated toys shall be as follows (Classes...

  14. 16 CFR 1505.8 - Maximum acceptable material temperatures.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 16 Commercial Practices 2 2014-01-01 2014-01-01 false Maximum acceptable material temperatures... ARTICLES INTENDED FOR USE BY CHILDREN Regulations § 1505.8 Maximum acceptable material temperatures. The maximum acceptable material temperatures for electrically operated toys shall be as follows (Classes...

  15. 16 CFR 1505.7 - Maximum acceptable surface temperatures.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 16 Commercial Practices 2 2012-01-01 2012-01-01 false Maximum acceptable surface temperatures... ARTICLES INTENDED FOR USE BY CHILDREN Regulations § 1505.7 Maximum acceptable surface temperatures. The maximum acceptable surface temperatures for electrically operated toys shall be as follows: Surface...

  16. 16 CFR 1505.7 - Maximum acceptable surface temperatures.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 16 Commercial Practices 2 2013-01-01 2013-01-01 false Maximum acceptable surface temperatures... ARTICLES INTENDED FOR USE BY CHILDREN Regulations § 1505.7 Maximum acceptable surface temperatures. The maximum acceptable surface temperatures for electrically operated toys shall be as follows: Surface...

  17. 16 CFR 1505.8 - Maximum acceptable material temperatures.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 16 Commercial Practices 2 2011-01-01 2011-01-01 false Maximum acceptable material temperatures... ARTICLES INTENDED FOR USE BY CHILDREN Regulations § 1505.8 Maximum acceptable material temperatures. The maximum acceptable material temperatures for electrically operated toys shall be as follows (Classes...

  18. 20 CFR 617.14 - Maximum amount of TRA.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Maximum amount of TRA. 617.14 Section 617.14... FOR WORKERS UNDER THE TRADE ACT OF 1974 Trade Readjustment Allowances (TRA) § 617.14 Maximum amount of TRA. (a) General rule. Except as provided under paragraph (b) of this section, the maximum amount...

  19. 20 CFR 617.14 - Maximum amount of TRA.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 3 2014-04-01 2014-04-01 false Maximum amount of TRA. 617.14 Section 617.14... FOR WORKERS UNDER THE TRADE ACT OF 1974 Trade Readjustment Allowances (TRA) § 617.14 Maximum amount of TRA. (a) General rule. Except as provided under paragraph (b) of this section, the maximum amount...

  20. 20 CFR 617.14 - Maximum amount of TRA.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 3 2013-04-01 2013-04-01 false Maximum amount of TRA. 617.14 Section 617.14... FOR WORKERS UNDER THE TRADE ACT OF 1974 Trade Readjustment Allowances (TRA) § 617.14 Maximum amount of TRA. (a) General rule. Except as provided under paragraph (b) of this section, the maximum amount...

  1. 20 CFR 617.14 - Maximum amount of TRA.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 3 2012-04-01 2012-04-01 false Maximum amount of TRA. 617.14 Section 617.14... FOR WORKERS UNDER THE TRADE ACT OF 1974 Trade Readjustment Allowances (TRA) § 617.14 Maximum amount of TRA. (a) General rule. Except as provided under paragraph (b) of this section, the maximum amount...

  2. 20 CFR 617.14 - Maximum amount of TRA.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 3 2011-04-01 2011-04-01 false Maximum amount of TRA. 617.14 Section 617.14... FOR WORKERS UNDER THE TRADE ACT OF 1974 Trade Readjustment Allowances (TRA) § 617.14 Maximum amount of TRA. (a) General rule. Except as provided under paragraph (b) of this section, the maximum amount...

  3. 33 CFR 183.35 - Maximum weight capacity: Outboard boats.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... difference between its maximum displacement and boat weight. (b) For the purposes of paragraph (a) of this section: (1) “Maximum displacement” is the weight of the volume of water displaced by the boat at its maximum level immersion in calm water without water coming aboard except for water coming through...

  4. 33 CFR 183.35 - Maximum weight capacity: Outboard boats.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... difference between its maximum displacement and boat weight. (b) For the purposes of paragraph (a) of this section: (1) “Maximum displacement” is the weight of the volume of water displaced by the boat at its maximum level immersion in calm water without water coming aboard except for water coming through...

  5. 33 CFR 183.35 - Maximum weight capacity: Outboard boats.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... difference between its maximum displacement and boat weight. (b) For the purposes of paragraph (a) of this section: (1) “Maximum displacement” is the weight of the volume of water displaced by the boat at its maximum level immersion in calm water without water coming aboard except for water coming through...

  6. 33 CFR 183.35 - Maximum weight capacity: Outboard boats.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... difference between its maximum displacement and boat weight. (b) For the purposes of paragraph (a) of this section: (1) “Maximum displacement” is the weight of the volume of water displaced by the boat at its maximum level immersion in calm water without water coming aboard except for water coming through...

  7. 33 CFR 183.35 - Maximum weight capacity: Outboard boats.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... difference between its maximum displacement and boat weight. (b) For the purposes of paragraph (a) of this section: (1) “Maximum displacement” is the weight of the volume of water displaced by the boat at its maximum level immersion in calm water without water coming aboard except for water coming through...

  8. 49 CFR 230.27 - Maximum shearing strength of rivets.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION STEAM LOCOMOTIVE INSPECTION AND MAINTENANCE STANDARDS Boilers and Appurtenances Strength of Materials § 230.27 Maximum shearing strength of rivets. The maximum shearing strength... 49 Transportation 4 2010-10-01 2010-10-01 false Maximum shearing strength of rivets....

  9. 49 CFR 230.27 - Maximum shearing strength of rivets.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION STEAM LOCOMOTIVE INSPECTION AND MAINTENANCE STANDARDS Boilers and Appurtenances Strength of Materials § 230.27 Maximum shearing strength of rivets. The maximum shearing strength... 49 Transportation 4 2011-10-01 2011-10-01 false Maximum shearing strength of rivets....

  10. 49 CFR 230.27 - Maximum shearing strength of rivets.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION STEAM LOCOMOTIVE INSPECTION AND MAINTENANCE STANDARDS Boilers and Appurtenances Strength of Materials § 230.27 Maximum shearing strength of rivets. The maximum shearing strength... 49 Transportation 4 2012-10-01 2012-10-01 false Maximum shearing strength of rivets....

  11. 49 CFR 230.27 - Maximum shearing strength of rivets.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION STEAM LOCOMOTIVE INSPECTION AND MAINTENANCE STANDARDS Boilers and Appurtenances Strength of Materials § 230.27 Maximum shearing strength of rivets. The maximum shearing strength... 49 Transportation 4 2013-10-01 2013-10-01 false Maximum shearing strength of rivets....

  12. 49 CFR 230.27 - Maximum shearing strength of rivets.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION STEAM LOCOMOTIVE INSPECTION AND MAINTENANCE STANDARDS Boilers and Appurtenances Strength of Materials § 230.27 Maximum shearing strength of rivets. The maximum shearing strength... 49 Transportation 4 2014-10-01 2014-10-01 false Maximum shearing strength of rivets....

  13. 30 CFR 56.19062 - Maximum acceleration and deceleration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Maximum acceleration and deceleration. 56.19062... Hoisting Hoisting Procedures § 56.19062 Maximum acceleration and deceleration. Maximum normal operating acceleration and deceleration shall not exceed 6 feet per second per second. During emergency braking,...

  14. 30 CFR 57.19062 - Maximum acceleration and deceleration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Maximum acceleration and deceleration. 57.19062... Hoisting Hoisting Procedures § 57.19062 Maximum acceleration and deceleration. Maximum normal operating acceleration and deceleration shall not exceed 6 feet per second per second. During emergency braking,...

  15. 30 CFR 57.19062 - Maximum acceleration and deceleration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Maximum acceleration and deceleration. 57.19062... Hoisting Hoisting Procedures § 57.19062 Maximum acceleration and deceleration. Maximum normal operating acceleration and deceleration shall not exceed 6 feet per second per second. During emergency braking,...

  16. 30 CFR 57.19062 - Maximum acceleration and deceleration.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Maximum acceleration and deceleration. 57.19062... Hoisting Hoisting Procedures § 57.19062 Maximum acceleration and deceleration. Maximum normal operating acceleration and deceleration shall not exceed 6 feet per second per second. During emergency braking,...

  17. 30 CFR 56.19062 - Maximum acceleration and deceleration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Maximum acceleration and deceleration. 56.19062... Hoisting Hoisting Procedures § 56.19062 Maximum acceleration and deceleration. Maximum normal operating acceleration and deceleration shall not exceed 6 feet per second per second. During emergency braking,...

  18. 30 CFR 56.19062 - Maximum acceleration and deceleration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Maximum acceleration and deceleration. 56.19062... Hoisting Hoisting Procedures § 56.19062 Maximum acceleration and deceleration. Maximum normal operating acceleration and deceleration shall not exceed 6 feet per second per second. During emergency braking,...

  19. 30 CFR 57.19062 - Maximum acceleration and deceleration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Maximum acceleration and deceleration. 57.19062... Hoisting Hoisting Procedures § 57.19062 Maximum acceleration and deceleration. Maximum normal operating acceleration and deceleration shall not exceed 6 feet per second per second. During emergency braking,...

  20. 30 CFR 56.19062 - Maximum acceleration and deceleration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Maximum acceleration and deceleration. 56.19062... Hoisting Hoisting Procedures § 56.19062 Maximum acceleration and deceleration. Maximum normal operating acceleration and deceleration shall not exceed 6 feet per second per second. During emergency braking,...