Sample records for process requiring constant

  1. A Change for Chemistry

    ERIC Educational Resources Information Center

    Christian, Brittany; Yezierski, Ellen

    2012-01-01

    Science is always changing. Its very nature requires that scientists constantly revise theories to make sense of new observations. As they learn science, students are also constantly revising how they make sense of their observations, which requires comparisons with what they already know to process new information. A teacher can take advantage of…

  2. Mathematical Analysis of High-Temperature Co-electrolysis of CO2 and O2 Production in a Closed-Loop Atmosphere Revitalization System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michael G. McKellar; Manohar S. Sohal; Lila Mulloth

    2010-03-01

    NASA has been evaluating two closed-loop atmosphere revitalization architectures based on Sabatier and Bosch carbon dioxide, CO2, reduction technologies. The CO2 and steam, H2O, co-electrolysis process is another option that NASA has investigated. Utilizing recent advances in the fuel cell technology sector, the Idaho National Laboratory, INL, has developed a CO2 and H2O co-electrolysis process to produce oxygen and syngas (carbon monoxide, CO and hydrogen, H2 mixture) for terrestrial (energy production) application. The technology is a combined process that involves steam electrolysis, CO2 electrolysis, and the reverse water gas shift (RWGS) reaction. A number of process models have been developed and analyzed to determine the theoretical power required to recover oxygen, O2, in each case. These models include the current Sabatier and Bosch technologies and combinations of those processes with high-temperature co-electrolysis. The cases of constant CO2 supply and constant O2 production were evaluated. In addition, a process model of the hydrogenation process with co-electrolysis was developed and compared. Sabatier processes require the least amount of energy input per kg of oxygen produced. If co-electrolysis replaces solid polymer electrolyte (SPE) electrolysis within the Sabatier architecture, the power requirement is reduced by over 10%, but only if heat recuperation is used. Sabatier processes, however, require external water to achieve the lower power results. Under conditions of constant incoming carbon dioxide flow, the Sabatier architectures require more power than the other architectures. The Bosch, Boudouard with co-electrolysis, and the hydrogenation with co-electrolysis processes require little or no external water. The Bosch and hydrogenation processes produce water within their reactors, which aids in reducing the power requirement for electrolysis. The Boudouard with co-electrolysis process has a higher electrolysis power requirement because carbon dioxide is split instead of water, which has a lower heat of formation. Hydrogenation with co-electrolysis offers the best overall power performance for two reasons: it requires no external water, and it produces its own water, which reduces the power requirement for co-electrolysis.

  3. Developing Software Requirements for a Knowledge Management System That Coordinates Training Programs with Business Processes and Policies in Large Organizations

    ERIC Educational Resources Information Center

    Kiper, J. Richard

    2013-01-01

    For large organizations, updating instructional programs presents a challenge to keep abreast of constantly changing business processes and policies. Each time a process or policy changes, significant resources are required to locate and modify the training materials that convey the new content. Moreover, without the ability to track learning…

  4. A flexible insulator of a hollow SiO2 sphere and polyimide hybrid for flexible OLEDs.

    PubMed

    Kim, Min Kyu; Kim, Dong Won; Shin, Dong Wook; Seo, Sang Joon; Chung, Ho Kyoon; Yoo, Ji Beom

    2015-01-28

    The fabrication of interlayer dielectrics (ILDs) in flexible organic light-emitting diodes (OLEDs) not only requires flexible materials with a low dielectric constant, but also ones that possess the electrical, thermal, chemical, and mechanical properties required for optimal device performance. Porous polymer-silica hybrid materials were prepared to satisfy these requirements. Hollow SiO2 spheres were synthesized using atomic layer deposition (ALD) and a thermal calcination process. The hybrid film, which consists of hollow SiO2 spheres and polyimide, shows a low dielectric constant of 1.98 and excellent thermal stability up to 500 °C. After the bending test for 50 000 cycles, the porous hybrid film exhibits no degradation in its dielectric constant or leakage current. These results indicate that the hybrid film made up of hollow SiO2 spheres and polyimide (PI) is useful as a flexible insulator with a low dielectric constant and high thermal stability for flexible OLEDs.

  5. General RMP Guidance - Chapter 5: Management System

    EPA Pesticide Factsheets

    If you have at least one Program 2 or Program 3 process, you are required to develop a management system to oversee the implementation of the risk management program elements, and designate responsibility for making process safety a constant priority.

  6. The effects of alcohol on the driver's visual information processing.

    DOT National Transportation Integrated Search

    1980-09-01

    Twenty-seven male subjects were tested in a driving simulator to study the effects of alcohol on visual information processing and allocation of attention. Subjects were required to control heading angle, maintain a constant speed, search for critica...

  7. Theoretical study of thermodynamic properties and reaction rates of importance in the high-speed research program

    NASA Technical Reports Server (NTRS)

    Langhoff, Stephen; Bauschlicher, Charles; Jaffe, Richard

    1992-01-01

    One of the primary goals of NASA's high-speed research program is to determine the feasibility of designing an environmentally safe commercial supersonic transport airplane. The largest environmental concern is focused on the amount of ozone destroying nitrogen oxides (NO(x)) that would be injected into the lower stratosphere during the cruise portion of the flight. The limitations placed on NO(x) emission require more than an order of magnitude reduction over current engine designs. To develop strategies to meet this goal requires first gaining a fundamental understanding of the combustion chemistry. To accurately model the combustor requires a computational fluid dynamics approach that includes both turbulence and chemistry. Since many of the important chemical processes in this regime involve highly reactive radicals, an experimental determination of the required thermodynamic data and rate constants is often very difficult. Unlike experimental approaches, theoretical methods are as applicable to highly reactive species as stable ones. Also our approximation of treating the dynamics classically becomes more accurate with increasing temperature. In this article we review recent progress in generating thermodynamic properties and rate constants that are required to understand NO(x) formation in the combustion process. We also describe our one-dimensional modeling efforts to validate an NH3 combustion reaction mechanism. We have been working in collaboration with researchers at LeRC, to ensure that our theoretical work is focused on the most important thermodynamic quantities and rate constants required in the chemical data base.

  8. Cognate Facilitation in Sentence Context--Translation Production by Interpreting Trainees and Non-Interpreting Trilinguals

    ERIC Educational Resources Information Center

    Lijewska, Agnieszka; Chmiel, Agnieszka

    2015-01-01

    Conference interpreters form a special case of language users because the simultaneous interpretation practice requires very specific lexical processing. Word comprehension and production in respective languages is performed under strict time constraints and requires constant activation of the involved languages. The present experiment aimed at…

  9. Robotic Variable Polarity Plasma Arc (VPPA) Welding

    NASA Technical Reports Server (NTRS)

    Jaffery, Waris S.

    1993-01-01

    The need for automated plasma welding was identified in the early stages of the Space Station Freedom Program (SSFP) because it requires approximately 1.3 miles of welding for assembly. As a result of the Variable Polarity Plasma Arc Welding (VPPAW) process's ability to make virtually defect-free welds in aluminum, it was chosen to fulfill the welding needs. Space Station Freedom will be constructed of 2219 aluminum utilizing the computer controlled VPPAW process. The 'Node Radial Docking Port', with its saddle-shaped weld path, has a constantly changing surface angle over 360 deg of the 282 inch weld. The automated robotic VPPAW process requires eight axes of motion (six axes of robot motion and two axes of positioner movement). The robot control system is programmed to maintain the Torch Center Point (TCP) orientation perpendicular to the part while the part positioner is tilted and rotated to maintain the vertical-up orientation required by the VPPAW process. The speeds of the robot and the positioner are coordinated to maintain a constant speed between the part and the torch. A laser-based vision sensor system has also been integrated to track the seam and map the surface of the profile during welding.

  10. Robotic Variable Polarity Plasma Arc (VPPA) welding

    NASA Astrophysics Data System (ADS)

    Jaffery, Waris S.

    1993-02-01

    The need for automated plasma welding was identified in the early stages of the Space Station Freedom Program (SSFP) because it requires approximately 1.3 miles of welding for assembly. As a result of the Variable Polarity Plasma Arc Welding (VPPAW) process's ability to make virtually defect-free welds in aluminum, it was chosen to fulfill the welding needs. Space Station Freedom will be constructed of 2219 aluminum utilizing the computer controlled VPPAW process. The 'Node Radial Docking Port', with its saddle-shaped weld path, has a constantly changing surface angle over 360 deg of the 282 inch weld. The automated robotic VPPAW process requires eight axes of motion (six axes of robot motion and two axes of positioner movement). The robot control system is programmed to maintain the Torch Center Point (TCP) orientation perpendicular to the part while the part positioner is tilted and rotated to maintain the vertical-up orientation required by the VPPAW process. The speeds of the robot and the positioner are coordinated to maintain a constant speed between the part and the torch. A laser-based vision sensor system has also been integrated to track the seam and map the surface of the profile during welding.

  11. Nurses' decision-making process in cases of physical restraint in acute elderly care: a qualitative study.

    PubMed

    Goethals, S; Dierckx de Casterlé, B; Gastmans, C

    2013-05-01

    The increasing vulnerability of patients in acute elderly care requires constant critical reflection in ethically charged situations such as when employing physical restraint. Qualitative evidence concerning nurses' decision making in cases of physical restraint is limited and fragmented. A thorough understanding of nurses' decision-making process could be useful to understand how nurses reason and make decisions in ethically laden situations. The aims of this study were to explore and describe nurses' decision-making process in cases of physical restraint. We used a qualitative interview design inspired by the Grounded Theory approach. Data analysis was guided by the Qualitative Analysis Guide of Leuven. Twelve hospitals geographically spread throughout the five provinces of Flanders, Belgium. Twenty-one acute geriatric nurses interviewed between October 2009 and April 2011 were purposively and theoretically selected, with the aim of including nurses having a variety of characteristics and experiences concerning decisions on using physical restraint. In cases of physical restraint in acute elderly care, nurses' decision making was never experienced as a fixed decision but rather as a series of decisions. Decision making was mostly reasoned upon and based on rational arguments; however, decisions were also made routinely and intuitively. Some nurses felt very certain about their decisions, while others experienced feelings of uncertainty regarding their decisions. Nurses' decision making is an independent process that requires nurses to obtain a good picture of the patient, to be constantly observant, and to assess and reassess the patient's situation. Coming to thoughtful and individualized decisions requires major commitment and constant critical reflection. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. 40 CFR 63.1427 - Process vent requirements for processes using extended cookout as an epoxide emission reduction...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... = Concentration of epoxide in the reactor liquid at the beginning of the time period, weight percent. k = Reaction rate constant, 1/hr. t = Time, hours. Note: This equation assumes a first order reaction with respect... process knowledge, reaction kinetics, and engineering knowledge, in accordance with paragraph (a)(2)(i) of...
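    The elided expression is consistent with simple first-order decay; a generic reconstruction (not the verbatim CFR formula) using the variables defined in the record is

        C_t = C_0 \, e^{-k t}

    where C_0 is the epoxide concentration at the beginning of the time period (weight percent), k the reaction rate constant (1/hr), and t the time (hours).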

  13. 40 CFR 63.1427 - Process vent requirements for processes using extended cookout as an epoxide emission reduction...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... reactor liquid at the beginning of the time period, weight percent. k = Reaction rate constant, 1/hr. t = Time, hours. Note: This equation assumes a first order reaction with respect to epoxide concentration... measuring the concentration of the unreacted epoxide, or by using process knowledge, reaction kinetics, and...

  14. 40 CFR 63.1427 - Process vent requirements for processes using extended cookout as an epoxide emission reduction...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... = Concentration of epoxide in the reactor liquid at the beginning of the time period, weight percent. k = Reaction rate constant, 1/hr. t = Time, hours. Note: This equation assumes a first order reaction with respect... process knowledge, reaction kinetics, and engineering knowledge, in accordance with paragraph (a)(2)(i) of...

  15. Isolator-combustor interaction in a dual-mode scramjet engine

    NASA Technical Reports Server (NTRS)

    Pratt, David T.; Heiser, William H.

    1993-01-01

    A constant-area diffuser, or 'isolator', is required in both the ramjet and scramjet operating regimes of a dual-mode engine configuration in order to prevent unstarts due to pressure feedback from the combustor. Because the nature of the combustor-isolator interaction is different in the two operational modes, however, attention is given here to the use of thermal versus kinetic energy coordinates to visualize these interaction processes. The results of the analysis indicate that the isolator requires severe flow separation at combustor entry, and that its entropy-generating characteristics are more severe than those of an equivalent oblique shock. A constant-area diffuser is only marginally able to contain the equivalent normal shock required for subsonic combustor entry.

  16. Integrated tools and techniques applied to the TES ground data system

    NASA Technical Reports Server (NTRS)

    Morrison, B. A.

    2000-01-01

    The author of this paper discusses the selection of CASE tools, the decision-making process, requirements tracking, and a review mechanism that together lead to a highly integrated approach to software development, one that must cope with the constant pressure to change software requirements and design associated with research and development.

  17. Determination of the distribution constants of aromatic compounds and steroids in biphasic micellar phosphonium ionic liquid/aqueous buffer systems by capillary electrokinetic chromatography.

    PubMed

    Lokajová, Jana; Railila, Annika; King, Alistair W T; Wiedmer, Susanne K

    2013-09-20

    The distribution constants of some analytes, closely connected to the petrochemical industry, between an aqueous phase and a phosphonium ionic liquid phase, were determined by ionic liquid micellar electrokinetic chromatography (MEKC). The phosphonium ionic liquids studied were the water-soluble tributyl(tetradecyl)phosphonium with chloride or acetate as the counter ion. The retention factors were calculated and used for determination of the distribution constants. Calculating the retention factors required the electrophoretic mobilities of the ionic liquids; these were obtained by an iterative process based on a homologous series of alkyl benzoates. Calculation of the distribution constants required information on the phase ratio of the systems; for this, the critical micelle concentrations (CMC) of the ionic liquids were needed. The CMCs were calculated using a method based on PeakMaster simulations, using the electrophoretic mobilities of system peaks. The resulting distribution constants for the neutral analytes between the ionic liquid and the aqueous (buffer) phase were compared with octanol-water partitioning coefficients. The results indicate that factors other than simple hydrophobic interactions affect the distribution of analytes between the phases. Copyright © 2013 Elsevier B.V. All rights reserved.
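    For orientation, MEKC retention factors and distribution constants are conventionally related by the following textbook expressions (given here as general background; the paper's exact working equations are not reproduced in the record):

        k = \frac{t_R - t_0}{t_0 \,(1 - t_R / t_{mc})}, \qquad K_D = \frac{k}{\beta}

    where t_0, t_R, and t_mc are the migration times of an unretained marker, the analyte, and the ionic liquid pseudo-phase, and \beta is the phase ratio, which is why the CMC of the ionic liquid is needed.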

  18. Strawberry puree processed by thermal, high pressure, or power ultrasound: Process energy requirements and quality modeling during storage.

    PubMed

    Sulaiman, Alifdalino; Farid, Mohammed; Silva, Filipa Vm

    2017-06-01

    Strawberry puree was processed for 15 min using thermal (65 ℃), high-pressure processing (600 MPa, 48 ℃), and ultrasound (24 kHz, 1.3 W/g, 33 ℃). These conditions were selected based on similar polyphenoloxidase inactivation (11%-18%). The specific energies required for the above-mentioned thermal, high-pressure processing, and power ultrasound processes were 240, 291, and 1233 kJ/kg, respectively. Then, the processed strawberry was stored at 3 ℃ and room temperature for 30 days. The constant pH (3.38 ± 0.03) and soluble solids content (9.03 ± 0.25 °Brix) during storage indicated microbiological stability. Polyphenoloxidase did not reactivate during storage. The high-pressure processing and ultrasound treatments retained the antioxidant activity (70%-74%) better than the thermal process (60%), and high-pressure processing was the best treatment after 30 days of ambient storage to preserve antioxidant activity. Puree treated with ultrasound showed better color retention after processing and after ambient storage than the other preservation methods. For the three treatments, the changes of antioxidant activity and total color difference during storage were described by the fractional conversion model with rate constants k ranging between 0.03-0.09 and 0.06-0.22 day⁻¹, respectively. In summary, high-pressure processing and thermal processes required much less energy than ultrasound for the same polyphenoloxidase inactivation in strawberry. While high-pressure processing better retained the antioxidant activity of the strawberry puree during storage, the ultrasound treatment was better in terms of color retention.
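    The fractional conversion model cited above is commonly written as a first-order approach to a plateau (symbols follow common usage rather than the paper's notation):

        C(t) = C_\infty + (C_0 - C_\infty)\, e^{-k t}

    with C_0 the initial value of the attribute (antioxidant activity or total color difference), C_\infty its plateau value during storage, and k the rate constant in day⁻¹.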

  19. Coefficient of performance for a low-dissipation Carnot-like refrigerator with nonadiabatic dissipation

    NASA Astrophysics Data System (ADS)

    Hu, Yong; Wu, Feifei; Ma, Yongli; He, Jizhou; Wang, Jianhui; Hernández, A. Calvo; Roco, J. M. M.

    2013-12-01

    We study the coefficient of performance (COP) and its bounds for a Carnot-like refrigerator working between two heat reservoirs at constant temperatures Th and Tc, under two optimization criteria χ and Ω. In view of the fact that an “adiabatic” process usually takes finite time and is nonisentropic, the nonadiabatic dissipation and the finite time required for the adiabatic processes are taken into account by assuming low dissipation. For given optimization criteria, we find that the lower and upper bounds of the COP are the same as the corresponding ones obtained from the previous idealized models where any adiabatic process is undergone instantaneously with constant entropy. To describe some particular models with very fast adiabatic transitions, we also consider the influence of the nonadiabatic dissipation on the bounds of the COP, under the assumption that the irreversible entropy production in the adiabatic process is constant and independent of time. Our theoretical predictions match the observed COPs of real refrigerators more closely than the ones derived in the previous models, providing a strong argument in favor of our approach.

  20. Light and melatonin schedule neuronal differentiation in the habenular nuclei

    PubMed Central

    de Borsetti, Nancy Hernandez; Dean, Benjamin J.; Bain, Emily J.; Clanton, Joshua A.; Taylor, Robert W.; Gamse, Joshua T.

    2011-01-01

    The formation of the embryonic brain requires the production, migration, and differentiation of neurons to be timely and coordinated. Coupling to the photoperiod could synchronize the development of neurons in the embryo. Here, we consider the effect of light and melatonin on the differentiation of embryonic neurons in zebrafish. We examine the formation of neurons in the habenular nuclei, a paired structure found near the dorsal surface of the brain adjacent to the pineal organ. Keeping embryos in constant darkness causes a temporary accumulation of habenular precursor cells, resulting in late differentiation and a long-lasting reduction in neuronal processes (neuropil). Because constant darkness delays the accumulation of the neuroendocrine hormone melatonin in embryos, we looked for a link between melatonin signaling and habenular neurogenesis. A pharmacological block of melatonin receptors delays neurogenesis and reduces neuropil similarly to constant darkness, while addition of melatonin to embryos in constant darkness restores timely neurogenesis and neuropil. We conclude that light and melatonin schedule the differentiation of neurons and the formation of neural processes in the habenular nuclei. PMID:21840306

  1. Constant-Pressure Combustion Charts Including Effects of Diluent Addition

    NASA Technical Reports Server (NTRS)

    Turner, L Richard; Bogart, Donald

    1949-01-01

    Charts are presented for the calculation of (a) the final temperatures and the temperature changes involved in constant-pressure combustion processes of air and in products of combustion of air and hydrocarbon fuels, and (b) the quantity of hydrocarbon fuels required in order to attain a specified combustion temperature when water, alcohol, water-alcohol mixtures, liquid ammonia, liquid carbon dioxide, liquid nitrogen, liquid oxygen, or their mixtures are added to air as diluents or refrigerants. The ideal combustion process and combustion with incomplete heat release from the primary fuel and from combustible diluents are considered. The effect of preheating the mixture of air and diluents and the effect of an initial water-vapor content in the combustion air on the required fuel quantity are also included. The charts are applicable only to processes in which the final mixture is leaner than stoichiometric and at temperatures where dissociation is unimportant. A chart is also included to permit the calculation of the stoichiometric ratio of hydrocarbon fuel to air with diluent addition. The use of the charts is illustrated by numerical examples.

  2. Color image processing and vision system for an automated laser paint-stripping system

    NASA Astrophysics Data System (ADS)

    Hickey, John M., III; Hise, Lawson

    1994-10-01

    Color image processing in machine vision systems has not gained general acceptance. Most machine vision systems use images that are shades of gray. The Laser Automated Decoating System (LADS) required a vision system which could discriminate between substrates of various colors and textures and paints ranging from semi-gloss grays to high gloss red, white and blue (Air Force Thunderbirds). The changing lighting levels produced by the pulsed CO2 laser mandated a vision system that did not require a constant color temperature lighting for reliable image analysis.

  3. Velocity storage contribution to vestibular self-motion perception in healthy human subjects.

    PubMed

    Bertolini, G; Ramat, S; Laurens, J; Bockisch, C J; Marti, S; Straumann, D; Palla, A

    2011-01-01

    Self-motion perception after a sudden stop from a sustained rotation in darkness lasts approximately as long as reflexive eye movements. We hypothesized that, after an angular velocity step, self-motion perception and reflexive eye movements are driven by the same vestibular pathways. In 16 healthy subjects (25-71 years of age), perceived rotational velocity (PRV) and the vestibulo-ocular reflex (rVOR) after sudden decelerations (90°/s²) from constant-velocity (90°/s) earth-vertical axis rotations were simultaneously measured (PRV reported by hand-lever turning; rVOR recorded by search coils). Subjects were upright (yaw) or 90° left-ear-down (pitch). After both yaw and pitch decelerations, PRV rose rapidly and showed a plateau before decaying. In contrast, slow-phase eye velocity (SPV) decayed immediately after the initial increase. SPV and PRV were fitted with the sum of two exponentials: one time constant accounting for the semicircular canal (SCC) dynamics and one time constant accounting for a central process, known as the velocity storage mechanism (VSM). Parameters were constrained by requiring equal SCC time constant and VSM time constant for SPV and PRV. The gains weighting the two exponential functions were free to change. SPV (variance-accounted-for: 0.85 ± 0.10) and PRV (variance-accounted-for: 0.86 ± 0.07) were accurately fitted, showing that SPV and PRV curve differences can be explained by a greater relative weight of VSM in PRV compared with SPV (twofold for yaw, threefold for pitch). These results support our hypothesis that self-motion perception after angular velocity steps is driven by the same central vestibular processes as reflexive eye movements and that no additional mechanisms are required to explain the perceptual dynamics.
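    The two-exponential fit described above can be written, for both SPV and PRV, as (amplitude symbols chosen here for illustration):

        v(t) = A_{SCC}\, e^{-t/T_{SCC}} + A_{VSM}\, e^{-t/T_{VSM}}

    with the time constants T_{SCC} and T_{VSM} shared between SPV and PRV and the amplitudes free to differ, so the slower perceptual decay is captured by a larger relative weight of the VSM term.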

  4. Thermal time constant: optimising the skin temperature predictive modelling in lower limb prostheses using Gaussian processes

    PubMed Central

    Buis, Arjan

    2016-01-01

    Elevated skin temperature at the body/device interface of lower-limb prostheses is one of the major factors that affect tissue health. The heat dissipation in prosthetic sockets is greatly influenced by the thermal conductive properties of the hard socket and liner material employed. However, monitoring the interface temperature at skin level in lower-limb prostheses is notoriously complicated. This is due to the flexible nature of the interface liners used, which requires consistent positioning of sensors during donning and doffing. Predicting the residual limb temperature by monitoring the temperature between socket and liner, rather than between skin and liner, could be an important step in alleviating complaints about increased temperature and perspiration in prosthetic sockets. To predict the residual limb temperature, a machine learning algorithm, Gaussian processes, is employed, which utilizes the thermal time constant values of commonly used socket and liner materials. This Letter highlights the relevance of the thermal time constant of prosthetic materials in the Gaussian processes technique, which would be useful in addressing the challenge of non-invasively monitoring the residual limb skin temperature. With the introduction of the thermal time constant, the model can be optimised and generalised for a given prosthetic setup, thereby making the predictions more reliable. PMID:27695626

  5. Thermal time constant: optimising the skin temperature predictive modelling in lower limb prostheses using Gaussian processes.

    PubMed

    Mathur, Neha; Glesk, Ivan; Buis, Arjan

    2016-06-01

    Elevated skin temperature at the body/device interface of lower-limb prostheses is one of the major factors that affect tissue health. The heat dissipation in prosthetic sockets is greatly influenced by the thermal conductive properties of the hard socket and liner material employed. However, monitoring the interface temperature at skin level in lower-limb prostheses is notoriously complicated. This is due to the flexible nature of the interface liners used, which requires consistent positioning of sensors during donning and doffing. Predicting the residual limb temperature by monitoring the temperature between socket and liner, rather than between skin and liner, could be an important step in alleviating complaints about increased temperature and perspiration in prosthetic sockets. To predict the residual limb temperature, a machine learning algorithm, Gaussian processes, is employed, which utilizes the thermal time constant values of commonly used socket and liner materials. This Letter highlights the relevance of the thermal time constant of prosthetic materials in the Gaussian processes technique, which would be useful in addressing the challenge of non-invasively monitoring the residual limb skin temperature. With the introduction of the thermal time constant, the model can be optimised and generalised for a given prosthetic setup, thereby making the predictions more reliable.
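    A minimal sketch of this kind of Gaussian-process prediction using scikit-learn follows; the feature layout (socket-liner temperature plus a material thermal time constant) and all numbers are illustrative assumptions, not the authors' dataset or kernel choices.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    # Hypothetical training data: each row is [socket-liner temperature (deg C),
    # thermal time constant of the liner material (s)]; the target is the skin
    # temperature at the residual limb (deg C).  Values are made up.
    X = np.array([[28.0, 120.0],
                  [30.5, 120.0],
                  [29.0, 300.0],
                  [31.5, 300.0],
                  [33.0, 480.0]])
    y = np.array([30.1, 32.0, 31.2, 33.5, 35.0])

    kernel = ConstantKernel(1.0) * RBF(length_scale=[2.0, 200.0])
    gp = GaussianProcessRegressor(kernel=kernel, alpha=0.05, normalize_y=True)
    gp.fit(X, y)

    # Predict the skin temperature (with uncertainty) for a new socket reading.
    mean, std = gp.predict(np.array([[31.0, 300.0]]), return_std=True)
    print(f"predicted skin temperature: {mean[0]:.1f} +/- {std[0]:.1f} deg C")
    ```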

  6. Accurate acceleration of kinetic Monte Carlo simulations through the modification of rate constants.

    PubMed

    Chatterjee, Abhijit; Voter, Arthur F

    2010-05-21

    We present a novel computational algorithm called the accelerated superbasin kinetic Monte Carlo (AS-KMC) method that enables a more efficient study of rare-event dynamics than the standard KMC method while maintaining control over the error. In AS-KMC, the rate constants for processes that are observed many times are lowered during the course of a simulation. As a result, rare processes are observed more frequently than in KMC and the time progresses faster. We first derive error estimates for AS-KMC when the rate constants are modified. These error estimates are next employed to develop a procedure for lowering process rates with control over the maximum error. Finally, numerical calculations are performed to demonstrate that the AS-KMC method captures the correct dynamics, while providing significant CPU savings over KMC in most cases. We show that the AS-KMC method can be employed with any KMC model, even when no time scale separation is present (although in such cases no computational speed-up is observed), without requiring the knowledge of various time scales present in the system.
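    A schematic Python sketch of the rate-lowering idea follows; the observation threshold and scaling factor are arbitrary placeholders, whereas the published AS-KMC method derives the allowed rate reduction from explicit error estimates.

    ```python
    import math
    import random

    def as_kmc(rates, n_steps, n_obs_threshold=50, scale=0.5):
        """Toy accelerated-superbasin KMC: rates of frequently observed processes
        are lowered so that rare events are sampled more often and simulated
        time advances faster.  Illustrative only."""
        rates = dict(rates)                  # process name -> rate constant
        counts = {p: 0 for p in rates}
        t = 0.0
        for _ in range(n_steps):
            total = sum(rates.values())
            # pick a process with probability proportional to its current rate
            r = random.random() * total
            acc = 0.0
            for p, k in rates.items():
                acc += k
                chosen = p
                if acc >= r:
                    break
            counts[chosen] += 1
            # advance the clock by an exponentially distributed increment
            t += -math.log(1.0 - random.random()) / total
            # lower the rate of any process observed "many" times
            if counts[chosen] >= n_obs_threshold:
                rates[chosen] *= scale
                counts[chosen] = 0
        return t, rates

    # toy example: a fast reversible hop pair plus one rare escape event
    t, final_rates = as_kmc({"hop_AB": 1e6, "hop_BA": 1e6, "escape": 1.0}, 10000)
    print(t, final_rates)
    ```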

  7. Periodical capacity setting methods for make-to-order multi-machine production systems

    PubMed Central

    Altendorfer, Klaus; Hübl, Alexander; Jodlbauer, Herbert

    2014-01-01

    The paper presents different periodical capacity setting methods for make-to-order, multi-machine production systems with stochastic customer required lead times and stochastic processing times to improve service level and tardiness. These methods are developed as decision support when capacity flexibility exists, such as a certain range of possible working hours per week. The methods differ in the amount of information used, whereby all are based on the cumulated capacity demand at each machine. In a simulation study the methods' impact on service level and tardiness is compared to a constant provided capacity for a single and a multi-machine setting. It is shown that the tested capacity setting methods can lead to an increase in service level and a decrease in average tardiness in comparison to a constant provided capacity. The methods using information on processing time and customer required lead time distribution perform best. The results found in this paper can help practitioners to make efficient use of their flexible capacity. PMID:27226649

  8. Minimizing the area required for time constants in integrated circuits

    NASA Technical Reports Server (NTRS)

    Lyons, J. C.

    1972-01-01

    When a medium- or large-scale integrated circuit is designed, efforts are usually made to avoid the use of resistor-capacitor time constant generators. The capacitor needed for this circuit usually takes up more surface area on the chip than several resistors and transistors. When the use of this network is unavoidable, the designer usually makes an effort to see that the choice of resistor and capacitor combinations is such that a minimum amount of surface area is consumed. The optimum ratio of resistance to capacitance that will result in this minimum area is equal to the ratio of resistance to capacitance which may be obtained from a unit of surface area for the particular process being used. The minimum area required is a function of the square root of the reciprocal of the products of the resistance and capacitance per unit area. This minimum occurs when the area required by the resistor is equal to the area required by the capacitor.
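    The quoted result follows from a short constrained-minimization argument; writing r and c for the resistance and capacitance obtainable per unit of surface area (a sketch consistent with the abstract, not taken from the report itself):

        R = r A_R, \quad C = c A_C, \quad \tau = R C = r c \, A_R A_C

    For fixed \tau, the total area A_R + A_C is minimal when A_R = A_C, giving

        A_{\min} = 2 \sqrt{\tau / (r c)}, \qquad (R / C)_{\mathrm{opt}} = r / c.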

  9. Process Dynamics and Control, a Theory-Experiential Approach.

    ERIC Educational Resources Information Center

    Perna, A. J.; And Others

    A required senior-level chemical engineering course at Colorado State University is described. The first nine weeks are devoted to the theory portion of the course, which includes the following topics: Laplace transformations and time constants, block diagrams, inverse transformations, linearization, frequency response analysis, graphical…

  10. Practicing Improvisation: Preparing Multicultural Educators

    ERIC Educational Resources Information Center

    Hull, Karla

    2015-01-01

    Preparing competent multicultural educators involves a dynamic process requiring constant self-reflection and assisting pre-service teachers to sharpen their cultural vision as they learn to be responsive educators. Reflections on lessons learned as a teacher educator are shared through personal experiences that are identified as keys to prepare…

  11. Measurements of the time constant for steady ionization in shaped-charge barium releases

    NASA Technical Reports Server (NTRS)

    Hoch, Edward L.; Hallinan, Thomas J.

    1993-01-01

    Quantitative measurements of three solar illuminated shaped-charge barium releases injected at small angles to the magnetic field were made using a calibrated color television camera. Two of the releases were from 1989. The third release, a reanalysis of an event included in Hallinan's 1988 study of three 1986 releases, was included to provide continuity between the two studies. Time constants for ionization, measured during the first 25 s of each release, were found to vary considerably. The two 1989 time constants differed substantially, and both were significantly less than any of the 1986 time constants. On the basis of this variability, we conclude that the two 1989 releases showed evidence of continuous nonsolar ionization. One release showed nonsolar ionization which could not be attributed to Alfven's critical ionization velocity process, which requires a component of velocity perpendicular to the magnetic field providing a perpendicular energy greater than the ionization potential.

  12. Proposed Interoperability Readiness Level Assessment for Mission Critical Interfaces During Navy Acquisition

    DTIC Science & Technology

    2010-12-01

    This involves zeroing and recreating the interoperability arrays and other variables used in the simulation. Since the constants do not change from run to run… Using this algorithm, the process of encrypting/decrypting data requires very little computation, and the generation of the random pads can be…

  13. Adapting to Changing Memory Retrieval Demands: Evidence from Event-Related Potentials

    ERIC Educational Resources Information Center

    Benoit, Roland G.; Werkle-Bergner, Markus; Mecklinger, Axel; Kray, Jutta

    2009-01-01

    This study investigated preparatory processes involved in adapting to changing episodic memory retrieval demands. Event-related potentials (ERPs) were recorded while participants performed a general old/new recognition task and a specific task that also required retrieval of perceptual details. The relevant task remained either constant or changed…

  14. Heat pipes. [technology utilization

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The development and use of heat pipes are described, including space requirements and contributions. Controllable heat pipes, and designs for automatically maintaining a selected constant temperature, are discussed which would add to the versatility and usefulness of heat pipes in industrial processing, manufacture of integrated circuits, and in temperature stabilization of electronics.

  15. Geomagnetic field modeling by optimal recursive filtering

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Data sets selected for mini-batches and the software modifications required for processing these sets are described. Initial analysis was performed on minibatch field model recovery. Studies are being performed to examine the convergence of the solutions and the maximum expansion order the data will support in the constant and secular terms.

  16. In Vitro Measurements of Metabolism for Application in Pharmacokinetic Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lipscomb, John C.; Poet, Torka S.

    2008-04-01

    Human risk and exposure assessments require dosimetry information. Species-specific tissue dose response will be driven by physiological and biochemical processes. While metabolism and pharmacokinetic data are often not available in humans, they are much more available in laboratory animals; metabolic rate constants can be readily derived in vitro. The physiological differences between laboratory animals and humans are known. Biochemical processes, especially metabolism, can be measured in vitro and extrapolated to account for in vivo metabolism through clearance models or when linked to a physiologically based pharmacokinetic (PBPK) model to describe the physiological processes, such as drug delivery to the metabolic organ. This review focuses on the different organ, cellular, and subcellular systems that can be used to measure in vitro metabolic rate constants and how those data are extrapolated for use in biokinetic modeling.

  17. Low-Dielectric Polyimides

    NASA Technical Reports Server (NTRS)

    St. Clair, Anne K.; St. Clair, Terry L.; Winfree, William P.; Emerson, Bert R., Jr.

    1989-01-01

    New process developed to produce aromatic condensation polyimide films and coatings having dielectric constants in range of 2.4 to 3.2. Materials better electrical insulators than state-of-the-art commercial polyimides. Several low-dielectric-constant polyimides have excellent resistance to moisture. Useful as film and coating materials for both industrial and aerospace applications where high electrical insulation, resistance to moisture, mechanical strength, and thermal stability required. Applicable to production of high-temperature and moisture-resistance adhesives, films, photoresists, and coatings. Electronic applications include printed-circuit boards, both of composite and flexible-film types and potential use in automotive, aerospace, and electronic industries.

  18. Resist Parameter Extraction from Line-and-Space Patterns of Chemically Amplified Resist for Extreme Ultraviolet Lithography

    NASA Astrophysics Data System (ADS)

    Kozawa, Takahiro; Oizumi, Hiroaki; Itani, Toshiro; Tagawa, Seiichi

    2010-11-01

    The development of extreme ultraviolet (EUV) lithography has progressed owing to worldwide effort. As the development status of EUV lithography approaches the requirements for the high-volume production of semiconductor devices with a minimum line width of 22 nm, the extraction of resist parameters becomes increasingly important from the viewpoints of the accurate evaluation of resist materials for resist screening and the accurate process simulation for process and mask designs. In this study, we demonstrated that resist parameters (namely, quencher concentration, acid diffusion constant, proportionality constant of line edge roughness, and dissolution point) can be extracted from scanning electron microscopy (SEM) images of patterned resists, without knowledge of the details of the resist contents, using two of the latest EUV resists.

  19. Design of Linear-Quadratic-Regulator for a CSTR process

    NASA Astrophysics Data System (ADS)

    Meghna, P. R.; Saranya, V.; Jaganatha Pandian, B.

    2017-11-01

    This paper aims at creating a Linear Quadratic Regulator (LQR) for a Continuous Stirred Tank Reactor (CSTR). A CSTR is a common process used in chemical industries. It is a highly non-linear system. Therefore, in order to create the gain feedback controller, the model is linearized. The controller is designed for the linearized model and the concentration and volume of the liquid in the reactor are kept at a constant value as required.
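    A minimal sketch of a continuous-time LQR design of the kind described, using SciPy; the linearized state-space matrices and weighting matrices below are placeholders, not the CSTR model from the paper.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    def lqr(A, B, Q, R):
        """Continuous-time LQR: minimizes the integral of x'Qx + u'Ru subject to
        x_dot = A x + B u, giving state feedback u = -K x with K = R^-1 B' P."""
        P = solve_continuous_are(A, B, Q, R)
        return np.linalg.solve(R, B.T @ P)

    # Placeholder linearized CSTR model (states: concentration and liquid volume
    # deviations; inputs: feed and outlet flow adjustments).  Not from the paper.
    A = np.array([[-2.0,  0.0],
                  [ 0.0, -0.5]])
    B = np.eye(2)
    Q = np.diag([10.0, 1.0])   # penalize deviations in concentration and volume
    R = np.diag([1.0, 1.0])    # penalize control effort

    K = lqr(A, B, Q, R)
    print("LQR gain K =\n", K)
    ```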

  20. IT Acquisition: Expediting the Process to Deliver Business Capabilities to the DoD Enterprise. Revised Edition

    DTIC Science & Technology

    2012-07-01

    effectively manage delivery of information capabilities. Under IT 360, they will need to incorporate constantly evolving, market-driven commercial systems… traditional acquisition system; under IT 360, these processes are largely obsolete and create oversight ambiguities. • Congress requires that funds be… 2004). Furthermore, because the product is not available on the commercial market, the development of any complementary updates will also need to be…

  1. A New Microelectronics Curriculum Created by Synopsys, Inc.

    ERIC Educational Resources Information Center

    Goldman, Rich; Bartleson, Karen; Wood, Troy; Melikyan, Vazgen; Wang, Zhi-hua; Chen, Lan

    2009-01-01

    Rapid changes in integrated circuits (IC) technology and constantly shrinking process geometries demand a new curriculum that meets the contemporary requirements for IC design. This is especially important for 90nm and below technologies and the use of state-of-the-art EDA design tools and advanced IC design techniques. The creation of new…

  2. Enterocyte loss of polarity and gut wound healing rely upon the F-actin-severing function of villin

    USDA-ARS?s Scientific Manuscript database

    Efficient wound healing is required to maintain the integrity of the intestinal epithelial barrier because of its constant exposure to a large variety of environmental stresses. This process implies a partial cell depolarization and the acquisition of a motile phenotype that involves rearrangements ...

  3. 6 Keys to Identity Management

    ERIC Educational Resources Information Center

    Shoham, Idan

    2011-01-01

    An Identity and Access Management (IAM) project on campus can feel like a Sisyphean task: Just when access rights have finally been sorted out, the semester ends--and users change roles, leave campus, or require new processes. IT departments face a constantly changing technical landscape: (1) integrating new applications and retiring old ones; (2)…

  4. Teacher Collaborative Curriculum Design in Technical Vocational Colleges: A Strategy for Maintaining Curriculum Consistency?

    ERIC Educational Resources Information Center

    Albashiry, Nabeel M.; Voogt, Joke M.; Pieters, Jules M.

    2015-01-01

    The Technical Vocational Education and Training (TVET) curriculum requires continuous renewal and constant involvement of stakeholders in the redesign process. Due to a lack of curriculum design expertise, TVET institutions in developing contexts encounter challenges maintaining and advancing the quality and relevance of their programmes to the…

  5. The discrete Fourier transform algorithm for determining decay constants—Implementation using a field programmable gate array

    NASA Astrophysics Data System (ADS)

    Bostrom, G.; Atkinson, D.; Rice, A.

    2015-04-01

    Cavity ringdown spectroscopy (CRDS) uses the exponential decay constant of light exiting a high-finesse resonance cavity to determine analyte concentration, typically via absorption. We present a high-throughput data acquisition system that determines the decay constant in near real time using the discrete Fourier transform algorithm on a field programmable gate array (FPGA). A commercially available, high-speed, high-resolution, analog-to-digital converter evaluation board system is used as the platform for the system, after minor hardware and software modifications. The system outputs decay constants at maximum rate of 4.4 kHz using an 8192-point fast Fourier transform by processing the intensity decay signal between ringdown events. We present the details of the system, including the modifications required to adapt the evaluation board to accurately process the exponential waveform. We also demonstrate the performance of the system, both stand-alone and incorporated into our existing CRDS system. Details of FPGA, microcontroller, and circuitry modifications are provided in the Appendix and computer code is available upon request from the authors.
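    The underlying DFT relation can be illustrated in a few lines: for a decay s(t) ≈ A e^{-k t}, a single Fourier bin gives S(ω) ≈ A/(k + iω), so k ≈ -ω·Re(S)/Im(S). The sketch below is a software illustration of that relation (assuming the decay is fully contained in the record and any baseline offset has been removed), not the FPGA implementation described in the record.

    ```python
    import numpy as np

    def decay_constant_from_dft(signal, dt, bin_index=1):
        """Estimate the decay rate k of s(t) ~ A*exp(-k*t) from a single DFT bin,
        using S(omega) ~ A/(k + i*omega) so that k = -omega*Re(S)/Im(S)."""
        S = np.fft.rfft(signal)
        omega = 2.0 * np.pi * bin_index / (len(signal) * dt)
        return -omega * S[bin_index].real / S[bin_index].imag

    # quick self-test with a synthetic ringdown (10 MS/s, 50 us decay time)
    dt, k_true = 1e-7, 2.0e4
    t = np.arange(8192) * dt
    k_est = decay_constant_from_dft(np.exp(-k_true * t), dt)
    print(k_true, k_est)   # the estimate should agree to within about a percent
    ```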

  6. The controlled growth of GaN nanowires.

    PubMed

    Hersee, Stephen D; Sun, Xinyu; Wang, Xin

    2006-08-01

    This paper reports a scalable process for the growth of high-quality GaN nanowires and uniform nanowire arrays in which the position and diameter of each nanowire is precisely controlled. The approach is based on conventional metalorganic chemical vapor deposition using regular precursors and requires no additional metal catalyst. The location, orientation, and diameter of each GaN nanowire are controlled using a thin, selective growth mask that is patterned by interferometric lithography. It was found that use of a pulsed MOCVD process allowed the nanowire diameter to remain constant after the nanowires had emerged from the selective growth mask. Vertical GaN nanowire growth rates in excess of 2 μm/h were measured, while remarkably the diameter of each nanowire remained constant over the entire (micrometer) length of the nanowires. The paper reports transmission electron microscopy and photoluminescence data.

  7. Continuous-Time Bilinear System Identification

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan

    2003-01-01

    The objective of this paper is to describe a new method for identification of a continuous-time multi-input and multi-output bilinear system. The approach is to make judicious use of the linear-model properties of the bilinear system when subjected to a constant input. Two steps are required in the identification process. The first step is to use a set of pulse responses resulting from a constant input of one sample period to identify the state matrix, the output matrix, and the direct transmission matrix. The second step is to use another set of pulse responses with the same constant input over multiple sample periods to identify the input matrix and the coefficient matrices associated with the coupling terms between the state and the inputs. Numerical examples are given to illustrate the concept and the computational algorithm for the identification method.

  8. Acoustic emission of rock mass under the constant-rate fluid injection

    NASA Astrophysics Data System (ADS)

    Shadrin, A. V.; Klishin, V. I.

    2018-03-01

    The authors study acoustic emission in a coal bed and a difficult-to-cave roof during fluid injection by pumps at a constant rate. The functional connection between the roof hydrofracture length and the total number of AE pulses is validated; it is also found that the coal bed hydro-loosening time, the injection rate, and the time behavior of acoustic emission activity depend on the fluid injection volume required until the fluid breaks out into a roadway through growing fractures. In the formulas offered for practical application, integral parameters that characterize the permeability and porosity of the rock mass, as well as process parameters of the technology, are determined during a test injection.

  9. Design of a unit to produce hot distilled water for the same power consumption as a water heater

    NASA Technical Reports Server (NTRS)

    Bambenek, R. A.; Nuccio, P. P.

    1973-01-01

    Unit recovers 97% of water contained in pretreated waste water. Some factors are: cleansing agent prevents fouling of heat transfer surface by highly concentrated waste; absence of dynamic seals reduces required purge gas flow rate; and recycle loop maintains constant flushing process to carry cleansing agent across evaporation surface.

  10. Typology of Strategies of Personality Meaning-Making during Professional Education

    ERIC Educational Resources Information Center

    Shchipanova, Dina Ye.; Lebedeva, Ekaterina V.; Sukhinin, Valentin P.; Valieva, Elizaveta N.

    2016-01-01

    The importance of the studied issue stems from the fact that the high dynamics of the labour market require constant work by an individual on self-determination and on the search for significance in his/her professional activity. The purpose of the research is the theoretical development and empirical verification of the types of strategies of…

  11. The Endurance of Children's Working Memory: A Recall Time Analysis

    ERIC Educational Resources Information Center

    Towse, John N.; Hitch, Graham J.; Hamilton, Z.; Pirrie, Sarah

    2008-01-01

    We analyze the timing of recall as a source of information about children's performance in complex working memory tasks. A group of 8-year-olds performed a traditional operation span task in which sequence length increased across trials and an operation period task in which processing requirements were extended across trials of constant sequence…

  12. Shock wave calibration of under-expanded natural gas fuel jets

    NASA Astrophysics Data System (ADS)

    White, T. R.; Milton, B. E.

    2008-10-01

    Natural gas, a fuel abundant in nature, cannot be used by itself in conventional diesel engines because of its low cetane number. However, it can be used as the primary fuel with ignition by a pilot diesel spray. This is called dual-fuelling. The gas may be introduced either into the inlet manifold or, preferably, directly into the cylinder where it is injected as a short duration, intermittent, sonic jet. For accurate delivery in the latter case, a constant flow-rate from the injector is required into the constantly varying pressure in the cylinder. Thus, a sonic (choked) jet is required which is generally highly under-expanded. Immediately at the nozzle exit, a shock structure develops which can provide essential information about the downstream flow. This shock structure, generally referred to as a “barrel” shock, provides a key to understanding the full injection process. It is examined both experimentally and numerically in this paper.

  13. Economics of food irradiation

    NASA Astrophysics Data System (ADS)

    Kunstadt, Peter; Steeves, Colyn; Beaulieu, Daniel

    1993-07-01

    The number of products being radiation processed worldwide is constantly increasing and today includes such diverse items as medical disposables, fruits and vegetables, spices, meats, seafoods and waste products. This range of products to be processed has resulted in a wide range of irradiator designs and capital and operating cost requirements. This paper discusses the economics of low dose food irradiation applications and the effects of various parameters on unit processing costs. It provides a model for calculating specific unit processing costs by correlating known capital costs with annual operating costs and annual throughputs. It is intended to provide the reader with a general knowledge of how unit processing costs are derived.

  14. Effects of practice schedule and task specificity on the adaptive process of motor learning.

    PubMed

    Barros, João Augusto de Camargo; Tani, Go; Corrêa, Umberto Cesar

    2017-10-01

    This study investigated the effects of practice schedule and task specificity based on the perspective of adaptive process of motor learning. For this purpose, tasks with temporal and force control learning requirements were manipulated in experiments 1 and 2, respectively. Specifically, the task consisted of touching with the dominant hand the three sequential targets with specific movement time or force for each touch. Participants were children (N=120), both boys and girls, with an average age of 11.2years (SD=1.0). The design in both experiments involved four practice groups (constant, random, constant-random, and random-constant) and two phases (stabilisation and adaptation). The dependent variables included measures related to the task goal (accuracy and variability of error of the overall movement and force patterns) and movement pattern (macro- and microstructures). Results revealed a similar error of the overall patterns for all groups in both experiments and that they adapted themselves differently in terms of the macro- and microstructures of movement patterns. The study concludes that the effects of practice schedules on the adaptive process of motor learning were both general and specific to the task. That is, they were general to the task goal performance and specific regarding the movement pattern. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Evolution of heavy-element abundances in the Galactic halo and disk

    NASA Technical Reports Server (NTRS)

    Mathews, G. J.; Cowan, J. J.; Schramm, D. N.

    1988-01-01

    The constraints on the universal energy density and cosmological constant from cosmochronological ages and the Hubble age are reviewed. Observational evidence for the galactic chemical evolution of the heavy-element chronometers is described in the context of numerical models. The viability of the recently discovered Th/Nd stellar chronometer is discussed, along with the suggestion that high r-process abundances in metal-poor stars may have resulted from a primordial r-process, as may be required by some inhomogeneous cosmologies.

  16. Mobility-based correction for accurate determination of binding constants by capillary electrophoresis-frontal analysis.

    PubMed

    Qian, Cheng; Kovalchik, Kevin A; MacLennan, Matthew S; Huang, Xiaohua; Chen, David D Y

    2017-06-01

    Capillary electrophoresis frontal analysis (CE-FA) can be used to determine binding affinity of molecular interactions. However, its current data processing method mandates specific requirements on the mobilities of the binding pair in order to obtain accurate binding constants. This work shows that significant errors result when the mobilities of the interacting species do not meet these requirements. Therefore, the applicability of CE-FA in many real-world applications becomes questionable. An electrophoretic mobility-based correction method is developed in this work based on the flux of each species. A simulation program and a pair of model compounds are used to verify the new equations and evaluate the effectiveness of this method. Ibuprofen and hydroxypropyl-β-cyclodextrin are used to demonstrate the differences in the binding constant obtained by CE-FA when different calculation methods are used, and the results are compared with those obtained by affinity capillary electrophoresis (ACE). The results suggest that CE-FA, with the mobility-based correction method, can be a generally applicable method for a much wider range of applications. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Effect of vacuum-ultraviolet irradiation on the dielectric constant of low-k organosilicate dielectrics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, H.; Shohet, J. L.; Ryan, E. T.

    2014-11-17

    Vacuum ultraviolet (VUV) irradiation is generated during plasma processing in semiconductor fabrication, but its effect on the dielectric constant (k value) of low-k materials is still an open question. To clarify this problem, low-k organosilicate dielectric (SiCOH) samples were exposed to VUV photons with a range of energies at room temperature. Photon energies equal to or larger than 6.0 eV were found to decrease the k value of SiCOH films. VUV photons with lower energies do not have this effect. This shows the need for thermal heating in traditional ultraviolet (UV) curing, since UV light sources do not have sufficient energy to change the dielectric constant of SiCOH and additional energy is required from thermal heating. In addition, 6.2 eV photon irradiation was found to be the most effective in decreasing the dielectric constant of low-k organosilicate films. Fourier Transform Infra-red Spectroscopy shows that these 6.2 eV VUV exposures removed organic porogens. This contributes to the decrease of the dielectric constant. This information provides the range of VUV photon energies that could decrease the dielectric constant of low-k materials most effectively.

  18. Well hydraulics in pumping tests with exponentially decayed rates of abstraction in confined aquifers

    NASA Astrophysics Data System (ADS)

    Wen, Zhang; Zhan, Hongbin; Wang, Quanrong; Liang, Xing; Ma, Teng; Chen, Chen

    2017-05-01

    Actual field pumping tests often involve variable pumping rates which cannot be handled by the classical constant-rate or constant-head test models, and often require a convolution process to interpret the test data. In this study, we proposed a semi-analytical model for an exponentially decreasing pumping rate that starts at a certain (higher) rate and eventually stabilizes at a certain (lower) rate, for cases with or without wellbore storage. A striking new feature of a pumping test with an exponentially decayed rate is that the drawdown decreases over a certain period of time during the intermediate pumping stage, which is never seen in constant-rate or constant-head pumping tests. It was found that the drawdown-time curve associated with an exponentially decayed pumping rate is bounded by the two asymptotic constant-rate curves whose rates equal the starting and stabilized rates, respectively. The wellbore storage must be considered for a pumping test without an observation well (single-well test). Based on these characteristics of the time-drawdown curve, we developed a new method to estimate the aquifer parameters by using a genetic algorithm.
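
    As a rough illustration of the convolution treatment of variable-rate tests mentioned above, the sketch below superposes Theis responses to stepwise increments of an exponentially decayed pumping rate. All parameter values (transmissivity, storativity, radius, rate constants) are illustrative placeholders, not values from the study, and the paper's own semi-analytical solution and wellbore-storage treatment are not reproduced.

        # Hedged sketch: drawdown for an exponentially decaying pumping rate,
        # approximated by stepwise superposition of Theis solutions.
        import numpy as np
        from scipy.special import exp1  # Theis well function W(u) = exp1(u)

        T, S, r = 5e-4, 1e-4, 10.0          # transmissivity (m^2/s), storativity, radius (m)
        Q0, Qs, lam = 2e-3, 5e-4, 1e-4      # start rate, stabilized rate (m^3/s), decay (1/s)

        def rate(t):
            """Exponentially decayed pumping rate Q(t)."""
            return Qs + (Q0 - Qs) * np.exp(-lam * t)

        def drawdown(t, n_steps=200):
            """Superpose Theis responses to rate increments over n_steps intervals."""
            t_k = np.linspace(0.0, t, n_steps + 1)
            dQ = np.diff(rate(t_k), prepend=0.0)   # rate increments; dQ[0] = Q(0)
            s = 0.0
            for tk, dq in zip(t_k, dQ):
                if t > tk and dq != 0.0:
                    u = r**2 * S / (4.0 * T * (t - tk))
                    s += dq / (4.0 * np.pi * T) * exp1(u)
            return s

        print(drawdown(3600.0))   # drawdown after one hour (m)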

  19. Compensation of Verdet Constant Temperature Dependence by Crystal Core Temperature Measurement

    PubMed Central

    Petricevic, Slobodan J.; Mihailovic, Pedja M.

    2016-01-01

    Compensation of the temperature dependence of the Verdet constant in a polarimetric extrinsic Faraday sensor is of major importance for applying the magneto-optical effect to AC current measurements and magnetic field sensing. This paper presents a method for compensating the temperature effect on the Faraday rotation in a Bi12GeO20 crystal by sensing its optical activity effect on the polarization of a light beam. The method measures the temperature of the same volume of crystal that effects the beam polarization in a magnetic field or current sensing process. This eliminates the effect of temperature difference found in other indirect temperature compensation methods, thus allowing more accurate temperature compensation for the temperature dependence of the Verdet constant. The method does not require additional changes to an existing Δ/Σ configuration and is thus applicable for improving the performance of existing sensing devices. PMID:27706043

  20. [Calcium and vitamin D in bone metabolism: Clinical importance for fracture treatment].

    PubMed

    Amling, M

    2015-12-01

    A balanced calcium homeostasis is of critical importance not only for bone remodeling, the physiological process of bone resorption and bone formation that constantly renews bone throughout life, but also for normal fracture healing. Given that disturbances of calcium homeostasis are present in 50 % of the German population and that this can result in delayed fracture healing after correct surgical treatment, this paper focuses on calcium and vitamin D in daily orthopedic and trauma surgery practice. Sufficient enteral calcium uptake requires three conditions: (1) sufficient calcium intake via the diet, (2) a 25-hydroxyvitamin D serum level > 30 µg/l and (3) sufficient gastric acidification. Given the endemic vitamin D deficiency in Germany as well as the constantly increasing number of people using proton pump inhibitors on a regular basis, it is necessary to closely connect trauma and orthopedic surgery with osteological treatment. The first issue to be dealt with is to control and, if needed, normalize calcium homeostasis in order to allow a normal, undisturbed fracture healing process after both conservative and operative treatment of fractures.

  1. Development of linear free energy relationships for aqueous phase radical-involved chemical reactions.

    PubMed

    Minakata, Daisuke; Mezyk, Stephen P; Jones, Jace W; Daws, Brittany R; Crittenden, John C

    2014-12-02

    Aqueous phase advanced oxidation processes (AOPs) produce hydroxyl radicals (HO•) which can completely oxidize electron-rich organic compounds. The proper design and operation of AOPs require that we predict the formation and fate of the byproducts and their associated toxicity. Accordingly, there is a need to develop a first-principles kinetic model that can predict the dominant reaction pathways that potentially produce toxic byproducts. We have published some of our efforts on predicting the elementary reaction pathways and the HO• rate constants. Here we develop linear free energy relationships (LFERs) that predict the rate constants for aqueous phase radical reactions. The LFERs relate experimentally obtained kinetic rate constants to quantum mechanically calculated aqueous phase free energies of activation. The LFERs have been applied to 101 reactions, including (1) HO• addition to 15 aromatic compounds; (2) addition of molecular oxygen to 65 carbon-centered aliphatic and cyclohexadienyl radicals; (3) disproportionation of 10 peroxyl radicals; and (4) unimolecular decay of nine peroxyl radicals. The LFER correlations predict the rate constants to within a factor of 2 of the experimental values for HO• reactions and molecular oxygen addition, and within a factor of 5 for peroxyl radical reactions. The LFERs and the elementary reaction pathways will enable us to predict the formation and initial fate of the byproducts in AOPs. Furthermore, our methodology can be applied to other environmental processes in which aqueous phase radical-involved reactions occur.

  2. Enabling Healthcare IT Governance: Human Task Management Service for Administering Emergency Department's Resources for Efficient Patient Flow.

    PubMed

    Rodriguez, Salvador; Aziz, Ayesha; Chatwin, Chris

    2014-01-01

    The use of Health Information Technology (HIT) to improve healthcare service delivery is constantly increasing due to research advances in medical science and information systems. Having a fully automated process solution for a Healthcare Organization (HCO) requires a combination of organizational strategies along with a selection of technologies that facilitate the goal of improving clinical outcomes. HCOs require dynamic management of care capability to realize the full potential of HIT. Business Process Management (BPM) is being increasingly adopted to streamline the healthcare service delivery and management processes. Emergency Departments (EDs) provide a case in point, as they require multidisciplinary resources and services to deliver effective clinical outcomes. Managed care involves the coordination of a range of services in an ED. Although fully automated processes in emergency care provide a cutting-edge example of service delivery, there are many situations that require human interactions with the computerized systems, e.g., medication approvals, care transfer, and acute patient care. This requires a coordination mechanism for all the resources, computer and human, to work side by side to provide the best care. To ensure evidence-based medical practice in the ED, we have designed a Human Task Management service to model the process of coordination of ED resources based on the UK's NICE clinical guideline for managing the care of acutely ill patients. This functionality is implemented using Java Business Process Management (jBPM).

  3. Effects of oxygen deficiency on the transport and dielectric properties of NdSrNbO

    NASA Astrophysics Data System (ADS)

    Hzez, W.; Benali, A.; Rahmouni, H.; Dhahri, E.; Khirouni, K.; Costa, B. F. O.

    2018-06-01

    In the present study, Nd0.7Sr0.3NbO3-y (y = 0.1, 0.15, 0.2) compounds were prepared via a solid-solid reaction route. The prepared samples were characterized by electrochemical impedance spectroscopy in order to establish the effects of temperature, frequency, and oxygen vacancies on both the transport and dielectric properties of NdSrNbO. We found that both the electrical and dielectric properties were highly sensitive to the concentration of oxygen vacancies. The conduction mechanism data were explained well according to the Mott model and adiabatic small polaronic hopping model. Electrochemical impedance spectroscopy analysis showed that one relaxation process was present in the Nd0.7Sr0.3NbO2.9 system whereas two relaxation processes were observed in the Nd0.7Sr0.3NbO2.85 and Nd0.7Sr0.3NbO2.8 systems, where the latter behavior indicated the presence of many active regions (due to the contributions of different microstructures). The temperature and frequency dependences of the dielectric constant confirmed the contributions of different polarization mechanisms. In particular, the high dielectric constant values at low frequencies and high temperatures were mainly related to the presence of different Schottky barriers, whereas the low dielectric constant values at high frequencies were essentially related to the intrinsic effect. The constant dielectric values obtained for the samples are greater than those in the NdSrFeO system, which makes them interesting materials for use in applications that require high dielectric constants.

  4. Laser inactivation of pathogenic viruses in water

    NASA Astrophysics Data System (ADS)

    Grishkanich, Alexander; Zhevlakov, Alexander; Kascheev, Sergey; Sidorov, Igor; Ruzankina, Julia; Yakovlev, Alexey; Mak, Andrey

    2016-03-01

    At present it is difficult to provide the population with drinking water that meets sanitary and hygienic requirements, and water disinfection remains an urgent problem. Waterborne pathogens, such as the agents of typhoid and cholera, require constant purification of water against pathogenic bacteria. Conventional water treatment destroys up to 98% of germs, but pathogenic viruses may remain among the survivors, and their destruction requires special handling. Based on the research conducted, the following methods have been proposed for combating harmful microorganisms: sterilization of water by laser radiation and by a UV lamp.

  5. Research on aspheric focusing lens processing and testing technology in the high-energy laser test system

    NASA Astrophysics Data System (ADS)

    Liu, Dan; Fu, Xiu-hua; Jia, Zong-he; Wang, Zhe; Dong, Huan

    2014-08-01

    In high-energy laser test systems, higher requirements are placed on the surface profile and finish of the optical elements. Taking a focusing aspherical Zerodur lens with a diameter of 100 mm as an example, the surface profile and surface quality of the lens were investigated using a combination of CNC and classical machining methods. Guided by profilometer and high-power microscope measurements, and supported by testing and simulation analysis, the process parameters were improved continually during manufacturing. Mid- and high-frequency errors were reduced so that the surface form gradually converged to the required accuracy. The experimental results show that the final accuracy of the surface is less than 0.5 μm and the surface finish is □, which fulfils the accuracy requirement of the aspherical focusing lens in the optical system.

  6. User-centered requirements engineering in health information systems: a study in the hemophilia field.

    PubMed

    Teixeira, Leonor; Ferreira, Carlos; Santos, Beatriz Sousa

    2012-06-01

    The use of sophisticated information and communication technologies (ICTs) in the health care domain is a way to improve the quality of services. However, there are also hazards associated with the introduction of ICTs in this domain and a great number of projects have failed due to the lack of systematic consideration of human and other non-technology issues throughout the design or implementation process, particularly in the requirements engineering process. This paper presents the methodological approach followed in the design process of a web-based information system (WbIS) for managing the clinical information in hemophilia care, which integrates the values and practices of user-centered design (UCD) activities into the principles of software engineering, particularly in the phase of requirements engineering (RE). This process followed a paradigm that combines a grounded theory for data collection with an evolutionary design based on constant development and refinement of the generic domain model using three well-known methodological approaches: (a) object-oriented system analysis; (b) task analysis; and, (c) prototyping, in a triangulation work. This approach seems to be a good solution for the requirements engineering process in this particular case of the health care domain, since the inherent weaknesses of individual methods are reduced, and emergent requirements are easier to elicit. Moreover, the requirements triangulation matrix gives the opportunity to look across the results of all used methods and decide what requirements are critical for the system success. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  7. Diagnosis of the Computer-Controlled Milling Machine, Definition of the Working Errors and Input Corrections on the Basis of Mathematical Model

    NASA Astrophysics Data System (ADS)

    Starikov, A. I.; Nekrasov, R. Yu; Teploukhov, O. J.; Soloviev, I. V.; Narikov, K. A.

    2016-10-01

    As science and technology advance, machinery and equipment improve in design, and the requirements for quality and longevity grow. In particular, the requirements for the surface quality and manufacturing precision of oil and gas equipment parts are constantly increasing. Production of oil and gas engineering products on modern machine tools with computer numerical control is a complex synthesis of the mechanical and electrical parts of the equipment as well as of the processing procedure. The mechanical part of the machine wears during operation, while mathematical errors accumulate in the electrical part. These shortcomings in any part of the metalworking equipment affect the manufacturing process as a whole and, as a result, lead to defects.

  8. Boundaries of the Realizability Region of Membrane Separation Processes

    NASA Astrophysics Data System (ADS)

    Tsirlin, A. M.; Akhrenemkov, A. A.

    2018-01-01

    The region of realizability of membrane separation systems having a constant total membrane area has been determined for a given output of final product at a given composition of the mixture flow. The law of variation of the pressure in the mixture, corresponding to the minimum energy required for separation, was specified for media whose properties are close to those of ideal gases and solutions.

  9. Hybrid Discrete-Continuous Markov Decision Processes

    NASA Technical Reports Server (NTRS)

    Feng, Zhengzhu; Dearden, Richard; Meuleau, Nicholas; Washington, Rich

    2003-01-01

    This paper proposes a Markov decision process (MDP) model that features both discrete and continuous state variables. We extend previous work by Boyan and Littman on the mono-dimensional time-dependent MDP to multiple dimensions. We present the principle of lazy discretization, and piecewise constant and linear approximations of the model. Having to deal with several continuous dimensions raises several new problems that require new solutions. In the (piecewise) linear case, we use techniques from partially observable MDPs (POMDPs) to represent value functions as sets of linear functions attached to different partitions of the state space.

  10. Linear free energy relationships between aqueous phase hydroxyl radical reaction rate constants and free energy of activation.

    PubMed

    Minakata, Daisuke; Crittenden, John

    2011-04-15

    The hydroxyl radical (HO(•)) is a strong oxidant that reacts with electron-rich sites on organic compounds and initiates complex radical chain reactions in aqueous phase advanced oxidation processes (AOPs). Computer-based kinetic modeling requires a reaction pathway generator and predictions of the associated reaction rate constants. Previously, we reported a reaction pathway generator that can enumerate the most important elementary reactions for aliphatic compounds. For the reaction rate constant predictor, we develop linear free energy relationships (LFERs) between aqueous phase literature-reported HO(•) reaction rate constants and theoretically calculated free energies of activation for H-atom abstraction from a C-H bond and HO(•) addition to alkenes. The theoretical method uses ab initio quantum mechanical calculations, Gaussian 1-3, for gas phase reactions and a solvation method, COSMO-RS theory, to estimate the impact of water. Theoretically calculated free energies of activation are found to be within approximately ±3 kcal/mol of the experimental values. Considering the errors that arise from quantum mechanical calculations and experiments, this is within the acceptable error range. The established LFERs are used to predict the HO(•) reaction rate constants to within a factor of 5 of the experimental values. This approach may be applied to other reaction mechanisms to establish a library of rate constant predictions for kinetic modeling of AOPs.
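
    A linear free energy relationship of the kind described here is, in essence, a straight-line fit between computed free energies of activation and the logarithm of measured rate constants. The sketch below shows that generic fitting step with made-up placeholder data; it does not reproduce the paper's quantum-chemical calculations or its actual correlations.

        # Hedged sketch: fitting an LFER between computed free energies of
        # activation and log10 of experimental HO* rate constants.
        # The (dG_act, log_k) pairs are illustrative placeholders.
        import numpy as np

        dG_act = np.array([4.1, 5.3, 6.0, 7.2, 8.5])   # kcal/mol (illustrative)
        log_k  = np.array([9.8, 9.3, 8.9, 8.2, 7.5])   # log10 k (M^-1 s^-1, illustrative)

        slope, intercept = np.polyfit(dG_act, log_k, 1)  # least-squares straight line

        def predict_log_k(dg):
            """Predict log10 k from a calculated free energy of activation."""
            return slope * dg + intercept

        print(f"log k = {slope:.3f} * dG_act + {intercept:.3f}")
        print(predict_log_k(6.5))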

  11. The Proell Effect: A Macroscopic Maxwell's Demon

    NASA Astrophysics Data System (ADS)

    Rauen, Kenneth M.

    2011-12-01

    Maxwell's Demon is a legitimate challenge to the Second Law of Thermodynamics when the "demon" is executed via the Proell effect. Thermal energy transfer according to the Kinetic Theory of Heat and Statistical Mechanics that takes place over distances greater than the mean free path of a gas circumvents the microscopic randomness that leads to macroscopic irreversibility. No information is required to sort the particles as no sorting occurs; the entire volume of gas undergoes the same transition. The Proell effect achieves quasi-spontaneous thermal separation without sorting by the perturbation of a heterogeneous constant volume system with displacement and regeneration. The classical analysis of the constant volume process, such as found in the Stirling cycle, is incomplete and therefore incorrect. There are extra energy flows that classical thermodynamics does not recognize. When a working fluid is displaced across a regenerator with a temperature gradient in a constant volume system, complementary compression and expansion work takes place that transfers energy between the regenerator and the bulk gas volumes of the hot and cold sides of the constant volume system. Heat capacity at constant pressure applies instead of heat capacity at constant volume. The resultant increase in calculated, recyclable energy allows the Carnot Limit to be exceeded in certain cycles. Super-Carnot heat engines and heat pumps have been designed and a US patent has been awarded.

  12. A feasibility study and mission analysis for the Hybrid Plume Plasma Rocket

    NASA Technical Reports Server (NTRS)

    Sullivan, Daniel J.; Micci, Michael M.

    1990-01-01

    The Hybrid Plume Plasma Rocket (HPPR) is a high power electric propulsion concept which is being developed at the MIT Plasma Fusion Center. This paper presents a theoretical overview of the concept as well as the results and conclusions of an independent study which has been conducted to identify and categorize those technologies which require significant development before the HPPR can be considered a viable electric propulsion device. It has been determined that the technologies which require the most development are high power radio-frequency and microwave generation for space applications and the associated power processing units, low mass superconducting magnets, a reliable, long duration, multi-megawatt space nuclear power source, and long term storage of liquid hydrogen propellant. In addition to this, a mission analysis of a one-way transfer from low earth orbit (LEO) to Mars indicates that a constant acceleration thrust profile, which can be obtained using the HPPR, results in faster trip times and greater payload capacities than those afforded by more conventional constant thrust profiles.

  13. Linear-phase delay filters for ultra-low-power signal processing in neural recording implants.

    PubMed

    Gosselin, Benoit; Sawan, Mohamad; Kerherve, Eric

    2010-06-01

    We present the design and implementation of linear-phase delay filters for ultra-low-power signal processing in neural recording implants. We use these filters as low-distortion delay elements, along with an automatic biopotential detector, to perform integral waveform extraction and efficient power management. The presented delay elements are realized with continuous-time OTA-C filters featuring 9th-order equiripple transfer functions with constant group delay. Such an analog delay enables processing neural waveforms with reduced overhead compared to a digital delay, since it does not require sampling and digitization. It uses an allpass transfer function to achieve a wider constant-delay bandwidth than an all-pole function does. Two filter realizations are compared for implementing the delay element: the cascaded structure and the inverse follow-the-leader feedback filter. Their respective strengths and drawbacks are assessed by modeling parasitics and non-idealities of the OTAs, and by transistor-level simulations. A budget of 200 nA is used in both filters. Experimental measurements with the chosen filter topology are presented and discussed.
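
    A key property of such delay elements is a group delay that stays constant across the band of interest. The sketch below shows how that flatness can be checked numerically; the digital Bessel low-pass filter and the sampling rate are stand-ins chosen for illustration, not the 9th-order OTA-C equiripple allpass of the paper.

        # Hedged sketch: quantify group-delay flatness of a low-pass filter
        # across its passband (a digital Bessel filter as a stand-in).
        import numpy as np
        from scipy import signal

        fs = 24000.0                                          # assumed sampling rate, Hz
        b, a = signal.bessel(9, 2000.0, btype="low", fs=fs)   # 9th-order, 2 kHz cutoff

        w, gd = signal.group_delay((b, a), w=2048, fs=fs)     # gd in samples, w in Hz

        passband = w < 2000.0
        ripple = gd[passband].max() - gd[passband].min()
        print(f"group delay ripple in passband: {ripple:.3f} samples")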

  14. Scramjet fuel injector design parameters and considerations: Development of a two-dimensional tangential fuel injector with constant pressure at the flame

    NASA Technical Reports Server (NTRS)

    Agnone, A. M.

    1972-01-01

    The factors affecting a tangential fuel injector design for scramjet operation are reviewed and their effect on the efficiency of the supersonic combustion process is evaluated using both experimental data and theoretical predictions. A description of the physical problem of supersonic combustion and method of analysis is followed by a presentation and evaluation of some standard and exotic types of fuel injectors. Engineering fuel injector design criteria and hydrogen ignition schemes are presented along with a cursory review of available experimental data. A two-dimensional tangential fuel injector design is developed using analyses as a guide in evaluating the effects on the combustion process of various initial and boundary conditions including splitter plate thickness, injector wall temperature, pressure gradients, etc. The fuel injector wall geometry is shaped so as to maintain approximately constant pressure at the flame as required by a cycle analysis. A viscous characteristics program which accounts for lateral as well as axial pressure variations due to the mixing and combustion process is used in determining the wall geometry.

  15. Numerical Characterization of Piezoceramics Using Resonance Curves

    PubMed Central

    Pérez, Nicolás; Buiochi, Flávio; Brizzotti Andrade, Marco Aurélio; Adamowski, Julio Cezar

    2016-01-01

    Piezoelectric materials characterization is a challenging problem involving physical concepts, electrical and mechanical measurements and numerical optimization techniques. Piezoelectric ceramics such as Lead Zirconate Titanate (PZT) belong to the 6 mm symmetry class, which requires five elastic, three piezoelectric and two dielectric constants to fully represent the material properties. If losses are considered, the material properties can be represented by complex numbers. In this case, 20 independent material constants are required to obtain the full model. Several numerical methods have been used to adjust the theoretical models to the experimental results. The continuous improvement of the computer processing ability has allowed the use of a specific numerical method, the Finite Element Method (FEM), to iteratively solve the problem of finding the piezoelectric constants. This review presents the recent advances in the numerical characterization of 6 mm piezoelectric materials from experimental electrical impedance curves. The basic strategy consists in measuring the electrical impedance curve of a piezoelectric disk, and then combining the Finite Element Method with an iterative algorithm to find a set of material properties that minimizes the difference between the numerical impedance curve and the experimental one. Different methods to validate the results are also discussed. Examples of characterization of some common piezoelectric ceramics are presented to show the practical application of the described methods. PMID:28787875

  16. Numerical Characterization of Piezoceramics Using Resonance Curves.

    PubMed

    Pérez, Nicolás; Buiochi, Flávio; Brizzotti Andrade, Marco Aurélio; Adamowski, Julio Cezar

    2016-01-27

    Piezoelectric materials characterization is a challenging problem involving physical concepts, electrical and mechanical measurements and numerical optimization techniques. Piezoelectric ceramics such as Lead Zirconate Titanate (PZT) belong to the 6 mm symmetry class, which requires five elastic, three piezoelectric and two dielectric constants to fully represent the material properties. If losses are considered, the material properties can be represented by complex numbers. In this case, 20 independent material constants are required to obtain the full model. Several numerical methods have been used to adjust the theoretical models to the experimental results. The continuous improvement of the computer processing ability has allowed the use of a specific numerical method, the Finite Element Method (FEM), to iteratively solve the problem of finding the piezoelectric constants. This review presents the recent advances in the numerical characterization of 6 mm piezoelectric materials from experimental electrical impedance curves. The basic strategy consists in measuring the electrical impedance curve of a piezoelectric disk, and then combining the Finite Element Method with an iterative algorithm to find a set of material properties that minimizes the difference between the numerical impedance curve and the experimental one. Different methods to validate the results are also discussed. Examples of characterization of some common piezoelectric ceramics are presented to show the practical application of the described methods.
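
    The iterative strategy described in both records, adjusting model parameters until a simulated impedance curve matches the measured one, can be sketched with a lumped Butterworth-Van Dyke equivalent circuit standing in for the finite element model. The circuit, its parameter values, and the synthetic "measurement" below are illustrative assumptions only.

        # Hedged sketch: least-squares matching of a model impedance curve to a
        # measured one, with a lumped resonator replacing the FEM model.
        import numpy as np
        from scipy.optimize import least_squares

        def bvd_impedance(f, R, L, C, C0):
            """Impedance of a Butterworth-Van Dyke equivalent circuit (illustrative model)."""
            w = 2 * np.pi * f
            z_motional = R + 1j * w * L + 1 / (1j * w * C)
            z_parallel = 1 / (1j * w * C0)
            return 1.0 / (1.0 / z_motional + 1.0 / z_parallel)

        f = np.linspace(1.9e6, 2.1e6, 400)                   # frequency sweep, Hz
        true = (15.0, 8e-3, 0.8e-12, 50e-12)                 # synthetic "measurement"
        rng = np.random.default_rng(1)
        z_meas = np.abs(bvd_impedance(f, *true)) * (1 + 0.01 * rng.standard_normal(f.size))

        def residual(p):
            return np.log(np.abs(bvd_impedance(f, *p))) - np.log(z_meas)

        fit = least_squares(residual, x0=(10.0, 7e-3, 0.9e-12, 40e-12),
                            bounds=(0, np.inf),
                            x_scale=(10.0, 1e-2, 1e-12, 1e-11))
        print(fit.x)   # recovered R, L, C, C0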

  17. Correlations between the disintegration of melt and the measured impulses in steam explosions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Froehlich, G.; Linca, A.; Schindler, M.

    To find correlations in steam explosions (melt-water interactions) between the measured impulses and the disintegration of the melt, experiments were performed in three configurations, i.e., stratified, entrapment, and jet experiments. Linear correlations were detected between the impulse and the total surface of the fragments. Theoretical considerations point out that a linear correlation assumes superheating of a water layer of constant thickness around the fragments, during the fragmentation process, to a constant temperature (here the homogeneous nucleation temperature of water was assumed) and a constant expansion velocity of the steam during the main expansion time. The correlation constant does not depend on melt temperature and trigger pressure, but it does depend on the configuration of the experiment or of an accident scenario. Further research is required concerning the correlation constant. For analysing steam explosion accidents, the explosivity is introduced. The explosivity is a mass-specific impulse and is linearly correlated with the degree of fragmentation. Knowing the degree of fragmentation and the proper correlation constant, the explosivity can be calculated; combined with the total mass of fragments, this yields the impulse, which can be used to estimate the maximum force.

  18. Piezoelectric Behaviour of Sputtered Aluminium Nitride Thin Film for High Frequency Ultrasonic Sensors

    NASA Astrophysics Data System (ADS)

    Herzog, T.; Walter, S.; Bartzsch, H.; Gittner, M.; Gloess, D.; Heuer, H.

    2011-06-01

    Many new materials and processes require non-destructive evaluation at higher resolutions by phased-array ultrasonic techniques in a frequency range up to 250 MHz. This paper presents aluminium nitride, a promising piezoelectric sensor material in this frequency range, with potential for future high-frequency phased-array applications. This work represents the fundamental development of piezoelectric aluminium nitride films with a thickness of up to 10 μm. We have investigated and optimized the deposition process of the aluminium nitride thin film layers with regard to their piezoelectric behavior. For this purpose, a specific test setup and a measuring station were created to determine the piezoelectric charge constant (d33) and the electro-acoustic behavior of the sensor. Single-element transducers were deposited on silicon substrates with aluminium top and bottom electrodes, using different parameters for the magnetron sputter process, such as pressure and bias voltage. Afterwards, acoustical measurements up to 500 MHz in pulse-echo mode were carried out and the electrical and electromechanical properties were qualified. For two different sputtering parameter sets, excellent piezoelectric charge constants of up to about 8.0 pC/N were obtained.

  19. LR: Compact connectivity representation for triangle meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gurung, T; Luffel, M; Lindstrom, P

    2011-01-28

    We propose LR (Laced Ring) - a simple data structure for representing the connectivity of manifold triangle meshes. LR provides the option to store on average either 1.08 references per triangle or 26.2 bits per triangle. Its construction, from an input mesh that supports constant-time adjacency queries, has linear space and time complexity, and involves ordering most vertices along a nearly-Hamiltonian cycle. LR is best suited for applications that process meshes with fixed connectivity, as any changes to the connectivity require the data structure to be rebuilt. We provide an implementation of the set of standard random-access, constant-time operators for traversing a mesh, and show that LR often saves both space and traversal time over competing representations.
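
    LR assumes an input mesh that supports constant-time adjacency queries. A minimal, hash-based way to provide such queries is sketched below; this is an ordinary edge-to-triangle map given for illustration, not the LR structure itself.

        # Hedged sketch: constant-time (expected) adjacency queries on a
        # triangle mesh via a directed-edge-to-triangle hash map.
        def build_adjacency(triangles):
            """Map each directed edge (a, b) to the index of the triangle containing it."""
            edge_to_tri = {}
            for t, (a, b, c) in enumerate(triangles):
                for u, v in ((a, b), (b, c), (c, a)):
                    edge_to_tri[(u, v)] = t
            return edge_to_tri

        def opposite_triangle(edge_to_tri, a, b):
            """Triangle on the other side of edge (a, b), or None on a boundary."""
            return edge_to_tri.get((b, a))     # O(1) expected-time lookup

        tris = [(0, 1, 2), (2, 1, 3)]          # two triangles sharing edge (1, 2)
        adj = build_adjacency(tris)
        print(opposite_triangle(adj, 1, 2))    # -> 1: triangle 1 lies across edge (1, 2)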

  20. Quark mass variations of nuclear forces, BBN, and all that

    NASA Astrophysics Data System (ADS)

    Meissner, Ulf-G.

    2014-03-01

    In this talk, I discuss the modifications of the nuclear forces due to variations of the light quark masses and of the fine structure constant. This is based on the chiral nuclear effective field theory, that successfully describes a large body of data. The generation of the light elements in the Big Bang Nucleosynthesis provides important constraints on these modifications. In addition, I discuss the role of the anthropic principle in the triple-alpha process that underlies carbon and oxygen generation in hot stars. It appears that a fine-tuning of the quark masses and the fine structure constant within 2 to 3 per cent is required to make life on Earth viable. Supported in part by DFG, HGF and the BMBF.

  1. Noncontact conductivity and dielectric measurement for high throughput roll-to-roll nanomanufacturing

    NASA Astrophysics Data System (ADS)

    Orloff, Nathan D.; Long, Christian J.; Obrzut, Jan; Maillaud, Laurent; Mirri, Francesca; Kole, Thomas P.; McMichael, Robert D.; Pasquali, Matteo; Stranick, Stephan J.; Alexander Liddle, J.

    2015-11-01

    Advances in roll-to-roll processing of graphene and carbon nanotubes have at last led to the continuous production of high-quality coatings and filaments, ushering in a wave of applications for flexible and wearable electronics, woven fabrics, and wires. These applications often require specific electrical properties, and hence precise control over material micro- and nanostructure. While such control can be achieved, in principle, by closed-loop processing methods, there are relatively few noncontact and nondestructive options for quantifying the electrical properties of materials on a moving web at the speed required in modern nanomanufacturing. Here, we demonstrate a noncontact microwave method for measuring the dielectric constant and conductivity (or geometry for samples of known dielectric properties) of materials in a millisecond. Such measurement times are compatible with current and future industrial needs, enabling real-time materials characterization and in-line control of processing variables without disrupting production.

  2. Noncontact conductivity and dielectric measurement for high throughput roll-to-roll nanomanufacturing

    PubMed Central

    Orloff, Nathan D.; Long, Christian J.; Obrzut, Jan; Maillaud, Laurent; Mirri, Francesca; Kole, Thomas P.; McMichael, Robert D.; Pasquali, Matteo; Stranick, Stephan J.; Alexander Liddle, J.

    2015-01-01

    Advances in roll-to-roll processing of graphene and carbon nanotubes have at last led to the continuous production of high-quality coatings and filaments, ushering in a wave of applications for flexible and wearable electronics, woven fabrics, and wires. These applications often require specific electrical properties, and hence precise control over material micro- and nanostructure. While such control can be achieved, in principle, by closed-loop processing methods, there are relatively few noncontact and nondestructive options for quantifying the electrical properties of materials on a moving web at the speed required in modern nanomanufacturing. Here, we demonstrate a noncontact microwave method for measuring the dielectric constant and conductivity (or geometry for samples of known dielectric properties) of materials in a millisecond. Such measurement times are compatible with current and future industrial needs, enabling real-time materials characterization and in-line control of processing variables without disrupting production. PMID:26592441

  3. Self-stimulation in the rat: quantitative characteristics of the reward pathway.

    PubMed

    Gallistel, C R

    1978-12-01

    Quantitative characteristics of the neural pathway that carries the reinforcing signal in electrical self-stimulation of the brain were established by finding which combinations of stimulation parameters give the same performance in a runway. The reward for each run was a train of evenly spaced monophasic cathodal pulses from a monopolar electrode. With train duration and pulse frequency held constant, the required current was a hyperbolic function of pulse duration, with chronaxie c approximately 1.5 msec. With pulse duration held constant, the required strength of the train (the charge delivered per second) was a hyperbolic function of train duration, with chronaxie C approximately 500 msec. To a first approximation, the values of c and C were independent of the choice either of train duration and pulse frequency or of pulse duration, respectively. Hence, the current intensity required by any choice of train duration, pulse frequency, and pulse duration depended on only two basic parameters, c and C, and one quantity, Qi, the required impulse charge. These may reflect, respectively, current integration by directly excited neurons; temporal integration of neural activity by synaptic processes in a neural network; and the peak of the impulse response of the network, assuming that the network has linear dynamics and that the reward depends on the peak of the output of the network.
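
    The hyperbolic trade-off reported between required current and pulse duration can be written in the Lapicque form I(d) = I_rheobase * (1 + c/d), in which the chronaxie c is the duration at which the required current is twice the rheobase. The sketch below evaluates that relation; the rheobase value is an assumed placeholder, while the chronaxie is taken from the abstract.

        # Hedged sketch: hyperbolic (Lapicque-type) strength-duration relation.
        c_pulse = 1.5          # chronaxie for single pulses, ms (from the abstract)
        i_rheobase = 100.0     # rheobase current, microamps (assumed for illustration)

        def required_current(duration_ms):
            """Current needed at a given pulse duration for equal reward effectiveness."""
            return i_rheobase * (1.0 + c_pulse / duration_ms)

        for d in (0.1, 0.5, 1.5, 5.0):
            print(f"{d:4.1f} ms pulse -> {required_current(d):7.1f} uA")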

  4. Recent developments in the surgical management of perianal fistula for Crohn’s disease

    PubMed Central

    Geltzeiler, Cristina B.; Wieghard, Nicole; Tsikitis, Vassiliki L.

    2014-01-01

    Perianal manifestations of Crohn’s disease (CD) are common and, of them, fistulas are the most common. Perianal fistulas can be extremely debilitating for patients and are often very challenging for clinicians to treat. CD perianal fistulas usually require multidisciplinary and multimodality treatment, including both medical and surgical approaches. The majority of patients require multiple surgical interventions. CD patients with perianal fistulas have a high rate of primary non-healing, surgical morbidity, and high recurrence rates. This has led to constant efforts to improve surgical management of this disease process. PMID:25331917

  5. Chapter 3 innovations in the en route care of combat casualties.

    PubMed

    Hatzfeld, Jennifer J; Dukes, Susan; Bridges, Elizabeth

    2014-01-01

    The en route care environment is dynamic and requires constant innovation to ensure appropriate nursing care for combat casualties. Building on experiences in Iraq and Afghanistan, there have been tremendous innovations in the process of transporting patients, including the movement of patients with spinal injuries. Advances have also been made in pain management and noninvasive monitoring, particularly for trauma and surgical patients requiring close monitoring of their hemodynamic and perfusion status. In addition to institutionalizing these innovations, future efforts are needed to eliminate secondary insults to patients with traumatic brain injuries and technologies to provide closed-loop sedation and ventilation.

  6. Double-layer optical fiber coating analysis in MHD flow of an elastico-viscous fluid using wet-on-wet coating process

    NASA Astrophysics Data System (ADS)

    Khan, Zeeshan; Islam, Saeed; Shah, Rehan Ali; Khan, Muhammad Altaf; Bonyah, Ebenezer; Jan, Bilal; Khan, Aurangzeb

    Modern optical fibers require a double-layer coating on the glass fiber in order to provide protection from signal attenuation and mechanical damage. The most important plastic resins used in wires and optical fibers are polyvinyl chloride (PVC), low- and high-density polyethylene (LDPE/HDPE), nylon and polysulfone. One of the most important factors affecting the final product after processing is the design of the coating die. In the present study, double-layer optical fiber coating is performed using a polymer melt satisfying the Oldroyd 8-constant fluid model in a pressure-type die with magnetohydrodynamic (MHD) effects. A wet-on-wet coating process is applied for double-layer optical fiber coating. The coating process in the coating die is modeled as a simple two-layer Couette flow of two immiscible fluids in an annulus with an assigned pressure gradient. Based on the assumptions of fully developed laminar MHD flow, the Oldroyd 8-constant model of non-Newtonian fluid is formulated for the two immiscible resin layers. The governing nonlinear equations are solved analytically by the new technique of the Optimal Homotopy Asymptotic Method (OHAM). The convergence of the series solution is established. The results are also verified by the Adomian Decomposition Method (ADM). The effects of important parameters such as the magnetic parameter Mi, the dilatant constant α, the pseudoplastic constant β, the radii ratio δ, the pressure gradient Ω, the speed of the fiber optics V, and the viscosity ratio κ on the velocity profiles, thickness of the coated fiber optics, volume flow rate, and shear stress on the fiber optics are investigated. Finally, the results of the present work are compared with experimental results already available in the literature by letting the non-Newtonian parameters tend to zero.

  7. Thermodynamic and energy efficiency analysis of power generation from natural salinity gradients by pressure retarded osmosis.

    PubMed

    Yip, Ngai Yin; Elimelech, Menachem

    2012-05-01

    The Gibbs free energy of mixing dissipated when fresh river water flows into the sea can be harnessed for sustainable power generation. Pressure retarded osmosis (PRO) is one of the methods proposed to generate power from natural salinity gradients. In this study, we carry out a thermodynamic and energy efficiency analysis of PRO work extraction. First, we present a reversible thermodynamic model for PRO and verify that the theoretical maximum extractable work in a reversible PRO process is identical to the Gibbs free energy of mixing. Work extraction in an irreversible constant-pressure PRO process is then examined. We derive an expression for the maximum extractable work in a constant-pressure PRO process and show that it is less than the ideal work (i.e., Gibbs free energy of mixing) due to inefficiencies intrinsic to the process. These inherent inefficiencies are attributed to (i) frictional losses required to overcome hydraulic resistance and drive water permeation and (ii) unutilized energy due to the discontinuation of water permeation when the osmotic pressure difference becomes equal to the applied hydraulic pressure. The highest extractable work in constant-pressure PRO with a seawater draw solution and river water feed solution is 0.75 kWh/m³ while the free energy of mixing is 0.81 kWh/m³, a thermodynamic extraction efficiency of 91.1%. Our analysis further reveals that the operational objective to achieve high power density in a practical PRO process is inconsistent with the goal of maximum energy extraction. This study demonstrates thermodynamic and energetic approaches for PRO and offers insights on actual energy accessible for utilization in PRO power generation through salinity gradients. © 2012 American Chemical Society

  8. Investigation of ion-beam machining methods for replicated x-ray optics

    NASA Technical Reports Server (NTRS)

    Drueding, Thomas W.

    1996-01-01

    The final figuring step in the fabrication of an optical component involves imparting a specified contour onto the surface. This can be an expensive and time-consuming step. The recent development of ion beam figuring provides a method for performing the figuring process with advantages over standard mechanical methods. Ion figuring has proven effective in figuring large optical components. The process of ion beam figuring removes material by transferring kinetic energy from impinging neutral particles. The process utilizes a Kaufman-type ion source, where a plasma is generated in a discharge chamber by controlled electric potentials. Charged grids extract and accelerate ions from the chamber. The accelerated ions form a directional beam. A neutralizer outside the accelerator grids supplies electrons to the positive ion beam. It is necessary to neutralize the beam to prevent charging workpieces and to avoid bending the beam with extraneous electromagnetic fields. When the directed beam strikes the workpiece, material sputters in a predictable manner. The amount and distribution of material sputtered is a function of the energy of the beam, the material of the component, the distance from the workpiece, and the angle of incidence of the beam. The figuring method described here assumes constant beam removal, so that the process can be represented by a convolution operation. A fixed beam energy maintains a constant sputtering rate. This temporally and spatially stable beam is held perpendicular to the workpiece at a fixed distance. For non-constant removal, corrections would be required to model the process as a convolution operation. Specific figures (contours) are achieved by rastering the beam over the workpiece at varying velocities. A unique deconvolution is performed, using a series-derivative solution developed for the system, to determine these velocities.
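
    The convolution view of figuring described above can be sketched as the convolution of a fixed beam removal function with a dwell-time map derived from the raster velocities. The Gaussian beam shape, removal rate, and dwell map below are illustrative assumptions, not parameters of the actual facility.

        # Hedged sketch: predicted material removal as the convolution of a fixed
        # Gaussian beam removal function with a dwell-time map.
        import numpy as np
        from scipy.signal import fftconvolve

        # Gaussian beam removal function (depth removed per unit dwell time)
        x = np.linspace(-5, 5, 21)                     # mm grid across the beam
        X, Y = np.meshgrid(x, x)
        beam = 0.02 * np.exp(-(X**2 + Y**2) / (2 * 1.5**2))   # um per second of dwell

        # Dwell-time map over the workpiece (seconds per grid cell); slower
        # raster velocity means longer dwell and therefore more removal.
        dwell = np.ones((101, 101))
        dwell[40:60, 40:60] = 3.0                      # dwell longer over a local bump

        removal = fftconvolve(dwell, beam, mode="same")   # predicted removal map (um)
        print(removal.max(), removal.min())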

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drueding, T.W.

    The final figuring step in the fabrication of an optical component involves imparting a specified contour onto the surface. This can be an expensive and time-consuming step. The recent development of ion beam figuring provides a method for performing the figuring process with advantages over standard mechanical methods. Ion figuring has proven effective in figuring large optical components. The process of ion beam figuring removes material by transferring kinetic energy from impinging neutral particles. The process utilizes a Kaufman-type ion source, where a plasma is generated in a discharge chamber by controlled electric potentials. Charged grids extract and accelerate ions from the chamber. The accelerated ions form a directional beam. A neutralizer outside the accelerator grids supplies electrons to the positive ion beam. It is necessary to neutralize the beam to prevent charging workpieces and to avoid bending the beam with extraneous electromagnetic fields. When the directed beam strikes the workpiece, material sputters in a predictable manner. The amount and distribution of material sputtered is a function of the energy of the beam, the material of the component, the distance from the workpiece, and the angle of incidence of the beam. The figuring method described here assumes constant beam removal, so that the process can be represented by a convolution operation. A fixed beam energy maintains a constant sputtering rate. This temporally and spatially stable beam is held perpendicular to the workpiece at a fixed distance. For non-constant removal, corrections would be required to model the process as a convolution operation. Specific figures (contours) are achieved by rastering the beam over the workpiece at varying velocities. A unique deconvolution is performed, using a series-derivative solution developed for the system, to determine these velocities.

  10. Application of dielectric constant measurement in microwave sludge disintegration and wastewater purification processes.

    PubMed

    Kovács, Petra Veszelovszki; Lemmer, Balázs; Keszthelyi-Szabó, Gábor; Hodúr, Cecilia; Beszédes, Sándor

    2018-05-01

    It has been repeatedly verified that microwave radiation can be advantageous as a pre-treatment for enhanced disintegration of sludge. Very few data are available on the dielectric parameters of wastewater of different origins; therefore, the objective of our work was to measure the dielectric constant of municipal and meat industry wastewater during a continuous-flow microwave process. Determining the dielectric constant and its change during wastewater and sludge processing makes it possible to decide on the applicability of dielectric measurements for detecting the organic matter removal efficiency of a wastewater purification process or the disintegration degree of sludge. Regression models were developed from measurements of the dielectric constant as a function of temperature, total solids (TS) content and microwave-specific process parameters. Our results verified that, in the case of municipal wastewater sludge, the TS content has a significant effect on the dielectric constant and disintegration degree (DD), as does the temperature. The dielectric constant has a decreasing tendency with increasing temperature for wastewater sludge of low TS content, but the opposite effect was found for samples with high TS and organic matter contents. The DD of meat processing wastewater sludge was influenced significantly by the volumetric flow rate and power level, as process parameters of continuous-flow microwave pre-treatments. It can be concluded that the disintegration process of food industry sludge can be detected by dielectric constant measurements. For technical purposes, the applicability of dielectric measurements was also tested in the purification process of municipal wastewater. Determination of dielectric behaviour was a sensitive method to detect the purification degree of municipal wastewater.
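
    The regression modelling mentioned above amounts to fitting the measured dielectric constant against process variables such as temperature and TS content. A minimal multivariate least-squares sketch is shown below with placeholder data points; the paper's actual model terms and coefficients are not reproduced.

        # Hedged sketch: multivariate linear regression of dielectric constant
        # on temperature and total solids content (placeholder data).
        import numpy as np

        temp = np.array([30, 40, 50, 60, 70, 80], dtype=float)    # degC
        ts   = np.array([2.0, 2.0, 4.0, 4.0, 6.0, 6.0])           # % total solids
        eps  = np.array([72.1, 69.8, 70.5, 68.9, 71.2, 70.1])     # measured dielectric constant

        A = np.column_stack([np.ones_like(temp), temp, ts])       # design matrix
        coef, *_ = np.linalg.lstsq(A, eps, rcond=None)
        print("eps ~ %.2f + %.3f*T + %.3f*TS" % tuple(coef))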

  11. Utilisation of chip thickness models in grinding

    NASA Astrophysics Data System (ADS)

    Singleton, Roger

    Grinding is now a well established process utilised for both stock removal and finish applications. Although significant research is performed in this field, grinding still experiences problems with burn and high forces, which can lead to poor-quality components and damage to equipment. This generally occurs when the process deviates from its safe working conditions. In milling, chip thickness parameters are utilised to predict and maintain process outputs, leading to improved control of the process. This thesis looks to further the knowledge of the relationship between chip thickness and the grinding process outputs to provide an increased predictive and maintenance modelling capability. Machining trials were undertaken using different chip thickness parameters to understand how these affect the process outputs. The chip thickness parameters were maintained at different grinding wheel diameters for a constant-productivity process to determine the impact of chip thickness at a constant material removal rate. Additional testing using a modified pin-on-disc test rig was performed to provide further information on process variables. The different chip thickness parameters provide control of different process outputs in the grinding process. These relationships can be described using contact layer theory and heat flux partitioning. The contact layer is defined as the immediate layer beneath the contact arc at the wheel-workpiece interface. The size of the layer governs the force experienced during the process. The rate of contact layer removal directly impacts the net power required from the system. It was also found that the specific grinding energy of a process is more dependent on the productivity of the grinding process than on the value of chip thickness. Changes in chip thickness at constant material removal rate result in microscale changes in the rate of contact layer removal when compared with changes in process productivity. This is a significant finding in relation to specific grinding energy, where conventional theory states that it is primarily dependent on chip thickness.

  12. Adapting Western research methods to indigenous ways of knowing.

    PubMed

    Simonds, Vanessa W; Christopher, Suzanne

    2013-12-01

    Indigenous communities have long experienced exploitation by researchers and increasingly require participatory and decolonizing research processes. We present a case study of an intervention research project to exemplify a clash between Western research methodologies and Indigenous methodologies and how we attempted reconciliation. We then provide implications for future research based on lessons learned from Native American community partners who voiced concern over methods of Western deductive qualitative analysis. Decolonizing research requires constant reflective attention and action, and there is an absence of published guidance for this process. Continued exploration is needed for implementing Indigenous methods alone or in conjunction with appropriate Western methods when conducting research in Indigenous communities. Currently, examples of Indigenous methods and theories are not widely available in academic texts or published articles, and are often not perceived as valid.

  13. Direct approach for bioprocess optimization in a continuous flat-bed photobioreactor system.

    PubMed

    Kwon, Jong-Hee; Rögner, Matthias; Rexroth, Sascha

    2012-11-30

    Application of photosynthetic micro-organisms, such as cyanobacteria and green algae, for carbon-neutral energy production raises the need for cost-efficient photobiological processes. Optimization of these processes requires permanent control of many independent and mutually dependent parameters, for which a continuous cultivation approach has significant advantages. As central factors like the cell density can be kept constant by turbidostatic control, light intensity and iron content, with their strong impact on productivity, can be optimized. Both are key parameters due to their strong dependence on photosynthetic activity. Here we introduce an engineered low-cost 5 L flat-plate photobioreactor in combination with a simple and efficient optimization procedure for continuous photo-cultivation of microalgae. Based on direct determination of the growth rate at constant cell densities and the continuous measurement of O₂ evolution, stress conditions and their effect on the photosynthetic productivity can be directly observed. Copyright © 2012 Elsevier B.V. All rights reserved.
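
    The turbidostatic control referred to above keeps the cell density at a set-point by dilution, so that at steady state the specific growth rate can be read off from the dilution rate. The sketch below shows that control step in its simplest form; variable names, thresholds, and values are assumptions for illustration, not details of the cited reactor.

        # Hedged sketch: one interval of a turbidostatic control loop and the
        # growth rate implied by the dilution rate.
        def turbidostat_step(od, od_setpoint, volume, dt, pump_rate):
            """Return medium volume added in one interval and the implied growth rate."""
            if od <= od_setpoint:
                return 0.0, None                   # below set-point: no dilution
            added = pump_rate * dt                 # fresh medium added in this interval (L)
            dilution_rate = pump_rate / volume     # D = F/V, 1/h
            return added, dilution_rate            # at steady state, mu is approximately D

        added, mu = turbidostat_step(od=0.52, od_setpoint=0.50,
                                     volume=5.0, dt=0.1, pump_rate=0.8)
        print(added, mu)   # litres added in 0.1 h, specific growth rate (1/h)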

  14. Study of influence of 2.4 GHz electromagnetic waves on electrophysical properties of coniferous trees wood

    NASA Astrophysics Data System (ADS)

    Abdurahimov, Nursulton; Lagunov, Alexey; Melehov, Vladimir

    2017-09-01

    Climate change has a significant impact on weather conditions in the Arctic. Wood is a traditional building material in the north of Russia, and communication line supports are made of it. Dry wood is a solid dielectric with low conductivity; at the same time, it is a porous material with high hygroscopicity, and the presence of moisture leads to rotting. To prevent rotting, a support must be impregnated with antiseptics, but wood dried by convection does not reach the porosity required for impregnation. Our studies of the electrophysical properties of coniferous species showed that microwave drying increases the porosity of the wood, so that wood dried in this way is easily impregnated with antiseptics. Thorough drying requires optimal conditions in the microwave chamber, where resonant phenomena arise during the drying process. These phenomena depend on the electrophysical properties of the material placed in the chamber, with the dielectric constant of the wood having the greatest influence. A resonator method was used to determine the dielectric constant, and the permittivity values of spruce and pine samples were obtained. The measured dielectric constant was then used to establish and maintain optimal matching of the generator with the resonator in a resonator-type microwave wood-drying chamber. This yielded samples with higher permeability in the radial and longitudinal directions, creating favorable conditions for impregnation with antiseptics and flame retardants. Timber dried by electromagnetic waves in the 2.4 GHz band has a deeper protective layer, and supports made of such wood will serve longer in communication lines.

  15. The Planck-Balance—using a fixed value of the Planck constant to calibrate E1/E2-weights

    NASA Astrophysics Data System (ADS)

    Rothleitner, C.; Schleichert, J.; Rogge, N.; Günther, L.; Vasilyan, S.; Hilbrunner, F.; Knopf, D.; Fröhlich, T.; Härtig, F.

    2018-07-01

    A balance is proposed which allows the calibration of weights in a continuous range from 1 mg to 1 kg using a fixed value of the Planck constant, h. This so-called Planck-Balance (PB) uses the physical approach of Kibble balances, which allows the Planck constant to be derived from a mass. With the PB, calibrated mass standards are no longer required during weighing, because all measurements are traceable via electrical quantities to the Planck constant, the meter, and the second. This enables a new type of balance after the expected redefinition of the SI units at the end of 2018. In contrast to many purely scientific developments, the PB is focused on robust, daily use. Therefore, two balances will be developed, PB2 and PB1, which will allow relative measurement uncertainties comparable to the accuracies of class E2 and E1 weights, respectively, as specified in OIML R 111-1. The balances will be developed in a cooperation of the Physikalisch-Technische Bundesanstalt (PTB) and the Technische Universität Ilmenau in a project funded by the German Federal Ministry of Education and Research.
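
    The traceability chain referred to above rests on the Kibble-balance relation: equating the weighing-mode force (m g = B L I) with the velocity-mode induction (U = B L v) eliminates the geometric factor B L and gives m = U I / (g v), with U and I measured against quantum electrical standards. The sketch below evaluates this relation with illustrative numbers, not PB specifications.

        # Hedged sketch: mass from electrical quantities via the Kibble relation
        # m = U * I / (g * v); all numbers are illustrative placeholders.
        def mass_from_electrical(U, I, v, g=9.81):
            """Mass (kg) from induced voltage U (V), current I (A) and coil velocity v (m/s)."""
            return U * I / (g * v)

        m = mass_from_electrical(U=0.5, I=1.962e-5, v=0.001)
        print(m)    # ~0.001 kg (1 g) for these illustrative values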

  16. Constant-pH molecular dynamics using stochastic titration

    NASA Astrophysics Data System (ADS)

    Baptista, António M.; Teixeira, Vitor H.; Soares, Cláudio M.

    2002-09-01

    A new method is proposed for performing constant-pH molecular dynamics (MD) simulations, that is, MD simulations where pH is one of the external thermodynamic parameters, like the temperature or the pressure. The protonation state of each titrable site in the solute is allowed to change during a molecular mechanics (MM) MD simulation, the new states being obtained from a combination of continuum electrostatics (CE) calculations and Monte Carlo (MC) simulation of protonation equilibrium. The coupling between the MM/MD and CE/MC algorithms is done in a way that ensures a proper Markov chain, sampling from the intended semigrand canonical distribution. This stochastic titration method is applied to succinic acid, aimed at illustrating the method and examining the choice of its adjustable parameters. The complete titration of succinic acid, using constant-pH MD simulations at different pH values, gives a clear picture of the coupling between the trans/gauche isomerization and the protonation process, making it possible to reconcile some apparently contradictory results of previous studies. The present constant-pH MD method is shown to require a moderate increase of computational cost when compared to the usual MD method.
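
    The stochastic titration scheme alternates a Monte Carlo update of protonation states with a segment of molecular dynamics run at fixed states. The sketch below shows only the skeleton of that loop, with a simplified pKa-based Metropolis rule standing in for the continuum-electrostatics/Monte Carlo stage; run_md_segment, the site list, and the energetics are placeholder assumptions.

        # Hedged sketch: skeleton of a constant-pH MD cycle (simplified MC
        # protonation step followed by an MD segment at fixed states).
        import math, random

        KT = 0.593                      # kcal/mol at ~300 K
        KT_LN10 = KT * math.log(10)

        def mc_protonation_step(sites, pH):
            """Flip each site's protonation state with a Metropolis criterion."""
            for site in sites:
                # free-energy cost of protonating (very simplified; the paper uses
                # continuum-electrostatics energies here)
                dG = KT_LN10 * (pH - site["pKa"])
                if site["protonated"]:
                    dG = -dG            # proposed move is deprotonation
                if dG <= 0 or random.random() < math.exp(-dG / KT):
                    site["protonated"] = not site["protonated"]

        def constant_pH_md(sites, pH, n_cycles, run_md_segment):
            for _ in range(n_cycles):
                mc_protonation_step(sites, pH)   # CE/MC stage (simplified)
                run_md_segment(sites)            # MM/MD stage with fixed states

        sites = [{"pKa": 4.2, "protonated": True}, {"pKa": 5.6, "protonated": True}]
        constant_pH_md(sites, pH=7.0, n_cycles=5, run_md_segment=lambda s: None)
        print(sites)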

  17. Determination of association constants at moderately fast chemical exchange: complexation of camphor enantiomers by alpha-cyclodextrin.

    PubMed

    Bernatowicz, Piotr; Nowakowski, Michał; Dodziuk, Helena; Ejchart, Andrzej

    2006-08-01

    Association constants in weak molecular complexes can be determined by analysis of chemical shift variations resulting from changes of the guest-to-host concentration ratio. In the regime of very fast exchange, i.e., when the exchange rate is several orders of magnitude larger than the difference in Larmor angular frequency of the observed resonance in the free and complexed molecule, the apparent position of the averaged resonance is a population-weighted mean of the resonances of the particular forms involved in the equilibrium. The assumption of very fast exchange is, however, often tacitly made in the literature even in cases where the process of interest is much slower than required. We show that such an unjustified simplification may, under certain circumstances, lead to significant underestimation of the association constant and, in consequence, to non-negligible errors in the Gibbs free energy determined from it. We present a general method, based on iterative numerical NMR line shape analysis, which allows one to compensate for chemical exchange effects and delivers both the correct association constants and the exchange rates. The latter are not delivered by the simple fast-exchange treatment. Practical application of our algorithm is illustrated by the case of camphor-alpha-cyclodextrin complexes.
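
    For contrast with the line-shape method, the simple fast-exchange treatment mentioned above can be sketched numerically: the observed shift is modeled as a population-weighted mean and a 1:1 binding isotherm is fitted to a titration curve. The data below are synthetic, and the snippet illustrates only that simpler approach, not the authors' algorithm:

```python
# Fast-exchange 1:1 titration fit: observed shift = population-weighted mean
# of free and bound shifts; the complex concentration follows from the usual
# quadratic binding isotherm. Synthetic data, illustrative only.

import numpy as np
from scipy.optimize import curve_fit

def delta_obs(H0, Ka, d_free, d_bound, G0=1.0e-3):
    """Observed guest shift (total guest G0) versus total host H0."""
    b = G0 + H0 + 1.0 / Ka
    HG = 0.5 * (b - np.sqrt(b * b - 4.0 * G0 * H0))     # complex concentration
    return d_free + (HG / G0) * (d_bound - d_free)

H0 = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0]) * 1e-3    # host concentrations, M
rng = np.random.default_rng(0)
shifts = delta_obs(H0, 600.0, 1.450, 1.330) + rng.normal(0.0, 2e-4, H0.size)  # synthetic ppm

Ka, d_free, d_bound = curve_fit(delta_obs, H0, shifts, p0=(300.0, 1.45, 1.35))[0]
print(f"Ka = {Ka:.0f} M^-1, delta_free = {d_free:.3f} ppm, delta_bound = {d_bound:.3f} ppm")
```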

  18. Evolution of flexural rigidity according to the cross-sectional dimension of a superelastic nickel titanium orthodontic wire.

    PubMed

    Garrec, Pascal; Tavernier, Bruno; Jordan, Laurence

    2005-08-01

    The choice of the most suitable orthodontic wire for each stage of treatment requires estimation of the forces generated. In theory, the selection of wire sequences should initially utilize a lower flexural rigidity; thus clinicians use smaller round cross-sectional dimension wires to generate lighter forces during the preliminary alignment stage. This assessment is true for conventional alloys, but not necessarily for superelastic nickel titanium (NiTi). In this case, the flexural rigidity dependence on cross-sectional dimension differs from the linear elasticity prediction because of the martensitic transformation process. It decreases with increasing deflection, and this phenomenon is accentuated in the unloading process. This behaviour calls for a different biomechanical approach to orthodontic treatment. The present study compared bending in 10 archwires made from orthodontic NiTi alloy of two cross-sectional dimensions. The results were based on microstructural and mechanical investigations. With conventional alloys, the flexural rigidity was constant for each wire and increased markedly with the cross-sectional dimension for the same strain. With NiTi alloys, the flexural rigidity is not constant, and the influence of size was less pronounced than linear elasticity would predict. This result can be explained by the non-constant elastic modulus during the martensite transformation process. Thus, in some cases, treatment can begin with full-size (rectangular) wires that nearly fill the bracket slot with a force application deemed to be physiologically desirable for tooth movement and compatible with patient comfort.
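
    As a rough quantitative aside (not taken from the paper): for a linear-elastic wire the flexural rigidity is E*I, with I = pi*d^4/64 for a round and I = b*h^3/12 for a rectangular cross-section, so rigidity grows with the fourth power (or cube) of the relevant dimension; for superelastic NiTi the effective modulus is not constant, so this scaling overstates the real difference. A minimal sketch with nominal archwire dimensions and a nominal stainless-steel modulus:

```python
# Scaling of flexural rigidity E*I with cross-section for a linear-elastic
# wire (the conventional-alloy case above). Dimensions are typical archwire
# sizes; E is a nominal stainless-steel value, both only illustrative.

import math

E = 180e9                                   # Pa, nominal elastic modulus

def round_I(d):
    return math.pi * d**4 / 64              # second moment of area, round wire

def rect_I(b, h):
    return b * h**3 / 12                    # rectangular wire, bending across h

inch = 25.4e-3
sections = {
    "0.014 in round": round_I(0.014 * inch),
    "0.018 in round": round_I(0.018 * inch),
    "0.019 x 0.025 in rectangular": rect_I(0.025 * inch, 0.019 * inch),
}
for name, I in sections.items():
    print(f"{name:30s} EI = {E * I:.2e} N*m^2")
```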

  19. Hydrological models as web services: Experiences from the Environmental Virtual Observatory project

    NASA Astrophysics Data System (ADS)

    Buytaert, W.; Vitolo, C.; Reaney, S. M.; Beven, K.

    2012-12-01

    Data availability in environmental sciences is expanding at a rapid pace. From the constant stream of high-resolution satellite images to the local efforts of citizen scientists, there is an increasing need to process the growing stream of heterogeneous data and turn it into useful information for decision-making. Environmental models, ranging from simple rainfall-runoff relations to complex climate models, can be very useful tools to process data, identify patterns, and help predict the potential impact of management scenarios. Recent technological innovations in networking, computing and standardization may bring a new generation of interactive models plugged into virtual environments closer to the end-user. They are the driver of major funding initiatives such as the UK's Virtual Observatory program, and the U.S. National Science Foundation's Earth Cube. In this study we explore how hydrological models, being an important subset of environmental models, have to be adapted in order to function within a broader environment of web services and user interactions. Historically, hydrological models have been developed for very different purposes. Typically they have a rigid model structure, requiring a very specific set of input data and parameters. As such, the process of implementing a model for a specific catchment requires careful collection and preparation of the input data, extensive calibration and subsequent validation. This procedure seems incompatible with a web environment, where data availability is highly variable, heterogeneous and constantly changing in time, and where the requirements of end-users may not necessarily align with the original intention of the model developer. We present prototypes of models that are web-enabled using the web standards of the Open Geospatial Consortium, and implemented in online decision-support systems. We identify issues related to (1) optimal use of available data; (2) the need for flexible and adaptive structures; (3) quantification and communication of uncertainties. Lastly, we present some road maps to address these issues and discuss them in the broader context of web-based data processing and "big data" science.

  20. Network Security Validation Using Game Theory

    NASA Astrophysics Data System (ADS)

    Papadopoulou, Vicky; Gregoriades, Andreas

    Non-functional requirements (NFR) such as network security have recently gained widespread attention in distributed information systems. Despite their importance, however, there is no systematic approach to validate these requirements given the complexity and uncertainty characterizing modern networks. Traditionally, network security requirements specification has been the result of a reactive process. This, however, limited the immunity property of the distributed systems that depended on these networks. Security requirements specification needs a proactive approach. Networks' infrastructure is constantly under attack by hackers and malicious software that aim to break into computers. To combat these threats, network designers need sophisticated security validation techniques that will guarantee the minimum level of security for their future networks. This paper presents a game-theoretic approach to security requirements validation. An introduction to game theory is presented along with an example that demonstrates the application of the approach.

  1. Convergence of the Quasi-static Antenna Design Algorithm

    DTIC Science & Technology

    2013-04-01

    conductor is the same as an equipotential surface. A line of constant charge on the z-axis, with an image, will generate the ACD antenna design...satisfies this boundary condition. The multipole moments have negative potentials, which can cause the equipotential surface to terminate on the disk or...feed wire. This requires an additional step in the solution process; the equipotential surface is sampled to verify that the charge is enclosed by the

  2. Induction of anaerobic, photoautotrophic growth in the cyanobacterium Oscillatoria limnetica.

    PubMed Central

    Oren, A; Padan, E

    1978-01-01

    Anaerobic photoautotrophic growth of the cyanobacterium Oscillatoria limnetica was demonstrated under nitrogen in the presence of 3-(3,4-dichlorophenyl)-1,1-dimethylurea (5 µM), a constant concentration of Na2S (2.5 mM), and constant pH (7.3). The photoanaerobic growth rate (doubling time of 2 days) was similar to that obtained under oxygenic photoautotrophic growth conditions. The potential of oxygenic photosynthesis is constitutive in the cells; that of anoxygenic photosynthesis is rapidly (2 h) induced in the presence of Na2S in the light in a process requiring protein synthesis. The facultative anaerobic phototrophic growth physiology exhibited by O. limnetica would seem to represent an intermediate physiological pattern between the obligate anaerobic one of photosynthetic bacteria and the oxygenic one of eucaryotic algae. PMID:415043

  3. (In)validity of the constant field and constant currents assumptions in theories of ion transport.

    PubMed Central

    Syganow, A; von Kitzing, E

    1999-01-01

    Constant electric fields and constant ion currents are often considered in theories of ion transport. Therefore, it is important to understand the validity of these helpful concepts. The constant field assumption requires that the charge density of permeant ions and flexible polar groups is virtually voltage independent. We present analytic relations that indicate the conditions under which the constant field approximation applies. Barrier models are frequently fitted to experimental current-voltage curves to describe ion transport. These models are based on three fundamental characteristics: a constant electric field, negligible concerted motions of ions inside the channel (an ion can enter only an empty site), and concentration-independent energy profiles. An analysis of those fundamental assumptions of barrier models shows that those approximations require large barriers because the electrostatic interaction is strong and has a long range. In the constant currents assumption, the current of each permeating ion species is considered to be constant throughout the channel; thus ion pairing is explicitly ignored. In inhomogeneous steady-state systems, the association rate constant determines the strength of ion pairing. Among permeable ions, however, the ion association rate constants are not small, according to modern diffusion-limited reaction rate theories. A mathematical formulation of a constant currents condition indicates that ion pairing very likely has an effect but does not dominate ion transport. PMID:9929480

  4. Acid-base regulation during heating and cooling in the lizard, Varanus exanthematicus.

    PubMed

    Wood, S C; Johansen, K; Glass, M L; Hoyt, R W

    1981-04-01

    Current concepts of acid-base balance in ectothermic animals require that arterial pH vary inversely with body temperature in order to maintain a constant OH-/H+ ratio and constant net charge on proteins. The present study evaluates acid-base regulation in Varanus exanthematicus under various regimes of heating and cooling between 15 and 38 degrees C. Arterial blood was sampled during heating and cooling at various rates, using restrained and unrestrained animals with and without face masks. Arterial pH was found to have a small temperature dependence, i.e., pH = 7.66 - 0.005·T. The slope (dpH/dT = -0.005), while significantly different from zero (P < 0.05), is much smaller in magnitude than that required for a constant OH-/H+ ratio or a constant imidazole alphastat (dpH/dT ≅ -0.018). The physiological mechanism that distinguishes this species from most other ectotherms is the presence of a ventilatory response to temperature-induced changes in CO2 production and O2 uptake, i.e., VE/VO2 is constant. This results in a constant O2 extraction and arterial saturation (approx. 90%), which is adaptive to the high aerobic requirements of this species.
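
    A quick numerical reading of the regression reported above, compared with an alphastat-like slope; the -0.018 pH per degree figure is only the commonly quoted alphastat value used for comparison, not a result of the paper:

```python
# Observed pH(T) relation for V. exanthematicus versus an alphastat-like
# slope of about -0.018 pH units per degree C (comparison value only).

def ph_observed(T):
    return 7.66 - 0.005 * T          # regression reported in the abstract

T_low, T_high = 15.0, 38.0
dph_observed  = ph_observed(T_high) - ph_observed(T_low)    # -0.005 * 23 = -0.115
dph_alphastat = -0.018 * (T_high - T_low)                    # about -0.41

print(f"observed pH change 15->38 C : {dph_observed:+.2f}")
print(f"alphastat-predicted change  : {dph_alphastat:+.2f}")
```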

  5. Evaluation of infiltration for the determination of palms water needs

    NASA Astrophysics Data System (ADS)

    Benlarbi, Dalila; Boutaoutaou, Djamel; Saggaï, Sofiane

    2018-05-01

    In arid climates, irrigation water requirements increase while available water resources are limited. Saharan regions, which are large consumers of water, can therefore be seriously threatened if they do not become as parsimonious as irrigation techniques allow; until now the emphasis has been on technological improvement of these techniques, but not all problems are solved. The objective of this work is to understand the process of water infiltration into the soil, i.e. to determine its value as exactly as possible while obtaining the best combination of inflow rate, board length and irrigation time, in order to achieve a more or less uniform distribution in the soil and, above all, to avoid significant water losses that would cause the water table to rise. The infiltration allows us to calculate at any point the dose of water received, which we compare with the needs of the date palm. For this purpose, we varied the inflow rate for a constant board length, then varied the board length for a constant inflow rate. In both cases we varied the irrigation time according to the water requirements of the date palm; the flow, of course, remains constant during the entire feeding period. This study is primarily experimental and aims at practical applications, though not immediately, because experiments with several other combinations are needed to achieve practical results.

  6. Dissolution process analysis using model-free Noyes-Whitney integral equation.

    PubMed

    Hattori, Yusuke; Haruna, Yoshimasa; Otsuka, Makoto

    2013-02-01

    Drug dissolution from solid dosage forms is theoretically described by the Noyes-Whitney-Nernst equation. In practice, however, the analysis is usually carried out under model assumptions, and such model-dependent methods are idealized and subject to limitations. In this study, a Noyes-Whitney integral equation was proposed and applied to represent the drug dissolution profiles of a solid formulation via the non-linear least squares (NLLS) method. The integral equation is a model-free formula involving the dissolution rate constant as a parameter. In the present study, several solid formulations were prepared by changing the blending time of magnesium stearate (MgSt) with theophylline monohydrate, α-lactose monohydrate, and crystalline cellulose. The formula represented the dissolution profiles excellently, and thereby the rate constant and specific surface area could be obtained by the NLLS method. Because prolonged blending coated the particle surfaces with MgSt, water permeation was found to be hindered by this layer, impeding dissociation into disintegrant particles. In the end, the solid formulations did not disintegrate; however, the specific surface area gradually increased during the dissolution process. X-ray CT observation supported this result and showed that surface roughening dominated over dissolution, so that the specific surface area of the solid formulation gradually increased. Copyright © 2012 Elsevier B.V. All rights reserved.
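
    A simplified illustration of the fitting idea: the classic differential Noyes-Whitney form with a constant surface area is fitted to a release profile by non-linear least squares. This is not the paper's model-free integral formulation (which lets the specific surface area change), and the data are synthetic:

```python
# Non-linear least-squares fit of a dissolution profile with the classic
# Noyes-Whitney form dC/dt = k*S*(Cs - C), taking k*S constant for brevity.
# Synthetic data; the paper's integral equation is more general.

import numpy as np
from scipy.optimize import curve_fit

def dissolved(t, kS, Cs):
    """Amount dissolved at time t for a constant apparent rate k*S."""
    return Cs * (1.0 - np.exp(-kS * t))

t = np.array([0, 5, 10, 20, 30, 45, 60, 90], dtype=float)        # min
C = np.array([0.0, 18.0, 32.0, 52.0, 65.0, 78.0, 86.0, 94.0])     # % released (synthetic)

(kS, Cs), _ = curve_fit(dissolved, t, C, p0=(0.05, 100.0))
print(f"apparent rate constant k*S = {kS:.3f} 1/min, plateau Cs = {Cs:.1f} %")
```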

  7. Adapting Western Research Methods to Indigenous Ways of Knowing

    PubMed Central

    Christopher, Suzanne

    2013-01-01

    Indigenous communities have long experienced exploitation by researchers and increasingly require participatory and decolonizing research processes. We present a case study of an intervention research project to exemplify a clash between Western research methodologies and Indigenous methodologies and how we attempted reconciliation. We then provide implications for future research based on lessons learned from Native American community partners who voiced concern over methods of Western deductive qualitative analysis. Decolonizing research requires constant reflective attention and action, and there is an absence of published guidance for this process. Continued exploration is needed for implementing Indigenous methods alone or in conjunction with appropriate Western methods when conducting research in Indigenous communities. Currently, examples of Indigenous methods and theories are not widely available in academic texts or published articles, and are often not perceived as valid. PMID:23678897

  8. Flow cytometry as a method for the evaluation of raw material, product and process in the dairy industry.

    PubMed

    Ruszczyńska, A; Szteyn, J; Wiszniewska-Laszczych, A

    2007-01-01

    Producing dairy products which are safe for consumers requires the constant monitoring of the microbiological quality of raw material, the production process itself and the end product. Traditional methods, still a "gold standard", require a specialized laboratory working on recognized and validated methods. Obtaining results is time- and labor-consuming and does not allow rapid evaluation. Hence, there is a need for a rapid, precise method enabling the real-time monitoring of microbiological quality, and flow cytometry serves this function well. It is based on labeling cells suspended in a solution with fluorescent dyes and pumping them into a measurement zone where they are exposed to a precisely focused laser beam. This paper is aimed at presenting the possibilities of applying flow cytometry in the dairy industry.

  9. Storage peak gas-turbine power unit

    NASA Technical Reports Server (NTRS)

    Tsinkotski, B.

    1980-01-01

    A storage gas-turbine power plant using a two-cylinder compressor with intermediate cooling is studied. On the basis of measured characteristics of a 0.25 MW compressor, computer calculations of the parameters of the loading process of a constant-capacity storage unit (05.3 million cu m) were carried out. The required compressor power as a function of time with and without final cooling was computed. Parameters of maximum loading and discharging of the storage unit were calculated, and it was found that for the complete loading of a fully unloaded storage unit, a capacity of 1 to 1.5 million cubic meters is required, depending on the final cooling.

  10. Lidar Luminance Quantizer

    NASA Technical Reports Server (NTRS)

    Quilligan, Gerard; DeMonthier, Jeffrey; Suarez, George

    2011-01-01

    This innovation addresses challenges in lidar imaging, particularly with the detection scheme and the shapes of the detected signals. Ideally, the echoed pulse widths should be extremely narrow to resolve fine detail at high event rates. However, narrow pulses require wideband detection circuitry with increased power dissipation to minimize thermal noise. Filtering is also required to shape each received signal into a form suitable for processing by a constant fraction discriminator (CFD) followed by a time-to-digital converter (TDC). As the intervals between the echoes decrease, the finite bandwidth of the shaping circuits blends the pulses into an analog signal (luminance) with multiple modes, reducing the ability of the CFD to discriminate individual events.
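
    For background, a constant fraction discriminator can be sketched in a few lines: the digitized pulse is delayed, an attenuated copy is subtracted, and the zero crossing of the bipolar result gives an amplitude-independent timing point. The fraction and delay below are illustrative, not the instrument's values:

```python
# Minimal constant fraction discriminator (CFD) sketch on a digitized pulse:
# timing is taken at the zero crossing of delayed(pulse) - fraction*pulse,
# which is (ideally) independent of pulse amplitude.

import numpy as np

def cfd_crossing(pulse, fraction=0.3, delay=4):
    """Return the fractional sample index of the CFD zero crossing (or None)."""
    delayed = np.concatenate([np.zeros(delay), pulse[:-delay]])
    shaped = delayed - fraction * pulse                 # bipolar CFD signal
    for i in range(1, len(shaped)):
        if shaped[i - 1] < 0.0 <= shaped[i]:            # negative-to-positive crossing
            return i - 1 + (-shaped[i - 1]) / (shaped[i] - shaped[i - 1])
    return None

t = np.arange(64)
pulse = np.exp(-((t - 20.0) / 5.0) ** 2)                # toy echo pulse
print(cfd_crossing(pulse), cfd_crossing(5.0 * pulse))   # same crossing for both amplitudes
```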

  11. Class Projects in Physical Organic Chemistry: The Hydrolysis of Aspirin

    ERIC Educational Resources Information Center

    Marrs, Peter S.

    2004-01-01

    An exercise that provides a hands-on demonstration of the hydrolysis of aspirin is presented. The key to understanding the hydrolysis is recognizing that all six processes may occur simultaneously, that the observed rate constant is the sum of the individual rate constants, and that one rate constant dominates the overall process.
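
    A tiny numerical illustration of that point (the rate constants below are arbitrary, not measured aspirin values): for parallel pseudo-first-order pathways the observed rate constant is the sum of the individual ones, and the largest term dominates.

```python
# Parallel pseudo-first-order pathways: k_obs is the sum of the individual
# rate constants, and one large term dominates. Values are arbitrary.

k = {"path_1": 1.0e-6, "path_2": 5.0e-6, "path_3": 2.0e-4,
     "path_4": 8.0e-6, "path_5": 3.0e-6, "path_6": 1.5e-6}   # s^-1

k_obs = sum(k.values())
for name, ki in k.items():
    print(f"{name}: {ki:.1e} s^-1  ({100 * ki / k_obs:5.1f}% of k_obs)")
print(f"k_obs = {k_obs:.2e} s^-1")
```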

  12. Localising semantic and syntactic processing in spoken and written language comprehension: an Activation Likelihood Estimation meta-analysis.

    PubMed

    Rodd, Jennifer M; Vitello, Sylvia; Woollams, Anna M; Adank, Patti

    2015-02-01

    We conducted an Activation Likelihood Estimation (ALE) meta-analysis to identify brain regions that are recruited by linguistic stimuli requiring relatively demanding semantic or syntactic processing. We included 54 functional MRI studies that explicitly varied the semantic or syntactic processing load, while holding constant demands on earlier stages of processing. We included studies that introduced a syntactic/semantic ambiguity or anomaly, used a priming manipulation that specifically reduced the load on semantic/syntactic processing, or varied the level of syntactic complexity. The results confirmed the critical role of the posterior left Inferior Frontal Gyrus (LIFG) in semantic and syntactic processing. These results challenge models of sentence comprehension highlighting the role of anterior LIFG for semantic processing. In addition, the results emphasise the posterior (but not anterior) temporal lobe for both semantic and syntactic processing. Crown Copyright © 2014. Published by Elsevier Inc. All rights reserved.

  13. Mechanistic modeling of modular co-rotating twin-screw extruders.

    PubMed

    Eitzlmayr, Andreas; Koscher, Gerold; Reynolds, Gavin; Huang, Zhenyu; Booth, Jonathan; Shering, Philip; Khinast, Johannes

    2014-10-20

    In this study, we present a one-dimensional (1D) model of the metering zone of a modular, co-rotating twin-screw extruder for pharmaceutical hot melt extrusion (HME). The model accounts for filling ratio, pressure, melt temperature in screw channels and gaps, driving power, torque and the residence time distribution (RTD). It requires two empirical parameters for each screw element to be determined experimentally or numerically using computational fluid dynamics (CFD). The required Nusselt correlation for the heat transfer to the barrel was determined from experimental data. We present results for a fluid with a constant viscosity in comparison to literature data obtained from CFD simulations. Moreover, we show how to incorporate the rheology of a typical, non-Newtonian polymer melt, and present results in comparison to measurements. For both cases, we achieved excellent agreement. Furthermore, we present results for the RTD, based on experimental data from the literature, and found good agreement with simulations, in which the entire HME process was approximated with the metering model, assuming a constant viscosity for the polymer melt. Copyright © 2014. Published by Elsevier B.V.

  14. Purely temporal figure-ground segregation.

    PubMed

    Kandil, F I; Fahle, M

    2001-05-01

    Visual figure-ground segregation is achieved by exploiting differences in features such as luminance, colour, motion or presentation time between a figure and its surround. Here we determine the shortest delay times required for figure-ground segregation based on purely temporal features. Previous studies usually employed stimulus onset asynchronies between figure and ground, containing possible artefacts based on apparent motion cues or on luminance differences. Our stimuli systematically avoid these artefacts by constantly showing 20 x 20 'colons' that flip by 90 degrees around their midpoints at constant time intervals. Colons constituting the background flip in-phase whereas those constituting the target flip with a phase delay. We tested the impact of frequency modulation and phase reduction on target detection. Younger subjects performed well above chance even at temporal delays as short as 13 ms, whilst older subjects required up to three times longer delays in some conditions. Figure-ground segregation can rely on purely temporal delays down to around 10 ms even in the absence of luminance and motion artefacts, indicating a temporal precision of cortical information processing almost an order of magnitude lower than the one required for some models of feature binding in the visual cortex [e.g. Singer, W. (1999), Curr. Opin. Neurobiol., 9, 189-194]. Hence, in our experiment, observers are unable to use temporal stimulus features with the precision required for these models.

  15. The Kalman Filter Applied to Process Range Data of the Cubic Model 40 Autotape System

    DTIC Science & Technology

    1976-12-01

    the unit to be tracked, two responders operated at two different shore sites and the associated antenna/RF assemblies. Required support systems include...where the range arcs are orthogonal. Figure 9 diagrams error contours which are actually the loci of constant MPE for two particular responder sites...ranges simultaneously, once per second, the ranges being those between the Interrogator and each of the responders. The ranges are computed from the

  16. Mechanism and Kinetics of Li2S Precipitation in Lithium-Sulfur Batteries.

    PubMed

    Fan, Frank Y; Carter, W Craig; Chiang, Yet-Ming

    2015-09-16

    The kinetics of Li2S electrodeposition onto carbon in lithium-sulfur batteries are characterized. Electrodeposition is found to be dominated by a 2D nucleation and growth process with rate constants that depend strongly on the electrolyte solvent. Nucleation is found to require a greater overpotential than growth, which results in a morphology that is dependent on the discharge rate. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. PHYSIOLOGICAL DYNAMICS OF SIPHONOPHORES FROM DEEP SCATTERING LAYERS.

    DTIC Science & Technology

    Effects of siphonophores on sound propagation in the sea were studied by determining the size of gas bubbles they contain and produce, and the times...volumes, and rates involved in these processes. Major findings were: (1) gases contained in fresh siphonophore floats are generally close to...constants for siphonophore floats are close to those for chitin; (4) calculated energy requirements for countering hydrostatic pressures indicate that float refilling times are probably no more than a few hours. (Author)

  18. Ultrasound melted polymer sleeve for improved screw anchorage in trabecular bone--A novel screw augmentation technique.

    PubMed

    Schmoelz, W; Mayr, R; Schlottig, F; Ivanovic, N; Hörmann, R; Goldhahn, J

    2016-03-01

    Screw anchorage in osteoporotic bone is still limited and makes treatment of osteoporotic fractures challenging for surgeons. Conventional screws fail in poor bone quality due to loosening at the screw-bone interface. A new technology should help to improve this interface. In a novel constant amelioration process technique, a polymer sleeve is melted by ultrasound in the predrilled screw hole prior to screw insertion. The purpose of this study was to investigate in vitro the effect of the constant amelioration process platform technology on primary screw anchorage. Fresh frozen femoral heads (n=6) and vertebrae (n=6) were used to measure the maximum screw insertion torque of reference and constant amelioration process augmented screws. Specimens were cut in cranio-caudal direction, and the screws (reference and constant amelioration process) were implanted in predrilled holes in the trabecular structure on both sides of the cross section. This allowed the pairwise comparison of insertion torque for constant amelioration process and reference screws (femoral heads n=18, vertebrae n=12). Prior to screw insertion, a micro-CT scan was made to ensure comparable bone quality at the screw placement location. The mean insertion torque for the constant amelioration process augmented screws in both, the femoral heads (44.2 Ncm, SD 14.7) and the vertebral bodies (13.5 Ncm, SD 6.3) was significantly higher than for the reference screws of the femoral heads (31.7 Ncm, SD 9.6, p<0.001) and the vertebral bodies (7.1 Ncm, SD 4.5, p<0.001). The interconnection of the melted polymer sleeve with the surrounding trabecular bone in the constant amelioration process technique resulted in a higher screw insertion torque and can improve screw anchorage in osteoporotic trabecular bone. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Stress/strain changes and triggered seismicity at The Geysers, California

    USGS Publications Warehouse

    Gomberg, J.; Davis, S.

    1996-01-01

    The principal results of this study of remotely triggered seismicity in The Geysers geothermal field are the demonstration that triggering (initiation of earthquake failure) depends on a critical strain threshold and that the threshold level increases with decreasing frequency or equivalently, depends on strain rate. This threshold function derives from (1) analyses of dynamic strains associated with surface waves of the triggering earthquakes, (2) statistically measured aftershock zone dimensions, and (3) analytic functional representations of strains associated with power production and tides. The threshold is also consistent with triggering by static strain changes and implies that both static and dynamic strains may cause aftershocks. The observation that triggered seismicity probably occurs in addition to background activity also provides an important constraint on the triggering process. Assuming the physical processes underlying earthquake nucleation to be the same, Gomberg [this issue] discusses seismicity triggered by the MW 7.3 Landers earthquake, its constraints on the variability of triggering thresholds with site, and the implications of time delays between triggering and triggered earthquakes. Our results enable us to reject the hypothesis that dynamic strains simply nudge prestressed faults over a Coulomb failure threshold sooner than they would have otherwise. We interpret the rate-dependent triggering threshold as evidence of several competing processes with different time constants, the faster one(s) facilitating failure and the other(s) inhibiting it. Such competition is a common feature of theories of slip instability. All these results, not surprisingly, imply that to understand earthquake triggering one must consider not only simple failure criteria requiring exceedence of some constant threshold but also the requirements for generating instabilities.

  20. Stress/strain changes and triggered seismicity at The Geysers, California

    NASA Astrophysics Data System (ADS)

    Gomberg, Joan; Davis, Scott

    1996-01-01

    The principal results of this study of remotely triggered seismicity in The Geysers geothermal field are the demonstration that triggering (initiation of earthquake failure) depends on a critical strain threshold and that the threshold level increases with decreasing frequency, or, equivalently, depends on strain rate. This threshold function derives from (1) analyses of dynamic strains associated with surface waves of the triggering earthquakes, (2) statistically measured aftershock zone dimensions, and (3) analytic functional representations of strains associated with power production and tides. The threshold is also consistent with triggering by static strain changes and implies that both static and dynamic strains may cause aftershocks. The observation that triggered seismicity probably occurs in addition to background activity also provides an important constraint on the triggering process. Assuming the physical processes underlying earthquake nucleation to be the same, Gomberg [this issue] discusses seismicity triggered by the MW 7.3 Landers earthquake, its constraints on the variability of triggering thresholds with site, and the implications of time delays between triggering and triggered earthquakes. Our results enable us to reject the hypothesis that dynamic strains simply nudge prestressed faults over a Coulomb failure threshold sooner than they would have otherwise. We interpret the rate-dependent triggering threshold as evidence of several competing processes with different time constants, the faster one(s) facilitating failure and the other(s) inhibiting it. Such competition is a common feature of theories of slip instability. All these results, not surprisingly, imply that to understand earthquake triggering one must consider not only simple failure criteria requiring exceedence of some constant threshold but also the requirements for generating instabilities.

  1. Adaptation of vestibular signals for self-motion perception

    PubMed Central

    St George, Rebecca J; Day, Brian L; Fitzpatrick, Richard C

    2011-01-01

    A fundamental concern of the brain is to establish the spatial relationship between self and the world to allow purposeful action. Response adaptation to unvarying sensory stimuli is a common feature of neural processing, both peripherally and centrally. For the semicircular canals, peripheral adaptation of the canal-cupula system to constant angular-velocity stimuli dominates the picture and masks central adaptation. Here we ask whether galvanic vestibular stimulation circumvents peripheral adaptation and, if so, does it reveal central adaptive processes. Transmastoidal bipolar galvanic stimulation and platform rotation (20 deg s−1) were applied separately and held constant for 2 min while perceived rotation was measured by verbal report. During real rotation, the perception of turn decayed from the onset of constant velocity with a mean time constant of 15.8 s. During galvanic-evoked virtual rotation, the perception of rotation initially rose but then declined towards zero over a period of ∼100 s. For both stimuli, oppositely directed perceptions of similar amplitude were reported when stimulation ceased indicating signal adaptation at some level. From these data the time constants of three independent processes were estimated: (i) the peripheral canal-cupula adaptation with time constant 7.3 s, (ii) the central ‘velocity-storage’ process that extends the afferent signal with time constant 7.7 s, and (iii) a long-term adaptation with time constant 75.9 s. The first two agree with previous data based on constant-velocity stimuli. The third component decayed with the profile of a real constant angular acceleration stimulus, showing that the galvanic stimulus signal bypasses the peripheral transformation so that the brainstem sees the galvanic signal as angular acceleration. An adaptive process involving both peripheral and central processes is indicated. Signals evoked by most natural movements will decay peripherally before adaptation can exert an appreciable effect, making a specific vestibular behavioural role unlikely. This adaptation appears to be a general property of the internal coding of self-motion that receives information from multiple sensory sources and filters out the unvarying components regardless of their origin. In this instance of a pure vestibular sensation, it defines the afferent signal that represents the stationary or zero-rotation state. PMID:20937715

  2. Traffic dynamics of carnival processions

    NASA Astrophysics Data System (ADS)

    Polichronidis, Petros; Wegerle, Dominik; Dieper, Alexander; Schreckenberg, Michael

    2018-03-01

    The traffic dynamics of processions are described in this study. GPS data from participating groups in the Cologne Rose Monday processions 2014–2017 are used to analyze the kinematic characteristics. The preparation of the measured data requires an adjustment by a specially adapted algorithm for the map-matching method. A higher average velocity is observed for the last participant, the Carnival Prince, than for the leading participant of the parade. Based on the results of the data analysis, for the first time a model for defilading parade groups can be established as a modified Nagel-Schreckenberg model. This model can reproduce the observed characteristics in simulations. They can be explained partly by the constantly moving vehicle driving ahead of the parade leaving the pathway and partly by a spatial contraction of the parade during the procession.
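
    For background, the standard Nagel-Schreckenberg update rules (acceleration, braking to the gap ahead, random dawdling, movement) are sketched below; the parade model above is a modified version whose specific changes are not reproduced here.

```python
# Textbook Nagel-Schreckenberg cellular automaton on a periodic single-lane
# road; shown only as background for the modified parade model above.

import random

def nasch_step(positions, velocities, road_length, v_max=5, p_dawdle=0.3):
    """One parallel update of all cars; returns new positions and velocities."""
    order = sorted(range(len(positions)), key=lambda i: positions[i])
    new_v = velocities[:]
    for idx, i in enumerate(order):
        ahead = order[(idx + 1) % len(order)]
        gap = (positions[ahead] - positions[i] - 1) % road_length
        v = min(velocities[i] + 1, v_max)            # 1) accelerate
        v = min(v, gap)                              # 2) brake to the gap ahead
        if v > 0 and random.random() < p_dawdle:     # 3) random dawdling
            v -= 1
        new_v[i] = v
    new_pos = [(positions[i] + new_v[i]) % road_length for i in range(len(positions))]
    return new_pos, new_v

pos, vel = list(range(0, 100, 10)), [0] * 10         # ten cars on a 100-cell ring
for _ in range(50):
    pos, vel = nasch_step(pos, vel, 100)
print("mean velocity after 50 steps:", sum(vel) / len(vel))
```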

  3. [The effects of various factors on the in vitro velocity of drug release from repository tablets. Part 4: Isoniazid (Rimicid) respository tablets (author's transl)].

    PubMed

    Tomassini, L; Michailova, D; Naplatanova, D; Slavtschev, P

    1979-12-01

    The authors investigated the release of isoniazid from repository tablets as related to form, processing technology, strength constant and storage for 5 years. On determining the diffusion coefficient (D), the initial dissolution rate (Vo) and the time required for the diffusion of the releasing medium to the middle of the tablet (t1/2), it was found that the difference in release rate between the flat and the biconvex tablets is small. Furthermore, it was stated that the three-layer tablets have very high D and Vo values and very low t1/2 values, for what reason they are unsuited for repository tablets of the composition under investigation. Moreover, it was found that an increase of the strength constant does not affect the D, t1/2 and Vo values, and that the release of isoniazid is retarded only in flat tablets with the highest strength constant. Storage exerts no effect on the drug release from these tablets. The industrial production of these tablets is under way.

  4. Self-ordered, controlled structure nanoporous membranes using constant current anodization.

    PubMed

    Lee, Kwan; Tang, Yun; Ouyang, Min

    2008-12-01

    We report a constant current (CC) based anodization technique to fabricate and control structure of mechanically stable anodic aluminum oxide (AAO) membranes with a long-range ordered hexagonal nanopore pattern. For the first time we show that interpore distance (Dint) of a self-ordered nanopore feature can be continuously tuned over a broad range with CC anodization and is uniquely defined by the conductivity of sulfuric acid as electrolyte. We further demonstrate that this technique can offer new degrees of freedom for engineering planar nanopore structures by fine tailoring the CC based anodization process. Our results not only facilitate further understanding of self-ordering mechanism of alumina membranes but also provide a fast, simple (without requirement of prepatterning or preoxide layer), and flexible methodology for controlling complex nanoporous structures, thus offering promising practical applications in nanotechnology.

  5. A Cocatalytic Effect between Meldrum's Acid and Benzoxazine Compounds in Preparation of High Performance Thermosetting Resins.

    PubMed

    Chen, Yi; Lin, Liang-Kai; Chiang, Shu-Jen; Liu, Ying-Ling

    2017-02-01

    In this work, a cocatalytic effect between Meldrum's acid (MA) and benzoxazine (Bz) compounds has been explored to build up a self-promoting curing system. Consequently, the MA/Bz reactive blend exhibits a relatively low reaction temperature compared to the required temperatures for the cross-linking reactions of the pure MA and Bz components. This feature is attractive for energy-saving processing issues. Moreover, the thermosetting resins based on the MA/Bz reactive blends have been prepared. The MA component can generate additional free volume in the resulting resins, so as to trap air in the resin matrix and consequently to bring low dielectric constants to the resins. The MA-containing agent is an effective modifier for benzoxazine resins to reduce their dielectric constants. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Relaxation of vacuum energy in q-theory

    NASA Astrophysics Data System (ADS)

    Klinkhamer, F. R.; Savelainen, M.; Volovik, G. E.

    2017-08-01

    The q-theory formalism aims to describe the thermodynamics and dynamics of the deep quantum vacuum. The thermodynamics leads to an exact cancellation of the quantum-field zero-point-energies in equilibrium, which partly solves the main cosmological constant problem. But, with reversible dynamics, the spatially flat Friedmann-Robertson-Walker universe asymptotically approaches the Minkowski vacuum only if the Big Bang already started out in an initial equilibrium state. Here, we extend q-theory by introducing dissipation from irreversible processes. Neglecting the possible instability of a de-Sitter vacuum, we obtain different scenarios with either a de-Sitter asymptote or collapse to a final singularity. The Minkowski asymptote still requires fine-tuning of the initial conditions. This suggests that, within the q-theory approach, the decay of the de-Sitter vacuum is a necessary condition for the dynamical solution of the cosmological constant problem.

  7. Agile Methods for Open Source Safety-Critical Software

    PubMed Central

    Enquobahrie, Andinet; Ibanez, Luis; Cheng, Patrick; Yaniv, Ziv; Cleary, Kevin; Kokoori, Shylaja; Muffih, Benjamin; Heidenreich, John

    2011-01-01

    The introduction of software technology in a life-dependent environment requires the development team to execute a process that ensures a high level of software reliability and correctness. Despite their popularity, agile methods are generally assumed to be inappropriate as a process family in these environments due to their lack of emphasis on documentation, traceability, and other formal techniques. Agile methods, notably Scrum, favor empirical process control, or small constant adjustments in a tight feedback loop. This paper challenges the assumption that agile methods are inappropriate for safety-critical software development. Agile methods are flexible enough to encourage the right amount of ceremony; therefore if safety-critical systems require greater emphasis on activities like formal specification and requirements management, then an agile process will include these as necessary activities. Furthermore, agile methods focus more on continuous process management and code-level quality than classic software engineering process models. We present our experiences on the image-guided surgical toolkit (IGSTK) project as a backdrop. IGSTK is an open source software project employing agile practices since 2004. We started with the assumption that a lighter process is better, focused on evolving code, and only adding process elements as the need arose. IGSTK has been adopted by teaching hospitals and research labs, and used for clinical trials. Agile methods have matured since the academic community suggested they are not suitable for safety-critical systems almost a decade ago, we present our experiences as a case study for renewing the discussion. PMID:21799545

  8. Agile Methods for Open Source Safety-Critical Software.

    PubMed

    Gary, Kevin; Enquobahrie, Andinet; Ibanez, Luis; Cheng, Patrick; Yaniv, Ziv; Cleary, Kevin; Kokoori, Shylaja; Muffih, Benjamin; Heidenreich, John

    2011-08-01

    The introduction of software technology in a life-dependent environment requires the development team to execute a process that ensures a high level of software reliability and correctness. Despite their popularity, agile methods are generally assumed to be inappropriate as a process family in these environments due to their lack of emphasis on documentation, traceability, and other formal techniques. Agile methods, notably Scrum, favor empirical process control, or small constant adjustments in a tight feedback loop. This paper challenges the assumption that agile methods are inappropriate for safety-critical software development. Agile methods are flexible enough to encourage the right amount of ceremony; therefore if safety-critical systems require greater emphasis on activities like formal specification and requirements management, then an agile process will include these as necessary activities. Furthermore, agile methods focus more on continuous process management and code-level quality than classic software engineering process models. We present our experiences on the image-guided surgical toolkit (IGSTK) project as a backdrop. IGSTK is an open source software project employing agile practices since 2004. We started with the assumption that a lighter process is better, focused on evolving code, and only adding process elements as the need arose. IGSTK has been adopted by teaching hospitals and research labs, and used for clinical trials. Agile methods have matured since the academic community suggested they are not suitable for safety-critical systems almost a decade ago; we present our experiences as a case study for renewing the discussion.

  9. Cooperation between humans and robots in fine assembly

    NASA Astrophysics Data System (ADS)

    Jalba, C. K.; Konold, P.; Rapp, I.; Mann, C.; Muminovic, A.

    2017-01-01

    The development of ever smaller components in manufacturing processes requires handling, assembling and testing of similarly miniature components. The human eye reaches its optical limits with ongoing miniaturization of parts, since it cannot detect particles smaller than 0.11 mm or register distances below 0.07 mm, such as separating gaps. After several hours of labour, workers can no longer accurately differentiate colour nuances, and constant quality of work cannot be guaranteed. Assembly is usually done with tools such as microscopes, magnifiers or digital measuring devices. Owing to the enormous mental concentration, a fatigue process quickly sets in, which requires breaks or a change of task and reduces productivity. Dealing with handling devices such as grippers, guide units and actuators for component assembly requires a time-consuming training process; often an increase in productivity is only achieved after years of daily training. Miniaturization is needed everywhere, for instance in surgery, where very small add-on instruments must be provided, and in measurement, where it is a technological must and a competitive advantage to determine the required data with a sensor that is as small as possible and has the highest possible resolution. Solution: the realization of a flexible universal workstation, using standard robotic systems and image processing devices in cooperation with humans, in which workers are largely freed from highly strenuous physical and fine motor work so that they can do productive work monitoring and adjusting the machine-assisted production process.

  10. A MATLAB Library for Rapid Prototyping of Wireless Communications Algorithms with the Universal Software Radio Peripheral (USRP) Radio Family

    DTIC Science & Technology

    2013-06-01

    Radio is a software development toolkit that provides signal processing blocks to drive the SDR. GNU Radio has many strong points – it is actively...maintained with a large user base, new capabilities are constantly being added, and compiled C code is fast for many real-time applications such as...programming interface (API) makes learning the architecture a daunting task, even for the experienced software developer. This requirement poses many

  11. Satellite Capabilities Mapping - Utilizing Small Satellites

    DTIC Science & Technology

    2010-09-01

    Metrics Definition (50); Figure 19. System and Requirements Decomposition (59); Figure 20. TPS Functional Mapping Process...offered by small satellites. “The primary force in our corner of the universe is our sun. The sun is constantly radiating enormous amounts of...weather prediction models, a primary tool for forecasting weather” [19]. The NPOESS was a tri-agency program intended to develop and operate the next

  12. Development of a prototype real-time automated filter for operational deep space navigation

    NASA Technical Reports Server (NTRS)

    Masters, W. C.; Pollmeier, V. M.

    1994-01-01

    Operational deep space navigation has been in the past, and is currently, performed using systems whose architecture requires constant human supervision and intervention. A prototype for a system which allows relatively automated processing of radio metric data received in near real-time from NASA's Deep Space Network (DSN) without any redesign of the existing operational data flow has been developed. This system can allow for more rapid response as well as much reduced staffing to support mission navigation operations.

  13. Gas evolution from spheres

    NASA Astrophysics Data System (ADS)

    Longhurst, G. R.

    1991-04-01

    Gas evolution from spherical solids or liquids where no convective processes are active is analyzed. Three problem classes are considered: (1) constant concentration boundary, (2) Henry's law (first order) boundary, and (3) Sieverts' law (second order) boundary. General expressions are derived for dimensionless times and transport parameters appropriate to each of the classes considered. However, in the second order case, the non-linearities of the problem require the presence of explicit dimensional variables in the solution. Sample problems are solved to illustrate the method.
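
    For the constant-concentration case (class 1 above), the classical Crank-type series solution for the released fraction can be evaluated directly; the sketch below uses arbitrary values of D and sphere radius a and does not cover the Henry's-law or Sieverts'-law boundaries.

```python
# Fractional gas release from a sphere with zero (constant) surface
# concentration: F(t) = 1 - (6/pi^2) * sum_n exp(-n^2 pi^2 D t / a^2) / n^2.
# D and a are arbitrary example values.

import math

def fractional_release(t, D, a, n_terms=200):
    tau = D * t / a**2                                   # dimensionless time
    s = sum(math.exp(-(n * math.pi) ** 2 * tau) / n**2 for n in range(1, n_terms + 1))
    return 1.0 - (6.0 / math.pi**2) * s

D, a = 1.0e-12, 1.0e-3                                   # m^2/s, m
for t in (1e3, 1e4, 1e5, 1e6):                           # seconds
    print(f"t = {t:8.0f} s   released fraction = {fractional_release(t, D, a):.3f}")
```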

  14. Processing techniques for correlation of LDA and thermocouple signals

    NASA Astrophysics Data System (ADS)

    Nina, M. N. R.; Pita, G. P. A.

    1986-11-01

    A technique was developed to enable the evaluation of the correlation between velocity and temperature, with laser Doppler anemometer (LDA) as the source of velocity signals and fine wire thermocouple as that of flow temperature. The discontinuous nature of LDA signals requires a special technique for correlation, in particular when few seeding particles are present in the flow. The thermocouple signal was analog compensated in frequency and the effect of the value of time constant on the velocity temperature correlation was studied.

  15. On the use of distributed sensing in control of large flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Montgomery, Raymond C.; Ghosh, Dave

    1990-01-01

    Distributed processing technology is being developed to process signals from distributed sensors using distributed computations. This work presents a scheme for calculating the operators required to emulate a conventional Kalman filter and regulator using such a computer. The scheme makes use of conventional Kalman theory as applied to the control of large flexible structures. The required computation of the distributed operators, given the conventional Kalman filter and regulator, is explained. A straightforward application of this scheme may lead to nonsmooth operators whose convergence is not apparent. This is illustrated by application to the Mini-Mast, a large flexible truss at the Langley Research Center used for research in structural dynamics and control. Techniques for developing smooth operators are presented. These involve spatial filtering as well as adjusting the design constants in the Kalman theory. Results are presented that illustrate the degree of smoothness achieved.

  16. An ethical framework for the responsible leadership of accountable care organizations.

    PubMed

    McCullough, Laurence B

    2012-01-01

    Using the ethical concepts of co-fiduciary responsibility in patient care and of preventive ethics, this article provides an ethical framework to guide physician and lay leaders of accountable care organizations. The concept of co-fiduciary responsibility is based on the ethical concept of medicine as a profession, which was introduced into the history of medical ethics in the 18th century. Co-fiduciary responsibility applies to everyone who influences the processes of patient care: physicians, organizational leaders, patients, and patients' surrogates. A preventive ethics approach to co-fiduciary responsibility requires leaders of accountable care organizations to create organizational cultures of fiduciary professionalism that implement and support the following: improving quality based on candor and accountability, reasserting the physician's professional role in the informed consent process, and constraining patients' and surrogates' autonomy. Sustainable organizational cultures of fiduciary professionalism will require commitment of organizational resources and constant vigilance over the intellectual and moral integrity of organizational culture.

  17. Comparison of effectiveness of convection-, transpiration-, and film-cooling methods with air as coolant

    NASA Technical Reports Server (NTRS)

    Eckert, E R G; Livingood, N B

    1954-01-01

    Various parts of aircraft propulsion engines that are in contact with hot gases often require cooling. Transpiration and film cooling, new methods that supposedly utilize cooling air more effectively than conventional convection cooling, have already been proposed. This report presents material necessary for a comparison of the cooling requirements of these three methods. Correlations that are regarded by the authors as the most reliable today are employed in evaluating each of the cooling processes. Calculations for the special case in which the gas velocity is constant along the cooled wall (flat plate) are presented. The calculations reveal that a comparison of the three cooling processes can be made on quite a general basis. The superiority of transpiration cooling is clearly shown for both laminar and turbulent flow. This superiority is reduced when the effects of radiation are included; for gas-turbine blades, however, there is evidence indicating that radiation may be neglected.

  18. The plasticity of TGF-β signaling

    PubMed Central

    2011-01-01

    Background: The family of TGF-β ligands is large and its members are involved in many different signaling processes. These signaling processes differ strongly in type, with TGF-β ligands eliciting either sustained or transient responses. Members of the TGF-β family can also act as morphogens, and cellular responses would then be expected to provide a direct read-out of the extracellular ligand concentration. A number of different models have been proposed to reconcile these different behaviours. We were interested in defining the minimal set of modifications that are required to change the type of signal processing in the TGF-β signaling network. Results: To define the key aspects for signaling plasticity we focused on the core of the TGF-β signaling network. With the help of a parameter screen we identified ranges of kinetic parameters and protein concentrations that give rise to transient, sustained, or oscillatory responses to constant stimuli, as well as those parameter ranges that enable a proportional response to time-varying ligand concentrations (as expected in the read-out of morphogens). A combination of a strong negative feedback and fast shuttling to the nucleus biases signaling to a transient rather than a sustained response, while oscillations were obtained if ligand binding to the receptor is weak and the turn-over of the I-Smad is fast. A proportional read-out required inefficient receptor activation in addition to a low affinity of receptor-ligand binding. We find that targeted modification of single parameters suffices to alter the response type. The intensity of a constant signal (i.e. the ligand concentration), on the other hand, affected only the strength but not the type of the response. Conclusions: The architecture of the TGF-β pathway enables the observed signaling plasticity. The observed range of signaling outputs to TGF-β ligand in different cell types and under different conditions can be explained with differences in cellular protein concentrations and with changes in effective rate constants due to cross-talk with other signaling pathways. It will be interesting to uncover the exact cellular differences as well as the details of the cross-talks in future work. PMID:22051045

  19. Desorption kinetics of hydrophobic organic chemicals from sediment to water: a review of data and models.

    PubMed

    Birdwell, Justin; Cook, Robert L; Thibodeaux, Louis J

    2007-03-01

    Resuspension of contaminated sediment can lead to the release of toxic compounds to surface waters where they are more bioavailable and mobile. Because the timeframe of particle resettling during such events is shorter than that needed to reach equilibrium, a kinetic approach is required for modeling the release process. Due to the current inability of common theoretical approaches to predict site-specific release rates, empirical algorithms incorporating the phenomenological assumption of biphasic, or fast and slow, release dominate the descriptions of nonpolar organic chemical release in the literature. Two first-order rate constants and one fraction are sufficient to characterize practically all of the data sets studied. These rate constants were compared to theoretical model parameters and functionalities, including chemical properties of the contaminants and physical properties of the sorbents, to determine if the trends incorporated into the hindered diffusion model are consistent with the parameters used in curve fitting. The results did not correspond to the parameter dependence of the hindered diffusion model. No trend in desorption rate constants, for either fast or slow release, was observed to be dependent on K(OC) or aqueous solubility for six and seven orders of magnitude, respectively. The same was observed for aqueous diffusivity and sediment fraction organic carbon. The distribution of kinetic rate constant values was approximately log-normal, ranging from 0.1 to 50 d(-1) for the fast release (average approximately 5 d(-1)) and 0.0001 to 0.1 d(-1) for the slow release (average approximately 0.03 d(-1)). The implications of these findings with regard to laboratory studies, theoretical desorption process mechanisms, and water quality modeling needs are presented and discussed.
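
    The biphasic description mentioned above (two first-order rate constants and one fraction) can be written down and fitted in a few lines; the data here are synthetic and serve only to show the functional form.

```python
# Biphasic first-order release: f(t) = F*(1 - exp(-k_fast*t)) + (1 - F)*(1 - exp(-k_slow*t)),
# fitted by non-linear least squares to synthetic data.

import numpy as np
from scipy.optimize import curve_fit

def biphasic(t, F, k_fast, k_slow):
    return F * (1 - np.exp(-k_fast * t)) + (1 - F) * (1 - np.exp(-k_slow * t))

t = np.array([0.1, 0.25, 0.5, 1, 2, 5, 10, 30, 60, 120], dtype=float)   # days
rng = np.random.default_rng(0)
frac = biphasic(t, 0.55, 4.0, 0.01) + rng.normal(0.0, 0.01, t.size)      # synthetic data

(F, kf, ks), _ = curve_fit(biphasic, t, frac, p0=(0.5, 1.0, 0.05),
                           bounds=([0, 0, 0], [1, 50, 1]))
print(f"fast fraction F = {F:.2f}, k_fast = {kf:.2f} d^-1, k_slow = {ks:.4f} d^-1")
```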

  20. Real-time model learning using Incremental Sparse Spectrum Gaussian Process Regression.

    PubMed

    Gijsberts, Arjan; Metta, Giorgio

    2013-05-01

    Novel applications in unstructured and non-stationary human environments require robots that learn from experience and adapt autonomously to changing conditions. Predictive models therefore not only need to be accurate, but should also be updated incrementally in real-time and require minimal human intervention. Incremental Sparse Spectrum Gaussian Process Regression is an algorithm that is targeted specifically for use in this context. Rather than developing a novel algorithm from the ground up, the method is based on the thoroughly studied Gaussian Process Regression algorithm, therefore ensuring a solid theoretical foundation. Non-linearity and a bounded update complexity are achieved simultaneously by means of a finite dimensional random feature mapping that approximates a kernel function. As a result, the computational cost for each update remains constant over time. Finally, algorithmic simplicity and support for automated hyperparameter optimization ensures convenience when employed in practice. Empirical validation on a number of synthetic and real-life learning problems confirms that the performance of Incremental Sparse Spectrum Gaussian Process Regression is superior with respect to the popular Locally Weighted Projection Regression, while computational requirements are found to be significantly lower. The method is therefore particularly suited for learning with real-time constraints or when computational resources are limited. Copyright © 2012 Elsevier Ltd. All rights reserved.
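
    As a rough illustration of the idea behind the algorithm (a finite random feature map approximating a kernel, so each update costs the same regardless of how many samples have been seen), the sketch below uses random Fourier features with a regularized least-squares update. It is not the published ISSGPR implementation, which maintains a Cholesky factor with rank-one updates; the feature count, lengthscale, and regularization here are arbitrary.

```python
import numpy as np

class SparseSpectrumRegressor:
    """Minimal sketch of regression with random Fourier features.

    phi(x) = sqrt(2/D) * cos(W x + b) approximates an RBF kernel, so each
    update touches only the D x D normal equations and the per-sample cost
    stays bounded no matter how many samples have been observed.
    """

    def __init__(self, input_dim, n_features=200, lengthscale=1.0, reg=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=1.0 / lengthscale, size=(n_features, input_dim))
        self.b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
        self.A = reg * np.eye(n_features)     # regularized Gram matrix of the features
        self.y_acc = np.zeros(n_features)     # accumulated phi * y
        self.n_features = n_features

    def _phi(self, x):
        return np.sqrt(2.0 / self.n_features) * np.cos(self.W @ x + self.b)

    def update(self, x, y):
        phi = self._phi(np.asarray(x, dtype=float))
        self.A += np.outer(phi, phi)
        self.y_acc += phi * y

    def predict(self, x):
        w = np.linalg.solve(self.A, self.y_acc)   # ISSGPR instead updates a Cholesky factor
        return self._phi(np.asarray(x, dtype=float)) @ w

# Toy usage: learn y = sin(x) incrementally from noisy samples.
model = SparseSpectrumRegressor(input_dim=1)
rng = np.random.default_rng(1)
for _ in range(500):
    x = rng.uniform(-3, 3, size=1)
    model.update(x, np.sin(x[0]) + 0.05 * rng.normal())
print(round(float(model.predict(np.array([1.0]))), 3), "~", round(np.sin(1.0), 3))
```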

  1. Canali-type channels on Venus - Some genetic constraints

    NASA Technical Reports Server (NTRS)

    Komatsu, Goro; Kargel, Jeffrey S.; Baker, Victor R.

    1992-01-01

    Canali-type channels on Venus are unique because of their great lengths (up to 6800 km) and nearly constant channel cross sectional shapes along their paths. A simple model incorporating channel flow and radiative cooling suggests that common terrestrial-type tholeiite lava cannot sustain a superheated and turbulent state for the long distances required for thermal erosion of canali within allowable discharge rates. If canali formed mainly by constructional processes, laminar tholeiitic flow of relatively high, sustained discharge rates might travel the observed distances, but the absence of levees would need to be explained. An exotic low temperature, low viscosity lava like carbonatite or sulfur seems to be required for the erosional genesis of canali.

  2. DIVWAG Model Documentation. Volume II. Programmer/Analyst Manual. Part 4.

    DTIC Science & Technology

    1976-07-01

    Indexing and text excerpts from the report: "Model Constant Data Deck Structure" (IV-13-A-40); "Appendix B. Movement Model Program Descriptions" (IV-13-B-1); "Airmobile Constant Data Deck Structure" (IV-15-A-30); "Appendix B. Airmobile Model Program Descriptions"; and, from Section 12 (Airmobile Constant Data Deck Structure): "Make no changes. ... The deck structure required by the Airmobile Model constant data load program and the data ..."

  3. Electromechanical conversion efficiency for dielectric elastomer generator in different energy harvesting cycles

    NASA Astrophysics Data System (ADS)

    Cao, Jian-Bo; E, Shi-Ju; Guo, Zhuang; Gao, Zhao; Luo, Han-Pin

    2017-11-01

    To improve the electromechanical conversion efficiency of dielectric elastomer generators (DEG), the DEG energy harvesting cycles at constant voltage, constant charge, and constant electric field intensity were studied, and a new combined cycle mode and an optimization theory were developed in terms of the generating mechanism and the electromechanical coupling process. By controlling the switching point to achieve the best energy conversion cycle, the energy loss in the energy conversion process is reduced. A DEG test bench was established and used to carry out comparative experiments. Experimental results show that the energy collected in the constant voltage, constant charge, and constant electric field intensity harvesting cycles decreases in that order. Owing to factors such as internal resistance and other electrical losses, the actual energy values are less than the theoretical values. The electric energy conversion efficiency obtained by combining the constant electric field intensity cycle with the constant charge cycle is larger than that of the constant electric field intensity cycle alone. These conclusions provide a basis for further applications of DEG.
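
    The abstract reports no numerical values, but the idealized electrical energy gained in one constant-charge cycle of a variable capacitor follows directly from the stored-energy expression Q²/(2C). The sketch below uses hypothetical charge and capacitance values and ignores the loss mechanisms the study quantifies.

```python
def constant_charge_energy(q, c_stretched, c_relaxed):
    """Ideal electrical energy gained over one constant-charge cycle.

    The membrane is charged to q at high capacitance (stretched state); as it
    relaxes the capacitance drops and the stored energy Q^2 / (2C) rises, the
    difference being available for harvesting (all losses ignored).
    """
    return 0.5 * q**2 * (1.0 / c_relaxed - 1.0 / c_stretched)

# Hypothetical values for illustration: 1 uC of charge, capacitance falling
# from 2 nF (stretched) to 0.5 nF (relaxed).
print(f"{constant_charge_energy(1e-6, 2e-9, 0.5e-9) * 1e3:.2f} mJ per cycle")
```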

  4. Stable image acquisition for mobile image processing applications

    NASA Astrophysics Data System (ADS)

    Henning, Kai-Fabian; Fritze, Alexander; Gillich, Eugen; Mönks, Uwe; Lohweg, Volker

    2015-02-01

    Today, mobile devices (smartphones, tablets, etc.) are widespread and of high importance for their users. Their performance and versatility increase over time. This creates the opportunity to use such devices for more specific tasks like image processing in an industrial context. For image analysis, requirements such as image quality (blur, illumination, etc.) and a defined relative position of the object to be inspected are crucial. Since mobile devices are handheld and used in constantly changing environments, the challenge is to fulfill these requirements. We present an approach to overcome these obstacles and stabilize the image capturing process such that image analysis on mobile devices is significantly improved. To this end, image processing methods are combined with sensor fusion concepts. The approach consists of three main parts. First, pose estimation methods are used to guide a user moving the device to a defined position. Second, the sensor data and the pose information are combined for relative motion estimation. Finally, the image capturing process is automated. It is triggered depending on the alignment of the device with the object, as well as on the image quality that can be achieved given the motion and environmental effects.
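
    The abstract describes pose guidance, sensor fusion, and automated triggering at a high level; the toy sketch below shows only the final triggering decision, with all metric names and thresholds invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class FrameState:
    pose_error_mm: float      # deviation from the target relative pose
    angular_rate_dps: float   # device rotation rate, e.g. from the gyroscope
    sharpness: float          # e.g. variance of the image Laplacian

def should_capture(state: FrameState,
                   max_pose_error_mm: float = 5.0,
                   max_angular_rate_dps: float = 2.0,
                   min_sharpness: float = 150.0) -> bool:
    """Trigger acquisition only when the device is aligned, nearly still,
    and the preview frame is sharp enough. All thresholds are illustrative."""
    return (state.pose_error_mm <= max_pose_error_mm
            and state.angular_rate_dps <= max_angular_rate_dps
            and state.sharpness >= min_sharpness)

print(should_capture(FrameState(pose_error_mm=3.2, angular_rate_dps=0.8, sharpness=220.0)))
```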

  5. Effect of input data variability on estimations of the equivalent constant temperature time for microbial inactivation by HTST and retort thermal processing.

    PubMed

    Salgado, Diana; Torres, J Antonio; Welti-Chanes, Jorge; Velazquez, Gonzalo

    2011-08-01

    Consumer demand for food safety and quality improvements, combined with new regulations, requires determining the processor's confidence level that processes lowering safety risks while retaining quality will meet consumer expectations and regulatory requirements. Monte Carlo calculation procedures incorporate input data variability to obtain the statistical distribution of the output of prediction models. This advantage was used to analyze the survival risk of Mycobacterium avium subspecies paratuberculosis (M. paratuberculosis) and Clostridium botulinum spores in high-temperature short-time (HTST) milk and canned mushrooms, respectively. The results showed an estimated 68.4% probability that the 15 sec HTST process would not achieve at least 5 decimal reductions in M. paratuberculosis counts. Although estimates of the raw milk load of this pathogen are not available to estimate the probability of finding it in pasteurized milk, the wide range of the estimated decimal reductions, reflecting the variability of the experimental data available, should be a concern to dairy processors. Knowledge of the C. botulinum initial load and decimal thermal time variability was used to estimate an 8.5 min thermal process time at 110 °C for canned mushrooms reducing the risk to 10⁻⁹ spores/container with a 95% confidence. This value was substantially higher than the one estimated using average values (6.0 min) with an unacceptable 68.6% probability of missing the desired processing objective. Finally, the benefit of reducing the variability in initial load and decimal thermal time was confirmed, achieving a 26.3% reduction in processing time when standard deviation values were lowered by 90%. In spite of novel technologies, commercialized or under development, thermal processing continues to be the most reliable and cost-effective alternative to deliver safe foods. However, the severity of the process should be assessed to avoid under- and over-processing and determine opportunities for improvement. This should include a systematic approach to consider variability in the parameters for the models used by food process engineers when designing a thermal process. The Monte Carlo procedure here presented is a tool to facilitate this task for the determination of process time at a constant lethal temperature. © 2011 Institute of Food Technologists®
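
    A minimal sketch of the Monte Carlo idea applied to the canned-product case: draw the initial spore load and the decimal reduction time from assumed distributions, compute the process time needed to reach the 10⁻⁹ spores/container target under log-linear inactivation, and read off the 95% confidence value. The distributions and their parameters are hypothetical, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 100_000

# Hypothetical input distributions (illustrative only): initial C. botulinum
# load per container and decimal reduction time (D-value) at the process temperature.
log10_n0 = rng.normal(loc=0.0, scale=0.5, size=n_trials)                  # log10 spores/container
d_value_min = rng.lognormal(mean=np.log(0.6), sigma=0.15, size=n_trials)  # minutes per decimal reduction

target_log10 = -9.0                                        # 10^-9 surviving spores per container
required_time = d_value_min * (log10_n0 - target_log10)    # log-linear (first-order) inactivation

design_time_mean_inputs = np.mean(d_value_min) * (np.mean(log10_n0) - target_log10)
print(f"design time from mean inputs : {design_time_mean_inputs:5.2f} min")
print(f"time for 95% confidence      : {np.percentile(required_time, 95):5.2f} min")
print(f"P(miss target at mean-input time) = {np.mean(required_time > design_time_mean_inputs):.2f}")
```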

  6. Optimization and resilience of complex supply-demand networks

    NASA Astrophysics Data System (ADS)

    Zhang, Si-Ping; Huang, Zi-Gang; Dong, Jia-Qi; Eisenberg, Daniel; Seager, Thomas P.; Lai, Ying-Cheng

    2015-06-01

    Supply-demand processes take place on a large variety of real-world networked systems ranging from power grids and the internet to social networking and urban systems. In a modern infrastructure, supply-demand systems are constantly expanding, leading to a constant increase in the load requirement for resources and, consequently, to problems such as low efficiency, resource scarcity, and partial system failures. Under certain conditions, a global catastrophe on the scale of the whole system can occur through the dynamical process of cascading failures. We investigate optimization and resilience of time-varying supply-demand systems by constructing network models of such systems, where resources are transported from the supplier sites to users through various links. Here by optimization we mean minimization of the maximum load on links, and system resilience can be characterized using the cascading failure size of users who fail to connect with suppliers. We consider two representative classes of supply schemes: load-driven supply and fixed-fraction supply. Our findings are: (1) optimized systems are more robust since relatively smaller cascading failures occur when triggered by external perturbation to the links; (2) a large fraction of links can be free of load if resources are directed to transport through the shortest paths; (3) redundant links can help to reroute traffic but may undesirably transmit failures and enlarge the failure size of the system; (4) the patterns of cascading failures depend strongly upon the capacity of links; (5) the specific location of the trigger determines the specific route of cascading failure, but has little effect on the final cascading size; (6) system expansion typically reduces the efficiency; and (7) when the locations of the suppliers are optimized over a long expanding period, fewer suppliers are required. These results hold for heterogeneous networks in general, providing insights into designing optimal and resilient complex supply-demand systems that expand constantly in time.
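
    A toy sketch of the shortest-path supply routing used to compute link loads and the fraction of load-free links (finding 2). The graph, supplier set, and unit demands are made up, and neither the load-driven/fixed-fraction supply schemes nor the cascading-failure dynamics of the paper are reproduced here.

```python
from collections import deque

# Tiny illustrative network: adjacency list of an undirected graph.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (1, 4), (2, 5)]
adj = {}
for u, v in edges:
    adj.setdefault(u, []).append(v)
    adj.setdefault(v, []).append(u)

suppliers = {0, 5}
users = {2, 3, 4}          # each user requests one unit of resource

def shortest_path_to_supplier(user):
    """BFS from the user until any supplier is reached; returns the node path."""
    prev, queue, seen = {user: None}, deque([user]), {user}
    while queue:
        node = queue.popleft()
        if node in suppliers:
            path = [node]
            while prev[path[-1]] is not None:
                path.append(prev[path[-1]])
            return path
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                prev[nxt] = node
                queue.append(nxt)
    return None

link_load = {tuple(sorted(e)): 0 for e in edges}
for user in users:
    path = shortest_path_to_supplier(user)
    for u, v in zip(path, path[1:]):
        link_load[tuple(sorted((u, v)))] += 1

max_load = max(link_load.values())
free_links = sum(1 for load in link_load.values() if load == 0)
print(f"max link load = {max_load}, load-free links = {free_links}/{len(edges)}")
```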

  7. A Comparative Study of Cyclic Oxidation and Sulfates-Induced Hot Corrosion Behavior of Arc-Sprayed Ni-Cr-Ti Coatings at Moderate Temperatures

    NASA Astrophysics Data System (ADS)

    Guo, Wenmin; Wu, Yuping; Zhang, Jianfeng; Hong, Sheng; Chen, Liyan; Qin, Yujiao

    2015-06-01

    The cyclic oxidation and sulfates-induced hot corrosion behaviors of a Ni-43Cr-0.3Ti arc-sprayed coating at 550-750 °C were characterized and compared in this study. In general, all the oxidation and hot corrosion kinetic curves of the coating followed a parabolic law, i.e., the weight of the specimens initially grew rapidly and then entered a stage of gradual growth. However, the initial stage of the hot corrosion process was approximately two times longer than that of the oxidation process, indicating that a longer time was required to form a protective scale in the former process. At 650 °C, the parabolic rate constant for the hot corrosion was 7.2 × 10⁻¹² g²/(cm⁴·s), approximately 1.7 times higher than that for the oxidation at the same temperature. The lower parabolic rate constant for the oxidation was mainly attributed to the formation of a protective oxide scale on the surface of corroded specimens, which was composed of a mixture of NiO, Cr2O3, and NiCr2O4. However, as liquid molten salts emerged during the hot corrosion, these protective oxides were dissolved and the coating degraded at an accelerated rate.
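
    Mass-gain kinetics under a parabolic law can be projected from the reported rate constant. The sketch below uses the standard form (Δm/A)² = k_p·t with the published hot-corrosion constant at 650 °C and an oxidation constant taken as roughly 1.7 times lower, per the abstract.

```python
import math

def parabolic_mass_gain(k_p, t_seconds):
    """Specific mass gain (g/cm^2) from the parabolic rate law (dm/A)^2 = k_p * t."""
    return math.sqrt(k_p * t_seconds)

k_hot_corrosion = 7.2e-12             # g^2 / (cm^4 s), reported at 650 C
k_oxidation = k_hot_corrosion / 1.7   # ~1.7 times lower, per the abstract

for hours in (10, 100, 500):
    t = hours * 3600.0
    print(f"{hours:4d} h   hot corrosion {parabolic_mass_gain(k_hot_corrosion, t) * 1e3:.3f} mg/cm^2"
          f"   oxidation {parabolic_mass_gain(k_oxidation, t) * 1e3:.3f} mg/cm^2")
```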

  8. Quantitative analysis of the thermal requirements for stepwise physical dormancy-break in seeds of the winter annual Geranium carolinianum (Geraniaceae)

    PubMed Central

    Gama-Arachchige, N. S.; Baskin, J. M.; Geneve, R. L.; Baskin, C. C.

    2013-01-01

    Background and Aims Physical dormancy (PY)-break in some annual plant species is a two-step process controlled by two different temperature and/or moisture regimes. The thermal time model has been used to quantify PY-break in several species of Fabaceae, but not to describe stepwise PY-break. The primary aims of this study were to quantify the thermal requirement for sensitivity induction by developing a thermal time model and to propose a mechanism for stepwise PY-breaking in the winter annual Geranium carolinianum. Methods Seeds of G. carolinianum were stored under dry conditions at different constant and alternating temperatures to induce sensitivity (step I). Sensitivity induction was analysed based on the thermal time approach using the Gompertz function. The effect of temperature on step II was studied by incubating sensitive seeds at low temperatures. Scanning electron microscopy, penetrometer techniques, and different humidity levels and temperatures were used to explain the mechanism of stepwise PY-break. Key Results The base temperature (Tb) for sensitivity induction was 17·2 °C and constant for all seed fractions of the population. Thermal time for sensitivity induction during step I in the PY-breaking process agreed with the three-parameter Gompertz model. Step II (PY-break) did not agree with the thermal time concept. Q10 values for the rate of sensitivity induction and PY-break were between 2·0 and 3·5 and between 0·02 and 0·1, respectively. The force required to separate the water gap palisade layer from the sub-palisade layer was significantly reduced after sensitivity induction. Conclusions Step I and step II in PY-breaking of G. carolinianum are controlled by chemical and physical processes, respectively. This study indicates the feasibility of applying the developed thermal time model to predict or manipulate sensitivity induction in seeds with two-step PY-breaking processes. The model is the first and most detailed one yet developed for sensitivity induction in PY-break. PMID:23456728
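
    A sketch of the thermal time bookkeeping for step I: degree-hours are accumulated above the reported base temperature of 17.2 °C and fed into a three-parameter Gompertz curve for the sensitive fraction of the seed population. The Gompertz parameters below are hypothetical, since the abstract does not report them.

```python
import numpy as np

def thermal_time_degree_hours(temps_c, hours_per_step, t_base=17.2):
    """Accumulate thermal time (degree-hours) above the base temperature."""
    temps = np.asarray(temps_c, dtype=float)
    return np.sum(np.maximum(temps - t_base, 0.0) * hours_per_step)

def gompertz(theta, a, b, c):
    """Three-parameter Gompertz curve: sensitive fraction as a function of thermal time."""
    return a * np.exp(-b * np.exp(-c * theta))

# Example: 30 days of dry storage at a constant 25 C, sampled hourly.
theta = thermal_time_degree_hours(np.full(30 * 24, 25.0), hours_per_step=1.0)
# Hypothetical Gompertz parameters, for illustration only.
print(f"thermal time = {theta:.0f} degree-hours, "
      f"sensitive fraction ~ {gompertz(theta, a=1.0, b=8.0, c=0.001):.2f}")
```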

  9. Effects of Biofuel and Variant Ambient Pressure on Flame Development and Emissions of Gasoline Engine.

    NASA Astrophysics Data System (ADS)

    Hashim, Akasha; Khalid, Amir; Sapit, Azwan; Samsudin, Dahrum

    2016-11-01

    Many technologies for reducing the exhaust emissions of spark ignition (SI) engines have been considered as improvements to the combustion process. Stricter emission legislation and the demand for lower fuel consumption must be given priority in order to satisfy emission quality requirements. In addition, an alternative fuel, a methanol-gasoline blend, is used as the working fluid in this study because of its higher octane number and its self-sustaining character, which can contribute a positive effect to the combustion process. The purpose of this study is to investigate the effects of methanol-gasoline fuel with different blending ratios and varying ambient pressures on flame development and emissions for a gasoline engine. An experimental study of the flame development of methanol-gasoline fuel was carried out in a constant volume chamber. The Schlieren optical visualization technique, a method used when high sensitivity is required to photograph flows of fluids of varying density, was used to capture the combustion images in the constant volume chamber, and the images were analysed through image processing. The results show that the combustion burn rate increased as the percentage of methanol in the gasoline increased. Thus, a high percentage of methanol in the blend gave a greater flame development area. Moreover, the emissions of CO, NOx, and HC were reduced as the percentage of methanol in the gasoline increased. Conversely, the emission of carbon dioxide (CO2) increased because the combustion process was enhanced.

  10. A rigorous approach to facilitate and guarantee the correctness of the genetic testing management in human genome information systems.

    PubMed

    Araújo, Luciano V; Malkowski, Simon; Braghetto, Kelly R; Passos-Bueno, Maria R; Zatz, Mayana; Pu, Calton; Ferreira, João E

    2011-12-22

    Recent medical and biological technology advances have stimulated the development of new testing systems that have been providing huge, varied amounts of molecular and clinical data. Growing data volumes pose significant challenges for information processing systems in research centers. Additionally, the routines of genomics laboratory are typically characterized by high parallelism in testing and constant procedure changes. This paper describes a formal approach to address this challenge through the implementation of a genetic testing management system applied to human genome laboratory. We introduced the Human Genome Research Center Information System (CEGH) in Brazil, a system that is able to support constant changes in human genome testing and can provide patients updated results based on the most recent and validated genetic knowledge. Our approach uses a common repository for process planning to ensure reusability, specification, instantiation, monitoring, and execution of processes, which are defined using a relational database and rigorous control flow specifications based on process algebra (ACP). The main difference between our approach and related works is that we were able to join two important aspects: 1) process scalability achieved through relational database implementation, and 2) correctness of processes using process algebra. Furthermore, the software allows end users to define genetic testing without requiring any knowledge about business process notation or process algebra. This paper presents the CEGH information system that is a Laboratory Information Management System (LIMS) based on a formal framework to support genetic testing management for Mendelian disorder studies. We have proved the feasibility and showed usability benefits of a rigorous approach that is able to specify, validate, and perform genetic testing using easy end user interfaces.

  11. A rigorous approach to facilitate and guarantee the correctness of the genetic testing management in human genome information systems

    PubMed Central

    2011-01-01

    Background Recent medical and biological technology advances have stimulated the development of new testing systems that have been providing huge, varied amounts of molecular and clinical data. Growing data volumes pose significant challenges for information processing systems in research centers. Additionally, the routines of genomics laboratory are typically characterized by high parallelism in testing and constant procedure changes. Results This paper describes a formal approach to address this challenge through the implementation of a genetic testing management system applied to human genome laboratory. We introduced the Human Genome Research Center Information System (CEGH) in Brazil, a system that is able to support constant changes in human genome testing and can provide patients updated results based on the most recent and validated genetic knowledge. Our approach uses a common repository for process planning to ensure reusability, specification, instantiation, monitoring, and execution of processes, which are defined using a relational database and rigorous control flow specifications based on process algebra (ACP). The main difference between our approach and related works is that we were able to join two important aspects: 1) process scalability achieved through relational database implementation, and 2) correctness of processes using process algebra. Furthermore, the software allows end users to define genetic testing without requiring any knowledge about business process notation or process algebra. Conclusions This paper presents the CEGH information system that is a Laboratory Information Management System (LIMS) based on a formal framework to support genetic testing management for Mendelian disorder studies. We have proved the feasibility and showed usability benefits of a rigorous approach that is able to specify, validate, and perform genetic testing using easy end user interfaces. PMID:22369688

  12. An approach to estimate body dimensions through constant body ratio benchmarks.

    PubMed

    Chao, Wei-Cheng; Wang, Eric Min-Yang

    2010-12-01

    Building a new anthropometric database is a difficult and costly job that requires considerable manpower and time. However, most designers and engineers do not know how to convert old anthropometric data into applicable new data with minimal errors and costs (Wang et al., 1999). To simplify the process of converting old anthropometric data into useful new data, this study analyzed the available data in paired body dimensions in an attempt to determine constant body ratio (CBR) benchmarks that are independent of gender and age. In total, 483 CBR benchmarks were identified and verified from 35,245 ratios analyzed. Additionally, 197 estimation formulae, taking as inputs 19 easily measured body dimensions, were built using 483 CBR benchmarks. Based on the results for 30 recruited participants, this study determined that the described approach is more accurate and cost-effective than alternative techniques. Copyright © 2010 Elsevier Ltd. All rights reserved.

  13. Semi-empirical master curve concept describing the rate capability of lithium insertion electrodes

    NASA Astrophysics Data System (ADS)

    Heubner, C.; Seeba, J.; Liebmann, T.; Nickol, A.; Börner, S.; Fritsch, M.; Nikolowski, K.; Wolter, M.; Schneider, M.; Michaelis, A.

    2018-03-01

    A simple semi-empirical master curve concept, describing the rate capability of porous insertion electrodes for lithium-ion batteries, is proposed. The model is based on the evaluation of the time constants of lithium diffusion in the liquid electrolyte and the solid active material. This theoretical approach is successfully verified by comprehensive experimental investigations of the rate capability of a large number of porous insertion electrodes with various active materials and design parameters. It turns out that the rate capability of all investigated electrodes follows a simple master curve governed by the time constant of the rate-limiting process. We demonstrate that the master curve concept can be used to determine optimum design criteria meeting specific requirements in terms of maximum gravimetric capacity for a desired rate capability. The model further reveals practical limits of the electrode design, confirming the empirically well-known and inevitable tradeoff between energy and power density.
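
    The abstract does not give the master-curve expression, but the underlying comparison of time constants can be illustrated with the standard diffusion time τ = L²/D for the electrolyte-filled electrode and the active-material particles; all numbers below are illustrative, not values from the paper.

```python
def diffusion_time_constant(length_m, diffusivity_m2_s):
    """Characteristic diffusion time tau = L^2 / D."""
    return length_m**2 / diffusivity_m2_s

# Illustrative values only: electrode thickness with an effective liquid-phase
# diffusivity, and particle radius with a solid-state diffusivity.
tau_electrolyte = diffusion_time_constant(80e-6, 3e-10)
tau_solid = diffusion_time_constant(5e-6, 1e-14)

limiting = max(tau_electrolyte, tau_solid)
print(f"electrolyte tau = {tau_electrolyte:8.1f} s, solid tau = {tau_solid:8.1f} s")
print(f"rate-limiting process: {'solid diffusion' if tau_solid > tau_electrolyte else 'electrolyte diffusion'}")
# Rough guide: diffusion limitation sets in when the discharge time 3600/C
# becomes comparable to the limiting time constant.
print(f"C-rate where this time constant matters: ~{3600.0 / limiting:.1f} C")
```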

  14. Toward milli-Newton electro- and magneto-static microactuators

    NASA Technical Reports Server (NTRS)

    Fan, Long-Sheng

    1993-01-01

    Microtechnologies can potentially push integrated electro- and magnetostatic actuators toward the regime where constant forces in the order of milli-Newton (or torques in the order of micro-Newton meter) can be generated with constant inputs within a volume of 1.0 x 1.0 x 0.02 mm with 'conventional' technology. 'Micro' actuators are, by definition, actuators with dimensions confined within a millimeter cube. Integrated microactuators based on electrostatics typically have force/torque in the order of sub-micro-Newton (sub-nano-Newton meter). These devices are capable of moving small objects at MHz frequencies. On the other hand, suppose we want to move a one cubic millimeter object around with 100 G acceleration; a few milli-Newton force will be required. Thus, milli-Newton microactuators are very desirable for some immediate applications, and it challenges micromechanical researchers to develop new process technologies, designs, and materials toward this goal.
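
    A Maxwell-stress estimate shows what field is needed to reach the milli-Newton target over the stated 1.0 x 1.0 mm footprint; the gap value is hypothetical, and the estimate ignores the comb/plate layout, travel range, and breakdown limits that make this hard in practice.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def electrostatic_pressure(field_v_per_m):
    """Attractive Maxwell stress between electrodes: 0.5 * eps0 * E^2 (in Pa)."""
    return 0.5 * EPS0 * field_v_per_m**2

# Target from the abstract: ~1 mN of force from a 1.0 x 1.0 mm footprint.
target_force = 1e-3            # N
footprint = 1e-3 * 1e-3        # m^2
required_pressure = target_force / footprint
required_field = math.sqrt(2.0 * required_pressure / EPS0)

gap = 2e-6                     # hypothetical electrode gap, m
print(f"required pressure : {required_pressure:8.0f} Pa")
print(f"required field    : {required_field:.2e} V/m "
      f"(~{required_field * gap:.0f} V across a {gap * 1e6:.0f} um gap)")
```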

  15. A facile growth mechanism, structural, optical, dielectric and electrical properties of ZnSe nanosphere via hydrothermal process

    NASA Astrophysics Data System (ADS)

    Javed, Qurat-Ul-Ain; Baqi, Sabah; Abbas, Hussain; Bibi, Maryam

    2017-02-01

    The hydrothermal method was chosen as a convenient route to fabricate zinc selenide (ZnSe) nanoparticle materials. The prepared nanospheres were characterized using X-ray diffraction (XRD) and scanning electron microscopy (SEM), and their optical and electrical properties were measured using UV-visible spectroscopy and an LCR meter. It was found that the pure ZnSe nanoparticles have a zinc blende structure with a crystallite size of 10.91 nm, a spherical form with an average diameter of 35 nm (before sonication) and 18 nm (after sonication), and a wide band gap of 4.28 eV. It was observed that the dielectric constant and dielectric loss decrease with increasing frequency, while the AC conductivity increases with frequency. Such nanostructures can be used effectively in optoelectronic devices such as UV detectors and in devices where high-dielectric-constant materials are required.

  16. Prediction of Chain Propagation Rate Constants of Polymerization Reactions in Aqueous NIPAM/BIS and VCL/BIS Systems.

    PubMed

    Kröger, Leif C; Kopp, Wassja A; Leonhard, Kai

    2017-04-06

    Microgels have a wide range of possible applications and are therefore studied with increasing interest. Nonetheless, the microgel synthesis process and some of the resulting properties of the microgels, such as the cross-linker distribution within the microgels, are not yet fully understood. An in-depth understanding of the synthesis process is crucial for designing tailored microgels with desired properties. In this work, rate constants and reaction enthalpies of chain propagation reactions in aqueous N-isopropylacrylamide/N,N'-methylenebisacrylamide and aqueous N-vinylcaprolactam/N,N'-methylenebisacrylamide systems are calculated to identify the possible sources of an inhomogeneous cross-linker distribution in the resulting microgels. Gas-phase reaction rate constants are calculated from B2PLYPD3/aug-cc-pVTZ energies and B3LYPD3/tzvp geometries and frequencies. Then, solvation effects based on COSMO-RS are incorporated into the rate constants to obtain the desired liquid-phase reaction rate constants. The rate constants agree with experiments within a factor of 2-10, and the reaction enthalpies deviate less than 5 kJ/mol. Further, the effect of rate constants on the microgel growth process is analyzed, and it is shown that differences in the magnitude of the reaction rate constants are a source of an inhomogeneous cross-linker distribution within the resulting microgel.
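
    The study computes rate constants from quantum-chemical energies and frequencies with COSMO-RS solvation corrections; none of that machinery is reproduced here. The sketch below only shows the standard Eyring (transition-state theory) evaluation that converts an activation free energy into a rate constant, with a hypothetical barrier.

```python
import math

KB = 1.380649e-23   # Boltzmann constant, J/K
H = 6.62607015e-34  # Planck constant, J s
R = 8.314462618     # gas constant, J/(mol K)

def eyring_rate_constant(delta_g_activation_j_mol, temperature_k=298.15):
    """Transition-state-theory rate constant k = (kB*T/h) * exp(-dG_act / RT)."""
    return (KB * temperature_k / H) * math.exp(-delta_g_activation_j_mol / (R * temperature_k))

# Hypothetical activation free energy for a propagation step, for illustration only.
dg = 40e3  # J/mol
# For a bimolecular step with dG_act referenced to a 1 M standard state,
# the same numerical value is in units of 1/(M s).
print(f"k(298 K) ~ {eyring_rate_constant(dg):.3e}")
```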

  17. Delegation versus empowerment: what, how, and is there a difference?

    PubMed

    McConnell, C R

    1995-09-01

    Delegation--or empowerment--represents the essence of the supervisory task: getting things done through people. The terms are no different from each other; empowerment is simply delegation done properly. The process still fails for the same old reasons, and failure still causes the same kinds of problems. Delegation or empowerment involves authority; it is authority that is delegated, not responsibility, as commonly claimed. Under either name it is an imperfect process requiring subjective judgments and chronic risk. Although either label is acceptable--the few differences between delegation and empowerment are semantic only--the significant constant that must be present is a sense of task ownership on the part of the empowered employee.

  18. Iterative categorization (IC): a systematic technique for analysing qualitative data

    PubMed Central

    2016-01-01

    Abstract The processes of analysing qualitative data, particularly the stage between coding and publication, are often vague and/or poorly explained within addiction science and research more broadly. A simple but rigorous and transparent technique for analysing qualitative textual data, developed within the field of addiction, is described. The technique, iterative categorization (IC), is suitable for use with inductive and deductive codes and can support a range of common analytical approaches, e.g. thematic analysis, Framework, constant comparison, analytical induction, content analysis, conversational analysis, discourse analysis, interpretative phenomenological analysis and narrative analysis. Once the data have been coded, the only software required is a standard word processing package. Worked examples are provided. PMID:26806155

  19. Force and Stress along Simulated Dissociation Pathways of Cucurbituril-Guest Systems.

    PubMed

    Velez-Vega, Camilo; Gilson, Michael K

    2012-03-13

    The field of host-guest chemistry provides computationally tractable yet informative model systems for biomolecular recognition. We applied molecular dynamics simulations to study the forces and mechanical stresses associated with forced dissociation of aqueous cucurbituril-guest complexes with high binding affinities. First, the unbinding transitions were modeled with constant velocity pulling (steered dynamics) and a soft spring constant, to model atomic force microscopy (AFM) experiments. The computed length-force profiles yield rupture forces in good agreement with available measurements. We also used steered dynamics with high spring constants to generate paths characterized by a tight control over the specified pulling distance; these paths were then equilibrated via umbrella sampling simulations and used to compute time-averaged mechanical stresses along the dissociation pathways. The stress calculations proved to be informative regarding the key interactions determining the length-force profiles and rupture forces. In particular, the unbinding transition of one complex is found to be a stepwise process, which is initially dominated by electrostatic interactions between the guest's ammoniums and the host's carbonyl groups, and subsequently limited by the extraction of the guest's bulky bicyclooctane moiety; the latter step requires some bond stretching at the cucurbituril's extraction portal. Conversely, the dissociation of a second complex with a more slender guest is mainly driven by successive electrostatic interactions between the different guest's ammoniums and the host's carbonyl groups. The calculations also provide information on the origins of thermodynamic irreversibilities in these forced dissociation processes.

  20. The Langmuir isotherm: a commonly applied but misleading approach for the analysis of protein adsorption behavior.

    PubMed

    Latour, Robert A

    2015-03-01

    The Langmuir adsorption isotherm provides one of the simplest and most direct methods to quantify an adsorption process. Because isotherm data from protein adsorption studies often appear to be fit well by the Langmuir isotherm model, estimates of protein binding affinity have often been made from its use despite the fact that none of the conditions required for a Langmuir adsorption process may be satisfied for this type of application. The physical events that cause protein adsorption isotherms to often provide a Langmuir-shaped isotherm can be explained as being due to changes in adsorption-induced spreading, reorientation, clustering, and aggregation of the protein on a surface as a function of solution concentration in contrast to being due to a dynamic equilibrium adsorption process, which is required for Langmuir adsorption. Unless the requirements of the Langmuir adsorption process can be confirmed, fitting of the Langmuir model to protein adsorption isotherm data to obtain thermodynamic properties, such as the equilibrium constant for adsorption and adsorption free energy, may provide erroneous values that have little to do with the actual protein adsorption process, and should be avoided. In this article, a detailed analysis of the Langmuir isotherm model is presented along with a quantitative analysis of the level of error that can arise in derived parameters when the Langmuir isotherm is inappropriately applied to characterize a protein adsorption process. © 2014 Wiley Periodicals, Inc.
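
    The article's warning can be illustrated numerically: data generated by a non-equilibrium, saturating process still fit the Langmuir form closely, so the fitted "equilibrium constant" carries no thermodynamic meaning. The sketch below (assuming NumPy and SciPy are available) uses synthetic data for that purpose.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, q_max, k_eq):
    """Langmuir isotherm: adsorbed amount q as a function of solution concentration c."""
    return q_max * k_eq * c / (1.0 + k_eq * c)

# Synthetic 'adsorption' data generated by a saturating process that is NOT
# a reversible Langmuir equilibrium (illustration only).
c = np.linspace(0.05, 10.0, 25)   # arbitrary concentration units
q_obs = 2.0 * (1.0 - np.exp(-0.8 * c)) + 0.02 * np.random.default_rng(0).normal(size=c.size)

(q_max_fit, k_fit), _ = curve_fit(langmuir, c, q_obs, p0=[1.0, 1.0])
rms = np.sqrt(np.mean((langmuir(c, q_max_fit, k_fit) - q_obs) ** 2))
print(f"fitted q_max = {q_max_fit:.2f}, fitted K = {k_fit:.2f}, RMS error = {rms:.3f}")
# A small residual here says nothing about whether K is a true equilibrium
# constant; the underlying process was not a Langmuir equilibrium at all.
```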

  1. Thermal requirements of Dermanyssus gallinae (De Geer, 1778) (Acari: Dermanyssidae).

    PubMed

    Tucci, Edna Clara; do Prado, Angelo P; de Araújo, Raquel Pires

    2008-01-01

    The thermal requirements for development of Dermanyssus gallinae were studied under laboratory conditions at 15, 20, 25, 30 and 35 degrees C, a 12h photoperiod and 60-85% RH. The thermal requirements for D. gallinae were as follows. Preoviposition: base temperature 3.4 degrees C, thermal constant (k) 562.85 degree-hours, determination coefficient (R(2)) 0.59, regression equation: Y= -0.006035 + 0.001777x. Egg: base temperature 10.60 degrees C, thermal constant (k) 689.65 degree-hours, determination coefficient (R(2)) 0.94, regression equation: Y= -0.015367 + 0.001450x. Larva: base temperature 9.82 degrees C, thermal constant (k) 464.91 degree-hours, determination coefficient (R(2)) 0.87, regression equation: Y= -0.021123 + 0.002151x. Protonymph: base temperature 10.17 degrees C, thermal constant (k) 504.49 degree-hours, determination coefficient (R(2)) 0.90, regression equation: Y= -0.020152 + 0.001982x. Deutonymph: base temperature 11.80 degrees C, thermal constant (k) 501.11 degree-hours, determination coefficient (R(2)) 0.99, regression equation: Y= -0.023555 + 0.001996x. The results obtained showed that 15 to 42 generations of Dermanyssus gallinae may occur during the year in the State of São Paulo, as estimated based on isotherm charts. Dermanyssus gallinae may develop continually in the State of São Paulo, with a population decrease in the winter. There were differences between the developmental stages of D. gallinae in relation to thermal requirements.
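
    The reported base temperatures and thermal constants follow from the linear regressions of development rate on temperature: the base temperature is where the fitted rate extrapolates to zero (negative intercept divided by slope) and the thermal constant is the reciprocal of the slope. The sketch below reproduces the published values from the regression coefficients to within rounding.

```python
# Development rate regressions of the form Y = a + b * x, with Y the
# development rate (1 / duration in hours) and x the temperature in C.
stages = {
    "preoviposition": (-0.006035, 0.001777),
    "egg":            (-0.015367, 0.001450),
    "larva":          (-0.021123, 0.002151),
    "protonymph":     (-0.020152, 0.001982),
    "deutonymph":     (-0.023555, 0.001996),
}

for stage, (a, b) in stages.items():
    base_temperature = -a / b      # temperature at which the fitted rate reaches zero
    thermal_constant = 1.0 / b     # degree-hours required above the base temperature
    print(f"{stage:15s}  Tb = {base_temperature:5.2f} C   k = {thermal_constant:6.1f} degree-hours")
```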

  2. Concepts, challenges, and successes in modeling thermodynamics of metabolism.

    PubMed

    Cannon, William R

    2014-01-01

    The modeling of the chemical reactions involved in metabolism is a daunting task. Ideally, the modeling of metabolism would use kinetic simulations, but these simulations require knowledge of the thousands of rate constants involved in the reactions. The measurement of rate constants is very labor intensive, and hence rate constants for most enzymatic reactions are not available. Consequently, constraint-based flux modeling has been the method of choice because it does not require the use of the rate constants of the law of mass action. However, this convenience also limits the predictive power of constraint-based approaches in that the law of mass action is used only as a constraint, making it difficult to predict metabolite levels or energy requirements of pathways. An alternative to both of these approaches is to model metabolism using simulations of states rather than simulations of reactions, in which the state is defined as the set of all metabolite counts or concentrations. While kinetic simulations model reactions based on the likelihood of the reaction derived from the law of mass action, states are modeled based on likelihood ratios of mass action. Both approaches provide information on the energy requirements of metabolic reactions and pathways. However, modeling states rather than reactions has the advantage that the parameters needed to model states (chemical potentials) are much easier to determine than the parameters needed to model reactions (rate constants). Herein, we discuss recent results, assumptions, and issues in using simulations of state to model metabolism.

  3. Ratcheting Behavior of a Titanium-Stabilized Interstitial Free Steel

    NASA Astrophysics Data System (ADS)

    De, P. S.; Chakraborti, P. C.; Bhattacharya, B.; Shome, M.; Bhattacharjee, D.

    2013-05-01

    Engineering stress-control ratcheting behavior of a titanium-stabilized interstitial free steel has been studied under different combinations of mean stress and stress amplitude at a stress rate of 250 MPa s⁻¹. Tests have been done up to 29.80 pct true ratcheting strain evolution in the specimens at three maximum stress levels. It is observed that this amount of ratcheting strain is more than the uniform tensile strain at a strain rate of 10⁻³ s⁻¹ and evolves without showing tensile instability of the specimens. In the process of ratcheting strain evolution at constant maximum stresses, the effect of increasing stress amplitude is found to be more than that of increasing the mean stress component. Further, the constant maximum stress ratcheting test results reveal that the number of cycles (N) required for 29.80 pct true ratcheting strain evolution exponentially increases with increase of stress ratio (R). Post-ratcheting tensile test results showing increase of strength and linear decrease in ductility with increasing R at different constant maximum stresses indicate that stress parameters used during ratcheting tests influence the size of the dislocation cell structure of the steel even with the same amount of ratcheting strain evolution. It is postulated that during ratcheting fatigue, damage becomes greater with the increase of R for any fixed amount of ratcheting strain evolution at constant maximum stress.

  4. Influences of diesel pilot injection on ethanol autoignition - a numerical analysis

    NASA Astrophysics Data System (ADS)

    Burnete, N. V.; Burnete, N.; Jurchis, B.; Iclodean, C.

    2017-10-01

    The aim of this study is to highlight the influences of the diesel pilot quantity and injection timing on the autoignition of ethanol and on the pollutant emissions resulting from the combustion process. The combustion concept presented in this paper requires the injection of a small quantity of diesel fuel in order to create the required autoignition conditions for ethanol. The combustion of the diesel droplets injected into the combustion chamber leads to the creation of high temperature locations that favour the autoignition of ethanol. However, due to the high vaporization enthalpy of ethanol and its better distribution inside the combustion chamber, the peak temperature values are reduced. Due to the lower temperature values and the high burning velocity of ethanol (combined with the fact that there are multiple ignition sources), the conditions required for the formation of nitric oxides are no longer achieved, thus leading to significantly lower NOx emissions. In this way, the benefits of the diesel engine and of constant volume combustion are combined to enable a more efficient and environmentally friendly combustion process.

  5. Distributed cooperating processes in a mobile robot control system

    NASA Technical Reports Server (NTRS)

    Skillman, Thomas L., Jr.

    1988-01-01

    A mobile inspection robot has been proposed for the NASA Space Station. It will be a free flying autonomous vehicle that will leave a berthing unit to accomplish a variety of inspection tasks around the Space Station, and then return to its berth to recharge, refuel, and transfer information. The Flying Eye robot will receive voice communication to change its attitude, move at a constant velocity, and move to a predefined location along a self generated path. This mobile robot control system requires integration of traditional command and control techniques with a number of AI technologies. Speech recognition, natural language understanding, task and path planning, sensory abstraction and pattern recognition are all required for successful implementation. The interface between the traditional numeric control techniques and the symbolic processing to the AI technologies must be developed, and a distributed computing approach will be needed to meet the real time computing requirements. To study the integration of the elements of this project, a novel mobile robot control architecture and simulation based on the blackboard architecture was developed. The control system operation and structure is discussed.

  6. The Arrhenius equation revisited.

    PubMed

    Peleg, Micha; Normand, Mark D; Corradini, Maria G

    2012-01-01

    The Arrhenius equation has been widely used as a model of the temperature effect on the rate of chemical reactions and biological processes in foods. Since the model requires that the rate increase monotonically with temperature, its applicability to enzymatic reactions and microbial growth, which have optimal temperature, is obviously limited. This is also true for microbial inactivation and chemical reactions that only start at an elevated temperature, and for complex processes and reactions that do not follow fixed order kinetics, that is, where the isothermal rate constant, however defined, is a function of both temperature and time. The linearity of the Arrhenius plot, that is, Ln[k(T)] vs. 1/T, where T is in K, has been traditionally considered evidence of the model's validity. Consequently, the slope of the plot has been used to calculate the reaction or processes' "energy of activation," usually without independent verification. Many experimental and simulated rate constant vs. temperature relationships that yield linear Arrhenius plots can also be described by the simpler exponential model Ln[k(T)/k(T_reference)] = c(T - T_reference). The use of the exponential model or similar empirical alternative would eliminate the confusing temperature axis inversion, the unnecessary compression of the temperature scale, and the need for kinetic assumptions that are hard to affirm in food systems. It would also eliminate the reference to the Universal gas constant in systems where a "mole" cannot be clearly identified. Unless proven otherwise by independent experiments, one cannot dismiss the notion that the apparent linearity of the Arrhenius plot in many food systems is due to a mathematical property of the model's equation rather than to the existence of a temperature independent "energy of activation." If T + 273.16 (with T in °C) in the Arrhenius model's equation is replaced by T + b, where the numerical value of the arbitrary constant b is substantially larger than T and T_reference, the plot of Ln k(T) vs. 1/(T+b) will always appear almost perfectly linear. Both the modified Arrhenius model version having the arbitrary constant b, Ln[k(T)/k(T_reference)] = a[1/(T_reference + b) - 1/(T + b)], and the exponential model can faithfully describe temperature dependencies traditionally described by the Arrhenius equation without the assumption of a temperature independent "energy of activation." This is demonstrated mathematically and with computer simulations, and with reprocessed classical kinetic data and published food results.
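
    The claim that an exponential model is practically indistinguishable from the Arrhenius model over typical experimental ranges can be checked directly: generate rate constants from an Arrhenius expression and fit the exponential form. The activation energy, reference rate, and temperature window below are illustrative.

```python
import numpy as np

R = 8.314  # J/(mol K)

def arrhenius_k(temp_c, k_ref, ea_j_mol, t_ref_c=100.0):
    """Rate constant from the Arrhenius model, referenced so that k(t_ref_c) = k_ref."""
    t, t_ref = temp_c + 273.15, t_ref_c + 273.15
    return k_ref * np.exp(-(ea_j_mol / R) * (1.0 / t - 1.0 / t_ref))

temps = np.linspace(80.0, 120.0, 9)                   # a typical processing window, C
k = arrhenius_k(temps, k_ref=0.1, ea_j_mol=90e3)      # illustrative Ea = 90 kJ/mol

# Fit the simpler exponential model: ln k = ln k(Tref) + c * (T - Tref).
c, ln_k_ref = np.polyfit(temps - 100.0, np.log(k), 1)
k_exp = np.exp(ln_k_ref + c * (temps - 100.0))

worst = 100.0 * np.max(np.abs(k_exp - k) / k)
print(f"exponential-model slope c = {c:.4f} 1/C")
print(f"worst deviation from the Arrhenius values over 80-120 C: {worst:.1f}%")
```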

  7. Constant versus variable response signal delays in speed-accuracy trade-offs: effects of advance preparation for processing time.

    PubMed

    Miller, Jeff; Sproesser, Gudrun; Ulrich, Rolf

    2008-07-01

    In two experiments, we used response signals (RSs) to control processing time and trace out speed-accuracy trade-off (SAT) functions in a difficult perceptual discrimination task. Each experiment compared performance in blocks of trials with constant and, hence, temporally predictable RS lags against performance in blocks with variable, unpredictable RS lags. In both experiments, essentially equivalent SAT functions were observed with constant and variable RS lags. We conclude that there is little effect of advance preparation for a given processing time, suggesting that the discrimination mechanisms underlying SAT functions are driven solely by bottom-up information processing in perceptual discrimination tasks.

  8. Cortical Specializations Underlying Fast Computations

    PubMed Central

    Volgushev, Maxim

    2016-01-01

    The time course of behaviorally relevant environmental events sets temporal constraints on neuronal processing. How does the mammalian brain make use of the increasingly complex networks of the neocortex, while making decisions and executing behavioral reactions within a reasonable time? The key parameter determining the speed of computations in neuronal networks is a time interval that neuronal ensembles need to process changes at their input and communicate results of this processing to downstream neurons. Theoretical analysis identified basic requirements for fast processing: use of neuronal populations for encoding, background activity, and fast onset dynamics of action potentials in neurons. Experimental evidence shows that populations of neocortical neurons fulfil these requirements. Indeed, they can change firing rate in response to input perturbations very quickly, within 1 to 3 ms, and encode high-frequency components of the input by phase-locking their spiking to frequencies up to 300 to 1000 Hz. This implies that the time unit of computations by cortical ensembles is only a few milliseconds (1 to 3 ms), which is considerably shorter than the membrane time constant of individual neurons. The ability of cortical neuronal ensembles to communicate on a millisecond time scale allows for complex, multiple-step processing and precise coordination of neuronal activity in parallel processing streams, while keeping the speed of behavioral reactions within environmentally set temporal constraints. PMID:25689988

  9. Constant-current control method of multi-function electromagnetic transmitter.

    PubMed

    Xue, Kaichang; Zhou, Fengdao; Wang, Shuang; Lin, Jun

    2015-02-01

    Based on the requirements of controlled source audio-frequency magnetotelluric, DC resistivity, and induced polarization, a constant-current control method is proposed. Using the required current waveforms in prospecting as a standard, the causes of current waveform distortion and current waveform distortion's effects on prospecting are analyzed. A cascaded topology is adopted to achieve 40 kW constant-current transmitter. The responsive speed and precision are analyzed. According to the power circuit of the transmitting system, the circuit structure of the pulse width modulation (PWM) constant-current controller is designed. After establishing the power circuit model of the transmitting system and the PWM constant-current controller model, analyzing the influence of ripple current, and designing an open-loop transfer function according to the amplitude-frequency characteristic curves, the parameters of the PWM constant-current controller are determined. The open-loop transfer function indicates that the loop gain is no less than 28 dB below 160 Hz, which assures the responsive speed of the transmitting system; the phase margin is 45°, which assures the stabilization of the transmitting system. Experimental results verify that the proposed constant-current control method can keep the control error below 4% and can effectively suppress load change caused by the capacitance of earth load.

  10. Constant-current control method of multi-function electromagnetic transmitter

    NASA Astrophysics Data System (ADS)

    Xue, Kaichang; Zhou, Fengdao; Wang, Shuang; Lin, Jun

    2015-02-01

    Based on the requirements of controlled source audio-frequency magnetotelluric, DC resistivity, and induced polarization, a constant-current control method is proposed. Using the required current waveforms in prospecting as a standard, the causes of current waveform distortion and current waveform distortion's effects on prospecting are analyzed. A cascaded topology is adopted to achieve 40 kW constant-current transmitter. The responsive speed and precision are analyzed. According to the power circuit of the transmitting system, the circuit structure of the pulse width modulation (PWM) constant-current controller is designed. After establishing the power circuit model of the transmitting system and the PWM constant-current controller model, analyzing the influence of ripple current, and designing an open-loop transfer function according to the amplitude-frequency characteristic curves, the parameters of the PWM constant-current controller are determined. The open-loop transfer function indicates that the loop gain is no less than 28 dB below 160 Hz, which assures the responsive speed of the transmitting system; the phase margin is 45°, which assures the stabilization of the transmitting system. Experimental results verify that the proposed constant-current control method can keep the control error below 4% and can effectively suppress load change caused by the capacitance of earth load.

  11. Modelling and analysis of creep deformation and fracture in a 1 Cr 1/2 Mo ferritic steel

    NASA Astrophysics Data System (ADS)

    Dyson, B. F.; Osgerby, D.

    A quantitative model, based upon a proposed new mechanism of creep deformation in particle-hardened alloys, has been validated by analysis of creep data from a 13CrMo 4 4 (1Cr 1/2 Mo) material tested under a range of stresses and temperatures. The methodology that has been used to extract the model parameters quantifies, as a first approximation, only the main degradation (damage) processes - in the case of the 1CR 1/2 Mo steel, these are considered to be the parallel operation of particle-coarsening and a progressively increasing stress due to a constant-load boundary condition. These 'global' model parameters can then be modified (only slightly) as required to obtain a detailed description and 'fit' to the rupture lifetime and strain/time trajectory of any individual test. The global model parameter approach may be thought of as predicting average behavior and the detailed fits as taking account of uncertainties (scatter) due to variability in the material. Using the global parameter dataset, predictions have also been made of behavior under biaxial stressing; constant straining rate; constant total strain (stress relaxation) and the likely success or otherwise of metallographic and mechanical remanent lifetime procedures.

  12. Effect of vibrationally excited oxygen on ozone production in the stratosphere

    NASA Technical Reports Server (NTRS)

    Patten, K. O., Jr.; Connell, P. S.; Kinnison, D. E.; Wuebbles, D. J.; Slanger, T. G.; Froidevaux, L.

    1994-01-01

    Photolysis of vibrationally excited oxygen produced by ultraviolet photolysis of ozone in the upper stratosphere is incorporated into the Lawrence Livermore National Laboratory two-dimensional zonally averaged chemical-radiative-transport model of the troposphere and stratosphere. The importance of this potential contributor of odd oxygen to the concentration of ozone is evaluated based on recent information on vibrational distributions of excited oxygen and on preliminary studies of energy transfer from the excited oxygen. When energy transfer rate constants similar to those of Toumi et al. (1991) are assumed, increases in model ozone concentrations of up to 4.0% in the upper stratosphere are found, and the model ozone concentrations are found to agree slightly better with measurements, including recent data from the Upper Atmosphere Research Satellite. However, the ozone increase is only 0.3% when the larger energy transfer rate constants indicated by recent experimental work are applied to the model. An ozone increase of 1% at 50 km requires energy transfer rate constants one-twentieth those of the preliminary observations. As a result, vibrationally excited oxygen processes probably do not contribute enough ozone to be significant in models of the upper stratosphere.

  13. MBE Growth, Characterization and Electronic Device Processing of HgCdTe, HgZnTe, Related Heterojunctions and HgCdTe-CdTe Superlattices

    DTIC Science & Technology

    1987-06-30

    Text excerpts from the report: "... metal lattice sites using liquid phase epitaxy. However, group V elements have not been successfully incorporated into MBE-grown HgCdTe layers as ..."; "... the narrow-gap side was first made with a thickness of 2 to 3 µm before the growth condi[tions] ... Both groups used the liquid phase epitaxy (LPE) growth technique and ..."; "... higher quasi-equilibrium pressure than with the shutter opened. This study shows that with the particular geometry used the time constant required ..."

  14. A fast passive and planar liquid sample micromixer.

    PubMed

    Melin, Jessica; Gimenéz, Guillem; Roxhed, Niclas; van der Wijngaart, Wouter; Stemme, Göran

    2004-06-01

    A novel microdevice for passively mixing liquid samples based on surface tension and a geometrical mixing chamber is presented. Due to the laminar flow regime on the microscale, mixing becomes difficult if not impossible. We present a micromixer where a constantly changing time dependent flow pattern inside a two sample liquid plug is created as the plug simply passes through the planar mixer chamber. The device requires no actuation during mixing and is fabricated using a single etch process. The effective mixing of two coloured liquid samples is demonstrated.

  15. Upper and lower bounds for semi-Markov reliability models of reconfigurable systems

    NASA Technical Reports Server (NTRS)

    White, A. L.

    1984-01-01

    This paper determines the information required about system recovery to compute the reliability of a class of reconfigurable systems. Upper and lower bounds are derived for these systems. The class consists of those systems that satisfy five assumptions: the components fail independently at a low constant rate, fault occurrence and system reconfiguration are independent processes, the reliability model is semi-Markov, the recovery functions which describe system configuration have small means and variances, and the system is well designed. The bounds are easy to compute, and examples are included.

  16. Assessing Chemical Retention Process Controls in Ponds

    NASA Astrophysics Data System (ADS)

    Torgersen, T.; Branco, B.; John, B.

    2002-05-01

    Small ponds are a ubiquitous component of the landscape and have earned a reputation as effective chemical retention devices. The most common characterization of pond chemical retention is the retention coefficient, R_i = ([C_i]_inflow - [C_i]_outflow)/[C_i]_inflow. However, this parameter varies widely in one pond with time and among ponds. We have re-evaluated literature-reported (Borden et al., 1998) monthly average retention coefficients for two ponds in North Carolina. Employing a simple first order model that includes water residence time, the first order process responsible for species removal has been separated from the water residence time over which it acts. Assuming the rate constant for species removal is constant within the pond (arguable at least), the annual average rate constant for species removal is generated. Using the annual mean rate constant for species removal and monthly water residence times results in a significantly enhanced predictive capability for Davis Pond during most months of the year. Predictive ability remains poor in Davis Pond during winter/unstratified periods when internal loading of P and N results in low to negative chemical retention. Predictive ability for Piedmont Pond (which has numerous negative chemical retention periods) is improved but not to the same extent as for Davis Pond. In Davis Pond, the rate constant for sediment removal (each month) is larger than the rate constant for water and explains the good predictability of sediment retention. However, the removal rate constant for P and N is smaller than the removal rate constant for sediment (longer water column residence time for P and N than for sediment). Thus sedimentation is not an overall control on nutrient retention. Additionally, the removal rate constant for P is smaller than that for TOC (TOC is not the dominant removal process for P), and N is removed more slowly than P (indicating different in-pond controls). For Piedmont Pond, sediment removal rate constants are smaller than the removal rate constant for water, indicating significant sediment resuspension episodes. It appears that these sediment resuspension events are aperiodic and control the loading and the chemical retention capability of Piedmont Pond for N, P, and TOC. These calculated rate constants reflect the differing internal loading processes for each component and suggest means and mechanisms for the use of ponds in water quality management.
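
    The abstract does not state the exact first-order formulation; the sketch below assumes a well-mixed, steady-state pond, for which R = kτ/(1 + kτ). It inverts that relation to get a removal rate constant from each monthly retention coefficient and residence time, then uses the annual mean k to predict monthly retention. The monthly values are hypothetical.

```python
def rate_constant_from_retention(retention, residence_time_d):
    """Invert the well-mixed, steady-state first-order model R = k*tau / (1 + k*tau)."""
    return retention / ((1.0 - retention) * residence_time_d)

def retention_from_rate_constant(k_per_d, residence_time_d):
    return k_per_d * residence_time_d / (1.0 + k_per_d * residence_time_d)

# Hypothetical monthly observations: (retention coefficient, residence time in days).
monthly = [(0.55, 12.0), (0.40, 6.0), (0.25, 3.0), (0.60, 15.0)]

k_values = [rate_constant_from_retention(r, tau) for r, tau in monthly]
k_mean = sum(k_values) / len(k_values)
print(f"annual mean removal rate constant ~ {k_mean:.3f} 1/d")
for r_obs, tau in monthly:
    print(f"tau = {tau:4.1f} d   observed R = {r_obs:.2f}   "
          f"predicted R = {retention_from_rate_constant(k_mean, tau):.2f}")
```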

  17. 40 CFR 211.210-2 - Labeling requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... constant); (ii) Ear cup volume or shape; (iii) Mounting of ear cup on head band; (iv) Ear cushion; (v... tension (spring constant); (ii) Mounting of plug on head band; (iii) Shape of plug; (iv) Material...

  18. Continuous Adaptive Population Reduction (CAPR) for Differential Evolution Optimization.

    PubMed

    Wong, Ieong; Liu, Wenjia; Ho, Chih-Ming; Ding, Xianting

    2017-06-01

    Differential evolution (DE) has been applied extensively in drug combination optimization studies in the past decade. It allows for identification of desired drug combinations with minimal experimental effort. This article proposes an adaptive population-sizing method for the DE algorithm. Our new method presents improvements in terms of efficiency and convergence over the original DE algorithm and the constant stepwise population reduction-based DE algorithm, which would lead to a reduced number of cells and animals required to identify an optimal drug combination. The method continuously adjusts the reduction of the population size in accordance with the stage of the optimization process. Our adaptive scheme limits the population reduction to occur only at the exploitation stage. We believe that continuously adjusting for a more effective population size during the evolutionary process is the major reason for the significant improvement in the convergence speed of the DE algorithm. The performance of the method is evaluated through a set of unimodal and multimodal benchmark functions. In combination with self-adaptive schemes for mutation and crossover constants, this adaptive population reduction method can help shed light on the future direction of a completely parameter-tuning-free self-adaptive DE algorithm.
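    A minimal sketch of the general idea: a DE/rand/1/bin loop that shrinks its population once the search appears to be in an exploitation stage, here detected by a crude stagnation heuristic. This illustrates adaptive population reduction in general, not the CAPR rule of the article, and all parameter values are placeholders.

```python
import random

def de_with_adaptive_population(objective, bounds, np_init=60, np_min=12,
                                max_gens=200, F=0.5, CR=0.9):
    """DE/rand/1/bin with a simple adaptive population-reduction heuristic:
    when the best value stagnates, the worst member is dropped (down to a
    floor np_min). Illustrative only; not the CAPR scheme from the paper."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_init)]
    fit = [objective(x) for x in pop]
    best_prev = min(fit)
    for _ in range(max_gens):
        for i in range(len(pop)):
            a, b, c = random.sample([j for j in range(len(pop)) if j != i], 3)
            jrand = random.randrange(dim)
            trial = [pop[a][k] + F * (pop[b][k] - pop[c][k])
                     if (random.random() < CR or k == jrand) else pop[i][k]
                     for k in range(dim)]
            trial = [min(max(v, lo), hi) for v, (lo, hi) in zip(trial, bounds)]
            f = objective(trial)
            if f <= fit[i]:
                pop[i], fit[i] = trial, f
        best = min(fit)
        # exploitation heuristic: little improvement -> drop the worst member
        if best_prev - best < 1e-8 and len(pop) > np_min:
            worst = max(range(len(pop)), key=fit.__getitem__)
            pop.pop(worst); fit.pop(worst)
        best_prev = best
    i_best = min(range(len(pop)), key=fit.__getitem__)
    return pop[i_best], fit[i_best]

# Example: 5-D sphere function
best_x, best_f = de_with_adaptive_population(
    lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 5)
print(best_f)
```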

  19. Clinical Course of Homozygous Hemoglobin Constant Spring in Pediatric Patients.

    PubMed

    Komvilaisak, Patcharee; Jetsrisuparb, Arunee; Fucharoen, Goonnapa; Komwilaisak, Ratana; Jirapradittha, Junya; Kiatchoosakun, Pakaphan

    2018-04-17

    Hemoglobin (Hb) Constant Spring is an alpha-globin gene variant due to a mutation of the stop codon resulting in the elongation of the encoded polypeptide from 141 to 172 amino acid residues. Patients with homozygous Hb Constant Spring are usually mildly anemic. We retrospectively describe clinical manifestations, diagnosis, laboratory investigations, treatment, and associated findings in pediatric patients with homozygous Hb Constant Spring followed up at Srinagarind Hospital. Sixteen pediatric cases (5 males and 11 females) were diagnosed in utero (n=6) or postnatally (n=10). Eleven cases were diagnosed with homozygous Hb Constant Spring, 4 with homozygous Hb Constant Spring with heterozygous Hb E, and 1 with homozygous Hb Constant Spring with homozygous Hb E. Three cases were delivered preterm. Six patients had low birth weights. Clinical manifestations included fetal anemia in 6 cases, hepatomegaly in 1 case, hepatosplenomegaly in 2 cases, and splenomegaly in 1 case. Twelve cases exhibited early neonatal jaundice, 9 of which required phototherapy. Six cases received red cell transfusions: one transfusion (n=3) or more than one (n=3). After the first few months of life, almost all patients had mild microcytic hypochromic anemia and an increased reticulocyte count with a wide red cell distribution width (RDW), but no longer required red cell transfusion. At 1 to 2 years of age, some patients still had mild microcytic hypochromic anemia and some had normocytic hypochromic anemia with Hb around 10 g/dL, an increased reticulocyte count, and wide RDW. Associated findings included hypothyroidism (2), congenital heart diseases (4), genitourinary abnormalities (3), gastrointestinal abnormalities (2), and developmental delay (1). Pediatric patients with homozygous Hb Constant Spring developed severe anemia in utero and up to the age of 2 to 3 months postnatally, requiring blood transfusions. Subsequently, their anemia was mild with no evidence of hepatosplenomegaly. Their Hb level was above 9 g/dL with hypochromic microcytic blood pictures as well as wide RDW. Blood transfusions have not been necessary since then.

  20. An Analytic Form for the Interresponse Time Analysis of Shull, Gaynor, and Grimes with Applications and Extensions

    ERIC Educational Resources Information Center

    Kessel, Robert; Lucke, Robert L.

    2008-01-01

    Shull, Gaynor and Grimes advanced a model for interresponse time distribution using probabilistic cycling between a higher-rate and a lower-rate response process. Both response processes are assumed to be random in time with a constant rate. The cycling between the two processes is assumed to have a constant transition probability that is…
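    A simplified sketch of the kind of model described above: interresponse times generated by probabilistic cycling between two constant-rate (exponential) response processes. The closed-form density used here is a plain two-component exponential mixture with made-up parameters; the analytic form derived in the article is more detailed.

```python
import math
import random

def irt_density(t, p_high=0.3, rate_high=2.0, rate_low=0.2):
    """Simplified interresponse-time (IRT) density as a mixture of two
    constant-rate processes: f(t) = p*r1*exp(-r1*t) + (1-p)*r2*exp(-r2*t).
    Parameter values are hypothetical."""
    return (p_high * rate_high * math.exp(-rate_high * t)
            + (1 - p_high) * rate_low * math.exp(-rate_low * t))

def sample_irt(p_high=0.3, rate_high=2.0, rate_low=0.2):
    """Draw one IRT by first choosing which response process is active."""
    rate = rate_high if random.random() < p_high else rate_low
    return random.expovariate(rate)

mean_irt = sum(sample_irt() for _ in range(100_000)) / 100_000
print(mean_irt)  # ~0.3/2.0 + 0.7/0.2 = 3.65 s with these made-up rates
```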

  1. Process and Microstructure to Achieve Ultra-high Dielectric Constant in Ceramic-Polymer Composites.

    PubMed

    Zhang, Lin; Shan, Xiaobing; Bass, Patrick; Tong, Yang; Rolin, Terry D; Hill, Curtis W; Brewer, Jeffrey C; Tucker, Dennis S; Cheng, Z-Y

    2016-10-21

    Influences of process conditions on microstructure and dielectric properties of ceramic-polymer composites are systematically studied using CaCu3Ti4O12 (CCTO) as filler and P(VDF-TrFE) 55/45 mol.% copolymer as the matrix by combining solution-cast and hot-pressing processes. It is found that the dielectric constant of the composites can be significantly enhanced - up to about 10 times - by using proper processing conditions. The dielectric constant of the composites can reach more than 1,000 over a wide temperature range with a low loss (tan δ ~ 10-1). It is concluded that besides the dense structure of the composites, the uniform distribution of the CCTO particles in the matrix plays a key role in the dielectric enhancement. Due to the influence of the CCTO on the microstructure of the polymer matrix, the composites exhibit a weaker temperature dependence of the dielectric constant than the polymer matrix. Based on the results, it is also found that the loss of the composites at low temperatures, including room temperature, is determined by the real dielectric relaxation processes, including the relaxation process induced by the mixing.

  2. Process and Microstructure to Achieve Ultra-high Dielectric Constant in Ceramic-Polymer Composites

    NASA Astrophysics Data System (ADS)

    Zhang, Lin; Shan, Xiaobing; Bass, Patrick; Tong, Yang; Rolin, Terry D.; Hill, Curtis W.; Brewer, Jeffrey C.; Tucker, Dennis S.; Cheng, Z.-Y.

    2016-10-01

    Influences of process conditions on microstructure and dielectric properties of ceramic-polymer composites are systematically studied using CaCu3Ti4O12 (CCTO) as filler and P(VDF-TrFE) 55/45 mol.% copolymer as the matrix by combining solution-cast and hot-pressing processes. It is found that the dielectric constant of the composites can be significantly enhanced - up to about 10 times - by using proper processing conditions. The dielectric constant of the composites can reach more than 1,000 over a wide temperature range with a low loss (tan δ ~ 10-1). It is concluded that besides the dense structure of the composites, the uniform distribution of the CCTO particles in the matrix plays a key role in the dielectric enhancement. Due to the influence of the CCTO on the microstructure of the polymer matrix, the composites exhibit a weaker temperature dependence of the dielectric constant than the polymer matrix. Based on the results, it is also found that the loss of the composites at low temperatures, including room temperature, is determined by the real dielectric relaxation processes, including the relaxation process induced by the mixing.

  3. Process and Microstructure to Achieve Ultra-high Dielectric Constant in Ceramic-Polymer Composites

    PubMed Central

    Zhang, Lin; Shan, Xiaobing; Bass, Patrick; Tong, Yang; Rolin, Terry D.; Hill, Curtis W.; Brewer, Jeffrey C.; Tucker, Dennis S.; Cheng, Z.-Y.

    2016-01-01

    Influences of process conditions on microstructure and dielectric properties of ceramic-polymer composites are systematically studied using CaCu3Ti4O12 (CCTO) as filler and P(VDF-TrFE) 55/45 mol.% copolymer as the matrix by combining solution-cast and hot-pressing processes. It is found that the dielectric constant of the composites can be significantly enhanced - up to about 10 times - by using proper processing conditions. The dielectric constant of the composites can reach more than 1,000 over a wide temperature range with a low loss (tan δ ~ 10−1). It is concluded that besides the dense structure of the composites, the uniform distribution of the CCTO particles in the matrix plays a key role in the dielectric enhancement. Due to the influence of the CCTO on the microstructure of the polymer matrix, the composites exhibit a weaker temperature dependence of the dielectric constant than the polymer matrix. Based on the results, it is also found that the loss of the composites at low temperatures, including room temperature, is determined by the real dielectric relaxation processes, including the relaxation process induced by the mixing. PMID:27767184

  4. Applied and engineering versions of the theory of elastoplastic processes of active complex loading part 2: Identification and verification

    NASA Astrophysics Data System (ADS)

    Peleshko, V. A.

    2016-06-01

    The deviator constitutive relation of the proposed theory of plasticity has a three-term form (the stress, stress rate, and strain rate vectors formed from the deviators are collinear) and, in the specialized (applied) version, in addition to the simple loading function, contains four dimensionless constants of the material determined from experiments along a two-link strain trajectory with an orthogonal break. The proposed simple mechanism is used to calculate the constants of the model for four metallic materials that significantly differ in composition and mechanical properties; the obtained constants do not deviate much from their average values (over the four materials). The latter are taken as universal constants in the engineering version of the model, which thus requires only one basic experiment, i.e., a simple loading test. If the material exhibits the strengthening property in cyclic circular deformation, then the model contains an additional constant determined from the experiment along a strain trajectory of this type. (In the engineering version of the model, the cyclic strengthening effect is not taken into account, which imposes a certain upper bound on the difference between the length of the strain trajectory arc and the modulus of the strain vector.) We present the results of model verification using the experimental data available in the literature about the combined loading along two- and multi-link strain trajectories with various lengths of links and angles of breaks, with plane curvilinear segments of various constant and variable curvature, and with three-dimensional helical segments of various curvature and twist. (All in all, we use more than 80 strain programs; the materials are low- and medium-carbon steels, brass, and stainless steel.) These results prove that the model can be used to describe the process of arbitrary active (in the sense of nonnegative capacity of the shear) combined loading and final unloading of originally quasi-isotropic elastoplastic materials. In practical calculations, in the absence of experimental data about the properties of a material under combined loading, the use of the engineering version of the model is quite acceptable. The simple identification, wide verifiability, and the availability of a software implementation of the method for solving initial-boundary value problems permit treating the proposed theory as an applied theory.

  5. Relationship of compressive stress-strain response of engineering materials obtained at constant engineering and true strain rates

    DOE PAGES

    Song, Bo; Sanborn, Brett

    2018-05-07

    In this paper, a Johnson–Cook model was used as an example to analyze the relationship of compressive stress-strain response of engineering materials experimentally obtained at constant engineering and true strain rates. There was minimal deviation between the stress-strain curves obtained at the same constant engineering and true strain rates. The stress-strain curves obtained at either constant engineering or true strain rates could be converted from one to the other, and both represented the intrinsic material response. There is no need to specify the testing requirement of constant engineering or true strain rates for material property characterization, provided that either constant engineering or constant true strain rate is attained during the experiment.
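    A small sketch of the standard kinematic conversions that underlie such a comparison for compression testing, assuming volume constancy; the Johnson–Cook analysis itself is not reproduced and the numbers are placeholders.

```python
import math

def true_from_engineering(strain_eng, stress_eng, strain_rate_eng):
    """Standard compression conversions (volume constancy assumed):
    true strain = -ln(1 - e), true stress = s_eng*(1 - e), and the
    instantaneous true strain rate corresponding to a constant
    engineering strain rate = rate_eng / (1 - e)."""
    strain_true = -math.log(1.0 - strain_eng)
    stress_true = stress_eng * (1.0 - strain_eng)
    strain_rate_true = strain_rate_eng / (1.0 - strain_eng)
    return strain_true, stress_true, strain_rate_true

# At 30% engineering strain under a constant 1000 /s engineering strain rate
# (hypothetical values), the instantaneous true strain rate is ~1429 /s.
print(true_from_engineering(0.30, 500.0, 1000.0))
```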

  6. Relationship of compressive stress-strain response of engineering materials obtained at constant engineering and true strain rates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Bo; Sanborn, Brett

    In this paper, a Johnson–Cook model was used as an example to analyze the relationship of compressive stress-strain response of engineering materials experimentally obtained at constant engineering and true strain rates. There was minimal deviation between the stress-strain curves obtained at the same constant engineering and true strain rates. The stress-strain curves obtained at either constant engineering or true strain rates could be converted from one to the other, and both represented the intrinsic material response. There is no need to specify the testing requirement of constant engineering or true strain rates for material property characterization, provided that either constant engineering or constant true strain rate is attained during the experiment.

  7. Constant-Time Pattern Matching For Real-Time Production Systems

    NASA Astrophysics Data System (ADS)

    Parson, Dale E.; Blank, Glenn D.

    1989-03-01

    Many intelligent systems must respond to sensory data or critical environmental conditions in fixed, predictable time. Rule-based systems, including those based on the efficient Rete matching algorithm, cannot guarantee this result. Improvement in execution-time efficiency is not all that is needed here; it is important to ensure constant, O(1) time limits for portions of the matching process. Our approach is inspired by two observations about human performance. First, cognitive psychologists distinguish between automatic and controlled processing. Analogously, we partition the matching process across two networks. The first is the automatic partition; it is characterized by predictable O(1) time and space complexity, lacks persistent memory, and is reactive in nature. The second is the controlled partition; it includes the search-based goal-driven and data-driven processing typical of most production system programming. The former is responsible for recognition of and response to critical environmental conditions. The latter is responsible for the more flexible problem-solving behaviors consistent with the notion of intelligence. Support for learning and refining the automatic partition can be placed in the controlled partition. Our second observation is that people are able to attend selectively to more critical stimuli or requirements. Our match algorithm uses priorities to focus matching. It compares the priority of information during matching, rather than deferring this comparison until conflict resolution. Messages from the automatic partition are able to interrupt the controlled partition, enhancing system responsiveness. Our algorithm has numerous applications for systems that must exhibit time-constrained behavior.

  8. Anisotropic Effects on Constitutive Model Parameters of Aluminum Alloys

    DTIC Science & Technology

    2012-01-01

    Model constants are required input to computer codes (LS-DYNA, DYNA3D, or SPH) to accurately simulate fragment impact on structural components made of high… different temperatures.

  9. Effect of Cold Temperature on the Dielectric Constant of Soil

    DTIC Science & Technology

    2012-04-01

    explosive device (IED) threats is ground-penetrating radar (GPR). Proper development of GPR technology for this application requires a unique… success or failure of GPR as a detection technique. One soil property of interest to radar engineers is the dielectric constant. Previous… results to temperatures, moisture levels, and frequencies relevant to GPR systems.

  10. Simple constant-current-regulated power supply

    NASA Technical Reports Server (NTRS)

    Priebe, D. H. E.; Sturman, J. C.

    1977-01-01

    The supply incorporates a soft-start circuit that slowly ramps the current up to the set point at turn-on. It consists of a full-wave rectifier, a regulating pass transistor, a current feedback circuit, and a quad single-supply operational-amplifier circuit that provides control. The technique is applicable to any system requiring constant dc current, such as vacuum tube equipment, heaters, or battery chargers; it has been used to supply constant current for instrument calibration.

  11. Product manufacturing, quality, and reliability initiatives to maintain a competitive advantage and meet customer expectations in the semiconductor industry

    NASA Astrophysics Data System (ADS)

    Capps, Gregory

    Semiconductor products are manufactured and consumed across the world. The semiconductor industry is constantly striving to manufacture products with greater performance, improved efficiency, less energy consumption, smaller feature sizes, thinner gate oxides, and faster speeds. Customers have pushed toward zero defects and require a more reliable, higher-quality product than ever before. Manufacturers are required to improve yields, reduce operating costs, and increase revenue to maintain a competitive advantage. Opportunities exist for integrated circuit (IC) customers and manufacturers to work together and independently to reduce costs, eliminate waste, reduce defects, reduce warranty returns, and improve quality. This project focuses on electrical over-stress (EOS) and re-test okay (RTOK), two of the top failure return mechanisms, both of which present significant defect-reduction opportunities in the customer-manufacturer relationship. Proactive continuous improvement initiatives and methodologies are addressed with emphasis on product life cycle, manufacturing processes, test, statistical process control (SPC), industry best practices, customer education, and customer-manufacturer interaction.

  12. Effects of selective attention on perceptual filling-in.

    PubMed

    De Weerd, P; Smith, E; Greenberg, P

    2006-03-01

    After a few seconds, a figure steadily presented in peripheral vision becomes perceptually filled in by its background, as if it "disappeared". We report that directing attention to the color, shape, or location of a figure increased the probability of perceiving filling-in compared to unattended figures, without modifying the time required for filling-in. This effect could be augmented by boosting attention. Furthermore, the frequency distribution of filling-in response times for attended figures could be predicted by multiplying the frequencies of response times for unattended figures by a constant. We propose that, after failure of figure-ground segregation, the neural interpolation processes that produce perceptual filling-in are enhanced in attended figure regions. As filling-in processes are involved in surface perception, the present study demonstrates that even very early visual processes are subject to modulation by cognitive factors.

  13. STORMSeq: an open-source, user-friendly pipeline for processing personal genomics data in the cloud.

    PubMed

    Karczewski, Konrad J; Fernald, Guy Haskin; Martin, Alicia R; Snyder, Michael; Tatonetti, Nicholas P; Dudley, Joel T

    2014-01-01

    The increasing public availability of personal complete genome sequencing data has ushered in an era of democratized genomics. However, read mapping and variant calling software is constantly improving and individuals with personal genomic data may prefer to customize and update their variant calls. Here, we describe STORMSeq (Scalable Tools for Open-Source Read Mapping), a graphical interface cloud computing solution that does not require a parallel computing environment or extensive technical experience. This customizable and modular system performs read mapping, read cleaning, and variant calling and annotation. At present, STORMSeq costs approximately $2 and 5-10 hours to process a full exome sequence and $30 and 3-8 days to process a whole genome sequence. We provide this open-access and open-source resource as a user-friendly interface in Amazon EC2.

  14. Atypical biological motion kinematics are represented by complementary lower-level and top-down processes during imitation learning.

    PubMed

    Hayes, Spencer J; Dutoy, Chris A; Elliott, Digby; Gowen, Emma; Bennett, Simon J

    2016-01-01

    Learning a novel movement requires a new set of kinematics to be represented by the sensorimotor system. This is often accomplished through imitation learning where lower-level sensorimotor processes are suggested to represent the biological motion kinematics associated with an observed movement. Top-down factors have the potential to influence this process based on the social context, attention and salience, and the goal of the movement. In order to further examine the potential interaction between lower-level and top-down processes in imitation learning, the aim of this study was to systematically control the mediating effects during an imitation of biological motion protocol. In this protocol, we used non-human agent models that displayed different novel atypical biological motion kinematics, as well as a control model that displayed constant velocity. Importantly the three models had the same movement amplitude and movement time. Also, the motion kinematics were displayed in the presence, or absence, of end-state-targets. Kinematic analyses showed atypical biological motion kinematics were imitated, and that this performance was different from the constant velocity control condition. Although the imitation of atypical biological motion kinematics was not modulated by the end-state-targets, movement time was more accurate in the absence, compared to the presence, of an end-state-target. The fact that end-state targets modulated movement time accuracy, but not biological motion kinematics, indicates imitation learning involves top-down attentional, and lower-level sensorimotor systems, which operate as complementary processes mediated by the environmental context. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Context generalization in Drosophila visual learning requires the mushroom bodies

    NASA Astrophysics Data System (ADS)

    Liu, Li; Wolf, Reinhard; Ernst, Roman; Heisenberg, Martin

    1999-08-01

    The world is permanently changing. Laboratory experiments on learning and memory normally minimize this feature of reality, keeping all conditions except the conditioned and unconditioned stimuli as constant as possible. In the real world, however, animals need to extract from the universe of sensory signals the actual predictors of salient events by separating them from non-predictive stimuli (context). In principle, this can be achieved if only those sensory inputs that resemble the reinforcer in their temporal structure are taken as predictors. Here we study visual learning in the fly Drosophila melanogaster, using a flight simulator, and show that memory retrieval is, indeed, partially context-independent. Moreover, we show that the mushroom bodies, which are required for olfactory but not visual or tactile learning, effectively support context generalization. In visual learning in Drosophila, it appears that a facilitating effect of context cues for memory retrieval is the default state, whereas making recall context-independent requires additional processing.

  16. Ellipsis and discourse coherence

    PubMed Central

    Frazier, Lyn; Clifton, Charles

    2006-01-01

    VP-ellipsis generally requires a syntactically matching antecedent. However, many documented examples exist where the antecedent is not appropriate. Kehler (2000, 2002) proposed an elegant theory which predicts that a syntactic antecedent for an elided VP is required only for a certain discourse coherence relation (resemblance), not for cause-effect relations. Most of the data Kehler used to motivate his theory come from corpus studies and thus do not consist of true minimal pairs. We report five experiments testing predictions of the coherence theory, using standard minimal-pair materials. The results raise questions about the empirical basis for coherence theory because parallelism is preferred for all coherence relations, not just resemblance relations. Further, strict identity readings, which should not be available when a syntactic antecedent is required, are influenced by parallelism per se, holding the discourse coherence relation constant. This calls into question the causal role of coherence relations in processing VP ellipsis. PMID:16896367

  17. Heating-Rate-Coupled Model for Hydrogen Reduction of JSC-1A

    NASA Technical Reports Server (NTRS)

    Hegde, U.; Balasubramaniam, R.; Gokoglu, S. A.

    2010-01-01

    A previously developed and validated model for hydrogen reduction of JSC-1A for a constant reaction-bed temperature is extended to account for reaction during the bed heat-up period. A quasisteady approximation is used wherein an expression is derived for a single average temperature of reaction during the heat-up process by employing an Arrhenius expression for regolith conversion. Subsequently, the regolith conversion during the heat-up period is obtained by using this representative temperature. Accounting for the reaction during heat-up provides a better estimate of the reaction time needed at the desired regolith-bed operating temperature. Implications for the efficiency of the process, as measured by the energy required per unit mass of oxygen produced, are also indicated.
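    A short sketch of the quasi-steady idea: average an Arrhenius rate over a linear bed heat-up and report the single temperature that reproduces that average rate. The activation energy, pre-exponential factor, and temperature range below are placeholders, not the JSC-1A kinetics from the report.

```python
import math

R_GAS = 8.314  # J/(mol K)

def effective_heatup_temperature(T_start, T_end, Ea, A=1.0, steps=1000):
    """Average k(T) = A*exp(-Ea/(R*T)) over a linear ramp from T_start to
    T_end, then return the 'representative' temperature whose rate equals
    that average (illustrative; Ea and A are hypothetical)."""
    k_sum = 0.0
    for i in range(steps):
        T = T_start + (T_end - T_start) * (i + 0.5) / steps
        k_sum += A * math.exp(-Ea / (R_GAS * T))
    k_avg = k_sum / steps
    return -Ea / (R_GAS * math.log(k_avg / A))

# Made-up example: ramp from 600 K to 1300 K with Ea = 80 kJ/mol
print(effective_heatup_temperature(600.0, 1300.0, Ea=80e3))
```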

  18. Catalysis of CO₂ absorption in aqueous solution by inorganic oxoanions and their application to post combustion capture.

    PubMed

    Phan, Duong T; Maeder, Marcel; Burns, Robert C; Puxty, Graeme

    2014-04-15

    To reduce CO2 emission into the atmosphere, particularly from coal-fired power stations, post combustion capture (PCC) using amine-based solvents to chemically absorb CO2 has been extensively developed. From an infrastructure viewpoint, the faster the absorption of CO2, the smaller the absorber required. The use of catalysts for this process has been broadly studied. In this manuscript, a study of the catalytic efficiencies of inorganic oxoanions such as arsenite, arsenate, phosphite, phosphate, and borate is described. The kinetics of the accelerated CO2 absorption at 25 °C was investigated using stopped-flow spectrophotometry. The catalytic rate constants of these anions for the reaction of CO2 with H2O were determined to be 137.7(3), 30.3(7), 69(2), 32.7(9), and 13.66(7) M-1 s-1, respectively. A new mechanism for the catalytic reaction of oxoanions with CO2 has also been proposed. The applicability of these catalysts to PCC was further studied by simulation of the absorption process under PCC conditions using their experimental catalytic rate constants. Arsenite and phosphite were confirmed to be the best catalysts for CO2 capture. However, considering the toxicological effect of arsenic and the oxidative instability of phosphite, phosphate would be the most promising inorganic catalyst for the PCC process among the series of inorganic oxoanions studied.
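    A minimal sketch of how such catalytic rate constants translate into an observed pseudo-first-order rate for CO2 hydration, assuming k_obs = k_uncat + k_cat*[catalyst]; the uncatalysed rate constant used below (~0.04 1/s at 25 °C) is an assumed round literature value, and the catalyst concentration is hypothetical.

```python
def pseudo_first_order_k(cat_conc_M, k_cat, k_uncat=0.04):
    """Observed pseudo-first-order rate constant (1/s) for CO2 hydration
    with a dissolved oxoanion catalyst: k_obs = k_uncat + k_cat*[catalyst].
    k_uncat ~0.04 1/s is an assumed round number for 25 degC."""
    return k_uncat + k_cat * cat_conc_M

# Phosphate (k_cat = 32.7 1/(M s), from the abstract) at a hypothetical 0.5 M:
k_obs = pseudo_first_order_k(0.5, 32.7)
print(k_obs, k_obs / 0.04)  # ~16.4 1/s, roughly a 400-fold rate enhancement
```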

  19. Digibaro pressure instrument onboard the Phoenix Lander

    NASA Astrophysics Data System (ADS)

    Harri, A.-M.; Polkko, J.; Kahanpää, H. H.; Schmidt, W.; Genzer, M. M.; Haukka, H.; Savijarvi, H.; Kauhanen, J.

    2009-04-01

    The Phoenix Lander landed successfully in the Martian northern polar region. The mission is part of the National Aeronautics and Space Administration's (NASA's) Scout program. Pressure observations onboard the Phoenix Lander were performed by an FMI (Finnish Meteorological Institute) instrument, based on a silicon diaphragm sensor head manufactured by Vaisala Inc., combined with MDA data processing electronics. The pressure instrument performed successfully throughout the Phoenix mission. The pressure instrument had 3 pressure sensor heads. One of these was the primary sensor head and the other two were used for monitoring the condition of the primary sensor head during the mission. During the mission the primary sensor was read with a sampling interval of 2 s and the other two were read less frequently as a check of instrument health. The pressure sensor system had a real-time data-processing and calibration algorithm that allowed the removal of temperature-dependent calibration effects. In the same manner as the temperature sensor, a total of 256 data records (8.53 min) were buffered, and they could either be stored at full resolution or processed to provide mean, standard deviation, maximum, and minimum values for storage on the Phoenix Lander's Meteorological (MET) unit. The time constant was approximately 3 s due to locational constraints and dust filtering requirements. Using algorithms compensating for the time constant effect, the temporal resolution was good enough to detect pressure drops associated with the passage of nearby dust devils.
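    A small sketch of one common way to compensate a first-order sensor lag of the kind mentioned above: if tau*dp_meas/dt + p_meas = p_true, then p_true ≈ p_meas + tau*dp_meas/dt. The backward-difference implementation and the sample pressure values are illustrative assumptions, not the flight algorithm.

```python
def compensate_first_order_lag(samples, dt=2.0, tau=3.0):
    """Approximate correction of a first-order sensor lag:
    p_true(t) ~= p_meas(t) + tau * dp_meas/dt, using a simple backward
    difference on 2 s pressure samples (tau ~3 s per the abstract)."""
    corrected = [samples[0]]
    for i in range(1, len(samples)):
        dpdt = (samples[i] - samples[i - 1]) / dt
        corrected.append(samples[i] + tau * dpdt)
    return corrected

# Hypothetical pressure dip (Pa) from a passing dust devil, smeared by the lag
measured = [850.0, 849.8, 849.1, 848.0, 848.9, 849.7, 850.0]
print(compensate_first_order_lag(measured))
```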

  20. Ring rotational speed trend analysis by FEM approach in a Ring Rolling process

    NASA Astrophysics Data System (ADS)

    Allegri, G.; Giorleo, L.; Ceretti, E.

    2018-05-01

    Ring Rolling is an advanced local incremental forming technology used to fabricate precise seamless ring-shaped parts in various dimensions and materials. In this process two different deformations occur in order to reduce the width and the height of a preform hollow ring; as a result, a diameter expansion is obtained. In order to guarantee a uniform deformation, the preform is forced toward the Driver Roll, whose aim is to transmit the rotation to the ring. The selection of the ring rotational speed is fundamental because the higher the speed, the higher the axial symmetry of the deformation process. However, it is important to underline that the rotational speed affects not only the final ring geometry but also the loads and energy needed to produce it. Despite this importance, in industrial practice a constant value of the Driver Roll angular velocity is usually set, which results in a decreasing trend for the ring rotational speed. The main risks of this approach are failing to fulfil the axial symmetry constraint (due to the diameter expansion) and generating a highly localized deformation of the ring section. In order to improve the knowledge on this topic, in the present paper three different ring rotational speed trends (constant, linearly increasing and linearly decreasing) were investigated by an FEM approach. Results were compared in terms of geometrical and dimensional analysis and the loads and energies required.
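    A short sketch of why a constant driver-roll angular velocity produces a decreasing ring rotational speed, assuming a no-slip contact so that the two surface speeds are equal: omega_ring = omega_driver * r_driver / r_ring. The geometry and speeds below are made-up values.

```python
def ring_speed_constant_driver(omega_driver, r_driver, r_ring):
    """No-slip contact between driver roll and ring: equal surface speeds,
    so omega_ring = omega_driver * r_driver / r_ring. With omega_driver
    held constant, omega_ring drops as the ring diameter expands."""
    return omega_driver * r_driver / r_ring

def driver_speed_for_constant_ring_speed(omega_ring_target, r_driver, r_ring):
    """Driver roll speed needed at the current ring radius to hold the ring
    rotational speed constant (an increasing driver-speed schedule)."""
    return omega_ring_target * r_ring / r_driver

# Hypothetical geometry: 0.20 m driver roll, ring radius growing 0.30 -> 0.50 m
for r in (0.30, 0.40, 0.50):
    print(r, ring_speed_constant_driver(omega_driver=10.0, r_driver=0.20, r_ring=r))
```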

  1. Preliminary engineering design of sodium-cooled CANDLE core

    NASA Astrophysics Data System (ADS)

    Takaki, Naoyuki; Namekawa, Azuma; Yoda, Tomoyuki; Mizutani, Akihiko; Sekimoto, Hiroshi

    2012-06-01

    The CANDLE burning process is characterized by the autonomous shifting of the burning region with constant reactivity and constant spatial power distribution. Evaluations of such a critical burning process using widely used neutron diffusion and burnup codes under realistic engineering constraints are valuable to confirm the technical feasibility of the CANDLE concept and to put the idea into a concrete core design. In the first part of this paper, it is discussed whether the sustainable and stable CANDLE burning process can be reproduced even with conventional core analysis tools such as SLAROM and CITATION-FBR. As a result, it is certainly possible to demonstrate it if the proper core configuration and initial fuel composition required for a CANDLE core are applied to the analysis. In the latter part, an example of a concrete design of a sodium-cooled, metal-fuel, 2000 MWt CANDLE core is presented, assuming an emerging and inevitable technology of recladding. The core satisfies engineering design criteria including cladding temperature, pressure drop, linear heat rate, cumulative damage fraction (CDF) of the cladding, fast neutron fluence, and sodium void reactivity, as defined in the Japanese FBR design project. It can be concluded that it is feasible to design a CANDLE core using conventional codes while satisfying realistic engineering design constraints, provided that recladding at a certain time interval is technically feasible.

  2. Performance and component frontal areas of a hypothetical two-spool turbojet engine for three modes of operation

    NASA Technical Reports Server (NTRS)

    Dugan, James F., Jr.

    1955-01-01

    Engine performance is better for constant outer-spool mechanical-speed operation than for constant inner-spool mechanical-speed operation over most of the flight range considered. Combustor and afterburner frontal areas are about the same for the two modes. Engine performance for a mode characterized by a constant outer-spool equivalent speed over part of the flight range and a constant outer-spool mechanical speed over the rest of the flight range is better than that for constant outer-spool mechanical-speed operation. The former mode requires larger outer-spool centrifugal stresses and larger component frontal areas.

  3. A strategic approach to physico-chemical analysis of bis (thiourea) lead chloride - A reliable semi-organic nonlinear optical crystal

    NASA Astrophysics Data System (ADS)

    Rajagopalan, N. R.; Krishnamoorthy, P.; Jayamoorthy, K.

    2017-03-01

    Good-quality crystals of bis(thiourea) lead chloride (BTLC) have been grown from aqueous solution by the slow evaporation method. The orthorhombic structure and Pna21 space group of the crystals have been identified by single-crystal X-ray diffraction. Studies on the nucleation kinetics of the grown BTLC have been carried out, from which the meta-stable zone width, induction period, free energy change, critical radius, critical number, and growth rate have been calculated. The experimental values of the interfacial surface energy for the crystal growth process have been compared with theoretical models. Ultraviolet transmittance studies showed high transmittance, and the wide band gap energy indicated the required optical transparency of the crystal. The second harmonic generation (SHG) and phase-matching nature of the crystal have been verified by the Kurtz-Perry method. The SHG nature of the crystal is further attested by the high values of the theoretical hyperpolarizability. The dielectric behaviour of the crystals at different temperatures and varying frequencies has been thoroughly studied. The activation energy values of the electrical process have been calculated from the ac conductivity study. Solid-state parameters including the valence electron plasma energy, Penn gap, Fermi energy, and polarisability have been obtained by a theoretical approach and correlated with the crystal's SHG efficiency. The values of hardness number, elastic stiffness constant, Meyer's index, minimum level of indentation load, load-dependent constant, fracture toughness, brittleness index, and corrected hardness obtained from the Vickers hardness test clearly show that the BTLC crystal has the good mechanical stability required for NLO device fabrication.

  4. Test Standard Developed for Determining the Slow Crack Growth of Advanced Ceramics at Ambient Temperature

    NASA Technical Reports Server (NTRS)

    Choi, Sung R.; Salem, Jonathan A.

    1998-01-01

    The service life of structural ceramic components is often limited by the process of slow crack growth. Therefore, it is important to develop an appropriate testing methodology for accurately determining the slow crack growth design parameters necessary for component life prediction. In addition, an appropriate test methodology can be used to determine the influences of component processing variables and composition on the slow crack growth and strength behavior of newly developed materials, thus allowing the component process to be tailored and optimized to specific needs. At the NASA Lewis Research Center, work to develop a standard test method to determine the slow crack growth parameters of advanced ceramics was initiated by the authors in early 1994 in the C 28 (Advanced Ceramics) committee of the American Society for Testing and Materials (ASTM). After about 2 years of required balloting, the draft written by the authors was approved and established as a new ASTM test standard: ASTM C 1368-97, Standard Test Method for Determination of Slow Crack Growth Parameters of Advanced Ceramics by Constant Stress-Rate Flexural Testing at Ambient Temperature. Briefly, the test method uses constant stress-rate testing to determine strengths as a function of stress rate at ambient temperature. Strengths are measured in a routine manner at four or more stress rates by applying constant displacement or loading rates. The slow crack growth parameters required for design are then estimated from a relationship between strength and stress rate. This new standard will be published in the Annual Book of ASTM Standards, Vol. 15.01, in 1998. Currently, a companion draft ASTM standard for determination of the slow crack growth parameters of advanced ceramics at elevated temperatures is being prepared by the authors and will be presented to the committee by the middle of 1998. Consequently, Lewis will maintain an active leadership role in advanced ceramics standardization within ASTM. In addition, the authors have been and are involved with several international standardization organizations including the Versailles Project on Advanced Materials and Standards (VAMAS), the International Energy Agency (IEA), and the International Organization for Standardization (ISO). The associated standardization activities involve fracture toughness, strength, elastic modulus, and the machining of advanced ceramics.
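    A minimal sketch of the parameter estimation behind constant stress-rate (dynamic fatigue) testing, assuming the standard relation log(sigma_f) = [1/(n+1)]*log(stress rate) + log(D) and an ordinary least-squares line fit; the strengths and stress rates below are made-up numbers, not data from the standard.

```python
import math

def slow_crack_growth_parameters(stress_rates, strengths):
    """Estimate the SCG exponent n and constant D from constant stress-rate
    strength data using log10(sigma_f) = (1/(n+1))*log10(rate) + log10(D),
    fitted by ordinary least squares (illustrative only)."""
    xs = [math.log10(r) for r in stress_rates]
    ys = [math.log10(s) for s in strengths]
    n_pts = len(xs)
    x_mean, y_mean = sum(xs) / n_pts, sum(ys) / n_pts
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    n = 1.0 / slope - 1.0
    D = 10 ** intercept
    return n, D

# Hypothetical mean strengths (MPa) measured at four stress rates (MPa/s)
rates = [0.1, 1.0, 10.0, 100.0]
sigma_f = [380.0, 410.0, 445.0, 480.0]
print(slow_crack_growth_parameters(rates, sigma_f))
```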

  5. Enhanced centrifuge-based approach to powder characterization

    NASA Astrophysics Data System (ADS)

    Thomas, Myles Calvin

    Many types of manufacturing processes involve powders and are affected by powder behavior. It is highly desirable to implement tools that allow the behavior of bulk powder to be predicted based on the behavior of only small quantities of powder. Such descriptions can enable engineers to significantly improve the performance of powder processing and formulation steps. In this work, an enhancement of the centrifuge technique is proposed as a means of powder characterization. This enhanced method uses specially designed substrates with hemispherical indentations within the centrifuge. The method was tested using simulations of the momentum balance at the substrate surface. Initial simulations were performed with an ideal powder containing smooth, spherical particles distributed on substrates designed with indentations. The van der Waals adhesion between the powder, whose size distribution was based on an experimentally determined distribution from a commercial silica powder, and the indentations was calculated and compared to the removal force created in the centrifuge. This provided a way to relate the powder size distribution to the rotational speed required for particle removal for various indentation sizes. Due to the distinct form of the data from these simulations, the cumulative size distribution of the powder and the Hamaker constant for the system could be extracted. After establishing adhesion force characterization for an ideal powder, the same proof-of-concept procedure was followed for a more realistic system with a simulated rough powder modeled as spheres with sinusoidal protrusions and intrusions around the surface. From these simulations, it was discovered that an equivalent powder of smooth spherical particles could be used to describe the adhesion behavior of the rough spherical powder by establishing a size-dependent 'effective' Hamaker constant distribution. This development made it possible to describe the surface roughness effects of the entire powder through one adjustable parameter that was linked to the size distribution. It is important to note that when the engineered substrates (hemispherical indentations) were applied, it was possible to extract both powder size distribution and effective Hamaker constant information from the simulated centrifuge adhesion experiments. Experimental validation of the simulated technique was performed with a silica powder dispersed onto a stainless steel substrate with no engineered surface features. Though the proof-of-concept work was accomplished for indented substrates, non-ideal, relatively flat (non-indented) substrates were used experimentally to demonstrate that the technique can be extended to this case. The experimental data were then used within the newly developed simulation procedure to show its application to real systems. In the absence of engineered features on the substrates, it was necessary to specify the size distribution of the powder as an input to the simulator. With this information, it was possible to extract an effective Hamaker constant distribution, and when the effective Hamaker constant distribution was applied in conjunction with the size distribution, the observed adhesion force distribution was described precisely. An equation relating the normalized effective Hamaker constants (normalized by the particle diameter) to the particle diameter was formulated from the effective Hamaker constant distribution.
It was shown, by application of the equation, that the adhesion behavior of an ideal (smooth, spherical) powder with an experimentally-validated, effective Hamaker constant distribution could be used to effectively represent that of a realistic powder. Thus, the roughness effects and size variations of a real powder are captured in this one distributed parameter (effective Hamaker constant distribution) which provides a substantial improvement to the existing technique. This can lead to better optimization of powder processing by enhancing powder behavior models.
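    A minimal sketch of the force balance underlying the centrifuge technique for a smooth sphere on a flat substrate, using the standard sphere-plate van der Waals expression F_adh = A*d/(12*z0^2) against the centrifugal detachment force m*omega^2*R. The Hamaker constant, separation distance, density, and rotor radius below are illustrative assumptions, not values from the thesis.

```python
import math

def critical_rpm(diameter_m, hamaker_J=6.5e-20, separation_m=4e-10,
                 density_kg_m3=2200.0, rotor_radius_m=0.10):
    """Rotational speed (rpm) at which the centrifugal detachment force on a
    smooth sphere equals its van der Waals adhesion to a flat substrate:
    F_adh = A*d/(12*z0^2), F_cent = m*omega^2*R. Values are illustrative."""
    f_adh = hamaker_J * diameter_m / (12.0 * separation_m ** 2)
    mass = density_kg_m3 * math.pi * diameter_m ** 3 / 6.0
    omega = math.sqrt(f_adh / (mass * rotor_radius_m))
    return omega * 60.0 / (2.0 * math.pi)

# Smaller particles need much higher speeds to detach (d in metres):
for d in (1e-6, 5e-6, 20e-6):
    print(d, round(critical_rpm(d)))
```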

  6. The QUIRO Study (assurance of quality and innovation in radiooncology): methodology, instruments and practices.

    PubMed

    Dunst, J; Willich, N; Sack, H; Engenhart-Cabillic, R; Budach, V; Popp, W

    2014-02-01

    The QUIRO study aimed to establish a secure level of quality and innovation in radiation oncology. Over 6 years, 27 specific surveys were conducted at 24 radiooncological departments. In all, 36 renowned experts from the field of radiation oncology (mostly head physicians and full professors) supported the realization of the study. A salient feature of the chosen methodological approach is the "process" as a means of systematizing diverse medical-technical procedures according to standardized criteria. On the one hand, "processes" serve as a tool of translation for creating standards and transforming them into concrete clinical and medical actions; on the other hand, they provide the basis for standardized instruments and methods to determine the required needs of physicians, staff, and equipment. The collection and measurement of resource requirements focused on the processes of direct service provision, which were subdivided into modules for reasons of clarity and comprehensibility. Overhead tasks (e.g., participation in quality management) were excluded from the main study and examined in a separate survey with appropriate methods. After an exploration of guidelines, tumor- or indication-specific examination and treatment processes were developed in expert workshops. Moreover, the specific modules that characterize these entities and indications to a special degree were defined. Afterwards, the time and resources required for these modules were determined in the "reference institutions", i.e., in specialized departments recognized as competent (mostly from the university sector), by various suitable survey methods. The significance of the QUIRO study and the validity of the results were optimized in a process of constant improvement and comprehensive checks. As a consequence, the QUIRO study yields representative results concerning the resource requirements for specialized, qualitatively and technologically highly sophisticated radiooncological treatment in Germany.

  7. A Kalman filter approach for the determination of celestial reference frames

    NASA Astrophysics Data System (ADS)

    Soja, Benedikt; Gross, Richard; Jacobs, Christopher; Chin, Toshio; Karbon, Maria; Nilsson, Tobias; Heinkelmann, Robert; Schuh, Harald

    2017-04-01

    The coordinate model for radio sources in International Celestial Reference Frames (ICRF), such as the ICRF2, has traditionally been a constant offset. While sufficient for a large fraction of radio sources given current accuracy requirements, several sources exhibit significant temporal coordinate variations. In particular, the group of so-called special handling sources is characterized by large fluctuations in the source positions. For these sources, and for several from the "others" category of radio sources, a coordinate model that goes beyond a constant offset would be beneficial. However, due to the sheer number of radio sources in catalogs like the ICRF2, and even more so with the upcoming ICRF3, it is difficult to find the most appropriate coordinate model for every single radio source. For this reason, we have developed a time series approach to the determination of celestial reference frames (CRF). We feed the radio source coordinates derived from single very long baseline interferometry (VLBI) sessions sequentially into a Kalman filter and smoother, retaining their full covariances. The estimation of the source coordinates is carried out with a temporal resolution identical to that of the input data, i.e. usually 1-4 days. The coordinates are assumed to behave like random walk processes, an assumption which has already been made successfully for the determination of terrestrial reference frames such as the JTRF2014. To be able to apply the most suitable process noise value for every single radio source, their statistical properties are analyzed by computing their Allan standard deviations (ADEV). In addition to the determination of process noise values, the ADEV allows conclusions to be drawn as to whether the variations in certain radio source positions deviate significantly from random walk processes. Our investigations also deal with other means of source characterization, such as the structure index, in order to derive a suitable process noise model. The Kalman filter CRFs resulting from the different approaches are compared among each other, to the original radio source position time series, as well as to a traditional CRF solution in which constant source positions are estimated in a global least squares adjustment.
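    A minimal sketch of the filtering idea for a single coordinate component of one radio source, assuming a pure random-walk state model whose process noise q would be tuned per source (e.g. from its Allan deviation). The session epochs, offsets, and variances are hypothetical, and the full-covariance, multi-source machinery of the actual solution is not reproduced.

```python
def random_walk_kalman(times, obs, obs_var, q):
    """Minimal 1-D Kalman filter for a source coordinate modelled as a
    random walk: x_k = x_{k-1} + w, Var(w) = q * dt. obs_var is the
    per-session formal variance. Returns the filtered coordinate series."""
    x, p = obs[0], obs_var[0]
    filtered = [x]
    for k in range(1, len(obs)):
        dt = times[k] - times[k - 1]
        p += q * dt                      # predict: random-walk process noise
        gain = p / (p + obs_var[k])      # update with the new session estimate
        x += gain * (obs[k] - x)
        p *= (1.0 - gain)
        filtered.append(x)
    return filtered

# Hypothetical session epochs (days), offsets (micro-arcsec) and variances
t = [0, 4, 7, 11, 18]
y = [0.0, 12.0, -5.0, 20.0, 8.0]
v = [25.0, 25.0, 100.0, 25.0, 25.0]
print(random_walk_kalman(t, y, v, q=2.0))
```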

  8. Synergic effects of 10°/s constant rotation and rotating background on visual cognitive processing

    NASA Astrophysics Data System (ADS)

    He, Siyang; Cao, Yi; Zhao, Qi; Tan, Cheng; Niu, Dongbin

    In previous studies we found that constant low-speed rotation facilitated auditory cognitive processing and that a constant-velocity rotating background sped up the perception, recognition, and assessment of visual stimuli. Under constant low-speed rotation the body is exposed to a new physical state. In this study, the variations in the human brain's cognitive processing under the combined condition of constant low-speed rotation and visual rotating backgrounds with different speeds were explored. Fourteen university students participated in the experiment. EEG signals were recorded while they performed three different cognitive tasks with increasing mental load: a no-response task, a selective switch-response task, and a selective mental arithmetic task. A rotary chair was used to create constant low-speed (10°/s) rotation. Four kinds of background were used in the experiment: a normal black background and simulated star backgrounds rotating at a constant 30°/s, 45°/s, or 60°/s. The P1 and N1 components of brain event-related potentials (ERP) were analyzed to detect changes in early visual cognitive processing. It was found that, compared with tasks performed under the other backgrounds, the posterior P1 and N1 latencies were shortened under the 45°/s rotating background in all cognitive tasks. In the no-response task, compared with the black background, the posterior N1 latencies were delayed under the 30°/s rotating background. In the selective switch-response task and the selective mental arithmetic task, compared with the other backgrounds, the P1 latencies were lengthened under the 60°/s rotating background, but the average amplitudes of the posterior P1 and N1 were increased. It is suggested that under constant 10°/s rotation, the facilitating effect of a rotating visual background changed to an inhibitory one for the 30°/s rotating background. In this new vestibular environment, not all of the rotating backgrounds accelerated the early stages of visual cognition. There is a synergic effect between constant low-speed rotation and the rotating speed of the background. Under certain conditions they both served to facilitate visual cognitive processing, starting at the stage when the extrastriate cortex perceives the visual signal. Under constant low-speed rotation in higher cognitive-load tasks, rapid rotation of the background enhanced the magnitude of signal transmission in the visual pathway, increasing the signal-to-noise ratio, and a higher signal-to-noise ratio clearly favors target perception and recognition. This gives rise to the hypothesis that higher cognitive-load tasks with stronger top-down control are better able to counteract the inhibitory effect of a higher-velocity rotating background. Acknowledgements: This project was supported by the National Natural Science Foundation of China (No. 30670715) and the National High Technology Research and Development Program of China (No. 2007AA04Z254).

  9. Measurement of myeloid maturation by flow cytochemistry in HL-60 leukemia: esterase is inducible, myeloperoxidase is not.

    PubMed

    Ross, D W

    1986-05-01

    The phenomenon of leukemic cell maturation requires a measurement of myeloid maturation to understand the process and to exploit it as a means of therapy for leukemia. The HL-60 leukemic cell line was used as a model of induced leukemic cell maturation in order to develop a method of quantitating granulocytic and monocytic maturation in response to drug therapy. An automated flow cytochemistry system (Hemalog-D) was employed to measure mean cell volume, myeloperoxidase (MPO), and nonspecific esterase (NSE). For granulocytic maturation induced by vitamin A or DMSO, MPO and cell volume decreased by 50%, maintaining a constant mean cellular MPO concentration throughout maturation from promyelocyte to neutrophil-like forms. For monocytic maturation induced by low-dose ARA-c, the mean NSE increased substantially, while cell volume remained constant. Unlike MPO concentration, NSE was truly inducible and thus a useful quantitative measure of maturation caused by low-dose ARA-c. Flow cytochemistry and cytofluorometry may be developed to allow for quantitative monitoring of therapeutic trials of induced maturation in human leukemias. However, this will require adapting these techniques to the complexity of human leukemias in vivo, and the necessity of handling heterogeneous populations encountered in bone marrow samples.

  10. Statistical analysis and interpolation of compositional data in materials science.

    PubMed

    Pesenson, Misha Z; Suram, Santosh K; Gregoire, John M

    2015-02-09

    Compositional data are ubiquitous in chemistry and materials science: analysis of elements in multicomponent systems, combinatorial problems, etc., lead to data that are non-negative and sum to a constant (for example, atomic concentrations). The constant sum constraint restricts the sampling space to a simplex instead of the usual Euclidean space. Since statistical measures such as mean and standard deviation are defined for the Euclidean space, traditional correlation studies, multivariate analysis, and hypothesis testing may lead to erroneous dependencies and incorrect inferences when applied to compositional data. Furthermore, composition measurements that are used for data analytics may not include all of the elements contained in the material; that is, the measurements may be subcompositions of a higher-dimensional parent composition. Physically meaningful statistical analysis must yield results that are invariant under the number of composition elements, requiring the application of specialized statistical tools. We present specifics and subtleties of compositional data processing through discussion of illustrative examples. We introduce basic concepts, terminology, and methods required for the analysis of compositional data and utilize them for the spatial interpolation of composition in a sputtered thin film. The results demonstrate the importance of this mathematical framework for compositional data analysis (CDA) in the fields of materials science and chemistry.
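    A small sketch of one standard CDA tool of the kind referred to above: the centred log-ratio (clr) transform maps a composition from the simplex into ordinary Euclidean space, where means and interpolation behave sensibly, and the inverse transform re-closes the result. This is a generic illustration, not necessarily the exact pipeline used for the sputtered thin film.

```python
import math

def clr(composition):
    """Centred log-ratio transform of a composition (parts sum to a constant):
    clr(x)_i = ln(x_i / g(x)), with g(x) the geometric mean. Standard tool in
    compositional data analysis; assumes strictly positive parts."""
    log_parts = [math.log(x) for x in composition]
    g = sum(log_parts) / len(log_parts)
    return [lp - g for lp in log_parts]

def clr_inverse(z):
    """Map a clr vector back to a composition that sums to 1 (closure)."""
    exps = [math.exp(v) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

# Interpolate between two atomic compositions in clr space, then close:
a, b = [0.2, 0.3, 0.5], [0.4, 0.4, 0.2]
mid = [(u + v) / 2 for u, v in zip(clr(a), clr(b))]
print(clr_inverse(mid))
```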

  11. Design of a resistive exercise device for use on the Space Shuttle

    NASA Technical Reports Server (NTRS)

    Carlson, Dennis L.; Durrani, Mohammed; Redilla, Christi L.

    1992-01-01

    The National Aeronautics and Space Administration, in conjunction with the Universities Space Research Association, sponsored the design of a Resistive Exercise Device (RED) for use on the Space Shuttle. The device must enable the astronauts to perform a number of exercises to prevent skeletal muscle atrophy and neuromuscular deconditioning in microgravity environments. The RED must fit the requirements for limited volume and weight and must provide a means of restraint during exercise. The design team divided the functions of the device into three major groups: methods of supplying force, methods of adjusting force, and methods of transmitting the force to the user. After analyzing the three main functions of the RED and developing alternatives for each, the design team used a comparative decision process to choose the most feasible components for the overall design. The design team selected the constant-force-spring alternative for further embodiment. The device consists of an array of different-sized constant-force springs which can be pinned in different combinations to produce the required output forces. The force is transmitted by means of a shaft-and-gear system. The final report is divided into four sections. An introduction section discusses the sponsor background, problem background, and requirements of the device. The second section covers the alternative designs for each of the main functions. The design solution and pertinent calculations comprise the third section. The final section contains design conclusions and recommendations, including topics for future work.

  12. Photophysical and photochemical insights into the photodegradation of sulfapyridine in water: A joint experimental and theoretical study.

    PubMed

    Zhang, Heming; Wei, Xiaoxuan; Song, Xuedan; Shah, Shaheen; Chen, Jingwen; Liu, Jianhui; Hao, Ce; Chen, Zhongfang

    2018-01-01

    For organic pollutants, photodegradation, as a major abiotic elimination process of great importance to environmental fate and risk, involves rather complicated physical and chemical processes of excited molecules. Herein, we systematically studied the photophysical and photochemical processes of a widely used antibiotic, sulfapyridine. By means of density functional theory (DFT) computations, we examined the rate constants and the competition between the photophysical and photochemical processes, elucidated the photochemical reaction mechanism, calculated the reaction quantum yield (Φ) based on both photophysical and photochemical processes, and subsequently estimated the photodegradation rate constant. We further conducted photolysis experiments to measure the photodegradation rate constant of sulfapyridine. Our computations showed that sulfapyridine in the lowest excited singlet state (S1) mainly undergoes internal conversion to the ground state and only with difficulty transfers to the lowest excited triplet state (T1) via intersystem crossing (ISC) or emits fluorescence. In the T1 state, chemical reaction is much easier to initiate than phosphorescence emission or ISC. Encouragingly, the theoretically predicted photodegradation rate constant is close to the experimentally observed value, indicating that quantum chemistry computation is powerful enough to study photodegradation involving ultra-fast photophysical and photochemical processes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. A study of Schwarz converters for nuclear powered spacecraft

    NASA Technical Reports Server (NTRS)

    Stuart, Thomas A.; Schwarze, Gene E.

    1987-01-01

    High-power space systems that use low-voltage, high-current dc sources, such as thermoelectric generators, will most likely require high-voltage conversion for transmission purposes. This study considers the use of the Schwarz resonant converter as the basic building block to accomplish this low-to-high voltage conversion for either a dc or an ac spacecraft bus. The Schwarz converter has the important assets of both inherent fault tolerance and resonant operation; parallel operation in modular form is possible. A regulated dc spacecraft bus requires only a single stage converter while a constant frequency ac bus requires a cascaded Schwarz converter configuration. If the power system requires constant output power from the dc generator, then a second converter is required to route unneeded power to a ballast load.

  14. GROUND WATER ISSUE - CALCULATION AND USE OF FIRST-ORDER RATE CONSTANTS FOR MONITORED NATURAL ATTENUATION STUDIES

    EPA Science Inventory

    This issue paper explains when and how to apply first-order attenuation rate constant calculations in monitored natural attenuation (MNA) studies. First-order attenuation rate constant calculations can be an important tool for evaluating natural attenuation processes at ground-wa...

  15. High Throughput pharmacokinetic modeling using computationally predicted parameter values: dissociation constants (TDS)

    EPA Science Inventory

    Estimates of the ionization association and dissociation constant (pKa) are vital to modeling the pharmacokinetic behavior of chemicals in vivo. Methodologies for the prediction of compound sequestration in specific tissues using partition coefficients require a parameter that ch...

  16. Studies on gamma irradiated rubber materials

    NASA Astrophysics Data System (ADS)

    Lungu, I. B.; Stelescu, M. D.; Cutrubinis, M.

    2018-01-01

    Due to the increase in the use and production of polymer materials, there is constant pressure to find more environmentally friendly composites. Besides the constant effort to recycle used materials, it seems more appropriate to manufacture and use biodegradable and renewable raw materials. Natural polymers like starch, cellulose, lignin, etc. are ideal for preparing biodegradable composites. Some of the dynamic markets that use polymer materials are the food and pharmaceutical industries. Because of their disinfestation and sometimes sterility requirements, different treatment processes are applied, one of them being radiation treatment. The scope of this paper is to analyze the mechanical behaviour of rubber-based materials irradiated with gamma rays at four medium doses: 30.1 kGy, 60.6 kGy, 91 kGy and 121.8 kGy. The objectives are the following: to identify the optimum radiation dose in order to obtain a good mechanical behaviour and to identify the mechanical behaviour of the material when adding different quantities of natural filler (20 phr, 60 phr and 100 phr).

  17. Immunoadolescence: Neuroimmune development and adolescent behavior

    PubMed Central

    Brenhouse, Heather C.; Schwarz, Jaclyn M.

    2016-01-01

    The brain is increasingly appreciated to be a constantly rewired organ that yields age-specific behaviors and responses to the environment. Adolescence in particular is a unique period characterized by continued brain maturation, superimposed with transient needs of the organism to traverse a leap from parental dependence to independence. Here we describe how these needs require immune maturation, as well as brain maturation. Our immune system, which protects us from pathogens and regulates inflammation, is in constant communication with our nervous system. Together, neuro-immune signaling regulates our behavioral responses to the environment, making this interaction a likely substrate for adolescent development. We review here the identified as well as understudied components of neuro-immune interactions during adolescence. Synaptic pruning, neurite outgrowth, and neurotransmitter release during adolescence all regulate—and are regulated by—immune signals, which occur via blood-brain barrier dynamics and glial activity. We discuss these processes, as well as how immune signaling during this transitional period of development confers differential effects on behavior and vulnerability to mental illness. PMID:27260127

  18. Effect of wet tropospheric path delays on estimation of geodetic baselines in the Gulf of California using the Global Positioning System

    NASA Technical Reports Server (NTRS)

    Tralli, David M.; Dixon, Timothy H.; Stephens, Scott A.

    1988-01-01

    Surface Meteorological (SM) and Water Vapor Radiometer (WVR) measurements are used as independent means of calibrating the GPS signal for the wet tropospheric path delay in a study of geodetic baseline measurements in the Gulf of California, where high tropospheric water vapor content yielded wet path delays in excess of 20 cm at zenith. Residual wet delays at zenith are estimated as constants and as first-order exponentially correlated stochastic processes. Calibration with WVR data is found to yield the best repeatabilities, with improved results possible if combined carrier phase and pseudorange data are used. Although SM measurements can introduce significant errors in baseline solutions if used with a simple atmospheric model and estimation of residual zenith delays as constants, SM calibration with stochastic estimation of residual zenith wet delays may be adequate for precise estimation of GPS baselines. For dry locations, WVRs may not be required to accurately model tropospheric effects on GPS baselines.

  19. Real-time Adaptive Control Using Neural Generalized Predictive Control

    NASA Technical Reports Server (NTRS)

    Haley, Pam; Soloway, Don; Gold, Brian

    1999-01-01

    The objective of this paper is to demonstrate the feasibility of a Nonlinear Generalized Predictive Control algorithm by showing real-time adaptive control on a plant with relatively fast time constants. Generalized Predictive Control has classically been used in process control, where linear control laws were formulated for plants with relatively slow time constants. The plant of interest for this paper is a magnetic levitation device that is nonlinear and open-loop unstable. In this application, the reference model of the plant is a neural network that has an embedded nominal linear model in the network weights. The control based on the linear model provides initial stability at the beginning of network training. In using a neural network, the control laws are nonlinear and online adaptation of the model is possible to capture unmodeled or time-varying dynamics. Newton-Raphson is the minimization algorithm. Newton-Raphson requires the calculation of the Hessian, but even with this computational expense the low iteration rate makes this a viable algorithm for real-time control.
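
    The Newton-Raphson step at the core of such a predictive controller can be sketched as follows; the plant model, cost weight and iteration counts below are assumed stand-ins for illustration, not the maglev neural network of the paper.

      import numpy as np

      # Minimal sketch of Newton-Raphson minimization in a predictive-control
      # setting: pick the next control input u that minimizes a one-step
      # tracking cost J(u) = (r - f(u))^2 + lam*u^2. The plant model f below
      # is an assumed stand-in nonlinearity, not the paper's neural network.
      def f(u):
          return np.tanh(u) + 0.1 * u          # assumed plant response model

      def cost(u, r, lam=0.01):
          return (r - f(u)) ** 2 + lam * u ** 2

      def newton_raphson_control(r, u0=0.0, iters=10, h=1e-4):
          u = u0
          for _ in range(iters):
              # finite-difference gradient and Hessian of the scalar cost
              g = (cost(u + h, r) - cost(u - h, r)) / (2 * h)
              H = (cost(u + h, r) - 2 * cost(u, r) + cost(u - h, r)) / h ** 2
              if abs(H) < 1e-12:
                  break
              u -= g / H                       # Newton step
          return u

      print(newton_raphson_control(r=0.5))     # control input driving f(u) toward 0.5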

  20. Numerical Evaluation of the "Dual-Kernel Counter-flow" Matric Convolution Integral that Arises in Discrete/Continuous (D/C) Control Theory

    NASA Technical Reports Server (NTRS)

    Nixon, Douglas D.

    2009-01-01

    Discrete/Continuous (D/C) control theory is a new generalized theory of discrete-time control that expands the concept of conventional (exact) discrete-time control to create a framework for design and implementation of discrete-time control systems that include a continuous-time command function generator, so that actuator commands need not be constant between control decisions but can be more generally defined and implemented as functions that vary with time across the sample period. Because the plant/control system construct contains two linear subsystems arranged in tandem, a novel dual-kernel counter-flow convolution integral appears in the formulation. As part of the D/C system design and implementation process, numerical evaluation of that integral over the sample period is required. Three fundamentally different evaluation methods and associated algorithms are derived for the constant-coefficient case. Numerical results are matched against three available examples that have closed-form solutions.
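
    The kind of quadrature involved can be illustrated with a hedged scalar sketch: a forced-response integral of a command function that varies across the sample period is evaluated numerically. The paper's dual-kernel counter-flow matrix integral is more general; the plant pole, gain and sample period below are assumed.

      import numpy as np

      # Hedged scalar sketch: numerically evaluating a convolution-type integral
      # over one sample period, of the kind that arises when a continuous-time
      # command function acts through the plant dynamics between control
      # decisions. Scalar stand-ins are used in place of the paper's matrices.
      a, b, T = -2.0, 1.0, 0.1                  # assumed plant pole, input gain, sample period

      def u(tau):
          return 1.0 + 5.0 * tau                # command varies across the sample period

      def forced_response(n=1001):
          """Trapezoidal evaluation of integral_0^T exp(a*(T - tau)) * b * u(tau) dtau."""
          tau = np.linspace(0.0, T, n)
          integrand = np.exp(a * (T - tau)) * b * u(tau)
          return np.trapz(integrand, tau)

      print(forced_response())                  # contribution of the command to the next state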

  1. Continuous treatment of high strength wastewaters using air-cathode microbial fuel cells.

    PubMed

    Kim, Kyoung-Yeol; Yang, Wulin; Evans, Patrick J; Logan, Bruce E

    2016-12-01

    Treatment of low strength wastewaters using microbial fuel cells (MFCs) has been effective at hydraulic retention times (HRTs) similar to aerobic processes, but treatment of high strength wastewaters can require longer HRTs. The use of two air-cathode MFCs hydraulically connected in series was examined to continuously treat high strength swine wastewater (7-8 g/L of chemical oxygen demand) at an HRT of 16.7 h. The maximum power density of 750 ± 70 mW/m^2 was produced after 12 days of operation. However, power decreased by 85% after 185 d of operation due to serious cathode fouling. COD removal was improved by using a lower external resistance, and COD removal rates were substantially higher than those previously reported for a low strength wastewater. However, removal rates were inconsistent with first-order kinetics, as the calculated rate constant was an order of magnitude lower than the rate constant for the low strength wastewater. Copyright © 2016 Elsevier Ltd. All rights reserved.
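
    For orientation, a pseudo-first-order rate constant of the kind compared in the abstract can be back-calculated from influent and effluent COD and the HRT; the sketch below uses assumed concentrations, not the study's data.

      import math

      # Minimal sketch (assumed numbers, not from the study): estimating a
      # pseudo-first-order rate constant from influent/effluent COD and the
      # hydraulic retention time, assuming batch/plug-flow first-order decay
      # C_e = C_0 * exp(-k * HRT).
      cod_in_mg_per_L = 7500.0     # assumed influent COD of a high strength wastewater
      cod_out_mg_per_L = 1500.0    # assumed effluent COD
      hrt_h = 16.7                 # hydraulic retention time quoted in the abstract

      k_per_h = math.log(cod_in_mg_per_L / cod_out_mg_per_L) / hrt_h
      print(f"pseudo-first-order k = {k_per_h:.3f} 1/h")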

  2. Modeling of composite beams and plates for static and dynamic analysis

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Sutyrin, Vladislav G.; Lee, Bok Woo

    1993-01-01

    The main purpose of this research was to develop a rigorous theory and corresponding computational algorithms for through-the-thickness analysis of composite plates. This type of analysis is needed in order to find the elastic stiffness constants for a plate and to post-process the resulting plate solution in order to find approximate three-dimensional displacement, strain, and stress distributions throughout the plate. This also requires the development of finite deformation plate equations which are compatible with the through-the-thickness analyses. After about one year's work, we settled on the variational-asymptotical method (VAM) as a suitable framework in which to solve these types of problems. VAM was applied to laminated plates with constant thickness in the work of Atilgan and Hodges. The corresponding geometrically nonlinear global deformation analysis of plates was developed by Hodges, Atilgan, and Danielson. A different application of VAM, along with numerical results, was obtained by Hodges, Lee, and Atilgan. An expanded version of this last paper was submitted for publication in the AIAA Journal.

  3. Constraints on cosmic silicates

    NASA Astrophysics Data System (ADS)

    Ossenkopf, V.; Henning, Th.; Mathis, J. S.

    1992-08-01

    Observational determinations of opacities of circumstellar silicates, relative to the peak value near 10 microns, are used to estimate the optical constants n and k, the real and imaginary parts of the index of refraction. Circumstellar dust is modified by processing within the interstellar medium. This leads to higher band strengths and a somewhat larger ratio of the opacities at the 18 and 10-micron peaks, compared with circumstellar silicates. By using an effective-medium theory, we calculate the effects of small spherical inclusions of various materials (various oxides, sulfides, carbides, amorphous carbon, and metallic iron) upon silicate opacities. Some of these can increase the absorption coefficient k in the 2-8 micron region appreciably, as is needed to reconcile laboratory silicate opacities with observations of both the interstellar medium and envelopes around late-type stars. We give tables of two sets of optical constants for warm oxygen-deficient and cool oxygen-rich silicates, representative for circumstellar and interstellar silicates. The required opacity in the 2-8 micron region is provided by iron and magnetite.

  4. Theoretical studies of the decomposition mechanisms of 1,2,4-butanetriol trinitrate.

    PubMed

    Pei, Liguan; Dong, Kehai; Tang, Yanhui; Zhang, Bo; Yu, Chang; Li, Wenzuo

    2017-12-06

    Density functional theory (DFT) and canonical variational transition-state theory combined with a small-curvature tunneling correction (CVT/SCT) were used to explore the decomposition mechanisms of 1,2,4-butanetriol trinitrate (BTTN) in detail. The results showed that the γ-H abstraction reaction is the initial pathway for autocatalytic BTTN decomposition. The three possible hydrogen atom abstraction reactions are all exothermic. The rate constants for autocatalytic BTTN decomposition are 3 to 10^40 times greater than the rate constants for the two unimolecular decomposition reactions (O-NO2 cleavage and HONO elimination). The process of BTTN decomposition can be divided into two stages according to whether the NO2 concentration is above a threshold value. HONO elimination is the main reaction channel during the first stage because autocatalytic decomposition requires NO2 and the concentration of NO2 is initially low. As the reaction proceeds, the concentration of NO2 gradually increases; when it exceeds the threshold value, the second stage begins, with autocatalytic decomposition becoming the main reaction channel.

  5. Can the Dielectric Constant of Fullerene Derivatives Be Enhanced by Side-Chain Manipulation? A Predictive First-Principles Computational Study.

    PubMed

    Sami, Selim; Haase, Pi A B; Alessandri, Riccardo; Broer, Ria; Havenith, Remco W A

    2018-04-19

    The low efficiency of organic photovoltaic (OPV) devices has often been attributed to the strong Coulombic interactions between the electron and hole, impeding the charge separation process. Recently, it has been argued that by increasing the dielectric constant of materials used in OPVs, this strong interaction could be screened. In this work, we report the application of periodic density functional theory together with the coupled perturbed Kohn-Sham method to calculate the electronic contribution to the dielectric constant for fullerene C60 derivatives, a ubiquitous class of molecules in the field of OPVs. The results show good agreement with experimental data when available and also reveal an important undesirable outcome when manipulating the side chain to maximize the static dielectric constant: in all cases, the electronic contribution to the dielectric constant decreases as the side chain increases in size. This information should encourage both theoreticians and experimentalists to further investigate the relevance of contributions to the dielectric constant from slower processes like vibrations and dipolar reorientations for facilitating the charge separation, because electronically, enlarging the side chain of conventional fullerene derivatives only lowers the dielectric constant, and consequently, their electronic dielectric constant is bounded above by that of C60.

  6. Can the Dielectric Constant of Fullerene Derivatives Be Enhanced by Side-Chain Manipulation? A Predictive First-Principles Computational Study

    PubMed Central

    2018-01-01

    The low efficiency of organic photovoltaic (OPV) devices has often been attributed to the strong Coulombic interactions between the electron and hole, impeding the charge separation process. Recently, it has been argued that by increasing the dielectric constant of materials used in OPVs, this strong interaction could be screened. In this work, we report the application of periodic density functional theory together with the coupled perturbed Kohn–Sham method to calculate the electronic contribution to the dielectric constant for fullerene C60 derivatives, a ubiquitous class of molecules in the field of OPVs. The results show good agreement with experimental data when available and also reveal an important undesirable outcome when manipulating the side chain to maximize the static dielectric constant: in all cases, the electronic contribution to the dielectric constant decreases as the side chain increases in size. This information should encourage both theoreticians and experimentalists to further investigate the relevance of contributions to the dielectric constant from slower processes like vibrations and dipolar reorientations for facilitating the charge separation, because electronically, enlarging the side chain of conventional fullerene derivatives only lowers the dielectric constant, and consequently, their electronic dielectric constant is bounded above by that of C60. PMID:29561616

  7. Examination of the formation process of pre-solvated and solvated electron in n-alcohol using femtosecond pulse radiolysis

    NASA Astrophysics Data System (ADS)

    Toigawa, Tomohiro; Gohdo, Masao; Norizawa, Kimihiro; Kondoh, Takafumi; Kan, Koichi; Yang, Jinfeng; Yoshida, Yoichi

    2016-06-01

    The formation processes of the pre-solvated and solvated electron in methanol (MeOH), ethanol (EtOH), n-butanol (BuOH), and n-octanol (OcOH) were investigated using a fs-pulse radiolysis technique by observing the pre-solvated electron at 1400 nm. The formation time constants of the pre-solvated electrons were determined to be 1.2, 2.2, 3.1, and 6.3 ps for MeOH, EtOH, BuOH, and OcOH, respectively. The formation time constants of the solvated electrons were determined to be 6.7, 13.6, 22.2, and 32.9 ps for MeOH, EtOH, BuOH, and OcOH, respectively. The formation dynamics and structure of the pre-solvated and solvated electrons in n-alcohols were discussed based on the relation between the obtained time constants and the dielectric relaxation time constants from the viewpoint of kinetics. The observed formation time constants of the solvated electrons seemed to be strongly correlated with the second component of the dielectric relaxation time constants, which is related to single-molecule motion. On the other hand, the observed formation time constants of the pre-solvated electrons seemed to be strongly correlated with the third component of the dielectric relaxation time constants, which is related to the dynamics of hydrogen bonds.

  8. Developing and Sustaining Recovery-Orientation in Mental Health Practice: Experiences of Occupational Therapists.

    PubMed

    Nugent, Alexandra; Hancock, Nicola; Honey, Anne

    2017-01-01

    Internationally, mental health policy requires clinicians to shift from a medical to a recovery-oriented approach. However, there is a significant lag in the translation of policy into practice. Occupational therapists have been identified as ideally situated to be recovery-oriented yet limited research exploring how they do this exists. This study aimed to explore Australian occupational therapists' experiences of developing and sustaining recovery-orientation in mental health practice. Semistructured, in-depth interviews were conducted with twelve occupational therapists working across different mental health service types. Participants identified themselves as being recovery-oriented. Data were analysed using constant comparative analysis. Occupational therapists described recovery-oriented practice as an active, ongoing, and intentional process of seeking out knowledge, finding fit between understandings of recovery-oriented practice and their professional identity, holding hope, and developing confidence through clinical reasoning. Human and systemic aspects of therapists' workplace environment influenced this process. Being a recovery-oriented occupational therapist requires more than merely accepting a specific framework. It requires commitment and ongoing work to develop and sustain recovery-orientation. Occupational therapists are called to extend current leadership activity beyond their workplace and to advocate for broader systemic change.

  9. Use Zircon-Ilmenite Concentrate in Steelmaking

    NASA Astrophysics Data System (ADS)

    Fedoseev, S. N.; Volkova, T. N.

    2016-08-01

    Market requirements drive a constant search for new materials and technologies, reflecting ever-increasing demands for material and energy efficiency as well as for steel quality. Recent practice in steel production has tended toward more stringent requirements for the chemical composition of the steel and for its contamination by nonmetallic inclusions, gases and non-ferrous metals. The main ways of increasing the strength and performance characteristics of fabricated metal products are related to a profound and effective influence on the crystallizing metal structure through furnace processing of the melt with refining and modifying additives. It can be argued that furnace processing of steel and iron with chemically active metals (alkaline-earth metals, rare-earth metals, and others) is an integral part of modern production of high quality products and competitive technologies. An important condition for the development of secondary metallurgy methods for steel is the use of relatively inexpensive materials in a variety of complex alloys and blends, allowing targeted control of the physical and chemical state of the molten metal and, therefore, production of steel with improved performance. In this connection, the development of metallurgical technologies using modifying natural materials, represented here by complex ores containing titanium and zirconium, is a very urgent task.

  10. Cosmological Constant: A Lesson from Bose-Einstein Condensates

    NASA Astrophysics Data System (ADS)

    Finazzi, Stefano; Liberati, Stefano; Sindoni, Lorenzo

    2012-02-01

    The cosmological constant is one of the most pressing problems in modern physics. We address this issue from an emergent gravity standpoint, by using an analogue gravity model. Indeed, the dynamics of the emergent metric in a Bose-Einstein condensate can be described by a Poisson-like equation with a vacuum source term reminiscent of a cosmological constant. The direct computation of this term shows that in emergent gravity scenarios this constant may be naturally much smaller than the naive ground-state energy of the emergent effective field theory. This suggests that a proper computation of the cosmological constant would require a detailed understanding about how Einstein equations emerge from the full microscopic quantum theory. In this light, the cosmological constant appears as a decisive test bench for any quantum or emergent gravity scenario.

  11. Cosmological constant: a lesson from Bose-Einstein condensates.

    PubMed

    Finazzi, Stefano; Liberati, Stefano; Sindoni, Lorenzo

    2012-02-17

    The cosmological constant is one of the most pressing problems in modern physics. We address this issue from an emergent gravity standpoint, by using an analogue gravity model. Indeed, the dynamics of the emergent metric in a Bose-Einstein condensate can be described by a Poisson-like equation with a vacuum source term reminiscent of a cosmological constant. The direct computation of this term shows that in emergent gravity scenarios this constant may be naturally much smaller than the naive ground-state energy of the emergent effective field theory. This suggests that a proper computation of the cosmological constant would require a detailed understanding about how Einstein equations emerge from the full microscopic quantum theory. In this light, the cosmological constant appears as a decisive test bench for any quantum or emergent gravity scenario.

  12. Practical use of a word processor in a histopathology laboratory.

    PubMed Central

    Briggs, J C; Ibrahim, N B; Mackintosh, I; Norris, D

    1982-01-01

    Some of the facilities available with a commercially purchased word processing program, linked to a DEC PDP 11/23 computer, are described, together with an account of their practical histopathological use. The system is based on a share of the computer with a Clinical Chemistry Department. Development was time-consuming and required the constant availability of the Department of Physics. However, once working, considerable saving in secretarial time has resulted and a number of projects have been started which would not have been contemplated without the use of the word processor and its linked computer. PMID:7068906

  13. Electrodeposition of Gold to Conformally Fill High Aspect Ratio Nanometric Silicon Grating Trenches: A Comparison of Pulsed and Direct Current Protocols

    PubMed Central

    Znati, Sami A.; Chedid, Nicholas; Miao, Houxun; Chen, Lei; Bennett, Eric E.; Wen, Han

    2016-01-01

    Filling high-aspect-ratio trenches with gold is a frequent requirement in the fabrication of x-ray optics as well as micro-electronic components and other fabrication processes. Conformal electrodeposition of gold in sub-micron-width silicon trenches with an aspect ratio greater than 35 over a grating area of several square centimeters is challenging and has not been described in the literature previously. A comparison of pulsed plating and constant current plating led to a gold electroplating protocol that reliably filled trenches for such structures. PMID:27042384

  14. Aerosol Combustion Synthesis of Nanopowders and Processing to Functional Thin Films

    NASA Astrophysics Data System (ADS)

    Yi, Eongyu

    In this dissertation, the advantages of the liquid-feed flame spray pyrolysis (LF-FSP) process in producing nanoparticles (NPs), as well as in processing the produced NPs into ceramic/polymer nanocomposite films and high density polycrystalline ceramic films, are demonstrated. The LF-FSP process aerosolizes alcohol solutions of metalloorganic precursors with oxygen and combusts them at > 1500 °C. The combustion products are rapidly quenched (10s of ms) to < 400 °C, producing NPs with the same compositions as those of the precursor solutions. The high specific surface areas of NPs enable formulation of ceramic/polymer/interface(phase) ternary nanocomposites in which the interphase can be the determining factor of the final net properties. In ceramic processing, NPs show increased sinterability and provide access to small average grain sizes with fine control of microstructures, compared to when micron sized powders are used. Therefore, synthesis, processing, and characterization of NPs, NP-derived nanocomposites and ceramic monoliths are of great interest. We first compare the LF-FSP to the commercial FSP process by producing fumed silica. Combusting spirocyclic alkoxysilanes or Si(OEt)4 by the LF-FSP process produced fumed silica very similar to SiCl4-derived products. Given that the LF-FSP approach does not require the containment constraints of the SiCl4 process and the precursors are synthesized from rice hull ash, the reported approach represents a sustainable, green and potentially lower cost alternative. We then show the versatility of NPs in formulating flexible ceramic/polymer nanocomposites (BaTiO3/epoxy) with superior properties. Volume fractions of the BaTiO3 filler and composite film thicknesses were controlled to adjust the net dielectric constant and the capacitance. Measured net dielectric constants further deviated from theory, with increasing solids loadings, due to NP agglomeration. Wound nanocomposite capacitors showed ten times higher capacitance compared to the commercial counterpart. The following series of studies explores the use of flame-made NPs in processing Li+ conducting membranes. Systematic doping studies were conducted in the LiTi2(PO4)3 system to modify the lattice constant, conduction channel width, and sintering behavior by introducing Al3+ and Si4+ dopants. Excess Li2O content was also adjusted to observe its effect on final microstructures and phase compositions. Improved densification rates were found for the Li1.7Al0.3Ti1.7Si0.4P2.6O12 composition, and thin films (52 ± 1 μm) with conductivities of 0.3-0.5 mS cm^-1 were achieved. Li6.25M0.25La3Zr2O12 (M = Al3+, Ga3+) thin films (25-28 μm) with conductivities of 0.2-1.3 mS cm^-1 were also successfully processed using flame-made NPs, overcoming extant processing challenges and resulting in significantly reduced energy input required for densification. Heating schedules, sintering atmospheres, and types of substrates were controlled to observe their effect on the sintering behavior. Furthermore, green film thicknesses were found to be a crucial variable determining the final microstructures and phase compositions due to the varying Li2O loss rates with change in thickness (surface/volume ratio). Using fully decomposed NP mixtures (Li2CO3/off-stoichiometric La2Zr2O7), as obtained by LF-FSP, provides an ideal approach for using high surface/reaction energy and liquid phase sintering to drive densification.

  15. A Printed Equilibrium Dialysis Device with Integrated Membranes for Improved Binding Affinity Measurements.

    PubMed

    Pinger, Cody W; Heller, Andrew A; Spence, Dana M

    2017-07-18

    Equilibrium dialysis is a simple and effective technique used for investigating the binding of small molecules and ions to proteins. A three-dimensional (3D) printer was used to create a device capable of measuring binding constants between a protein and a small ion based on equilibrium dialysis. Specifically, the technology described here enables the user to customize an equilibrium dialysis device to fit their own experiments by choosing membranes of various material and molecular-weight cutoff values. The device has dimensions similar to that of a standard 96-well plate, thus being amenable to automated sample handlers and multichannel pipettes. The device consists of a printed base that hosts multiple windows containing a porous regenerated-cellulose membrane with a molecular-weight cutoff of ∼3500 Da. A key step in the fabrication process is a print-pause-print approach for integrating membranes directly into the windows subsequently inserted into the base. The integrated membranes display no leaking upon placement into the base. After characterizing the system's requirements for reaching equilibrium, the device was used to successfully measure an equilibrium dissociation constant for Zn2+ and human serum albumin (Kd = (5.62 ± 0.93) × 10^-7 M) under physiological conditions that is statistically equal to the constants reported in the literature.

  16. QCD Axion Dark Matter with a Small Decay Constant

    NASA Astrophysics Data System (ADS)

    Co, Raymond T.; Hall, Lawrence J.; Harigaya, Keisuke

    2018-05-01

    The QCD axion is a good dark matter candidate. The observed dark matter abundance can arise from misalignment or defect mechanisms, which generically require an axion decay constant fa ~ O(10^11) GeV (or higher). We introduce a new cosmological origin for axion dark matter, parametric resonance from oscillations of the Peccei-Quinn symmetry breaking field, that requires fa ~ (10^8-10^11) GeV. The axions may be warm enough to give deviations from cold dark matter in large scale structure.

  17. Identification and Characterization of a Dendritic Cell Precursor in Parenchymal Lung Tissue.

    PubMed

    von Garnier, Christophe; Blank, Fabian; Rothen-Rutishauser, Barbara; Goethert, Joachim R; Holt, Patrick G; Stumbles, Philip A; Strickland, Deborah H

    2017-03-01

    The pulmonary parenchymal and mucosal microenvironments are constantly exposed to the external environment and thus require continuous surveillance to maintain steady-state immunological homeostasis. This is achieved by a mobile network of pulmonary dendritic cells (DC) and macrophages (mø) that constantly sample and process microenvironmental antigens into signals that can initiate or dampen inflammation, either locally or after onward migration to draining lymph nodes. The constant steady-state turnover of pulmonary DC and mø requires replenishment from bone marrow precursors; however, the nature of the pulmonary precursor cell (PC) remains unclear, although recent studies suggest that subsets of pulmonary DC may derive from circulating monocytic precursors. In the current study, we describe a population of cells in steady-state mouse lung tissue that has the surface phenotypic and ultrastructural characteristics of a common DC progenitor. Irradiation and reconstitution studies confirmed the bone marrow origins of this PC and showed that it had rapid depletion and reconstitution kinetics that were similar to those of DC, with a 50% repopulation by donor-derived cells by Days 7-9 after reconstitution. This was significantly faster than the rates observed for mø, which showed 50% repopulation by donor-derived cells beyond Days 16-21 after reconstitution. Purified PC gained antigen-presenting function and a cell surface phenotype similar to that of pulmonary DC after maturation in vitro, with light and electron microscopy confirming a myeloid DC morphology. To the best of our knowledge, this is the first study to describe a PC for DC in lung tissue; the findings have implications for the restoration of pulmonary immunological homeostasis after bone marrow transplant.

  18. New insights into neurogenic cyclic motor activity in the isolated guinea-pig colon.

    PubMed

    Costa, M; Wiklendt, L; Keightley, L; Brookes, S J H; Dinning, P G; Spencer, N J

    2017-10-01

    The contents of the guinea pig distal colon consist of multiple pellets that move anally in a coordinated manner. This row of pellets results in continued distention of the colon. In this study, we have investigated quantitatively the features of the neurally dependent colonic motor patterns that are evoked by constant distension of the full length of the guinea-pig colon. Constant distension was applied to the excised guinea-pig colon by high-resolution manometry catheters or by a series of hooks. Constant distension elicited regular Cyclic Motor Complexes (CMCs) that originated at multiple different sites along the colon and propagated in an oral or anal direction, extending distances of 18.3 ± 10.3 cm. CMCs were blocked by tetrodotoxin (TTX; 0.6 μmol L^-1), hexamethonium (100 μmol L^-1) or hyoscine (1 μmol L^-1). Application of TTX in a localized compartment or cutting the gut circumferentially disrupted the spatial continuity of CMCs. Localized smooth muscle contraction was not required for CMC propagation. Shortening the length of the preparations or disruption of circumferential pathways reduced the integrity and continuity of CMCs. CMCs are a distinctive neurally dependent cyclic motor pattern that emerges with distension over long lengths of the distal colon. They do not require changes in muscle tension or contractility to entrain the neural activity underlying CMC propagation. CMCs are likely to play an important role interacting with the neuromechanical processes that time the propulsion of multiple natural pellets and may be particularly relevant in conditions of impaction or obstruction, where long segments of colon are simultaneously distended. © 2017 John Wiley & Sons Ltd.

  19. Predicting Cost and Schedule Growth for Military and Civil Space Systems

    DTIC Science & Technology

    2008-03-01

    the Shapiro-Wilk Test, and testing the residuals for constant variance using the Breusch-Pagan test. For logistic models, diagnostics include ... the Breusch-Pagan Test. With this test, a p-value below 0.05 rejects the null hypothesis that the residuals have constant variance. Thus, similar ... to the Shapiro-Wilk Test, because the optimal model will have constant variance of its residuals, this requires Breusch-Pagan p-values over 0.05

  20. A new approach using coagulation rate constant for evaluation of turbidity removal

    NASA Astrophysics Data System (ADS)

    Al-Sameraiy, Mukheled

    2017-06-01

    Coagulation-flocculation-sedimentation processes for treating three levels of bentonite synthetic turbid water using date seeds (DS) and alum (A) coagulants were investigated in previous research work. In the current research, the same experimental results were used to adopt a new approach, based on using the coagulation rate constant as an investigating parameter, to identify the optimum doses of these coagulants. Moreover, the performance of these coagulants in meeting the WHO turbidity standard was assessed by introducing a new evaluating criterion in terms of a critical coagulation rate constant (kc). Coagulation rate constants (k2) were mathematically calculated in the second-order form of the coagulation process for each coagulant. The maximum (k2) values corresponded to doses which could clearly be considered the optimum doses. The proposed criterion to assess the performance of the coagulation process of these coagulants was based on the mathematical representation of the WHO turbidity guidelines in the second-order form of the coagulation process, stating that (k2) for each coagulant should be ≥ (kc) for each level of synthetic turbid water. For all tested turbid water, the DS coagulant could not satisfy this criterion, while the A coagulant could. The results obtained in the present research are exactly in agreement with the previously published results in terms of finding the optimum doses for each coagulant and assessing their performance. On the whole, it is recommended to consider the coagulation rate constant as a new approach and indicator for identifying optimum doses, and the critical coagulation rate constant as a new evaluating criterion to assess coagulant performance.
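
    A minimal sketch of the second-order treatment is given below: if residual turbidity is taken as a proxy for particle concentration N, then 1/N grows linearly with time and k2 is the slope; the readings are invented for illustration, not the bentonite data of the study.

      import numpy as np

      # Hedged sketch of a second-order coagulation rate constant: for particle
      # (or residual-turbidity) concentration N, 1/N = 1/N0 + k2 * t, so k2 is
      # the slope of 1/N against settling time. The readings below are assumed.
      t_min = np.array([0, 10, 20, 30, 40], dtype=float)
      turbidity_ntu = np.array([100.0, 38.0, 23.0, 17.0, 13.0])   # assumed readings

      slope, intercept = np.polyfit(t_min, 1.0 / turbidity_ntu, 1)
      k2 = slope                                                  # second-order rate constant
      print(f"k2 ≈ {k2:.4f} 1/(NTU·min)")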

  1. EFFECTIVE ACIDITY CONSTANT BEHAVIOR NEAR ZERO CHARGE CONDITIONS

    EPA Science Inventory

    Surface site (>SOH group) acidity reactions require expressions of the form: Ka = [>SOH(n-1)^(z-1)] a(H+) exp(-ΔG/RT) / [>SOHn^z] (where all variables have their usual meaning). One can rearrange this expression to generate an effective acidity constant historically defined as: Qa = Ka...

  2. A novel frame-level constant-distortion bit allocation for smooth H.264/AVC video quality

    NASA Astrophysics Data System (ADS)

    Liu, Li; Zhuang, Xinhua

    2009-01-01

    It is known that quality fluctuation has a major negative effect on visual perception. In previous work, we introduced a constant-distortion bit allocation method [1] for the H.263+ encoder. However, the method in [1] cannot be adapted directly to the newest H.264/AVC encoder because of the well-known chicken-and-egg dilemma arising from the rate-distortion optimization (RDO) decision process. To solve this problem, we propose a new two-stage constant-distortion bit allocation (CDBA) algorithm with enhanced rate control for the H.264/AVC encoder. In stage 1, the algorithm performs the RD optimization process with a constant quantization parameter QP. Based on the prediction residual signals from stage 1 and the target distortion for smooth video quality, the frame-level bit target is allocated by using closed-form approximations of the rate-distortion relationship similar to [1], and a fast stage-2 encoding process is performed with enhanced basic-unit rate control. Experimental results show that, compared with the original rate control algorithm provided by the H.264/AVC reference software JM12.1, the proposed constant-distortion frame-level bit allocation scheme reduces quality fluctuation and delivers much smoother PSNR on all test sequences.

  3. Radii effect on the translation spring constant of force transducer beams

    NASA Technical Reports Server (NTRS)

    Scott, C. E.

    1992-01-01

    Multi-component strain-gage force transducer design requires the designer to determine the spring constant of the numerous beams or flexures incorporated in the transducer. The classical beam deflection formulae that are used in calculating these spring constants typically assume that the beam has a uniform moment of inertia along the entire beam length. In practice all beams have a radius at the end where the beam interfaces with the shoulder of the transducer, and on short beams in particular this increases the beam spring constant considerably. A Basic computer program utilizing numerical integration is presented to determine this effect.
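
    The idea behind such a program can be sketched with the unit-load method: the tip deflection per unit load is the integral of (L - x)^2 / (E I(x)) along the beam, with I(x) enlarged near the root by the fillet, and the spring constant is its reciprocal. The geometry and material below are assumed and shear deformation is ignored, so this is only a sketch, not the original Basic program.

      import numpy as np

      # Hedged sketch (assumed geometry and material): translational spring
      # constant of a cantilevered flexure whose depth grows through a fillet
      # radius at the fixed end. Unit-load method:
      #   delta = integral_0^L (L - x)^2 / (E * I(x)) dx,   k = 1 / delta
      E = 200e9            # Pa, assumed steel
      L = 0.020            # m, flexure length
      b = 0.010            # m, width
      t = 0.002            # m, nominal depth
      R = 0.002            # m, fillet radius at the root

      def depth(x):
          """Beam depth at distance x from the fixed end, including the root fillet."""
          if x >= R:
              return t
          return t + 2.0 * (R - np.sqrt(R**2 - (R - x)**2))

      def spring_constant(n=20001):
          x = np.linspace(0.0, L, n)
          I = b * np.array([depth(xi) for xi in x])**3 / 12.0
          delta = np.trapz((L - x)**2 / (E * I), x)   # tip deflection per unit load
          return 1.0 / delta

      k_with_fillet = spring_constant()
      k_uniform = 3.0 * E * (b * t**3 / 12.0) / L**3  # classical formula, no fillet
      print(f"with fillet: {k_with_fillet:.3e} N/m; uniform-section 3EI/L^3: {k_uniform:.3e} N/m")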

  4. [Computers in nursing: development of free software application with care and management].

    PubMed

    dos Santos, Sérgio Ribeiro

    2010-06-01

    This study aimed at developing an information system in nursing covering both nursing care and management of the service. SisEnf--Information System in Nursing--is a free software system whose care module comprises the nursing history, clinical examination and care plan; the management module consists of service shifts, personnel management, hospital indicators and other elements. The system was implemented at the Medical Clinic of the Lauro Wanderley University Hospital, at Universidade Federal da Paraiba. In view of the need to bring user and developer closer together, and because of the constant change of functional requirements during the interactive process, the unified process method was used. SisEnf was developed on a WEB platform using free software. Hence, the work developed aimed at assisting the working process of nursing, which will now have the opportunity to incorporate information technology into its work routine.

  5. Energy Reconstruction for Events Detected in TES X-ray Detectors

    NASA Astrophysics Data System (ADS)

    Ceballos, M. T.; Cardiel, N.; Cobo, B.

    2015-09-01

    The processing of the X-ray events detected by a TES (Transition Edge Sensor) device (such as the one that will be proposed in the ESA AO call for instruments for the Athena mission (Nandra et al. 2013) as a high spectral resolution instrument, X-IFU (Barret et al. 2013)) is a multi-step procedure that starts with the detection of the current pulses in a noisy signal and ends with their energy reconstruction. For this last stage, an energy calibration process is required to convert the pseudo energies measured in the detector to the real energies of the incoming photons, accounting for possible nonlinearity effects in the detector. We present the details of the energy calibration algorithm we implemented as the last part of the Event Processing software that we are developing for the X-IFU instrument, which permits the calculation of the calibration constants in an analytical way.
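
    A minimal sketch of that final calibration step (not the actual X-IFU event-processing code) is to fit a low-order polynomial through known calibration lines so that measured pseudo-energies map onto photon energies, absorbing a mild detector nonlinearity; the line positions below are assumed.

      import numpy as np

      # Minimal sketch (not the X-IFU event-processing software): map measured
      # pseudo-energies onto true photon energies by fitting a quadratic
      # through assumed calibration lines, absorbing a mild nonlinearity.
      pseudo_cal = np.array([1.02, 2.08, 4.25, 6.55])     # assumed pseudo-energies (a.u.)
      true_cal_keV = np.array([1.0, 2.0, 4.0, 6.0])       # assumed line energies (keV)

      coeffs = np.polyfit(pseudo_cal, true_cal_keV, deg=2)  # calibration constants

      def reconstruct(pseudo_energy):
          return np.polyval(coeffs, pseudo_energy)

      print(reconstruct(3.1))   # photon energy (keV) assigned to a measured pulse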

  6. Computational modelling of oxygenation processes in enzymes and biomimetic model complexes.

    PubMed

    de Visser, Sam P; Quesne, Matthew G; Martin, Bodo; Comba, Peter; Ryde, Ulf

    2014-01-11

    With computational resources becoming more efficient, more powerful and at the same time cheaper, computational methods have become more and more popular for studies on biochemical and biomimetic systems. Although large efforts from the scientific community have gone into exploring the possibilities of computational methods for studies on large biochemical systems, such studies are not without pitfalls and often cannot be done routinely but require expert execution. In this review we summarize and highlight advances in computational methodology and its application to enzymatic and biomimetic model complexes. In particular, we emphasize topical and state-of-the-art methodologies that are able either to reproduce experimental findings, e.g., spectroscopic parameters and rate constants, accurately or to make predictions of short-lived intermediates and fast reaction processes in nature. Moreover, we give examples of processes where certain computational methods dramatically fail.

  7. Solar Ion Processing of Itokawa Grains: Reconciling Model Predictions with Sample Observations

    NASA Technical Reports Server (NTRS)

    Christoffersen, Roy; Keller, L. P.

    2014-01-01

    Analytical TEM observations of Itokawa grains reported to date show complex solar wind ion processing effects in the outer 30-100 nm of pyroxene and olivine grains. The effects include loss of long-range structural order, formation of isolated internal cavities or "bubbles", and other nanoscale compositional/microstructural variations. None of the effects so far described have, however, included complete ion-induced amorphization. To link the array of observed relationships to grain surface exposure times, we have adapted our previous numerical model for progressive solar ion processing effects in lunar regolith grains to the Itokawa samples. The model uses SRIM ion collision damage and implantation calculations within a framework of a constant-deposited-energy model for amorphization. Inputs include experimentally measured amorphization fluences, a π-steradian variable ion incidence geometry required for a rotating asteroid, and a numerical flux-versus-velocity solar wind spectrum.

  8. STORMSeq: An Open-Source, User-Friendly Pipeline for Processing Personal Genomics Data in the Cloud

    PubMed Central

    Karczewski, Konrad J.; Fernald, Guy Haskin; Martin, Alicia R.; Snyder, Michael; Tatonetti, Nicholas P.; Dudley, Joel T.

    2014-01-01

    The increasing public availability of personal complete genome sequencing data has ushered in an era of democratized genomics. However, read mapping and variant calling software is constantly improving and individuals with personal genomic data may prefer to customize and update their variant calls. Here, we describe STORMSeq (Scalable Tools for Open-Source Read Mapping), a graphical interface cloud computing solution that does not require a parallel computing environment or extensive technical experience. This customizable and modular system performs read mapping, read cleaning, and variant calling and annotation. At present, STORMSeq costs approximately $2 and 5–10 hours to process a full exome sequence and $30 and 3–8 days to process a whole genome sequence. We provide this open-access and open-source resource as a user-friendly interface in Amazon EC2. PMID:24454756

  9. Balancing glycolysis and mitochondrial OXPHOS: lessons from the hematopoietic system and exercising muscles.

    PubMed

    Haran, Michal; Gross, Atan

    2014-11-01

    Living organisms require a constant supply of safe and efficient energy to maintain homeostasis and to allow locomotion of single cells, tissues and the entire organism. The source of energy can be glycolysis, a simple series of enzymatic reactions in the cytosol, or a much more complex process in the mitochondria, oxidative phosphorylation (OXPHOS). In this review we will examine how does the organism balance its source of energy in two seemingly distinct and unrelated processes: hematopoiesis and exercise. In both processes we will show the importance of the metabolic program and its regulation. We will also discuss the importance of oxygen availability not as a sole determinant, but in the context of the nutrient and cellular state, and address the emerging role of lactate as an energy source and signaling molecule in health and disease. Copyright © 2014 Elsevier B.V. and Mitochondria Research Society. All rights reserved.

  10. Large-scale production of human pluripotent stem cell derived cardiomyocytes.

    PubMed

    Kempf, Henning; Andree, Birgit; Zweigerdt, Robert

    2016-01-15

    Regenerative medicine, including preclinical studies in large animal models and tissue engineering approaches as well as innovative assays for drug discovery, will require the constant supply of hPSC-derived cardiomyocytes and other functional progenies. Respective cell production processes must be robust, economically viable and ultimately GMP-compliant. Recent research has enabled transition of lab scale protocols for hPSC expansion and cardiomyogenic differentiation towards more controlled processing in industry-compatible culture platforms. Here, advanced strategies for the cultivation and differentiation of hPSCs will be reviewed by focusing on stirred bioreactor-based techniques for process upscaling. We will discuss how cardiomyocyte mass production might benefit from recent findings such as cell expansion at the cardiovascular progenitor state. Finally, remaining challenges will be highlighted, specifically regarding three dimensional (3D) hPSC suspension culture and critical safety issues ahead of clinical translation. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Electric power processing, distribution and control for advanced aerospace vehicles.

    NASA Technical Reports Server (NTRS)

    Krausz, A.; Felch, J. L.

    1972-01-01

    The results of a current study program to develop a rational basis for selection of power processing, distribution, and control configurations for future aerospace vehicles including the Space Station, Space Shuttle, and high-performance aircraft are presented. Within the constraints imposed by the characteristics of power generation subsystems and the load utilization equipment requirements, the power processing, distribution and control subsystem can be optimized by selection of the proper distribution voltage, frequency, and overload/fault protection method. It is shown that, for large space vehicles which rely on static energy conversion to provide electric power, high-voltage dc distribution (above 100 V dc) is preferable to conventional 28 V dc and 115 V ac distribution per MIL-STD-704A. High-voltage dc also has advantages over conventional constant frequency ac systems in many aircraft applications due to the elimination of speed control, wave shaping, and synchronization equipment.

  12. Switched-capacitor realization of presynaptic short-term-plasticity and stop-learning synapses in 28 nm CMOS.

    PubMed

    Noack, Marko; Partzsch, Johannes; Mayr, Christian G; Hänzsche, Stefan; Scholze, Stefan; Höppner, Sebastian; Ellguth, Georg; Schüffny, Rene

    2015-01-01

    Synaptic dynamics, such as long- and short-term plasticity, play an important role in the complexity and biological realism achievable when running neural networks on a neuromorphic IC. For example, they endow the IC with an ability to adapt and learn from its environment. In order to achieve the millisecond to second time constants required for these synaptic dynamics, analog subthreshold circuits are usually employed. However, due to process variation and leakage problems, it is almost impossible to port these types of circuits to modern sub-100 nm technologies. In contrast, we present a neuromorphic system in a 28 nm CMOS process that employs switched capacitor (SC) circuits to implement 128 short-term plasticity presynapses as well as 8192 stop-learning synapses. The neuromorphic system consumes an area of 0.36 mm^2 and runs at a power consumption of 1.9 mW. The circuit makes use of a technique for minimizing leakage effects allowing for real-time operation with time constants up to several seconds. Since we rely on SC techniques for all calculations, the system is composed of only generic mixed-signal building blocks. These generic building blocks make the system easy to port between technologies and the large digital circuit part inherent in an SC system benefits fully from technology scaling.

  13. Effects of Simulated Weightlessness on Mammalian Development. Part 2: Meiotic Maturation of Mouse Oocytes During Clinostat Rotation

    NASA Technical Reports Server (NTRS)

    Wolgemuth, D. J.; Grills, G. S.

    1985-01-01

    In order to understand the role of gravity in basic cellular processes that are important during development, the effects of a simulated microgravity environment on mammalian gametes and early embryos cultured in vitro are examined. A microgravity environment is simulated by use of a clinostat, which essentially reorients cells relative to the gravity vector. Initial studies have focused on assessing the effects of clinostat rotation on the meiotic progression of mouse oocytes. Modifications to the clinostat centered on providing the unique in vitro culture requirements of mammalian oocytes and embryos: a temperature of 37 °C, constant humidity, and a 5% CO2-in-air environment. The oocytes are observed under the dissecting microscope for polar body formation and gross morphological appearance. They are then processed for cytogenetic analysis.

  14. Iterative categorization (IC): a systematic technique for analysing qualitative data.

    PubMed

    Neale, Joanne

    2016-06-01

    The processes of analysing qualitative data, particularly the stage between coding and publication, are often vague and/or poorly explained within addiction science and research more broadly. A simple but rigorous and transparent technique for analysing qualitative textual data, developed within the field of addiction, is described. The technique, iterative categorization (IC), is suitable for use with inductive and deductive codes and can support a range of common analytical approaches, e.g. thematic analysis, Framework, constant comparison, analytical induction, content analysis, conversational analysis, discourse analysis, interpretative phenomenological analysis and narrative analysis. Once the data have been coded, the only software required is a standard word processing package. Worked examples are provided. © 2016 The Authors. Addiction published by John Wiley & Sons Ltd on behalf of Society for the Study of Addiction.

  15. Renewal and change for health care executives.

    PubMed

    Burke, G C; Bice, M O

    1991-01-01

    Health care executives must consider renewal and change within their own lives if they are to breathe life into their own institutions. Yet numerous barriers to executive renewal exist, including time pressures, fatigue, cultural factors, and trustee attitudes. This essay discusses such barriers and suggests approaches that health care executives may consider for programming renewal into their careers. These include self-assessment for professional and personal goals, career or job change, process vs. outcome considerations, solitude, networking, lifelong education, surrounding oneself with change agents, business travel and sabbaticals, reading outside the field, physical exercise, mentoring, learning from failures, a sense of humor, spiritual reflection, and family and friends. Renewal is a continuous, lifelong process requiring constant learning. Individual executives would do well to develop a framework for renewal in their careers and organizations.

  16. A Methodology for Quantifying Certain Design Requirements During the Design Phase

    NASA Technical Reports Server (NTRS)

    Adams, Timothy; Rhodes, Russel

    2005-01-01

    A methodology for developing and balancing quantitative design requirements for safety, reliability, and maintainability has been proposed. Conceived as the basis of a more rational approach to the design of spacecraft, the methodology would also be applicable to the design of automobiles, washing machines, television receivers, or almost any other commercial product. Heretofore, it has been common practice to start by determining the requirements for reliability of elements of a spacecraft or other system to ensure a given design life for the system. Next, safety requirements are determined by assessing the total reliability of the system and adding redundant components and subsystems necessary to attain safety goals. As thus described, common practice leaves the maintainability burden to fall to chance; therefore, there is no control of recurring costs or of the responsiveness of the system. The means that have been used in assessing maintainability have been oriented toward determining the logistical sparing of components so that the components are available when needed. The process established for developing and balancing quantitative requirements for safety (S), reliability (R), and maintainability (M) derives and integrates NASA s top-level safety requirements and the controls needed to obtain program key objectives for safety and recurring cost (see figure). Being quantitative, the process conveniently uses common mathematical models. Even though the process is shown as being worked from the top down, it can also be worked from the bottom up. This process uses three math models: (1) the binomial distribution (greater-than-or-equal-to case), (2) reliability for a series system, and (3) the Poisson distribution (less-than-or-equal-to case). The zero-fail case for the binomial distribution approximates the commonly known exponential distribution or "constant failure rate" distribution. Either model can be used. The binomial distribution was selected for modeling flexibility because it conveniently addresses both the zero-fail and failure cases. The failure case is typically used for unmanned spacecraft as with missiles.
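
    The three math models can be sketched in a few lines; the probabilities and reliabilities below are assumed numbers for illustration, not requirements from the methodology.

      from math import exp, factorial

      # Sketch of the three simple models named above; all numbers are assumed.
      p_fail, n = 0.01, 10           # per-demand failure probability, number of demands

      # 1) Binomial, zero-fail case: probability of zero failures in n demands,
      #    compared with the constant-failure-rate (exponential) approximation.
      p_zero_fail = (1.0 - p_fail) ** n
      p_zero_fail_exp = exp(-n * p_fail)
      print(p_zero_fail, p_zero_fail_exp)   # nearly identical for small p_fail

      # 2) Reliability of a series system: product of element reliabilities.
      r_series = 1.0
      for r in (0.999, 0.995, 0.990):
          r_series *= r
      print(r_series)

      # 3) Poisson, less-than-or-equal-to case: P(at most m failures) for mean count lam.
      lam, m = 0.5, 1
      print(sum(exp(-lam) * lam ** k / factorial(k) for k in range(m + 1)))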

  17. Compensation Effect in the Electrical Conduction Process in Some Nucleic Acid Base Complexes with Proflavine Dye

    NASA Astrophysics Data System (ADS)

    Sarkar, D.; Misra, T. N.

    1988-11-01

    Compensation behaviour has been found in the electrical conduction process in proflavine complexes with the nucleic acid bases guanine, adenine, uracil and thymine. At low dye concentrations these semiconducting complexes follow a three-constant compensation equation σ(T) = σ0' exp(E/2kT0) exp(-E/2kT), where σ0' and T0 are constants for a specific base and the other notations have their usual meaning. Consistent values of these constants have been obtained by different experimental methods of evaluation. These results suggest that the compensation effect has a physical origin.
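
    A minimal numerical sketch of the three-constant compensation law quoted above: conductivity curves computed for different activation energies E all pass through the same value at T = T0, which is the signature of compensation behaviour. The values of σ0', T0 and E used here are arbitrary placeholders, not the measured constants.

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K

def conductivity(T, E, sigma0_prime=1.0, T0=400.0):
    """Three-constant compensation law: sigma(T) = sigma0' * exp(E/2kT0) * exp(-E/2kT)."""
    return sigma0_prime * np.exp(E / (2 * K_B * T0)) * np.exp(-E / (2 * K_B * T))

# Illustrative check: at T = T0 the curves for different activation energies E
# all collapse onto sigma0', i.e. the isokinetic (compensation) point.
T = 400.0
for E in (0.4, 0.6, 0.8):   # eV, hypothetical values
    print(E, conductivity(T, E))
```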

  18. Peroxone mineralization of chemical oxygen demand for direct potable water reuse: Kinetics and process control.

    PubMed

    Wu, Tingting; Englehardt, James D

    2015-04-15

    Mineralization of organics in secondary effluent by the peroxone process was studied at a direct potable water reuse research treatment system serving an occupied four-bedroom, four bath university residence hall apartment. Organic concentrations were measured as chemical oxygen demand (COD) and kinetic runs were monitored at varying O3/H2O2 dosages and ratios. COD degradation could be accurately described as the parallel pseudo-1st order decay of rapidly and slowly-oxidizable fractions, and effluent COD was reduced to below the detection limit (<0.7 mg/L). At dosages ≥4.6 mg L(-1) h(-1), an O3/H2O2 mass ratio of 3.4-3.8, and initial COD <20 mg/L, a simple first order decay was indicated for both single-passed treated wastewater and recycled mineral water, and a relationship is proposed and demonstrated to estimate the pseudo-first order rate constant for design purposes. At this O3/H2O2 mass ratio, ORP and dissolved ozone were found to be useful process control indicators for monitoring COD mineralization in secondary effluent. Moreover, an average second order rate constant for OH oxidation of secondary effluent organics (measured as MCOD) was found to be 1.24 × 10(7) ± 0.64 × 10(7) M(-1) s(-1). The electric energy demand of the peroxone process is estimated at 1.73-2.49 kW h for removal of one log COD in 1 m(3) secondary effluent, comparable to the energy required for desalination of medium strength seawater. Advantages/disadvantages of the two processes for municipal wastewater reuse are discussed. Copyright © 2015 Elsevier Ltd. All rights reserved.
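
    The parallel pseudo-first-order description of COD decay mentioned in the abstract can be sketched as the sum of two exponentials, one for the rapidly and one for the slowly oxidizable fraction. The parameters below are illustrative assumptions, not values fitted in the study.

```python
import numpy as np

def cod_remaining(t_hours, cod0, f_fast, k_fast, k_slow):
    """Parallel pseudo-first-order decay of rapidly and slowly oxidizable COD fractions."""
    return cod0 * (f_fast * np.exp(-k_fast * t_hours)
                   + (1.0 - f_fast) * np.exp(-k_slow * t_hours))

# Hypothetical parameters (not taken from the paper): 18 mg/L initial COD,
# 60% rapidly oxidizable, k_fast = 1.2 1/h, k_slow = 0.15 1/h.
t = np.linspace(0, 12, 7)
print(cod_remaining(t, 18.0, 0.6, 1.2, 0.15))
```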

  19. Calculation of the rate constant for state-selected recombination of H+O2(v) as a function of temperature and pressure

    NASA Astrophysics Data System (ADS)

    Teitelbaum, Heshel; Caridade, Pedro J. S. B.; Varandas, António J. C.

    2004-06-01

    Classical trajectory calculations using the MERCURY/VENUS code have been carried out on the H+O2 reactive system using the DMBE-IV potential energy surface. The vibrational quantum number and the temperature were selected over the ranges v=0 to 15, and T=300 to 10 000 K, respectively. All other variables were averaged. Rate constants were determined for the energy transfer process, H+O2(v)-->H+O2(v''), for the bimolecular exchange process, H+O2(v)-->OH(v')+O, and for the dissociative process, H+O2(v)-->H+O+O. The dissociative process appears to be a mere extension of the process of transferring large amounts of energy. State-to-state rate constants are given for the exchange reaction, and they are in reasonable agreement with previous results, while the energy transfer and dissociative rate constants have never been reported previously. The lifetime distributions of the HO2 complex, calculated as a function of v and temperature, were used as a basis for determining the relative contributions of various vibrational states of O2 to the thermal rate coefficients for recombination at various pressures. This novel approach, based on the complex's ability to survive until it collides in a secondary process with an inert gas, is used here for the first time. Complete falloff curves for the recombination of H+O2 are also calculated over a wide range of temperatures and pressures. The combination of the two separate studies results in pressure- and temperature-dependent rate constants for H+O2(v)(+Ar)⇄HO2(+Ar). It is found that, unlike the exchange reaction, vibrational and rotational-translational energy are liabilities in promoting recombination.

  20. Rock-weathering rates as functions of time

    USGS Publications Warehouse

    Colman, Steven M.

    1981-01-01

    The scarcity of documented numerical relations between rock weathering and time has led to a common assumption that rates of weathering are linear. This assumption has been strengthened by studies that have calculated long-term average rates. However, little theoretical or empirical evidence exists to support linear rates for most chemical-weathering processes, with the exception of congruent dissolution processes. The few previous studies of rock-weathering rates that contain quantitative documentation of the relation between chemical weathering and time suggest that the rates of most weathering processes decrease with time. Recent studies of weathering rinds on basaltic and andesitic stones in glacial deposits in the western United States also clearly demonstrate that rock-weathering processes slow with time. Some weathering processes appear to conform to exponential functions of time, such as the square-root time function for hydration of volcanic glass, which conforms to the theoretical predictions of diffusion kinetics. However, weathering of mineralogically heterogeneous rocks involves complex physical and chemical processes that generally can be expressed only empirically, commonly by way of logarithmic time functions. Incongruent dissolution and other weathering processes produce residues, which are commonly used as measures of weathering. These residues appear to slow movement of water to unaltered material and impede chemical transport away from it. If weathering residues impede weathering processes, then rates of weathering and rates of residue production are inversely proportional to some function of the residue thickness. This results in simple mathematical analogs for weathering that imply nonlinear time functions. The rate of weathering becomes constant only when an equilibrium thickness of the residue is reached. Because weathering residues are relatively stable chemically, and because physical removal of residues below the ground surface is slight, many weathering features require considerable time to reach constant rates of change. For weathering rinds on volcanic stones in the western United States, this time is at least 0.5 m.y. © 1981.
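
    The residue-limited behaviour described above can be illustrated with the simplest mathematical analog: if the weathering rate is inversely proportional to the residue (rind) thickness, dD/dt = k/D, then D grows as the square root of time and the rate slows as the rind thickens. The rate parameter below is a hypothetical value for illustration only.

```python
import numpy as np

def rind_thickness(t, k, d0=0.0):
    """Residue-limited weathering: dD/dt = k / D  =>  D(t) = sqrt(2*k*t + D0**2).
    The weathering rate dD/dt therefore slows as the residue (rind) thickens."""
    return np.sqrt(2.0 * k * t + d0**2)

t = np.array([0.1, 0.2, 0.5, 1.0])   # time in Myr, illustrative
k = 0.005                            # mm^2 per Myr, hypothetical rate parameter
d = rind_thickness(t, k)
rate = k / d                         # instantaneous weathering rate
print(d, rate)
```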

  1. Algorithm-Based Motion Magnification for Video Processing in Urological Laparoscopy.

    PubMed

    Adams, Fabian; Schoelly, Reto; Schlager, Daniel; Schoenthaler, Martin; Schoeb, Dominik S; Wilhelm, Konrad; Hein, Simon; Wetterauer, Ulrich; Miernik, Arkadiusz

    2017-06-01

    Minimally invasive surgery is in constant further development and has replaced many conventional operative procedures. If vascular structure movement could be detected during these procedures, it could reduce the risk of vascular injury and conversion to open surgery. The recently proposed motion-amplifying algorithm, Eulerian Video Magnification (EVM), has been shown to substantially enhance minimal object changes in digitally recorded video that are barely perceptible to the human eye. We adapted and examined this technology for use in urological laparoscopy. Video sequences of routine urological laparoscopic interventions were recorded and further processed using spatial decomposition and filtering algorithms. The freely available EVM algorithm was investigated for its usability in real-time processing. In addition, a new image processing technology, the CRS iimotion Motion Magnification (CRSMM) algorithm, was specifically adjusted for endoscopic requirements, applied, and validated by our working group. Using EVM, no significant motion enhancement could be detected without severe impairment of the image resolution, motion, and color presentation. The CRSMM algorithm significantly improved image quality in terms of motion enhancement. In particular, the pulsation of vascular structures could be displayed more accurately than in EVM. Motion magnification image processing technology has the potential for clinical importance as a video-optimizing modality in endoscopic and laparoscopic surgery. Barely detectable (micro)movements can be visualized using this noninvasive marker-free method. Despite these optimistic results, the technology requires considerable further technical development and clinical tests.
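
    For orientation, a much-simplified sketch of the Eulerian-style amplification idea is shown below: each pixel's intensity time series is band-pass filtered in time and the filtered band is added back with a gain. The full EVM and CRSMM pipelines also use spatial (pyramid) decomposition, which is omitted here; the frame rate, band limits, and gain are arbitrary example values.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def magnify_motion(frames, fps, f_lo, f_hi, alpha):
    """Simplified Eulerian-style magnification: temporally band-pass filter each
    pixel's intensity time series and add the amplified band back to the video.
    `frames` is a (num_frames, height, width) float array."""
    b, a = butter(2, [f_lo / (fps / 2), f_hi / (fps / 2)], btype="band")
    band = filtfilt(b, a, frames, axis=0)   # temporal filtering, pixel by pixel
    return frames + alpha * band

# Synthetic example: a 1 Hz intensity pulsation is amplified tenfold.
fps, n = 30, 150
t = np.arange(n) / fps
frames = 0.5 + 0.01 * np.sin(2 * np.pi * 1.0 * t)[:, None, None] * np.ones((n, 8, 8))
out = magnify_motion(frames, fps, 0.5, 2.0, alpha=10.0)
print(out.shape)
```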

  2. Impact of agile methodologies on team capacity in automotive radio-navigation projects

    NASA Astrophysics Data System (ADS)

    Prostean, G.; Hutanu, A.; Volker, S.

    2017-01-01

    The development processes used in automotive radio-navigation projects are constantly under adaptation pressure. While the software development models are based on automotive production processes, the integration of peripheral components into an automotive system triggers a high number of requirement modifications. The use of traditional development models in the automotive industry pushes a team's development capacity to its limits. The root cause lies in the inflexibility of current processes and their limited ability to adapt. This paper addresses a new project management approach for the development of radio-navigation projects. Understanding the weaknesses of currently used models helped us develop and integrate agile methodologies into the traditional development model structure. In the first part we focus on change management methods to reduce the inflow of requests for change. Established change management risk analysis processes enable project management to judge the impact of a requirement change and also give the project time to implement some changes. However, in large automotive radio-navigation projects the time saved is not enough to implement the large number of changes submitted to the project. In the second part of this paper we focus on increasing team capacity by integrating agile methodologies into the traditional model at critical project phases. The overall objective of this paper is to demonstrate the need for process adaptation in order to resolve project team capacity bottlenecks.

  3. Water evaporation on highly viscoelastic polymer surfaces.

    PubMed

    Pu, Gang; Severtson, Steven J

    2012-07-03

    Results are reported for a study on the evaporation of water droplets from a highly viscoelastic acrylic polymer surface. These are contrasted with those collected for the same measurements carried out on polydimethylsiloxane (PDMS). For PDMS, the evaporation process involves the expected multistep process including constant drop area, constant contact angle, and finally a combination of these steps until the liquid is gone. In contrast, water evaporation from the acrylic polymer shows a constant drop area mode throughout. Furthermore, during the evaporation process, the drop area actually expands on the acrylic polymer. The single mode evaporation process is consistent with formation of wetting structures, which cannot be propagated by the capillary forces. Expansion of the drop area is attributed to the influence of the drop capillary pressure. Furthermore, the rate of drop area expansion is shown to be dependent on the thickness of the polymer film.

  4. Nurses' reported thinking during medication administration.

    PubMed

    Eisenhauer, Laurel A; Hurley, Ann C; Dolan, Nancy

    2007-01-01

    To document nurses' reported thinking processes during medication administration before and after implementation of point-of-care technology. Semistructured interviews and real-time tape recordings were used to document the thinking processes of 40 nurses practicing in inpatient care units in a large tertiary care teaching hospital in the northeastern US. Content analysis resulted in identification of 10 descriptive categories of nurses' thinking: communication, dose-time, checking, assessment, evaluation, teaching, side effects, work arounds, anticipating problem solving, and drug administration. Situations requiring judgment in dosage, timing, or selection of specific medications (e.g., pain management, titration of antihypertensives) provided the most explicit data about nurses' use of critical thinking and clinical judgment. A key element was nurses' constant professional vigilance to ensure that patients received their appropriate medications. Nurses' thinking processes extended beyond rules and procedures and were based on patient data and interdisciplinary professional knowledge to provide safe and effective care. Identification of thinking processes can help nurses to explain the professional expertise inherent in medication administration beyond the technical application of the "5 rights."

  5. Method and apparatus for thermal processing of semiconductor substrates

    DOEpatents

    Griffiths, Stewart K.; Nilson, Robert H.; Mattson, Brad S.; Savas, Stephen E.

    2002-01-01

    An improved apparatus and method for thermal processing of semiconductor wafers. The apparatus and method provide the temperature stability and uniformity of a conventional batch furnace as well as the processing speed and reduced time-at-temperature of a lamp-heated rapid thermal processor (RTP). Individual wafers are rapidly inserted into and withdrawn from a furnace cavity held at a nearly constant and isothermal temperature. The speeds of insertion and withdrawal are sufficiently large to limit thermal stresses and thereby reduce or prevent plastic deformation of the wafer as it enters and leaves the furnace. By processing the semiconductor wafer in a substantially isothermal cavity, the wafer temperature and spatial uniformity of the wafer temperature can be ensured by measuring and controlling only temperatures of the cavity walls. Further, peak power requirements are very small compared to lamp-heated RTPs because the cavity temperature is not cycled and the thermal mass of the cavity is relatively large. Increased speeds of insertion and/or removal may also be used with non-isothermal furnaces.

  6. Method and apparatus for thermal processing of semiconductor substrates

    DOEpatents

    Griffiths, Stewart K.; Nilson, Robert H.; Mattson, Brad S.; Savas, Stephen E.

    2000-01-01

    An improved apparatus and method for thermal processing of semiconductor wafers. The apparatus and method provide the temperature stability and uniformity of a conventional batch furnace as well as the processing speed and reduced time-at-temperature of a lamp-heated rapid thermal processor (RTP). Individual wafers are rapidly inserted into and withdrawn from a furnace cavity held at a nearly constant and isothermal temperature. The speeds of insertion and withdrawal are sufficiently large to limit thermal stresses and thereby reduce or prevent plastic deformation of the wafer as it enters and leaves the furnace. By processing the semiconductor wafer in a substantially isothermal cavity, the wafer temperature and spatial uniformity of the wafer temperature can be ensured by measuring and controlling only temperatures of the cavity walls. Further, peak power requirements are very small compared to lamp-heated RTPs because the cavity temperature is not cycled and the thermal mass of the cavity is relatively large. Increased speeds of insertion and/or removal may also be used with non-isothermal furnaces.

  7. The role of shared visual information for joint action coordination.

    PubMed

    Vesper, Cordula; Schmitz, Laura; Safra, Lou; Sebanz, Natalie; Knoblich, Günther

    2016-08-01

    Previous research has identified a number of coordination processes that enable people to perform joint actions. But what determines which coordination processes joint action partners rely on in a given situation? The present study tested whether varying the shared visual information available to co-actors can trigger a shift in coordination processes. Pairs of participants performed a movement task that required them to synchronously arrive at a target from separate starting locations. When participants in a pair received only auditory feedback about the time their partner reached the target they held their movement duration constant to facilitate coordination. When they received additional visual information about each other's movements they switched to a fundamentally different coordination process, exaggerating the curvature of their movements to communicate their arrival time. These findings indicate that the availability of shared perceptual information is a major factor in determining how individuals coordinate their actions to obtain joint outcomes. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  8. 40 CFR 1039.120 - What emission-related warranty requirements apply to me?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... of operation and years, whichever comes first. You may offer an emission-related warranty more ... Any speed: 1,500 hours or two years, whichever comes first. Constant speed, 19 ≤ kW ..., whichever comes first. Constant speed, 19 ≤ kW < 37, less than 3,000 rpm: 3 ...

  9. The method of constant stimuli is inefficient

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Fitzhugh, Andrew

    1990-01-01

    Simpson (1988) has argued that the method of constant stimuli is as efficient as adaptive methods of threshold estimation and has supported this claim with simulations. It is shown that Simpson's simulations are not a reasonable model of the experimental process and that more plausible simulations confirm that adaptive methods are much more efficient than the method of constant stimuli.
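
    A compact simulation in the spirit of that comparison (not Simpson's or the authors' code) is sketched below: a logistic observer is probed either with a simple 1-up/1-down staircase or with a fixed set of levels, and the spread of the resulting threshold estimates is compared for the same number of trials. The psychometric function, levels, and trial counts are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def respond(level, thresh=0.0, slope=1.0):
    """Simulated yes/no observer: P('yes') is a logistic function of stimulus level."""
    return rng.random() < 1.0 / (1.0 + np.exp(-(level - thresh) * slope))

def staircase(n_trials=60):
    """Simple 1-up/1-down adaptive track; it converges toward the 50% point."""
    level, step, levels = 3.0, 0.5, []
    for _ in range(n_trials):
        levels.append(level)
        level += -step if respond(level) else step
    return np.mean(levels[20:])                  # average after the initial descent

def constant_stimuli(n_trials=60):
    """Method of constant stimuli: fixed levels, threshold from a straight-line fit."""
    levels = np.linspace(-3, 3, 6)
    p = np.array([np.mean([respond(l) for _ in range(n_trials // 6)]) for l in levels])
    slope, intercept = np.polyfit(levels, p, 1)  # p ~ intercept + slope*level
    return (0.5 - intercept) / slope             # level where the fit crosses 50%

sc = [staircase() for _ in range(200)]
cs = [constant_stimuli() for _ in range(200)]
print(np.std(sc), np.std(cs))   # the adaptive estimates typically scatter less
```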

  10. Effects of specific inhibitors on anammox and denitrification in marine sediments.

    PubMed

    Jensen, Marlene Mark; Thamdrup, Bo; Dalsgaard, Tage

    2007-05-01

    The effects of three metabolic inhibitors (acetylene, methanol, and allylthiourea [ATU]) on the pathways of N2 production were investigated by using short anoxic incubations of marine sediment with a 15N isotope technique. Acetylene inhibited ammonium oxidation through the anammox pathway as the oxidation rate decreased exponentially with increasing acetylene concentration; the rate decay constant was 0.10+/-0.02 microM-1, and there was 95% inhibition at approximately 30 microM. Nitrous oxide reduction, the final step of denitrification, was not sensitive to acetylene concentrations below 10 microM. However, nitrous oxide reduction was inhibited by higher concentrations, and the sensitivity was approximately one-half the sensitivity of anammox (decay constant, 0.049+/-0.004 microM-1; 95% inhibition at approximately 70 microM). Methanol specifically inhibited anammox with a decay constant of 0.79+/-0.12 mM-1, and thus 3 to 4 mM methanol was required for nearly complete inhibition. This level of methanol stimulated denitrification by approximately 50%. ATU did not have marked effects on the rates of anammox and denitrification. The profile of inhibitor effects on anammox agreed with the results of studies of the process in wastewater bioreactors, which confirmed the similarity between the anammox bacteria in bioreactors and natural environments. Acetylene and methanol can be used to separate anammox and denitrification, but the effects of these compounds on nitrification limit their use in studies of these processes in systems where nitrification is an important source of nitrate. The observed differential effects of acetylene and methanol on anammox and denitrification support our current understanding of the two main pathways of N2 production in marine sediments and the use of 15N isotope methods for their quantification.
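
    The exponential inhibition model implied by the reported decay constants can be written down directly. The sketch below back-computes the inhibitor concentration needed for a chosen degree of inhibition and reproduces the approximate values quoted in the abstract (about 30 microM acetylene and 3 to 4 mM methanol for 95% inhibition of anammox).

```python
import numpy as np

def residual_activity(conc, k_decay):
    """Exponential inhibition model: activity relative to the uninhibited rate."""
    return np.exp(-k_decay * conc)

def conc_for_inhibition(frac_inhibited, k_decay):
    """Inhibitor concentration giving a chosen fractional inhibition."""
    return -np.log(1.0 - frac_inhibited) / k_decay

# Using the decay constants reported in the abstract:
print(conc_for_inhibition(0.95, 0.10))    # anammox vs acetylene -> ~30 uM
print(conc_for_inhibition(0.95, 0.79))    # anammox vs methanol  -> ~3.8 mM
```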

  11. Kinetics of the oxidation of cylindrospermopsin and anatoxin-a with chlorine, monochloramine and permanganate.

    PubMed

    Rodríguez, Eva; Sordo, Ana; Metcalf, James S; Acero, Juan L

    2007-05-01

    Cyanobacteria produce toxins that may contaminate drinking water sources. Among others, the presence of the alkaloid toxins cylindrospermopsin (CYN) and anatoxin-a (ANTX) constitutes a considerable threat to human health due to the acute and chronic toxicity of these compounds. In the present study, previously unreported second-order rate constants for the reactions of CYN and ANTX with chlorine and monochloramine, and of CYN with potassium permanganate, were determined and the influence of pH and temperature was established for the most reactive cases. It was found that the reactivity of CYN with chlorine presents a maximum at pH 7 (rate constant of 1265 M(-1)s(-1)). However, the oxidation of CYN with chloramine and permanganate are rather slow processes, with rate constants <1 M(-1) s(-1). The first chlorination product of CYN was found to be 5-chloro-CYN (5-Cl-CYN), which reacts with chlorine 10-20 times slower than the parent compound. The reactivity of ANTX with chlorine and chloramines is also very low (k < 1 M(-1) s(-1)). The elimination of CYN and ANTX in surface water was also investigated. A chlorine dose of 1.5 mg l(-1) was enough to oxidize CYN almost completely. However, 3 mg l(-1) of chlorine was able to remove only 8% of ANTX, leading to a total formation of trihalomethanes (TTHM) at a concentration of 150 microg l(-1). Therefore, chlorination is a feasible option for CYN degradation during oxidation and disinfection processes but not for ANTX removal. The permanganate dose required for CYN oxidation is very high and not applicable in waterworks.
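
    With the oxidant in large excess, the second-order rate constants reported above translate into pseudo-first-order half-lives. The sketch below makes that conversion for CYN and chlorine, assuming the 1.5 mg/l dose is expressed as Cl2 (70.9 g/mol); the resulting half-life of roughly half a minute is consistent with the near-complete oxidation reported.

```python
import numpy as np

def pseudo_first_order_halflife(k2, oxidant_mg_per_l, molar_mass_g):
    """Half-life of a toxin when the oxidant is in large excess:
    k_obs = k2 * [oxidant],  t_half = ln(2) / k_obs."""
    oxidant_molar = oxidant_mg_per_l / 1000.0 / molar_mass_g
    k_obs = k2 * oxidant_molar
    return np.log(2.0) / k_obs

# CYN + chlorine at pH 7 (k2 = 1265 1/(M*s) from the abstract); a 1.5 mg/l dose
# expressed as Cl2 (70.9 g/mol, an assumption about the dose basis) gives a
# half-life of roughly half a minute.
print(pseudo_first_order_halflife(1265.0, 1.5, 70.9))   # ~26 s
```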

  12. Rotor Re-Design for the SSME Fuel Flowmeter

    NASA Technical Reports Server (NTRS)

    Marcu, Bogdan

    1999-01-01

    The present report describes the process of redesigning the rotor for the SSME Fuel Flowmeter. The new design addresses the specific requirement of a lower rotor speed, which would allow SSME operation at 115% rated power level without reaching a blade excitation by the wakes behind the hexagonal flow straightener upstream at frequencies close to the blade natural frequency. A series of calculations combining fleet flowmeter test data, airfoil fluid dynamics and CFD simulations of flow patterns behind the flowmeter's hexagonal straightener has led to a blade twist design alpha = alpha (radius) targeting a kf constant of 0.8256. The kf constant relates the fuel volume flow to the flowmeter rotor speed; for this particular value the design point is 17685 GPM at 3650 RPM. Based on this angle distribution, two actual blade designs were developed. A first design using the same blade airfoil as the original design targeted the new kf value only. A second design using a variable blade chord length and airfoil relative thickness targeted simultaneously the new kf value and an optimum blade design destined to provide smooth and stable operation and a significant increase in the blade natural frequency associated with the first bending mode, such that a comfortable margin could be obtained at 115% RPL. The second design is a result of a concurrent engineering process, during which several iterations were made in order to achieve a targeted blade natural frequency associated with the first bending mode of 1300 Hz. Preliminary water-flow test results indicate a kf value of 0.8179 for the first design, which is within 1% of the target value. The second design rotor shows a natural frequency associated with the first bending mode of 1308 Hz, and a water-flow calibration constant of kf = 0.8169.

  13. Prediction of Rate Constant for Supramolecular Systems with Multiconfigurations.

    PubMed

    Guo, Tao; Li, Haiyan; Wu, Li; Guo, Zhen; Yin, Xianzhen; Wang, Caifen; Sun, Lixin; Shao, Qun; Gu, Jingkai; York, Peter; Zhang, Jiwen

    2016-02-25

    The control of supramolecular systems requires a thorough understanding of their dynamics, especially on a molecular level. It is extremely difficult to determine, by experimental techniques, the thermokinetic parameters of supramolecular systems such as drug-cyclodextrin complexes with fast association/dissociation processes. In this paper, molecular modeling combined with novel mathematical relationships integrating the thermodynamic/thermokinetic parameters of a series of isomeric multiconfigurations to predict the overall parameters in a range of pH values has been employed to study supramolecular dynamics at the molecular level. A suitable form of Eyring's equation was derived and a two-stage model was introduced. The new approach enabled accurate prediction of the apparent dissociation/association (k(off)/k(on)) and unbinding/binding (k-r/kr) rate constants of the ubiquitous multiconfiguration complexes of the supramolecular system. Pyronine Y (PY) was used as a model system for the validation of the presented method. Interestingly, the predicted k(off) value ((40 ± 1) × 10(5) s(-1), 298 K) of PY is largely in agreement with that previously determined by fluorescence correlation spectroscopy ((5 ± 3) × 10(5) s(-1), 298 K). Moreover, the k(off)/k(on) and k-r/kr for flurbiprofen-β-cyclodextrin and ibuprofen-β-cyclodextrin systems were also predicted; the results suggest that the association processes are diffusion-controlled. The methodology is considered to be especially useful in the design and selection of excipients for a supramolecular system with preferred association and dissociation rate constants and in understanding their mechanisms. It is believed that this new approach could be applicable to a wide range of ligand-receptor supramolecular systems and will surely help in understanding their complex mechanism.
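
    A minimal sketch of the kind of calculation involved, assuming a simple population-weighted average over configurations rather than the paper's two-stage model: each configuration's rate constant follows from an Eyring-type expression, and the apparent rate is the weighted sum. The free-energy barriers and populations below are hypothetical placeholders.

```python
import numpy as np

K_B = 1.380649e-23    # Boltzmann constant, J/K
H   = 6.62607015e-34  # Planck constant, J*s
R   = 8.314           # gas constant, J/(mol*K)

def eyring_rate(delta_g_kj_mol, temp_k=298.0, kappa=1.0):
    """Eyring equation: k = kappa * (kB*T/h) * exp(-dG_activation / RT)."""
    return kappa * (K_B * temp_k / H) * np.exp(-delta_g_kj_mol * 1e3 / (R * temp_k))

def weighted_overall_rate(rates, populations):
    """Population-weighted apparent rate constant over several configurations
    (a simplifying assumption, not the two-stage model used in the paper)."""
    w = np.asarray(populations) / np.sum(populations)
    return float(np.sum(w * np.asarray(rates)))

# Hypothetical barriers (kJ/mol) and populations for three configurations:
ks = [eyring_rate(g) for g in (36.0, 38.0, 40.0)]
print(ks, weighted_overall_rate(ks, [0.5, 0.3, 0.2]))
```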

  14. Enhancement of dielectric constant at percolation threshold in CaCu3Ti4O12 ceramic fabricated by both solid state and sol-gel process

    NASA Astrophysics Data System (ADS)

    Mukherjee, Rupam; Garcia, Lucia; Lawes, Gavin; Nadgorny, Boris

    2014-03-01

    We have investigated the large dielectric enhancement at the percolation threshold by introducing metallic RuO2 grains into a matrix of CaCu3Ti4O12 (CCTO). The intrinsic response of the pure CCTO samples prepared by solid state and sol-gel processes results in a dielectric constant on the order of 10^4 and 10^3, respectively, with low loss. Scanning electron microscopy and energy dispersive x-ray spectroscopy indicate that a difference in the thickness of the copper-oxide-enriched grain boundary is the main reason for the different dielectric properties between these two samples. Introducing RuO2 metallic fillers in these CCTO samples yields a sharp increase of the dielectric constant at the percolation threshold fc, by a factor of 6 and 3 respectively. The temperature dependence of the dielectric constant shows that dipolar relaxation plays an important role in enhancing the dielectric constant in composite systems.

  15. Temperature metrology

    NASA Astrophysics Data System (ADS)

    Fischer, J.; Fellmuth, B.

    2005-05-01

    The majority of the processes used by the manufacturing industry depend upon the accurate measurement and control of temperature. Thermal metrology is also a key factor affecting the efficiency and environmental impact of many high-energy industrial processes, the development of innovative products and the health and safety of the general population. Applications range from the processing, storage and shipment of perishable foodstuffs and biological materials to the development of more efficient and less environmentally polluting combustion processes for steel-making. Accurate measurement and control of temperature is, for instance, also important in areas such as the characterization of new materials used in the automotive, aerospace and semiconductor industries. This paper reviews the current status of temperature metrology. It starts with the determination of thermodynamic temperatures required on principle because temperature is an intensive quantity. Methods to determine thermodynamic temperatures are reviewed in detail to introduce the underlying physical basis. As these methods cannot usually be applied for practical measurements the need for a practical temperature scale for day-to-day work is motivated. The International Temperature Scale of 1990 and the Provisional Low Temperature Scale PLTS-2000 are described as important parts of the International System of Units to support science and technology. Its main importance becomes obvious in connection with industrial development and international markets. Every country is strongly interested in unique measures, in order to guarantee quality, reproducibility and functionality of products. The eventual realization of an international system, however, is only possible within the well-functioning organization of metrological laboratories. In developed countries, government-established scientific institutes have certain metrological duties, as, for instance, the maintenance and dissemination of national units. For the base unit kelvin, this procedure is described in the sections on practical temperature scales, practical thermometry and reference standards. Testing experimentally the fundamental laws of physics means in practice the precise determination of the fundamental constants appearing in the laws. The essence of current activities is that prototypes, which may vary uncontrollably with time and location, are replaced by abstract experimental prescriptions that relate the units to the constants. This approach is shown for the definition of the kelvin and the Boltzmann constant. Dedicated to the occasion of the 60th birthday of Wolfgang Buck.

  16. Optimization of High-Throughput Sequencing Kinetics for determining enzymatic rate constants of thousands of RNA substrates

    PubMed Central

    Niland, Courtney N.; Jankowsky, Eckhard; Harris, Michael E.

    2016-01-01

    Quantification of the specificity of RNA binding proteins and RNA processing enzymes is essential to understanding their fundamental roles in biological processes. High Throughput Sequencing Kinetics (HTS-Kin) uses high throughput sequencing and internal competition kinetics to simultaneously monitor the processing rate constants of thousands of substrates by RNA processing enzymes. This technique has provided unprecedented insight into the substrate specificity of the tRNA processing endonuclease ribonuclease P. Here, we investigate the accuracy and robustness of measurements associated with each step of the HTS-Kin procedure. We examine the effect of substrate concentration on the observed rate constant, determine the optimal kinetic parameters, and provide guidelines for reducing error in amplification of the substrate population. Importantly, we find that high-throughput sequencing and experimental reproducibility contribute their own sources of error, and these are the main sources of imprecision in the quantified results when otherwise optimized guidelines are followed. PMID:27296633
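
    The internal-competition relation underlying HTS-Kin can be sketched as follows: substrates reacting in the same tube deplete as exponentials with a shared time dependence, so the ratio of the logarithms of their remaining fractions gives relative rate constants. The sketch assumes the overall fraction of substrate remaining is measured separately; the read counts are invented purely for illustration.

```python
import numpy as np

def relative_rate_constants(counts_t0, counts_t, total_fraction_remaining, ref_index=0):
    """Internal-competition kinetics: substrates reacting in one tube share the same
    time dependence, so ln(fraction of substrate i left) is proportional to k_i and
    k_i / k_ref = ln(f_i) / ln(f_ref). Fractions are reconstructed from read counts
    plus a separately measured overall fraction of substrate remaining."""
    c0 = np.asarray(counts_t0, dtype=float)
    ct = np.asarray(counts_t, dtype=float)
    frac = (ct / ct.sum()) / (c0 / c0.sum()) * total_fraction_remaining
    log_f = np.log(frac)
    return log_f / log_f[ref_index]

# Hypothetical read counts for four substrate variants, with half of the total
# substrate pool left unreacted at the sampled time point:
print(relative_rate_constants([1000, 1000, 1000, 1000], [400, 520, 300, 650], 0.5))
```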

  17. Modeling Sluggishness in Binaural Unmasking of Speech for Maskers With Time-Varying Interaural Phase Differences

    PubMed Central

    Brand, Thomas

    2018-01-01

    In studies investigating binaural processing in human listeners, relatively long and task-dependent time constants of a binaural window ranging from 10 ms to 250 ms have been observed. Such time constants are often thought to reflect “binaural sluggishness.” In this study, the effect of binaural sluggishness on binaural unmasking of speech in stationary speech-shaped noise is investigated in 10 listeners with normal hearing. In order to design a masking signal with temporally varying binaural cues, the interaural phase difference of the noise was modulated sinusoidally with frequencies ranging from 0.25 Hz to 64 Hz. The lowest, that is the best, speech reception thresholds (SRTs) were observed for the lowest modulation frequency. SRTs increased with increasing modulation frequency up to 4 Hz. For higher modulation frequencies, SRTs remained constant in the range of 1 dB to 1.5 dB below the SRT determined in the diotic situation. The outcome of the experiment was simulated using a short-term binaural speech intelligibility model, which combines an equalization–cancellation (EC) model with the speech intelligibility index. This model segments the incoming signal into 23.2-ms time frames in order to predict release from masking in modulated noises. In order to predict the results from this study, the model required a further time constant applied to the EC mechanism representing binaural sluggishness. The best agreement with perceptual data was achieved using a temporal window of 200 ms in the EC mechanism. PMID:29338577

  18. Modeling Sluggishness in Binaural Unmasking of Speech for Maskers With Time-Varying Interaural Phase Differences.

    PubMed

    Hauth, Christopher F; Brand, Thomas

    2018-01-01

    In studies investigating binaural processing in human listeners, relatively long and task-dependent time constants of a binaural window ranging from 10 ms to 250 ms have been observed. Such time constants are often thought to reflect "binaural sluggishness." In this study, the effect of binaural sluggishness on binaural unmasking of speech in stationary speech-shaped noise is investigated in 10 listeners with normal hearing. In order to design a masking signal with temporally varying binaural cues, the interaural phase difference of the noise was modulated sinusoidally with frequencies ranging from 0.25 Hz to 64 Hz. The lowest, that is the best, speech reception thresholds (SRTs) were observed for the lowest modulation frequency. SRTs increased with increasing modulation frequency up to 4 Hz. For higher modulation frequencies, SRTs remained constant in the range of 1 dB to 1.5 dB below the SRT determined in the diotic situation. The outcome of the experiment was simulated using a short-term binaural speech intelligibility model, which combines an equalization-cancellation (EC) model with the speech intelligibility index. This model segments the incoming signal into 23.2-ms time frames in order to predict release from masking in modulated noises. In order to predict the results from this study, the model required a further time constant applied to the EC mechanism representing binaural sluggishness. The best agreement with perceptual data was achieved using a temporal window of 200 ms in the EC mechanism.

  19. Working Memory and Aging: Separating the Effects of Content and Context

    PubMed Central

    Bopp, Kara L.; Verhaeghen, Paul

    2009-01-01

    In three experiments, we investigated the hypothesis that age-related differences in working memory might be due to the inability to bind content with context. Participants were required to find a repeating stimulus within a single series (no context memory required) or within multiple series (necessitating memory for context). Response time and accuracy were examined in two task domains: verbal and visuospatial. Binding content with context led to longer processing time and poorer accuracy in both age groups, even when working memory load was held constant. Although older adults were overall slower and less accurate than younger adults, the need for context memory did not differentially affect their performance. It is therefore unlikely that age differences in working memory are due to specific age-related problems with content-with-context binding. PMID:20025410

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krnjaic, Gordan

    In this letter, we quantify the challenge of explaining the baryon asymmetry using initial conditions in a universe that undergoes inflation. Contrary to lore, we find that such an explanation is possible if net B-L number is stored in a light bosonic field with hyper-Planckian initial displacement and a delicately chosen field velocity prior to inflation. However, such a construction may require extremely tuned coupling constants to ensure that this asymmetry is viably communicated to the Standard Model after reheating; the large field displacement required to overcome inflationary dilution must not induce masses for Standard Model particles or generate dangerous washout processes. While these features are inelegant, this counterexample nonetheless shows that there is no theorem against such an explanation. We also comment on potential observables in the double β-decay spectrum and on model variations that may allow for more natural realizations.

  1. A new model of arterial hemodynamics.

    PubMed

    Branzan, M; Sundri, G

    1983-01-01

    The determination of arterial blood flow parameters on the basis of ultrasound investigation requires a new hydrodynamic model of arterial circulation. Unlike previous research (Womersley, Bergel), which considered the arterial pressure or its gradients to be known, the present model uses the blood flow velocity and arterial radius, quantities easily obtained by ultrasound (Doppler effect). Processing these data requires a thorough analysis of the rheological characteristics of blood flow and of arterial wall behaviour (elastic deformability). It has been assumed that: a) blood is a homogeneous and isotropic fluid; b) the artery has cylindrical symmetry with a circular cross-section at any instant; c) the pressure in the artery cross-section is constant. Because arterial dynamics has an undulatory character, a Fourier analysis of the modified Navier-Stokes equations has been used. Finally, a simplified relation for blood pressure determination has been obtained.

  2. Development status of a preprototype water electrolysis subsystem

    NASA Technical Reports Server (NTRS)

    Martin, R. B.; Erickson, A. C.

    1981-01-01

    A preprototype water electrolysis subsystem was designed and fabricated for NASA's advanced regenerative life support program. A solid polymer is used for the cell electrolyte. The electrolysis module has 12 cells that can generate 5.5 kg/day of oxygen for the metabolic requirements of three crewmembers, for cabin leakage, and for the oxygen and hydrogen required for carbon dioxide collection and reduction processes. The subsystem can be operated at a pressure between 276 and 2760 kN/sq m and in a continuous constant-current, cyclic, or standby mode. A microprocessor is used to aid in operating the subsystem. Sensors and controls provide fault detection and automatic shutdown. The results of development, demonstration, and parametric testing are presented. Modifications to enhance operation in an integrated and manned test are described. Prospective improvements for the electrolysis subsystem are discussed.

  3. The Rules and Functions of Nucleocytoplasmic Shuttling Proteins.

    PubMed

    Fu, Xuekun; Liang, Chao; Li, Fangfei; Wang, Luyao; Wu, Xiaoqiu; Lu, Aiping; Xiao, Guozhi; Zhang, Ge

    2018-05-12

    Biological macromolecules are the basis of life activities. Interestingly, DNA replication and RNA biogenesis are spatially separated from protein synthesis: the former occur in the cell nucleus, while the latter takes place in the cytoplasm. This separation requires proteins to be transported across the nuclear envelope to realize a variety of biological functions. Nucleocytoplasmic transport of proteins, including import to the nucleus and export to the cytoplasm, is a complicated process that requires the involvement and interaction of many proteins. In recent years, many studies have found that proteins constantly shuttle between the cytoplasm and the nucleus. These shuttling proteins play a crucial role as transport carriers and signal transduction regulators within cells. In this review, we describe the mechanism of nucleocytoplasmic transport of shuttling proteins and summarize some important disease-related shuttling proteins.

  4. QCD Axion Dark Matter with a Small Decay Constant.

    PubMed

    Co, Raymond T; Hall, Lawrence J; Harigaya, Keisuke

    2018-05-25

    The QCD axion is a good dark matter candidate. The observed dark matter abundance can arise from misalignment or defect mechanisms, which generically require an axion decay constant f_{a}∼O(10^{11})  GeV (or higher). We introduce a new cosmological origin for axion dark matter, parametric resonance from oscillations of the Peccei-Quinn symmetry breaking field, that requires f_{a}∼(10^{8}-10^{11})  GeV. The axions may be warm enough to give deviations from cold dark matter in large scale structure.

  5. Beam shuttering interferometer and method

    DOEpatents

    Deason, V.A.; Lassahn, G.D.

    1993-07-27

    A method and apparatus resulting in the simplification of phase shifting interferometry by eliminating the requirement to know the phase shift between interferograms or to keep the phase shift between interferograms constant. The present invention provides a simple, inexpensive means to shutter each independent beam of the interferometer in order to facilitate the data acquisition requirements for optical interferometry and phase shifting interferometry. By eliminating the requirement to know the phase shift between interferograms or to keep the phase shift constant, a simple, economical means and apparatus for performing the technique of phase shifting interferometry is provided which, by thermally expanding a fiber-optic cable, changes the optical path distance of one incident beam relative to another.

  6. Beam shuttering interferometer and method

    DOEpatents

    Deason, Vance A.; Lassahn, Gordon D.

    1993-01-01

    A method and apparatus resulting in the simplification of phase shifting interferometry by eliminating the requirement to know the phase shift between interferograms or to keep the phase shift between interferograms constant. The present invention provides a simple, inexpensive means to shutter each independent beam of the interferometer in order to facilitate the data acquisition requirements for optical interferometry and phase shifting interferometry. By eliminating the requirement to know the phase shift between interferograms or to keep the phase shift constant, a simple, economical means and apparatus for performing the technique of phase shifting interferometry is provided which, by thermally expanding a fiber-optic cable, changes the optical path distance of one incident beam relative to another.

  7. Ultrasonic determination of the elastic constants of the stiffness matrix for unidirectional fiberglass epoxy composites

    NASA Technical Reports Server (NTRS)

    Marques, E. R. C.; Williams, J. H., Jr.

    1986-01-01

    The elastic constants of a fiberglass epoxy unidirectional composite are determined by measuring the phase velocities of longitudinal and shear stress waves via the through transmission ultrasonic technique. The waves introduced into the composite specimens were generated by piezoceramic transducers. Geometric lengths and the times required to travel those lengths were used to calculate the phase velocities. The model of the transversely isotropic medium was adopted to relate the velocities and elastic constants.
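
    The basic relation used in such measurements is that, for propagation along a principal material direction, the relevant diagonal stiffness equals the density times the squared phase velocity, with the velocity itself obtained as path length divided by transit time. The density and velocities below are illustrative guesses, not the report's data.

```python
def stiffness_from_velocity(density_kg_m3, velocity_m_s):
    """Along a principal material direction, the relevant diagonal stiffness is
    C = rho * v**2 (longitudinal wave -> C11 or C33, shear wave -> C44 or C66)."""
    return density_kg_m3 * velocity_m_s**2

# Hypothetical glass/epoxy values: density 1900 kg/m^3, axial longitudinal wave
# 4500 m/s, shear wave 1700 m/s (illustrative numbers, not from the report).
rho = 1900.0
print(stiffness_from_velocity(rho, 4500.0) / 1e9)   # ~38 GPa, axial longitudinal stiffness
print(stiffness_from_velocity(rho, 1700.0) / 1e9)   # ~5.5 GPa, shear stiffness
```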

  8. Colossal dielectric constant up to gigahertz at room temperature

    NASA Astrophysics Data System (ADS)

    Krohns, S.; Lunkenheimer, P.; Kant, Ch.; Pronin, A. V.; Brom, H. B.; Nugroho, A. A.; Diantoro, M.; Loidl, A.

    2009-03-01

    The applicability of recently discovered materials with extremely high ("colossal") dielectric constants, required for future electronics, suffers from the fact that their dielectric constant ɛ' is huge only in a limited frequency range, below about 1 MHz. In the present report, we show that the dielectric properties of a charge-ordered nickelate, La15/8Sr1/8NiO4, surpass those of other materials. In particular, ɛ' retains its colossal magnitude of >10 000 well into the gigahertz range.

  9. A Theoretical Approach to the Calculation of Annealed Impurity Profiles of Ion Implanted Boron into Silicon.

    DTIC Science & Technology

    1977-06-01

    determined experimentally) and the distribution of energy deposited into nuclear processes by the boron ions. Damage is a product of this energy distribution ... energy deposited into nuclear processes, k is a constant adjusted to produce the total number of vacancies calculated in Fig. 11, and ... profile computed from the energy deposited into nuclear processes ... time constant for the release of vacancies from vacancy clusters toward equilibrium.

  10. Experimental determination of the effective strong coupling constant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexandre Deur; Volker Burkert; Jian-Ping Chen

    2007-07-01

    We extract an effective strong coupling constant from low Q^2 data on the Bjorken sum. Using sum rules, we establish its Q^2-behavior over the complete Q^2-range. The result is compared to effective coupling constants extracted from different processes and to calculations based on Schwinger-Dyson equations, hadron spectroscopy or lattice QCD. Although the connection between the experimentally extracted effective coupling constant and the calculations is not clear, the results agree surprisingly well.

  11. Development of an Experimental Data Base to Validate Compressor-Face Boundary Conditions Used in Unsteady Inlet Flow Computations

    NASA Technical Reports Server (NTRS)

    Sajben, Miklos; Freund, Donald D.

    1998-01-01

    The ability to predict the dynamics of integrated inlet/compressor systems is an important part of designing high-speed propulsion systems. The boundaries of the performance envelope are often defined by undesirable transient phenomena in the inlet (unstart, buzz, etc.) in response to disturbances originating either in the engine or in the atmosphere. Stability margins used to compensate for the inability to accurately predict such processes lead to weight and performance penalties, which translate into a reduction in vehicle range. The prediction of transients in an inlet/compressor system requires either the coupling of two complex, unsteady codes (one for the inlet and one for the engine) or else a reliable characterization of the inlet/compressor interface, by specifying a boundary condition. In the context of engineering development programs, only the second option is viable economically. Computations of unsteady inlet flows invariably rely on simple compressor-face boundary conditions (CFBCs). Currently, customary conditions include choked flow, constant static pressure, constant axial velocity, constant Mach number or constant mass flow per unit area. These conditions are straightforward extensions of practices that are valid for and work well with steady inlet flows. Unfortunately, it is not at all likely that any flow property would stay constant during a complex system transient. At the start of this effort, no experimental observation existed that could be used to formulate or verify any of the CFBCs. This lack of hard information represented a risk for a development program that has been recognized to be unacceptably large. The goal of the present effort was to generate such data. Disturbances reaching the compressor face in flight may have complex spatial structures and temporal histories. Small amplitude disturbances may be decomposed into acoustic, vorticity and entropy contributions that are uncoupled if the undisturbed flow is uniform. This study is focused on the response of an inlet/compressor system to acoustic disturbances. From the viewpoint of inlet computations, acoustic disturbances are clearly the most important, since they are the only ones capable of moving upstream. Convective and entropy disturbances may also produce upstream-moving acoustic waves, but such processes are outside the scope of the present study.

  12. Advanced collapsible tank for liquid containment

    NASA Technical Reports Server (NTRS)

    Flanagan, David T.; Hopkins, Robert C.

    1993-01-01

    Tanks for bulk liquid containment will be required to support advanced planetary exploration programs. Potential applications include storage of potable, process, and waste water, and fuels and process chemicals. The launch mass and volume penalties inherent in rigid tanks suggest that collapsible tanks may be more efficient. Collapsible tanks are made of lightweight flexible material and can be folded compactly for storage and transport. Although collapsible tanks for terrestrial use are widely available, a new design was developed that has significantly less mass and bulk than existing models. Modelled after the shape of a sessile drop, this design features a dual membrane with a nearly uniform stress distribution and a low surface-to-volume ratio. It can be adapted to store a variety of liquids in nearly any environment with a constant acceleration field. Three models of 10L, 50L, and 378L capacity have been constructed and tested. The 378L (100 gallon) model weighed less than 10 percent of a commercially available collapsible tank of equivalent capacity, and required less than 20 percent of the storage space when folded for transport.

  13. Cost and Precision of Brownian Clocks

    NASA Astrophysics Data System (ADS)

    Barato, Andre C.; Seifert, Udo

    2016-10-01

    Brownian clocks are biomolecular networks that can count time. A paradigmatic example is proteins that go through a cycle, thus regulating some oscillatory behavior in a living system. Typically, such a cycle requires free energy often provided by ATP hydrolysis. We investigate the relation between the precision of such a clock and its thermodynamic costs. For clocks driven by a constant thermodynamic force, a given precision requires a minimal cost that diverges as the uncertainty of the clock vanishes. In marked contrast, we show that a clock driven by a periodic variation of an external protocol can achieve arbitrary precision at arbitrarily low cost. This result constitutes a fundamental difference between processes driven by a fixed thermodynamic force and those driven periodically. As a main technical tool, we map a periodically driven system with a deterministic protocol to one subject to an external protocol that changes in stochastic time intervals, which simplifies calculations significantly. In the nonequilibrium steady state of the resulting bipartite Markov process, the uncertainty of the clock can be deduced from the calculable dispersion of a corresponding current.

  14. STRESS AND FAILURE ANALYSIS OF RAPIDLY ROTATING ASTEROID (29075) 1950 DA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirabayashi, Masatoshi; Scheeres, Daniel J., E-mail: masatoshi.hirabayashi@colorado.edu

    Rozitis et al. recently reported that near-Earth asteroid (29075) 1950 DA, whose bulk density ranges from 1.0 g cm^-3 to 2.4 g cm^-3, is a rubble pile and requires a cohesive strength of at least 44-76 Pa to keep from failing due to its fast spin period. Since their technique for giving failure conditions required the averaged stress over the whole volume, it discarded information about the asteroid's failure mode and internal stress condition. This paper develops a finite element model and revisits the stress and failure analysis of 1950 DA. For the modeling, we do not consider material hardening and softening. Under the assumption of an associated flow rule and uniform material distribution, we identify the deformation process of 1950 DA when its constant cohesion reaches the lowest value that keeps its current shape. The results show that to avoid structural failure the internal core requires a cohesive strength of at least 75-85 Pa. It suggests that for the failure mode of this body, the internal core first fails structurally, followed by the surface region. This implies that if cohesion is constant over the whole volume, the equatorial ridge of 1950 DA results from a material flow going outward along the equatorial plane in the internal core, but not from a landslide as has been hypothesized. This has additional implications for the likely density of the interior of the body.

  15. Questions on universal constants and four-dimensional symmetry from a broad viewpoint. I

    NASA Technical Reports Server (NTRS)

    Hsu, J. P.

    1983-01-01

    It is demonstrated that there is a flexibility in clock synchronizations and that four-dimensional symmetry framework can be viewed broadly. The true universality of basic constants is discussed, considering a class of measurement processes based on the velocity = distance/time interval, which always yields some number when used by an observer. The four-dimensional symmetry framework based on common time for all observers is formulated, and related processes of measuring light speed are discussed. Invariant 'action functions' for physical laws in the new four-dimensional symmetry framework with the common time are established to discuss universal constants. Truly universal constants are demonstrated, and it is shown that physics in this new framework and in special relativity are equivalent as far as one-particle systems and the S-matrix in field theories are concerned.

  16. Load positioning system with gravity compensation

    NASA Technical Reports Server (NTRS)

    Hollow, R. H.

    1984-01-01

    A load positioning system with gravity compensation has a servomotor, position-sensing feedback potentiometer and velocity-sensing tachometer in a conventional closed-loop servo arrangement to cause a lead screw and a ball nut to vertically position a load. Gravity-compensating components comprise a DC motor, gears that couple torque from the motor to the lead screw, and a constant-current power supply. The constant weight of the load, applied to the lead screw via the ball nut, tends to cause the lead screw to rotate; this constant torque is opposed by the constant torque produced by the motor when fed from the constant-current source. The constant current is preset as required by the potentiometer to effect equilibration of the load, which thereby enables the positioning servomotor to see the load as weightless under both static and dynamic conditions. Positioning acceleration and velocity performance are therefore symmetrical.
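
    A rough sketch of the equilibration condition, assuming an ideal (frictionless) lead screw: the back-driving torque produced by the load weight is load x lead / (2π), and the compensating motor torque is the torque constant times the preset current times the gear ratio; equating the two gives the required constant current. All numbers below are hypothetical.

```python
import math

def compensation_current(load_n, screw_lead_m, gear_ratio, kt_nm_per_a, efficiency=1.0):
    """Constant current needed so that motor torque balances the back-driving torque
    the load weight exerts through the ball nut and lead screw (friction ignored
    unless folded into `efficiency`)."""
    back_drive_torque = load_n * screw_lead_m / (2.0 * math.pi) * efficiency
    return back_drive_torque / (gear_ratio * kt_nm_per_a)

# Hypothetical numbers: 500 N load, 10 mm lead, 5:1 gearing, Kt = 0.05 N*m/A.
print(compensation_current(500.0, 0.010, 5.0, 0.05))   # ~3.2 A
```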

  17. General relativity with small cosmological constant from spontaneous compactification of Lovelock theory in vacuum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Canfora, Fabrizio; Willison, Steven; Giacomini, Alex

    2009-08-15

    It is shown that Einstein gravity in four dimensions with small cosmological constant and small extra dimensions can be obtained by spontaneous compactification of Lovelock gravity in vacuum. Assuming that the extra dimensions are compact spaces of constant curvature, general relativity is recovered within a certain class of Lovelock theories possessing necessarily cubic or higher order terms in curvature. This bounds the higher dimension to at least 7. Remarkably, the effective gauge coupling and Newton constant in four dimensions are not proportional to the gravitational constant in higher dimensions, but are shifted with respect to their standard values. This effect opens up new scenarios where a maximally symmetric solution in higher dimensions could decay into the compactified spacetime either by tunneling or through a gravitational analog of ghost condensation. Indeed, this is what occurs requiring both the extra dimensions and the four-dimensional cosmological constant to be small.

  18. Inflation with a constant rate of roll

    NASA Astrophysics Data System (ADS)

    Motohashi, Hayato; Starobinsky, Alexei A.; Yokoyama, Jun'ichi

    2015-09-01

    We consider an inflationary scenario where the rate of inflaton roll defined by \ddot{\phi}/(H\dot{\phi}) remains constant. The rate of roll is small for slow-roll inflation, while a generic rate of roll leads to the interesting case of 'constant-roll' inflation. We find a general exact solution for the inflaton potential required for such inflaton behaviour. In this model, due to non-slow evolution of background, the would-be decaying mode of linear scalar (curvature) perturbations may not be neglected. It can even grow for some values of the model parameter, while the other mode always remains constant. However, this always occurs for unstable solutions which are not attractors for the given potential. The most interesting particular cases of constant-roll inflation remaining viable with the most recent observational data are quadratic hilltop inflation (with cutoff) and natural inflation (with an additional negative cosmological constant). In these cases even-order slow-roll parameters approach non-negligible constants while the odd ones are asymptotically vanishing in the quasi-de Sitter regime.

  19. Utilization of methanol for polymer electrolyte fuel cells in mobile systems

    NASA Astrophysics Data System (ADS)

    Schmidt, V. M.; Brockerhoff, P.; Hohlein, B.; Menzer, R.; Stimming, U.

    1994-04-01

    The constantly growing volume of road traffic requires the introduction of new vehicle propulsion systems with higher efficiency and drastically reduced emission rates. As part of the fuel cell programme of the Research Centre Julich, a vehicle propulsion system with methanol as the secondary energy carrier and a polymer electrolyte membrane fuel cell (PEMFC) as the main component for energy conversion is being developed. The fuel gas is produced by a heterogeneously catalyzed steam reforming reaction in which methanol is converted to H2, CO and CO2. The required energy is provided by the catalytic conversion of methanol, both for heating up the system and for reforming the methanol. The high CO content of the fuel gas requires further processing of the gas or the development of new electrocatalysts for the anode. Various Pt-Ru alloys show promising behaviour as CO-tolerant anodes. The entire fuel cell system is discussed in terms of energy and emission balances. The development of important components is described and experimental results are discussed.

  20. Algorithm of dynamic regulation of a system of duct, for a high accuracy climatic system

    NASA Astrophysics Data System (ADS)

    Arbatskiy, A. A.; Afonina, G. N.; Glazov, V. S.

    2017-11-01

    Currently, most climatic systems operate in their design (projected) mode only. At the same time, many modern industrial sites require constant or periodic changes in the technological process, so for roughly 80% of the time the site does not need the ventilation system at its design mode, while high precision of the climatic parameters must still be maintained. When climatic systems that serve several rooms in parallel are not in constant use, the duct system becomes unbalanced. To address this problem, an algorithm for quantity (flow-rate) regulation requiring minimal changes was created. Dynamic duct system: a parallel control system for air balance with high precision of climatic parameters was developed. The algorithm maintains a constant pressure in the main duct at different air flows, so the terminal devices have only one regulation parameter, the flap opening area. The precision of regulation increases and the climatic system maintains tight tolerances on temperature and humidity (0.5 C for temperature, 5% for relative humidity). Result: the research was performed in the CFD code PHOENICS. Results for air velocity and air pressure in the duct for different operating modes were obtained, and an equation for air-valve positions for different room climate parameters was derived. The energy-saving potential of the dynamic duct system for different types of rooms was calculated.

  1. 40 CFR 91.421 - Dilute gaseous exhaust sampling and analytical system description.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Pump—Constant Volume Sampler (PDP-CVS) system with a heat exchanger, or a Critical Flow Venturi—Constant Volume Sampler (CFV-CVS) system with CVS sample probes and/or a heat exchanger or electronic flow... sampling point. (ii) For the CFV-CVS, either a heat exchanger or electronic flow compensation is required...

  2. 40 CFR 91.421 - Dilute gaseous exhaust sampling and analytical system description.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Pump—Constant Volume Sampler (PDP-CVS) system with a heat exchanger, or a Critical Flow Venturi—Constant Volume Sampler (CFV-CVS) system with CVS sample probes and/or a heat exchanger or electronic flow... sampling point. (ii) For the CFV-CVS, either a heat exchanger or electronic flow compensation is required...

  3. 40 CFR 91.421 - Dilute gaseous exhaust sampling and analytical system description.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Pump—Constant Volume Sampler (PDP-CVS) system with a heat exchanger, or a Critical Flow Venturi—Constant Volume Sampler (CFV-CVS) system with CVS sample probes and/or a heat exchanger or electronic flow... sampling point. (ii) For the CFV-CVS, either a heat exchanger or electronic flow compensation is required...

  4. Epoxidation with Possibilities: Discovering Stereochemistry in Organic Chemistry via Coupling Constants

    ERIC Educational Resources Information Center

    Treadwell, Edward M.; Yan, Zhiqing; Xiao, Xiao

    2017-01-01

    A one-day laboratory epoxidation experiment, requiring no purification, is described, wherein the students are given an "unknown" stereoisomer of 3-hexen-1-ol, and use [superscript 1]H NMR coupling constants to determine the stereochemistry of their product. From this they work backward to determine the stereochemistry of their starting…

  5. Single-Specimen Technique to Establish the J-Resistance of Linear Viscoelastic Solids with Constant Poisson's Ratio

    NASA Technical Reports Server (NTRS)

    Gutierrez-Lemini, Danton; McCool, Alex (Technical Monitor)

    2001-01-01

    A method is developed to establish the J-resistance function for an isotropic linear viscoelastic solid of constant Poisson's ratio using the single-specimen technique with constant-rate test data. The method is based on the fact that, for a test specimen of fixed crack size under constant rate, the initiation J-integral may be established from the crack size itself, the actual external load and load-point displacement at growth initiation, and the relaxation modulus of the viscoelastic solid, without knowledge of the complete test record. Since, of the required data, only the crack size would be unknown at each point of the load versus load-point displacement curve of a single-specimen test, an expression is derived to estimate it. With it, the physical J-integral at each point of the test record may be established. Because it is based on single-specimen testing, the method not only avoids the use of multiple specimens with differing initial crack sizes but also avoids the need to track crack growth.

  6. Data Quality Objectives for Regulatory Requirements for Hazardous and Radioactive Air Emissions Sampling and Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MULKEY, C.H.

    1999-07-06

    This document describes the results of the data quality objective (DQO) process undertaken to define data needs for state and federal requirements associated with toxic, hazardous, and/or radiological air emissions under the jurisdiction of the River Protection Project (RPP). Hereafter, this document is referred to as the Air DQO. The primary drivers for characterization under this DQO are the regulatory requirements pursuant to Washington State regulations that may require sampling and analysis. The federal regulations concerning air emissions are incorporated into the Washington State regulations. Data needs exist for nonradioactive and radioactive waste constituents and characteristics as identified through the DQO process described in this document. The purpose is to identify current data needs for complying with regulatory drivers for the measurement of air emissions from RPP facilities in support of air permitting. These drivers include best management practices; similar analyses may have more than one regulatory driver. This document should not be used for determining overall compliance with regulations because the regulations are in constant change, and this document may not reflect the latest regulatory requirements. Regulatory requirements are also expected to change as various permits are issued. Data needs require samples for both radionuclides and nonradionuclide analytes of air emissions from tanks and stored waste containers. The collection of data is to support environmental permitting and compliance, not health and safety issues. This document does not address health or safety regulations or requirements (those of the Occupational Safety and Health Administration or the National Institute of Occupational Safety and Health) or continuous emission monitoring systems. This DQO is applicable to all equipment, facilities, and operations under the jurisdiction of RPP that emit or have the potential to emit regulated air pollutants.

  7. Linear Transformation of Electromagnetic Wave Beams of the Electron-Cyclotron Range in Toroidal Magnetic Configurations

    NASA Astrophysics Data System (ADS)

    Khusainov, T. A.; Shalashov, A. G.; Gospodchikov, E. D.

    2018-05-01

    The field structure of quasi-optical wave beams tunneled through the evanescence region in the vicinity of the plasma cutoff in a nonuniform magnetoactive plasma is analyzed. This problem is traditionally associated with the process of linear transformation of ordinary and extraordinary waves. An approximate analytical solution is constructed for a rather general magnetic configuration applicable to spherical tokamaks, optimized stellarators, and other magnetic confinement systems with a constant plasma density on magnetic surfaces. A general technique for calculating the transformation coefficient of a finite-aperture wave beam is proposed, and the physical conditions required for the most efficient transformation are analyzed.

  8. Theoretical study of fabrication of line-and-space patterns with 7 nm quarter-pitch using electron beam lithography with chemically amplified resist process: III. Post exposure baking on quartz substrates

    NASA Astrophysics Data System (ADS)

    Kozawa, Takahiro

    2015-09-01

    Electron beam (EB) lithography is a key technology for the fabrication of photomasks for ArF immersion and extreme ultraviolet (EUV) lithography and molds for nanoimprint lithography. In this study, the temporal change in the chemical gradient of line-and-space patterns with a 7 nm quarter-pitch (7 nm space width and 21 nm line width) was calculated until it became constant, independently of postexposure baking (PEB) time, to clarify the feasibility of single nano patterning on quartz substrates using EB lithography with chemically amplified resist processes. When the quencher diffusion constant is the same as the acid diffusion constant, the maximum chemical gradient of the line-and-space pattern with a 7 nm quarter-pitch did not differ much from that with a 14 nm half-pitch under the condition described above. Also, from the viewpoint of process control, a low quencher diffusion constant is considered to be preferable for the fabrication of line-and-space patterns with a 7 nm quarter-pitch on quartz substrates.

  9. Process design of press hardening with gradient material property influence

    NASA Astrophysics Data System (ADS)

    Neugebauer, R.; Schieck, F.; Rautenstrauch, A.

    2011-05-01

    Press hardening is currently used in the production of automotive structures that require very high strength and controlled deformation during crash tests. Press hardening can achieve significant reductions of sheet thickness at constant strength and is therefore a promising technology for the production of lightweight and energy-efficient automobiles. The manganese-boron steel 22MnB5 has been implemented in sheet press hardening owing to its excellent hot formability, high hardenability, and good temperability even at low cooling rates. However, press-hardened components have shown poor ductility and cracking at relatively small strains. A possible solution to this problem is a selective increase of steel sheet ductility by press hardening process design in areas where the component is required to deform plastically during crash tests. To this end, process designers require information about microstructure and mechanical properties as a function of the wide spectrum of cooling rates and sequences and austenitizing treatment conditions that can be encountered in production environments. In the present work, a Continuous Cooling Transformation (CCT) diagram with corresponding material properties of sheet steel 22MnB5 was determined for a wide spectrum of cooling rates. Heating and cooling programs were conducted in a quenching dilatometer. Motivated by the importance of residual elasticity in crash test performance, this property was measured using a micro-bending test and the results were integrated into the CCT diagrams to complement the hardness testing results. This information is essential for the process design of press hardening of sheet components with gradient material properties.

  10. Process design of press hardening with gradient material property influence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neugebauer, R.; Professorship for Machine Tools and Forming Technology, TU Chemnitz; Schieck, F.

    Press hardening is currently used in the production of automotive structures that require very high strength and controlled deformation during crash tests. Press hardening can achieve significant reductions of sheet thickness at constant strength and is therefore a promising technology for the production of lightweight and energy-efficient automobiles. The manganese-boron steel 22MnB5 has been implemented in sheet press hardening owing to its excellent hot formability, high hardenability, and good temperability even at low cooling rates. However, press-hardened components have shown poor ductility and cracking at relatively small strains. A possible solution to this problem is a selective increase of steel sheet ductility by press hardening process design in areas where the component is required to deform plastically during crash tests. To this end, process designers require information about microstructure and mechanical properties as a function of the wide spectrum of cooling rates and sequences and austenitizing treatment conditions that can be encountered in production environments. In the present work, a Continuous Cooling Transformation (CCT) diagram with corresponding material properties of sheet steel 22MnB5 was determined for a wide spectrum of cooling rates. Heating and cooling programs were conducted in a quenching dilatometer. Motivated by the importance of residual elasticity in crash test performance, this property was measured using a micro-bending test and the results were integrated into the CCT diagrams to complement the hardness testing results. This information is essential for the process design of press hardening of sheet components with gradient material properties.

  11. A multi-faceted approach to promote knowledge translation platforms in eastern Mediterranean countries: climate for evidence-informed policy.

    PubMed

    El-Jardali, Fadi; Ataya, Nour; Jamal, Diana; Jaafar, Maha

    2012-05-06

    Limited work has been done to promote knowledge translation (KT) in the Eastern Mediterranean Region (EMR). The objectives of this study are to: 1.assess the climate for evidence use in policy; 2.explore views and practices about current processes and weaknesses of health policymaking; 3.identify priorities including short-term requirements for policy briefs; and 4.identify country-specific requirements for establishing KT platforms. Senior policymakers, stakeholders and researchers from Algeria, Bahrain, Egypt, Iran, Jordan, Lebanon, Oman, Sudan, Syria, Tunisia, and Yemen participated in this study. Questionnaires were used to assess the climate for use of evidence and identify windows of opportunity and requirements for policy briefs and for establishing KT platforms. Current processes and weaknesses of policymaking were appraised using case study scenarios. Closed-ended questions were analyzed descriptively. Qualitative data was analyzed using thematic analysis. KT activities were not frequently undertaken by policymakers and researchers in EMR countries, research evidence about high priority policy issues was rarely made available, and interaction between policymakers and researchers was limited, and policymakers rarely identified or created places for utilizing research evidence in decision-making processes. Findings emphasized the complexity of policymaking. Donors, political regimes, economic goals and outdated laws were identified as key drivers. Lack of policymakers' abilities to think strategically, constant need to make quick decisions, limited financial resources, and lack of competent and trained human resources were suggested as main weaknesses. Despite the complexity of policymaking processes in countries from this region, the absence of a structured process for decision making, and the limited engagement of policymakers and researchers in KT activities, there are windows of opportunity for moving towards more evidence informed policymaking.

  12. A multi-faceted approach to promote knowledge translation platforms in eastern Mediterranean countries: climate for evidence-informed policy

    PubMed Central

    2012-01-01

    Objectives Limited work has been done to promote knowledge translation (KT) in the Eastern Mediterranean Region (EMR). The objectives of this study are to: 1.assess the climate for evidence use in policy; 2.explore views and practices about current processes and weaknesses of health policymaking; 3.identify priorities including short-term requirements for policy briefs; and 4.identify country-specific requirements for establishing KT platforms. Methods Senior policymakers, stakeholders and researchers from Algeria, Bahrain, Egypt, Iran, Jordan, Lebanon, Oman, Sudan, Syria, Tunisia, and Yemen participated in this study. Questionnaires were used to assess the climate for use of evidence and identify windows of opportunity and requirements for policy briefs and for establishing KT platforms. Current processes and weaknesses of policymaking were appraised using case study scenarios. Closed-ended questions were analyzed descriptively. Qualitative data was analyzed using thematic analysis. Results KT activities were not frequently undertaken by policymakers and researchers in EMR countries, research evidence about high priority policy issues was rarely made available, and interaction between policymakers and researchers was limited, and policymakers rarely identified or created places for utilizing research evidence in decision-making processes. Findings emphasized the complexity of policymaking. Donors, political regimes, economic goals and outdated laws were identified as key drivers. Lack of policymakers’ abilities to think strategically, constant need to make quick decisions, limited financial resources, and lack of competent and trained human resources were suggested as main weaknesses. Conclusion Despite the complexity of policymaking processes in countries from this region, the absence of a structured process for decision making, and the limited engagement of policymakers and researchers in KT activities, there are windows of opportunity for moving towards more evidence informed policymaking. PMID:22559007

  13. [Development and innovation of traditional Chinese medicine processing discipline and Chinese herbal pieces industry].

    PubMed

    Xiao, Yong-Qing; Li, Li; Liu, Ying; Ma, Yin-Lian; Yu, Ding-Rong

    2016-01-01

    To elucidate the key issues in the development and innovation of the traditional Chinese medicine processing discipline and the Chinese herbal pieces industry. Based on the author's experience accumulated over the years and the demands of the development of the Chinese herbal pieces industry, the key issues in the development and innovation of the Chinese herbal pieces industry were summarized. According to the author, the traditional Chinese medicine processing discipline should focus on application-oriented basic research. The development of this discipline should be closely related to the development of Chinese herbal pieces. The traditional Chinese medicine processing discipline can be improved and its results translated only if the discipline is correlated with the Chinese herbal pieces industry, matched with the development of that industry, and addresses the problems arising in its development. The development of the traditional Chinese medicine processing discipline and the Chinese herbal pieces industry also requires scientific researchers to make constant innovations, pursue specialized research, and innovate on the basis of inheritance. Copyright© by the Chinese Pharmaceutical Association.

  14. Effects of superheated steam on Geobacillus stearothermophilus spore viability.

    PubMed

    Head, D S; Cenkowski, S; Holley, R; Blank, G

    2008-04-01

    To examine the effect of processing with superheated steam (SS) on Geobacillus stearothermophilus ATCC 10149 spores. Two inoculum levels of spores of G. stearothermophilus were mixed with sterile sand and exposed to SS at 105-175 degrees C. The decimal reduction time (D-value) and the thermal resistance constant (z-value) were calculated. The effect of cooling of spores between periods of exposure to SS was also examined. A mean z-value of 25.4 degrees C was calculated for both inoculum levels for SS processing temperatures between 130 degrees C and 175 degrees C. Spore response to SS treatment depends on inoculum size. SS treatment may be effective for reduction in viability of thermally resistant bacterial spores provided treatments are separated by intermittent cooling periods. There is a need for technologies that require short thermal processing times to eliminate bacterial spores in foods. The SS processing technique has the potential to reduce microbial load and to modify food texture with less energy in comparison to commonly used hot air treatment. This work provides information on the effect of SS processing parameters on the viability of G. stearothermophilus spores.
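
    As an illustration of how D- and z-values of the kind reported above are commonly obtained, the sketch below fits a log-linear survivor curve at each temperature to get D (the time for a 1-log10 reduction) and then fits log10(D) against temperature to get z. The survivor counts are invented example data, not measurements from this study.

```python
# Illustrative D-value and z-value calculation from thermal inactivation data.
# All numbers below are made-up example data, not results from the cited study.
import numpy as np

def d_value(times_min, survivors_cfu):
    """D = time for a 1-log10 reduction: -1/slope of log10(N) vs time."""
    slope, _ = np.polyfit(times_min, np.log10(survivors_cfu), 1)
    return -1.0 / slope

# Hypothetical survivor curves at three superheated-steam temperatures [C].
data = {
    130.0: ([0, 2, 4, 6], [1e6, 3e5, 9e4, 2.5e4]),
    150.0: ([0, 1, 2, 3], [1e6, 1.5e5, 2.5e4, 4e3]),
    175.0: ([0, 0.5, 1.0, 1.5], [1e6, 6e4, 4e3, 2.5e2]),
}

temps = sorted(data)
d_values = [d_value(*data[t]) for t in temps]

# z = temperature increase needed for a 1-log10 drop in D:
# -1/slope of log10(D) vs temperature.
slope, _ = np.polyfit(temps, np.log10(d_values), 1)
z_value = -1.0 / slope

for t, d in zip(temps, d_values):
    print(f"D({t:.0f} C) = {d:.2f} min")
print(f"z = {z_value:.1f} C")
```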

  15. Design of single-winding energy-storage reactors for dc-to-dc converters using air-gapped magnetic-core structures

    NASA Technical Reports Server (NTRS)

    Ohri, A. K.; Wilson, T. G.; Owen, H. A., Jr.

    1977-01-01

    A procedure is presented for designing air-gapped energy-storage reactors for nine different dc-to-dc converters resulting from combinations of three single-winding power stages for voltage stepup, current stepup, and voltage stepup/current stepup and three controllers with control laws that impose constant-frequency, constant transistor on-time, and constant transistor off-time operation. The analysis, based on the energy-transfer requirement of the reactor, leads to a simple relationship for the required minimum volume of the air gap. Determination of this minimum air gap volume then permits the selection of either an air gap or a cross-sectional core area. Once one parameter is chosen, the minimum value of the other immediately leads to selection of the physical magnetic structure. Other analytically derived equations are used to obtain values for the required turns, the inductance, and the maximum rms winding current. The design procedure is applicable to a wide range of magnetic material characteristics and physical configurations for the air-gapped magnetic structure.
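
    The "simple relationship for the required minimum volume of the air gap" can be motivated by a standard energy argument, sketched below under the assumption that essentially all of the stored energy resides in the gap; the exact expression used in the paper may differ in its details.

```latex
% Peak energy the reactor must store:
W_{pk} = \tfrac{1}{2} L I_{pk}^{2}
% Energy density available in the air gap at the maximum allowed flux density:
w_{gap} = \frac{B_{max}^{2}}{2\mu_{0}}
% Hence a lower bound on the gap volume:
V_{gap} \;\ge\; \frac{W_{pk}}{w_{gap}} \;=\; \frac{\mu_{0} L I_{pk}^{2}}{B_{max}^{2}}
```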

  16. Si Lattice, Avogadro Constant, and X- and Gamma-Ray Measurements: Contributions by R.D. Deslattes

    NASA Astrophysics Data System (ADS)

    Kessler, Jr.

    2002-04-01

    The achievement of x-ray interferometry in 1965 opened the possibility of more accurately measuring the lattice spacing of a diffraction crystal on a scale directly tied to the SI system of units. The road from the possible to reality required moving objects and measuring translations with sub-atomic accuracy. The improved crystal lattice spacing determinations had a significant impact on two fundamental measurement areas: 1) the amount of substance (the mole and the associated Avogadro Constant), and 2) short wavelengths (the x- and gamma-ray regions). Progress in both areas required additional metrological advances: density and isotopic abundance measurements are needed for the Avogadro constant and small angle measurements are required for the determination of short wavelengths. The x- and gamma-ray measurements have led to more accurate wavelength standards and neutron binding energy measurements that connect gamma-ray measurements to precision atomic mass measurements, particularly the neutron mass. Richard D. Deslattes devoted much of his scientific career to this measurement program. His outstanding contributions and insights will be reviewed.

  17. Separation of Gadolinium (Gd) using Synergic Solvent Mixed Topo-D2EHPA with Extraction Method.

    NASA Astrophysics Data System (ADS)

    Effendy, N.; Basuki, K. T.; Biyantoro, D.; Perwira, N. K.

    2018-04-01

    The main problem in obtaining Gd with high purity is the similarity of its chemical and physical properties to those of other rare earth elements (REE) such as Y and Dy, so separation by an extraction process is necessary. The purpose of this research is to determine the best solvent type, amount of solvent, and feed-to-solvent ratio in the Gd extraction process; to determine the rate order and the rate constant of the decrease in Gd concentration from experimental data on the aqueous-phase concentration as a function of time; and to determine the effect of temperature on the rate constant. The research examined variations of solvent, amount of solvent, and feed-to-solvent ratio in the extraction process for Gd separation, as well as the extraction time, in order to determine the order and the rate constant of the Gd concentration from the aqueous-phase concentration data as a function of time. Based on the calculations, the best feed-to-solvent ratio for separating the rare earth element Gd in the extraction process is 1:4, with 15% concentration of TOPO and 10% concentration of D2EHPA. The separation of Gd by extraction with a 2:1 TOPO-D2EHPA solvent mixture is better than with a single solvent (D2EHPA or TOPO) because of the synergistic effect. The separation process of Gd follows first-order kinetics. The Arrhenius equation for Gd becomes k = 1.46 x 10^-7 exp(-(6.96 kcal/mol)/RT).
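
    The reported kinetics combine a first-order decay of the aqueous Gd concentration with an Arrhenius temperature dependence of the rate constant. The sketch below only illustrates how such expressions are evaluated; the pre-exponential factor and activation energy are placeholder values in the same functional form as the abstract's equation, not a re-analysis of the study's data.

```python
# Illustrative first-order extraction kinetics with an Arrhenius rate constant.
# A_PREEXP and EA_KCAL are placeholder values, not the study's fitted constants.
import math

R_KCAL = 1.987e-3        # gas constant [kcal/(mol*K)]
A_PREEXP = 2.0e3         # assumed pre-exponential factor [1/min]
EA_KCAL = 7.0            # assumed activation energy [kcal/mol]

def rate_constant(temp_k):
    """Arrhenius form k(T) = A * exp(-Ea / (R * T))."""
    return A_PREEXP * math.exp(-EA_KCAL / (R_KCAL * temp_k))

def gd_fraction_remaining(temp_k, t_min):
    """First-order decay of the aqueous Gd concentration: C/C0 = exp(-k t)."""
    return math.exp(-rate_constant(temp_k) * t_min)

for temp_c in (30.0, 50.0, 70.0):
    t_k = temp_c + 273.15
    print(f"{temp_c:.0f} C: k = {rate_constant(t_k):.4f} 1/min, "
          f"C/C0 after 60 min = {gd_fraction_remaining(t_k, 60.0):.3f}")
```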

  18. Prolonged Perceptual Learning of Positional Acuity in Adult Amblyopia

    PubMed Central

    Li, Roger W; Klein, Stanley A; Levi, Dennis M

    2009-01-01

    Amblyopia is a developmental abnormality that results in physiological alterations in the visual cortex and impairs form vision. It is often successfully treated by patching the sound eye in infants and young children, but is generally considered to be untreatable in adults. However, a number of recent studies suggest that repetitive practice of a visual task using the amblyopic eye results in improved performance in both children and adults with amblyopia. These perceptual learning studies have used relatively brief periods of practice; however, clinical studies have shown that the time-constant for successful patching is long. The time-constant for perceptual learning in amblyopia is still unknown. Here we show that the time-constant for perceptual learning depends on the degree of amblyopia. Severe amblyopia requires more than 50 hours (≈35,000 trials) to reach plateau, yielding as much as a five-fold improvement in performance at a rate of ≈1.5% per hour. There is significant transfer of learning from the amblyopic to the dominant eye, suggesting that the learning reflects alterations in higher decision stages of processing. Using a reverse correlation technique, we document, for the first time, a dynamic retuning of the amblyopic perceptual decision template and a substantial reduction in internal spatial distortion. These results show that the mature amblyopic brain is surprisingly malleable, and point to more intensive treatment methods for amblyopia. PMID:19109504

  19. Active cell mechanics: Measurement and theory.

    PubMed

    Ahmed, Wylie W; Fodor, Étienne; Betz, Timo

    2015-11-01

    Living cells are active mechanical systems that are able to generate forces. Their structure and shape are primarily determined by biopolymer filaments and molecular motors that form the cytoskeleton. Active force generation requires constant consumption of energy to maintain the nonequilibrium activity to drive organization and transport processes necessary for their function. To understand this activity it is necessary to develop new approaches to probe the underlying physical processes. Active cell mechanics incorporates active molecular-scale force generation into the traditional framework of mechanics of materials. This review highlights recent experimental and theoretical developments towards understanding active cell mechanics. We focus primarily on intracellular mechanical measurements and theoretical advances utilizing the Langevin framework. These developing approaches allow a quantitative understanding of nonequilibrium mechanical activity in living cells. This article is part of a Special Issue entitled: Mechanobiology. Copyright © 2015. Published by Elsevier B.V.

  20. Effect of fabrication process on physical and mechanical properties of tungsten carbide - cobalt composite: A review

    NASA Astrophysics Data System (ADS)

    Mahaidin, Ahmad Aswad; Jaafar, Talib Ria; Selamat, Mohd Asri; Budin, Salina; Sulaiman, Zaim Syazwan; Hamid, Mohamad Hasnan Abdul

    2017-12-01

    WC-Co, also known as cemented carbide, is widely used in the metal cutting industry and in wear-related applications owing to its excellent mechanical properties. Manufacturing industries are focusing on improving productivity and reducing operational cost, with machining operations considered one of the key factors. Thus, machining conditions are becoming more severe and require better cutting tool bits with improved mechanical properties to withstand high-temperature operation. Numerous studies have been made over the years to further improve cemented carbide properties to meet the constant increase in demand. However, the results of these studies vary owing to different process parameters and manufacturing technologies. This paper summarizes studies aimed at improving the properties of the WC-Co composite using different consolidation parameters (powder size, mixing method, formulation, etc.) and sintering parameters (temperature, time, atmosphere, etc.).

  1. 40 CFR 57.301 - General requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... maintained at all times. The NSO shall require the following gas streams to be treated by interim constant... sintering machine and any other sinter gases which are recirculated; (c) In zinc smelters, off-gases from...

  2. 28 CFR 570.44 - Supervision and restraint requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 28 Judicial Administration 2 2014-07-01 2014-07-01 false Supervision and restraint requirements... PROGRAMS AND RELEASE COMMUNITY PROGRAMS Escorted Trips § 570.44 Supervision and restraint requirements. Inmates under escort will be within the constant and immediate visual supervision of escorting staff at...

  3. 28 CFR 570.44 - Supervision and restraint requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Supervision and restraint requirements... PROGRAMS AND RELEASE COMMUNITY PROGRAMS Escorted Trips § 570.44 Supervision and restraint requirements. Inmates under escort will be within the constant and immediate visual supervision of escorting staff at...

  4. 28 CFR 570.44 - Supervision and restraint requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 28 Judicial Administration 2 2011-07-01 2011-07-01 false Supervision and restraint requirements... PROGRAMS AND RELEASE COMMUNITY PROGRAMS Escorted Trips § 570.44 Supervision and restraint requirements. Inmates under escort will be within the constant and immediate visual supervision of escorting staff at...

  5. 28 CFR 570.44 - Supervision and restraint requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 28 Judicial Administration 2 2012-07-01 2012-07-01 false Supervision and restraint requirements... PROGRAMS AND RELEASE COMMUNITY PROGRAMS Escorted Trips § 570.44 Supervision and restraint requirements. Inmates under escort will be within the constant and immediate visual supervision of escorting staff at...

  6. 28 CFR 570.44 - Supervision and restraint requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Supervision and restraint requirements... PROGRAMS AND RELEASE COMMUNITY PROGRAMS Escorted Trips § 570.44 Supervision and restraint requirements. Inmates under escort will be within the constant and immediate visual supervision of escorting staff at...

  7. Binding constants of membrane-anchored receptors and ligands depend strongly on the nanoscale roughness of membranes.

    PubMed

    Hu, Jinglei; Lipowsky, Reinhard; Weikl, Thomas R

    2013-09-17

    Cell adhesion and the adhesion of vesicles to the membranes of cells or organelles are pivotal for immune responses, tissue formation, and cell signaling. The adhesion processes depend sensitively on the binding constant of the membrane-anchored receptor and ligand proteins that mediate adhesion, but this constant is difficult to measure in experiments. We have investigated the binding of membrane-anchored receptor and ligand proteins with molecular dynamics simulations. We find that the binding constant of the anchored proteins strongly decreases with the membrane roughness caused by thermally excited membrane shape fluctuations on nanoscales. We present a theory that explains the roughness dependence of the binding constant for the anchored proteins from membrane confinement and that relates this constant to the binding constant of soluble proteins without membrane anchors. Because the binding constant of soluble proteins is readily accessible in experiments, our results provide a useful route to compute the binding constant of membrane-anchored receptor and ligand proteins.

  8. The use of knowledge-based Genetic Algorithm for starting time optimisation in a lot-bucket MRP

    NASA Astrophysics Data System (ADS)

    Ridwan, Muhammad; Purnomo, Andi

    2016-01-01

    In production planning, Material Requirement Planning (MRP) is usually developed on a time-bucket system, in which each period (bucket) in the MRP represents time, usually a week. MRP has been successfully implemented in Make To Stock (MTS) manufacturing, where production activity must be started before customer demand is received. However, to be implemented successfully in Make To Order (MTO) manufacturing, the conventional MRP must be modified to bring it in line with the real situation. In MTO manufacturing, the delivery schedule to the customers is defined strictly and must be fulfilled in order to increase customer satisfaction. On the other hand, the company prefers to keep a constant number of workers, hence the production lot size should be constant as well. Since a bucket in a conventional MRP system represents time, usually a week, a strict delivery schedule cannot be accommodated. Fortunately, there is a modified time-bucket MRP system, called the lot-bucket MRP system, proposed by Casimir in 1999. In the lot-bucket MRP system a bucket represents a lot, and the lot size is preferably constant. The time to finish each lot can vary depending on the due date of the lot. The starting time of a lot must be determined so that every lot has a reasonable production time. So far there is no formal method to determine the optimum starting times in the lot-bucket MRP system. A trial-and-error process is usually used, but it sometimes leaves several lots with very short production times, making the lot-bucket MRP infeasible to execute. This paper presents the use of a Genetic Algorithm (GA) for optimisation of starting times in a lot-bucket MRP system. Even though GA is well known as a powerful search algorithm, improvement is still required to increase its chance of finding the optimum solution in a shorter time. A knowledge-based system has been embedded in the proposed GA as the improvement effort, and the improved GA shows superior performance when used to solve a lot-bucket MRP problem.
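
    A bare-bones illustration of the kind of real-valued GA described above is sketched below: individuals are vectors of lot starting times, and the fitness penalizes lots whose production window (due date minus start) falls below a minimum. The due dates, window length, and GA settings are invented examples, and the knowledge-based component of the actual paper is not reproduced here.

```python
# Minimal real-valued GA sketch for choosing lot starting times so that every
# lot gets a reasonable production window before its due date.
# All data and GA parameters are assumed examples, not from the cited paper.
import random

DUE_DATES = [40.0, 80.0, 120.0, 160.0]   # assumed due dates [hours]
MIN_WINDOW = 20.0                        # assumed minimum production time per lot

def fitness(starts):
    """Penalize infeasible starts and production windows shorter than MIN_WINDOW."""
    penalty = 0.0
    for start, due in zip(starts, DUE_DATES):
        window = due - start
        if start < 0 or window <= 0:
            penalty += 1000.0
        elif window < MIN_WINDOW:
            penalty += (MIN_WINDOW - window) ** 2
    return -penalty  # higher is better

def crossover(a, b):
    # Arithmetic (blend) crossover, common in real-valued GAs.
    w = random.random()
    return [w * x + (1 - w) * y for x, y in zip(a, b)]

def mutate(ind, sigma=2.0, rate=0.2):
    return [g + random.gauss(0, sigma) if random.random() < rate else g for g in ind]

def run_ga(pop_size=30, generations=200):
    pop = [[random.uniform(0, d) for d in DUE_DATES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            children.append(mutate(crossover(a, b)))
        pop = elite + children
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = run_ga()
    print("best starting times:", [round(s, 1) for s in best])
```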

  9. Numerical analysis and experimental verification of elastomer bending process with different material models

    NASA Astrophysics Data System (ADS)

    Kut, Stanislaw; Ryzinska, Grazyna; Niedzialek, Bernadetta

    2016-01-01

    The article presents the results of tests carried out to verify the effectiveness of nine selected elastomeric material models (Neo-Hookean, Mooney with two and three constants, Signorini, Yeoh, Ogden, Arruda-Boyce, Gent and Marlow) whose material constants were determined from a single material test, the uniaxial tension test. The convergence of the nine analyzed models was assessed by comparing their performance in an experimental bending test on elastomer samples with the results of FEM numerical calculations for each material model. To calculate the material constants for the analyzed materials, a model was generated from the stress-strain characteristics obtained in an experimental uniaxial tensile test on elastomeric dumbbell samples, taking into account the parameters obtained in its 18th cycle. Using the material constants calculated in this way, numerical simulations of the bending process of elastomeric parallelepipedic samples were carried out with the MARC/Mentat program.

  10. Cl(-) concentration dependence of photovoltage generation by halorhodopsin from Halobacterium salinarum.

    PubMed Central

    Muneyuki, Eiro; Shibazaki, Chie; Wada, Yoichiro; Yakushizin, Manabu; Ohtani, Hiroyuki

    2002-01-01

    The photovoltage generation by halorhodopsin from Halobacterium salinarum (shR) was examined by adsorbing shR-containing membranes onto a thin polymer film. The photovoltage consisted of two major components: one with a sub-millisecond range time constant and the other with a millisecond range time constant with different amplitudes, as previously reported. These components exhibited different Cl(-) concentration dependencies (0.1-9 M). We found that the time constant for the fast component was relatively independent of the Cl(-) concentration, whereas the time constant for the slow component increased sigmoidally at higher Cl(-) concentrations. The fast and the slow processes were attributed to charge (Cl(-)) movements within the protein and related to Cl(-) ejection, respectively. The laser photolysis studies of shR-membrane suspensions revealed that they corresponded to the formation and the decay of the N intermediate. The photovoltage amplitude of the slow component exhibited a distorted bell-shaped Cl(-) concentration dependence, and the Cl(-) concentration dependence of its time constant suggested a weak and highly cooperative Cl(-)-binding site(s) on the cytoplasmic side (apparent K(D) of approximately 5 M and Hill coefficient > or =5). The Cl(-) concentration dependence of the photovoltage amplitude and the time constant for the slow process suggested a competition between spontaneous relaxation and ion translocation. The time constant for the relaxation was estimated to be >100 ms. PMID:12324398
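
    The sigmoidal Cl(-) dependence with an apparent K(D) of roughly 5 M and a Hill coefficient of at least 5 corresponds to a standard Hill-type saturation curve. The short illustration below only reproduces that functional form with the approximate values quoted in the abstract; it is not the authors' fitting procedure.

```python
# Hill-type saturation curve illustrating the reported cooperative Cl- binding.
# K_D ~ 5 M and n >= 5 are the approximate values quoted in the abstract.
import numpy as np

def hill(conc, k_d=5.0, n=5.0):
    """Fractional occupancy for cooperative binding with Hill coefficient n."""
    return conc**n / (k_d**n + conc**n)

cl_molar = np.array([0.1, 1.0, 3.0, 5.0, 7.0, 9.0])
for c, y in zip(cl_molar, hill(cl_molar)):
    print(f"[Cl-] = {c:4.1f} M  occupancy = {y:.3f}")
```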

  11. SSEM: A model for simulating runoff and erosion of saline-sodic soil slopes under coastal reclamation

    NASA Astrophysics Data System (ADS)

    Liu, Dongdong; She, Dongli

    2018-06-01

    Current physically based erosion models do not carefully consider the dynamic variations of soil properties during rainfall and are unable to simulate saline-sodic soil slope erosion processes. The aim of this work was to build a complete model framework, SSEM, to simulate runoff and erosion processes for saline-sodic soils by coupling dynamic saturated hydraulic conductivity Ks and soil erodibility Kτ. Sixty simulated rainfall experiments (2 soil textures × 5 sodicity levels × 2 slope gradients × 3 duplicates) provided data for model calibration and validation. SSEM worked very well for simulating the runoff and erosion processes of saline-sodic silty clay. The runoff and erosion processes of saline-sodic silt loam were more complex than those of non-saline soils or soils with higher clay contents; thus, SSEM did not perform very well for some validation events. We further examined the model performance of four concepts: Dynamic Ks and Kτ (Case 1, SSEM), Dynamic Ks and Constant Kτ (Case 2), Constant Ks and Dynamic Kτ (Case 3) and Constant Ks and Constant Kτ (Case 4). The results demonstrated that the model, which considers dynamic variations in soil saturated hydraulic conductivity and soil erodibility, can provide more reasonable runoff and erosion predictions for saline-sodic soils.

  12. Some remarks on anthropic approaches to the strong CP problem

    NASA Astrophysics Data System (ADS)

    Dine, Michael; Haskins, Laurel Stephenson; Ubaldi, Lorenzo; Xu, Di

    2018-05-01

    The peculiar value of θ is a challenge to the notion of an anthropic landscape. We briefly review the possibility that a suitable axion might arise from an anthropic requirement of dark matter. We then consider an alternative suggestion of Kaloper and Terning that θ might be correlated with the cosmological constant. We note that in a landscape one expects that θ is determined by the expectation value of one or more axions. We discuss how a discretuum of values of θ might arise with an energy distribution dominated by QCD, and find the requirements to be quite stringent. Given such a discretuum, we find no circumstances where small θ might be selected by anthropic requirements on the cosmological constant.

  13. Printing Outside the Box: Additive Manufacturing Processes for Fabrication of Large Aerospace Structures

    NASA Technical Reports Server (NTRS)

    Babai, Majid; Peters, Warren

    2015-01-01

    To achieve NASA's mission of space exploration, innovative manufacturing processes are being applied to the fabrication of propulsion elements. Liquid rocket engines (LREs) consist of a thrust chamber and nozzle extension, as illustrated in figure 1 for the J2X upper stage engine. Development of the J2X engine, designed for the Ares I launch vehicle, is currently being incorporated on the Space Launch System. A nozzle extension is attached to the combustion chamber to obtain the expansion ratio needed to increase specific impulse. If the nozzle extension could be printed as one piece using free-form additive manufacturing (AM) processes, rather than the current method of forming welded parts, a considerable time savings could be realized. Not only would this provide a more homogeneous microstructure than a welded structure, but it could also greatly shorten the overall fabrication time. The main objective of this study is to fabricate test specimens using a pulsed arc source and solid wire as shown in figure 2. The mechanical properties of these specimens will be compared with those fabricated using the powder bed, selective laser melting technology at NASA Marshall Space Flight Center. As printed components become larger, maintaining a constant temperature during the build process becomes critical. This predictive capability will require modeling of the moving heat source as illustrated in figure 3. Predictive understanding of the heat profile will allow a constant temperature to be maintained as a function of height from the substrate while printing complex shapes. In addition, to avoid slumping, this will also allow better control of the microstructural development and hence the properties. Figure 4 shows a preliminary comparison of the mechanical properties obtained.

  14. Catalyst and process development for synthesis gas conversion to isobutylene. Final report, September 1, 1990--January 31, 1994

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anthony, R.G.; Akgerman, A.

    1994-05-06

    Previous work on isosynthesis (conversion of synthesis gas to isobutane and isobutylene) was performed at very low conversions or extreme process conditions. The objectives of this research were (1) determine the optimum process conditions for isosynthesis; (2) determine the optimum catalyst preparation method and catalyst composition/properties for isosynthesis; (3) determine the kinetics for the best catalyst; (4) develop reactor models for trickle bed, slurry, and fixed bed reactors; and (5) simulate the performance of fixed bed trickle flow reactors, slurry flow reactors, and fixed bed gas phase reactors for isosynthesis. More improvement in catalyst activity and selectivity is needed before isosynthesis can become a commercially feasible (stand-alone) process. Catalysts prepared by the precipitation method show the most promise for future development as compared with those prepared hydrothermally, by calcining zirconyl nitrate, or by a modified sol-gel method. For current catalysts the high temperatures (>673 K) required for activity also cause the production of methane (because of thermodynamics). A catalyst with higher activity at lower temperatures would magnify the unique selectivity of zirconia for isobutylene. Perhaps with a more active catalyst and acidification, oxygenate production could be limited at lower temperatures. Pressures above 50 atm cause an undesirable shift in product distribution toward heavier hydrocarbons. A model was developed that can predict carbon monoxide conversion and product distribution. The rate equation for carbon monoxide conversion contains only a rate constant and an adsorption equilibrium constant. The product distribution was predicted using a simple ratio of the rate of CO conversion. This report is divided into Introduction, Experimental, and Results and Discussion sections.

  15. NASA Data Evaluation (2015): Chemical Kinetics and Photochemical Data for Use in Atmospheric Studies

    NASA Astrophysics Data System (ADS)

    Burkholder, J. B.; Sander, S. P.; Abbatt, J.; Barker, J. R.; Huie, R. E.; Kolb, C. E., Jr.; Kurylo, M. J., III; Orkin, V. L.; Wilmouth, D. M.; Wine, P. H.

    2015-12-01

    Atmospheric chemistry models must include a large number of processes to accurately describe the temporal and spatial behavior of atmospheric composition. They require a wide range of chemical and physical data (parameters) that describe elementary gas-phase and heterogeneous processes. The review and evaluation of chemical and physical data has, therefore, played an important role in the development of chemical models and in their use in environmental assessment activities. The NASA data panel evaluation has a broad atmospheric focus that includes Ox, O(1D), singlet O2, HOx, NOx, Organic, FOx, ClOx, BrOx, IOx, SOx, and Na reactions, three-body reactions, equilibrium constants, photochemistry, Henry's Law coefficients, aqueous chemistry, heterogeneous chemistry and processes, and thermodynamic parameters. The 2015 evaluation includes critical coverage of ~700 bimolecular reactions, 86 three-body reactions, 33 equilibrium constants, ~220 photochemical species, ~360 aqueous and heterogeneous processes, and thermodynamic parameters for ~800 species with over 5000 literature citations reviewed. Each evaluation includes (1) recommended values (e.g. rate coefficients, absorption cross sections, solubilities, and uptake coefficients) with estimated uncertainty factors and (2) a note describing the available experimental and theoretical data and an explanation for the recommendation. This presentation highlights some of the recent additions to the evaluation that include: (1) expansion of thermochemical parameters, including Hg species, (2) CH2OO (Criegee) chemistry, (3) Isoprene and its major degradation product chemistry, (4) halocarbon chemistry, (5) Henry's law solubility data, and (6) uptake coefficients. In addition, a listing of complete references with the evaluation notes has been implemented. Users of the data evaluation are encouraged to suggest potential improvements and ways that the evaluation can better serve the atmospheric chemistry community.

  16. Improvement of Functional Properties by Severe Plastic Deformation on Parts of Titanium Biomaterials

    NASA Astrophysics Data System (ADS)

    Czán, Andrej; Babík, Ondrej; Daniš, Igor; Martikáň, Pavol; Czánová, Tatiana

    2017-12-01

    The main task of materials for invasive implantology is biocompatibility with the tissue, but the requirements for improving the functional properties of these materials are also constantly increasing. One problem is that the functional properties cannot be improved simply by changing the percentages of the chemical elements, so it is necessary to find other innovative methods of improving functional properties, such as mechanical action in the form of severe plastic deformation. This paper focuses on various severe plastic deformation methods, such as Equal Channel Angular Pressing (ECAP), with which rods with record strength properties were obtained. Studies of the deformation process, with tri-axial compressive stress acting on the workpiece at a high deformation rate, show effects similar to the results obtained using the other methods, but at lower stress levels. Hydrostatic extrusion (HE) is applied to refine the structure of commercially pure titanium down to the nanoscale. Experiments showed the ability to reduce the grain size below 100 nm. Because severe plastic deformation significantly changes the performance of titanium materials, it is necessary to identify the processability of the materials with respect to the created surfaces and to monitor surface integrity; the experimental results show the applicability of SPD technologies to biomaterials.

  17. The peats of Costa Rica

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thayer, G.R.; Williamson, K.D. Jr.; Ramirez, O.

    The authors compare the competitive position of peat for energy with coal, oil, and cogenerative systems in gasifiers and solid-fuel boilers. They also explore the possibility for peat use in industry. To identify the major factors, they analyze costs using a Los Alamos levelized cost code, and they study parametric costs, comparing peat production in constant dollars with interest rates and return on investment. They consider costs of processing plant construction, sizes and kinds of boilers, retrofitting, peat drying, and mining methods. They examine mining requirements for Moin, Changuinola, and El Cairo and review wet mining and dewatering methods. Peat can, indeed, be competitive with other energy sources, but this depends on the ratio of fuel costs to boiler costs. This ratio is nearly constant in comparison with cogeneration in a steam-only production system. For grate boilers using Costa Rican high-ash peat, and for small nonautomatic boilers now used in Costa Rica, the authors recommend combustion tests. An appendix contains a preliminary mining plan and cost estimate for the El Cairo peat deposit. 8 refs., 43 figs., 19 tabs.

  18. Dynamics of Singlet Fission and Electron Injection in Self-Assembled Acene Monolayers on Titanium Dioxide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Justin C; Pace, Natalie A; Arias, Dylan H

    We employ a combination of linear spectroscopy, electrochemistry, and transient absorption spectroscopy to characterize the interplay between electron transfer and singlet fission dynamics in polyacene-based dyes attached to nanostructured TiO2. For triisopropyl silylethynyl (TIPS)-pentacene, we find that the singlet fission time constant increases to 6.5 ps on a nanostructured TiO2 surface relative to a thin film time constant of 150 fs, and that triplets do not dissociate after they are formed. In contrast, TIPS-tetracene singlets quickly dissociate in 2 ps at the molecule/TiO2 interface, and this dissociation outcompetes the relatively slow singlet fission process. The addition of an alumina layer slows down electron injection, allowing the formation of triplets from singlet fission in 40 ps. However, the triplets do not inject electrons, which is likely due to a lack of sufficient driving force for triplet dissociation. These results point to the critical balance required between efficient singlet fission and appropriate energetics for interfacial charge transfer.

  19. Microgravity combustion of dust suspensions

    NASA Technical Reports Server (NTRS)

    Lee, John H. S.; Peraldi, Olivier; Knystautas, Rom

    1993-01-01

    Unlike for the combustion of homogeneous gas mixtures, there are practically no reliable fundamental data (i.e., laminar burning velocity, flammability limits, quenching distance, minimum ignition energy) for the combustion of heterogeneous dust suspensions. Even equilibrium thermodynamic data such as the constant-volume combustion pressure and the constant-pressure adiabatic flame temperature are not accurately known for dust mixtures. This is mainly due to the problem of gravity sedimentation. In normal gravity, turbulence, convective flow, or electric and acoustic fields are required to maintain a dust in suspension. These external influences have a dominating effect on the combustion processes. Microgravity offers a unique environment where a quiescent dust cloud can in principle be maintained for a sufficiently long duration for almost all combustion experiments (dust suspensions are inherently unstable due to Brownian motion and particle aggregation). Thus, the microgravity durations provided by drop towers, parabolic flights, and the space shuttle can all be exploited for different kinds of dust combustion experiments. The present paper describes some recent studies on microgravity combustion of dust suspensions carried out on the KC-135 and the Caravelle aircraft. The results reported are obtained from three parabolic flight campaigns.

  20. A Novel, Real-Valued Genetic Algorithm for Optimizing Radar Absorbing Materials

    NASA Technical Reports Server (NTRS)

    Hall, John Michael

    2004-01-01

    A novel, real-valued Genetic Algorithm (GA) was designed and implemented to minimize the reflectivity and/or transmissivity of an arbitrary number of homogeneous, lossy dielectric or magnetic layers of arbitrary thickness positioned at either the center of an infinitely long rectangular waveguide, or adjacent to the perfectly conducting backplate of a semi-infinite, shorted-out rectangular waveguide. Evolutionary processes extract the optimal physioelectric constants falling within specified constraints which minimize reflection and/or transmission over the frequency band of interest. This GA extracted the unphysical dielectric and magnetic constants of three layers of fictitious material placed adjacent to the conducting backplate of a shorted-out waveguide such that the reflectivity of the configuration was 55 dB or less over the entire X-band. Examples of the optimization of realistic multi-layer absorbers are also presented. Although typical Genetic Algorithms require populations of many thousands in order to function properly and obtain correct results, verified correct results were obtained for all test cases using this GA with a population of only four.

  1. Design and numerical investigations of a counter-rotating axial compressor for a geothermal power plant application

    NASA Astrophysics Data System (ADS)

    Qualman, Thomas, II

    Geothermal provides a steady source of energy, unlike other renewable sources; however, there are non-condensable gases (NCGs) that need to be removed before the steam enters the turbine/generator, or the efficiency suffers. By utilizing a multistage counter-rotating axial compressor with integrated composite wound impellers, the process of removing NCGs could be significantly improved. The novel composite impeller design provides a high level of corrosion resistance, a good strength-to-weight ratio, reduced size, and reduced manufacturing and maintenance costs. This thesis focuses on the design of the first 3 stages of a multistage counter-rotating axial compressor with integrated composite wound impellers for NCG removal. Because of the novel technique, an unusual set of constraints required a simplified 1D and 2D design methodology to be developed and investigated through CFD. The results indicate that by utilizing constant-thickness blades with a constant shroud radius (to ease manufacturing difficulties), a total pressure ratio of 1.37 with a total polytropic efficiency of 89.81% could be achieved.

  2. Cocrystals to facilitate delivery of poorly soluble compounds beyond-rule-of-5.

    PubMed

    Kuminek, Gislaine; Cao, Fengjuan; Bahia de Oliveira da Rocha, Alanny; Gonçalves Cardoso, Simone; Rodríguez-Hornedo, Naír

    2016-06-01

    Besides enhancing aqueous solubilities, cocrystals have the ability to fine-tune the solubility advantage over the drug, the supersaturation index, and bioavailability. This review presents important facts about cocrystals that set them apart from other solid-state forms of drugs, and a quantitative set of rules for the selection of additives and solution/formulation conditions that predict cocrystal solubility, supersaturation index, and transition points. Cocrystal eutectic constants are shown to be the most important cocrystal property that can be measured once a cocrystal is discovered, and simple relationships are presented that allow for prediction of cocrystal behavior as a function of pH and drug solubilizing agents. The cocrystal eutectic constant is a stability or supersaturation index that: (a) reflects how close or far from equilibrium a cocrystal is, (b) establishes transition points, and (c) provides a quantitative scale of the cocrystal's true solubility advantage over the drug. The benefit of this strategy is that a single measurement, which requires little material and time, provides a principled basis to tailor the cocrystal supersaturation index by the rational selection of cocrystal formulation, dissolution, and processing conditions. Copyright © 2016 Elsevier B.V. All rights reserved.
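    As a minimal numerical illustration of the eutectic-constant idea for the simplest 1:1 stoichiometry, the sketch below computes Keu from eutectic concentrations and converts it to a cocrystal-to-drug solubility ratio using the standard 1:1 relationship Scocrystal/Sdrug = sqrt(Keu); the concentrations are made-up values, not data from the review.

    ```python
    import math

    def eutectic_constant(coformer_eu, drug_eu):
        """Keu = [coformer]eu / [drug]eu, measured at the eutectic point where
        drug and cocrystal solid phases coexist with solution."""
        return coformer_eu / drug_eu

    def solubility_ratio_1to1(keu):
        """For a 1:1 cocrystal, Scocrystal/Sdrug = sqrt(Keu); Keu > 1 means the
        cocrystal is more soluble than the drug (supersaturating), and Keu = 1
        marks the transition point."""
        return math.sqrt(keu)

    keu = eutectic_constant(coformer_eu=2.4e-3, drug_eu=1.5e-4)  # mol/L, illustrative
    print(f"Keu = {keu:.1f} -> cocrystal/drug solubility ratio ~ "
          f"{solubility_ratio_1to1(keu):.1f}")
    ```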

  3. A pilot plant study using conventional and advanced water treatment processes: Evaluating removal efficiency of indicator compounds representative of pharmaceuticals and personal care products.

    PubMed

    Zhang, Shuangyi; Gitungo, Stephen; Axe, Lisa; Dyksen, John E; Raczko, Robert F

    2016-11-15

    With the widespread occurrence of pharmaceuticals and personal care products (PPCPs) in the water cycle, their presence in source water has led to the need to better understand their treatability and removal efficiency in treatment processes. Fifteen indicator compounds were identified to represent the large number of PPCPs reported worldwide. Criteria applied to select the indicator compounds included PPCPs that are widely used, observed at great frequency in aqueous systems, resistant to treatment, persistent in the environment, and representative of classes of organics. Through a pilot plant investigation to understand the optimal combination of unit processes for treating PPCPs, 12 treatment trains with their additive and synergistic contributions were investigated; processes included dissolved air flotation (DAF), pre- and intermediate-ozonation with and without H2O2, intermediate chlorination, dual media filtration, granular activated carbon (GAC), and UV/H2O2. Treatment trains that achieved the greatest removals involved 1. DAF followed by intermediate ozonation, dual media filtration, and virgin GAC; 2. pre-ozonation followed by DAF, dual media filtration, and virgin GAC; and, 3. DAF (with either pre- or intermediate oxidation) followed by dual media filtration and UV/H2O2. Results revealed significant removal efficiencies for virgin GAC (preceded by DAF and intermediate ozonation) and UV/H2O2 with an intensity of 700 mJ/cm^2, where more than 12 of the compounds were removed by greater than 90%. Reduced PPCP removals were observed with virgin GAC preceded by pre-ozonation and DAF. Intermediate ozonation was more effective than pre-ozonation, demonstrating the importance of applying this process to PPCPs after treatment of natural organic matter. Removal efficiencies of indicator compounds through ozonation were found to be a function of the O3 rate constants (kO3). For compounds with low O3 rate constants (kO3 < 10 M^-1 s^-1), H2O2 addition in the O3 reactor was required. Of the 15 indicator compounds, tri(2-chloroethyl) phosphate (TCEP) and cotinine were observed to be the most recalcitrant. Although UV/H2O2 with elevated intensity (700 mJ/cm^2) was effective for PPCP removals, the energy requirements far exceed the intensities applied for disinfection. Copyright © 2016 Elsevier Ltd. All rights reserved.
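    The kO3 dependence mentioned above can be made concrete with the usual pseudo-first-order estimate for removal by direct ozone reaction, C/C0 = exp(-kO3 * integral of [O3] over time); the ozone exposure value below is illustrative, not a measurement from the pilot plant.

    ```python
    import math

    def fraction_remaining(k_o3, ozone_exposure):
        """Pseudo-first-order removal by the direct O3 pathway:
        C/C0 = exp(-k_O3 * integral([O3] dt)),
        with k_o3 in M^-1 s^-1 and ozone exposure (CT) in M*s."""
        return math.exp(-k_o3 * ozone_exposure)

    ct = 1.0e-4  # M*s, illustrative ozone exposure in the reactor
    for label, k in [("fast-reacting PPCP (k ~ 1e6)", 1.0e6),
                     ("slow-reacting PPCP (k < 10)", 5.0)]:
        removed = 1.0 - fraction_remaining(k, ct)
        print(f"{label}: ~{removed:.1%} removed by direct ozonation")
    # Slow-reacting compounds are barely touched by O3 alone, which is why
    # H2O2 addition (hydroxyl-radical pathway) was required for them.
    ```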

  4. Mature clustered, regularly interspaced, short palindromic repeats RNA (crRNA) length is measured by a ruler mechanism anchored at the precursor processing site.

    PubMed

    Hatoum-Aslan, Asma; Maniv, Inbal; Marraffini, Luciano A

    2011-12-27

    Precise RNA processing is fundamental to all small RNA-mediated interference pathways. In prokaryotes, clustered, regularly interspaced, short palindromic repeats (CRISPR) loci encode small CRISPR RNAs (crRNAs) that protect against invasive genetic elements by antisense targeting. CRISPR loci are transcribed as a long precursor that is cleaved within repeat sequences by CRISPR-associated (Cas) proteins. In many organisms, this primary processing generates crRNA intermediates that are subject to additional nucleolytic trimming to render mature crRNAs of specific lengths. The molecular mechanisms underlying this maturation event remain poorly understood. Here, we defined the genetic requirements for crRNA primary processing and maturation in Staphylococcus epidermidis. We show that changes in the position of the primary processing site result in extended or diminished maturation to generate mature crRNAs of constant length. These results indicate that crRNA maturation occurs by a ruler mechanism anchored at the primary processing site. We also show that maturation is mediated by specific cas genes distinct from those genes involved in primary processing, showing that this event is directed by CRISPR/Cas loci.

  5. Analysis of water microdroplet condensation on silicon surfaces

    NASA Astrophysics Data System (ADS)

    Honda, Takuya; Fujimoto, Kenya; Yoshimoto, Yuta; Mogi, Katsuo; Kinefuchi, Ikuya; Sugii, Yasuhiko; Takagi, Shu; Univ. of Tokyo Team; Tokyo Inst. of Tech. Team

    2016-11-01

    We observed the condensation process of water microdroplets on flat silicon (100) surfaces by means of the sequential visualization of the droplets using an environmental scanning electron microscope. As previously reported for nanostructured surfaces, the condensation of water microdroplets on flat silicon surfaces also exhibits two modes: the constant base (CB) area mode and the constant contact angle (CCA) mode. In the CB mode, the contact angle increases with time while the base diameter stays constant. Subsequently, in the CCA mode, the base diameter increases with time while the contact angle remains constant. The dropwise condensation model regulated by subcooling temperature does not reproduce the experimental results. Because the subcooling temperature is not constant in the case of a slow condensation rate, this model is not applicable to condensation over long time scales (several tens of minutes). The contact angle of water microdroplets (several μm in size) tended to be smaller than the macroscopic contact angle. Two hypotheses are proposed as the cause of the small contact angles: electrowetting and the coalescence of sub-μm water droplets.

  6. WEST-3 wind turbine simulator development

    NASA Technical Reports Server (NTRS)

    Hoffman, J. A.; Sridhar, S.

    1985-01-01

    The software developed for WEST-3, a new, all-digital, fully programmable wind turbine simulator, is described. The process of wind turbine simulation on WEST-3 is described in detail. The major steps are the processing of the mathematical models, the preparation of the constant data, and the use of system-software-generated executable code for running on WEST-3. The mechanics of reformulation, normalization, and scaling of the mathematical models are discussed in detail, in particular the significance of reformulation, which leads to accurate simulations. Descriptions are given of the preprocessor computer programs used to prepare the constant data needed in the simulation. These programs, in addition to scaling and normalizing all the constants, relieve the user from having to generate a large number of constants used in the simulation. Brief descriptions are also given of the components of the WEST-3 system software: Translator, Assembler, Linker, and Loader. Also included are details of the aeroelastic rotor analysis, which is the center of a wind turbine simulation model; an analysis of the gimbal subsystem; and listings of the variables, constants, and equations used in the simulation.

  7. Kinetics of phloretin binding to phosphatidylcholine vesicle membranes

    PubMed Central

    1980-01-01

    The submillisecond kinetics of phloretin binding to unilamellar phosphatidylcholine (PC) vesicles was investigated using the temperature-jump technique. Spectrophotometric studies of the equilibrium binding performed at 328 nm demonstrated that phloretin binds to a single set of independent, equivalent sites on the vesicle with a dissociation constant of 8.0 microM and a lipid/site ratio of 4.0. The temperature of the phloretin-vesicle solution was jumped by 4 degrees C within 4 microseconds, producing a monoexponential, concentration-dependent relaxation process with time constants in the 30-200-microsecond range. An analysis of the concentration dependence of the relaxation time constants at pH 7.30 and 24 degrees C yielded a binding rate constant of 2.7 X 10(8) M-1 s-1 and an unbinding rate constant of 2,900 s-1; approximately 66 percent of the total binding sites are exposed at the outer vesicle surface. The value of the binding rate constant and three additional observations suggest that the binding kinetics are diffusion limited. The phloretin analogue naringenin, which has a diffusion coefficient similar to phloretin yet a dissociation constant equal to 24 microM, bound to PC vesicles with the same rate constant as phloretin. In addition, the phloretin-PC system was studied in buffers made one to six times more viscous than water by the addition of sucrose or glycerol to the buffer. The equilibrium affinity for phloretin binding to PC vesicles is independent of viscosity, yet the binding rate constant decreases with the expected dependence (k_binding proportional to 1/viscosity) for diffusion-limited processes. Thus, the binding rate constant is not altered by differences in binding affinity, yet depends upon the diffusion coefficient in buffer. Finally, studies of the pH dependence of the binding rate constant showed a dependence (k_binding proportional to [1 + 10^(pH-pK)]) consistent with the diffusion-limited binding of a weak acid. PMID:7391812
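    For a single bimolecular binding step, the concentration dependence of the relaxation rate is commonly analyzed as 1/tau = k_on*([sites] + [ligand]) + k_off, so a straight-line fit of 1/tau against the summed free equilibrium concentrations yields both rate constants. The sketch below illustrates that fit with synthetic numbers chosen near the reported values; it is not the paper's data or analysis code.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic relaxation data: 1/tau (s^-1) vs. sum of free equilibrium
    # concentrations (M).  Values chosen near the reported constants.
    conc_sum = np.array([20e-6, 50e-6, 100e-6, 200e-6, 400e-6])
    inv_tau = 2.7e8 * conc_sum + 2.9e3 + rng.normal(0.0, 200.0, conc_sum.size)

    # 1/tau = k_on * (c_sites + c_ligand) + k_off is a straight line
    k_on, k_off = np.polyfit(conc_sum, inv_tau, 1)
    print(f"k_on  ~ {k_on:.2e} M^-1 s^-1")
    print(f"k_off ~ {k_off:.2e} s^-1")
    print(f"Kd = k_off/k_on ~ {k_off / k_on * 1e6:.1f} microM")
    ```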

  8. Anti-anthropic solutions to the cosmic coincidence problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fedrow, Joseph M.; Griest, Kim, E-mail: j.m.fedrow@gmail.com, E-mail: kgriest@ucsd.edu

    2014-01-01

    A cosmological constant fits all current dark energy data, but requires two extreme fine tunings, both of which are currently explained by anthropic arguments. Here we discuss anti-anthropic solutions to one of these problems: the cosmic coincidence problem, that today the dark energy density is nearly equal to the matter density. We replace the ensemble of Universes used in the anthropic solution with an ensemble of tracking scalar fields that do not require fine-tuning. This not only does away with the coincidence problem, but also allows for a Universe that has a very different future than the one currently predicted by a cosmological constant. These models also allow for transient periods of significant scalar field energy (SSFE) over the history of the Universe that can give very different observational signatures as compared with a cosmological constant, and so can be confirmed or disproved in current and upcoming experiments.

  9. A new momentum management controller for the space station

    NASA Technical Reports Server (NTRS)

    Wie, B.; Byun, K. W.; Warren, V. W.

    1988-01-01

    A new approach to CMG (control moment gyro) momentum management and attitude control of the Space Station is developed. The control algorithm utilizes both the gravity-gradient and gyroscopic torques to seek a torque equilibrium attitude in the presence of secular and cyclic disturbances. Depending upon mission requirements, either pitch attitude or pitch-axis CMG momentum can be held constant; yaw attitude and roll-axis CMG momentum can be held constant, while roll attitude and yaw-axis CMG momentum cannot be held constant. As a result, the overall attitude and CMG momentum oscillations caused by cyclic aerodynamic disturbances are minimized. A state feedback controller with minimal computer storage requirements for gain scheduling is also developed. The overall closed-loop system is stable for + or - 30 percent inertia matrix variations and has more than + or - 10 dB and 45 deg stability margins in each loop.

  10. Constants of the motion, universal time and the Hamilton-Jacobi function in general relativity

    NASA Astrophysics Data System (ADS)

    O'Hara, Paul

    2013-04-01

    In most textbooks of mechanics, Newton's laws or Hamilton's equations of motion are first written down and then solved, based on initial conditions, to determine the constants of the motion and to describe the trajectories of the particles. In this essay, we take a different starting point. We begin with the metrics of general relativity and show how they can be used to construct, by inspection, constants of motion, which can then be used to write down the equations of the trajectories. This is achieved by deriving a Hamilton-Jacobi function from the metric and showing that its existence requires all of the above-mentioned properties. The article concludes by showing that a consistent theory of such functions also requires a universal measure of time, which can be identified with the "worldtime" parameter first introduced by Stueckelberg and later developed by Horwitz and Piron.

  11. Illusion thermal device based on material with constant anisotropic thermal conductivity for location camouflage

    NASA Astrophysics Data System (ADS)

    Hou, Quanwen; Zhao, Xiaopeng; Meng, Tong; Liu, Cunliang

    2016-09-01

    Thermal metamaterials and devices based on transformation thermodynamics often require materials with anisotropic and inhomogeneous thermal conductivities. In this study, still based on the concept of transformation thermodynamics, we designed a planar illusion thermal device which can delocalize a heat source such that the temperature profile outside the device appears to be produced by a virtual source at another position. This device can be constructed from only one kind of material with a constant anisotropic thermal conductivity. The condition which must be satisfied by the device is provided, and the required anisotropic thermal conductivity is then deduced theoretically. This study may be useful for the design of metamaterials and devices, since materials with constant anisotropic parameters are much easier to fabricate. A prototype device was fabricated from a composite composed of two naturally occurring materials. The experimental results validate the effectiveness of the device.

  12. Tick-Tock, Tick-Tock..."Good Management Begins with Good People"

    ERIC Educational Resources Information Center

    Vicars, Dennis

    2011-01-01

    Leaders are constantly being brought into situations they did not anticipate or telephone calls that extend well beyond the promised period. Directors' lives are a constant turmoil with daily challenges such as: staff illnesses, which create ratio problems; children with knee bruises, requiring ice and TLC; and don't forget the parents who require…

  13. Why Batteries Deliver a Fairly Constant Voltage until Dead

    ERIC Educational Resources Information Center

    Smith, Garon C.; Hossain, Md. Mainul; MacCarthy, Patrick

    2012-01-01

    Two characteristics of batteries, their delivery of nearly constant voltage and their rapid failure, are explained through a visual examination of the Nernst equation. Two Galvanic cells are described in detail: (1) a wet cell involving iron and copper salts and (2) a mercury oxide dry cell. A complete description of the wet cell requires a…
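    A short numerical illustration of that argument (not taken from the article) is to evaluate the Nernst equation E = E0 - (RT/nF)*ln(Q) as the reaction proceeds: the logarithm keeps the voltage nearly flat over most of the discharge and only lets it collapse when the reactants are almost exhausted. The cell values below are placeholders.

    ```python
    import numpy as np

    R, T, F = 8.314, 298.15, 96485.0
    E0, n = 0.78, 2        # placeholder standard cell potential (V) and electrons

    # Fraction of the cell reaction completed; with 1:1 stoichiometry the
    # reaction quotient Q grows as x/(1-x) during discharge (illustrative model).
    x = np.linspace(0.01, 0.999, 8)
    E = E0 - (R * T / (n * F)) * np.log(x / (1.0 - x))

    for xi, Ei in zip(x, E):
        print(f"{xi:6.1%} discharged -> E = {Ei:.3f} V")
    # The voltage moves by only tens of millivolts over most of the range and
    # drops steeply only as (1 - x) -> 0: nearly constant voltage, then "dead".
    ```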

  14. A Rationale for Incorporating Extemporaneous Lincoln-Douglas Debate into Individual Events Tournaments.

    ERIC Educational Resources Information Center

    Huebner, Thomas M., Jr.

    Intercollegiate competitive speech and debate is at a crossroads requiring massive reforms, or the academic exercise will lose its power to provide lasting benefits. Likewise, the scope of individual events is in a constant state of change. One constant, however, is the value of extemporaneous speaking which encompasses many of the ideals…

  15. Modeling a constant power load for nickel-hydrogen battery testing using SPICE

    NASA Technical Reports Server (NTRS)

    Bearden, Douglas B.; Lollar, Louis F.; Nelms, R. M.

    1990-01-01

    The effort to design and model a constant power load for the HST (Hubble Space Telescope) nickel-hydrogen battery tests is described. The constant power load was designed for three different simulations on the batteries: life cycling, reconditioning, and capacity testing. A dc-dc boost converter was designed to act as this constant power load. A boost converter design was chosen because of the low test battery voltage (4 to 6 VDC) and the relatively high power requirement of 60 to 70 W. The SPICE model was shown to consistently predict variations in the actual circuit as various designs were attempted. It is concluded that the confidence established in the SPICE model of the constant power load ensures its extensive utilization in future efforts to improve performance in the actual load circuit.
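    As a back-of-the-envelope illustration of why a boost topology suits this load (not a reconstruction of the SPICE model), an ideal lossless boost converter in continuous conduction draws I_in = P/V_in and runs at duty cycle D = 1 - V_in/V_out; the 28 V output bus below is an assumed value.

    ```python
    def boost_constant_power_setpoints(v_in, v_out, p_load):
        """Ideal, lossless boost converter in continuous conduction:
           I_in = P / V_in        (constant power drawn from the battery)
           D    = 1 - V_in/V_out  (steady-state duty cycle)"""
        return p_load / v_in, 1.0 - v_in / v_out

    V_OUT = 28.0      # assumed output bus voltage, not from the paper
    for v_batt in (4.0, 5.0, 6.0):        # test battery voltage range above
        i_in, duty = boost_constant_power_setpoints(v_batt, V_OUT, p_load=65.0)
        print(f"V_batt = {v_batt:.1f} V -> I_in = {i_in:.1f} A, duty = {duty:.2f}")
    ```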

  16. Expert system constant false alarm rate processor

    NASA Astrophysics Data System (ADS)

    Baldygo, William J., Jr.; Wicks, Michael C.

    1993-10-01

    The requirements for high detection probability and low false alarm probability in modern wide area surveillance radars are rarely met due to spatial variations in clutter characteristics. Many filtering and CFAR detection algorithms have been developed to effectively deal with these variations; however, any single algorithm is likely to exhibit excessive false alarms and intolerably low detection probabilities in a dynamically changing environment. A great deal of research has led to advances in the state of the art in Artificial Intelligence (AI) and numerous areas have been identified for application to radar signal processing. The approach suggested here, discussed in a patent application submitted by the authors, is to intelligently select the filtering and CFAR detection algorithms being executed at any given time, based upon the observed characteristics of the interference environment. This approach requires sensing the environment, employing the most suitable algorithms, and applying an appropriate multiple algorithm fusion scheme or consensus algorithm to produce a global detection decision.
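    One of the simplest detection algorithms such a knowledge-based controller might select is plain cell-averaging CFAR; the sketch below shows only that baseline algorithm (the expert-system selection and fusion logic described in the abstract is not reproduced, and the parameters are illustrative).

    ```python
    import numpy as np

    def ca_cfar(power, num_train=16, num_guard=2, pfa=1e-4):
        """Cell-averaging CFAR on a 1-D power profile.  Threshold factor uses the
        standard CA-CFAR expression alpha = N*(Pfa**(-1/N) - 1) for
        exponentially distributed noise."""
        n = len(power)
        half = num_train // 2
        alpha = num_train * (pfa ** (-1.0 / num_train) - 1.0)
        detections = np.zeros(n, dtype=bool)
        for i in range(half + num_guard, n - half - num_guard):
            lead = power[i - num_guard - half: i - num_guard]
            lag = power[i + num_guard + 1: i + num_guard + 1 + half]
            noise_est = (lead.sum() + lag.sum()) / num_train
            detections[i] = power[i] > alpha * noise_est
        return detections

    rng = np.random.default_rng(3)
    profile = rng.exponential(1.0, 1000)      # homogeneous noise floor
    profile[400] += 30.0                      # injected target
    print("detections at cells:", np.flatnonzero(ca_cfar(profile)))
    ```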

  17. Solid-state Bonding of Superplastic Aluminum Alloy 7475 Sheet

    NASA Technical Reports Server (NTRS)

    Byun, T. D. S.; Vastava, R. B.

    1985-01-01

    Experimental work was carried out to study the feasibility of solid-state bonding of superplastic aluminum 7475 sheet. The amount of deformation, bonding time, surface cleaning method, and intermediate layer were the process parameters investigated. Other parameters, held constant by the superplastic forming condition required to obtain concurrent solid-state bonding, were bonding temperature, bonding pressure, and atmosphere. Bond integrity was evaluated through metallographic examination, X-ray line scan analysis, SEM fractographic analysis, and lap shear tests. The early results of the development program indicated that sound solid-state bonding was accomplished for this high-strength 7475 alloy with significant amounts of deformation. A thin intermediate layer of the soft 5052 aluminum alloy aided in achieving a solid-state bond by reducing the required amount of plastic deformation at the interface. Bond strength was substantially increased by a post-bond heat treatment.

  18. Can the baryon asymmetry arise from initial conditions?

    DOE PAGES

    Krnjaic, Gordan

    2017-08-01

    In this letter, we quantify the challenge of explaining the baryon asymmetry using initial conditions in a universe that undergoes inflation. Contrary to lore, we find that such an explanation is possible if net B-L number is stored in a light bosonic field with hyper-Planckian initial displacement and a delicately chosen field velocity prior to inflation. However, such a construction may require extremely tuned coupling constants to ensure that this asymmetry is viably communicated to the Standard Model after reheating; the large field displacement required to overcome inflationary dilution must not induce masses for Standard Model particles or generate dangerous washout processes. While these features are inelegant, this counterexample nonetheless shows that there is no theorem against such an explanation. We also comment on potential observables in the double β-decay spectrum and on model variations that may allow for more natural realizations.

  19. [Requirements for the successful installation of a data management system].

    PubMed

    Benson, M; Junger, A; Quinzio, L; Hempelmann, G

    2002-08-01

    Due to increasing requirements for medical documentation, especially with reference to the German social law mandating quality management and introducing a new billing system (DRGs), an increasing number of departments are considering implementing a patient data management system (PDMS). The installation should be professionally planned as a project in order to ensure its successful completion. The following aspects are essential: composition of the project group, definition of goals, financing, networking, space considerations, hardware, software, configuration, education, and support. Project and financial planning must be prepared before beginning the project, and the project's progress must be constantly evaluated. In selecting the software, certain characteristics should be considered: use of standards, configurability, intercommunicability, and modularity. Our experience has taught us that vaguely defined goals, insufficient project planning, and the existing management culture are responsible for the failure of PDMS installations. The software used tends to play a less important role.

  20. Doubling transmission capacity in optical wireless system by antenna horizontal- and vertical-polarization multiplexing.

    PubMed

    Li, Xinying; Yu, Jianjun; Zhang, Junwen; Dong, Ze; Chi, Nan

    2013-06-15

    We experimentally demonstrate 2×56 Gb/s two-channel polarization-division-multiplexing quadrature-phase-shift-keying signal delivery over 80 km of SMF-28 single-mode fiber and a 2 m Q-band (33-50 GHz) wireless link, adopting antenna horizontal- (H-) and vertical-polarization (V-polarization) multiplexing. At the wireless receiver, classic constant-modulus-algorithm equalization based on digital signal processing can realize polarization demultiplexing and remove the crosstalk at the same antenna polarization. By adopting antenna polarization multiplexing, the signal baud rate and the performance requirements for optical and wireless devices can be reduced, at the cost of doubling the antennas and devices, while the wireless transmission capacity can also be increased, at the cost of stricter requirements on the V-polarization. The isolation is only about 19 dB when the V-polarization deviation approaches 10°, which will affect high-speed (>50 Gb/s) wireless delivery.
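    The constant-modulus-algorithm step mentioned above updates the equalizer taps with w <- w + mu*(R - |y|^2)*y*conj(x); the single-polarization FIR sketch below illustrates that update on a toy QPSK signal (the real receiver uses a 2×2 butterfly structure for polarization demultiplexing, which is not reproduced here).

    ```python
    import numpy as np

    def cma_equalizer(x, num_taps=11, mu=1e-3, radius=1.0):
        """Constant modulus algorithm (CMA) FIR equalizer, single polarization.
        Output y = w . x_win, tap update w <- w + mu*(R - |y|^2)*y*conj(x_win)."""
        w = np.zeros(num_taps, dtype=complex)
        w[num_taps // 2] = 1.0                   # center-spike initialization
        y = np.zeros(len(x) - num_taps, dtype=complex)
        for k in range(len(y)):
            x_win = x[k: k + num_taps]
            y[k] = np.dot(w, x_win)
            err = radius - abs(y[k]) ** 2
            w += mu * err * y[k] * x_win.conj()
        return y, w

    # Toy QPSK stream through a mild static channel (illustrative, not PDM-QPSK)
    rng = np.random.default_rng(7)
    sym = (rng.choice([1, -1], 4000) + 1j * rng.choice([1, -1], 4000)) / np.sqrt(2)
    rx = np.convolve(sym, [0.05, 1.0, 0.1 + 0.05j])[: len(sym)]
    out, taps = cma_equalizer(rx)
    print("residual modulus error:",
          float(np.mean((np.abs(out[-500:]) ** 2 - 1.0) ** 2)))
    ```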

  1. Method to fabricate a tilted logpile photonic crystal

    DOEpatents

    Williams, John D.; Sweatt, William C.

    2010-10-26

    A method to fabricate a tilted logpile photonic crystal requires only two lithographic exposures and does not require mask repositioning between exposures. The mask and photoresist-coated substrate are spaced a fixed and constant distance apart using a spacer and the stack is clamped together. The stack is then tilted at a crystallographic symmetry angle (e.g., 45 degrees) relative to the X-ray beam and rotated about the surface normal until the mask is aligned with the X-ray beam. The stack is then rotated in plane by a small stitching angle and exposed to the X-ray beam to pattern the first half of the structure. The stack is then rotated by 180.degree. about the normal and a second exposure patterns the remaining half of the structure. The method can use commercially available DXRL scanner technology and LIGA processes to fabricate large-area, high-quality tilted logpile photonic crystals.

  2. Developing learning environments which support early algebraic reasoning: a case from a New Zealand primary classroom

    NASA Astrophysics Data System (ADS)

    Hunter, Jodie

    2014-12-01

    Current reforms in mathematics education advocate the development of mathematical learning communities in which students have opportunities to engage in mathematical discourse and classroom practices which underlie algebraic reasoning. This article specifically addresses the pedagogical actions teachers take which structure student engagement in dialogical discourse and activity which facilitates early algebraic reasoning. Using videotaped recordings of classroom observations, the teacher and researcher collaboratively examined the classroom practices and modified the participatory practices to develop a learning environment which supported early algebraic reasoning. Facilitating change in the classroom environment was a lengthy process which required consistent and ongoing attention initially to the social norms and then to the socio-mathematical norms. Specific pedagogical actions such as the use of specifically designed tasks, materials and representations and a constant press for justification and generalisation were required to support students to link their numerical understandings to algebraic reasoning.

  3. Applying analytic hierarchy process to assess healthcare-oriented cloud computing service systems.

    PubMed

    Liao, Wen-Hwa; Qiu, Wan-Li

    2016-01-01

    Numerous differences exist between the healthcare industry and other industries. Difficulties in the business operation of the healthcare industry have continually increased because of the volatility and importance of health care, changes to and requirements of health insurance policies, and the statuses of healthcare providers, which are typically considered not-for-profit organizations. Moreover, because of the financial risks associated with constant changes in healthcare payment methods and constantly evolving information technology, healthcare organizations must continually adjust their business operation objectives; therefore, cloud computing presents both a challenge and an opportunity. As a response to aging populations and the prevalence of the Internet in fast-paced contemporary societies, cloud computing can be used to facilitate the task of balancing the quality and costs of health care. To evaluate cloud computing service systems for use in health care, providing decision makers with a comprehensive assessment method for prioritizing decision-making factors is highly beneficial. Hence, this study applied the analytic hierarchy process, compared items related to cloud computing and health care, executed a questionnaire survey, and then classified the critical factors influencing healthcare cloud computing service systems on the basis of statistical analyses of the questionnaire results. The results indicate that the primary factor affecting the design or implementation of optimal cloud computing healthcare service systems is cost effectiveness, with the secondary factors being practical considerations such as software design and system architecture.
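    The core AHP computation referred to above, turning a reciprocal pairwise comparison matrix into priority weights with a consistency check, can be sketched as follows; the 3×3 matrix values are invented for illustration and are not the survey results.

    ```python
    import numpy as np

    def ahp_priorities(A):
        """Principal-eigenvector priorities and consistency ratio for an AHP
        pairwise comparison matrix A (positive, reciprocal)."""
        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()
        n = A.shape[0]
        ci = (eigvals[k].real - n) / (n - 1)           # consistency index
        ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.0)   # Saaty random index
        return w, ci / ri

    # Illustrative 3-criterion comparison: cost effectiveness vs software design
    # vs system architecture (Saaty 1-9 scale).
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])
    weights, cr = ahp_priorities(A)
    print("priority weights:", np.round(weights, 3), " consistency ratio:", round(cr, 3))
    ```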

  4. Impact of structure and morphology of nanostructured ceria coating on AISI 304 oxidation kinetics

    NASA Astrophysics Data System (ADS)

    Aadhavan, R.; Suresh Babu, K.

    2017-07-01

    Nanostructured ceria-based coatings are shown to be protective against high-temperature oxidation of AISI 304 due to the dynamics of the oxidation state and associated defects. However, the deposition processing parameters have a strong influence in determining the structural and morphological aspects of ceria. The present work focuses on the effect of variation in substrate temperature (50-300 °C) and deposition rate (0.1-50 Å/s) of ceria in the electron beam physical vapour deposition method and correlates the changes in structure and morphology with high-temperature oxidation protection. Unlike the deposition rate, the substrate temperature exhibited a profound influence on crystallite size (7-18 nm) and oxygen vacancy concentration. Upon isothermal oxidation at 1243 K for 24 h, bare AISI 304 exhibited a linear mass gain with a rate constant of 3.0 ± 0.03 × 10^-3 kg^2 m^-4 s^-1, while the ceria coating lowered the kinetics by 3-4 orders of magnitude. Though the thickness of the coating was kept constant at 2 μm, a higher deposition rate offered one order of magnitude lower protection due to the porous nature of the coating. Variation in the substrate temperature modulated the porosity as well as the oxygen vacancy concentration and displayed the best protection for coatings deposited at moderate substrate temperature. The present work demonstrates the significance of selecting appropriate processing parameters to obtain the required morphology for efficient high-temperature oxidation protection.

  5. Switched-capacitor realization of presynaptic short-term-plasticity and stop-learning synapses in 28 nm CMOS

    PubMed Central

    Noack, Marko; Partzsch, Johannes; Mayr, Christian G.; Hänzsche, Stefan; Scholze, Stefan; Höppner, Sebastian; Ellguth, Georg; Schüffny, Rene

    2015-01-01

    Synaptic dynamics, such as long- and short-term plasticity, play an important role in the complexity and biological realism achievable when running neural networks on a neuromorphic IC. For example, they endow the IC with an ability to adapt and learn from its environment. In order to achieve the millisecond to second time constants required for these synaptic dynamics, analog subthreshold circuits are usually employed. However, due to process variation and leakage problems, it is almost impossible to port these types of circuits to modern sub-100nm technologies. In contrast, we present a neuromorphic system in a 28 nm CMOS process that employs switched capacitor (SC) circuits to implement 128 short term plasticity presynapses as well as 8192 stop-learning synapses. The neuromorphic system consumes an area of 0.36 mm2 and runs at a power consumption of 1.9 mW. The circuit makes use of a technique for minimizing leakage effects allowing for real-time operation with time constants up to several seconds. Since we rely on SC techniques for all calculations, the system is composed of only generic mixed-signal building blocks. These generic building blocks make the system easy to port between technologies and the large digital circuit part inherent in an SC system benefits fully from technology scaling. PMID:25698914

  6. Nozzle Aerodynamic Stability During a Throat Shift

    NASA Technical Reports Server (NTRS)

    Kawecki, Edwin J.; Ribeiro, Gregg L.

    2005-01-01

    An experimental investigation was conducted on the internal aerodynamic stability of a family of two-dimensional (2-D) High Speed Civil Transport (HSCT) nozzle concepts. These nozzles function during takeoff as mixer-ejectors to meet acoustic requirements, and then convert to conventional high-performance convergent-divergent (CD) nozzles at cruise. The transition between takeoff mode and cruise mode results in the aerodynamic throat, the minimum cross-sectional area that controls the engine backpressure, shifting location within the nozzle. The stability and steadiness of the nozzle aerodynamics during this so-called throat shift process can directly affect the engine aerodynamic stability and the mechanical design of the nozzle. The objective of the study was to determine whether pressure spikes or other perturbations occurred during the throat shift process and, if so, to identify the mechanisms that caused the perturbations. The two nozzle concepts modeled in the test program were the fixed chute (FC) and downstream mixer (DSM). These 2-D nozzles differ principally in that the FC has a large over-area between the forward throat and aft throat locations, while the DSM has an over-area of only about 10 percent. The conclusions were that engine mass flow and backpressure can be held constant simultaneously during nozzle throat shifts on this class of nozzles, and mode shifts can be accomplished at a constant mass flow and engine backpressure without upstream pressure perturbations.

  7. Wearable Contact Lens Biosensors for Continuous Glucose Monitoring Using Smartphones.

    PubMed

    Elsherif, Mohamed; Hassan, Mohammed Umair; Yetisen, Ali K; Butt, Haider

    2018-05-17

    Low-cost, robust, and reusable continuous glucose monitoring systems that can provide quantitative measurements in point-of-care settings are an unmet medical need. Optical glucose sensors require complex and time-consuming fabrication processes, and their readouts are not practical for quantitative analyses. Here, a wearable contact lens optical sensor was created for the continuous quantification of glucose at physiological conditions, simplifying the fabrication process and facilitating smartphone readouts. A photonic microstructure having a periodicity of 1.6 μm was printed on a glucose-selective hydrogel film functionalized with phenylboronic acid. Upon binding with glucose, the microstructure volume swelled, which modulated the periodicity constant. The resulting change in the Bragg diffraction modulated the space between the zero- and first-order spots. A correlation was established between the periodicity constant and glucose concentration within 0-50 mM. The sensitivity of the sensor was 12 nm mM^-1, and the saturation response time was less than 30 min. The sensor was integrated with commercial contact lenses and utilized for continuous glucose monitoring using smartphone camera readouts. The reflected power of the first-order diffraction was measured via a smartphone application and correlated with the glucose concentrations. A short response time of 3 s and a saturation time of 4 min were achieved in the continuous monitoring mode. Glucose-sensitive photonic microstructures may have applications in point-of-care continuous monitoring devices and diagnostics in home settings.
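    A toy readout calculation (not the authors' smartphone application) that combines the first-order grating equation period*sin(theta) = wavelength with the reported ~12 nm/mM period sensitivity is sketched below; the readout wavelength and the assumed linear response are illustrative.

    ```python
    import math

    WAVELENGTH = 532e-9     # illustrative readout wavelength (m)
    PERIOD_0 = 1.6e-6       # printed grating period at 0 mM glucose (m)
    SENSITIVITY = 12e-9     # period change per mM, from the reported 12 nm/mM

    def first_order_angle(period, wavelength=WAVELENGTH):
        """Grating equation, first order: period * sin(theta) = wavelength."""
        return math.degrees(math.asin(wavelength / period))

    def glucose_from_period(period):
        """Invert the (assumed linear) swelling response to estimate glucose (mM)."""
        return (period - PERIOD_0) / SENSITIVITY

    for c_true in (0, 10, 30, 50):                      # mM, within the 0-50 range
        period = PERIOD_0 + SENSITIVITY * c_true        # forward model
        print(f"{c_true:2d} mM -> period {period*1e6:.3f} um, "
              f"1st-order angle {first_order_angle(period):.2f} deg, "
              f"recovered {glucose_from_period(period):.0f} mM")
    ```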

  8. LHCb detector and trigger performance in Run II

    NASA Astrophysics Data System (ADS)

    Dordei, Francesca

    2017-12-01

    The LHCb detector is a forward spectrometer at the LHC, designed to perform high precision studies of b- and c- hadrons. In Run II of the LHC, a new scheme for the software trigger at LHCb allows splitting the triggering of events into two stages, giving room to perform the alignment and calibration in real time. In the novel detector alignment and calibration strategy for Run II, data collected at the start of the fill are processed in a few minutes and used to update the alignment, while the calibration constants are evaluated for each run. This allows identical constants to be used in the online and offline reconstruction, thus improving the correlation between triggered and offline selected events. The required computing time constraints are met thanks to a new dedicated framework using the multi-core farm infrastructure for the trigger. The larger timing budget, available in the trigger, allows to perform the same track reconstruction online and offline. This enables LHCb to achieve the best reconstruction performance already in the trigger, and allows physics analyses to be performed directly on the data produced by the trigger reconstruction. The novel real-time processing strategy at LHCb is discussed from both the technical and operational point of view. The overall performance of the LHCb detector on the data of Run II is presented as well.

  9. Modeling and Real-Time Process Monitoring of Organometallic Chemical Vapor Deposition of III-V Phosphides and Nitrides at Low and High Pressure

    NASA Technical Reports Server (NTRS)

    Bachmann, K. J.; Cardelino, B. H.; Moore, C. E.; Cardelino, C. A.; Sukidi, N.; McCall, S.

    1999-01-01

    The purpose of this paper is to review modeling and real-time monitoring by robust methods of reflectance spectroscopy of organometallic chemical vapor deposition (OMCVD) processes in extreme regimes of pressure. The merits of p-polarized reflectance spectroscopy under the conditions of chemical beam epitaxy (CBE) and of internal transmission spectroscopy and principal angle spectroscopy at high pressure are assessed. In order to extend OMCVD to materials that exhibit large thermal decomposition pressure at their optimum growth temperature we have designed and built a differentially-pressure-controlled (DCP) OMCVD reactor for use at pressures greater than or equal to 6 atm. We also describe a compact hard-shell (CHS) reactor for extending the pressure range to 100 atm. At such very high pressure the decomposition of source vapors occurs in the vapor phase, and is coupled to flow dynamics and transport. Rate constants for homogeneous gas phase reactions can be predicted based on a combination of first principles and semi-empirical calculations. The pressure dependence of unimolecular rate constants is described by RRKM theory, but requires variational and anharmonicity corrections not included in presently available calculations with the exception of ammonia decomposition. Commercial codes that include chemical reactions and transport exist, but do not adequately cover at present the kinetics of heteroepitaxial crystal growth.

  10. Symmetry based frequency domain processing to remove harmonic noise from surface nuclear magnetic resonance measurements

    NASA Astrophysics Data System (ADS)

    Hein, Annette; Larsen, Jakob Juul; Parsekian, Andrew D.

    2017-02-01

    Surface nuclear magnetic resonance (NMR) is a unique geophysical method due to its direct sensitivity to water. A key limitation to overcome is the difficulty of making surface NMR measurements in environments with anthropogenic electromagnetic noise, particularly constant frequency sources such as powerlines. Here we present a method of removing harmonic noise by utilizing frequency domain symmetry of surface NMR signals to reconstruct portions of the spectrum corrupted by frequency-domain noise peaks. This method supplements the existing NMR processing workflow and is applicable after despiking, coherent noise cancellation, and stacking. The symmetry based correction is simple, grounded in mathematical theory describing NMR signals, does not introduce errors into the data set, and requires no prior knowledge about the harmonics. Modelling and field examples show that symmetry based noise removal reduces the effects of harmonics. In one modelling example, symmetry based noise removal improved signal-to-noise ratio in the data by 10 per cent. This improvement had noticeable effects on inversion parameters including water content and the decay constant T2*. Within water content profiles, aquifer boundaries and water content are more accurate after harmonics are removed. Fewer spurious water content spikes appear within aquifers, which is especially useful for resolving multilayered structures. Within T2* profiles, estimates are more accurate after harmonics are removed, especially in the lower half of profiles.

  11. Role of veterinarians in modern food hygiene

    PubMed Central

    Matyáš, Z.

    1978-01-01

    Veterinary services and veterinary education and training must keep pace with the constantly changing patterns of agriculture and food processing. Changes in methods of animal production are associated with many problems of food processing and food quality. Veterinary supervision of the animal feed industry and of meat and distribution is essential. Quality testing of meat, milk, and eggs requires the introduction of suitable routine sampling systems, laboratory procedures, and complex evaluation procedures. Food hygiene problems have changed in recent years not only as a result of new methods of animal production, but also because of changes in food processing technology and in the presentation of food to the consumer, increased environmental pollution, increased international trade, and increased tourist travel. Food hygienists must adopt an active and progressive policy and change the scope of food control from a purely negative measure into a positive force working towards improved food quality and the avoidance of losses during production. A modern food hygiene programme should cover all stages of production, processing, and distribution of food and also other ingredients, additives and the water used for production and processing. Veterinarians should also be involved in the registration and licensing of enterprises and this should take into account the premises, the procedures to be used, new techniques in animal husbandry, machines and equipment, etc. In order to facilitate the microbiological analysis of foodstuffs, new mechanized or automated laboratory methods are required, and consideration must be given to adequate sampling techniques. PMID:310716

  12. Very high pressure liquid chromatography using core-shell particles: quantitative analysis of fast gradient separations without post-run times.

    PubMed

    Stankovich, Joseph J; Gritti, Fabrice; Stevenson, Paul G; Beaver, Lois A; Guiochon, Georges

    2014-01-17

    Five methods for controlling the mobile phase flow rate for gradient elution analyses using very high pressure liquid chromatography (VHPLC) were tested to determine thermal stability of the column during rapid gradient separations. To obtain rapid separations, instruments are operated at high flow rates and high inlet pressure leading to uneven thermal effects across columns and additional time needed to restore thermal equilibrium between successive analyses. The purpose of this study is to investigate means to minimize thermal instability and obtain reliable results by measuring the reproducibility of the results of six replicate gradient separations of a nine component RPLC standard mixture under various experimental conditions with no post-run times. Gradient separations under different conditions were performed: constant flow rates, two sets of constant pressure operation, programmed flow constant pressure operation, and conditions which theoretically should yield a constant net heat loss at the column's wall. The results show that using constant flow rates, programmed flow constant pressures, and constant heat loss at the column's wall all provide reproducible separations. However, performing separations using a high constant pressure with programmed flow reduces the analysis time by 16% compared to constant flow rate methods. For the constant flow rate, programmed flow constant pressure, and constant wall heat experiments no equilibration time (post-run time) was required to obtain highly reproducible data. Copyright © 2013 Elsevier B.V. All rights reserved.

  13. Selecting β-glucosidases to support cellulases in cellulose saccharification

    PubMed Central

    2013-01-01

    Background: Enzyme end-product inhibition is a major challenge in the hydrolysis of lignocellulose at a high dry matter consistency. β-glucosidases (BGs) hydrolyze cellobiose into two molecules of glucose, thereby relieving the product inhibition of cellobiohydrolases (CBHs). However, BG inhibition by glucose will eventually lead to the accumulation of cellobiose and the inhibition of CBHs. Therefore, the kinetic properties of candidate BGs must meet the requirements determined by both the kinetic properties of CBHs and the set-up of the hydrolysis process. Results: The kinetics of cellobiose hydrolysis and glucose inhibition of thermostable BGs from Acremonium thermophilum (AtBG3) and Thermoascus aurantiacus (TaBG3) was studied and compared to Aspergillus sp. BG purified from Novozyme®188 (N188BG). The most efficient cellobiose hydrolysis was achieved with TaBG3, followed by AtBG3 and N188BG, whereas the enzyme most sensitive to glucose inhibition was AtBG3, followed by TaBG3 and N188BG. The use of higher temperatures had an advantage in both increasing the catalytic efficiency and relieving the product inhibition of the enzymes. Our data, together with data from a literature survey, revealed a trade-off between the strength of glucose inhibition and the affinity for cellobiose; therefore, glucose-tolerant BGs tend to have low specificity constants for cellobiose hydrolysis. However, although a high specificity constant is always an advantage, in separate hydrolysis and fermentation the priority may be given to a higher tolerance to glucose inhibition. Conclusions: The specificity constant for cellobiose hydrolysis and the inhibition constant for glucose are the most important kinetic parameters in selecting BGs to support cellulases in cellulose hydrolysis. PMID:23883540
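    The trade-off described in the Results, a high specificity constant versus tolerance to glucose, can be illustrated with a competitive-inhibition Michaelis-Menten rate law; the two parameter sets below are hypothetical enzymes, not the fitted AtBG3/TaBG3/N188BG constants.

    ```python
    def bg_rate(cellobiose, glucose, kcat, km, ki):
        """Michaelis-Menten rate per unit enzyme with competitive glucose
        inhibition: v = kcat*[S] / (Km*(1 + [I]/Ki) + [S])."""
        return kcat * cellobiose / (km * (1.0 + glucose / ki) + cellobiose)

    # Two hypothetical BGs illustrating the trade-off: high kcat/Km but
    # glucose-sensitive vs. glucose-tolerant but lower kcat/Km.
    enzymes = {"glucose-sensitive BG": dict(kcat=200.0, km=0.5, ki=2.0),
               "glucose-tolerant BG":  dict(kcat=60.0,  km=2.0, ki=60.0)}

    for glucose in (0.0, 20.0, 100.0):                       # mM
        rates = {name: round(bg_rate(5.0, glucose, **p), 1)
                 for name, p in enzymes.items()}
        print(f"[glucose] = {glucose:5.1f} mM ->", rates)
    # At high glucose the tolerant enzyme overtakes the nominally faster one.
    ```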

  14. Raptor: An Empirical Evaluation of an Ecological Interface Designed to Increase Warfighter Cognitive Performance

    DTIC Science & Technology

    2009-06-01

    McGuinness, 2004). Endsley (1995) portrays a warfighter's situation evaluation process through three distinct levels of SA: Level 1 - perception... must constantly know resource statuses for both the higher and intermediate organizational levels (i.e., battalion [BN] and company [CO])... resource icons were held constant at the company-level hierarchy, and the magnification reticule was disabled. Holding these tools constant...

  15. Toward an understanding of the turbidity measurement of heterocoagulation rate constants of dispersions containing particles of different sizes.

    PubMed

    Liu, Jie; Xu, Shenghua; Sun, Zhiwei

    2007-11-06

    Our previous studies have shown that the determination of coagulation rate constants by turbidity measurement becomes impossible at a certain operating wavelength (its blind point), because at this wavelength the change in the turbidity of a dispersion completely loses its response to the coagulation process. Therefore, performing the turbidity measurement in the wavelength range near the blind point should be avoided. In this article, we demonstrate that the turbidity measurement of the rate constant for coagulation of a binary dispersion containing particles of two different sizes (heterocoagulation) presents special difficulties because the blind point shifts not only with particle size but also with the component fraction. Some important aspects of the turbidity measurement for the heterocoagulation rate constant are discussed and experimentally tested. It is emphasized that the T-matrix method can be used to correctly evaluate the extinction cross sections of doublets formed during the heterocoagulation process, which are the key data for determining the rate constant from the turbidity measurement, and that choosing the appropriate operating wavelength and component fraction is important for achieving a more accurate rate constant.

  16. A nonmonotonic dependence of standard rate constant on reorganization energy for heterogeneous electron transfer processes on electrode surface

    NASA Astrophysics Data System (ADS)

    Xu, Weilin; Li, Songtao; Zhou, Xiaochun; Xing, Wei; Huang, Mingyou; Lu, Tianhong; Liu, Changpeng

    2006-05-01

    In the present work, a nonmonotonic dependence of the standard rate constant (k0) on the reorganization energy (λ) was derived qualitatively from electron transfer (Marcus-Hush-Levich) theory for heterogeneous electron transfer processes on an electrode surface. It was found that the nonmonotonic dependence of k0 on λ is another consequence, besides the disappearance of the famous Marcus inverted region, of the continuum of electronic states in the electrode: as λ increases, the electron transfer for both Process I and Process II varies continuously from the nonadiabatic to the adiabatic regime; the λ dependence of k0 for Process I remains monotonic throughout, while for Process II on the electrode surface the λ dependence of k0 can be nonmonotonic.

  17. Constitutive Model Constants for Al7075-T651 and Al7075-T6

    NASA Astrophysics Data System (ADS)

    Brar, N. S.; Joshi, V. S.; Harris, B. W.

    2009-12-01

    Aluminum 7075-T651 and 7075-T6 are characterized at quasi-static and high strain rates to determine Johnson-Cook (J-C) strength and fracture model constants. Constitutive model constants are required as input to computer codes to simulate projectile (fragment) impact or similar impact events on structural components made of these materials. Although the two tempers show similar elongation at breakage, the ultimate tensile strength of T651 temper is generally lower than the T6 temper. Johnson-Cook strength model constants (A, B, n, C, and m) for the two alloys are determined from high strain rate tension stress-strain data at room and high temperature to 250°C. The Johnson-Cook fracture model constants are determined from quasi-static and medium strain rate as well as high temperature tests on notched and smooth tension specimens. Although the J-C strength model constants are similar, the fracture model constants show wide variations. Details of the experimental method used and the results for the two alloys are presented.
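    For reference, the Johnson-Cook strength model named above evaluates flow stress as sigma = (A + B*eps^n)*(1 + C*ln(rate/rate0))*(1 - T*^m); the sketch below evaluates that expression with placeholder constants, not the fitted Al7075-T651/T6 values reported in the paper.

    ```python
    import math

    def johnson_cook_stress(eps_p, strain_rate, temp, A, B, n, C, m,
                            ref_rate=1.0, t_room=293.0, t_melt=893.0):
        """Johnson-Cook flow stress (MPa):
           sigma = (A + B*eps^n) * (1 + C*ln(rate/ref_rate)) * (1 - T*^m),
           with homologous temperature T* = (T - Troom)/(Tmelt - Troom)."""
        t_star = max(0.0, (temp - t_room) / (t_melt - t_room))
        return (A + B * eps_p ** n) * (1.0 + C * math.log(strain_rate / ref_rate)) \
               * (1.0 - t_star ** m)

    # Placeholder constants -- NOT the fitted values for Al7075-T651 or T6
    params = dict(A=500.0, B=650.0, n=0.45, C=0.015, m=1.0)
    for rate in (1.0, 1.0e3):                  # quasi-static and high strain rate
        for temp in (293.0, 523.0):            # room temperature and 250 C
            s = johnson_cook_stress(0.05, rate, temp, **params)
            print(f"rate {rate:7.0f} 1/s, T {temp:5.0f} K -> sigma ~ {s:6.1f} MPa")
    ```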

  18. Inflation with a constant rate of roll

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Motohashi, Hayato; Starobinsky, Alexei A.; Yokoyama, Jun'ichi, E-mail: motohashi@kicp.uchicago.edu, E-mail: alstar@landau.ac.ru, E-mail: yokoyama@resceu.s.u-tokyo.ac.jp

    2015-09-01

    We consider an inflationary scenario where the rate of inflaton roll, defined by φ̈/(Hφ̇), remains constant. The rate of roll is small for slow-roll inflation, while a generic rate of roll leads to the interesting case of 'constant-roll' inflation. We find a general exact solution for the inflaton potential required for such inflaton behaviour. In this model, due to the non-slow evolution of the background, the would-be decaying mode of linear scalar (curvature) perturbations may not be neglected. It can even grow for some values of the model parameter, while the other mode always remains constant. However, this always occurs for unstable solutions which are not attractors for the given potential. The most interesting particular cases of constant-roll inflation remaining viable with the most recent observational data are quadratic hilltop inflation (with cutoff) and natural inflation (with an additional negative cosmological constant). In these cases even-order slow-roll parameters approach non-negligible constants while the odd ones are asymptotically vanishing in the quasi-de Sitter regime.

  19. The elimination of siloxanes from the biogas of a wastewater treatment plant by means of an adsorption process.

    PubMed

    Trapote, Arturo; García, Mariano; Prats, Daniel

    2016-12-01

    Siloxanes present in the biogas produced during anaerobic digestion in wastewater treatment plants (WWTPs) can damage the mechanisms of cogeneration heat engines and obstruct the process of energy valorization. The objective of this research is to detect the presence of siloxanes in the biogas and evaluate a procedure for their elimination. A breakthrough curve of synthetic decamethylcyclopentasiloxane on an experimental bed of activated carbon was modeled, and the theoretical mathematical model of the adsorption process was fitted. As a result, the constants of the model were obtained: the mass transfer constant, Henry's equilibrium constant, and the eddy diffusion coefficient. The procedure developed allows the adsorption equilibrium of siloxanes on activated carbon to be predicted and makes it possible to lay the basis for the design of an appropriate activated carbon module for the elimination of siloxanes in a WWTP.

  20. A GUI-based Tool for Bridging the Gap between Models and Process-Oriented Studies

    NASA Astrophysics Data System (ADS)

    Kornfeld, A.; Van der Tol, C.; Berry, J. A.

    2014-12-01

    Models used for the simulation of photosynthesis and transpiration by canopies of terrestrial plants typically have subroutines such as STOMATA.F90, PHOSIB.F90, or BIOCHEM.m that solve for photosynthesis and associated processes. Key parameters such as the Vmax for Rubisco and temperature response parameters are required by these subroutines. These are often taken from the literature or determined by separate analysis of gas exchange experiments. It is useful to note, however, that subroutines can be extracted and run as standalone models to simulate leaf responses collected in gas exchange experiments. Furthermore, there are excellent non-linear fitting tools that can be used to optimize the parameter values in these models to fit the observations. Ideally the Vmax fit in this way should be the same as that determined by a separate analysis, but it may not be, because of interactions with other kinetic constants and their temperature dependence in the full subroutine. We submit that it is more useful to fit the complete model to the calibration experiments rather than fitting disaggregated constants. We designed a graphical user interface (GUI) based tool that uses gas exchange photosynthesis data to directly estimate model parameters in the SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes) model and, at the same time, allows researchers to change parameters interactively to visualize how variation in model parameters affects predicted outcomes such as photosynthetic rates, electron transport, and chlorophyll fluorescence. We have also ported some of this functionality to an Excel spreadsheet, which could be used as a teaching tool to help integrate process-oriented and model-oriented studies.
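    A minimal version of the idea, fitting a leaf-level parameter by non-linear least squares against gas exchange data instead of reading it from the literature, is sketched below using only the Rubisco-limited part of the Farquhar model; the kinetic constants and the synthetic A-Ci data are illustrative, and this is not the SCOPE/BIOCHEM.m code.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Rubisco-limited net assimilation (Farquhar form), umol m^-2 s^-1:
    #   A = Vcmax * (Ci - Gamma*) / (Ci + Kc*(1 + O/Ko)) - Rd
    GAMMA_STAR, KC, KO, O2 = 42.75, 404.9, 278.4, 210.0   # illustrative 25 C values

    def rubisco_limited_A(ci, vcmax, rd):
        return vcmax * (ci - GAMMA_STAR) / (ci + KC * (1.0 + O2 / KO)) - rd

    # Synthetic "measured" A-Ci points (would normally come from a gas analyzer)
    rng = np.random.default_rng(4)
    ci = np.array([60.0, 100.0, 150.0, 200.0, 280.0, 350.0])
    a_meas = rubisco_limited_A(ci, vcmax=62.0, rd=1.4) + rng.normal(0.0, 0.3, ci.size)

    popt, pcov = curve_fit(rubisco_limited_A, ci, a_meas, p0=(50.0, 1.0))
    print(f"fitted Vcmax ~ {popt[0]:.1f} umol m^-2 s^-1, Rd ~ {popt[1]:.2f}")
    ```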

  1. Determination of elastic constants of a generally orthotropic plate by modal analysis

    NASA Astrophysics Data System (ADS)

    Lai, T. C.; Lau, T. C.

    1993-01-01

    This paper describes a method of finding the elastic constants of a generally orthotropic composite thin plate through modal analysis based on a Rayleigh-Ritz formulation. The natural frequencies and mode shapes for a plate with free-free boundary conditions are obtained with chirp excitation. Based on the eigenvalue equation and the constitutive equations of the plate, an iteration scheme is derived using the experimentally determined natural frequencies to arrive at a set of converged values for the elastic constants. Four sets of experimental data are required for the four independent constants: namely the two Young's moduli E1 and E2, the in-plane shear modulus G12, and one Poisson's ratio nu12. The other Poisson's ratio nu21 can then be determined from the relationship among the constants. Comparison with static test results indicate good agreement. Choosing the right combinations of natural modes together with a set of reasonable initial estimates for the constants to start the iteration has been found to be crucial in achieving convergence.

  2. Review of cost versus scale: water and wastewater treatment and reuse processes.

    PubMed

    Guo, Tianjiao; Englehardt, James; Wu, Tingting

    2014-01-01

    The US National Research Council recently recommended direct potable water reuse (DPR), or potable water reuse without environmental buffer, for consideration to address US water demand. However, conveyance of wastewater and water to and from centralized treatment plants consumes on average four times the energy of treatment in the USA, and centralized DPR would further require upgradient distribution of treated water. Therefore, information on the cost of unit treatment processes potentially useful for DPR versus system capacity was reviewed, converted to constant 2012 US dollars, and synthesized in this work. A logarithmic variant of the Williams Law cost function was found applicable over orders of magnitude of system capacity, for the subject processes: activated sludge, membrane bioreactor, coagulation/flocculation, reverse osmosis, ultrafiltration, peroxone and granular activated carbon. Results are demonstrated versus 10 DPR case studies. Because economies of scale found for capital equipment are counterbalanced by distribution/collection network costs, further study of the optimal scale of distributed DPR systems is suggested.
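    The scaling idea can be illustrated with the plain power-law form of the cost-capacity relationship, cost proportional to capacity^b with b < 1 (the review fits a logarithmic variant, and the exponent and reference cost below are placeholders, not fitted values).

    ```python
    def scaled_cost(cost_ref, cap_ref, cap_new, exponent=0.7):
        """Power-law (Williams-Law style) scaling:
        cost = cost_ref * (cap_new / cap_ref) ** b.
        b < 1 expresses economy of scale; 0.7 is a placeholder exponent."""
        return cost_ref * (cap_new / cap_ref) ** exponent

    REF_COST, REF_CAP = 2.0e6, 1.0      # illustrative: $2M (2012 USD) at 1 MGD
    for cap in (0.1, 1.0, 10.0):        # million gallons per day
        c = scaled_cost(REF_COST, REF_CAP, cap)
        print(f"{cap:5.1f} MGD -> capital ~${c/1e6:5.2f}M, "
              f"~${c / (cap * 1e6):6.2f} per gal/day of capacity")
    # Unit cost falls as capacity grows, which is the economy-of-scale effect
    # that distribution/collection network costs push against.
    ```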

  3. Interpretation and mapping of gypsy moth defoliation from ERTS (LANDSAT)-1 temporal composites

    NASA Technical Reports Server (NTRS)

    Mcmurtry, G. J.; Petersen, G. W. (Principal Investigator); Kowalik, W. S.

    1975-01-01

    The author has identified the following significant results. Photointerpretation of temporally composited color Diazo transparencies of ERTS(LANDSAT) images is a practical method for detecting and locating levels of widespread defoliation. ERTS 9 x 9 inch images are essentially orthographic and are produced at a nearly constant 1:1,000,000 scale. This allows direct superposition of scenes for temporal composites. ERTS coverage provides a sweeping 180 km (110 mile) wide view, permitting one interpreter to rapidly delineate defoliation in an area requiring days and weeks of work by aerial surveys or computerized processing. Defoliation boundaries can be located on the images within maximum errors on the order of hundreds of meters. The enhancement process is much less expensive than aerial surveys or computerized processing. Maps produced directly from interpretation are manageable working products. The 18 day periodic coverage of ERTS is not frequent enough to replace aerial survey mapping because defoliation and refoliation move as waves.

  4. Fast and robust wavelet-based dynamic range compression and contrast enhancement model with color restoration

    NASA Astrophysics Data System (ADS)

    Unaldi, Numan; Asari, Vijayan K.; Rahman, Zia-ur

    2009-05-01

    Recently we proposed a wavelet-based dynamic range compression algorithm to improve the visual quality of digital images captured from high dynamic range scenes with non-uniform lighting conditions. The fast image enhancement algorithm that provides dynamic range compression, while preserving the local contrast and tonal rendition, is also a good candidate for real time video processing applications. Although the colors of the enhanced images produced by the proposed algorithm are consistent with the colors of the original image, the proposed algorithm fails to produce color constant results for some "pathological" scenes that have very strong spectral characteristics in a single band. The linear color restoration process is the main reason for this drawback. Hence, a different approach is required for the final color restoration process. In this paper the latest version of the proposed algorithm, which deals with this issue is presented. The results obtained by applying the algorithm to numerous natural images show strong robustness and high image quality.

  5. Contamination assessment for OSSA space station IOC payloads

    NASA Technical Reports Server (NTRS)

    Chinn, S.; Gordon, T.; Rantanen, R.

    1987-01-01

    The results are presented from a study for the Space Station Planners Group of the Office of Space Sciences and Applications. The objectives of the study are: (1) the development of contamination protection requirements for protection of Space Station attached payloads, serviced payloads and platforms; and (2) the determination of unknowns or major impacts requiring further assessment. The nature, sources, and quantitative properties of the external contaminants to be encountered on the Station are summarized. The OSSA payload contamination protection requirements provided by the payload program managers are reviewed and the level of contamination awareness among them is discussed. Preparation of revisions to the contamination protection requirements is detailed. The comparative impact of flying the Station at constant atmospheric density rather than constant altitude is assessed. The impact of the transverse boom configuration of the Station on contamination is also assessed. The contamination protection guidelines which OSSA should enforce during their development of payloads are summarized.

  6. Scalloping minimization in deep Si etching on Unaxis DSE tools

    NASA Astrophysics Data System (ADS)

    Lai, Shouliang; Johnson, Dave J.; Westerman, Russ J.; Nolan, John J.; Purser, David; Devre, Mike

    2003-01-01

    Sidewall smoothness is often a critical requirement for many MEMS devices, such as microfluidic devices and chemical, biological and optical transducers, while a fast silicon etch rate is another. For such applications, the time division multiplex (TDM) etch processes, so-called "Bosch" processes, are widely employed. However, in the conventional TDM processes, rough sidewalls result due to scallop formation. To date, the amplitude of the scalloping has been directly linked to the silicon etch rate. At Unaxis USA Inc., we have developed a proprietary fast gas switching technique that is effective for scalloping minimization in deep silicon etching processes. In this technique, process cycle times can be reduced from several seconds to as little as a fraction of a second. Scallop amplitudes can be reduced with shorter process cycles. More importantly, as the scallop amplitude is progressively reduced, the silicon etch rate can be maintained relatively constant at high values. An optimized experiment has shown that at an etch rate in excess of 7 μm/min, scallops with a length of 116 nm and a depth of 35 nm were obtained. The fast gas switching approach offers an ideal manufacturing solution for MEMS applications where extremely smooth sidewalls and a fast etch rate are crucial.

  7. Amino acids production focusing on fermentation technologies - A review.

    PubMed

    D'Este, Martina; Alvarado-Morales, Merlin; Angelidaki, Irini

    Amino acids are attractive and promising biochemicals with market capacity requirements constantly increasing. Their applicability ranges from animal feed additives, flavour enhancers and ingredients in cosmetics to specialty nutrients in the pharmaceutical and medical fields. This review gives an overview of the processes applied for amino acids production and points out the main advantages and disadvantages of each. Due to the advances made in genetic engineering techniques, the biotechnological processes, and in particular fermentation with the aid of strains such as Corynebacterium glutamicum or Escherichia coli, play a significant role in the industrial production of amino acids. Despite the numerous advantages of fermentative amino acids production, the process still needs significant improvements leading to increased productivity and reduction of the production costs. Although the production processes of amino acids have been extensively investigated in previous studies, a comprehensive overview of the developments in bioprocess technology has not been reported yet. This review states the importance of the fermentation process for industrial amino acids production, underlining the strengths and the weaknesses of the process. Moreover, the potential of innovative approaches utilizing macro and microalgae or bacteria is presented. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. An improved model for the dielectric constant of sea water at microwave frequencies

    NASA Technical Reports Server (NTRS)

    Klein, L. A.; Swift, C. T.

    1977-01-01

    The advent of precision microwave radiometry has placed a stringent requirement on the accuracy with which the dielectric constant of sea water must be known. To this end, measurements of the dielectric constant have been conducted at S-band and L-band with a quoted uncertainty of tenths of a percent. These and earlier results are critically examined, and expressions are developed which will yield computations of brightness temperature having an error of no more than 0.3 K for an undisturbed sea at frequencies lower than X-band. At the higher microwave and millimeter wave frequencies, the accuracy is in question because of uncertainties in the relaxation time and the dielectric constant at infinite frequency.
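
    A minimal sketch of the general form such dielectric models take, assuming a single Debye relaxation plus an ionic conductivity term under an exp(+jωt) convention; the parameter values are rough placeholders, not the fitted salinity- and temperature-dependent coefficients developed in the paper.

      import numpy as np

      EPS0 = 8.854e-12  # vacuum permittivity, F/m

      def sea_water_permittivity(freq_hz, eps_static, eps_inf, tau_s, sigma_s_per_m):
          # Complex relative permittivity: single Debye relaxation plus ionic conductivity.
          # In real models eps_static, tau and sigma are themselves functions of salinity
          # and temperature; here they are simply passed in as placeholders.
          omega = 2.0 * np.pi * freq_hz
          return eps_inf + (eps_static - eps_inf) / (1.0 + 1j * omega * tau_s) \
                 - 1j * sigma_s_per_m / (omega * EPS0)

      # Example at L-band (1.413 GHz) with rough, illustrative values for warm saline water.
      print(sea_water_permittivity(1.413e9, eps_static=75.0, eps_inf=4.9, tau_s=9.0e-12, sigma_s_per_m=4.5))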

  9. Current observations with a decaying cosmological constant allow for chaotic cyclic cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ellis, George F.R.; Platts, Emma; Weltman, Amanda

    2016-04-01

    We use the phase plane analysis technique of Madsen and Ellis [1] to consider a universe with a true cosmological constant as well as a cosmological 'constant' that is decaying. Time symmetric dynamics for the inflationary era allows eternally bouncing models to occur. Allowing for scalar field dynamic evolution, we find that if dark energy decays in the future, chaotic cyclic universes exist provided the spatial curvature is positive. This is particularly interesting in light of current observations which do not yet rule out either closed universes or possible evolution of the cosmological constant. We present only a proof of principle, with no definite claim on the physical mechanism required for the present dark energy to decay.

  10. Medical Services: Standards of Medical Fitness

    DTIC Science & Technology

    2002-03-28

    Malfunction of the acoustic nerve. (Evaluate functional impairment of hearing under para 3–10.) c. Mastoiditis, chronic, with constant drainage from the...mastoid cavity, requiring frequent and prolonged medical care. d. Mastoiditis, chronic, following mastoidectomy, with constant drainage from the...d. Nephrectomy, when after treatment, there is infection or pathology in the remaining kidney. e. Nephrostomy, if drainage persists. f. Oophorectomy

  11. USING IN VIVO GAS UPTAKE STUDIES TO ESTIMATE METABOLIC RATE CONSTANTS FOR CCL CHEMICALS: 1,1-DICHLOROPROPANE AND 2,2-DICHLOROPROPANE

    EPA Science Inventory

    USING IN VIVO GAS UPTAKE STUDIES TO ESTIMATE METABOLIC RATE CONSTANTS FOR CCL CHEMICALS: 1,1-DICHLOROPROPENE AND 2,2-DICHLOROPROPANE.
    Mitchell, C T, Evans, M V, Kenyon, E M. NHEERL, U.S. EPA, ORD, ETD, RTP, NC

    The Safe Drinking Water Act Amendments of 1996 required ...

  12. Sorption testing and generalized composite surface complexation models for determining uranium sorption parameters at a proposed in-situ recovery site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Raymond H.; Truax, Ryan A.; Lankford, David A.

    Solid-phase iron concentrations and generalized composite surface complexation models were used to evaluate procedures in determining uranium sorption on oxidized aquifer material at a proposed U in situ recovery (ISR) site. At the proposed Dewey Burdock ISR site in South Dakota, USA, oxidized aquifer material occurs downgradient of the U ore zones. Solid-phase Fe concentrations did not explain our batch sorption test results, though total extracted Fe appeared to be positively correlated with overall measured U sorption. Batch sorption test results were used to develop generalized composite surface complexation models that incorporated the full generic sorption potential of each sample, without detailed mineralogic characterization. The resultant models provide U sorption parameters (site densities and equilibrium constants) for reactive transport modeling. The generalized composite surface complexation sorption models were calibrated to batch sorption data from three oxidized core samples using inverse modeling, and gave larger sorption parameters than just U sorption on the measured solid-phase Fe. These larger sorption parameters can significantly influence reactive transport modeling, potentially increasing U attenuation. Because of the limited number of calibration points, inverse modeling required the reduction of estimated parameters by fixing two parameters. The best-fit models used fixed values for equilibrium constants, with the sorption site densities being estimated by the inversion process. While these inverse routines did provide best-fit sorption parameters, local minima and correlated parameters might require further evaluation. Despite our limited number of proxy samples, the procedures presented provide a valuable methodology to consider for sites where metal sorption parameters are required. Furthermore, these sorption parameters can be used in reactive transport modeling to assess downgradient metal attenuation, especially when no other calibration data are available, such as at proposed U ISR sites.

  13. Sorption testing and generalized composite surface complexation models for determining uranium sorption parameters at a proposed in-situ recovery site

    DOE PAGES

    Johnson, Raymond H.; Truax, Ryan A.; Lankford, David A.; ...

    2016-02-03

    Solid-phase iron concentrations and generalized composite surface complexation models were used to evaluate procedures in determining uranium sorption on oxidized aquifer material at a proposed U in situ recovery (ISR) site. At the proposed Dewey Burdock ISR site in South Dakota, USA, oxidized aquifer material occurs downgradient of the U ore zones. Solid-phase Fe concentrations did not explain our batch sorption test results, though total extracted Fe appeared to be positively correlated with overall measured U sorption. Batch sorption test results were used to develop generalized composite surface complexation models that incorporated the full generic sorption potential of each sample, without detailed mineralogic characterization. The resultant models provide U sorption parameters (site densities and equilibrium constants) for reactive transport modeling. The generalized composite surface complexation sorption models were calibrated to batch sorption data from three oxidized core samples using inverse modeling, and gave larger sorption parameters than just U sorption on the measured solid-phase Fe. These larger sorption parameters can significantly influence reactive transport modeling, potentially increasing U attenuation. Because of the limited number of calibration points, inverse modeling required the reduction of estimated parameters by fixing two parameters. The best-fit models used fixed values for equilibrium constants, with the sorption site densities being estimated by the inversion process. While these inverse routines did provide best-fit sorption parameters, local minima and correlated parameters might require further evaluation. Despite our limited number of proxy samples, the procedures presented provide a valuable methodology to consider for sites where metal sorption parameters are required. Furthermore, these sorption parameters can be used in reactive transport modeling to assess downgradient metal attenuation, especially when no other calibration data are available, such as at proposed U ISR sites.

  14. Studies on pressure-gain combustion engines

    NASA Astrophysics Data System (ADS)

    Matsutomi, Yu

    Various aspects of the pressure-gain combustion engine are investigated analytically and experimentally in the current study. A lumped parameter model is developed to characterize the operation of a valveless pulse detonation engine. The model identified the function of the flame quenching process through the gas dynamic process. By adjusting fuel manifold pressure and geometries, the duration of the air buffer can be effectively varied. The parametric study with the lumped parameter model has shown that an engine frequency of up to approximately 15 Hz is attainable. However, the required upstream air pressure increases significantly with higher engine frequency. The higher pressure requirement indicates pressure loss in the system and lower overall engine performance. The loss of performance due to the pressure loss is a critical issue for the integrated pressure-gain combustors. Two types of transitional methods are examined using entropy-based models. An accumulator based transition has obvious loss due to sudden area expansion, but it can be minimized by utilizing the gas dynamics in the combustion tube. An ejector type transition has potential to achieve performance beyond the limit specified by a single flow path Humphrey cycle. The performance of an ejector was discussed in terms of apparent entropy and mixed flow entropy. Through an ideal ejector, the apparent part of entropy increases due to the reduction in flow unsteadiness, but entropy of the mixed flow remains constant. The method is applied to a CFD simulation with a simple manifold for qualitative evaluation. The operation of the wave rotor constant volume combustion rig is experimentally examined. The rig has shown versatility of operation for a wide range of conditions. Large pressure rises in the rotor channel and in a section of the exhaust duct are observed even with relatively large leakage gaps on the rotor. The simplified analysis indicated that inconsistent combustion is likely due to insufficient fuel near the ignition source. However, it is difficult to draw conclusions about the fuel distribution with the current setup. Additional measurement near the rotor interfaces and better fuel control are required for future tests.

  15. Calculation of spin-densities within the context of density functional theory. The crucial role of the correlation functional

    NASA Astrophysics Data System (ADS)

    Filatov, Michael; Cremer, Dieter

    2005-09-01

    It is demonstrated that the LYP correlation functional is not suited to be used for the calculation of electron spin resonance hyperfine structure (HFS) constants, nuclear magnetic resonance spin-spin coupling constants, magnetic shieldings, and other properties that require a balanced account of opposite- and equal-spin correlation, especially in the core region. In the case of the HFS constants of alkali atoms, LYP exaggerates opposite-spin correlation effects, thus invoking too strong in-out correlation effects, an exaggerated spin-polarization pattern in the core shells of the atoms, and, consequently, too large HFS constants. Any correlation functional that provides a balanced account of opposite- and equal-spin correlation leads to improved HFS constants, which is proven by comparing results obtained with the LYP and the PW91 correlation functional. It is suggested that specific response properties are calculated with the PW91 rather than the LYP correlation functional.

  16. Fast high-throughput method for the determination of acidity constants by capillary electrophoresis: I. Monoprotic weak acids and bases.

    PubMed

    Fuguet, Elisabet; Ràfols, Clara; Bosch, Elisabeth; Rosés, Martí

    2009-04-24

    A new and fast method to determine acidity constants of monoprotic weak acids and bases by capillary zone electrophoresis based on the use of an internal standard (compound of similar nature and acidity constant as the analyte) has been developed. This method requires only two electrophoretic runs for the determination of an acidity constant: a first one at a pH where both analyte and internal standard are totally ionized, and a second one at another pH where both are partially ionized. Furthermore, the method is not pH dependent, so an accurate measure of the pH of the buffer solutions is not needed. The acidity constants of several phenols and amines have been measured using internal standards of known pK(a), obtaining a mean deviation of 0.05 pH units compared to the literature values.
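
    A schematic reconstruction (not the paper's exact equations) of how two runs and an internal standard of known pKa can yield an analyte pKa without measuring the buffer pH, assuming monoprotic weak acids whose effective mobility scales with the ionized fraction; all mobility values below are invented.

      import numpy as np

      def ionized_fraction(mu_eff, mu_fully_ionized):
          # Degree of ionization from run 2 (partially ionized) relative to run 1 (fully ionized).
          return mu_eff / mu_fully_ionized

      def analyte_pka(pka_is, alpha_is, alpha_analyte):
          # For a monoprotic weak acid, pKa = pH + log10((1 - alpha) / alpha).
          # The internal standard of known pKa fixes the (unmeasured) buffer pH, which then cancels.
          ph = pka_is - np.log10((1.0 - alpha_is) / alpha_is)
          return ph + np.log10((1.0 - alpha_analyte) / alpha_analyte)

      # Invented effective mobilities (arbitrary units) from the two electrophoretic runs.
      alpha_is = ionized_fraction(mu_eff=2.1e-4, mu_fully_ionized=3.0e-4)   # internal standard
      alpha_an = ionized_fraction(mu_eff=1.2e-4, mu_fully_ionized=2.8e-4)   # analyte
      print("estimated analyte pKa: %.2f" % analyte_pka(pka_is=7.10, alpha_is=alpha_is, alpha_analyte=alpha_an))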

  17. Radiometric Dating in Geology.

    ERIC Educational Resources Information Center

    Pankhurst, R. J.

    1980-01-01

    Described are several aspects and methods of quantitatively measuring geologic time using a constant-rate natural process of radioactive decay. Topics include half lives and decay constants, radiogenic growth, potassium-argon dating, rubidium-strontium dating, and the role of geochronology in support of geological exploration. (DS)
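
    A minimal worked sketch of the decay-constant arithmetic behind such methods, assuming a closed system with no initial daughter; the half-life value is approximate and used only for illustration.

      import numpy as np

      def decay_constant(half_life_yr):
          # lambda = ln(2) / t_half
          return np.log(2.0) / half_life_yr

      def age_from_parent_daughter(daughter_over_parent, half_life_yr):
          # Closed-system age with no initial daughter: t = (1/lambda) * ln(1 + D/P)
          return np.log(1.0 + daughter_over_parent) / decay_constant(half_life_yr)

      # Illustration with an approximate 87Rb half-life (~4.9e10 yr) and a measured D/P of 0.01.
      print("age ~ %.2e yr" % age_from_parent_daughter(0.01, 4.9e10))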

  18. [Medical discourse and poetical practice: the different figures of authority within the correspondance between Mme d'Epinay and the abbé Galiani].

    PubMed

    Redien-Collot, Renaud

    2007-01-01

    Letters containing medical data are not simple texts. They stem from a writing process which sees the authors constantly review the way they perceive both their bodies and the way they write. In order to limit the relativism inherent to such processes and reduce the ensuing variability of perspectives, most letter writers eventually assume a form of authority. In the second part of the 18th Century, the correspondence between the abbé Galiani and Mme d'Epinay reveals that while they exchanged details about their health, they also experimented with different positions of authority and adapted their writing process as the relationship evolved. This is a salutary lesson for modern researchers who are often tempted to reduce the problematic meaning of the letter writing process, defining the letter as an isolated document. Medical correspondence is exemplary in this respect because it requires a certain level of knowledge and the expression of a certain intimacy, entailing the adoption of one or of several forms of authority.

  19. Overcoming gaps and bottlenecks to advance precision agriculture

    USDA-ARS?s Scientific Manuscript database

    Maintaining a clear understanding of the technology gaps, knowledge needs, and training bottlenecks is required for improving adoption of precision agriculture. As an industry, precision agriculture embraces tools, methods, and practices that are constantly changing, requiring industry, education, a...

  20. On-line adaptive battery impedance parameter and state estimation considering physical principles in reduced order equivalent circuit battery models. Part 1. Requirements, critical review of methods and modeling

    NASA Astrophysics Data System (ADS)

    Fleischer, Christian; Waag, Wladislaw; Heyn, Hans-Martin; Sauer, Dirk Uwe

    2014-08-01

    Lithium-ion battery systems employed in high power demanding systems such as electric vehicles require a sophisticated monitoring system to ensure safe and reliable operation. Three major states of the battery are of special interest and need to be constantly monitored; these include: battery state of charge (SoC), battery state of health (capacity fade determination, SoH), and state of function (power fade determination, SoF). In a series of two papers, we propose a system of algorithms based on a weighted recursive least quadratic squares parameter estimator that is able to determine the battery impedance and diffusion parameters for accurate state estimation. The functionality was proven on different battery chemistries with different aging conditions. The first paper investigates the general requirements on BMS for HEV/EV applications. In parallel, the commonly used methods for battery monitoring are reviewed to elaborate their strengths and weaknesses in terms of the identified requirements for on-line applications. Special emphasis will be placed on real-time capability and memory optimized code for cost-sensitive industrial or automotive applications in which low-cost microcontrollers must be used. Therefore, a battery model is presented which includes the influence of the Butler-Volmer kinetics on the charge-transfer process. Lastly, the mass transport process inside the battery is modeled in a novel state-space representation.

  1. Simultaneous Independent Control of Tool Axial Force and Temperature in Friction Stir Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ross, Kenneth A.; Grant, Glenn J.; Darsell, Jens T.

    Maintaining consistent tool depth relative to the part surface is a critical requirement for many friction stir processing (FSP) applications. Force control is often used with the goal of obtaining a constant weld depth. When force control is used, if weld temperature decreases, flow stress increases and the tool is pushed up. If weld temperature increases, flow stress decreases and the tool dives. These variations in tool depth and weld temperature cause various types of weld defects. Robust temperature control for FSP maintains a commanded temperature through control of the spindle axis only. Robust temperature control and force control are completely decoupled in control logic and machine motion. This results in stable temperature, force and tool depth despite the presence of geometric and thermal disturbances. Performance of this control method is presented for various weld paths and alloy systems.
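
    A minimal sketch of what decoupled loops of this kind could look like, assuming simple proportional-integral laws in which temperature error drives only the spindle speed command and axial force error drives only the plunge rate; the gains, units, and structure are hypothetical and this is not the proprietary controller described above.

      class PI:
          # Simple proportional-integral controller.
          def __init__(self, kp, ki, dt):
              self.kp, self.ki, self.dt, self.integral = kp, ki, dt, 0.0

          def step(self, error):
              self.integral += error * self.dt
              return self.kp * error + self.ki * self.integral

      # Two independent loops: temperature error -> spindle speed, force error -> plunge rate.
      temp_loop = PI(kp=2.0, ki=0.5, dt=0.01)        # rpm change per degree C of error (illustrative)
      force_loop = PI(kp=0.001, ki=0.0005, dt=0.01)  # mm/s of plunge per newton of error (illustrative)

      def control_step(temp_cmd, temp_meas, force_cmd, force_meas):
          d_spindle_rpm = temp_loop.step(temp_cmd - temp_meas)
          d_plunge_mm_s = force_loop.step(force_cmd - force_meas)
          return d_spindle_rpm, d_plunge_mm_s

      print(control_step(temp_cmd=450.0, temp_meas=440.0, force_cmd=8000.0, force_meas=8200.0))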

  2. Switching dynamics of TaOx-based threshold switching devices

    NASA Astrophysics Data System (ADS)

    Goodwill, Jonathan M.; Gala, Darshil K.; Bain, James A.; Skowronski, Marek

    2018-03-01

    Bi-stable volatile switching devices are being used as access devices in solid-state memory arrays and as the active part of compact oscillators. Such structures exhibit two stable states of resistance and switch between them at a critical value of voltage or current. A typical resistance transient under a constant amplitude voltage pulse starts with a slow decrease followed by a rapid drop and leveling off at a low steady state value. This behavior prompted the interpretation of initial delay and fast transition as due to two different processes. Here, we show that the entire transient including incubation time, transition time, and the final resistance values in TaOx-based switching can be explained by one process, namely, Joule heating with the rapid transition due to the thermal runaway. The time, which is required for the device in the conducting state to relax back to the stable high resistance one, is also consistent with the proposed mechanism.
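
    A minimal lumped-element sketch of the single-process (Joule heating) picture described above: a thermally activated device resistance in series with a load, inside a simple thermal RC, driven by a constant voltage. All parameter values are placeholders chosen only to make the incubation-then-runaway shape visible; they are not fitted TaOx device data.

      import math

      # Placeholder parameters (illustrative only, not fitted TaOx values).
      R0, EA_EV, KB_EV = 10.0, 0.30, 8.617e-5       # resistance prefactor (ohm), activation energy (eV), k_B (eV/K)
      R_LOAD = 1.0e4                                # series load resistance (ohm)
      RTH, CTH = 1.0e7, 1.0e-12                     # thermal resistance (K/W) and heat capacity (J/K)
      T_AMB, V = 300.0, 1.5                         # ambient temperature (K), applied voltage (V)

      def device_resistance(temp_k):
          # Thermally activated device resistance.
          return R0 * math.exp(EA_EV / (KB_EV * temp_k))

      def transient(t_end=1.0e-4, dt=1.0e-9):
          # Explicit Euler integration of Cth * dT/dt = P_joule(T) - (T - T_amb) / Rth.
          temp, samples = T_AMB, []
          for i in range(int(t_end / dt)):
              r = device_resistance(temp)
              power = (V / (R_LOAD + r)) ** 2 * r   # Joule heating dissipated in the device
              temp += dt * (power - (temp - T_AMB) / RTH) / CTH
              if i % 1000 == 0:
                  samples.append((i * dt, r))
          return samples

      samples = transient()
      print("initial R = %.2e ohm, final R = %.2e ohm" % (samples[0][1], samples[-1][1]))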

  3. Boeing Displays Process Action team

    NASA Astrophysics Data System (ADS)

    Wright, R. Nick; Jacobsen, Alan R.

    2000-08-01

    Boeing uses Active Matrix Liquid Crystal Display (AMLCD) technology in a wide variety of its aerospace products, including military, commercial, and space applications. With the demise of Optical Imaging Systems (OIS) in September 1998, the source of on-shore custom AMLCD products has become very tenuous. Reliance on off-shore sources of AMLCD glass for aerospace products is also difficult when the average life of a display product is often less than one-tenth the 30 or more years expected from aerospace platforms. Boeing is addressing this problem through the development of a Displays Process Action Team that gathers input from all display users across the spectrum of our aircraft products. By consolidating requirements, developing common interface standards, working with our suppliers and constantly monitoring custom sources as well as commercially available products, Boeing is minimizing the impact (current and future) of the uncertain AMLCD avionics supply picture.

  4. The potential of disease management for neuromuscular hereditary disorders.

    PubMed

    Chouinard, Maud-Christine; Gagnon, Cynthia; Laberge, Luc; Tremblay, Carmen; Côté, Charlotte; Leclerc, Nadine; Mathieu, Jean

    2009-01-01

    Neuromuscular hereditary disorders require long-term multidisciplinary rehabilitation management. Although the need for coordinated healthcare management has long been recognized, most neuromuscular disorders are still lacking clinical guidelines about their long-term management and structured evaluation plan with associated services. One of the most prevalent adult-onset neuromuscular disorders, myotonic dystrophy type 1, generally presents several comorbidities and a variable clinical picture, making management a constant challenge. This article presents a healthcare follow-up plan and proposes a nursing case management within a disease management program as an innovative and promising approach. This disease management program and model consists of eight components including population identification processes, evidence-based practice guidelines, collaborative practice, patient self-management education, and process outcomes evaluation (Disease Management Association of America, 2004). It is believed to have the potential to significantly improve healthcare management for neuromuscular hereditary disorders and will prove useful to nurses delivering and organizing services for this population.

  5. Object-oriented software design in semiautomatic building extraction

    NASA Astrophysics Data System (ADS)

    Guelch, Eberhard; Mueller, Hardo

    1997-08-01

    Developing a system for semiautomatic building acquisition is a complex process that requires constant integration and updating of software modules and user interfaces. To facilitate these processes we apply an object-oriented design not only for the data but also for the software involved. We use the unified modeling language (UML) to describe the object-oriented modeling of the system in different levels of detail. We can distinguish between use cases from the user's point of view, which represent a sequence of actions yielding an observable result, and use cases for the programmers, who can use the system as a class library to integrate the acquisition modules into their own software. The structure of the system is based on the model-view-controller (MVC) design pattern. An example from the integration of automated texture extraction for the visualization of results demonstrates the feasibility of this approach.

  6. [The temporal dimension of drugs: a sociological analysis based on a category key to the study of health-disease-care processes].

    PubMed

    Sánchez Antelo, Victoria

    2016-03-01

    The temporal dimensions that shape the senses and practices of men and women who are poly-consumers of psychoactive substances, 18-35 years of age, and living in the metropolitan area of Buenos Aires were analyzed. Using a qualitative approach, 29 individual in-depth interviews were carried out and then analyzed through a constant comparative analysis process between the categories generated from the data obtained and the theoretical concepts. From the analysis, practices and meanings emerge that regulate the diverse temporalities that underlie drug consumption: feelings related to body rhythms, periods between consumptions, the timing of phases of the life cycle, or unspecific temporalities that become an adequate "moment" for consumption. These practices require that particular attention be paid to time, as this enables the flexibility to consume without being a consumer, to use drugs without being addicted to them.

  7. Computational homogenisation for thermoviscoplasticity: application to thermally sprayed coatings

    NASA Astrophysics Data System (ADS)

    Berthelsen, Rolf; Denzer, Ralf; Oppermann, Philip; Menzel, Andreas

    2017-11-01

    Metal forming processes require wear-resistant tool surfaces in order to ensure a long life cycle of the expensive tools together with a constant high quality of the produced components. Thermal spraying is a relatively widely applied coating technique for the deposit of wear protection coatings. During these coating processes, heterogeneous coatings are deployed at high temperatures followed by quenching where residual stresses occur which strongly influence the performance of the coated tools. The objective of this article is to discuss and apply a thermo-mechanically coupled simulation framework which captures the heterogeneity of the deposited coating material. Therefore, a two-scale finite element framework for the solution of nonlinear thermo-mechanically coupled problems is elaborated and applied to the simulation of thermoviscoplastic material behaviour including nonlinear thermal softening in a geometrically linearised setting. The finite element framework and material model is demonstrated by means of numerical examples.

  8. Therapeutic friendliness and the development of therapeutic leverage by mental health nurses in community rehabilitation settings.

    PubMed

    Gardner, Andrew

    2010-01-01

    In a world dominated by technology and driven by fiscal policy emphasis, the therapeutic relationship as a healing modality is still a central theme to mental health nurses (MHN) in their everyday work. This research, as part of a PhD program, used a constructivist grounded theory approach to explore the process of therapeutic relationships and professional boundaries. The current paper outlines how therapeutic friendliness provides a connection for the therapeutic relationship to develop but in doing so requires a balancing of the therapeutic relationship and constant maintenance of the professional boundary. The authors also discuss how community mental health nurses (CMHN) invest in the therapeutic relationship in order to develop a therapeutic alliance and how the alliance between the CMHN and the client facilitates the use of therapeutic leverage applied by the CMHN as part of the therapeutic process.

  9. Initiation Capacity of a Specially Shaped Booster Pellet and Numerical Simulation of Its Initiation Process

    NASA Astrophysics Data System (ADS)

    Hu, Li-Shuang; Hu, Shuang-Qi; Cao, Xiong; Zhang, Jian-Ren

    2014-01-01

    The insensitive main charge explosive is creating new requirements for the booster pellet of detonation trains. The traditional cylindrical booster pellet has insufficient energy output to reliably initiate the insensitive main charge explosive. In this research, a concave spherical booster pellet was designed. The initiation capacity of the concave spherical booster pellet was studied using varied composition and axial steel dent methods. The initiation process of the concave spherical booster pellet was also simulated by ANSYS/LS-DYNA. The results showed that using a concave spherical booster allows a 42% reduction in the amount of explosive needed to match the initiation capacity of a conventional cylindrical booster of the same dimensions. With the other parameters kept constant, the initiation capacity of the concave spherical booster pellet increases with decreased cone angle and concave radius. The numerical simulation results are in good agreement with the experimental data.

  10. Optimization of the segmented method for optical compression and multiplexing system

    NASA Astrophysics Data System (ADS)

    Al Falou, Ayman

    2002-05-01

    Because of the constantly increasing demand for image exchange, and despite the ever increasing bandwidth of the networks, compression and multiplexing of images are becoming inseparable from their generation and display. For high resolution real time motion pictures, performing compression electronically requires complex and time-consuming processing units. On the contrary, by its inherent bi-dimensional character, coherent optics is well fitted to perform such processes, which are basically bi-dimensional data handling in the Fourier domain. Additionally, the main limiting factor, the maximum frame rate, is vanishing because of the recent improvement of spatial light modulator technology. The purpose of this communication is to benefit from recent optical correlation algorithms. The segmented filtering used to store multi-references in a given space bandwidth product optical filter can be applied to networks to compress and multiplex images in a given bandwidth channel.

  11. Uniform lateral etching of tungsten in deep trenches utilizing reaction-limited NF3 plasma process

    NASA Astrophysics Data System (ADS)

    Kofuji, Naoyuki; Mori, Masahito; Nishida, Toshiaki

    2017-06-01

    The reaction-limited etching of tungsten (W) with NF3 plasma was performed in an attempt to achieve the uniform lateral etching of W in a deep trench, a capability required by manufacturing processes for three-dimensional NAND flash memory. Reaction-limited etching was found to be possible at high pressures without ion irradiation. An almost constant etching rate that showed no dependence on NF3 pressure was obtained. The effect of varying the wafer temperature was also examined. A higher wafer temperature reduced the threshold pressure for reaction-limited etching and also increased the etching rate in the reaction-limited region. Therefore, the control of the wafer temperature is crucial to controlling the etching amount by this method. We found that the uniform lateral etching of W was possible even in a deep trench where the F radical concentration was low.

  12. Configuration management issues and objectives for a real-time research flight test support facility

    NASA Technical Reports Server (NTRS)

    Yergensen, Stephen; Rhea, Donald C.

    1988-01-01

    Presented are some of the critical issues and objectives pertaining to configuration management for the NASA Western Aeronautical Test Range (WATR) of Ames Research Center. The primary mission of the WATR is to provide a capability for the conduct of aeronautical research flight test through real-time processing and display, tracking, and communications systems. In providing this capability, the WATR must maintain and enforce a configuration management plan which is independent of, but complementary to, various research flight test project configuration management systems. A primary WATR objective is the continued development of generic research flight test project support capability, wherein the reliability of WATR support provided to all project users is a constant priority. Therefore, the processing of configuration change requests for specific research flight test project requirements must be evaluated within a perspective that maintains this primary objective.

  13. Pellicle transmission uniformity requirements

    NASA Astrophysics Data System (ADS)

    Brown, Thomas L.; Ito, Kunihiro

    1998-12-01

    Controlling critical dimensions of devices is a constant battle for the photolithography engineer. Current DUV lithographic process exposure latitude is typically 12 to 15% of the total dose. A third of this exposure latitude budget may be used up by a variable related to masking that has not previously received much attention. The emphasis on pellicle transmission has been focused on increasing the average transmission. Much less attention has been paid to transmission uniformity. This paper explores the total demand on the photospeed latitude budget and the causes of pellicle transmission nonuniformity, and examines reasonable expectations for pellicle performance. Modeling is used to examine how the two primary errors in pellicle manufacturing contribute to nonuniformity in transmission. World-class pellicle transmission uniformity standards are discussed and a comparison made between specifications of other components in the photolithographic process. Specifications for other materials or parameters are used as benchmarks to develop a proposed industry standard for pellicle transmission uniformity.

  14. Doubly Fed Induction Generator in an Offshore Wind Power Plant Operated at Rated V/Hz: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muljadi, E.; Singh, M.; Gevorgian, V.

    2012-06-01

    This paper introduces the concept of constant Volt/Hz operation of offshore wind power plants. The deployment of offshore WPPs requires power transmission from the plant to the load center inland. Since this power transmission requires submarine cables, there is a need to use High-Voltage Direct Current transmission, which is economical for transmission distances longer than 50 kilometers. In the concept presented here, the onshore substation is operated at 60 Hz synced with the grid, and the offshore substation is operated at variable frequency and voltage, thus allowing the WPP to be operated at constant Volt/Hz.

  15. Limitations of the Conventional Phase Advance Method for Constant Power Operation of the Brushless DC Motor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lawler, J.S.

    2001-10-29

    The brushless dc motor (BDCM) has high-power density and efficiency relative to other motor types. These properties make the BDCM well suited for applications in electric vehicles provided a method can be developed for driving the motor over the 4 to 6:1 constant power speed range (CPSR) required by such applications. The present state of the art for constant power operation of the BDCM is conventional phase advance (CPA) [1]. In this paper, we identify key limitations of CPA. It is shown that the CPA has effective control over the developed power but that the current magnitude is relatively insensitive to power output and is inversely proportional to motor inductance. If the motor inductance is low, then the rms current at rated power and high speed may be several times larger than the current rating. The inductance required to maintain rms current within rating is derived analytically and is found to be large relative to that of BDCM designs using high-strength rare earth magnets. Thus, the CPA requires a BDCM with a large equivalent inductance.
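
    A short hedged note on the scaling behind the inductance requirement: for a permanent-magnet machine driven deep into phase advance, the standard characteristic-current argument (not the paper's own derivation) gives a high-speed current set by the magnet flux linkage λ_pm and the phase inductance L rather than by the load,

      \[
        I_{\mathrm{rms}} \;\approx\; I_{\mathrm{ch}} \;=\; \frac{\lambda_{\mathrm{pm}}}{L}
        \qquad (\omega \gg \omega_{\mathrm{base}}),
      \]

    so keeping the high-speed current within a rating I_rated requires roughly L ≳ λ_pm / I_rated. This is only the generic argument; it is consistent with, but not identical to, the analytical inductance requirement derived in the paper.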

  16. The Saturn management concept

    NASA Technical Reports Server (NTRS)

    Bilstein, R. E.

    1974-01-01

    Management of the Saturn launch vehicles was an evolutionary process, requiring constant interaction between NASA Headquarters, the Marshall Space Flight Center (particularly the Saturn 5 Program Office), and the various prime contractors. Successful Saturn management was a blend of the decades of experience of the von Braun team, management concepts from the Army, Navy, Air Force, and Government, and private industry. The Saturn 5 Program Office shared a unique relationship with the Apollo Program Office at NASA Headquarters. Much of the success of the Saturn 5 Program Office was based on its painstaking attention to detail, emphasis on individual responsibilities (backed up by comprehensive program element plans and management matrices), and a high degree of visibility as embodied in the Program Control Center.

  17. Ubiquitin facilitates a quality-control pathway that removes damaged chloroplasts

    DOE PAGES

    Woodson, Jesse D.; Joens, Matthew S.; Sinson, Andrew B.; ...

    2015-10-23

    Energy production by chloroplasts and mitochondria causes constant oxidative damage. A functioning photosynthetic cell requires quality-control mechanisms to turn over and degrade chloroplasts damaged by reactive oxygen species (ROS). Here in this study, we generated a conditionally lethal Arabidopsis mutant that accumulated excess protoporphyrin IX in the chloroplast and produced singlet oxygen. Damaged chloroplasts were subsequently ubiquitinated and selectively degraded. A genetic screen identified the plant U-box 4 (PUB4) E3 ubiquitin ligase as being necessary for this process. pub4-6 mutants had defects in stress adaptation and longevity. As a result, we have identified a signal that leads to the targeted removal of ROS-overproducing chloroplasts.

  18. Carbon dioxide gas purification and analytical measurement for leading edge 193nm lithography

    NASA Astrophysics Data System (ADS)

    Riddle Vogt, Sarah; Landoni, Cristian; Applegarth, Chuck; Browning, Matt; Succi, Marco; Pirola, Simona; Macchi, Giorgio

    2015-03-01

    The use of purified carbon dioxide (CO2) has become a reality for leading edge 193 nm immersion lithography scanners. Traditionally, both dry and immersion 193 nm lithographic processes have constantly purged the optics stack with ultrahigh purity compressed dry air (UHPCDA). CO2 has been utilized for a similar purpose as UHPCDA. Airborne molecular contamination (AMC) purification technologies and analytical measurement methods have been extensively developed to support the lithography tool manufacturers' purity requirements. This paper covers the analytical tests and characterizations carried out to assess impurity removal from 3.0 N CO2 (beverage grade) for its final utilization in 193 nm and EUV scanners.

  19. Compact continuous-variable entanglement distillation.

    PubMed

    Datta, Animesh; Zhang, Lijian; Nunn, Joshua; Langford, Nathan K; Feito, Alvaro; Plenio, Martin B; Walmsley, Ian A

    2012-02-10

    We introduce a new scheme for continuous-variable entanglement distillation that requires only linear temporal and constant physical or spatial resources. Distillation is the process by which high-quality entanglement may be distributed between distant nodes of a network in the unavoidable presence of decoherence. The known versions of this protocol scale exponentially in space and doubly exponentially in time. Our optimal scheme therefore provides exponential improvements over existing protocols. It uses a fixed-resource module (an entanglement distillery) comprising only four quantum memories of at most 50% storage efficiency and allowing a feasible experimental implementation. Tangible quantum advantages are obtainable by using existing off-resonant Raman quantum memories outside their conventional role of storage.

  20. Burning of liquid pools in reduced gravity

    NASA Technical Reports Server (NTRS)

    Kanury, A. M.

    1977-01-01

    The existing literature on the combustion of liquid fuel pools is reviewed to identify the physical and chemical aspects which require an improved understanding. Among the pre-, trans- and post-ignition processes, a delineation was made of those which seem to uniquely benefit from studies in the essential environment offered by spacelab. The role played by the gravitational constant in analytical and experimental justifications was developed. The analytical justifications were based on hypotheses, models and dimensional analyses whereas the experimental justifications were based on an examination of the range of gravity and gravity-dependent variables possible in the earth-based laboratories. Some preliminary expositions into the questions of feasibility of the proposed spacelab experiment are also reported.

  1. FPGA-based fused smart sensor for dynamic and vibration parameter extraction in industrial robot links.

    PubMed

    Rodriguez-Donate, Carlos; Morales-Velazquez, Luis; Osornio-Rios, Roque Alfredo; Herrera-Ruiz, Gilberto; de Jesus Romero-Troncoso, Rene

    2010-01-01

    Intelligent robotics demands the integration of smart sensors that allow the controller to efficiently measure physical quantities. Industrial manipulator robots require a constant monitoring of several parameters such as motion dynamics, inclination, and vibration. This work presents a novel smart sensor to estimate motion dynamics, inclination, and vibration parameters on industrial manipulator robot links based on two primary sensors: an encoder and a triaxial accelerometer. The proposed smart sensor implements a new methodology based on an oversampling technique, averaging decimation filters, FIR filters, finite differences and linear interpolation to estimate the interest parameters, which are computed online utilizing digital hardware signal processing based on field programmable gate arrays (FPGA).
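
    A minimal software sketch of the signal chain named above (oversampling, averaging decimation, FIR filtering, finite differences), written in Python rather than as the FPGA hardware description; the rates, filter lengths, and synthetic encoder signal are illustrative assumptions.

      import numpy as np

      def average_decimate(x, factor):
          # Oversampled input reduced by block averaging (a simple decimation filter).
          n = (len(x) // factor) * factor
          return x[:n].reshape(-1, factor).mean(axis=1)

      def fir_lowpass(x, taps=31):
          # Moving-average FIR as a stand-in for the sensor's low-pass FIR stage.
          return np.convolve(x, np.ones(taps) / taps, mode="same")

      # Synthetic, noisy encoder angle sampled at a high oversampling rate.
      fs_over = 100000.0
      t = np.arange(0.0, 1.0, 1.0 / fs_over)
      angle = 2.0 * np.sin(2.0 * np.pi * t) + 0.01 * np.random.randn(t.size)

      decim = 100
      theta = fir_lowpass(average_decimate(angle, decim))
      dt = decim / fs_over
      velocity = np.gradient(theta, dt)        # finite differences
      acceleration = np.gradient(velocity, dt)
      print("peak velocity ~ %.2f rad/s" % np.max(np.abs(velocity)))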

  2. I-deas TMG to NX Space Systems Thermal Model Conversion and Computational Performance Comparison

    NASA Technical Reports Server (NTRS)

    Somawardhana, Ruwan

    2011-01-01

    CAD/CAE packages change on a continuous basis as the power of the tools increases to meet demands. End-users must adapt to new products as they come to market and replace legacy packages. CAE modeling has continued to evolve and is constantly becoming more detailed and complex. Though this comes at the cost of increased computing requirements, parallel processing coupled with appropriate hardware can minimize computation time. Users of Maya Thermal Model Generator (TMG) are faced with transitioning from NX I-deas to NX Space Systems Thermal (SST). It is important to understand what differences there are when changing software packages; we are looking for consistency in results.

  3. FPGA-Based Fused Smart Sensor for Dynamic and Vibration Parameter Extraction in Industrial Robot Links

    PubMed Central

    Rodriguez-Donate, Carlos; Morales-Velazquez, Luis; Osornio-Rios, Roque Alfredo; Herrera-Ruiz, Gilberto; de Jesus Romero-Troncoso, Rene

    2010-01-01

    Intelligent robotics demands the integration of smart sensors that allow the controller to efficiently measure physical quantities. Industrial manipulator robots require a constant monitoring of several parameters such as motion dynamics, inclination, and vibration. This work presents a novel smart sensor to estimate motion dynamics, inclination, and vibration parameters on industrial manipulator robot links based on two primary sensors: an encoder and a triaxial accelerometer. The proposed smart sensor implements a new methodology based on an oversampling technique, averaging decimation filters, FIR filters, finite differences and linear interpolation to estimate the interest parameters, which are computed online utilizing digital hardware signal processing based on field programmable gate arrays (FPGA). PMID:22319345

  4. Ubiquitin facilitates a quality-control pathway that removes damaged chloroplasts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woodson, Jesse D.; Joens, Matthew S.; Sinson, Andrew B.

    Energy production by chloroplasts and mitochondria causes constant oxidative damage. A functioning photosynthetic cell requires quality-control mechanisms to turn over and degrade chloroplasts damaged by reactive oxygen species (ROS). Here in this study, we generated a conditionally lethal Arabidopsis mutant that accumulated excess protoporphyrin IX in the chloroplast and produced singlet oxygen. Damaged chloroplasts were subsequently ubiquitinated and selectively degraded. A genetic screen identified the plant U-box 4 (PUB4) E3 ubiquitin ligase as being necessary for this process. pub4-6 mutants had defects in stress adaptation and longevity. As a result, we have identified a signal that leads to the targeted removal of ROS-overproducing chloroplasts.

  5. Membrane-trafficking sorting hubs: cooperation between PI4P and small GTPases at the trans-Golgi Network

    PubMed Central

    Santiago-Tirado, Felipe H.; Bretscher, Anthony

    2011-01-01

    Cell polarity in eukaryotes requires constant sorting, packaging, and transport of membrane-bound cargo within the cell. These processes occur in two sorting hubs: the recycling endosome for incoming material, and the trans-Golgi Network for outgoing. Phosphatidylinositol 3-phosphate and 4-phosphate are enriched at the endocytic and exocytic sorting hubs, respectively, where they act together with small GTPases to recruit factors to segregate cargo and regulate carrier formation and transport. In this review, we summarize the current understanding of how these lipids and GTPases directly regulate membrane trafficking, emphasizing the recent discoveries of phosphatidylinositol 4-phosphate functions at the trans-Golgi Network. PMID:21764313

  6. Multi-image mosaic with SIFT and vision measurement for microscale structures processed by femtosecond laser

    NASA Astrophysics Data System (ADS)

    Wang, Fu-Bin; Tu, Paul; Wu, Chen; Chen, Lei; Feng, Ding

    2018-01-01

    In femtosecond laser processing, the field of view of each image frame of the microscale structure is extremely small. In order to obtain the morphology of the whole microstructure, a multi-image mosaic with partially overlapped regions is required. In the present work, the SIFT algorithm for mosaic images was analyzed theoretically, and by using multiple images of a microgroove structure processed by femtosecond laser, a stitched image of the whole groove structure was studied and realized experimentally. The object of our research was a silicon wafer with a microgroove structure ablated by femtosecond laser. First, we obtained microgrooves with a width of 380 μm at different depths. Second, based on the gray image of the microgroove, a multi-image mosaic covering the slot width and slot depth was realized. In order to improve the image contrast between the target and the background, and taking the slot depth image as an example, a multi-image mosaic was then realized using pseudo color enhancement. Third, in order to measure the structural size of the microgroove from the image, a streak of known width ablated by femtosecond laser at 20 mW was used as a calibration sample. Through edge detection, corner extraction, and image correction for the streak images, we calculated the pixel width of the streak image and found the measurement ratio constant Kw in the width direction, and then obtained the proportional relationship between a pixel and a micrometer. Finally, circular spot marks ablated by femtosecond laser at 2 mW and 15 mW were used as test images to confirm that the value of Kw was correct; the measurement ratio constant Kh in the height direction was then obtained, and the image measurement of a microgroove of 380 × 117 μm was realized based on the measurement ratio constants Kw and Kh. The research and experimental results show that the image mosaic, image calibration, and geometric image parameter measurements for the microstructural image ablated by femtosecond laser were realized effectively.
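
    A minimal sketch of the calibration arithmetic described above, assuming the pixel extents have already been obtained by edge detection and corner extraction; the calibration sizes, pixel counts, and resulting ratio constants below are invented for illustration and are not the paper's measurements.

      def ratio_constant(known_size_um, measured_pixels):
          # Measurement ratio constant: micrometers represented by one pixel.
          return known_size_um / measured_pixels

      def measure_um(pixels, k):
          # Convert a pixel distance in the stitched image into micrometers.
          return pixels * k

      # Calibration from features of known size (all numbers invented for illustration).
      kw = ratio_constant(known_size_um=50.0, measured_pixels=166)   # width direction
      kh = ratio_constant(known_size_um=30.0, measured_pixels=100)   # height direction

      # Measuring a microgroove from its pixel extents in the mosaic.
      print("groove width ~ %.0f um" % measure_um(1262, kw))
      print("groove depth ~ %.0f um" % measure_um(390, kh))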

  7. Remote Sensing of Salinity: The Dielectric Constant of Sea Water

    NASA Technical Reports Server (NTRS)

    LeVine, David M.; Lang, R.; Utku, C.; Tarkocin, Y.

    2011-01-01

    Global monitoring of sea surface salinity from space requires an accurate model for the dielectric constant of sea water as a function of salinity and temperature to characterize the emissivity of the surface. Measurements are being made at 1.413 GHz, the center frequency of the Aquarius radiometers, using a resonant cavity and the perturbation method. The cavity is operated in a transmission mode and immersed in a liquid bath to control temperature. Multiple measurements are made at each temperature and salinity. Error budgets indicate a relative accuracy for both real and imaginary parts of the dielectric constant of about 1%.

  8. A nonmonotonic dependence of standard rate constant on reorganization energy for heterogeneous electron transfer processes on electrode surface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu Weilin; Li Songtao; Zhou Xiaochun

    2006-05-07

    In the present work, a nonmonotonic dependence of the standard rate constant (k0) on the reorganization energy (λ) was discovered qualitatively from electron transfer (Marcus-Hush-Levich) theory for heterogeneous electron transfer processes on an electrode surface. It was found that the nonmonotonic dependence of k0 on λ is another result, besides the disappearance of the famous Marcus inverted region, coming from the continuum of electronic states in the electrode: with the increase of λ, the states for both Process I and Process II ET processes vary continuously from the nonadiabatic to the adiabatic regime, and the λ dependence of k0 for Process I is monotonic throughout, while for Process II on an electrode surface the λ dependence of k0 can show a nonmonotonicity.
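
    For orientation, the sketch below evaluates the standard (purely nonadiabatic) Marcus-Hush-Chidsey integral for the zero-overpotential rate at a metal electrode, which is the baseline expression behind the λ dependence discussed above; the prefactor is set to 1, and the nonadiabatic-to-adiabatic crossover that produces the reported nonmonotonicity is not modeled, so this baseline decreases monotonically with λ.

      import numpy as np

      def k0_mhc(lambda_ev, kT_ev=0.0257, prefactor=1.0):
          # Zero-overpotential Marcus-Hush-Chidsey rate (arbitrary units): integrate over
          # electrode electronic states x (eV, relative to the Fermi level), weighting the
          # Marcus Gaussian, centered at x = lambda, by the Fermi occupation factor.
          x = np.linspace(-2.0, 2.0, 4001)
          fermi = 1.0 / (1.0 + np.exp(x / kT_ev))
          gauss = np.exp(-(x - lambda_ev) ** 2 / (4.0 * lambda_ev * kT_ev))
          return prefactor * np.trapz(fermi * gauss, x)

      for lam in (0.2, 0.5, 1.0, 1.5):
          print("lambda = %.1f eV -> k0 ~ %.3e (arb. units)" % (lam, k0_mhc(lam)))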

  9. Bottom-up driven involuntary auditory evoked field change: constant sound sequencing amplifies but does not sharpen neural activity.

    PubMed

    Okamoto, Hidehiko; Stracke, Henning; Lagemann, Lothar; Pantev, Christo

    2010-01-01

    The capability of involuntarily tracking certain sound signals during the simultaneous presence of noise is essential in human daily life. Previous studies have demonstrated that top-down auditory focused attention can enhance excitatory and inhibitory neural activity, resulting in sharpening of frequency tuning of auditory neurons. In the present study, we investigated bottom-up driven involuntary neural processing of sound signals in noisy environments by means of magnetoencephalography. We contrasted two sound signal sequencing conditions: "constant sequencing" versus "random sequencing." Based on a pool of 16 different frequencies, either identical (constant sequencing) or pseudorandomly chosen (random sequencing) test frequencies were presented blockwise together with band-eliminated noises to nonattending subjects. The results demonstrated that the auditory evoked fields elicited in the constant sequencing condition were significantly enhanced compared with the random sequencing condition. However, the enhancement was not significantly different between different band-eliminated noise conditions. Thus the present study confirms that by constant sound signal sequencing under nonattentive listening the neural activity in human auditory cortex can be enhanced, but not sharpened. Our results indicate that bottom-up driven involuntary neural processing may mainly amplify excitatory neural networks, but may not effectively enhance inhibitory neural circuits.

  10. Continuous-flow free acid monitoring method and system

    DOEpatents

    Strain, J.E.; Ross, H.H.

    1980-01-11

    A free acid monitoring method and apparatus is provided for continuously measuring the excess acid present in a process stream. The disclosed monitoring system and method is based on the relationship of the partial pressure ratio of water and acid in equilibrium with an acid solution at constant temperature. A portion of the process stream is pumped into and flows through the monitor under the influence of gravity and back to the process stream. A continuous flowing sample is vaporized at a constant temperature and the vapor is subsequently condensed. Conductivity measurements of the condensate produce a nonlinear response function from which the free acid molarity of the sample process stream is determined.

  11. Continuous-flow free acid monitoring method and system

    DOEpatents

    Strain, James E.; Ross, Harley H.

    1981-01-01

    A free acid monitoring method and apparatus is provided for continuously measuring the excess acid present in a process stream. The disclosed monitoring system and method are based on the relationship of the partial pressure ratio of water and acid in equilibrium with an acid solution at constant temperature. A portion of the process stream is pumped into and flows through the monitor under the influence of gravity and back to the process stream. A continuously flowing sample is vaporized at a constant temperature and the vapor is subsequently condensed. Conductivity measurements of the condensate produce a nonlinear response function from which the free acid molarity of the sample process stream is determined.

  12. Determination of Hamaker constants of polymeric nanoparticles in organic solvents by asymmetrical flow field-flow fractionation.

    PubMed

    Noskov, Sergey; Scherer, Christian; Maskos, Michael

    2013-01-25

    Interaction forces between objects are either repulsive or attractive in nature. Among attractive interactions, the determination of dispersion forces is of special interest since they appear in all colloidal systems and have a crucial influence on the properties and processes in these systems. One possibility to link theory and experiment is to describe the London-van der Waals forces in terms of the Hamaker constant, which leads to the challenging problem of calculating the van der Waals interaction energies between colloidal particles. Hence, the determination of a Hamaker constant for a given material is needed when interfacial phenomena such as adhesion are discussed in terms of the total potential energy between particles and substrates. In this work, asymmetrical flow field-flow fractionation (AF-FFF) in combination with a Newton-algorithm-based iteration process was used to determine the Hamaker constants of different nanoparticles in toluene. Copyright © 2012 Elsevier B.V. All rights reserved.
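
    As a point of reference for how a Hamaker constant enters such discussions, the following sketch evaluates the standard nonretarded van der Waals pair energy between two spheres at close approach; the particle radii, separation and Hamaker constant are illustrative assumptions, not values from the AF-FFF analysis.

        def vdw_sphere_sphere(A_hamaker, r1, r2, d):
            """Nonretarded van der Waals attraction energy (J) between two spheres of
            radii r1, r2 (m) at surface separation d (m), in the close-approach limit."""
            return -A_hamaker * r1 * r2 / (6.0 * d * (r1 + r2))

        # illustrative numbers: A ~ 1e-20 J, 50 nm particles, 2 nm separation
        print(vdw_sphere_sphere(1e-20, 50e-9, 50e-9, 2e-9))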

  13. Photothermal damage is correlated to the delivery rate of time-integrated temperature

    NASA Astrophysics Data System (ADS)

    Denton, Michael L.; Noojin, Gary D.; Gamboa, B. Giovanna; Ahmed, Elharith M.; Rockwell, Benjamin A.

    2016-03-01

    Photothermal damage rate processes in biological tissues are usually characterized by a kinetics approach. This stems from experimental data that show how the transformation of a specified biological property of cells or biomolecule (plating efficiency for viability, change in birefringence, tensile strength, etc.) is dependent upon both time and temperature. However, kinetic methods require determination of kinetic rate constants and knowledge of substrate or product concentrations during the reaction. To better understand photothermal damage processes we have identified temperature histories of cultured retinal cells receiving minimum lethal thermal doses for a variety of laser and culture parameters. These "threshold" temperature histories are of interest because they inherently contain information regarding the fundamental thermal dose requirements for damage in individual cells. We introduce the notion of time-integrated temperature (Tint) as an accumulated thermal dose (ATD) with units of °C s. Damaging photothermal exposure raises the rate of ATD accumulation from that of the ambient (e.g. 37 °C) to one that correlates with cell death (e.g. 52 °C). The degree of rapid increase in ATD (ΔATD) during photothermal exposure depends strongly on the laser exposure duration and the ambient temperature.
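
    Under our reading of the abstract, the accumulated thermal dose is simply the time integral of temperature, and ΔATD is its excess over the ambient baseline; the short sketch below illustrates that bookkeeping on a synthetic temperature history (all numbers are placeholders).

        import numpy as np

        def accumulated_thermal_dose(t_s, temp_C):
            """Time-integrated temperature (degC*s) over a temperature history."""
            return np.trapz(temp_C, t_s)

        # synthetic 2 s exposure raising a 37 degC baseline toward a 52 degC plateau
        t = np.linspace(0.0, 2.0, 201)
        temp = 37.0 + 15.0 * (1.0 - np.exp(-t / 0.2))
        atd = accumulated_thermal_dose(t, temp)
        delta_atd = atd - 37.0 * (t[-1] - t[0])   # excess over ambient accumulation
        print(atd, delta_atd)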

  14. First LHCb measurement with data from the LHC Run 2

    NASA Astrophysics Data System (ADS)

    Anderlini, L.; Amerio, S.

    2017-01-01

    LHCb has recently introduced a novel real-time detector alignment and calibration strategy for Run 2. Data collected at the start of each LHC fill are processed within a few minutes and used to update the alignment, while the calibration constants are evaluated for each run of data taking. An increase in the CPU and disk capacity of the event filter farm, combined with improvements to the reconstruction software, allows for efficient, exclusive selections already in the first stage of the High Level Trigger (HLT1), while the second stage, HLT2, performs complete, offline-quality event reconstruction. In Run 2, LHCb will collect the largest data sample of charm mesons ever recorded. Novel data processing and analysis techniques are required to maximise the physics potential of this data sample with the available computing resources, taking into account data preservation constraints. In this write-up, we describe the full analysis chain used to obtain important results from the data collected in proton-proton collisions in 2015, such as the J/ψ and open charm production cross-sections, and consider the further steps required to obtain real-time results after the LHCb upgrade.

  15. [The basis of modern technologies in management of health care system].

    PubMed

    Nemytin, Iu V

    2014-12-01

    The development of national health care requires the implementation of modern and effective methods and forms of governance. It is necessary to clearly define the transition to process management, followed by the introduction of quality management of care, and to create a complete version of the three-level health care system based on integration into the chain "Clinic - Hospital - Rehabilitation", which will ensure resource conservation across the industry as a whole. The most important task is purposeful, comprehensive management training for health care leaders who have the potential ability to manage. A leader must command all forms of management and apply them on a scientific basis. Standards and other tools of health management should be improved constantly; they should serve as a teaching tool and help to improve the quality and effectiveness of treatment processes and the transition to single-channel financing, the most advanced form of payment for medical assistance. This type of financing requires managers to adopt new management approaches and to understand business economics. One of the breakthrough objectives is the creation of a new type of health care organization to act as a locomotive leading the rest.

  16. Fractional derivatives in the diffusion process in heterogeneous systems: The case of transdermal patches.

    PubMed

    Caputo, Michele; Cametti, Cesare

    2017-09-01

    In this note, we present a simple mathematical model of drug delivery through transdermal patches by introducing a memory formalism, based on the fractional derivative, into the classical Fick diffusion equation. The approach is developed for a medicated adhesive patch placed on the skin to deliver a time-released dose of medication through the skin towards the bloodstream. The main resistance to drug transport across the skin resides in diffusion through its outermost layer (the stratum corneum). Due to the complicated architecture of this region, a model based on a constant diffusivity in a steady-state condition rests on too simplistic assumptions, and more refined models are required. The introduction of a memory formalism in the diffusion process, where the diffusion parameters at a given time or position depend on what happened at preceding times, meets this requirement and allows a significantly better description of the experimental results. The present model may be useful not only for analyzing the rate of skin permeation but also for predicting the drug concentration after transdermal drug delivery, depending on the diffusion characteristics of the patch (its thickness and pseudo-diffusion coefficient). Copyright © 2017 Elsevier Inc. All rights reserved.
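
    To make the memory formalism concrete, the sketch below implements the common L1 discretization of a Caputo fractional derivative of order 0 < alpha < 1 on a uniform grid. It is a generic numerical scheme, not the authors' transdermal-patch model, and the sanity check at the end uses an arbitrary test function.

        import numpy as np
        from math import gamma

        def caputo_l1(f, dt, alpha):
            """L1 approximation of the Caputo derivative D^alpha f at each grid point
            (0 < alpha < 1), for samples f[0..n-1] taken every dt."""
            n = len(f)
            out = np.zeros(n)
            c = dt ** (-alpha) / gamma(2.0 - alpha)
            for m in range(1, n):
                s = 0.0
                for k in range(m):
                    b = (k + 1) ** (1.0 - alpha) - k ** (1.0 - alpha)
                    s += b * (f[m - k] - f[m - k - 1])
                out[m] = c * s
            return out

        # sanity check: for f(t) = t, the Caputo derivative is t^(1-alpha)/Gamma(2-alpha)
        t = np.linspace(0.0, 1.0, 101)
        print(caputo_l1(t, t[1] - t[0], 0.5)[-1], t[-1] ** 0.5 / gamma(1.5))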

  17. Demonstration of a Large-Scale Tank Assembly via Circumferential Friction Stir Welds

    NASA Technical Reports Server (NTRS)

    Jones, Clyde S.; Adams, Glynn; Colligan, Kevin

    2000-01-01

    A collaborative effort between NASA/Marshall Space Flight Center and the Michoud Unit of Lockheed Martin Space Systems Company was undertaken to demonstrate assembly of a large-scale aluminum tank using circumferential friction stir welds. The hardware used to complete this demonstration was fabricated as a study of near-net-shape technologies. The tooling used to complete this demonstration was originally designed for assembly of a tank using fusion weld processes. This presentation describes the modifications and additions that were made to the existing fusion welding tools required to accommodate circumferential friction stir welding, as well as the process used to assemble the tank. The tooling modifications include design, fabrication and installation of several components. The most significant components include a friction stir weld unit with adjustable pin length capabilities, a continuous internal anvil for 'open' circumferential welds, a continuous closeout anvil, clamping systems, an external reaction system and the control system required to conduct the friction stir welds and integrate the operation of the tool. The demonstration was intended as a development task. The experience gained during each circumferential weld was applied to improve subsequent welds. Both constant and tapered thickness 14-foot diameter circumferential welds were successfully demonstrated.

  18. Mitochondrial fragmentation in excitotoxicity requires ROCK activation.

    PubMed

    Martorell-Riera, Alejandro; Segarra-Mondejar, Marc; Reina, Manuel; Martínez-Estrada, Ofelia M; Soriano, Francesc X

    2015-01-01

    Mitochondrial morphology constantly changes through fission and fusion processes that regulate mitochondrial function, and it therefore plays a prominent role in cellular homeostasis. Cell death progression is associated with mitochondrial fission. Fission is mediated by the mainly cytoplasmic Drp1, which is activated by different post-translational modifications and recruited to mitochondria to perform its function. Our research and other studies have shown that in the early moments of excitotoxic insult, Drp1 must be nitrosylated to mediate mitochondrial fragmentation in neurons. Nonetheless, mitochondrial fission is a multistep process in which filamentous actin assembly/disassembly and myosin-mediated mitochondrial constriction play prominent roles. Here we establish that in addition to nitric oxide production, excitotoxicity-induced mitochondrial fragmentation also requires activation of the actomyosin regulator ROCK. Although ROCK1 has been shown to phosphorylate and activate Drp1, experiments using phospho-mutant forms of Drp1 in primary cortical neurons indicate that in excitotoxic conditions, ROCK does not act directly on Drp1 to mediate fission, but may act on the actomyosin complex. Thus, these data indicate that a wider range of signaling pathways than those targeting Drp1 can be inhibited to prevent mitochondrial fragmentation as a therapeutic option.

  19. Cyclically optimized electrochemical processes

    NASA Astrophysics Data System (ADS)

    Ruedisueli, Robert Louis

    It has frequently been observed in experiments and in industrial practice that electrochemical processes (deposition, dissolution, fuel cells) operated in an intermittent or cyclic (AC) mode show improvements in efficiency and/or quality and yield over their steady (DC) mode of operation. Whether rationally invoked by design or empirically tuned in, the optimal operating frequency and duty cycle depend upon the dominant relaxation time constant for the process in question. The electrochemical relaxation time constant is a function of: double-layer and reaction-intermediary pseudo-capacitances, ion (charge) transport via electrical migration (mobility), and diffusion across a concentration gradient to electrode surface reaction sites where charge transfer and species incorporation or elimination occur. The rate-determining step dominates the time constant for the reaction or process. Electrochemical impedance spectroscopy (EIS) and piezoelectric crystal electrode (PCE) response analysis have proven to be useful tools in the study and identification of reaction mechanisms. This work explains and demonstrates, using the electrodeposition of copper, the application of EIS and PCE measurement and analysis to the selection of an optimum cyclic operating schedule and an optimum driving frequency for efficient, sustained cyclic (pulsed) operation.
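
    As a rough illustration of how a dominant relaxation time constant can guide the choice of driving frequency, the sketch below combines a charge-transfer resistance and a double-layer capacitance into an RC time constant and suggests a characteristic frequency near 1/(2*pi*tau). The component values and the rule of thumb are assumptions for illustration, not results from this work.

        import math

        def relaxation_time_constant(R_ct_ohm, C_dl_farad):
            """Dominant RC relaxation time constant from charge-transfer resistance
            and double-layer capacitance (a common reading of an EIS semicircle)."""
            return R_ct_ohm * C_dl_farad

        def suggested_pulse_frequency(tau_s):
            """Characteristic frequency 1/(2*pi*tau) as a starting point for tuning
            the cyclic (pulsed) operating schedule experimentally."""
            return 1.0 / (2.0 * math.pi * tau_s)

        tau = relaxation_time_constant(10.0, 20e-6)   # placeholder: 10 ohm, 20 uF
        print(tau, suggested_pulse_frequency(tau))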

  20. Fabrication of amorphous InGaZnO thin-film transistor with solution processed SrZrO3 gate insulator

    NASA Astrophysics Data System (ADS)

    Takahashi, Takanori; Oikawa, Kento; Hoga, Takeshi; Uraoka, Yukiharu; Uchiyama, Kiyoshi

    2017-10-01

    In this paper, we describe a method of fabricating thin-film transistors (TFTs) with a high dielectric constant (high-k) gate insulator deposited from solution. We chose solution-processed SrZrO3 as the gate insulator material, which possesses a high dielectric constant of 21 with a smooth surface. The IGZO-TFT with solution-processed SrZrO3 showed good switching behaviour and adequate saturation, i.e. a field-effect mobility of 1.7 cm²/V·s, a threshold voltage of 4.8 V, a sub-threshold swing of 147 mV/decade, and an on/off ratio of 2.3 × 10⁷. Compared to TFTs with a conventional SiO2 gate insulator, the sub-threshold swing was improved by the smooth surface and the high field effect due to the high dielectric constant of SrZrO3. These results clearly show that using a solution-processed high-k SrZrO3 gate insulator can improve the sub-threshold swing. In addition, residual carbon originating from the organic precursors degrades TFT performance.

  1. Enhanced sonochemical degradation of azure B dye by the electroFenton process.

    PubMed

    Martínez, Susana Silva; Uribe, Edgar Velasco

    2012-01-01

    The degradation of azure B dye (C15H16ClN3S; AB) has been studied by Fenton, sonolysis and sono-electroFenton processes, employing ultrasound at 23 kHz and the electrogeneration of H2O2 at a reticulated vitreous carbon electrode. It was found that the dye degradation followed apparent first-order kinetics in all the degradation processes tested. The rate constant was affected by both the pH of the solution and the initial concentration of Fe2+, with the highest degradation obtained at pH between 2.6 and 3. The first-order rate constant decreased in the following order: sono-electroFenton > Fenton > sonolysis. The rate constant for AB degradation by sono-electroFenton is ∼10-fold that of sonolysis and ∼2-fold the one obtained by Fenton under silent conditions. The chemical oxygen demand was abated by ∼68% and ∼85% by Fenton and sono-electroFenton, respectively, achieving AB concentration removal over 90% with both processes. Copyright © 2011 Elsevier B.V. All rights reserved.
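
    Apparent first-order rate constants of the kind reported here are typically extracted from a plot of ln(C0/C) against time. The sketch below shows that fit on synthetic concentration data; the rate constant and sampling times are placeholders, not the study's measurements.

        import numpy as np

        def first_order_rate_constant(t_min, conc):
            """Apparent first-order rate constant (1/min) from the slope of
            ln(C0/C) versus time."""
            y = np.log(conc[0] / np.asarray(conc))
            k, _ = np.polyfit(t_min, y, 1)
            return k

        # synthetic decay with k = 0.05 1/min
        t = np.array([0.0, 5.0, 10.0, 20.0, 30.0])
        c = 30.0 * np.exp(-0.05 * t)
        print(first_order_rate_constant(t, c))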

  2. Influence of exchange group of modified glycidyl methacrylate polymer on phenol removal: A study by batch and continuous flow processes.

    PubMed

    Aversa, Thiago Muza; da Silva, Carla Michele Frota; da Rocha, Paulo Cristiano Silva; Lucas, Elizabete Fernandes

    2016-11-01

    Contamination of water by phenol is potentially a serious problem due to its high toxicity and its acid character. Consequently, a treatment process to remove or reduce the phenol concentration is required before contaminated water is discharged to the environment. Currently, phenol can be removed by charcoal adsorption, but this process does not allow easy regeneration of the adsorbent. In contrast, polymeric resins are easily regenerated and can be reused in further adsorption cycles. In this work, the interaction of phenol with two polymeric resins was investigated, one containing a weakly basic anionic exchange group (GD-DEA) and the other a strongly basic group (GD-QUAT). Both ion exchange resins were obtained through chemical modification of a porous base resin composed of glycidyl methacrylate (GMA) and divinyl benzene (DVB). Evaluation tests with the resins were carried out with 30 mg/L of phenol in aqueous solution, at pH 6 and 10, employing two distinct processes: (i) batch, to evaluate the effect of temperature, and (ii) continuous flow, to assess the breakthrough of the resins. Batch tests revealed that the systems did not follow the Langmuir model, given the negative values obtained for the constant b and for the maximum adsorption capacity, Q0. However, satisfactory results for the constants KF and n indicated that the systems followed the Freundlich model, leading to the conclusion that resin GD-DEA had the best interaction with phenol in solution at pH 10 (phenoxide ions). The continuous flow tests corroborated this conclusion, since the performance of GD-DEA in removing phenol was also best at pH 10, indicating that the greater availability of the electron pair in the resin with the weakly basic donor group enhanced the resin's interaction with the phenoxide ions. Copyright © 2016 Elsevier Ltd. All rights reserved.
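
    The two isotherm fits referred to in the abstract are commonly done in their linearized forms, Ce/qe = 1/(Q0*b) + Ce/Q0 for Langmuir and ln qe = ln KF + (1/n)*ln Ce for Freundlich. The sketch below performs both fits on synthetic equilibrium data; all concentrations and loadings are placeholders.

        import numpy as np

        def langmuir_fit(ce, qe):
            """Linearized Langmuir fit: Ce/qe = 1/(Q0*b) + Ce/Q0 -> returns (Q0, b)."""
            slope, intercept = np.polyfit(ce, ce / qe, 1)
            q0 = 1.0 / slope
            b = 1.0 / (intercept * q0)
            return q0, b

        def freundlich_fit(ce, qe):
            """Linearized Freundlich fit: ln qe = ln KF + (1/n) ln Ce -> returns (KF, n)."""
            slope, intercept = np.polyfit(np.log(ce), np.log(qe), 1)
            return np.exp(intercept), 1.0 / slope

        ce = np.array([2.0, 5.0, 10.0, 20.0, 30.0])   # equilibrium concentrations, mg/L (synthetic)
        qe = 3.0 * ce ** 0.6                          # loadings, mg/g (Freundlich-like synthetic data)
        print(langmuir_fit(ce, qe))
        print(freundlich_fit(ce, qe))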

  3. Synchrotron radiation x-ray photoelectron spectroscopy study on the interface chemistry of high-k PrxAl2-xO3 (x=0-2) dielectrics on TiN for dynamic random access memory applications

    NASA Astrophysics Data System (ADS)

    Schroeder, T.; Lupina, G.; Sohal, R.; Lippert, G.; Wenger, Ch.; Seifarth, O.; Tallarida, M.; Schmeisser, D.

    2007-07-01

    Engineered dielectrics combined with compatible metal electrodes are important materials science approaches to scale three-dimensional trench dynamic random access memory (DRAM) cells. Highly insulating dielectrics with high dielectric constants were engineered in this study on TiN metal electrodes by partly substituting Al in the wide band gap insulator Al2O3 by Pr cations. High quality PrAlO3 metal-insulator-metal capacitors were processed with a dielectric constant of 19, three times higher than in the case of Al2O3 reference cells. As a parasitic low dielectric constant interface layer between PrAlO3 and TiN limits the total performance gain, a systematic nondestructive synchrotron x-ray photoelectron spectroscopy study on the interface chemistry of PrxAl2-xO3 (x =0-2) dielectrics on TiN layers was applied to unveil its chemical origin. The interface layer results from the decreasing chemical reactivity of PrxAl2-xO3 dielectrics with increasing Pr content x to reduce native Ti oxide compounds present on unprotected TiN films. Accordingly, PrAlO3 based DRAM capacitors require strict control of the surface chemistry of the TiN electrode, a parameter furthermore of importance to engineer the band offsets of PrxAl2-xO3/TiN heterojunctions.

  4. A distributed, dynamic, parallel computational model: the role of noise in velocity storage

    PubMed Central

    Merfeld, Daniel M.

    2012-01-01

    Networks of neurons perform complex calculations using distributed, parallel computation, including dynamic “real-time” calculations required for motion control. The brain must combine sensory signals to estimate the motion of body parts using imperfect information from noisy neurons. Models and experiments suggest that the brain sometimes optimally minimizes the influence of noise, although it remains unclear when and precisely how neurons perform such optimal computations. To investigate, we created a model of velocity storage based on a relatively new technique–“particle filtering”–that is both distributed and parallel. It extends existing observer and Kalman filter models of vestibular processing by simulating the observer model many times in parallel with noise added. During simulation, the variance of the particles defining the estimator state is used to compute the particle filter gain. We applied our model to estimate one-dimensional angular velocity during yaw rotation, which yielded estimates for the velocity storage time constant, afferent noise, and perceptual noise that matched experimental data. We also found that the velocity storage time constant was Bayesian optimal by comparing the estimate of our particle filter with the estimate of the Kalman filter, which is optimal. The particle filter demonstrated a reduced velocity storage time constant when afferent noise increased, which mimics what is known about aminoglycoside ablation of semicircular canal hair cells. This model helps bridge the gap between parallel distributed neural computation and systems-level behavioral responses like the vestibuloocular response and perception. PMID:22514288
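
    For readers unfamiliar with the technique, the sketch below is a heavily simplified bootstrap particle filter for a one-dimensional leaky-integrator state observed with noise; the dynamics, noise levels and time constant are placeholder assumptions and do not reproduce the authors' vestibular observer model.

        import numpy as np

        rng = np.random.default_rng(0)

        def particle_filter_1d(obs, n_particles=500, tau=15.0, dt=0.05,
                               process_sd=0.2, obs_sd=0.5):
            """Bootstrap particle filter for a 1-D leaky-integrator state
            x_{k+1} = x_k * (1 - dt/tau) + process noise, observed with additive noise."""
            particles = rng.normal(obs[0], obs_sd, n_particles)
            estimates = []
            for z in obs:
                # propagate each particle through the noisy dynamics
                particles = particles * (1.0 - dt / tau) + rng.normal(0.0, process_sd, n_particles)
                # weight by measurement likelihood, then resample
                w = np.exp(-0.5 * ((z - particles) / obs_sd) ** 2)
                w /= w.sum()
                particles = rng.choice(particles, size=n_particles, p=w)
                estimates.append(particles.mean())
            return np.array(estimates)

        # synthetic decaying angular-velocity signal observed with noise
        t = np.arange(0.0, 30.0, 0.05)
        truth = 60.0 * np.exp(-t / 15.0)
        obs = truth + rng.normal(0.0, 0.5, t.size)
        print(particle_filter_1d(obs)[:5])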

  5. Rethinking the connection between working memory and language impairment.

    PubMed

    Archibald, Lisa M D; Harder Griebeling, Katherine

    2016-05-01

    Working memory deficits have been found for children with specific language impairment (SLI) on tasks imposing increasing short-term memory load with or without additional, consistent (and simple) processing load. To examine the processing function of working memory in children with low language (LL) by employing tasks imposing increasing processing loads with constant storage demands individually adjusted based on each participant's short-term memory capacity. School-age groups with LL (n = 17) and typical language with either average (n = 28) or above-average nonverbal intelligence (n = 15) completed complex working memory-span tasks varying processing load while keeping storage demands constant, varying storage demands while keeping processing load constant, simple storage-span tasks, and measures of language and nonverbal intelligence. Teachers completed questionnaires about cognition and learning. Significantly lower scores were found for the LL than either matched group on storage-based tasks, but no group differences were found on the tasks varying processing load. Teachers' ratings of oral expression and mathematics abilities discriminated those who did or did not complete the most challenging cognitive tasks. The results implicate a deficit in the phonological storage but not in the central executive component of working memory for children with LL. Teacher ratings may reveal personality traits related to perseverance of effort in cognitive research. © 2015 Royal College of Speech and Language Therapists.

  6. Uniform spatial distribution of collagen fibril radii within tendon implies local activation of pC-collagen at individual fibrils

    NASA Astrophysics Data System (ADS)

    Rutenberg, Andrew D.; Brown, Aidan I.; Kreplak, Laurent

    2016-08-01

    Collagen fibril cross-sectional radii show no systematic variation between the interior and the periphery of fibril bundles, indicating an effectively constant rate of collagen incorporation into fibrils throughout the bundle. Such spatially homogeneous incorporation constrains the extracellular diffusion of collagen precursors from sources at the bundle boundary to sinks at the growing fibrils. With a coarse-grained diffusion equation we determine stringent bounds, using parameters extracted from published experimental measurements of tendon development. From the lack of new fibril formation after birth, we further require that the concentration of diffusing precursors stays below the critical concentration for fibril nucleation. We find that the combination of the diffusive bound, which requires larger concentrations to ensure homogeneous fibril radii, and lack of nucleation, which requires lower concentrations, is only marginally consistent with fully processed collagen using conservative bounds. More realistic bounds may leave no consistent concentrations. Therefore, we propose that unprocessed pC-collagen diffuses from the bundle periphery followed by local C-proteinase activity and subsequent collagen incorporation at each fibril. We suggest that C-proteinase is localized within bundles, at fibril surfaces, during radial fibrillar growth. The much greater critical concentration of pC-collagen, as compared to fully processed collagen, then provides broad consistency between homogeneous fibril radii and the lack of fibril nucleation during fibril growth.

  7. Change management methodologies trained for automotive infotainment projects

    NASA Astrophysics Data System (ADS)

    Prostean, G.; Volker, S.; Hutanu, A.

    2017-01-01

    An automotive Electronic Control Unit (ECU) development project embedded within a car environment is constantly under attack from a continuous flow of specification changes throughout the life cycle. Root causes for these modifications include software or hardware implementation errors and requirement changes made to satisfy forthcoming market demands and ensure later commercial success. It is unavoidable that, from the very beginning until the end of the project, such requirement changes will challenge the agreed objectives defined by the contract specifications: product features, budget, schedule and quality. The key discussion focuses on an automotive radio-navigation (infotainment) unit, which competes with aftermarket devices such as smartphones. This competition particularly stresses currently used automotive development processes, which fit a four-year car development (introduction) cycle, against the one-year update cycle of a smartphone. The research investigates the possible impacts of changes during all phases of the project: the concept-validation, development and debugging phases. Building a thorough understanding of prospective threats is of paramount importance in order to establish an adequate project management process for handling requirement changes. Personal automotive development experience and a literature review of change- and configuration-management software development methodologies led the authors to new conceptual models, which integrate into the structure of traditional development models used in automotive projects, more concretely in radio-navigation projects.

  8. Action Video Gaming and Cognitive Control: Playing First Person Shooter Games Is Associated with Improved Action Cascading but Not Inhibition

    PubMed Central

    Steenbergen, Laura; Sellaro, Roberta; Stock, Ann-Kathrin; Beste, Christian; Colzato, Lorenza S.

    2015-01-01

    There is a constantly growing interest in developing efficient methods to enhance cognitive functioning and/or to ameliorate cognitive deficits. One particular line of research focuses on the possibly cognitive enhancing effects that action video game (AVG) playing may have on game players. Interestingly, AVGs, especially first person shooter games, require gamers to develop different action control strategies to rapidly react to fast moving visual and auditory stimuli, and to flexibly adapt their behaviour to the ever-changing context. This study investigated whether and to what extent experience with such videogames is associated with enhanced performance on cognitive control tasks that require similar abilities. Experienced action videogame-players (AVGPs) and individuals with little to no videogame experience (NVGPs) performed a stop-change paradigm that provides a relatively well-established diagnostic measure of action cascading and response inhibition. Replicating previous findings, AVGPs showed higher efficiency in response execution, but not improved response inhibition (i.e. inhibitory control), as compared to NVGPs. More importantly, compared to NVGPs, AVGPs showed enhanced action cascading processes when an interruption (stop) and a change towards an alternative response were required simultaneously, as well as when such a change had to occur after the completion of the stop process. Our findings suggest that playing AVGs is associated with enhanced action cascading and multi-component behaviour without affecting inhibitory control. PMID:26655929

  9. Action Video Gaming and Cognitive Control: Playing First Person Shooter Games Is Associated with Improved Action Cascading but Not Inhibition.

    PubMed

    Steenbergen, Laura; Sellaro, Roberta; Stock, Ann-Kathrin; Beste, Christian; Colzato, Lorenza S

    2015-01-01

    There is a constantly growing interest in developing efficient methods to enhance cognitive functioning and/or to ameliorate cognitive deficits. One particular line of research focuses on the possibly cognitive enhancing effects that action video game (AVG) playing may have on game players. Interestingly, AVGs, especially first person shooter games, require gamers to develop different action control strategies to rapidly react to fast moving visual and auditory stimuli, and to flexibly adapt their behaviour to the ever-changing context. This study investigated whether and to what extent experience with such videogames is associated with enhanced performance on cognitive control tasks that require similar abilities. Experienced action videogame-players (AVGPs) and individuals with little to no videogame experience (NVGPs) performed a stop-change paradigm that provides a relatively well-established diagnostic measure of action cascading and response inhibition. Replicating previous findings, AVGPs showed higher efficiency in response execution, but not improved response inhibition (i.e. inhibitory control), as compared to NVGPs. More importantly, compared to NVGPs, AVGPs showed enhanced action cascading processes when an interruption (stop) and a change towards an alternative response were required simultaneously, as well as when such a change had to occur after the completion of the stop process. Our findings suggest that playing AVGs is associated with enhanced action cascading and multi-component behaviour without affecting inhibitory control.

  10. Identification of elastic, dielectric, and piezoelectric constants in piezoceramic disks.

    PubMed

    Perez, Nicolas; Andrade, Marco A B; Buiochi, Flavio; Adamowski, Julio C

    2010-12-01

    Three-dimensional modeling of piezoelectric devices requires a precise knowledge of piezoelectric material parameters. The commonly used piezoelectric materials belong to the 6mm symmetry class, which have ten independent constants. In this work, a methodology to obtain precise material constants over a wide frequency band through finite element analysis of a piezoceramic disk is presented. Given an experimental electrical impedance curve and a first estimate for the piezoelectric material properties, the objective is to find the material properties that minimize the difference between the electrical impedance calculated by the finite element method and that obtained experimentally by an electrical impedance analyzer. The methodology consists of four basic steps: experimental measurement, identification of vibration modes and their sensitivity to material constants, a preliminary identification algorithm, and final refinement of the material constants using an optimization algorithm. The application of the methodology is exemplified using a hard lead zirconate titanate piezoceramic. The same methodology is applied to a soft piezoceramic. The errors in the identification of each parameter are statistically estimated in both cases, and are less than 0.6% for elastic constants, and less than 6.3% for dielectric and piezoelectric constants.
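
    The final refinement step described above can be pictured as a generic least-squares problem: adjust the material constants until a modelled impedance curve matches the measured one. In the sketch below the function model_impedance is a hypothetical stand-in for the finite element computation, and all parameter names and values are invented for illustration.

        import numpy as np
        from scipy.optimize import least_squares

        def model_impedance(params, freqs):
            """Placeholder for the finite-element impedance model: maps trial material
            constants to a predicted |Z|(f) curve. Hypothetical stand-in only."""
            c33, eps33, e33 = params
            return c33 / (1.0 + (freqs / (eps33 * 1e6)) ** 2) + e33 * np.log(freqs)

        def identify_constants(freqs, z_measured, initial_guess):
            """Refine material constants by minimizing the model-vs-measurement misfit."""
            residual = lambda p: model_impedance(p, freqs) - z_measured
            return least_squares(residual, initial_guess).x

        freqs = np.linspace(1e5, 1e6, 50)
        z_meas = model_impedance([1.2, 0.8, 0.3], freqs)   # synthetic "measurement"
        print(identify_constants(freqs, z_meas, [1.0, 1.0, 0.5]))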

  11. When is the growth index constant?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polarski, David; Starobinsky, Alexei A.; Giacomini, Hector, E-mail: david.polarski@umontpellier.fr, E-mail: alstar@landau.ac.ru, E-mail: hector.giacomini@lmpt.univ-tours.fr

    The growth index γ is an interesting tool to assess the phenomenology of dark energy (DE) models, in particular of those beyond general relativity (GR). We investigate the possibility for DE models to allow for a constant γ during the entire matter and DE dominated stages. It is shown that if DE is described by quintessence (a scalar field minimally coupled to gravity), this behaviour of γ is excluded either because it would require a transition to a phantom behaviour at some finite moment of time, or, in the case of tracking DE at the matter dominated stage, because the relative matter density Ω_m appears to be too small. An infinite number of solutions, with Ω_m and γ both constant, are found with w_DE = 0 corresponding to Einstein-de Sitter universes. For all modified gravity DE models satisfying G_eff ≥ G, among them the f(R) DE models suggested in the literature, the condition to have a constant w_DE is strongly violated at the present epoch. In contrast, DE tracking dust-like matter deep in the matter era, but with Ω_m < 1, requires G_eff > G, and an example is given using scalar-tensor gravity for a range of admissible values of γ. For constant w_DE inside GR, departure from a quasi-constant value is limited until today. Even a large variation of w_DE may not result in a clear signature in the change of γ. The change however is substantial in the future, and the asymptotic value of γ is found, while its slope with respect to Ω_m (and with respect to z) diverges and tends to −∞.

  12. Processing of Al2O3/SrTiO3/PDMS Composites With Low Dielectric Loss

    NASA Astrophysics Data System (ADS)

    Yao, J. L.; Guo, M. J.; Qi, Y. B.; Zhu, H. X.; Yi, R. Y.; Gao, L.

    2018-05-01

    Polydimethylsiloxane (PDMS) is widely used in the electrical and electronic industries due to its excellent electrical insulation and biocompatible characteristics. However, the dielectric constant of pure PDMS is very low, which restricts its applications. Herein, we report a series of PDMS/Al2O3/strontium titanate (ST) composites with high dielectric constant and low loss prepared by a simple experimental method. The composites exhibit a higher dielectric constant (relative dielectric constant of 4) after the composites are coated with insulating Al2O3 particles, and the dielectric constant is further improved for composites with ST particles (reaching 15.5); a low dielectric loss (tan δ = 0.05) is found at the same time, which makes the co-filler composites suitable for electrical insulation products and makes the experimental method interesting for modern teaching.

  13. 40 CFR 281.32 - General operating requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... constantly; (b) Where equipped with cathodic protection, be operated and maintained by a person with... 40 Protection of Environment 26 2010-07-01 2010-07-01 false General operating requirements. 281.32 Section 281.32 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES...

  14. 40 CFR 281.32 - General operating requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... constantly; (b) Where equipped with cathodic protection, be operated and maintained by a person with... 40 Protection of Environment 27 2011-07-01 2011-07-01 false General operating requirements. 281.32 Section 281.32 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES...

  15. Solving gas-processing problems. Part 8. Expansion processes, turboexpander efficiency vital for predicting liquid-recovery levels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erbar, J.H.; Maddox, R.N.

    1981-07-06

    Expansion processes, using either Joule-Thomson or isentropic principles, play an important role in the processing of natural gas streams for liquid recovery and/or hydrocarbon-dewpoint control. Constant-enthalpy expansion has been an integral part of gas processing schemes for many years. The constant-entropy, or isentropic, process is more recent but has achieved widespread popularity. In typical flow sheets for expansion processes, the expansion device is shown as a valve or choke. It could also be an expansion turbine, indicating an isentropic expansion. The expansion may be to lower pressure; or, in the case of turboexpansion, it could recover material or produce work. More frequently, the aim of the expansion is to produce low temperature and enhance liquid recovery.

  16. The value of mechanistic biophysical information for systems-level understanding of complex biological processes such as cytokinesis.

    PubMed

    Pollard, Thomas D

    2014-12-02

    This review illustrates the value of quantitative information including concentrations, kinetic constants and equilibrium constants in modeling and simulating complex biological processes. Although much has been learned about some biological systems without these parameter values, they greatly strengthen mechanistic accounts of dynamical systems. The analysis of muscle contraction is a classic example of the value of combining an inventory of the molecules, atomic structures of the molecules, kinetic constants for the reactions, reconstitutions with purified proteins and theoretical modeling to account for the contraction of whole muscles. A similar strategy is now being used to understand the mechanism of cytokinesis using fission yeast as a favorable model system. Copyright © 2014 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  17. A Method to Estimate the Masses of Asymptotic Giant Branch Variable Stars

    NASA Astrophysics Data System (ADS)

    Takeuti, Mine; Nakagawa, Akiharu; Kurayama, Tomoharu; Honma, Mareki

    2013-06-01

    AGB variable stars are at the transient phase between low and high mass-loss rates; estimating the masses of these stars is necessary to study the evolutionary processes and mass-loss processes during the AGB stage. We applied the pulsation constant theoretically derived by Xiong and Deng (2007 MNRAS, 378, 1270) to 15 galactic AGB stars in order to estimate their masses. We found that using the pulsation constant is effective to estimate the mass of a star pulsating with two different pulsation modes, such as S Crt and RX Boo, which provides mass estimates comparable to theoretical results of AGB star evolution. We also extended the use of the pulsation constant to single-mode variables, and analyzed the properties of AGB stars related to their masses.
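
    A hedged sketch of the kind of relation involved: the classical pulsation constant Q = P*sqrt(M/R^3) (with P in days and M, R in solar units) can be inverted for the mass once Q is known for the relevant pulsation mode. The Q value and stellar parameters below are placeholders, not the values adopted from Xiong and Deng.

        def mass_from_pulsation_constant(period_days, radius_rsun, q_days):
            """Invert Q = P * sqrt(M / R^3) (P in days, M and R in solar units)
            to estimate the stellar mass in solar masses."""
            return (q_days / period_days) ** 2 * radius_rsun ** 3

        # placeholder values for an AGB long-period variable
        print(mass_from_pulsation_constant(period_days=300.0, radius_rsun=350.0, q_days=0.09))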

  18. On the Time Scale of Nocturnal Boundary Layer Cooling in Valleys and Basins and over Plains

    NASA Astrophysics Data System (ADS)

    de Wekker, Stephan F. J.; Whiteman, C. David

    2006-06-01

    Sequences of vertical temperature soundings over flat plains and in a variety of valleys and basins of different sizes and shapes were used to determine cooling-time-scale characteristics in the nocturnal stable boundary layer under clear, undisturbed weather conditions. An exponential function predicts the cumulative boundary layer cooling well. The fitting parameter or time constant in the exponential function characterizes the cooling of the valley atmosphere and is equal to the time required for the cumulative cooling to attain 63.2% of its total nighttime value. The exponential fit finds time constants varying between 3 and 8 h. Calculated time constants are smallest in basins, are largest over plains, and are intermediate in valleys. Time constants were also calculated from air temperature measurements made at various heights on the sidewalls of a small basin. The variation with height of the time constant exhibited a characteristic parabolic shape in which the smallest time constants occurred near the basin floor and on the upper sidewalls of the basin where cooling was governed by cold-air drainage and radiative heat loss, respectively.
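
    The exponential description of cumulative cooling quoted here, C(t) = C_total*(1 - exp(-t/tau)) with tau the time needed to reach 63.2% of the total nighttime cooling, can be fitted directly. The sketch below does so on synthetic cooling data; all values are placeholders.

        import numpy as np
        from scipy.optimize import curve_fit

        def cumulative_cooling(t_hours, c_total, tau_hours):
            """Exponential model of cumulative nocturnal cooling; tau is the time
            needed to reach 63.2% of the total nighttime cooling."""
            return c_total * (1.0 - np.exp(-t_hours / tau_hours))

        t = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
        cooling = np.array([0.0, 2.1, 3.8, 6.2, 7.6, 8.4, 8.8])   # synthetic, in kelvin
        (c_tot, tau), _ = curve_fit(cumulative_cooling, t, cooling, p0=[9.0, 4.0])
        print(c_tot, tau)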

  19. [Effects of different drying methods on processing performance and quality in bulbus of Tulipa edulis].

    PubMed

    Yang, Xiao-hua; Guo, Qiao-sheng; Zhu, Zai-biao; Chen, Jun; Miao, Yuan-yuan; Yang, Ying; Sun, Yuan

    2015-10-01

    The effects of different drying methods, including sun drying, steaming, boiling, and constant-temperature drying (at 40, 50 and 60 °C), on appearance, hardness, rehydration ratio, drying rate, moisture, total ash, extractive and polysaccharide contents were studied to provide a basis for a standard processing method for Tulipa edulis bulbus. The results showed that sun drying and 40 °C drying gave higher rehydration ratios, but a lower drying rate, higher hardness, worse colour, longer drying time and obvious distortion and shrinkage in comparison with the other drying methods. Constant-temperature drying at 60 °C resulted in a shorter drying time, lower water content and higher polysaccharide content. Drying time was shorter and appearance quality better for the steaming and boiling treatments than for the other treatments, but the contents of extractive and polysaccharides decreased significantly. Constant-temperature drying at 50 °C gave an appearance quality similar to that of commercial bulbs, and it resulted in the lowest hardness and highest drying rate, as well as a higher rehydration ratio, extractive and polysaccharide content and moderate moisture and total ash contents, among these treatments. Based on the results obtained, constant-temperature drying at 50 °C is the better way to process T. edulis bulbus.

  20. Determination of kinetic and equilibrium parameters of the batch adsorption of Mn(II), Co(II), Ni(II) and Cu(II) from aqueous solution by black carrot (Daucus carota L.) residues.

    PubMed

    Güzel, Fuat; Yakut, Hakan; Topal, Giray

    2008-05-30

    In this study, the effect of temperature on the adsorption of Mn(II), Ni(II), Co(II) and Cu(II) from aqueous solution by modified carrot residues (MCR) was investigated. The equilibrium contact times of the adsorption process for each heavy metal-MCR system were determined. Kinetic data obtained for each heavy metal on MCR at different temperatures were fitted to the Lagergren equation, and adsorption rate constants (k_ads) at these temperatures were determined. These rate constants were then applied to the Arrhenius equation, and activation energies (Ea) were determined. In addition, the isotherms for adsorption of each heavy metal by MCR at different temperatures were also determined. These isothermal data were fitted to linear forms of the isotherm equations; they followed the Langmuir adsorption isotherm, and the Langmuir constants (qm and b) were calculated. The b constants determined at different temperatures were applied to thermodynamic equations, and thermodynamic parameters such as enthalpy (ΔH), free energy (ΔG) and entropy (ΔS) were calculated; these values show that adsorption of the heavy metals on MCR was endothermic and favoured at high temperatures.
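
    The two linearizations used in this kind of analysis are the Arrhenius plot (ln k versus 1/T, slope -Ea/R) and the van't Hoff treatment of the Langmuir constant (ln b versus 1/T, with ΔG = -RT*ln b). The sketch below applies both to synthetic values, which are placeholders rather than the study's data.

        import numpy as np

        R = 8.314  # J/(mol*K)

        def activation_energy(temps_K, k_ads):
            """Arrhenius: the slope of ln(k) vs 1/T equals -Ea/R; returns Ea in J/mol."""
            slope, _ = np.polyfit(1.0 / np.asarray(temps_K), np.log(k_ads), 1)
            return -slope * R

        def vant_hoff(temps_K, b_langmuir):
            """van't Hoff on the Langmuir constant: ln b = -dH/(R*T) + dS/R.
            Returns (dH, dS, dG at each T), with dG = -R*T*ln(b)."""
            slope, intercept = np.polyfit(1.0 / np.asarray(temps_K), np.log(b_langmuir), 1)
            dH = -slope * R
            dS = intercept * R
            dG = -R * np.asarray(temps_K) * np.log(b_langmuir)
            return dH, dS, dG

        T = [298.0, 308.0, 318.0]
        print(activation_energy(T, [0.011, 0.016, 0.023]))   # synthetic k_ads values
        print(vant_hoff(T, [0.12, 0.17, 0.24]))              # synthetic b values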

  1. Effect of reaction solvent on hydroxyapatite synthesis in sol-gel process

    NASA Astrophysics Data System (ADS)

    Nazeer, Muhammad Anwaar; Yilgor, Emel; Yagci, Mustafa Baris; Unal, Ugur; Yilgor, Iskender

    2017-12-01

    Synthesis of hydroxyapatite (HA) through a sol-gel process in different solvent systems is reported. Calcium nitrate tetrahydrate (CNTH) and diammonium hydrogen phosphate (DAHP) were used as the calcium and phosphorus precursors, respectively. Three different synthesis reactions were carried out by changing the solvent medium while keeping all other process parameters constant. A 0.5 M aqueous DAHP solution was used in all reactions, while CNTH was dissolved in distilled water, tetrahydrofuran (THF) or N,N-dimethylformamide (DMF) at a concentration of 0.5 M. Ammonia solution (28-30%) was used to maintain the pH of the reaction mixtures in the 10-12 range. All reactions were carried out at 40 ± 2 °C for 4 h. Upon completion of the reactions, the products were filtered, washed and calcined at 500 °C for 2 h. It was clearly demonstrated through various techniques that the dielectric constant and polarity of the solvent mixture strongly influence the chemical structure and morphological properties of the calcium phosphate synthesized. The water-based reaction medium, with the highest dielectric constant, mainly produced β-calcium pyrophosphate (β-CPF) with a minor amount of HA. The DMF/water system yielded HA as the major phase with a very minor amount of β-CPF. The THF/water solvent system, with the lowest dielectric constant, resulted in the formation of pure HA.

  2. Change in fibrinolytic activity under the influence of a constant magnetic field. [blood coagulation normilization in heart patients

    NASA Technical Reports Server (NTRS)

    Yepishina, S. G.

    1974-01-01

    The fibrinolytic activity of plasma changes under the influence of a constant magnetic field (CMF) with a strength of 250 or 2500 oersteds. CMF shows a tendency toward normalization of fibrinolytic processes in the presence of pathological disturbances in fibrinolysis activation.

  3. A kinetic study of jack-bean urease denaturation by a new dithiocarbamate bismuth compound

    NASA Astrophysics Data System (ADS)

    Menezes, D. C.; Borges, E.; Torres, M. F.; Braga, J. P.

    2012-10-01

    A kinetic study concerning enzymatic inhibitory effect of a new bismuth dithiocarbamate complex on jack-bean urease is reported. A neural network approach is used to solve the ill-posed inverse problem arising from numerical treatment of the subject. A reaction mechanism for the urease denaturation process is proposed and the rate constants, relaxation time constants, equilibrium constants, activation Gibbs free energies for each reaction step and Gibbs free energies for the transition species are determined.

  4. Future-oriented maintenance strategy based on automated processes is finding its way into large astronomical facilities at remote observing sites

    NASA Astrophysics Data System (ADS)

    Silber, Armin; Gonzalez, Christian; Pino, Francisco; Escarate, Patricio; Gairing, Stefan

    2014-08-01

    With the expanding size and increasing complexity of large astronomical observatories at remote observing sites, the call for an efficient and resource-saving maintenance concept becomes louder. The increasing number of subsystems on telescopes and instruments forces large observatories, as in industry, to rethink conventional maintenance strategies to reach this demanding goal. The implementation of fully or semi-automatic processes for standard service activities can help keep the operating staff at an efficient level and significantly reduce the consumption of valuable consumables and equipment. In this contribution we demonstrate, using the example of the 80 cryogenic subsystems of the ALMA Front End instrument, how an implemented automatic service process increases the availability of spare parts and Line Replaceable Units, and how valuable staff resources can be freed from continuous repetitive maintenance activities to allow more focus on system diagnostic tasks, troubleshooting and the exchange of line replaceable units. The required service activities are decoupled from the day-to-day work, eliminating dependencies on workload peaks or logistic constraints. The automatic refurbishing processes run in parallel to the operational tasks with constant quality and without compromising the performance of the serviced system components. Consequently, this results in an efficiency increase and less downtime, and keeps the observing schedule on track. Automatic service processes in combination with proactive maintenance concepts provide the necessary flexibility for the complex operational work structures of large observatories. The gained planning flexibility allows an optimization of operational procedures and sequences by considering the required cost efficiency.

  5. Water as a matrix for life

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrew; Pratt, Lawrence

    2006-01-01

    "Follow the water" is the canonical strategy in searching for life in the universe. Conventionally, discussion of this topic is focused on how solvent supports organic chemistry sufficiently rich to seed life. Perhaps more importantly, solvent must promote self-organization of organic matter into functional structures capable of responding to environmental changes. This process is based on non-covalent interactions. They are constantly formed and broken in response to internal and external stimuli. This requires that their strength must be properly tuned. If they were too weak, the system would exhibit undesired, uncontrolled response to natural fluctuations of physical and chemical parameters. If they were too strong kinetics of biological processes would be slow and energetics costly. Non-covalent interactions are strongly mediated by the solvent. Specifically, high dielectric solvents for life are needed for solubility of polar species and flexibility of biological structures stabilized by electrostatic interactions. Water exhibits a remarkable trait that it promotes solvophobic interactions between non-polar species, which are responsible for self-organization phenomena such as the formation of cellular boundary structures, and protein folding and aggregation. Unusual temperature dependence of hydrophobic interactions - they often become stronger as temperature increases - is a consequence of the temperature insensitivity of properties of the liquid water. This contributes to the existence of robust life over a wide temperature range. Water is not the only liquid with favorable properties for supporting life. Other pure liquids or their mixtures that have high dielectric constants and simultaneously support some level of self-organization will be discussed.

  6. FDA's misplaced priorities: premarket review under the Family Smoking Prevention and Tobacco Control Act.

    PubMed

    Jenson, Desmond; Lester, Joelle; Berman, Micah L

    2016-05-01

    Among other key objectives, the 2009 Family Smoking Prevention and Tobacco Control Act was designed to end an era of constant product manipulation by the tobacco industry that had led to more addictive and attractive products. The law requires new tobacco products to undergo premarket review by the US Food and Drug Administration (FDA) before they can be sold. To assess FDA's implementation of its premarket review authorities, we reviewed FDA actions on new product applications, publicly available data on industry applications to market new products, and related FDA guidance documents and public statements. We conclude that FDA has not implemented the premarket review process in a manner that prioritises the protection of public health. In particular, FDA has (1) prioritised the review of premarket applications that allow for the introduction of new tobacco products over the review of potentially non-compliant products that are already on the market; (2) misallocated resources by accommodating the industry's repeated submissions of deficient premarket applications and (3) weakened the premarket review process by allowing the tobacco industry to market new and modified products that have not completed the required review process. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  7. Ranking of Business Process Simulation Software Tools with DEX/QQ Hierarchical Decision Model.

    PubMed

    Damij, Nadja; Boškoski, Pavle; Bohanec, Marko; Mileva Boshkoska, Biljana

    2016-01-01

    The omnipresent need for optimisation requires constant improvement of companies' business processes (BPs). Minimising the risk of an inappropriate BP being implemented is usually achieved by simulating the newly developed BP under various initial conditions and "what-if" scenarios. Effective business process simulation software (BPSS) is a prerequisite for accurate analysis of a BP. Characterisation of a BPSS tool is a challenging task due to the complex selection criteria, which include the quality of visual aspects, simulation capabilities, statistical facilities, quality of reporting, etc. Under such circumstances, making an optimal decision is challenging. Therefore, various decision support models are employed to aid BPSS tool selection. The currently established decision support models are either proprietary or comprise only a limited subset of criteria, which affects their accuracy. Addressing this issue, this paper proposes a new hierarchical decision support model for the ranking of BPSS tools based on their technical characteristics, employing DEX and the qualitative-to-quantitative (QQ) methodology. Consequently, the decision expert feeds in the required information in a systematic and user-friendly manner. There are three significant contributions of the proposed approach. Firstly, the proposed hierarchical model is easily extendible for adding new criteria to the hierarchical structure. Secondly, a fully operational decision support system (DSS) tool that implements the proposed hierarchical model is presented. Finally, the effectiveness of the proposed hierarchical model is assessed by comparing the resulting rankings of BPSS tools with currently available results.

  8. Mass transport by diffusion

    NASA Technical Reports Server (NTRS)

    Baird, James K.

    1987-01-01

    For the purpose of determining diffusion coefficients as required for electrodeposition studies and other applications, a diaphragm cell and an isothermal water bath were constructed. The calibration of the system is discussed. On the basis of three calibration runs on the diaphragm cell, the researchers concluded that the cell constant β equals 0.12 cm⁻². Other calibration runs in progress should permit the cell constant to be determined with an accuracy of one percent.
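
    Assuming the standard Stokes diaphragm-cell working equation D = ln(ΔC_initial/ΔC_final)/(β*t), the reported cell constant can be turned into a diffusion coefficient once the concentration differences across the diaphragm are measured; the sketch below uses illustrative numbers only.

        import math

        def diffusion_coefficient(beta_per_cm2, time_s, dC_initial, dC_final):
            """Stokes diaphragm-cell relation (assumed form):
            D = ln(dC_initial / dC_final) / (beta * t), giving D in cm^2/s."""
            return math.log(dC_initial / dC_final) / (beta_per_cm2 * time_s)

        # illustrative run: cell constant 0.12 cm^-2, 24 h, concentration difference halved
        print(diffusion_coefficient(0.12, 24 * 3600.0, 1.0, 0.5))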

  9. Nonhydrodynamic Characteristics of the Oscillating Screen Viscometer

    NASA Technical Reports Server (NTRS)

    Berg, Robert F.; Moldover, Michael R.

    1993-01-01

    Extraction of the viscosity from the oscillating screen's response function requires knowledge of its resonance frequency ω₀ and of the prefactor k_tr/k_θ, where k_tr is a transducer coefficient and k_θ is the torsion spring constant. The determination of these parameters is described. The effect of a possible anomaly in the dielectric constant near the critical point of xenon will be negligible.

  10. The DTIC Review. Hybrid and Electronic Vehicles. Volume 4. Number 1, June 1998.

    DTIC Science & Technology

    1998-06-01

    ARGONNE NATIONAL LAB, IL; KIRTLAND AFB, NM. (U) Constant-Thrust Orbit-Raising Transfer Charts: ...generate minimum-fuel trajectories between coplanar... satellite designers to assess preliminary fuel requirements for constant-thrust... • (U) Dynamics and Controls in Maglev Systems: ...method to levitated (MAGLEV) ground transportation systems has important consequences for safety... control systems must be considered if MAGLEV systems are to be economically...

  11. Periodic solutions for one dimensional wave equation with bounded nonlinearity

    NASA Astrophysics Data System (ADS)

    Ji, Shuguan

    2018-05-01

    This paper is concerned with periodic solutions of the one-dimensional nonlinear wave equation with either constant or variable coefficients. The constant-coefficient model corresponds to the classical wave equation, while the variable-coefficient model arises from the forced vibrations of a nonhomogeneous string and the propagation of seismic waves in nonisotropic media. For finding periodic solutions of the variable-coefficient wave equation, it is usually required that the coefficient u(x) satisfy ess inf η_u(x) > 0, where η_u(x) = (1/2) u″/u − (1/4) (u′/u)², a condition that actually excludes the classical constant-coefficient model. The case η_u(x) = 0 was indicated to remain an open problem by Barbu and Pavel (1997) [6]. In this work, for periods of the form T = (2p − 1)/q (p, q positive integers) and some types of boundary value conditions, we establish some fundamental properties of the wave operator with either constant or variable coefficients. Based on these properties, we obtain the existence of periodic solutions when the nonlinearity is monotone and bounded. Such a nonlinearity may cross multiple eigenvalues of the corresponding wave operator. In particular, we do not require the condition ess inf η_u(x) > 0.

  12. Automated plasma control with optical emission spectroscopy

    NASA Astrophysics Data System (ADS)

    Ward, P. P.

    Plasma etching and desmear processes for printed wiring board (PWB) manufacture are difficult to predict and control. The non-uniformity of most plasma processes and their sensitivity to environmental changes make it difficult to maintain process stability from day to day. To assure plasma process performance, weight-loss coupons or post-plasma destructive testing must be used. These techniques are not real-time methods, however, and do not allow for immediate diagnosis and process correction. These tests often require scrapping some fraction of a batch to ensure the integrity of the rest. Since these tests verify a successful cycle with post-plasma diagnostics, poor test results often mean that a batch is substandard and the resulting parts unusable. These tests are a costly part of the overall fabrication cost. A more efficient method of testing would allow for constant monitoring of plasma conditions and process control. Process anomalies should be detected and corrected before the parts being treated are damaged. Real-time monitoring would allow for instantaneous corrections. Multiple-site monitoring would allow for process mapping within one system or simultaneous monitoring of multiple systems. Optical emission spectroscopy conducted external to the plasma apparatus would allow for this sort of multifunctional analysis without perturbing the glow discharge. In this paper, optical emission spectroscopy for non-intrusive, in situ process control is explored, along with applications of this technique to process control, failure analysis and endpoint determination in PWB manufacture.

  13. Comparative analysis of the photocatalytic reduction of drinking water oxoanions using titanium dioxide.

    PubMed

    Marks, Randal; Yang, Ting; Westerhoff, Paul; Doudrick, Kyle

    2016-11-01

    Regulated oxidized pollutants in drinking water can have significant health effects, resulting in the need for ancillary treatment processes. Oxoanions (e.g., nitrate) are one important class of oxidized inorganic ions. Ion exchange and reverse osmosis are often used as treatment processes for oxoanions, but these separation processes leave behind a concentrated waste product that still requires treatment or disposal. Photocatalysis has emerged as a sustainable treatment technology capable of catalytically reducing oxoanions directly to innocuous byproducts. Compared with the large volume of knowledge available for photocatalytic oxidation, very little knowledge exists regarding photocatalytic reduction of oxoanion pollutants. This study investigates the reduction of various oxoanions of concern in drinking water (nitrate, nitrite, bromate, perchlorate, chlorate, chlorite, chromate) using a commercial titanium dioxide photocatalyst and a polychromatic light source. Results showed that oxoanions were readily reduced under acidic conditions in the presence of formate, which served as a hole scavenger, with the first-order rate decreasing as follows: bromate > nitrite > chlorate > nitrate > dichromate > perchlorate, corresponding to rate constants of 0.33, 0.080, 0.052, 0.0074, 0.0041, and 0 cm^2 per 10^18 photons, respectively. Only bromate and nitrite were reduced at neutral pH, with substantially lower rate constants of 0.034 and 0.0021 cm^2 per 10^18 photons, respectively. No direct relationship was found between the reduction rates and oxoanion physicochemical properties, including electronegativity of the central atom, internal bond strength, and polarizability. However, the observations presented herein suggest the presence of kinetic barriers unique to each oxoanion and provide a framework for investigating photocatalytic reduction mechanisms of oxoanions in order to design better photocatalysts and optimize treatment. Copyright © 2016 Elsevier Ltd. All rights reserved.
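
    As a rough illustration of how fluence-based first-order rate constants of this kind translate into removal curves, the Python sketch below assumes a simple model C/C0 = exp(-k * Q), where Q is the accumulated photon fluence. The model form and the fluence value are assumptions for illustration, not the kinetic treatment used in the study; only the rate constants are taken from the abstract.

        # Illustrative sketch: fluence-based first-order photocatalytic reduction.
        # Rate constants are the values quoted above, in cm^2 per 10^18 photons.
        import math

        rate_constants = {
            "bromate": 0.33,
            "nitrite": 0.080,
            "chlorate": 0.052,
            "nitrate": 0.0074,
            "dichromate": 0.0041,
            "perchlorate": 0.0,
        }

        def remaining_fraction(k, fluence_1e18):
            """C/C0 after a fluence given in 10^18 photons/cm^2 (assumed first-order model)."""
            return math.exp(-k * fluence_1e18)

        fluence = 10.0  # hypothetical dose: 10 x 10^18 photons/cm^2
        for anion, k in rate_constants.items():
            print(f"{anion:12s} C/C0 = {remaining_fraction(k, fluence):.3f}")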

  14. Porous calcium polyphosphate bone substitutes: additive manufacturing versus conventional gravity sinter processing-effect on structure and mechanical properties.

    PubMed

    Hu, Youxin; Shanjani, Yaser; Toyserkani, Ehsan; Grynpas, Marc; Wang, Rizhi; Pilliar, Robert

    2014-02-01

    Porous calcium polyphosphate (CPP) structures, made by sintering CPP powders, have been proposed as bone-substitute implants. Bending test samples of approximately 35 vol % porosity were machined from preformed blocks made either by additive manufacturing (AM) or conventional gravity sintering (CS) methods, and the structure and mechanical characteristics of the two sample types were compared. AM-made samples displayed higher bending strengths (≈1.2-1.4 times greater than CS-made samples), whereas the elastic constant (i.e., the effective elastic modulus of the porous structure), which is determined by the material elastic modulus and the structural geometry of the samples, was ≈1.9-2.3 times greater for AM-made samples. X-ray diffraction analysis showed that samples made by either method displayed the same crystal structure, forming β-CPP after sinter annealing. The material elastic modulus, E, determined using nanoindentation tests, also showed the same value for both sample types (i.e., E ≈ 64 GPa). Examination of the porous structures indicated that significantly larger sinter necks formed in the AM-made samples, which presumably accounts for their higher mechanical properties. This difference was attributed to the different sinter anneal procedures required to make 35 vol % porous samples by the two methods. A primary objective of the present study, in addition to reporting on bending strength and sample stiffness (elastic constant) characteristics, was to determine why the two processes resulted in the observed mechanical property differences for samples of equivalent volume percentage of porosity. An understanding of the fundamental reason(s) for the observed effect is considered important for developing improved processes for preparation of porous CPP implants as bone substitutes for use in high load-bearing skeletal sites. Copyright © 2013 Wiley Periodicals, Inc.

  15. Microfabricated microengine with constant rotation rate

    DOEpatents

    Romero, Louis A.; Dickey, Fred M.

    1999-01-01

    A microengine uses two synchronized linear actuators as a power source and converts oscillatory motion from the actuators into constant rotational motion via direct linkage connection to an output gear or wheel. The microengine provides output in the form of a continuously rotating output gear that is capable of delivering drive torque at a constant rotation to a micromechanism. The output gear can have gear teeth on its outer perimeter for directly contacting a micromechanism requiring mechanical power. The gear is retained by a retaining means which allows said gear to rotate freely. The microengine is microfabricated of polysilicon on one wafer using surface micromachining batch fabrication.

  16. Method of Conjugate Radii for Solving Linear and Nonlinear Systems

    NASA Technical Reports Server (NTRS)

    Nachtsheim, Philip R.

    1999-01-01

    This paper describes a method to solve a system of N linear equations in N steps. A quadratic form is developed involving the sum of the squares of the residuals of the equations. Equating the quadratic form to a constant yields a surface which is an ellipsoid. For different constants, a family of similar ellipsoids can be generated. Starting at an arbitrary point, an orthogonal basis is constructed and the center of the family of similar ellipsoids is found in this basis by a sequence of projections. The coordinates of the center in this basis are the solution of the linear system of equations. A quadratic form in N variables requires N projections. That is, the current method is an exact method. It is shown that the sequence of projections is equivalent to a special case of the Gram-Schmidt orthogonalization process. The current method enjoys an advantage not shared by the classic Method of Conjugate Gradients: it can be extended to nonlinear systems without modification. For nonlinear equations, the Method of Conjugate Gradients has to be augmented with a line-search procedure. Results for linear and nonlinear problems are presented.
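
    The abstract does not give the projection formulas, so the Python sketch below is not the authors' Method of Conjugate Radii. It is a minimal illustration of the related idea of minimizing the sum of squared residuals ||Ax - b||^2 by conjugate directions (conjugate gradients applied to the normal equations), which likewise terminates in at most N steps in exact arithmetic.

        # Minimal sketch: minimize ||A x - b||^2 by conjugate directions (CG on the
        # normal equations). Illustrative only; not the Method of Conjugate Radii itself.
        import numpy as np

        def cgnr(A, b, tol=1e-12):
            n = A.shape[1]
            x = np.zeros(n)
            r = b - A @ x                  # residual of the original system
            z = A.T @ r                    # gradient of the quadratic form
            p = z.copy()
            for _ in range(n):             # at most N steps in exact arithmetic
                w = A @ p
                alpha = (z @ z) / (w @ w)
                x += alpha * p
                r -= alpha * w
                z_new = A.T @ r
                if np.linalg.norm(z_new) < tol:
                    break
                p = z_new + ((z_new @ z_new) / (z @ z)) * p
                z = z_new
            return x

        A = np.array([[4.0, 1.0], [1.0, 3.0]])
        b = np.array([1.0, 2.0])
        print(cgnr(A, b))                  # close to the exact solution of A x = b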

  17. Measurement of labile copper in wine by medium exchange stripping potentiometry utilising screen printed carbon electrodes.

    PubMed

    Clark, Andrew C; Kontoudakis, Nikolaos; Barril, Celia; Schmidtke, Leigh M; Scollary, Geoffrey R

    2016-07-01

    The presence of copper in wine is known to impact the reductive, oxidative and colloidal stability of wine, and techniques enabling measurement of different forms of copper in wine are of particular interest in understanding these spoilage processes. Electrochemical stripping techniques developed to date require significant pretreatment of wine, potentially disturbing the copper binding equilibria. A thin mercury film on a screen printed carbon electrode was utilised in a flow system for the direct analysis of labile copper in red and white wine by constant current stripping potentiometry with medium exchange. Under the optimised conditions, including an enrichment time of 500 s and a constant current of 1.0 μA, the response range was linear from 0.015 to 0.200 mg/L. The analysis of 52 red and white wines showed that this technique generally provided lower labile copper concentrations than reported for batch measurement by related techniques. Studies in a model system and in finished wines showed that the copper sulfide was not measured as labile copper, and that loss of hydrogen sulfide via volatilisation induced an increase in labile copper within the model wine system. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Regulation of nitrogen uptake and assimilation: Effects of nitrogen source and root-zone and aerial environment on growth and productivity of soybean

    NASA Technical Reports Server (NTRS)

    Raper, C. David, Jr.

    1994-01-01

    The interdependence of root and shoot growth produces a functional equilibrium, as described in quantitative terms by numerous authors. It was noted that bean seedlings grown in a constant environment tended to have a constant distribution pattern of dry matter between roots and leaves characteristic of the set of environmental conditions. Disturbing this equilibrium resulted in a change in relative growth of roots and leaves until the original ratio was restored. To define a physiological basis for regulation of nitrogen uptake within the balance between root and shoot activities, the authors combined a partitioning scheme and a utilization priority assumption in which: (1) all carbon enters the plant through photosynthesis in leaves and all nitrogen enters the plant through active uptake by roots, (2) nitrogen uptake by roots and secretion into the xylem for transport to the shoots are active processes, (3) availability of exogenous nitrogen determines the concentration of soluble carbohydrates within the roots, (4) leaves are both a source and a sink for carbohydrates, and (5) the requirement for nitrogen by leaf growth is proportionally greater during initiation and early expansion than during later expansion.

  19. Capillary Flow in an Interior Corner

    NASA Technical Reports Server (NTRS)

    Weislogel, Mark Milton

    1996-01-01

    The design of fluids management processes in the low-gravity environment of space requires an accurate model and description of capillarity-controlled flow in containers of irregular geometry. Here we consider the capillary rise of a fluid along an interior corner of a container following a rapid reduction in gravity. The analytical portion of the work presents an asymptotic formulation in the limit of a slender fluid column, slight surface curvature along the corner, small inertia, and low gravity. New similarity solutions are found and a list of closed form expressions is provided for flow rate and column length. In particular, it is found that the flow is proportional to t^(1/2) for a constant height boundary condition, t^(2/5) for a spreading drop, and t^(3/5) for constant flow. In the experimental portion of the work, measurements from a 2.2 s drop tower are reported. An extensive data set, collected over a previously unexplored range of flow parameters, includes estimates of repeatability and accuracy, the role of inertia and column slenderness, and the effects of corner angle, container geometry, and fluid properties. Comprehensive comparisons are made which illustrate the applicability of the analytic results to low-g fluid systems design.
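
    The power laws quoted above are easy to compare numerically. The short Python sketch below evaluates normalized column lengths L ~ t^n for the three exponents; the time values and unit prefactors are placeholders, since the actual similarity constants depend on the corner angle, contact angle, and fluid properties reported in the work.

        # Sketch: compare the capillary-rise scaling laws L ~ t^n for the three
        # boundary conditions quoted above (placeholder prefactors and times).
        exponents = {
            "constant height": 0.5,     # t^(1/2)
            "spreading drop": 0.4,      # t^(2/5)
            "constant flow": 0.6,       # t^(3/5)
        }

        times = [0.1, 0.5, 1.0, 2.2]    # seconds; 2.2 s matches the drop-tower duration
        for name, n in exponents.items():
            lengths = [t ** n for t in times]          # normalized so L(t = 1 s) = 1
            print(name, [f"{L:.2f}" for L in lengths])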

  20. Co-ensiling as a new technique for long-term storage of agro-industrial waste with low sugar content prior to anaerobic digestion.

    PubMed

    Hillion, Marie-Lou; Moscoviz, Roman; Trably, Eric; Leblanc, Yoann; Bernet, Nicolas; Torrijos, Michel; Escudié, Renaud

    2018-01-01

    Biodegradable wastes produced seasonally need upstream storage because anaerobic digesters require constant feeding. In the present article, the potential of co-ensiling biodegradable agro-industrial waste (sugar beet leaves) and lignocellulosic agricultural residue (wheat straw) to obtain a mixture with low soluble sugar content was evaluated for long-term storage prior to anaerobic digestion. The aim is to store agro-industrial waste while pretreating lignocellulosic biomass. The dynamics of co-ensiling was evaluated in vacuum-packed bags at lab scale over 180 days. Characterization of the reaction by-products and microbial communities showed a succession of metabolic pathways. Even though the low initial sugar content was not sufficient to lower the pH below 4.5 and avoid undesirable fermentations, the methane potential was not substantially impacted throughout the experiment. No damage to the lignocellulosic fraction was observed during the silage process. Overall, it was shown that co-ensiling is effective for storing highly fermentable fresh waste even with low sugar content and offers new promising possibilities for constant long-term supply of industrial anaerobic digesters. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Bulgeless dwarf galaxies and dark matter cores from supernova-driven outflows.

    PubMed

    Governato, F; Brook, C; Mayer, L; Brooks, A; Rhee, G; Wadsley, J; Jonsson, P; Willman, B; Stinson, G; Quinn, T; Madau, P

    2010-01-14

    For almost two decades the properties of 'dwarf' galaxies have challenged the cold dark matter (CDM) model of galaxy formation. Most observed dwarf galaxies consist of a rotating stellar disk embedded in a massive dark-matter halo with a near-constant-density core. Models based on the dominance of CDM, however, invariably form galaxies with dense spheroidal stellar bulges and steep central dark-matter profiles, because low-angular-momentum baryons and dark matter sink to the centres of galaxies through accretion and repeated mergers. Processes that decrease the central density of CDM halos have been identified, but have not yet reconciled theory with observations of present-day dwarfs. This failure is potentially catastrophic for the CDM model, possibly requiring a different dark-matter particle candidate. Here we report hydrodynamical simulations (in a framework assuming the presence of CDM and a cosmological constant) in which the inhomogeneous interstellar medium is resolved. Strong outflows from supernovae remove low-angular-momentum gas, which inhibits the formation of bulges and decreases the dark-matter density to less than half of what it would otherwise be within the central kiloparsec. The analogues of dwarf galaxies, bulgeless and with shallow central dark-matter profiles, arise naturally in these simulations.

  2. Research on Stabilization Properties of Inductive-Capacitive Transducers Based on Hybrid Electromagnetic Elements

    NASA Astrophysics Data System (ADS)

    Konesev, S. G.; Khazieva, R. T.; Kirllov, R. V.; Konev, A. A.

    2017-01-01

    Some electrical consumers (the charge systems of storage capacitors, powerful pulse generators, electrothermal systems, gas-discharge lamps, electric ovens, plasma torches) require constant power consumption while their resistance changes within a limited range. Current stabilization systems (CSS) with inductive-capacitive transducers (ICT) provide constant power when the load resistance changes over a wide range and increase the efficiency of power supplies for high-power loads. ICT elements are selected according to the maximum load, which leads to exceeding a predetermined value of capacity. The paper suggests supplying the load power through an ICT based on multifunction integrated electromagnetic components (MIEC) in order to reduce the required capacity of the ICT elements and the weight and dimensions of the CSS. The authors developed and patented an ICT based on MIEC that reduces the CSS weight and dimensions by reducing the number of components, while allowing transformation of the device's electric energy and changes of the resonance frequency. An ICT mathematical model was produced; the model determines the width of the load stabilization range. A model for studying the electromagnetic processes was built using the MIEC integral parameters (full inductance of the electrical lead, total capacity, current of the electrical lead). It shows that the load current is independent of the load resistance for different ways of connecting the MIEC.
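
    The load-current independence can be illustrated with an idealized series-L / shunt-C (Boucherot-type) inductive-capacitive transducer driven at its resonance frequency. This generic circuit and its component values are assumptions for illustration and are not the patented MIEC arrangement, but the Python sketch below shows the same qualitative behaviour: at resonance the load current magnitude equals V/X regardless of the load resistance.

        # Idealized inductive-capacitive transducer (Boucherot bridge) at resonance:
        # source -> series L -> node -> (shunt C to ground, load R to ground).
        # At X = w*L = 1/(w*C) the load current magnitude is V/X, independent of R.
        import math

        V = 230.0                      # source voltage, volts (placeholder)
        f = 50.0                       # operating frequency, Hz (placeholder)
        L = 0.1                        # series inductance, henry (placeholder)
        w = 2 * math.pi * f
        X = w * L
        C = 1 / (w * X)                # chosen so the shunt capacitor has reactance -X

        for R in [1.0, 5.0, 20.0, 100.0, 500.0]:
            ZL = 1j * w * L
            ZC = 1 / (1j * w * C)
            Zpar = ZC * R / (ZC + R)                  # shunt C in parallel with load R
            V_node = V * Zpar / (ZL + Zpar)
            I_load = V_node / R
            print(f"R = {R:6.1f} ohm   |I_load| = {abs(I_load):.3f} A")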

  3. Uric acid detection using uv-vis spectrometer

    NASA Astrophysics Data System (ADS)

    Norazmi, N.; Rasad, Z. R. Abdul; Mohamad, M.; Manap, H.

    2017-10-01

    The aim of this research is to detect uric acid (UA) concentration using an Ultraviolet-Visible (UV-Vis) spectrometer in the Ultraviolet (UV) region. An absorption technique was proposed to detect different uric acid concentrations and the corresponding UV absorption wavelength. Current practices commonly take a long time or require complicated structures for the detection process. With the proposed spectroscopic technique, every concentration can be detected and interpreted as an absorbance value at a constant wavelength peak in the UV region. This is due to the chemical characteristics of uric acid, which has a particular absorption cross-section, σ, that can be calculated using the Beer-Lambert law. The detection performance was displayed using SpectraSuite software and showed a fast time response of about 3 seconds. The experiment proved that the concentrations of uric acid were successfully detected using the UV-Vis spectrometer at a constant UV absorption wavelength of 294.46 nm with a fast response time. Even with an artificial uric acid sample, the measured value was close to those reported for medical samples. The technique is applicable in the medical field and can be implemented in the future for earlier detection of abnormal uric acid concentrations.
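
    A minimal sketch of the cross-section form of the Beer-Lambert relation used in this kind of measurement is given below. The cross-section value, path length, and absorbances are hypothetical placeholders, since the abstract does not tabulate them; only the general relation A = σNL/ln(10) is assumed.

        # Sketch: Beer-Lambert law in absorption cross-section form.
        # Transmittance T = exp(-sigma*N*L); base-10 absorbance A = sigma*N*L/ln(10).
        import math

        sigma = 5.0e-18        # absorption cross-section at 294.46 nm, cm^2 (placeholder)
        path_cm = 1.0          # optical path length, cm (placeholder)

        def number_density_from_absorbance(A10):
            """Molecules per cm^3 inferred from a measured base-10 absorbance (assumed model)."""
            return A10 * math.log(10) / (sigma * path_cm)

        for A in [0.1, 0.5, 1.0]:
            N = number_density_from_absorbance(A)
            print(f"A = {A:.2f}  ->  N = {N:.3e} molecules/cm^3")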

  4. Model-Based Learning of Local Image Features for Unsupervised Texture Segmentation

    NASA Astrophysics Data System (ADS)

    Kiechle, Martin; Storath, Martin; Weinmann, Andreas; Kleinsteuber, Martin

    2018-04-01

    Features that capture well the textural patterns of a certain class of images are crucial for the performance of texture segmentation methods. The manual selection of features or designing new ones can be a tedious task. Therefore, it is desirable to automatically adapt the features to a certain image or class of images. Typically, this requires a large set of training images with similar textures and ground truth segmentation. In this work, we propose a framework to learn features for texture segmentation when no such training data is available. The cost function for our learning process is constructed to match a commonly used segmentation model, the piecewise constant Mumford-Shah model. This means that the features are learned such that they provide an approximately piecewise constant feature image with a small jump set. Based on this idea, we develop a two-stage algorithm which first learns suitable convolutional features and then performs a segmentation. We note that the features can be learned from a small set of images, from a single image, or even from image patches. The proposed method achieves a competitive rank in the Prague texture segmentation benchmark, and it is effective for segmenting histological images.
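
    The piecewise constant Mumford-Shah (Potts) model that the learned features are matched to can be written, in one dimension, as a data term plus a penalty on the number of jumps. The Python sketch below evaluates that energy for a candidate segmentation of a 1-D signal; it is a toy illustration of the segmentation model, with placeholder data and penalty weight, not the learning framework of the paper.

        # Toy sketch: piecewise-constant Mumford-Shah (Potts) energy in 1-D:
        # E(partition) = sum over segments of squared deviation from the segment mean
        #                + gamma * (number of jumps).
        import numpy as np

        def potts_energy(signal, breakpoints, gamma):
            """breakpoints: sorted indices where a new segment starts (excluding 0)."""
            edges = [0] + list(breakpoints) + [len(signal)]
            data_term = 0.0
            for a, b in zip(edges[:-1], edges[1:]):
                seg = signal[a:b]
                data_term += np.sum((seg - seg.mean()) ** 2)
            return data_term + gamma * len(breakpoints)

        x = np.concatenate([np.full(50, 1.0), np.full(50, 3.0)]) + 0.1 * np.random.randn(100)
        print("jump at 50 :", potts_energy(x, [50], gamma=1.0))   # lower energy
        print("no jumps   :", potts_energy(x, [], gamma=1.0))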

  5. Efficiency of the Inertia Friction Welding Process and Its Dependence on Process Parameters

    NASA Astrophysics Data System (ADS)

    Senkov, O. N.; Mahaffey, D. W.; Tung, D. J.; Zhang, W.; Semiatin, S. L.

    2017-07-01

    It has been widely assumed, but never proven, that the efficiency of the inertia friction welding (IFW) process is independent of process parameters and is relatively high, i.e., 70 to 95 pct. In the present work, the effect of IFW parameters on process efficiency was established. For this purpose, a series of IFW trials was conducted for the solid-state joining of two dissimilar nickel-base superalloys (LSHR and Mar-M247) using various combinations of initial kinetic energy (i.e., the total weld energy, E_o), initial flywheel angular velocity (ω_o), flywheel moment of inertia (I), and axial compression force (P). The kinetics of the conversion of the welding energy to heating of the faying sample surfaces (i.e., the sample energy) vs parasitic losses to the welding machine itself were determined by measuring the friction torque on the sample surfaces (M_S) and in the machine bearings (M_M). It was found that the rotating parts of the welding machine can consume a significant fraction of the total energy. Specifically, the parasitic losses ranged from 28 to 80 pct of the total weld energy. The losses increased (and the corresponding IFW process efficiency decreased) as P increased (at constant I and E_o), I decreased (at constant P and E_o), and E_o (or ω_o) increased (at constant P and I). The results of this work thus provide guidelines for selecting process parameters which minimize energy losses and increase process efficiency during IFW.
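
    The energy bookkeeping described above can be sketched in a few lines: the total weld energy is E_o = (1/2) I ω_o², the sample energy is the integral of M_S·ω over the deceleration, and the efficiency is their ratio. The torque trace, flywheel values, and linear deceleration in the Python sketch below are hypothetical placeholders, not the LSHR/Mar-M247 data.

        # Sketch of inertia-friction-welding energy bookkeeping with placeholder data.
        # E_o = 0.5*I*w0^2 ; sample energy = integral of M_S(t)*w(t) dt ; efficiency = ratio.
        import numpy as np

        I = 2.0                          # flywheel moment of inertia, kg*m^2 (placeholder)
        w0 = 300.0                       # initial angular velocity, rad/s (placeholder)
        t = np.linspace(0.0, 3.0, 301)   # weld time, s
        w = w0 * (1 - t / t[-1])         # assume linear deceleration to rest
        M_S = np.full_like(t, 150.0)     # friction torque on the sample, N*m (placeholder)

        E_total = 0.5 * I * w0**2
        E_sample = np.trapz(M_S * w, t)  # energy delivered to the faying surfaces
        eta = E_sample / E_total
        print(f"E_o = {E_total:.0f} J, sample energy = {E_sample:.0f} J, "
              f"efficiency = {eta:.2f}, parasitic fraction = {1 - eta:.2f}")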

  6. [Grades evaluation of Scutellariae Radix slices based on quality constant].

    PubMed

    Deng, Zhe; Zhang, Jun; Jiao, Meng-Jiao; Zhong, Wen; Cui, Wen-Jin; Cheng, Jin-Tang; Chen, Sha; Wang, Yue-Sheng; Liu, An

    2017-05-01

    By measuring the morphological indexes and the marker component content of 22 batches of Scutellariae Radix slices and calculating the quality constant, this research aimed to establish a new method for evaluating the specifications and grades of Scutellariae Radix slices. The quality constants of these samples were in the range of 0.04-0.49, which can be divided into several grades based on actual requirements. If they were divided into three grades, the quality constant was ≥0.39 for the first grade, <0.39 but ≥0.24 for the second grade, and <0.24 for the third grade. This work indicated that quality constants characterizing both appearance parameters and intrinsic quality can be used as a comprehensive evaluation index to classify the grades of traditional Chinese medicine quantitatively, clearly and objectively. The research results in this paper provide new ideas and references for evaluating the specifications and grades of traditional Chinese medicines. Copyright© by the Chinese Pharmaceutical Association.
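
    The grade thresholds quoted above translate directly into a small classification helper. The Python sketch below uses the ≥0.39 and ≥0.24 cut-offs from the abstract; the example quality constants are arbitrary values within the reported 0.04-0.49 range, not the measured batches.

        # Sketch: three-grade classification by quality constant (thresholds from the abstract).
        def grade(quality_constant):
            if quality_constant >= 0.39:
                return "first grade"
            if quality_constant >= 0.24:
                return "second grade"
            return "third grade"

        for qc in [0.04, 0.25, 0.39, 0.49]:   # example values within the reported range
            print(f"quality constant {qc:.2f} -> {grade(qc)}")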

  7. Fast backprojection-based reconstruction of spectral-spatial EPR images from projections with the constant sweep of a magnetic field.

    PubMed

    Komarov, Denis A; Hirata, Hiroshi

    2017-08-01

    In this paper, we introduce a procedure for the reconstruction of spectral-spatial EPR images using projections acquired with the constant sweep of a magnetic field. The application of a constant field-sweep and a predetermined data sampling rate simplifies the requirements for EPR imaging instrumentation and facilitates the backprojection-based reconstruction of spectral-spatial images. The proposed approach was applied to the reconstruction of a four-dimensional numerical phantom and to actual spectral-spatial EPR measurements. Image reconstruction using projections with a constant field-sweep was three times faster than the conventional approach with the application of a pseudo-angle and a scan range that depends on the applied field gradient. Spectral-spatial EPR imaging with a constant field-sweep for data acquisition only slightly reduces the signal-to-noise ratio or functional resolution of the resultant images and can be applied together with any common backprojection-based reconstruction algorithm. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Molecular Dynamics Evaluation of Dielectric-Constant Mixing Rules for H2O-CO2 at Geologic Conditions

    PubMed Central

    Mountain, Raymond D.; Harvey, Allan H.

    2015-01-01

    Modeling of mineral reaction equilibria and aqueous-phase speciation of C-O-H fluids requires the dielectric constant of the fluid mixture, which is not known from experiment and is typically estimated by some rule for mixing pure-component values. In order to evaluate different proposed mixing rules, we use molecular dynamics simulation to calculate the dielectric constant of a model H2O–CO2 mixture at temperatures of 700 K and 1000 K at pressures up to 3 GPa. We find that theoretically based mixing rules that depend on combining the molar polarizations of the pure fluids systematically overestimate the dielectric constant of the mixture, as would be expected for mixtures of nonpolar and strongly polar components. The commonly used semiempirical mixing rule due to Looyenga works well for this system at the lower pressures studied, but somewhat underestimates the dielectric constant at higher pressures and densities, especially at the water-rich end of the composition range. PMID:26664009

  9. Molecular Dynamics Evaluation of Dielectric-Constant Mixing Rules for H2O-CO2 at Geologic Conditions.

    PubMed

    Mountain, Raymond D; Harvey, Allan H

    2015-10-01

    Modeling of mineral reaction equilibria and aqueous-phase speciation of C-O-H fluids requires the dielectric constant of the fluid mixture, which is not known from experiment and is typically estimated by some rule for mixing pure-component values. In order to evaluate different proposed mixing rules, we use molecular dynamics simulation to calculate the dielectric constant of a model H2O-CO2 mixture at temperatures of 700 K and 1000 K at pressures up to 3 GPa. We find that theoretically based mixing rules that depend on combining the molar polarizations of the pure fluids systematically overestimate the dielectric constant of the mixture, as would be expected for mixtures of nonpolar and strongly polar components. The commonly used semiempirical mixing rule due to Looyenga works well for this system at the lower pressures studied, but somewhat underestimates the dielectric constant at higher pressures and densities, especially at the water-rich end of the composition range.
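
    For reference, the Looyenga rule mentioned above combines the pure-component dielectric constants through their cube roots, weighted by volume fraction. The Python sketch below contrasts it with a plain linear volume-fraction average; the pure-component values are placeholders, not the simulated H2O-CO2 data, and the use of volume fractions as the composition variable is the usual convention rather than a detail taken from the abstract.

        # Sketch: Looyenga mixing rule eps_mix^(1/3) = sum_i phi_i * eps_i^(1/3),
        # compared with a linear volume-fraction average (placeholder inputs).
        def looyenga(eps1, eps2, phi1):
            phi2 = 1.0 - phi1
            return (phi1 * eps1 ** (1.0 / 3.0) + phi2 * eps2 ** (1.0 / 3.0)) ** 3

        def linear(eps1, eps2, phi1):
            return phi1 * eps1 + (1.0 - phi1) * eps2

        eps_water, eps_co2 = 20.0, 1.6      # hypothetical values at high T and P
        for phi_water in [0.0, 0.25, 0.5, 0.75, 1.0]:
            print(f"phi_H2O = {phi_water:.2f}  "
                  f"Looyenga = {looyenga(eps_water, eps_co2, phi_water):6.2f}  "
                  f"linear = {linear(eps_water, eps_co2, phi_water):6.2f}")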

  10. Jitter Controller Software

    NASA Technical Reports Server (NTRS)

    Lansdowne, Chatwin; Schlensinger, Adam

    2011-01-01

    Sinusoidal jitter is produced by simply modulating a clock frequency sinusoidally with a given frequency and amplitude. But this can be expressed as phase jitter, frequency jitter, or cycle-to-cycle jitter, rms or peak, in absolute units or normalized to the base clock frequency. Jitter using other waveforms requires calculating and downloading these waveforms to an arbitrary waveform generator, and helping the user manage relationships among phase jitter crest factor, frequency jitter crest factor, and cycle-to-cycle jitter (CCJ) crest factor. Software was developed for managing these relationships, automatically configuring the generator, and saving test results documentation. Tighter management of clock jitter and jitter sensitivity is required by new codes that further extend the already high performance of space communication links, completely correcting symbol error rates higher than 10 percent, and therefore typically requiring demodulation and symbol synchronization hardware to operate at signal-to-noise ratios of less than one. To accomplish this, greater demands are also made on transmitter performance, and measurement techniques are needed to confirm performance. It was discovered early that sinusoidal jitter can be stepped on a grid such that one can connect points by constant phase jitter, constant frequency jitter, or constant cycle-to-cycle jitter. The tool automates adherence to a grid while also allowing adjustments off-grid. Also, the jitter can be set by the user on any dimension and the others are calculated. The calculations are all recorded, allowing the data to be rapidly plotted or re-plotted against different interpretations just by changing pointers to columns. A key advantage is taking data on a carefully controlled grid, which allowed a single data set to be post-analyzed many different ways. Another innovation was building a software tool to provide very tight coupling among the generator, the recorded data product, and the operator's worksheet. Together, these allowed the operator to sweep the jitter stimulus quickly along any of three dimensions and focus on the response of the system under test (the response was the jitter transfer ratio, or the performance degradation of the symbol or codeword error rate). Additionally, managing multi-tone and noise waveforms automated a tedious manual process and provided almost instantaneous decision-making control over test flow. The code was written in LabVIEW and calls Agilent instrument drivers to write to the generator hardware.
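
    One way to see the relationships the tool manages is to generate a sinusoidal phase-jitter waveform numerically and derive peak frequency jitter and cycle-to-cycle jitter from it. The Python sketch below does this under a small-jitter approximation with placeholder clock and modulation parameters; it only illustrates the bookkeeping among the three jitter expressions and is not the LabVIEW tool itself.

        # Sketch: relationships among sinusoidal phase, frequency, and cycle-to-cycle jitter.
        # Small-jitter approximation: edge-time error dt_n ~ -phi(n/f0) / (2*pi*f0).
        import numpy as np

        f0 = 1.0e6        # base clock frequency, Hz (placeholder)
        fm = 1.0e3        # jitter modulation frequency, Hz (placeholder)
        A = 0.1           # peak phase jitter, radians (placeholder)

        n = np.arange(20000)                      # clock cycle index
        t_nom = n / f0                            # nominal edge times
        phi = A * np.sin(2 * np.pi * fm * t_nom)  # sinusoidal phase jitter at each edge
        dt = -phi / (2 * np.pi * f0)              # edge-time error (small-jitter approx.)

        periods = np.diff(t_nom + dt)             # actual clock periods
        period_jitter = periods - 1.0 / f0
        ccj = np.diff(periods)                    # cycle-to-cycle jitter

        print("peak phase jitter   :", A, "rad")
        print("peak freq jitter    ~", A * fm, "Hz (analytic, = A*fm for A in radians)")
        print("peak period jitter  :", np.max(np.abs(period_jitter)), "s")
        print("peak cycle-to-cycle :", np.max(np.abs(ccj)), "s")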

  11. Slow Off-rates and Strong Product Binding Are Required for Processivity and Efficient Degradation of Recalcitrant Chitin by Family 18 Chitinases*

    PubMed Central

    Kurašin, Mihhail; Kuusk, Silja; Kuusk, Piret; Sørlie, Morten; Väljamäe, Priit

    2015-01-01

    Processive glycoside hydrolases are the key components of enzymatic machineries that decompose recalcitrant polysaccharides, such as chitin and cellulose. The intrinsic processivity (PIntr) of cellulases has been shown to be governed by the rate constant of dissociation from polymer chain (koff). However, the reported koff values of cellulases are strongly dependent on the method used for their measurement. Here, we developed a new method for determining koff, based on measuring the exchange rate of the enzyme between a non-labeled and a 14C-labeled polymeric substrate. The method was applied to the study of the processive chitinase ChiA from Serratia marcescens. In parallel, ChiA variants with weaker binding of the N-acetylglucosamine unit either in substrate-binding site −3 (ChiA-W167A) or the product-binding site +1 (ChiA-W275A) were studied. Both ChiA variants showed increased off-rates and lower apparent processivity on α-chitin. The rate of the production of insoluble reducing groups on the reduced α-chitin was an order of magnitude higher than koff, suggesting that the enzyme can initiate several processive runs without leaving the substrate. On crystalline chitin, the general activity of the wild type enzyme was higher, and the difference was magnifying with hydrolysis time. On amorphous chitin, the variants clearly outperformed the wild type. A model is proposed whereby strong interactions with polymer in the substrate-binding sites (low off-rates) and strong binding of the product in the product-binding sites (high pushing potential) are required for the removal of obstacles, like disintegration of chitin microfibrils. PMID:26468285

  12. Slow Off-rates and Strong Product Binding Are Required for Processivity and Efficient Degradation of Recalcitrant Chitin by Family 18 Chitinases.

    PubMed

    Kurašin, Mihhail; Kuusk, Silja; Kuusk, Piret; Sørlie, Morten; Väljamäe, Priit

    2015-11-27

    Processive glycoside hydrolases are the key components of enzymatic machineries that decompose recalcitrant polysaccharides, such as chitin and cellulose. The intrinsic processivity (P(Intr)) of cellulases has been shown to be governed by the rate constant of dissociation from polymer chain (koff). However, the reported koff values of cellulases are strongly dependent on the method used for their measurement. Here, we developed a new method for determining koff, based on measuring the exchange rate of the enzyme between a non-labeled and a (14)C-labeled polymeric substrate. The method was applied to the study of the processive chitinase ChiA from Serratia marcescens. In parallel, ChiA variants with weaker binding of the N-acetylglucosamine unit either in substrate-binding site -3 (ChiA-W167A) or the product-binding site +1 (ChiA-W275A) were studied. Both ChiA variants showed increased off-rates and lower apparent processivity on α-chitin. The rate of the production of insoluble reducing groups on the reduced α-chitin was an order of magnitude higher than koff, suggesting that the enzyme can initiate several processive runs without leaving the substrate. On crystalline chitin, the general activity of the wild type enzyme was higher, and the difference was magnifying with hydrolysis time. On amorphous chitin, the variants clearly outperformed the wild type. A model is proposed whereby strong interactions with polymer in the substrate-binding sites (low off-rates) and strong binding of the product in the product-binding sites (high pushing potential) are required for the removal of obstacles, like disintegration of chitin microfibrils. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.

  13. Model calibration and validation for OFMSW and sewage sludge co-digestion reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Esposito, G., E-mail: giovanni.esposito@unicas.it; Frunzo, L., E-mail: luigi.frunzo@unina.it; Panico, A., E-mail: anpanico@unina.it

    2011-12-15

    Highlights: > Disintegration is the limiting step of the anaerobic co-digestion process. > Disintegration kinetic constant does not depend on the waste particle size. > Disintegration kinetic constant depends only on the waste nature and composition. > The model calibration can be performed on organic waste of any particle size. - Abstract: A mathematical model has recently been proposed by the authors to simulate the biochemical processes that prevail in a co-digestion reactor fed with sewage sludge and the organic fraction of municipal solid waste. This model is based on the Anaerobic Digestion Model no. 1 of the International Water Association, which has been extended to include the co-digestion processes, using surface-based kinetics to model the organic waste disintegration and conversion to carbohydrates, proteins and lipids. When organic waste solids are present in the reactor influent, the disintegration process is the rate-limiting step of the overall co-digestion process. The main advantage of the proposed modeling approach is that the kinetic constant of such a process does not depend on the waste particle size distribution (PSD) and rather depends only on the nature and composition of the waste particles. The model calibration aimed at assessing the kinetic constant of the disintegration process can therefore be conducted using organic waste samples of any PSD, and the resulting value will be suitable for all the organic wastes of the same nature as the investigated samples, independently of their PSD. This assumption was proven in this study by biomethane potential experiments that were conducted on organic waste samples with different particle sizes. The results of these experiments were used to calibrate and validate the mathematical model, resulting in a good agreement between the simulated and observed data for any investigated particle size of the solid waste. This study confirms the strength of the proposed model and calibration procedure, which can thus be used to assess the treatment efficiency and predict the methane production of full-scale digesters.
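
    The surface-based kinetics referred to above can be illustrated with a shrinking-sphere sketch in which the disintegration rate is proportional to the exposed particle surface. The Python code below uses a hypothetical surface-based rate constant and two different initial particle radii to show that the same constant describes both sizes, which is the property the calibration procedure relies on; the spherical geometry, rate value, and density are assumptions for illustration, not the model equations of the paper.

        # Sketch: surface-based disintegration kinetics, dM/dt = -k_sbk * A(t),
        # for spherical particles of two different initial sizes (same k_sbk).
        import numpy as np

        k_sbk = 0.05       # surface-based kinetic constant, kg m^-2 day^-1 (placeholder)
        rho = 1000.0       # particle density, kg/m^3 (placeholder)

        def mass_fraction_remaining(r0_m, days):
            """Shrinking-sphere model: dr/dt = -k_sbk/rho, so r(t) = r0 - k_sbk*t/rho."""
            r = np.maximum(r0_m - k_sbk * days / rho, 0.0)
            return (r / r0_m) ** 3

        t = np.array([0.0, 5.0, 10.0, 20.0])       # days
        for r0_mm in [1.0, 5.0]:                   # two initial particle radii, millimetres
            frac = mass_fraction_remaining(r0_mm * 1e-3, t)
            print(f"r0 = {r0_mm} mm:", [f"{f:.2f}" for f in frac])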

  14. Electric Double-Layer Interaction between Dissimilar Charge-Conserved Conducting Plates.

    PubMed

    Chan, Derek Y C

    2015-09-15

    Small metallic particles used to form nanostructured materials that impart novel optical, catalytic, or tribo-rheological properties can be modeled as conducting particles with equipotential surfaces that carry a net surface charge. The value of the surface potential will vary with the separation between interacting particles, and in the absence of charge-transfer or electrochemical reactions across the particle surface, the total charge of each particle must also remain constant. These two physical conditions require the electrostatic boundary condition for metallic nanoparticles to satisfy an equipotential, whole-of-particle charge conservation constraint that has not been studied previously. This constraint gives rise to a globally charge-conserved constant potential boundary condition that results in multibody effects in the electric double-layer interaction that are either absent or very small under the familiar constant potential, constant charge, or surface electrochemical equilibrium conditions.

  15. Dynamical approach to the cosmological constant.

    PubMed

    Mukohyama, Shinji; Randall, Lisa

    2004-05-28

    We consider a dynamical approach to the cosmological constant. There is a scalar field with a potential whose minimum occurs at a generic, but negative, value for the vacuum energy, and it has a nonstandard kinetic term whose coefficient diverges at zero curvature as well as the standard kinetic term. Because of the divergent coefficient of the kinetic term, the lowest energy state is never achieved. Instead, the cosmological constant automatically stalls at or near zero. The merit of this model is that it is stable under radiative corrections and leads to stable dynamics, despite the singular kinetic term. The model is not complete, however, in that some reheating is required. Nonetheless, our approach can at the very least reduce fine-tuning by 60 orders of magnitude or provide a new mechanism for sampling possible cosmological constants and implementing the anthropic principle.

  16. Economic comparison of conventional maintenance and electrochemical oxidation to warrant water safety in dental unit water lines.

    PubMed

    Fischer, Sebastian; Meyer, Georg; Kramer, Axel

    2012-01-01

    In preparation for implementation of a central water processing system at a dental department, we analyzed the costs of conventional decentralized disinfection of dental units against a central water treatment concept based on electrochemical disinfection. The cost evaluation included only the costs of annually required antimicrobial consumables and additional water usage of a decentralized conventional maintenance system for dental water lines built into the respective dental units and of the central electrochemical water disinfection system, BLUE SAFETY™ Technologies. In total, analysis of the costs of 6 dental departments revealed additional annual costs for hygienic preventive measures of € 4,448.37. For the BLUE SAFETY™ Technology, the additional annual total agent consumption costs were € 2.18, accounting for approximately 0.05% of the annual total agent consumption costs of the conventional maintenance system. For both water processing concepts, the additional costs for energy could not be calculated, since the required data was not obtainable from the manufacturers. For both concepts, the investment and maintenance costs were also not calculated due to a lack of manufacturer's data. Therefore, the results indicate the difference in costs for the required consumables only. Aside from the significantly lower annual costs for required consumables and disinfectants, a second advantage of the BLUE SAFETY™ Technology is its constant and automatic operation, which does not require additional staff resources. This not only saves human resources but also adds to the cost savings. Since the antimicrobial disinfection capacity of the BLUE SAFETY™ Technology was demonstrated previously and is well known, this technology, which is comparable or even superior in its non-corrosive effect, may be regarded as the method of choice for continuous disinfection and prevention of biofilm formation in dental unit water lines.

  17. Economic comparison of conventional maintenance and electrochemical oxidation to warrant water safety in dental unit water lines

    PubMed Central

    Fischer, Sebastian; Meyer, Georg; Kramer, Axel

    2012-01-01

    Background: In preparation for implementation of a central water processing system at a dental department, we analyzed the costs of conventional decentralized disinfection of dental units against a central water treatment concept based on electrochemical disinfection. Methods: The cost evaluation included only the costs of annually required antimicrobial consumables and additional water usage of a decentralized conventional maintenance system for dental water lines built into the respective dental units and of the central electrochemical water disinfection system, BLUE SAFETY™ Technologies. Results: In total, analysis of the costs of 6 dental departments revealed additional annual costs for hygienic preventive measures of € 4,448.37. For the BLUE SAFETY™ Technology, the additional annual total agent consumption costs were € 2.18, accounting for approximately 0.05% of the annual total agent consumption costs of the conventional maintenance system. For both water processing concepts, the additional costs for energy could not be calculated, since the required data was not obtainable from the manufacturers. Discussion: For both concepts, the investment and maintenance costs were not calculated due to a lack of manufacturer's data. Therefore, the results indicate the difference in costs for the required consumables only. Aside from the significantly lower annual costs for required consumables and disinfectants, a second advantage of the BLUE SAFETY™ Technology is its constant and automatic operation, which does not require additional staff resources. This not only saves human resources but also adds to the cost savings. Conclusion: Since the antimicrobial disinfection capacity of the BLUE SAFETY™ Technology was demonstrated previously and is well known, this technology, which is comparable or even superior in its non-corrosive effect, may be regarded as the method of choice for continuous disinfection and prevention of biofilm formation in dental unit water lines. PMID:22558042

  18. Current Requirements of the Society to the Professional Training of Specialists in Information Technology Industry in Japan

    ERIC Educational Resources Information Center

    Pododimenko, Inna

    2014-01-01

    The problem of professional training of skilled human personnel in the industry of information communication technology, the urgency of which is recognized at the state level of Ukraine and the world, has been considered. It has been traced that constantly growing requirements of the labour market, swift scientific progress require the use of…

  19. Nonlinear Localized Dissipative Structures for Long-Time Solution of Wave Equation

    DTIC Science & Technology

    2009-07-01

    are described in this chapter. These details are required to compute interference. WC can be used to generate constant arrival time (Eikonal phase...complicated using Eikonal schemes. Some recent developments in Eikonal methods [2] can treat multiple arrival times, but these methods require extra

  20. Potential of solar-simulator-pumped alexandrite lasers

    NASA Technical Reports Server (NTRS)

    Deyoung, Russell J.

    1990-01-01

    An attempt was made to pump an alexandrite laser rod using a Tamarak solar simulator and also a tungsten-halogen lamp. A very low-loss optical laser cavity was used to minimize the threshold pumping-power requirement. Lasing was not achieved. The laser threshold optical-power requirement was calculated to be approximately 626 W/sq cm for a gain length of 7.6 cm, whereas the Tamarak simulator produces 1150 W/sq cm over a gain length of 3.3 cm, which is less than the 1442 W/sq cm required to reach laser threshold over that length. The rod was optically pulsed with 200 msec pulses, which allowed the alexandrite rod to operate at near room temperature. The optical intensity-gain-length product needed to reach laser threshold should be approximately 35,244 solar constants-cm; in the present setup, this product was 28,111 solar constants-cm.
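
    The intensity-gain-length arithmetic above can be checked with a few lines of Python. The conversion assumes one solar constant of roughly 0.135 W/sq cm, which reproduces the quoted figures to within rounding.

        # Check of the intensity-gain-length products quoted above.
        SOLAR_CONSTANT_W_PER_CM2 = 0.135          # approx. 1 solar constant (assumed value)

        threshold = 626.0 * 7.6                   # W/cm: threshold intensity x gain length
        available = 1150.0 * 3.3                  # W/cm: simulator intensity x gain length

        print("threshold product :", round(threshold / SOLAR_CONSTANT_W_PER_CM2), "solar constants-cm")
        print("available product :", round(available / SOLAR_CONSTANT_W_PER_CM2), "solar constants-cm")
        print("intensity needed over 3.3 cm:", round(threshold / 3.3), "W/sq cm")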
