Estimates of the maximum time required to originate life
NASA Technical Reports Server (NTRS)
Oberbeck, Verne R.; Fogleman, Guy
1989-01-01
Fossils of the oldest microorganisms exist in 3.5 billion year old rocks and there is indirect evidence that life may have existed 3.8 billion years ago (3.8 Ga). Impacts able to destroy life or interrupt prebiotic chemistry may have occurred after 3.5 Ga. If large impactors vaporized the oceans, sterilized the planet, and interfered with the origination of life, life must have originated in the time interval between these impacts, which increased with geologic time. Therefore, the maximum time required for the origination of life is the time that occurred between sterilizing impacts just before 3.8 Ga or 3.5 Ga, depending upon when life first appeared on earth. If life first originated 3.5 Ga, and impacts with kinetic energies between 2 x 10 to the 34th and 2 x 10 to the 35th were able to vaporize the oceans, using the most probable impact flux, it is found that the maximum time required to originate life would have been 67 to 133 million years (My). If life originated 3.8 Ga, the maximum time to originate life was 2.5 to 11 My. Using a more conservative estimate for the flux of impacting objects before 3.8 Ga, a maximum time of 25 My was found for the same range of impactor kinetic energies. The impact model suggests that it is possible that life may have originated more than once.
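A minimal worked sketch of this timing argument, under stated assumptions: with an assumed exponentially decaying rate of ocean-vaporizing impacts (the rate constant and decay time below are placeholders chosen only to land near the abstract's order of magnitude, not the paper's flux model), the mean quiet interval between sterilizing impacts at a given epoch is the maximum time available to originate life.

```python
import math

def impact_rate_per_myr(t_ga, rate_at_4_4_ga=80.0, decay_tau_myr=100.0):
    """Assumed rate (impacts/Myr) of impacts energetic enough to vaporize the
    oceans at epoch t_ga (billions of years before present), decaying
    exponentially from 4.4 Ga. Placeholder parameters, not the paper's flux."""
    elapsed_myr = (4.4 - t_ga) * 1000.0
    return rate_at_4_4_ga * math.exp(-elapsed_myr / decay_tau_myr)

def quiet_interval_myr(t_ga):
    """Mean interval between sterilizing impacts at epoch t_ga; this interval
    is the maximum time available for life to originate before the next one."""
    return 1.0 / impact_rate_per_myr(t_ga)

for epoch_ga in (3.8, 3.5):
    print(f"{epoch_ga} Ga: ~{quiet_interval_myr(epoch_ga):.0f} Myr between sterilizing impacts")
```

With these placeholder values the quiet interval works out to roughly 5 Myr at 3.8 Ga and roughly 100 Myr at 3.5 Ga, illustrating why the earlier the first appearance of life is placed, the shorter the maximum origination time becomes.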
42 CFR 457.560 - Cumulative cost-sharing maximum.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 4 2010-10-01 2010-10-01 false Cumulative cost-sharing maximum. 457.560 Section... State Plan Requirements: Enrollee Financial Responsibilities § 457.560 Cumulative cost-sharing maximum... writing and orally if appropriate of their individual cumulative cost-sharing maximum amount at the time...
Fitzgerald, Paul J
2014-07-01
It is of high clinical interest to better understand the timecourse through which psychiatric drugs produce their beneficial effects. While a rough estimate of the time lag between initiating monoaminergic antidepressant therapy and the onset of therapeutic effect in depressed subjects is two weeks, much less is known about when these drugs reach maximum effect. This paper briefly examines studies that directly address this question through long-term antidepressant administration to humans, while also putting forth a simple theoretical approach for estimating the time required for monoaminergic antidepressants to reach maximum therapeutic effect in humans. The theory invokes a comparison between speed of antidepressant drug response in humans and in rodents, focusing on the apparently greater speed in rodents. The principal argument is one of proportions, comparing earliest effects of these drugs in rodents and humans, versus their time to reach maximum effect in these organisms. If the proportionality hypothesis is even coarsely accurate, then applying these values, or to some degree their ranges, to the hypothesis may suggest that monoaminergic antidepressants require a number of years to reach maximum effect in humans, at least in some individuals.
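A small numerical sketch of the proportionality argument, using hypothetical round numbers rather than data from the paper: if the ratio of time-to-maximum-effect to time-to-onset is similar in rodents and humans, the human time to maximum effect can be projected from rodent values and the roughly two-week clinical onset lag noted above.

```python
# Hypothetical illustration of the proportionality argument; the rodent values
# below are assumed round numbers, not measurements from the paper.
def projected_human_time_to_max(rodent_onset_days, rodent_max_days, human_onset_days):
    # Proportionality hypothesis: human_max / human_onset ≈ rodent_max / rodent_onset
    return human_onset_days * (rodent_max_days / rodent_onset_days)

human_max_days = projected_human_time_to_max(
    rodent_onset_days=1.0,    # assumed earliest rodent behavioral effect
    rodent_max_days=60.0,     # assumed rodent time to maximum effect
    human_onset_days=14.0,    # rough two-week clinical onset lag
)
print(f"Projected human time to maximum effect: ~{human_max_days / 365:.1f} years")
```

With these illustrative inputs the projection lands in the range of a couple of years, consistent with the qualitative conclusion stated in the abstract.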
50 CFR 259.34 - Minimum and maximum deposits; maximum time to deposit.
Code of Federal Regulations, 2010 CFR
2010-10-01
... B objective. A time longer than 10 years, either by original scheduling or by subsequent extension... OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE AID TO FISHERIES CAPITAL CONSTRUCTION FUND...) Minimum annual deposit. The minimum annual (based on each party's taxable year) deposit required by the...
20 CFR 10.806 - How are the maximum fees defined?
Code of Federal Regulations, 2012 CFR
2012-04-01
... AMENDED Information for Medical Providers Medical Fee Schedule § 10.806 How are the maximum fees defined? For professional medical services, the Director shall maintain a schedule of maximum allowable fees... Procedural Terminology (HCPCS/CPT) code which represents the relative skill, effort, risk and time required...
20 CFR 10.806 - How are the maximum fees defined?
Code of Federal Regulations, 2014 CFR
2014-04-01
... AMENDED Information for Medical Providers Medical Fee Schedule § 10.806 How are the maximum fees defined? For professional medical services, the Director shall maintain a schedule of maximum allowable fees... Procedural Terminology (HCPCS/CPT) code which represents the relative skill, effort, risk and time required...
20 CFR 10.806 - How are the maximum fees defined?
Code of Federal Regulations, 2013 CFR
2013-04-01
... AMENDED Information for Medical Providers Medical Fee Schedule § 10.806 How are the maximum fees defined? For professional medical services, the Director shall maintain a schedule of maximum allowable fees... Procedural Terminology (HCPCS/CPT) code which represents the relative skill, effort, risk and time required...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-24
... trim tab relative to the elevator exceeds 1.0 degree (this is equal to a maximum displacement of 0.070'' at the trailing edge)... are no longer required after that time...
46 CFR 151.50-13 - Propylene oxide.
Code of Federal Regulations, 2012 CFR
2012-10-01
...) Pressure vessel cargo tanks shall meet the requirements of Class II pressure vessels. (2) Cargo tanks shall be designed for the maximum pressure expected to be encountered during loading, storing and... cargo piping shall be subjected to a hydrostatic test of 1 1/2 times the maximum pressure to which they...
46 CFR 151.50-13 - Propylene oxide.
Code of Federal Regulations, 2014 CFR
2014-10-01
...) Pressure vessel cargo tanks shall meet the requirements of Class II pressure vessels. (2) Cargo tanks shall be designed for the maximum pressure expected to be encountered during loading, storing and... cargo piping shall be subjected to a hydrostatic test of 1 1/2 times the maximum pressure to which they...
46 CFR 151.50-13 - Propylene oxide.
Code of Federal Regulations, 2010 CFR
2010-10-01
...) Pressure vessel cargo tanks shall meet the requirements of Class II pressure vessels. (2) Cargo tanks shall be designed for the maximum pressure expected to be encountered during loading, storing and... cargo piping shall be subjected to a hydrostatic test of 1 1/2 times the maximum pressure to which they...
46 CFR 151.50-13 - Propylene oxide.
Code of Federal Regulations, 2013 CFR
2013-10-01
...) Pressure vessel cargo tanks shall meet the requirements of Class II pressure vessels. (2) Cargo tanks shall be designed for the maximum pressure expected to be encountered during loading, storing and... cargo piping shall be subjected to a hydrostatic test of 1 1/2 times the maximum pressure to which they...
46 CFR 151.50-13 - Propylene oxide.
Code of Federal Regulations, 2011 CFR
2011-10-01
...) Pressure vessel cargo tanks shall meet the requirements of Class II pressure vessels. (2) Cargo tanks shall be designed for the maximum pressure expected to be encountered during loading, storing and... cargo piping shall be subjected to a hydrostatic test of 1 1/2 times the maximum pressure to which they...
A digital indicator for maximum windspeeds.
William B. Fowler
1969-01-01
A simple device for indicating maximum windspeed during a time interval is described. Use of a unijunction transistor, for voltage sensing, results in a stable comparison circuit and also reduces overall component requirements. Measurement is presented digitally in 1-mile-per-hour increments over the range of 0-51 m.p.h.
75 FR 1076 - Outer Continental Shelf Civil Penalties
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-08
...The Outer Continental Shelf Lands Act requires the MMS to review the maximum daily civil penalty assessment for violations of regulations governing oil and gas operations in the Outer Continental Shelf at least once every 3 years. This review ensures that the maximum penalty assessment reflects any increases in the Consumer Price Index as prepared by the Bureau of Labor Statistics, U.S. Department of Labor. After conducting the required review in August 2009, the MMS determined that no adjustment is necessary at this time.
21 CFR 1020.32 - Fluoroscopic equipment.
Code of Federal Regulations, 2011 CFR
2011-04-01
... information as required in § 1020.30(h). (h) Fluoroscopic irradiation time, display, and signal. (1)(i... irradiation time of the fluoroscopic tube. The maximum cumulative time of the timing device shall not exceed 5... preset cumulative irradiation-time. Such signal shall continue to sound while x-rays are produced until...
21 CFR 1020.32 - Fluoroscopic equipment.
Code of Federal Regulations, 2013 CFR
2013-04-01
... information as required in § 1020.30(h). (h) Fluoroscopic irradiation time, display, and signal. (1)(i... irradiation time of the fluoroscopic tube. The maximum cumulative time of the timing device shall not exceed 5... preset cumulative irradiation-time. Such signal shall continue to sound while x-rays are produced until...
21 CFR 1020.32 - Fluoroscopic equipment.
Code of Federal Regulations, 2014 CFR
2014-04-01
... information as required in § 1020.30(h). (h) Fluoroscopic irradiation time, display, and signal. (1)(i... irradiation time of the fluoroscopic tube. The maximum cumulative time of the timing device shall not exceed 5... preset cumulative irradiation-time. Such signal shall continue to sound while x-rays are produced until...
21 CFR 1020.32 - Fluoroscopic equipment.
Code of Federal Regulations, 2012 CFR
2012-04-01
... information as required in § 1020.30(h). (h) Fluoroscopic irradiation time, display, and signal. (1)(i... irradiation time of the fluoroscopic tube. The maximum cumulative time of the timing device shall not exceed 5... preset cumulative irradiation-time. Such signal shall continue to sound while x-rays are produced until...
18 CFR 12.38 - Time for inspections and reports.
Code of Federal Regulations, 2011 CFR
2011-04-01
... reaches its normal maximum surface elevation, whichever occurs first. (3) For any development not set... information and analyses required by § 12.37(b). (c) Extension of time. For good cause shown, the Regional...
Code of Federal Regulations, 2011 CFR
2011-10-01
... elastic expansion was determined at the time of the last test or retest by the water jacket method. (3) Either the average wall stress or the maximum wall stress does not exceed the wall stress limitation shown in the following table: Type of steel Average wall stress limitation Maximum wall stress...
24 CFR 982.629 - Homeownership option: Additional PHA requirements for family search and purchase.
Code of Federal Regulations, 2014 CFR
2014-04-01
... PHA requirements for family search and purchase. 982.629 Section 982.629 Housing and Urban Development...: Additional PHA requirements for family search and purchase. (a) The PHA may establish the maximum time for a family to locate a home, and to purchase the home. (b) The PHA may require periodic family reports on the...
24 CFR 982.629 - Homeownership option: Additional PHA requirements for family search and purchase.
Code of Federal Regulations, 2010 CFR
2010-04-01
... PHA requirements for family search and purchase. 982.629 Section 982.629 Housing and Urban Development...: Additional PHA requirements for family search and purchase. (a) The PHA may establish the maximum time for a family to locate a home, and to purchase the home. (b) The PHA may require periodic family reports on the...
24 CFR 982.629 - Homeownership option: Additional PHA requirements for family search and purchase.
Code of Federal Regulations, 2011 CFR
2011-04-01
... PHA requirements for family search and purchase. 982.629 Section 982.629 Housing and Urban Development...: Additional PHA requirements for family search and purchase. (a) The PHA may establish the maximum time for a family to locate a home, and to purchase the home. (b) The PHA may require periodic family reports on the...
24 CFR 982.629 - Homeownership option: Additional PHA requirements for family search and purchase.
Code of Federal Regulations, 2012 CFR
2012-04-01
... PHA requirements for family search and purchase. 982.629 Section 982.629 Housing and Urban Development...: Additional PHA requirements for family search and purchase. (a) The PHA may establish the maximum time for a family to locate a home, and to purchase the home. (b) The PHA may require periodic family reports on the...
24 CFR 982.629 - Homeownership option: Additional PHA requirements for family search and purchase.
Code of Federal Regulations, 2013 CFR
2013-04-01
... PHA requirements for family search and purchase. 982.629 Section 982.629 Housing and Urban Development...: Additional PHA requirements for family search and purchase. (a) The PHA may establish the maximum time for a family to locate a home, and to purchase the home. (b) The PHA may require periodic family reports on the...
Design study of steel V-Belt CVT for electric vehicles
NASA Technical Reports Server (NTRS)
Swain, J. C.; Klausing, T. A.; Wilcox, J. P.
1980-01-01
A continuously variable transmission (CVT) design layout was completed. The intended application was for coupling the flywheel to the driveline of a flywheel battery hybrid electric vehicle. The requirements were that the CVT accommodate flywheel speeds from 14,000 to 28,000 rpm and driveline speeds of 850 to 5000 rpm without slipping. Below 850 rpm a slipping clutch was used between the CVT and the driveline. The CVT was required to accommodate 330 ft-lb maximum torque and 100 hp maximum transient. The weighted average power was 22 hp, the maximum allowable full range shift time was 2 seconds and the required life was 2600 hours. The resulting design utilized two steel V-belts in series to accommodate the required wide speed ratio. The size of the CVT, including the slipping clutch, was 20.6 inches long, 9.8 inches high and 13.8 inches wide. The estimated weight was 155 lb. An overall potential efficiency of 95 percent was projected for the average power condition.
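The width of the required ratio range can be seen from a quick calculation implied by the quoted speeds; this is an illustration of the sizing arithmetic, not the report's own analysis.

```python
# Quick arithmetic on the speed-ratio range implied by the requirements quoted above.
flywheel_rpm = (14_000, 28_000)   # flywheel speed range
driveline_rpm = (850, 5_000)      # driveline speed range covered without slipping

ratio_high = flywheel_rpm[1] / driveline_rpm[0]   # largest reduction needed
ratio_low = flywheel_rpm[0] / driveline_rpm[1]    # smallest reduction needed
print(f"Required reduction: {ratio_low:.1f}:1 to {ratio_high:.1f}:1 "
      f"(overall spread ~{ratio_high / ratio_low:.1f}x)")
```

The roughly 12:1 spread between the smallest and largest reduction is well beyond what a single V-belt variator typically covers, which is consistent with the choice of two steel V-belts in series.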
5 CFR 338.601 - Prohibition of maximum-age requirements.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 5 Administrative Personnel 1 2013-01-01 2013-01-01 false Prohibition of maximum-age requirements... REGULATIONS QUALIFICATION REQUIREMENTS (GENERAL) Age Requirements § 338.601 Prohibition of maximum-age requirements. A maximum-age requirement may not be applied in either competitive or noncompetitive examinations...
5 CFR 338.601 - Prohibition of maximum-age requirements.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 5 Administrative Personnel 1 2012-01-01 2012-01-01 false Prohibition of maximum-age requirements... REGULATIONS QUALIFICATION REQUIREMENTS (GENERAL) Age Requirements § 338.601 Prohibition of maximum-age requirements. A maximum-age requirement may not be applied in either competitive or noncompetitive examinations...
5 CFR 338.601 - Prohibition of maximum-age requirements.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 5 Administrative Personnel 1 2014-01-01 2014-01-01 false Prohibition of maximum-age requirements... REGULATIONS QUALIFICATION REQUIREMENTS (GENERAL) Age Requirements § 338.601 Prohibition of maximum-age requirements. A maximum-age requirement may not be applied in either competitive or noncompetitive examinations...
The role of wellbore remediation on the evolution of groundwater quality from CO₂ and brine leakage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mansoor, Kayyum; Carroll, Susan A.; Sun, Yunwei
Long-term storage of CO₂ in underground reservoirs requires a careful assessment to evaluate risk to groundwater sources. The focus of this study is to assess time-frames required to restore water quality to pre-injection levels based on output from complex reactive transport simulations that exhibit plume retraction within a 200-year simulation period. We examined the relationship between plume volume, cumulative injected CO₂ mass, and permeability. The role of mitigation was assessed by projecting falloffs in plume volumes from their maximum peak levels with a Gaussian function to estimate plume recovery times to reach post-injection groundwater compositions. The results show a strong correlation between cumulative injected CO₂ mass and maximum plume pH volumes and a positive correlation between CO₂ flux, cumulative injected CO₂, and plume recovery times, with secondary dependence on permeability.
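The projection step can be illustrated with a small fitting sketch: fit a Gaussian falloff to the declining limb of a plume-volume time series and extrapolate the time at which the plume shrinks back toward its pre-injection size. The synthetic volumes and the 1% recovery threshold below are assumptions for illustration, not values or criteria from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_falloff(t, v_peak, t_peak, sigma):
    # Gaussian decline of plume volume from its peak
    return v_peak * np.exp(-((t - t_peak) ** 2) / (2.0 * sigma ** 2))

# Synthetic post-peak plume volumes (km^3), standing in for reactive-transport output
t_years = np.array([60, 80, 100, 120, 140, 160, 180, 200], dtype=float)
volume = np.array([9.8, 9.1, 7.8, 6.2, 4.5, 3.0, 1.9, 1.1])

popt, _ = curve_fit(gaussian_falloff, t_years, volume, p0=[10.0, 60.0, 60.0])
v_peak, t_peak, sigma = popt

# Assumed recovery criterion: plume volume below 1% of its fitted peak
recovery_fraction = 0.01
t_recovery = t_peak + sigma * np.sqrt(-2.0 * np.log(recovery_fraction))
print(f"Projected recovery at ~{t_recovery:.0f} years after injection start")
```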
Mohta, Medha; Agarwal, Deepti; Sethi, AK
2011-01-01
Needle-through-needle combined spinal–epidural (CSE) may cause significant delay in patient positioning resulting in settling down of spinal anaesthetic and unacceptably low block level. Bilateral hip flexion has been shown to extend the spinal block by flattening lumbar lordosis. However, patients with lower limb fractures cannot flex their injured limb. This study was conducted to find out if unilateral hip flexion could extend the level of spinal anaesthesia following a prolonged CSE technique. Fifty American Society of Anesthesiologists (ASA) I/II males with unilateral femur fracture were randomly allocated to Control or Flexion groups. Needle-through-needle CSE was performed in the sitting position at L2-3 interspace and 2.6 ml 0.5% hyperbaric bupivacaine injected intrathecally. Patients were made supine 4 min after the spinal injection or later if epidural placement took longer. The Control group patients (n=25) lay supine with legs straight, whereas the Flexion group patients (n=25) had their uninjured hip and knee flexed for 5 min. Levels of sensory and motor blocks and time to epidural drug requirement were recorded. There was no significant difference in sensory levels at different time-points; maximum sensory and motor blocks; times to achieve maximum blocks; and time to epidural drug requirement in two groups. However, four patients in the Control group in contrast to none in the Flexion group required epidural drug before start of surgery. Moreover, in the Control group four patients took longer than 30 min to achieve maximum sensory block. To conclude, unilateral hip flexion did not extend the spinal anaesthetic level; however, further studies are required to explore the potential benefits of this technique. PMID:21808396
Effects of accuracy constraints on reach-to-grasp movements in cerebellar patients.
Rand, M K; Shimansky, Y; Stelmach, G E; Bracha, V; Bloedel, J R
2000-11-01
Reach-to-grasp movements of patients with pathology restricted to the cerebellum were compared with those of normal controls. Two types of paradigms with different accuracy constraints were used to examine whether cerebellar impairment disrupts the stereotypic relationship between arm transport and grip aperture and whether the variability of this relationship is altered when greater accuracy is required. The movements were made to either a vertical dowel or to a cross bar of a small cross. All subjects were asked to reach for either target at a fast but comfortable speed, grasp the object between the index finger and thumb, and lift it a short distance off the table. In terms of the relationship between arm transport and grip aperture, the control subjects showed a high consistency in grip aperture and wrist velocity profiles from trial to trial for movements to both the dowel and the cross. The relationship between the maximum velocity of the wrist and the time at which grip aperture was maximal during the reach was highly consistent throughout the experiment. In contrast, the time of maximum grip aperture and maximum wrist velocity of the cerebellar patients was quite variable from trial to trial, and the relationship of these measurements also varied considerably. These abnormalities were present regardless of the accuracy requirement. In addition, the cerebellar patients required a significantly longer time to grasp and lift the objects than the control subjects. Furthermore, the patients exhibited a greater grip aperture during reach than the controls. These data indicate that the cerebellum contributes substantially to the coordination of movements required to perform reach-to-grasp movements. Specifically, the cerebellum is critical for executing this behavior with a consistent, well-timed relationship between the transport and grasp components. This contribution is apparent even when accuracy demands are minimal.
Times for interplanetary trips
NASA Technical Reports Server (NTRS)
Jones, R. T.
1976-01-01
The times required to travel to the various planets at an acceleration of one g are calculated. Surrounding gravitational fields are neglected except for a relatively short distance near take-off or landing. The orbit consists of an essentially straight line with the thrust directed toward the destination up to the halfway point, but in the opposite direction for the remainder so that the velocity is zero on arrival. A table lists the approximate times required, and also the maximum velocities acquired in light units v/c for the various planets.
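A worked example of the kinematics described above: accelerate at one g to the midpoint, then decelerate so the arrival velocity is zero. The sketch is non-relativistic, neglects gravity in transit, and uses a rough 0.5 AU Earth-Mars distance purely as an illustrative input.

```python
import math

G_ACCEL = 9.81          # m/s^2, one g
C_LIGHT = 2.998e8       # m/s
AU = 1.496e11           # m

def one_g_trip(distance_m):
    """Accelerate toward the target to the halfway point, then decelerate so the
    velocity is zero on arrival. Returns (trip time in days, peak v/c)."""
    half = distance_m / 2.0
    t_half = math.sqrt(2.0 * half / G_ACCEL)   # time to midpoint
    v_max = G_ACCEL * t_half                   # velocity at midpoint
    return 2.0 * t_half / 86400.0, v_max / C_LIGHT

# Example: Mars at an assumed typical distance of ~0.5 AU
days, v_over_c = one_g_trip(0.5 * AU)
print(f"Trip time ~{days:.1f} days, peak velocity ~{v_over_c:.4f} c")
```

For this assumed distance the trip takes about two days and the peak velocity stays well below 1% of light speed, which is why the light-unit column in such a table remains small for the inner planets.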
Cognitive performance in women with fibromyalgia: A case-control study.
Pérez de Heredia-Torres, Marta; Huertas-Hoyas, Elisabet; Máximo-Bocanegra, Nuria; Palacios-Ceña, Domingo; Fernández-De-Las-Peñas, César
2016-10-01
This study aimed to evaluate the differences in cognitive skills between women with fibromyalgia and healthy women, and the correlations between functional independence and cognitive limitations. A cross-sectional study was performed. Twenty women with fibromyalgia and 20 matched controls participated. Outcomes included the Numerical Pain Rating Scale, the Functional Independence Measure, the Fibromyalgia Impact Questionnaire and Gradior © software. The Student's t-test and the Spearman's rho test were applied to the data. Women affected required a greater mean time (P < 0.020) and maximum time (P < 0.015) during the attention test than the healthy controls. In the memory test they displayed greater execution errors (P < 0.001), minimal time (P < 0.001) and mean time (P < 0.001) whereas, in the perception tests, they displayed a greater mean time (P < 0.009) and maximum time (P < 0.048). Correlations were found between the domains of the functional independence measure and the cognitive abilities assessed. Women with fibromyalgia exhibited a decreased cognitive ability compared to healthy controls, which negatively affected the performance of daily activities, such as upper limb dressing, feeding and personal hygiene. Patients required more time to perform activities requiring both attention and perception, decreasing their functional independence. Also, they displayed greater errors when performing activities requiring the use of memory. Occupational therapists treating women with fibromyalgia should consider the negative impact of possible cognitive deficits on the performance of daily activities and offer targeted support strategies. © 2016 Occupational Therapy Australia.
29 CFR 1926.451 - General requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... least 4 times the maximum intended load applied or transmitted to it. (2) Direct connections to roofs... resisting at least 4 times the tipping moment imposed by the scaffold operating at the rated load of the hoist, or 1.5 (minimum) times the tipping moment imposed by the scaffold operating at the stall load of...
29 CFR 1926.451 - General requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... least 4 times the maximum intended load applied or transmitted to it. (2) Direct connections to roofs... resisting at least 4 times the tipping moment imposed by the scaffold operating at the rated load of the hoist, or 1.5 (minimum) times the tipping moment imposed by the scaffold operating at the stall load of...
29 CFR 1926.451 - General requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... least 4 times the maximum intended load applied or transmitted to it. (2) Direct connections to roofs... resisting at least 4 times the tipping moment imposed by the scaffold operating at the rated load of the hoist, or 1.5 (minimum) times the tipping moment imposed by the scaffold operating at the stall load of...
29 CFR 1926.451 - General requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... least 4 times the maximum intended load applied or transmitted to it. (2) Direct connections to roofs... resisting at least 4 times the tipping moment imposed by the scaffold operating at the rated load of the hoist, or 1.5 (minimum) times the tipping moment imposed by the scaffold operating at the stall load of...
29 CFR 1926.451 - General requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... least 4 times the maximum intended load applied or transmitted to it. (2) Direct connections to roofs... resisting at least 4 times the tipping moment imposed by the scaffold operating at the rated load of the hoist, or 1.5 (minimum) times the tipping moment imposed by the scaffold operating at the stall load of...
Deceleration-stats save much time during phototrophic culture optimization.
Hoekema, Sebastiaan; Rinzema, Arjen; Tramper, Johannes; Wijffels, René H; Janssen, Marcel
2014-04-01
In case of phototrophic cultures, photobioreactor costs contribute significantly to the total operating costs. Therefore one of the most important parameters to be determined is the maximum biomass production rate, if biomass or a biomass associated product is the desired product. This is traditionally determined in time consuming series of chemostat cultivations. The goal of this work is to assess the experimental time that can be saved by applying the deceleration stat (D-stat) technique to assess the maximum biomass production rate of a phototrophic cultivation system, instead of a series of chemostat cultures. A mathematical model developed by Geider and co-workers was adapted in order to describe the rate of photosynthesis as a function of the local light intensity. This is essential for the accurate description of biomass productivity in phototrophic cultures. The presented simulations demonstrate that D-stat experiments executed in the absence of pseudo steady-state (i.e., the arbitrary situation that the observed specific growth rate deviates <5% from the dilution rate) can still be used to accurately determine the maximum biomass productivity of the system. Moreover, this approach saves up to 94% of the time required to perform a series of chemostat experiments that has the same accuracy. In case more information on the properties of the system is required, the reduction in experimental time is reduced but still significant. © 2013 Wiley Periodicals, Inc.
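The idea can be sketched with a toy light-limited growth model; the growth law, maintenance term, parameter values, and ramp schedule below are assumptions for illustration, not the Geider-based model used in the paper. The dilution rate is ramped down slowly and the volumetric productivity D*X traced during the ramp locates the maximum biomass production rate in a single run.

```python
import math

def mu_gross(X, mu_max=1.5, I0=500.0, K_I=100.0, tau=50.0):
    """Assumed gross specific growth rate (1/day) under self-shading: the
    culture-average irradiance falls as biomass density X (g/L) rises."""
    y = tau * X
    I_avg = I0 * (1.0 - math.exp(-y)) / y if y > 0 else I0
    return mu_max * I_avg / (K_I + I_avg)

def run_dstat(D_start=1.2, D_end=0.05, days=60.0, dt=0.01, maintenance=0.1):
    """Deceleration-stat: ramp the dilution rate down slowly and record the
    maximum volumetric productivity D*X observed along the trajectory."""
    X, t = 0.1, 0.0
    best_prod, best_D = 0.0, D_start
    while t < days:
        D = D_start + (D_end - D_start) * t / days       # slow linear ramp-down
        X += (mu_gross(X) - maintenance - D) * X * dt     # simple Euler step
        if D * X > best_prod:
            best_prod, best_D = D * X, D
        t += dt
    return best_prod, best_D

prod, D_at_max = run_dstat()
print(f"Maximum productivity ~{prod:.3f} g/L/day near D = {D_at_max:.2f} 1/day")
```

The single ramp sweeps the dilution-rate range that would otherwise require one chemostat run per point, which is the source of the time savings claimed above.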
22 CFR 11.7 - Termination of eligibility.
Code of Federal Regulations, 2014 CFR
2014-04-01
...: Provided, however, That reasonable time spent in civilian Government service abroad (to a maximum of 2 years such service), including service as a Peace Corps volunteer, in required active military service...
22 CFR 11.7 - Termination of eligibility.
Code of Federal Regulations, 2011 CFR
2011-04-01
...: Provided, however, That reasonable time spent in civilian Government service abroad (to a maximum of 2 years such service), including service as a Peace Corps volunteer, in required active military service...
22 CFR 11.7 - Termination of eligibility.
Code of Federal Regulations, 2013 CFR
2013-04-01
...: Provided, however, That reasonable time spent in civilian Government service abroad (to a maximum of 2 years such service), including service as a Peace Corps volunteer, in required active military service...
22 CFR 11.7 - Termination of eligibility.
Code of Federal Regulations, 2012 CFR
2012-04-01
...: Provided, however, That reasonable time spent in civilian Government service abroad (to a maximum of 2 years such service), including service as a Peace Corps volunteer, in required active military service...
22 CFR 11.7 - Termination of eligibility.
Code of Federal Regulations, 2010 CFR
2010-04-01
...: Provided, however, That reasonable time spent in civilian Government service abroad (to a maximum of 2 years such service), including service as a Peace Corps volunteer, in required active military service...
DOT National Transportation Integrated Search
2013-08-01
Pilots that use an impairing medication to treat a medical condition are required to wait an appropriate amount of time after completing the treatment before returning to duty. However, toxicology findings for pilots involved in fatal aviation accidents...
Investment opportunity : the FPL low-cost solar dry kiln
George B. Harpole
1988-01-01
Two equations are presented that may be used to estimate a maximum investment limit and working capital requirements for the FPL low-cost solar dry kiln systems. The equations require data for drying cycle time, green lumber cost, and kiln-dried lumber costs. Results are intended to provide a preliminary estimate.
7 CFR 1738.30 - Rural broadband access loans and loan guarantees.
Code of Federal Regulations, 2011 CFR
2011-01-01
... institutional investor authorized by law to loan money, hereafter referred to as “lender”. At the time of... of the service area. The maximum population density requirement will be published by RUS in the... making, loan servicing, and other requirements of the jurisdiction in which the lender makes loans...
Code of Federal Regulations, 2010 CFR
2010-10-01
... components thereof shall be tested as required by this section. (b) Accumulators constructed as pressure... associated equipment components, including hydraulic steering gear, in lieu of being tested at the time of installation, may be shop tested by the manufacturer to 1 1/2 times the maximum allowable pressure of the system...
Code of Federal Regulations, 2012 CFR
2012-10-01
... components thereof shall be tested as required by this section. (b) Accumulators constructed as pressure... associated equipment components, including hydraulic steering gear, in lieu of being tested at the time of installation, may be shop tested by the manufacturer to 1 1/2 times the maximum allowable pressure of the system...
Code of Federal Regulations, 2011 CFR
2011-10-01
... components thereof shall be tested as required by this section. (b) Accumulators constructed as pressure... associated equipment components, including hydraulic steering gear, in lieu of being tested at the time of installation, may be shop tested by the manufacturer to 1 1/2 times the maximum allowable pressure of the system...
Code of Federal Regulations, 2014 CFR
2014-10-01
... components thereof shall be tested as required by this section. (b) Accumulators constructed as pressure... associated equipment components, including hydraulic steering gear, in lieu of being tested at the time of installation, may be shop tested by the manufacturer to 1 1/2 times the maximum allowable pressure of the system...
Code of Federal Regulations, 2013 CFR
2013-10-01
... components thereof shall be tested as required by this section. (b) Accumulators constructed as pressure... associated equipment components, including hydraulic steering gear, in lieu of being tested at the time of installation, may be shop tested by the manufacturer to 1 1/2 times the maximum allowable pressure of the system...
Wu, Rudolf S S; Siu, William H L; Shin, Paul K S
2005-01-01
A wide range of biological responses have been used to identify exposure to contaminants, monitor spatial and temporal changes in contamination levels, provide early warning of environmental deterioration and indicate occurrences of adverse ecological consequences. To be useful in environmental monitoring, a biological response must reflect the environmental stress over time in a quantitative way. We here argue that the time required for initial induction, maximum induction, adaptation and recovery of these stress responses must first be fully understood and considered before they can be used in environmental monitoring, or else erroneous conclusions (both false-negative and false-positive) may be drawn when interpreting results. In this study, data on initial induction, maximum induction, adaptation and recovery of stress responses at various biological hierarchies (i.e., molecular, biochemical, physiological, behavioral, cytological, population and community responses) upon exposure to environmentally relevant levels of contaminants (i.e., metals, oil, polycyclic aromatic hydrocarbons (PAHs), organochlorines, organophosphates, endocrine disruptors) were extracted from 922 papers in the biomarker literature and analyzed. Statistical analyses showed that: (a) many stress responses may decline with time after induction (i.e., adaptation), even if the level of stress remains constant; (b) times for maximum induction and recovery of biochemical responses are positively related; (c) there is no evidence to support the general belief that time for induction of responses at a lower biological hierarchy (i.e., molecular responses and biochemical responses) is shorter than that at higher hierarchy (i.e., physiological, cytological and behavioral responses), although longer recovery time is found for population and community responses; (d) there are significant differences in times required for induction and adaptation of biological responses caused by different types of contaminants; (e) times required for initial and maximum induction of physiological responses in fish are significantly longer than those in crustaceans; and (f) there is a paucity of data on adaptation and recovery of responses, especially those at population and community levels. The above analyses highlight: (1) the limitations and possible erroneous conclusions in the present use of biomarkers in biomonitoring programs, (2) the importance of understanding the details of temporal changes of biological responses before employing them in environmental management, and (3) the suitability of using specific animal groups as bioindicator species.
NASA Technical Reports Server (NTRS)
Battin, R. H.; Croopnick, S. R.; Edwards, J. A.
1977-01-01
The formulation of a recursive maximum likelihood navigation system employing reference position and velocity vectors as state variables is presented. Convenient forms of the required variational equations of motion are developed together with an explicit form of the associated state transition matrix needed to refer measurement data from the measurement time to the epoch time. Computational advantages accrue from this design in that the usual forward extrapolation of the covariance matrix of estimation errors can be avoided without incurring unacceptable system errors. Simulation data for earth orbiting satellites are provided to substantiate this assertion.
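The key bookkeeping step can be illustrated schematically: a measurement taken at time t is referred back to the epoch state through the state transition matrix, H_epoch = H_t Phi(t, t0), so the filter updates the epoch state and covariance directly rather than propagating the covariance forward. The linear one-dimensional model, matrices, and numbers below are assumptions chosen only to show this step, not the flight formulation.

```python
import numpy as np

def phi_linear(t, t0):
    """Assumed state transition matrix for a force-free model,
    state = [position, velocity] in one dimension."""
    dt = t - t0
    return np.array([[1.0, dt],
                     [0.0, 1.0]])

def epoch_update(x0, P0, z, t, t0, H_t, R):
    """One recursive update of the epoch state x0 (and covariance P0) from a
    measurement z taken at time t, without propagating P0 forward in time."""
    H0 = H_t @ phi_linear(t, t0)                 # sensitivity referred to epoch
    S = H0 @ P0 @ H0.T + R
    K = P0 @ H0.T @ np.linalg.inv(S)
    x0 = x0 + K @ (z - H0 @ x0)
    P0 = (np.eye(len(x0)) - K @ H0) @ P0
    return x0, P0

# Example: epoch estimate of [position, velocity]; one position-type measurement at t = 10 s
x0 = np.array([0.0, 1.0])
P0 = np.diag([100.0, 1.0])
H_t = np.array([[1.0, 0.0]])                     # measurement sees position at time t
z = np.array([12.3])
x0, P0 = epoch_update(x0, P0, z, t=10.0, t0=0.0, H_t=H_t, R=np.array([[0.5]]))
print("Updated epoch state:", x0)
```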
Direct measurement of a patient's entrance skin dose during pediatric cardiac catheterization
Sun, Lue; Mizuno, Yusuke; Iwamoto, Mari; Goto, Takahisa; Koguchi, Yasuhiro; Miyamoto, Yuka; Tsuboi, Koji; Chida, Koichi; Moritake, Takashi
2014-01-01
Children with complex congenital heart diseases often require repeated cardiac catheterization; however, children are more radiosensitive than adults. Therefore, radiation-induced carcinogenesis is an important consideration for children who undergo those procedures. We measured entrance skin doses (ESDs) using radio-photoluminescence dosimeter (RPLD) chips during cardiac catheterization for 15 pediatric patients (median age, 1.92 years; males, n = 9; females, n = 6) with cardiac diseases. Four RPLD chips were placed on the patient's posterior and right side of the chest. Correlations between maximum ESD and dose–area products (DAP), total number of frames, total fluoroscopic time, number of cine runs, cumulative dose at the interventional reference point (IRP), body weight, chest thickness, and height were analyzed. The maximum ESD was 80 ± 59 (mean ± standard deviation) mGy. Maximum ESD closely correlated with both DAP (r = 0.78) and cumulative dose at the IRP (r = 0.82). Maximum ESD for coiling and ballooning tended to be higher than that for ablation, balloon atrial septostomy, and diagnostic procedures. In conclusion, we directly measured ESD using RPLD chips and found that maximum ESD could be estimated in real-time using angiographic parameters, such as DAP and cumulative dose at the IRP. Children requiring repeated catheterizations would be exposed to high radiation levels throughout their lives, although treatment influences radiation dose. Therefore, the radiation dose associated with individual cardiac catheterizations should be analyzed, and the effects of radiation throughout the lives of such patients should be followed. PMID:24968708
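The real-time estimation idea mentioned in the conclusion amounts to a regression of maximum ESD on console-reported quantities such as DAP. The paired values below are synthetic placeholders, not the study's measurements; they only show the form of such a fit and its use.

```python
import numpy as np

dap_gycm2 = np.array([5.0, 12.0, 20.0, 33.0, 41.0, 55.0])      # dose-area product (synthetic)
esd_mgy   = np.array([14.0, 30.0, 52.0, 80.0, 105.0, 140.0])   # measured maximum ESD (synthetic)

slope, intercept = np.polyfit(dap_gycm2, esd_mgy, 1)
r = np.corrcoef(dap_gycm2, esd_mgy)[0, 1]
print(f"ESD ≈ {slope:.2f} * DAP + {intercept:.1f} mGy  (r = {r:.2f})")

# Real-time use: predict ESD for a live DAP reading from the angiography console
print(f"Predicted ESD at DAP = 25 Gy·cm²: {slope * 25 + intercept:.0f} mGy")
```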
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-29
... maximum time interval between any engine run-ups from idle and the minimum ambient temperature associated with that run-up interval. This limitation is necessary because we do not currently have any specific requirements for run-up procedures for engine ground operation in icing conditions. The engine run-up procedure...
Code of Federal Regulations, 2014 CFR
2014-01-01
... Requirements for Licensed Launch, Including Suborbital Launch I. General Information A. Mission description. 1.... Orbit altitudes (apogee and perigee). 2. Flight sequence. 3. Staging events and the time for each event... shall cover the range of launch trajectories, inclinations and orbits for which authorization is sought...
Code of Federal Regulations, 2013 CFR
2013-01-01
... Requirements for Licensed Launch, Including Suborbital Launch I. General Information A. Mission description. 1.... Orbit altitudes (apogee and perigee). 2. Flight sequence. 3. Staging events and the time for each event... shall cover the range of launch trajectories, inclinations and orbits for which authorization is sought...
Code of Federal Regulations, 2012 CFR
2012-01-01
... Requirements for Licensed Launch, Including Suborbital Launch I. General Information A. Mission description. 1.... Orbit altitudes (apogee and perigee). 2. Flight sequence. 3. Staging events and the time for each event... shall cover the range of launch trajectories, inclinations and orbits for which authorization is sought...
Code of Federal Regulations, 2011 CFR
2011-01-01
... Requirements for Licensed Launch, Including Suborbital Launch I. General Information A. Mission description. 1.... Orbit altitudes (apogee and perigee). 2. Flight sequence. 3. Staging events and the time for each event... shall cover the range of launch trajectories, inclinations and orbits for which authorization is sought...
14 CFR 61.163 - Aeronautical experience: Powered-lift category rating.
Code of Federal Regulations, 2013 CFR
2013-01-01
... time in a flight simulator or flight training device. (ii) A maximum of 50 hours of training in a flight simulator or flight training device may be credited toward the instrument flight time requirements... training center certificated under part 142 of this chapter. (iii) Training in a flight simulator or flight...
14 CFR 61.163 - Aeronautical experience: Powered-lift category rating.
Code of Federal Regulations, 2014 CFR
2014-01-01
... time in a flight simulator or flight training device. (ii) A maximum of 50 hours of training in a flight simulator or flight training device may be credited toward the instrument flight time requirements... training center certificated under part 142 of this chapter. (iii) Training in a flight simulator or flight...
14 CFR 61.163 - Aeronautical experience: Powered-lift category rating.
Code of Federal Regulations, 2012 CFR
2012-01-01
... time in a flight simulator or flight training device. (ii) A maximum of 50 hours of training in a flight simulator or flight training device may be credited toward the instrument flight time requirements... training center certificated under part 142 of this chapter. (iii) Training in a flight simulator or flight...
Abort Options for Human Missions to Earth-Moon Halo Orbits
NASA Technical Reports Server (NTRS)
Jesick, Mark C.
2013-01-01
Abort trajectories are optimized for human halo orbit missions about the translunar libration point (L2), with an emphasis on the use of free return trajectories. Optimal transfers from outbound free returns to L2 halo orbits are numerically optimized in the four-body ephemeris model. Circumlunar free returns are used for direct transfers, and cislunar free returns are used in combination with lunar gravity assists to reduce propulsive requirements. Trends in orbit insertion cost and flight time are documented across the southern L2 halo family as a function of halo orbit position and free return flight time. It is determined that the maximum amplitude southern halo incurs the lowest orbit insertion cost for direct transfers but the maximum cost for lunar gravity assist transfers. The minimum amplitude halo is the most expensive destination for direct transfers but the least expensive for lunar gravity assist transfers. The on-orbit abort costs for three halos are computed as a function of abort time and return time. Finally, an architecture analysis is performed to determine launch and on-orbit vehicle requirements for halo orbit missions.
A time motion study in the immunization clinic of a tertiary care hospital of Kolkata, West Bengal.
Chattopadhyay, Amitabha; Ghosh, Ritu; Maji, Sucharita; Ray, Tapobroto Guha; Lahiri, Saibendu Kumar
2012-01-01
A time and motion study is used to determine the amount of time required for a specific activity, work function, or mechanical process. Few such studies have been reported in the outpatient department of institutions, and such studies based exclusively on immunization clinic of an institute is a rarity. This was an observational cross sectional study done in the immunization clinic of R.G. Kar Medical College, Kolkata, over a period of 1 month (September 2010). The study population included mother/caregivers attending the immunization clinics with their children. The total sample was 482. Pre-synchronized stopwatches were used to record service delivery time at the different activity points. Median time was the same for both initial registration table and nutrition and health education table (120 seconds), but the vaccination and post vaccination advice table took the highest percentage of overall time (46.3%). Maximum time spent on the vaccination and post vaccination advice table was on Monday (538.1 s) and nutritional assessment and health assessment table took maximum time on Friday (217.1 s). Time taken in the first half of immunization session was more in most of the tables. The goal for achieving universal immunization against vaccine-preventable diseases requires multifaceted collated response from many stakeholders. Efficient functioning of immunization clinics is therefore required to achieve the prescribed goals. This study aims to initiate an effort to study the utilization of time at a certain health care unit with the invitation of much more in depth analysis in future.
SubspaceEM: A Fast Maximum-a-posteriori Algorithm for Cryo-EM Single Particle Reconstruction
Dvornek, Nicha C.; Sigworth, Fred J.; Tagare, Hemant D.
2015-01-01
Single particle reconstruction methods based on the maximum-likelihood principle and the expectation-maximization (E–M) algorithm are popular because of their ability to produce high resolution structures. However, these algorithms are computationally very expensive, requiring a network of computational servers. To overcome this computational bottleneck, we propose a new mathematical framework for accelerating maximum-likelihood reconstructions. The speedup is by orders of magnitude and the proposed algorithm produces similar quality reconstructions compared to the standard maximum-likelihood formulation. Our approach uses subspace approximations of the cryo-electron microscopy (cryo-EM) data and projection images, greatly reducing the number of image transformations and comparisons that are computed. Experiments using simulated and actual cryo-EM data show that speedup in overall execution time compared to traditional maximum-likelihood reconstruction reaches factors of over 300. PMID:25839831
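A toy illustration of the general speedup idea (not the SubspaceEM algorithm itself): represent images and reference projections in a small basis learned from the data, then do image-to-reference comparisons in that subspace so each comparison costs on the order of the subspace dimension rather than the pixel count.

```python
import numpy as np

rng = np.random.default_rng(0)
n_images, n_pixels, k = 500, 4096, 20

images = rng.normal(size=(n_images, n_pixels))        # stand-in particle images
references = rng.normal(size=(64, n_pixels))          # stand-in reference projections

# Build a k-dimensional basis from the image stack (truncated SVD, mean-free here)
_, _, vt = np.linalg.svd(images, full_matrices=False)
basis = vt[:k]                                        # (k, n_pixels)

img_coeffs = images @ basis.T                         # (n_images, k)
ref_coeffs = references @ basis.T                     # (64, k)

# Squared distances computed entirely in the subspace
d2 = ((img_coeffs[:, None, :] - ref_coeffs[None, :, :]) ** 2).sum(axis=-1)
best_ref = d2.argmin(axis=1)                          # best-matching reference per image
print("Assignments for first 5 images:", best_ref[:5])
```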
Development of an Onboard Strain Recorder
1990-01-01
... the time the strain sensor is installed or calibrated. If a maximum stress or force is to be determined, careful structural analysis is required to... such as deckhouse edges have been instrumented as cracks appear. Extreme care concerning placement and orientation of sensor installation is required...
40 CFR 86.000-9 - Emission standards for 2000 and later model year light-duty trucks.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Fueled, Natural Gas-Fueled, Liquefied Petroleum Gas-Fueled and Methanol-Fueled Heavy-Duty Vehicles § 86... leanest air to fuel mixture required to obtain maximum torque (lean best torque), plus a tolerance of six... fuel ratio shall not be richer at any time than the leanest air to fuel mixture required to obtain...
40 CFR 86.000-9 - Emission standards for 2000 and later model year light-duty trucks.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Fueled, Natural Gas-Fueled, Liquefied Petroleum Gas-Fueled and Methanol-Fueled Heavy-Duty Vehicles § 86... leanest air to fuel mixture required to obtain maximum torque (lean best torque), plus a tolerance of six... fuel ratio shall not be richer at any time than the leanest air to fuel mixture required to obtain...
Spectrum formation in superluminous supernovae (Type I)
NASA Astrophysics Data System (ADS)
Mazzali, P. A.; Sullivan, M.; Pian, E.; Greiner, J.; Kann, D. A.
2016-06-01
The near-maximum spectra of most superluminous supernovae (SLSNe) that are not dominated by interaction with a H-rich circum-stellar medium (SLSN-I) are characterized by a blue spectral peak and a series of absorption lines which have been identified as O II. SN 2011kl, associated with the ultra-long gamma-ray burst GRB111209A, also had a blue peak but a featureless optical/ultraviolet (UV) spectrum. Radiation transport methods are used to show that the spectra (not including SN 2007bi, which has a redder spectrum at peak, like ordinary SNe Ic) can be explained by a rather steep density distribution of the ejecta, whose composition appears to be typical of carbon-oxygen cores of massive stars which can have low metal content. If the photospheric velocity is ~10 000-15 000 km s^-1, several lines form in the UV. O II lines, however, arise from very highly excited lower levels, which require significant departures from local thermodynamic equilibrium to be populated. These SLSNe are not thought to be powered primarily by ^56Ni decay. An appealing scenario is that they are energized by X-rays from the shock driven by a magnetar wind into the SN ejecta. The apparent lack of evolution of line velocity with time that characterizes SLSNe up to about maximum is another argument in favour of the magnetar scenario. The smooth UV continuum of SN 2011kl requires higher ejecta velocities (~20 000 km s^-1): line blanketing leads to an almost featureless spectrum. Helium is observed in some SLSNe after maximum. The high ionization near maximum implies that both He and H may be present but not observed at early times. The spectroscopic classification of SLSNe should probably reflect that of SNe Ib/c. Extensive time coverage is required for an accurate classification.
Preliminary studies of the effect of thinning techniques over muon production profiles
NASA Astrophysics Data System (ADS)
Tomishiyo, G.; Souza, V.
2017-06-01
In the context of air shower simulations, thinning techniques are employed to reduce computational time and storage requirements. These techniques are tailored to preserve locally mean quantities during shower development, such as the average number of particles in a given atmosphere layer, and to not induce systematic shifts in shower observables, such as the depth of shower maximum. In this work we investigate thinning effects on the determination of the depth at which the shower has the maximum muon production, X^μ_max (sim). We show preliminary results in which the thinning factor and maximum thinning weight might influence the determination of X^μ_max (sim).
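A schematic of the statistical thinning idea discussed above, as a simplified Hillas-style rule with assumed parameters (not the implementation in any particular shower code): below the thinning energy, keep each secondary with a probability proportional to its energy and carry a compensating weight 1/p, so weighted means of shower observables are preserved while the particle count drops; a weight cap limits the extra fluctuations thinning introduces.

```python
import random

def thin_secondaries(secondary_energies, parent_weight, e_thin, w_max=100.0):
    """Simplified Hillas-style thinning of one interaction's secondaries:
    below e_thin keep each particle with probability E/e_thin and weight 1/p,
    so weighted means are preserved; the cap w_max limits weight fluctuations."""
    kept = []
    for e in secondary_energies:
        if e >= e_thin:
            kept.append((e, parent_weight))        # above threshold: always kept
            continue
        p = e / e_thin                             # keep-probability
        new_weight = parent_weight / p
        if new_weight > w_max:
            kept.append((e, parent_weight))        # cap exceeded: keep unthinned
        elif random.random() < p:
            kept.append((e, new_weight))           # kept with compensating weight
    return kept

random.seed(1)
print(thin_secondaries([60.0, 25.0, 10.0, 5.0], parent_weight=1.0, e_thin=30.0))
```

The thinning factor (e_thin relative to the primary energy) and the maximum weight are exactly the two knobs whose influence on the simulated muon production depth is examined above.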
78 FR 28729 - Airworthiness Directives; The Boeing Company Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-16
... Boeing Company Model 757-200 and -200PF series airplanes. That AD currently requires modifying the... specifies a maximum compliance time limit that overrides the optional threshold formula results. This AD was... analytical loads that...
33 CFR 183.105 - Quantity of flotation required.
Code of Federal Regulations, 2010 CFR
2010-07-01
... of the dead weight; and (3) A weight in pounds that, when submerged, equals 62.4 times the volume in... of this section, “dead weight” means the maximum weight capacity marked on the boat minus the persons...
33 CFR 183.105 - Quantity of flotation required.
Code of Federal Regulations, 2014 CFR
2014-07-01
... of the dead weight; and (3) A weight in pounds that, when submerged, equals 62.4 times the volume in... of this section, “dead weight” means the maximum weight capacity marked on the boat minus the persons...
33 CFR 183.105 - Quantity of flotation required.
Code of Federal Regulations, 2012 CFR
2012-07-01
... of the dead weight; and (3) A weight in pounds that, when submerged, equals 62.4 times the volume in... of this section, “dead weight” means the maximum weight capacity marked on the boat minus the persons...
33 CFR 183.105 - Quantity of flotation required.
Code of Federal Regulations, 2011 CFR
2011-07-01
... of the dead weight; and (3) A weight in pounds that, when submerged, equals 62.4 times the volume in... of this section, “dead weight” means the maximum weight capacity marked on the boat minus the persons...
33 CFR 183.105 - Quantity of flotation required.
Code of Federal Regulations, 2013 CFR
2013-07-01
... of the dead weight; and (3) A weight in pounds that, when submerged, equals 62.4 times the volume in... of this section, “dead weight” means the maximum weight capacity marked on the boat minus the persons...
22 CFR 11.1 - Junior Foreign Service officer career candidate appointments.
Code of Federal Regulations, 2013 CFR
2013-04-01
... of the month in which the written examination was held. Time spent outside the United States and its... volunteer service, or required active regular or reserve military service (to a maximum of the limit of such...-order register 18 months after the date of placement on the rank-order register. Time spent in civilian...
22 CFR 11.1 - Junior Foreign Service officer career candidate appointments.
Code of Federal Regulations, 2012 CFR
2012-04-01
... of the month in which the written examination was held. Time spent outside the United States and its... volunteer service, or required active regular or reserve military service (to a maximum of the limit of such...-order register 18 months after the date of placement on the rank-order register. Time spent in civilian...
22 CFR 11.1 - Junior Foreign Service officer career candidate appointments.
Code of Federal Regulations, 2014 CFR
2014-04-01
... of the month in which the written examination was held. Time spent outside the United States and its... volunteer service, or required active regular or reserve military service (to a maximum of the limit of such...-order register 18 months after the date of placement on the rank-order register. Time spent in civilian...
22 CFR 11.1 - Junior Foreign Service officer career candidate appointments.
Code of Federal Regulations, 2011 CFR
2011-04-01
... of the month in which the written examination was held. Time spent outside the United States and its... volunteer service, or required active regular or reserve military service (to a maximum of the limit of such...-order register 18 months after the date of placement on the rank-order register. Time spent in civilian...
5 CFR 338.601 - Prohibition of maximum-age requirements.
Code of Federal Regulations, 2011 CFR
2011-01-01
... requirements. A maximum-age requirement may not be applied in either competitive or noncompetitive examinations for positions in the competitive service except as provided by: (a) Section 3307 of title 5, United States Code; or (b) Public Law 93-259 which authorizes OPM to establish a maximum-age requirement after...
5 CFR 338.601 - Prohibition of maximum-age requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
... requirements. A maximum-age requirement may not be applied in either competitive or noncompetitive examinations for positions in the competitive service except as provided by: (a) Section 3307 of title 5, United States Code; or (b) Public Law 93-259 which authorizes OPM to establish a maximum-age requirement after...
Assessing the Performance of LED-Based Flashlights Available in the Kenyan Off-Grid Lighting Market
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tracy, Jennifer; Jacobson, Arne; Mills, Evan
Low cost rechargeable flashlights that use LED technology are increasingly available in African markets. While LED technology holds promise to provide affordable, high quality lighting services, the widespread dissemination of low quality products may make it difficult to realize this potential. This study includes performance results for three brands of commonly available LED flashlights that were purchased in Kenya in 2009. The performance of the flashlights was evaluated by testing five units for each of the three brands. The tests included measurements of battery capacity, time required to charge the battery, maximum illuminance at one meter, operation time and lux-hours from a fully charged battery, light distribution, and color rendering. All flashlights tested performed well below the manufacturers' rated specifications; the measured battery capacity was 30-50% lower than the rated capacity and the time required to fully charge the battery was 6-25% greater than the rated time requirement. Our analysis further shows that within each brand there is considerable variability in each performance indicator. The five samples within a single brand varied from each other by as much as 22% for battery capacity measurements, 3.6% for the number of hours required for a full charge, 23% for maximum initial lux, 38% for run time, 11% for light distribution and by as much as 200% for color rendering. Results obtained are useful for creating a framework for quality assurance of off-grid LED products and will be valuable for informing consumers, distributors and product manufacturers about product performance.
Lin, Shi-Ming; Lin, Chen-Chun; Chen, Wei-Ting; Chen, Yi-Chen; Hsu, Chao-Wei
2007-09-01
To compare the effectiveness of ablation techniques for hepatocellular carcinoma (HCC) with the use of four radiofrequency (RF) devices. One hundred patients with 133 HCC lesions no larger than 4 cm were treated with one of four RF devices: RF 2000 (maximum power, 100 W) and RF 3000 generators (maximum power, 200 W) with LeVeen expandable electrodes with a maximum dimension of 3.5 cm or 4 cm, an internally cooled single electrode with a thermal dimension of 3 cm, and a RITA RF generator with expandable electrodes with a maximum dimension of 5 cm. The number of RF sessions needed per HCC to achieve complete necrosis was 1.4 +/- 0.5 with the RF 2000 device, greater than the 1.1 +/- 0.3 needed with the other three devices (P < .05). The RF 2000 device required a more interactive algorithm than the RF 3000 device. Session times per patient were 31.7 +/- 13.2 minutes in the RF 2000 group, longer than the 16.6 +/- 7.5 minutes in the RF 3000 group, 28.3 +/- 12 minutes in the RITA device group, and 27.1 +/- 12 minutes with the internally cooled electrode device (P < .005 for RF 2000 vs other devices and for RF 3000 vs RITA or internally cooled electrode device). Complete necrosis and local tumor progression rates at 2 years in the RF 2000, RF 3000, RITA, and internally cooled electrode device groups were 91.1%, 97.1%, 96.7%, and 96.8% and 12%, 8%, 8.2%, and 8.3%, respectively (P = .37). Ablation with the RF 3000 device required a shorter time than the other three devices and required a less interactive algorithm than the RF 2000 device. However, complete necrosis and local tumor progression rates were similar among devices.
Gessler, Tobias; Ghofrani, Hossein-Ardeschir; Held, Matthias; Klose, Hans; Leuchte, Hanno; Olschewski, Horst; Rosenkranz, Stephan; Fels, Lueder; Li, Na; Ren, Dawn; Kaiser, Andreas; Schultze-Mosgau, Marcus-Hillert; Müllinger, Bernhard; Rohde, Beate; Seeger, Werner
2017-01-01
The BREELIB nebulizer was developed for iloprost to reduce inhalation times for patients with pulmonary arterial hypertension (PAH). This multicenter, randomized, unblinded, four-part study compared inhalation time, pharmacokinetics, and acute tolerability of iloprost 5 µg at mouthpiece delivered via BREELIB versus the standard I-Neb nebulizer in 27 patients with PAH. The primary safety outcome was the proportion of patients with a maximum increase in heart rate (HR) ≥ 25% and/or a maximum decrease in systolic blood pressure ≥ 20% within 30 min after inhalation. Other safety outcomes included systolic, diastolic, and mean blood pressure, HR, oxygen saturation, and adverse events (AEs). Median inhalation times were considerably shorter with BREELIB versus I-Neb (2.6 versus 10.9 min; n = 24). Maximum iloprost plasma concentration and systemic exposure (area under the plasma concentration–time curve) were 77% and 42% higher, respectively, with BREELIB versus I-Neb. Five patients experienced a maximum systolic blood pressure decrease ≥ 20%, four with BREELIB (one mildly and transiently symptomatic), and one with I-Neb; none required medical intervention. AEs reported during the study were consistent with the known safety profile of iloprost. The BREELIB nebulizer offers reduced inhalation time, good tolerability, and may improve iloprost aerosol therapy convenience and thus compliance for patients with PAH. PMID:28597762
Electron Attenuation Measurement using Cosmic Ray Muons at the MicroBooNE LArTPC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meddage, Varuna
2017-10-01
The MicroBooNE experiment at Fermilab uses liquid argon time projection chamber (LArTPC) technology to study neutrino interactions in argon. A fundamental requirement for LArTPCs is to achieve and maintain a low level of electronegative contaminants in the liquid to minimize the capture of drifting ionization electrons. The attenuation time for the drifting electrons should be long compared to the maximum drift time, so that the signals from particle tracks that generate ionization electrons with long drift paths can be detected efficiently. In this talk we present the MicroBooNE measurement of electron attenuation using cosmic ray muons. The result yields a minimum electron 1/e lifetime of 18 ms under typical operating conditions, which is long compared to the maximum drift time of 2.3 ms.
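For a sense of scale, the sketch below applies the usual exponential charge-attenuation model Q(t) = Q0·exp(-t/τ) (an assumption; the abstract only quotes the two time constants) to the reported values.

```python
import math

tau_ms = 18.0        # measured minimum 1/e electron lifetime
t_drift_ms = 2.3     # maximum drift time

# Fraction of ionization charge surviving the longest drift under the
# exponential attenuation model.
surviving_fraction = math.exp(-t_drift_ms / tau_ms)
print(f"{surviving_fraction:.1%} of the charge survives the full drift")  # ~88%
```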
29 CFR 778.100 - The maximum-hours provisions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 3 2010-07-01 2010-07-01 false The maximum-hours provisions. 778.100 Section 778.100 Labor... Requirements Introductory § 778.100 The maximum-hours provisions. Section 7(a) of the Act deals with maximum... specifically exempt from its overtime pay requirements. It prescribes the maximum weekly hours of work...
Square cants from round bolts without slabs or sawdust
Peter Koch
1960-01-01
For maximum efficiency a headrig for converting bark-free bolts into cants must (1) have a fast cycle time, (2) require minimum handling of bolts and refuse, and (3) convert the volume represented by slabs and kerf into a salable byproduct.
Studies on maximum yield of wheat for the controlled environments of space
NASA Technical Reports Server (NTRS)
Bugbee, B. G.; Salisbury, F. B.
1986-01-01
The economic feasibility of using food-producing crop plants in a Closed Ecological Life-Support System (CELSS) will ultimately depend on the energy and area (or volume) required to provide the nutritional requirements for each person. Energy and area requirements are, to some extent, inversely related; that is, an increased energy input results in a decreased area requirement and vice versa. A major goal of the research effort was to determine the controlled-environment food-production efficiency of wheat per unit area, per unit time, and per unit energy input.
Times of Maximum Light: The Passband Dependence
NASA Astrophysics Data System (ADS)
Joner, M. D.; Laney, C. D.
2004-05-01
We present UBVRIJHK light curves for the dwarf Cepheid variable star AD Canis Minoris. These data are part of a larger project to determine absolute magnitudes for this class of stars. Our figures clearly show changes in the times of maximum light, the amplitude, and the light curve morphology that are dependent on the passband used in the observation. Note that when data from a variety of passbands are used in studies that require a period analysis or that search for small changes in the pulsational period, it is easy to introduce significant systematic errors into the results. We thank the Brigham Young University Department of Physics and Astronomy for continued support of our research. We also acknowledge the South African Astronomical Observatory for time granted to this project.
Botanical Evidence of the Modern History of Nisqually Glacier, Washington
Sigafoos, Robert S.; Hendricks, E.L.
1961-01-01
A knowledge of the areas once occupied by mountain glaciers reveals at least part of the past behavior of these glaciers. From this behavior, inferences of past climate can be drawn. The maximum advance of Nisqually Glacier in the last thousand years was located, and retreat from this point is believed to have started about 1840. The maximum downvalley position of the glacier is marked by either a prominent moraine or by a line of difference between stands of trees of strikingly different size and significantly different age. The thousand-year age of the forest beyond the moraine or line between abutting stands represents the minimum time since the surface was glaciated. This age is based on the age of the oldest trees, plus an estimated interval required for the formation of humus, plus evidence of an ancient fire, plus an interval of deposition of pyroclastics. The estimate of the date when Nisqually Glacier began to retreat from its maximum advance is based upon the ages of the oldest trees plus an interval of 5 years estimated as the time required for the establishment of trees on stable moraines. This interval was derived from a study of the ages of trees growing at locations of known past positions of the glacier. Reconnaissance studies were made on moraines formed by Emmons and Tahoma Glaciers. Preliminary analyses of these data suggest that Emmons Glacier started to recede from its maximum advance in about 1745. Two other upvalley moraines mark positions from which recession started about 1849 and 1896. Ages of trees near Tahoma Glacier indicate that it started to recede from its position of maximum advance in about 1635. About 1835 Tahoma Glacier started to recede again from another moraine formed by a readvance that terminated near the 1635 position.
Role of step size and max dwell time in anatomy based inverse optimization for prostate implants
Manikandan, Arjunan; Sarkar, Biplab; Rajendran, Vivek Thirupathur; King, Paul R.; Sresty, N.V. Madhusudhana; Holla, Ragavendra; Kotur, Sachin; Nadendla, Sujatha
2013-01-01
In high dose rate (HDR) brachytherapy, the source dwell times and dwell positions are vital parameters in achieving a desirable implant dose distribution. Inverse treatment planning requires an optimal choice of these parameters to achieve the desired target coverage with the lowest achievable dose to the organs at risk (OAR). This study was designed to evaluate the optimum source step size and maximum source dwell time for prostate brachytherapy implants using an Ir-192 source. In total, one hundred inverse treatment plans were generated for the four patients included in this study. Twenty-five treatment plans were created for each patient by varying the step size and maximum source dwell time during anatomy-based, inverse-planned optimization. Other relevant treatment planning parameters were kept constant, including the dose constraints and source dwell positions. Each plan was evaluated for target coverage, urethral and rectal dose sparing, treatment time, relative target dose homogeneity, and nonuniformity ratio. The plans with 0.5 cm step size were seen to have clinically acceptable tumor coverage, minimal normal structure doses, and minimum treatment time as compared with the other step sizes. The target coverage for this step size was 87% of the prescription dose, while the urethral and maximum rectal doses were 107.3 and 68.7%, respectively. No appreciable difference in plan quality was observed with variation in maximum source dwell time. The step size plays a significant role in plan optimization for prostate implants. Our study supports use of a 0.5 cm step size for prostate implants. PMID:24049323
48 CFR 7.107 - Additional requirements for acquisitions involving bundling.
Code of Federal Regulations, 2010 CFR
2010-10-01
... the Government. However, because of the potential impact on small business participation, the head of... performance or efficiency, reduction in acquisition cycle times, better terms and conditions, and any other...; and (2) The acquisition strategy provides for maximum practicable participation by small business...
Code of Federal Regulations, 2012 CFR
2012-10-01
... 49 Transportation 3 2012-10-01 2012-10-01 false Additional construction requirements for steel pipe using alternative maximum allowable operating pressure. 192.328 Section 192.328 Transportation... Lines and Mains § 192.328 Additional construction requirements for steel pipe using alternative maximum...
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 3 2014-10-01 2014-10-01 false Additional construction requirements for steel pipe using alternative maximum allowable operating pressure. 192.328 Section 192.328 Transportation... Lines and Mains § 192.328 Additional construction requirements for steel pipe using alternative maximum...
Separation-Compliant, Optimal Routing and Control of Scheduled Arrivals in a Terminal Airspace
NASA Technical Reports Server (NTRS)
Sadovsky, Alexander V.; Davis, Damek; Isaacson, Douglas R.
2013-01-01
We address the problem of navigating a set (fleet) of aircraft in an aerial route network so as to bring each aircraft to its destination at a specified time and with minimal distance separation assured between all aircraft at all times. The speed range, initial position, required destination, and required time of arrival at destination for each aircraft are assumed provided. Each aircraft's movement is governed by a controlled differential equation (state equation). The problem consists in choosing for each aircraft a path in the route network and a control strategy so as to meet the constraints and reach the destination at the required time. The main contribution of the paper is a model that allows this problem to be recast as a decoupled collection of problems in classical optimal control and is easily generalized to the case when inertia cannot be neglected. Some qualitative insight into solution behavior is obtained using the Pontryagin Maximum Principle. Sample numerical solutions are computed using a numerical optimal control solver. The proposed model is a first step toward increasing the fidelity of continuous time control models of air traffic in a terminal airspace. The Pontryagin Maximum Principle implies the polygonal shape of those portions of the state trajectories away from those states in which one or more aircraft pairs are at minimal separation. The model also confirms the intuition that, the narrower the allowed speed ranges of the aircraft, the smaller the space of optimal solutions, and that an instance of the optimal control problem may not have a solution at all (i.e., no control strategy exists that meets the separation requirement and other constraints).
Cytidine 5'-diphosphate reductase activity in phytohemagglutinin stimulated human lymphocytes.
Tyrsted, G; Gamulin, V
1979-01-01
The optimal conditions and the effect of deoxyribonucleoside triphosphates were determined for CDP reductase activity in PHA-stimulated lymphocytes. The enzymatic reaction showed an absolute requirement for ATP. In the absence of ATP, only dATP showed a minor stimulation of the reduction of CDP to dCDP. During transformation the CDP reductase activity reached a maximum at the same time as the four deoxyribonucleoside triphosphate pools, corresponding to mid S-phase at about 50 h after PHA addition. The DNA polymerase activity reached a maximum at 57 h. PMID:424294
Grein, Tanja A; Loewe, Daniel; Dieken, Hauke; Salzig, Denise; Weidner, Tobias; Czermak, Peter
2018-05-01
Oncolytic viruses offer new hope to millions of patients with incurable cancer. One promising class of oncolytic viruses is Measles virus, but its broad administration to cancer patients is currently hampered by the inability to produce the large amounts of virus needed for treatment (10^10-10^12 virus particles per dose). Measles virus is unstable, leading to very low virus titers during production. The time of infection and time of harvest are therefore critical parameters in a Measles virus production process, and their optimization requires an accurate online monitoring system. We integrated a probe based on dielectric spectroscopy (DS) into a stirred tank reactor to characterize the Measles virus production process in adherently growing Vero cells. We found that DS could be used to monitor cell adhesion on the microcarrier and that the optimal virus harvest time correlated with the global maximum of the permittivity signal. In 16 independent bioreactor runs, the maximum Measles virus titer was achieved approximately 40 hr after the permittivity maximum. Compared to an uncontrolled Measles virus production process, the integration of DS increased the maximum virus concentration by more than three orders of magnitude. This was sufficient to achieve an active Measles virus concentration of > 10^10 TCID50 ml^-1. © 2017 Wiley Periodicals, Inc.
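A minimal sketch of the harvest-scheduling rule implied above: locate the global permittivity maximum in the recorded signal and add the roughly 40-hour lag reported in the abstract. The function name and the offline (non-streaming) formulation are illustrative assumptions; a real controller would operate on streaming probe data.

```python
import numpy as np

def schedule_harvest(times_h, permittivity, lag_h=40.0):
    """Return the suggested harvest time: the time of the global permittivity
    maximum plus the reported ~40 h lag to peak infectious titer."""
    times_h = np.asarray(times_h, dtype=float)
    permittivity = np.asarray(permittivity, dtype=float)
    t_max = times_h[int(np.argmax(permittivity))]
    return t_max + lag_h
```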
Preliminary Cost Benefit Assessment of Systems for Detection of Hazardous Weather. Volume I,
1981-07-01
not be sufficient for adequate stream flow forecasting, it has important potential for real-time flash flood warning. This was illustrated by the 1977...provide a finer spatial resolution of the gridded data. See Table 9. The results of a demonstration of the real-time capabilities of a radar-man system ...detailed real-time measurement capabilities and scope for quantitative forecasting is most likely to provide the degree of lead time required if maximum
Code of Federal Regulations, 2011 CFR
2011-01-01
... 0.2 nautical miles; (iv) The aircraft's SDA must be 2; and (v) The aircraft's SIL must be 3. (2... geometric position no later than 2.0 seconds from the time of measurement of the position to the time of transmission. (2) Within the 2.0 total latency allocation, a maximum of 0.6 seconds can be uncompensated...
NASA Technical Reports Server (NTRS)
Baumeister, K. J.
1979-01-01
A time-dependent numerical formulation was derived for sound propagation in a two-dimensional straight soft-walled duct in the absence of mean flow. The time-dependent governing acoustic-difference equations and boundary conditions were developed along with the maximum stable time increment. Example calculations were presented for sound attenuation in hard- and soft-wall ducts. The time-dependent analysis was found to be superior to the conventional steady numerical analysis because of much shorter solution times and the elimination of matrix storage requirements.
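The abstract does not reproduce the stability limit itself. For reference, under the assumption of a standard explicit central-difference scheme for the two-dimensional wave equation (one plausible reading of "maximum stable time increment" here, not necessarily the paper's exact bound), the time step must satisfy

```latex
\Delta t \;\le\; \frac{1}{c\,\sqrt{\dfrac{1}{\Delta x^{2}} + \dfrac{1}{\Delta y^{2}}}}
```

where c is the speed of sound and Δx, Δy are the spatial mesh spacings.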
Stamatakis, Alexandros
2006-11-01
RAxML-VI-HPC (randomized axelerated maximum likelihood for high performance computing) is a sequential and parallel program for inference of large phylogenies with maximum likelihood (ML). Low-level technical optimizations, a modification of the search algorithm, and the use of the GTR+CAT approximation as replacement for GTR+Gamma yield a program that is between 2.7 and 52 times faster than the previous version of RAxML. A large-scale performance comparison with GARLI, PHYML, IQPNNI and MrBayes on real data containing 1000 up to 6722 taxa shows that RAxML requires at least 5.6 times less main memory and yields better trees in similar times than the best competing program (GARLI) on datasets up to 2500 taxa. On datasets > or =4000 taxa it also runs 2-3 times faster than GARLI. RAxML has been parallelized with MPI to conduct parallel multiple bootstraps and inferences on distinct starting trees. The program has been used to compute ML trees on two of the largest alignments to date containing 25,057 (1463 bp) and 2182 (51,089 bp) taxa, respectively. icwww.epfl.ch/~stamatak
Seed crop frequency in northeastern Wisconsin
Richard M. Godman; Gilbert A. Mattson
1992-01-01
Knowing the frequency of good seed crops is important in regenerating northern hardwood species, particularly those that require site preparation and special cutting methods. It is also desirable to know the maximum time that might be expected between poor crops to help schedule silvicultural treatment or supplemental seeding.
46 CFR 111.52-3 - Systems below 1500 kilowatts.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING ELECTRIC SYSTEMS-GENERAL REQUIREMENTS Calculation of Short-Circuit Currents § 111.52-3 Systems below 1500 kilowatts. The... maximum short-circuit current of a direct current system must be assumed to be 10 times the aggregate...
46 CFR 111.52-3 - Systems below 1500 kilowatts.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING ELECTRIC SYSTEMS-GENERAL REQUIREMENTS Calculation of Short-Circuit Currents § 111.52-3 Systems below 1500 kilowatts. The... maximum short-circuit current of a direct current system must be assumed to be 10 times the aggregate...
Breuckmann, F; Remberg, F; Böse, D; Lichtenberg, M; Kümpers, P; Pavenstädt, H; Waltenberger, J; Fischer, D
2016-03-01
This study aimed to analyze guideline adherence in the timing of invasive management for myocardial infarction without persistent ST-segment elevation (NSTEMI) in two exemplary German centers, comparing an urban university maximum care facility and a rural regional primary care facility. All patients diagnosed as having NSTEMI during 2013 were retrospectively enrolled in two centers: (1) site I, a maximum care center in an urban university setting, and (2) site II, a primary care center in a rural regional care setting. Data acquisition included time intervals from admission to invasive management, risk criteria, rate of intervention, and medical therapy. The median time from admission to coronary angiography was 12.0 h (site I) or 17.5 h (site II; p = 0.17). Guideline-adherent timing was achieved in 88.1% (site I) or 82.9% (site II; p = 0.18) of cases. Intervention rates were high in both sites (site I, 75.5% vs. site II, 75.3%; p = 0.85). Adherence to recommendations of medical therapy was high and comparable between the two sites. In NSTEMI or high-risk acute coronary syndromes without persistent ST-segment elevation, guideline-adherent timing of invasive management was achieved in about 85% of cases, and was comparable between urban maximum and rural primary care settings. Validation by the German Chest Pain Unit Registry including outcome analysis is required.
Efficiency and Safety: The Best Time to Valve a Plaster Cast.
Steiner, Samuel R H; Gendi, Kirollos; Halanski, Matthew A; Noonan, Kenneth J
2018-04-18
The act of applying, univalving, and spreading a plaster cast to accommodate swelling is commonly performed; however, cast saws can cause thermal and/or abrasive injury to the patient. This study aims to identify the optimal time to valve a plaster cast so as to reduce the risk of cast-saw injury and increase spreading efficiency. Plaster casts were applied to life-sized pediatric models and were univalved at set-times of 5, 8, 12, or 25 minutes. Outcome measures included average and maximum force applied during univalving, blade-to-skin touches, cut time, force needed to spread, number of spread attempts, spread completeness, spread distance, saw blade temperature, and skin surface temperature. Casts allowed to set for ≥12 minutes had significantly fewer blade-to-skin touches compared with casts that set for <12 minutes (p < 0.001). For average and maximum saw blade force, no significant difference was observed between individual set-times. However, in a comparison of the shorter group (<12 minutes) and the longer group (≥12 minutes), the longer group had a higher average force (p = 0.009) but a lower maximum force (p = 0.036). The average temperature of the saw blade did not vary between groups. The maximum force needed to "pop," or spread, the cast was greater for the 5-minute and 8-minute set-times. Despite requiring more force to spread the cast, 0% of attempts at 5 minutes and 54% of attempts at 8 minutes were successful in completely spreading the cast, whereas 100% of attempts at 12 and 25 minutes were successful. The spread distance was greatest for the 12-minute set-time at 5.7 mm. Allowing casts to set for 12 minutes is associated with decreased blade-to-skin contact, less maximum force used with the saw blade, and a more effective spread. Adherence to the 12-minute interval could allow for fewer cast-saw injuries and more effective spreading.
Space station microscopy: Beyond the box
NASA Technical Reports Server (NTRS)
Hunter, N. R.; Pierson, Duane L.; Mishra, S. K.
1993-01-01
Microscopy aboard Space Station Freedom poses many unique challenges for in-flight investigations. Disciplines such as material processing, plant and animal research, human research, environmental monitoring, health care, and biological processing have diverse microscope requirements. The typical microscope not only does not meet the comprehensive needs of these varied users, but also tends to require excessive crew time. To assess user requirements, a comprehensive survey was conducted among investigators with experiments requiring microscopy. The survey examined requirements such as light sources, objectives, stages, focusing systems, eye pieces, video accessories, etc. The results of this survey and the application of an Intelligent Microscope Imaging System (IMIS) may address these demands for efficient microscopy service in space. The proposed IMIS can accommodate multiple users with varied requirements, operate in several modes, reduce crew time needed for experiments, and take maximum advantage of the restrictive data/instruction transmission environment on Freedom.
Code of Federal Regulations, 2012 CFR
2012-10-01
... vent, maximum trap size, and ghost panel requirements. 697.21 Section 697.21 Wildlife and Fisheries... identification and marking, escape vent, maximum trap size, and ghost panel requirements. (a) Gear identification... Administrator finds to be consistent with paragraph (c) of this section. (d) Ghost panel. (1) Lobster traps not...
Code of Federal Regulations, 2013 CFR
2013-10-01
... vent, maximum trap size, and ghost panel requirements. 697.21 Section 697.21 Wildlife and Fisheries... identification and marking, escape vent, maximum trap size, and ghost panel requirements. (a) Gear identification... Administrator finds to be consistent with paragraph (c) of this section. (d) Ghost panel. (1) Lobster traps not...
Code of Federal Regulations, 2014 CFR
2014-10-01
... vent, maximum trap size, and ghost panel requirements. 697.21 Section 697.21 Wildlife and Fisheries... identification and marking, escape vent, maximum trap size, and ghost panel requirements. (a) Gear identification... Administrator finds to be consistent with paragraph (c) of this section. (d) Ghost panel. (1) Lobster traps not...
TIME-TAG mode of STIS observations using the MAMA detectors
NASA Astrophysics Data System (ADS)
Sahu, Kailash; Danks, Anthony; Baum, Stefi; Balzano, Vicki; Kraemer, Steve; Kutina, Ray; Sears, William
1995-04-01
We summarize the time-tag mode of STIS observations using the MAMA detectors, both in imaging and spectroscopic modes. After a brief outline of the MAMA detector characteristics and the astronomical applications of the time-tag mode, the general philosophy and the details of the data management strategy are described in detail. The GO specifications and the consequent different modes of data transfer strategy are outlined. Restrictions on maximum data rates, integration times, and BUFFER-TIME requirements are explained. A few cases where the subarray option would be useful are outlined.
46 CFR 111.52-3 - Systems below 1500 kilowatts.
Code of Federal Regulations, 2010 CFR
2010-10-01
...-GENERAL REQUIREMENTS Calculation of Short-Circuit Currents § 111.52-3 Systems below 1500 kilowatts. The following short-circuit assumptions must be made for a system with an aggregate generating capacity below... maximum short-circuit current of a direct current system must be assumed to be 10 times the aggregate...
40 CFR 63.11583 - What are my monitoring requirements?
Code of Federal Regulations, 2012 CFR
2012-07-01
... applicable, and the following: (1) Locate the pressure sensor(s) in, or as close as possible to, a position... comparing the sensor output to redundant sensor output. (4) Conduct calibration checks any time the sensor exceeds the manufacturer's specified maximum operating pressure range or install a new pressure sensor. (5...
30 CFR 36.25 - Engine exhaust system.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Engine exhaust system. 36.25 Section 36.25... EQUIPMENT Construction and Design Requirements § 36.25 Engine exhaust system. (a) Construction. The exhaust system of the engine shall be designed to withstand an internal pressure equal to 4 times the maximum...
On the maximum energy of shock-accelerated cosmic rays at ultra-relativistic shocks
NASA Astrophysics Data System (ADS)
Reville, B.; Bell, A. R.
2014-04-01
The maximum energy to which cosmic rays can be accelerated at weakly magnetised ultra-relativistic shocks is investigated. We demonstrate that for such shocks, in which the scattering of energetic particles is mediated exclusively by ion skin-depth scale structures, as might be expected for a Weibel-mediated shock, there is an intrinsic limit on the maximum energy to which particles can be accelerated. This maximum energy is determined from the requirement that particles must be isotropized in the downstream plasma frame before the mean field transports them far downstream, and falls considerably short of what is required to produce ultra-high-energy cosmic rays. To circumvent this limit, a highly disorganized field is required on larger scales. The growth of cosmic ray-induced instabilities on wavelengths much longer than the ion-plasma skin depth, both upstream and downstream of the shock, is considered. While these instabilities may play an important role in magnetic field amplification at relativistic shocks, on scales comparable to the gyroradius of the most energetic particles the calculated growth rates are too low for the instabilities to modify the scattering in the time available. Since strong modification is a necessary condition for particles in the downstream region to re-cross the shock, in the absence of an alternative scattering mechanism, these results imply that acceleration to higher energies is ruled out. If weakly magnetized ultra-relativistic shocks are disfavoured as high-energy particle accelerators in general, the search for potential sources of ultra-high-energy cosmic rays can be narrowed.
Shield, Kevin D; Gmel, Gerrit; Gmel, Gerhard; Mäkelä, Pia; Probst, Charlotte; Room, Robin; Rehm, Jürgen
2017-09-01
Low-risk alcohol drinking guidelines require a scientific basis that extends beyond individual or group judgements of risk. Life-time mortality risks, judged against established thresholds for acceptable risk, may provide such a basis for guidelines. Therefore, the aim of this study was to estimate alcohol mortality risks for seven European countries based on different average daily alcohol consumption amounts. The maximum acceptable voluntary premature mortality risk was determined to be one in 1000, with sensitivity analyses of one in 100. Life-time mortality risks for different alcohol consumption levels were estimated by combining disease-specific relative risk and mortality data for seven European countries with different drinking patterns (Estonia, Finland, Germany, Hungary, Ireland, Italy and Poland). Alcohol consumption data were obtained from the Global Information System on Alcohol and Health, relative risk data from meta-analyses and mortality information from the World Health Organization. The variation in the life-time mortality risk at drinking levels relevant for setting guidelines was less than that observed at high drinking levels. In Europe, the percentage of adults consuming above a risk threshold of one in 1000 ranged from 20.6 to 32.9% for women and from 35.4 to 54.0% for men. Life-time risk of premature mortality under current guideline maximums ranged from 2.5 to 44.8 deaths per 1000 women in Finland and Estonia, respectively, and from 2.9 to 35.8 deaths per 1000 men in Finland and Estonia, respectively. If based upon an acceptable risk of one in 1000, guideline maximums for Europe should be 8-10 g/day for women and 15-20 g/day for men. If low-risk alcohol guidelines were based on an acceptable risk of one in 1000 premature deaths, then maximums for Europe should be 8-10 g/day for women and 15-20 g/day for men, and some of the current European guidelines would require downward revision. © 2017 Society for the Study of Addiction.
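A heavily simplified sketch of the kind of life-time risk arithmetic described above: a survival-weighted sum of the excess (alcohol-attributable) mortality rate, compared against the one-in-1000 threshold. The age range, rates, and relative risk below are hypothetical placeholders, not the study's inputs, and the study's actual method combines disease-specific relative risks across many causes and countries.

```python
import numpy as np

# Hypothetical inputs for one consumption level (all values illustrative):
# q[x]  = annual all-cause mortality probability at age x (for survival weighting)
# m0[x] = annual mortality rate from alcohol-related causes in non-drinkers
# rr    = relative risk of those causes at the consumption level being assessed
ages = np.arange(20, 70)                 # "premature" deaths only, by assumption
q = np.full(ages.size, 0.004)
m0 = np.full(ages.size, 0.0004)
rr = 1.05

survival = np.cumprod(np.concatenate(([1.0], 1 - q[:-1])))   # P(alive at age x)
lifetime_attributable_risk = np.sum(survival * m0 * (rr - 1.0))
acceptable = lifetime_attributable_risk <= 1.0 / 1000.0
print(lifetime_attributable_risk, acceptable)
```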
Coding for Communication Channels with Dead-Time Constraints
NASA Technical Reports Server (NTRS)
Moision, Bruce; Hamkins, Jon
2004-01-01
Coding schemes have been designed and investigated specifically for optical and electronic data-communication channels in which information is conveyed via pulse-position modulation (PPM) subject to dead-time constraints. These schemes involve the use of error-correcting codes concatenated with codes denoted constrained codes. These codes are decoded using an iterative method. In pulse-position modulation, time is partitioned into frames of M slots of equal duration. Each frame contains one pulsed slot (all others are non-pulsed). For a given channel, the dead-time constraints are defined as a maximum and a minimum on the allowable time between pulses. For example, if a Q-switched laser is used to transmit the pulses, then the minimum allowable dead time is the time needed to recharge the laser for the next pulse. In the case of bits recorded on a magnetic medium, the minimum allowable time between pulses depends on the recording/playback speed and the minimum distance between pulses needed to prevent interference between adjacent bits during readout. The maximum allowable dead time for a given channel is the maximum time for which it is possible to satisfy the requirement to synchronize slots. In mathematical shorthand, the dead-time constraints for a given channel are represented by the pair of integers (d, k), where d is the minimum allowable number of zeroes between ones and k is the maximum allowable number of zeroes between ones. A system of the type to which the present schemes apply is represented by a binary-input, real-valued-output channel model illustrated in the figure. At the transmitting end, information bits are first encoded by use of an error-correcting code, then further encoded by use of a constrained code. Several constrained codes for channels subject to constraints of (d, infinity) have been investigated theoretically and computationally. The baseline codes chosen for purposes of comparison were simple PPM codes characterized by M-slot PPM frames separated by d-slot dead times.
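The (d, k) constraint named above has a standard Shannon capacity given by log2 of the largest eigenvalue of the constraint graph's adjacency matrix. The sketch below computes that capacity for a finite k; it is a generic illustration of the constraint, not the paper's concatenated error-correcting/constrained codes, and the finite k is an assumption since the paper also studies (d, infinity) channels.

```python
import numpy as np

def rll_capacity(d, k):
    """Shannon capacity (bits per slot) of a (d, k) run-length constraint:
    at least d and at most k zeroes between consecutive ones."""
    n = k + 1                      # state = number of zeroes since the last one
    A = np.zeros((n, n))
    for i in range(n):
        if i < k:
            A[i, i + 1] = 1.0      # emit a 0: run of zeroes grows by one
        if i >= d:
            A[i, 0] = 1.0          # emit a 1: run length resets
    lam = max(abs(np.linalg.eigvals(A)))   # Perron (largest) eigenvalue
    return float(np.log2(lam))

print(rll_capacity(1, 3))   # classic (1,3) constraint, ~0.55 bits/slot
print(rll_capacity(2, 10))  # a wider dead-time window
```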
An online detection system for aggregate sizes and shapes based on digital image processing
NASA Astrophysics Data System (ADS)
Yang, Jianhong; Chen, Sijia
2017-02-01
Traditional aggregate size measuring methods are time-consuming, taxing, and do not deliver online measurements. A new online detection system for determining aggregate size and shape, based on a digital camera with a charge-coupled device and subsequent digital image processing, has been developed to overcome these problems. The system captures images of aggregates while falling and flat lying. Using these data, the particle size and shape distribution can be obtained in real time. Here, we calibrate this method using standard globules. Our experiments show that the maximum particle size distribution error was only 3 wt%, while the maximum particle shape distribution error was only 2 wt% for data derived from falling aggregates, having good dispersion. In contrast, the data for flat-lying aggregates had a maximum particle size distribution error of 12 wt%, and a maximum particle shape distribution error of 10 wt%; their accuracy was clearly lower than for falling aggregates. However, they performed well for single-graded aggregates, and did not require a dispersion device. Our system is low-cost and easy to install. It can successfully achieve online detection of aggregate size and shape with good reliability, and it has great potential for aggregate quality assurance.
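A minimal sketch of the image-processing side, using OpenCV rather than the authors' system: threshold a grayscale frame, extract particle contours, and record a size (major axis of the minimum-area rectangle) and a simple shape index per particle. The file name, calibration factor, noise cutoff, and the assumption that particles appear brighter than the background are all placeholders.

```python
import cv2          # OpenCV 4.x return signatures assumed
import numpy as np

img = cv2.imread("aggregates.png", cv2.IMREAD_GRAYSCALE)   # hypothetical frame
mm_per_px = 0.10                                            # assumed calibration

# Otsu threshold; invert first if particles are darker than the background.
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

sizes_mm, shape_index = [], []
for c in contours:
    if cv2.contourArea(c) < 50:          # ignore noise specks (assumed cutoff)
        continue
    (_, _), (w, h), _ = cv2.minAreaRect(c)
    major, minor = max(w, h), min(w, h)
    sizes_mm.append(major * mm_per_px)    # particle "size" = major axis length
    shape_index.append(minor / major)     # elongation ratio as a crude shape metric

print(f"{len(sizes_mm)} particles, mean size {np.mean(sizes_mm):.2f} mm")
```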
Temporal properties of the myopic response to defocus in the guinea pig.
Leotta, Amelia J; Bowrey, Hannah E; Zeng, Guang; McFadden, Sally A
2013-05-01
Hyperopic defocus induces myopia in all species tested and is believed to underlie the progression of human myopia. We determined the temporal properties of the effects of hyperopic defocus in a mammalian eye. In Experiment 1, the rise and decay time of the responses elicited by hyperopic defocus were calculated in 111 guinea pigs by giving repeated episodes of monocular -4 D lens wear (from 5 to 6 days of age for 12 days) interspersed with various dark intervals. In Experiment 2, the decay time constant was calculated in 152 guinea pigs when repeated periods of monocular -5 D lens-wear (from 4 days of age for 7 days) were interrupted with free viewing periods of different lengths. At the end of the lens-wear period, ocular parameters were measured and time constants were calculated relative to the maximum response induced by continuous lens wear. When hyperopic defocus was experienced with dark intervals between episodes, the time required to induce 50% of the maximum achievable myopia and ocular elongation was at most 30 min. Saturated 1 h episodes took at least 22 h for refractive error and 31 h for ocular length, to decay to 50% of the maximum response. However, the decay was an order of magnitude faster when hyperopic defocus episodes were interrupted with a daily free viewing period, with only 36 min required to reduce relative myopia and ocular elongation by 50%. Hyperopic defocus causes myopia with brief exposures and is very long lasting in the absence of competing signals. However, this myopic response rapidly decays if interrupted by periods of 'normal viewing' at least 30 min in length, wherein ocular growth appears to be guided preferentially by the least amount of hyperopic defocus experienced. Ophthalmic & Physiological Optics © 2013 The College of Optometrists.
The use of artificial intelligence techniques to improve the multiple payload integration process
NASA Technical Reports Server (NTRS)
Cutts, Dannie E.; Widgren, Brian K.
1992-01-01
A maximum return of science and products with a minimum expenditure of time and resources is a major goal of mission payload integration. A critical component, then, in successful mission payload integration is the acquisition and analysis of experiment requirements from the principal investigator and payload element developer teams. One effort to use artificial intelligence techniques to improve the acquisition and analysis of experiment requirements within the payload integration process is described.
Chaudhuri, Shomesh E; Merfeld, Daniel M
2013-03-01
Psychophysics generally relies on estimating a subject's ability to perform a specific task as a function of an observed stimulus. For threshold studies, the fitted functions are called psychometric functions. While fitting psychometric functions to data acquired using adaptive sampling procedures (e.g., "staircase" procedures), investigators have encountered a bias in the spread ("slope" or "threshold") parameter that has been attributed to the serial dependency of the adaptive data. Using simulations, we confirm this bias for cumulative Gaussian parametric maximum likelihood fits on data collected via adaptive sampling procedures, and then present a bias-reduced maximum likelihood fit that substantially reduces the bias without reducing the precision of the spread parameter estimate and without reducing the accuracy or precision of the other fit parameters. As a separate topic, we explain how to implement this bias reduction technique using generalized linear model fits as well as other numeric maximum likelihood techniques such as the Nelder-Mead simplex. We then provide a comparison of the iterative bootstrap and observed information matrix techniques for estimating parameter fit variance from adaptive sampling procedure data sets. The iterative bootstrap technique is shown to be slightly more accurate; however, the observed information technique executes in a small fraction (0.005 %) of the time required by the iterative bootstrap technique, which is an advantage when a real-time estimate of parameter fit variance is required.
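For reference, the sketch below is the standard (uncorrected) maximum-likelihood fit of a cumulative-Gaussian psychometric function that the paper starts from; the bias-reduction penalty, lapse/guess rates, and the adaptive-sampling simulations are not reproduced, and the data are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_likelihood(params, x, k, n):
    """Negative log-likelihood of a cumulative-Gaussian psychometric function.
    x: stimulus levels, k: positive-response counts, n: trials per level."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)             # keeps the spread parameter positive
    p = norm.cdf(x, loc=mu, scale=sigma)
    p = np.clip(p, 1e-9, 1 - 1e-9)        # guard against log(0)
    return -np.sum(k * np.log(p) + (n - k) * np.log(1 - p))

# Hypothetical (non-adaptive) data for illustration only.
x = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
k = np.array([1, 3, 6, 9, 12, 14])
n = np.full_like(k, 15)

fit = minimize(neg_log_likelihood, x0=np.array([1.5, 0.0]),
               args=(x, k, n), method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(f"threshold (mu) = {mu_hat:.2f}, spread (sigma) = {sigma_hat:.2f}")
```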
How long do centenarians survive? Life expectancy and maximum lifespan.
Modig, K; Andersson, T; Vaupel, J; Rau, R; Ahlbom, A
2017-08-01
The purpose of this study was to explore the pattern of mortality above the age of 100 years. In particular, we aimed to examine whether Scandinavian data support the theory that mortality reaches a plateau at particularly old ages. Whether the maximum length of life increases with time was also investigated. The analyses were based on individual level data on all Swedish and Danish centenarians born from 1870 to 1901; in total 3006 men and 10 963 women were included. Birth cohort-specific probabilities of dying were calculated. Exact ages were used for calculations of maximum length of life. Whether maximum age changed over time was analysed taking into account increases in cohort size. The results confirm that there has not been any improvement in mortality amongst centenarians in the past 30 years and that the current rise in life expectancy is driven by reductions in mortality below the age of 100 years. The death risks seem to reach a plateau of around 50% at the age 103 years for men and 107 years for women. Despite the rising life expectancy, the maximum age does not appear to increase, in particular after accounting for the increasing number of individuals of advanced age. Mortality amongst centenarians is not changing despite improvements at younger ages. An extension of the maximum lifespan and a sizeable extension of life expectancy both require reductions in mortality above the age of 100 years. © 2017 The Association for the Publication of the Journal of Internal Medicine.
NASA Astrophysics Data System (ADS)
Degaudenzi, Riccardo; Vanghi, Vieri
1994-02-01
An all-digital Trellis-Coded 8PSK (TC-8PSK) demodulator well suited for VLSI implementation, including maximum likelihood estimation decision-directed (MLE-DD) carrier phase and clock timing recovery, is introduced and analyzed. By simply removing the trellis decoder the demodulator can efficiently cope with uncoded 8PSK signals. The proposed MLE-DD synchronization algorithm requires one sample per symbol for the phase loop and two samples per symbol for the timing loop. The joint phase and timing discriminator characteristics are analytically derived and numerical results checked by means of computer simulations. An approximate expression for steady-state carrier phase and clock timing mean square error has been derived and successfully checked against simulation findings. Synchronizer deviation from the Cramer-Rao bound is also discussed. Mean acquisition time for the digital synchronizer has also been computed and checked using the Monte Carlo simulation technique. Finally, TC-8PSK digital demodulator performance in terms of bit error rate and mean time to lose lock, including digital interpolators and synchronization loops, is presented.
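A stripped-down sketch of a decision-directed carrier-phase loop for 8PSK (one sample per symbol), shown only to illustrate the "DD" idea; the paper's joint MLE phase/timing discriminator, loop gains, interpolators, and trellis decoder are not reproduced here.

```python
import numpy as np

def dd_phase_tracker(symbols, mu=0.01):
    """First-order decision-directed carrier-phase tracking for 8PSK.
    symbols: complex baseband samples, one per symbol; mu: assumed loop gain."""
    symbols = np.asarray(symbols, dtype=complex)
    constellation = np.exp(1j * 2 * np.pi * np.arange(8) / 8)
    phase = 0.0
    derotated = np.empty_like(symbols)
    for i, r in enumerate(symbols):
        z = r * np.exp(-1j * phase)                                # apply current estimate
        d = constellation[np.argmin(np.abs(constellation - z))]   # hard 8PSK decision
        err = np.angle(z * np.conj(d))                             # DD phase error
        phase += mu * err                                          # first-order loop update
        derotated[i] = z
    return derotated, phase
```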
Component Prioritization Schema for Achieving Maximum Time and Cost Benefits from Software Testing
NASA Astrophysics Data System (ADS)
Srivastava, Praveen Ranjan; Pareek, Deepak
Software testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. Defining the end of software testing is a crucial feature of any software development project. A premature release will involve risks like undetected bugs, the cost of fixing faults later, and discontented customers. Any software organization would want to achieve the maximum possible benefit from software testing with minimum resources. Testing time and cost need to be optimized for achieving a competitive edge in the market. In this paper, we propose a schema, called the Component Prioritization Schema (CPS), to achieve an effective and uniform prioritization of the software components. This schema serves as an extension to the Non-Homogeneous Poisson Process-based Cumulative Priority Model. We also introduce an approach for handling time-intensive versus cost-intensive projects.
NASA Astrophysics Data System (ADS)
Hwang, James Ho-Jin; Duran, Adam
2016-08-01
Most of the time, pyrotechnic shock design and test requirements for space systems are provided as a Shock Response Spectrum (SRS) without the input time history. Since the SRS does not describe the input or the environment, a decomposition method is used to obtain the source time history. The main objective of this paper is to develop a decomposition method producing input time histories that can satisfy the SRS requirement based on the pyrotechnic shock test data measured from a mechanical impact test apparatus. At the heart of this decomposition method is the statistical representation of the pyrotechnic shock test data measured from the MIT Lincoln Laboratory (LL) designed Universal Pyrotechnic Shock Simulator (UPSS). Each pyrotechnic shock test data set measured at the interface of a test unit has been analyzed to produce the temporal peak acceleration, Root Mean Square (RMS) acceleration, and the phase lag at each band center frequency. The maximum SRS of each filtered time history has been calculated to produce a relationship between the input and the response. Two new definitions are proposed as a result. The Peak Ratio (PR) is defined as the ratio between the maximum SRS and the temporal peak acceleration at each band center frequency. The ratio between the maximum SRS and the RMS acceleration is defined as the Energy Ratio (ER) at each band center frequency. Phase lag is estimated based on the time delay between the temporal peak acceleration at each band center frequency and the peak acceleration at the lowest band center frequency. This stochastic process has been applied to more than one hundred pyrotechnic shock test data sets to produce probabilistic definitions of the PR, ER, and the phase lag. The SRS is decomposed at each band center frequency using damped sinusoids, with the PR and the decays obtained by matching the ER of the damped sinusoids to the ER of the test data. The final step in this stochastic SRS decomposition process is the Monte Carlo (MC) simulation. The MC simulation identifies combinations of the PR and decays that can meet the SRS requirement at each band center frequency. Decomposed input time histories are produced by summing the converged damped sinusoids with the MC simulation of the phase lag distribution.
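A sketch of the two quantities the abstract defines, for a single band: compute the maximax absolute-acceleration SRS of a band-filtered acceleration history with a conventional SDOF sweep, then take PR = max SRS / temporal peak and ER = max SRS / RMS. The lsim-based SRS, the Butterworth band filter, and Q = 10 are generic choices rather than the MIT LL implementation, and the damped-sinusoid reconstruction and Monte Carlo steps are omitted.

```python
import numpy as np
from scipy import signal

def srs(accel, dt, freqs, Q=10.0):
    """Maximax absolute-acceleration shock response spectrum of a base
    acceleration history, via a generic SDOF sweep."""
    zeta = 1.0 / (2.0 * Q)
    t = np.arange(len(accel)) * dt
    out = np.empty(len(freqs))
    for i, fn in enumerate(freqs):
        wn = 2.0 * np.pi * fn
        sdof = signal.lti([2 * zeta * wn, wn**2], [1.0, 2 * zeta * wn, wn**2])
        _, y, _ = signal.lsim(sdof, accel, t)   # absolute acceleration response
        out[i] = np.max(np.abs(y))
    return out

def peak_and_energy_ratio(accel, fs, f_lo, f_hi, freqs):
    """PR and ER for one band: band-filter the history, then ratio the maximum
    SRS to the temporal peak and to the RMS of the filtered history."""
    b, a = signal.butter(4, [f_lo, f_hi], btype="bandpass", fs=fs)
    banded = signal.filtfilt(b, a, accel)
    max_srs = np.max(srs(banded, 1.0 / fs, freqs))
    pr = max_srs / np.max(np.abs(banded))
    er = max_srs / np.sqrt(np.mean(banded**2))
    return pr, er
```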
Duration of shoot elongation in Scots pine varies within the crown and between years.
Schiestl-Aalto, Pauliina; Nikinmaa, Eero; Mäkelä, Annikki
2013-10-01
Shoot elongation in boreal and temperate trees typically follows a sigmoid pattern where the onset and cessation of growth are related to accumulated effective temperature (thermal time). Previous studies on leader shoots suggest that while the maximum daily growth rate depends on the availability of resources to the shoot, the duration of the growth period may be an adaptation to long-term temperature conditions. However, other results indicate that the growth period may be longer in faster growing lateral shoots with higher availability of resources. This study investigates the interactions between the rate of elongation and the duration of the growth period in units of thermal time in lateral shoots of Scots pine (Pinus sylvestris). Length development of 202 lateral shoots was measured approximately three times per week during seven growing seasons in 2-5 trees per year in a mature stand and in three trees during one growing season in a sapling stand. A dynamic shoot growth model was adapted for the analysis to determine (1) the maximum growth rate and (2) the thermal time reached at growth completion. The relationship between those two parameters and its variation between trees and years was analysed using linear mixed models. The shoots with higher maximum growth rate within a crown continued to grow for a longer period in any one year. Higher July-August temperature of the previous summer implied a higher requirement of thermal time for growth completion. The results provide evidence that the requirement of thermal time for completion of lateral shoot extension in Scots pine may interact with resource availability to the shoot both from year to year and among shoots in a crown each year. If growing season temperatures rise in the future, this will affect not only the rate of shoot growth but also its duration.
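The "thermal time" bookkeeping underlying such a model can be sketched as a simple degree-day accumulation; the base temperature of 0 °C and the daily time step here are assumptions for illustration, not the study's fitted values or its dynamic growth model.

```python
import numpy as np

def thermal_time(daily_mean_temp_c, t_base=0.0):
    """Accumulated effective temperature (degree-days) above a base temperature."""
    t = np.asarray(daily_mean_temp_c, dtype=float)
    return np.cumsum(np.maximum(t - t_base, 0.0))

def day_growth_completes(daily_mean_temp_c, thermal_time_requirement, t_base=0.0):
    """Index of the first day on which the accumulated thermal time meets the
    shoot's requirement (a fitted parameter in the study); None if never reached."""
    tt = thermal_time(daily_mean_temp_c, t_base)
    if tt[-1] < thermal_time_requirement:
        return None
    return int(np.argmax(tt >= thermal_time_requirement))
```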
Code of Federal Regulations, 2010 CFR
2010-10-01
... of Piping Systems § 61.15-1 Scope. In conducting hydrostatic tests on piping, the required test pressure shall be maintained for a sufficient length of time to permit an inspection to be made of all... establishing the maximum allowable working pressure of the system. [CGFR 68-82, 33 FR 18890, Dec. 18, 1968, as...
40 CFR 63.9921 - What are the installation, operation and maintenance requirements for my monitors?
Code of Federal Regulations, 2011 CFR
2011-07-01
...) For the pressure drop CPMS, you must: (i) Locate the pressure sensor(s) in or as close to a position... calibration quarterly and transducer calibration monthly. (v) Conduct calibration checks any time the sensor exceeds the manufacturer's specified maximum operating pressure range, or install a new pressure sensor...
40 CFR 63.9921 - What are the installation, operation and maintenance requirements for my monitors?
Code of Federal Regulations, 2014 CFR
2014-07-01
...) For the pressure drop CPMS, you must: (i) Locate the pressure sensor(s) in or as close to a position... calibration quarterly and transducer calibration monthly. (v) Conduct calibration checks any time the sensor exceeds the manufacturer's specified maximum operating pressure range, or install a new pressure sensor...
40 CFR 63.9921 - What are the installation, operation and maintenance requirements for my monitors?
Code of Federal Regulations, 2012 CFR
2012-07-01
...) For the pressure drop CPMS, you must: (i) Locate the pressure sensor(s) in or as close to a position... calibration quarterly and transducer calibration monthly. (v) Conduct calibration checks any time the sensor exceeds the manufacturer's specified maximum operating pressure range, or install a new pressure sensor...
40 CFR 63.9921 - What are the installation, operation and maintenance requirements for my monitors?
Code of Federal Regulations, 2013 CFR
2013-07-01
...) For the pressure drop CPMS, you must: (i) Locate the pressure sensor(s) in or as close to a position... calibration quarterly and transducer calibration monthly. (v) Conduct calibration checks any time the sensor exceeds the manufacturer's specified maximum operating pressure range, or install a new pressure sensor...
40 CFR 63.704 - Compliance and monitoring requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... nonregenerative carbon adsorber is used to comply with § 63.703(c)(1), the site-specific operating parameter value... compliance with § 63.703(c), (e)(1)(i), or (f)(1)(i), as appropriate. (5) For each nonregenerative carbon... site-specific operating parameter the carbon replacement time interval, as determined by the maximum...
40 CFR 63.704 - Compliance and monitoring requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... nonregenerative carbon adsorber is used to comply with § 63.703(c)(1), the site-specific operating parameter value... compliance with § 63.703(c), (e)(1)(i), or (f)(1)(i), as appropriate. (5) For each nonregenerative carbon... site-specific operating parameter the carbon replacement time interval, as determined by the maximum...
40 CFR 63.704 - Compliance and monitoring requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... nonregenerative carbon adsorber is used to comply with § 63.703(c)(1), the site-specific operating parameter value... compliance with § 63.703(c), (e)(1)(i), or (f)(1)(i), as appropriate. (5) For each nonregenerative carbon... site-specific operating parameter the carbon replacement time interval, as determined by the maximum...
40 CFR 63.704 - Compliance and monitoring requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... nonregenerative carbon adsorber is used to comply with § 63.703(c)(1), the site-specific operating parameter value... compliance with § 63.703(c), (e)(1)(i), or (f)(1)(i), as appropriate. (5) For each nonregenerative carbon... site-specific operating parameter the carbon replacement time interval, as determined by the maximum...
40 CFR 63.704 - Compliance and monitoring requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... nonregenerative carbon adsorber is used to comply with § 63.703(c)(1), the site-specific operating parameter value... compliance with § 63.703(c), (e)(1)(i), or (f)(1)(i), as appropriate. (5) For each nonregenerative carbon... site-specific operating parameter the carbon replacement time interval, as determined by the maximum...
18 CFR 131.50 - Reports of proposals received.
Code of Federal Regulations, 2010 CFR
2010-04-01
... revised, effective at the time of the next e-filing release during the Commission's next fiscal year. For..., including maximum life and average life of sinking fund issue; (e) Dividend or interest rate; (f) Call... the filing is required unless there is a request for privileged or protected treatment or the document...
NASA Technical Reports Server (NTRS)
Iida, H. T.
1966-01-01
Computational procedure reduces the numerical effort whenever the method of finite differences is used to solve ablation problems for which the surface recession is large relative to the initial slab thickness. The number of numerical operations required for a given maximum space mesh size is reduced.
47 CFR 15.709 - General technical requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... to the transmit antenna. If transmitting antennas of directional gain greater than 6 dBi are used, the maximum conducted output power shall be reduced by the amount in dB that the directional gain of... 100 kHz band during any time interval of continuous transmission: (i) Fixed devices: 12.2 dBm. (ii...
Nordio, Sara; Bernitsas, Evanthia; Meneghello, Francesca; Palmer, Katie; Stabile, Maria Rosaria; Dipietro, Laura; Di Stadio, Arianna
2018-04-21
Speech disorders are common in patients with Multiple Sclerosis (MS). They can be assessed with several methods, which are however expensive, complex, and not easily accessible to physicians during routine clinic visits. This study aimed at measuring maximum phonation times, maximum expiratory times, and articulation abilities scores in patients with MS compared to healthy subjects and at investigating if any of these parameters could be used as a measure of MS progression. 50 MS patients and 50 gender- and age-matched healthy controls were enrolled in the study. Maximum expiratory times and maximum phonation times were collected from both groups. Articulation abilities were evaluated using the articulation subtest from the Fussi assessment (dysarthria scores). MS patients were evaluated with the Expanded Disability Status Scale (EDSS). Correlations between EDSS scores and maximum expiratory times, maximum phonation times, and dysarthria scores were calculated. EDSS scores of MS patients ranged from 4.5 to 7.5. In MS patients, maximum expiratory times, maximum phonation times, and dysarthria scores were significantly altered compared to healthy controls. Moreover, the EDSS scores were correlated with the maximum expiratory times; the maximum expiratory times were correlated with the maximum phonation times, and the maximum phonation times were correlated with the dysarthria scores. As the expiratory times were significantly correlated with the EDSS scores, they could be used to measure the severity of MS and to monitor its progression. Copyright © 2018 Elsevier B.V. All rights reserved.
High quantum yield of the Egyptian blue family of infrared phosphors (MCuSi4O10, M = Ca, Sr, Ba)
NASA Astrophysics Data System (ADS)
Berdahl, Paul; Boocock, Simon K.; Chan, George C.-Y.; Chen, Sharon S.; Levinson, Ronnen M.; Zalich, Michael A.
2018-05-01
The alkaline earth copper tetra-silicates, blue pigments, are interesting infrared phosphors. The Ca, Sr, and Ba variants fluoresce in the near-infrared (NIR) at 909, 914, and 948 nm, respectively, with spectral widths on the order of 120 nm. The highest quantum yield ϕ reported thus far is ca. 10%. We use temperature measurements in sunlight to determine this parameter. The yield depends on the pigment loading (mass per unit area) ω with values approaching 100% as ω → 0 for the Ca and Sr variants. Although maximum quantum yield occurs near ω = 0, maximum fluorescence occurs near ω = 70 g m^-2, at which ϕ = 0.7. The better samples show fluorescence decay times in the range of 130 to 160 μs. The absorbing impurity CuO is often present. Good phosphor performance requires long fluorescence decay times and very low levels of parasitic absorption. The strong fluorescence enhances prospects for energy applications such as cooling of sunlit surfaces (to reduce air conditioning requirements) and luminescent solar concentrators.
Novel Maximum-based Timing Acquisition for Spread-Spectrum Communications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sibbetty, Taylor; Moradiz, Hussein; Farhang-Boroujeny, Behrouz
This paper proposes and analyzes a new packet detection and timing acquisition method for spread spectrum systems. The proposed method provides an enhancement over the typical thresholding techniques that have been proposed for direct sequence spread spectrum (DS-SS). The effective implementation of thresholding methods typically requires accurate knowledge of the received signal-to-noise ratio (SNR), which is particularly difficult to estimate in spread spectrum systems. Instead, we propose a method which utilizes a consistency metric of the location of maximum samples at the output of a filter matched to the spread spectrum waveform to achieve acquisition, and does not require knowledge of the received SNR. Through theoretical study, we show that the proposed method offers a low probability of missed detection over a large range of SNR with a corresponding probability of false alarm far lower than other methods. Computer simulations that corroborate our theoretical results are also presented. Although our work here has been motivated by our previous study of a filter bank multicarrier spread-spectrum (FB-MC-SS) system, the proposed method is applicable to DS-SS systems as well.
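A toy version of the maximum-location consistency idea described above: take the argmax of the matched-filter output in consecutive symbol-length windows and declare acquisition when those offsets agree. The window count, tolerance, and decision rule below are placeholders, not the paper's metric or its analyzed thresholds.

```python
import numpy as np

def acquire(mf_output, spread_len, n_windows=8, tol=1):
    """mf_output: output of the filter matched to the spreading waveform
    (at least n_windows * spread_len samples); spread_len: samples per symbol."""
    locs = []
    for i in range(n_windows):
        window = mf_output[i * spread_len:(i + 1) * spread_len]
        locs.append(int(np.argmax(np.abs(window))))   # peak offset in this window
    ref = locs[0]
    n_consistent = sum(abs(loc - ref) <= tol for loc in locs)
    detected = n_consistent >= n_windows - 1          # assumed consistency rule
    return detected, ref                               # ref = estimated timing offset
```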
NASA Astrophysics Data System (ADS)
Benfenati, Francesco; Beretta, Gian Paolo
2018-04-01
We show that to prove the Onsager relations using microscopic time reversibility, one necessarily has to make an ergodic hypothesis, or a hypothesis closely linked to it. This is true in all the proofs of the Onsager relations in the literature: from the original proof by Onsager, to more advanced proofs in the context of linear response theory and the theory of Markov processes, to the proof in the context of the kinetic theory of gases. The only three proofs that do not require any kind of ergodic hypothesis are based on additional hypotheses on the macroscopic evolution: Ziegler's maximum entropy production principle (MEPP), the principle of time reversal invariance of the entropy production, or the steepest entropy ascent principle (SEAP).
Aging properties of Kodak type 101 emulsions
NASA Technical Reports Server (NTRS)
Dohne, B.; Feldman, U.; Neupert, W.
1984-01-01
Aging tests for several batches of Kodak type 101 emulsion show that storage conditions significantly influence how well the film will maintain its sensitometric properties, with sensitivity and density increasing to a maximum during the initial aging period. Any further aging may result in higher fog levels and sensitivity loss. It is noted that storage in an environment free of photographically active compounds allows film property optimization, and that film batches with different sensitivities age differently. Emulsions with maximum 1700-A sensitivity are 2.5 times faster than those at the low end of the sensitivity scale. These sensitive emulsions exhibit significantly accelerated changes in aging properties. Their use in space applications requires careful consideration of time and temperature profiles, encouraging the use of less sensitive emulsions when the controllability of these factors is limited.
Temporal properties of compensation for positive and negative spectacle lenses in chicks.
Zhu, Xiaoying; Wallman, Josh
2009-01-01
Chicks' eyes rapidly compensate for defocus imposed by spectacle lenses by changing their rate of elongation and their choroidal thickness. Compensation may involve internal emmetropization signals that rise and become saturated during episodes of lens wear and decline between episodes. The time constants of these signals were measured indirectly by measuring the magnitude of lens compensation in refractive error and ocular dimensions as a function of the duration of episodes and the intervals between the episodes. First, in a study of how quickly the signals rose, chicks were subjected to episodes of lens-wear of various durations (darkness otherwise), and the duration required to cause a half-maximum effect (rise-time) was estimated. Second, in a study of how quickly the signals declined, various dark intervals were imposed between episodes of lens-wear, and the interval required to reduce the maximum effect by half (fall-time) was estimated. The rise-times for the rate of ocular elongation and choroidal thickness were approximately 3 minutes for positive and negative lenses. The fall-times had a broad range of time courses: Positive lenses caused an enduring inhibition of ocular elongation with a fall-time of 24 hours. In contrast, negative lenses caused a transient stimulation of ocular elongation with a fall-time of 0.4 hour. The effects of episodes of defocus rise rapidly with episode duration to an asymptote and decline between episodes, with the time course depending strongly on the sign of defocus and the ocular component. The complex etiology of human myopia may reflect these temporal properties.
Performance of Underwater Weldments
1990-09-05
gas or cathodic overprotection remains to be investigated. Subcritical crack propagation from corrosion fatigue must be considered. Crack propagation...toughness = .83 c. There is no redundancy so 1.8 times maximum stress or 1.0 times yield stress. Since the yield stress of the parent plate is being used...on the stress is required even though the stress will now be below yield strength in the parent plate. Since K is directly proportional to the stress
ANN based Real-Time Estimation of Power Generation of Different PV Module Types
NASA Astrophysics Data System (ADS)
Syafaruddin; Karatepe, Engin; Hiyama, Takashi
Distributed generation is expected to become more important in the future generation system. Utilities need to find solutions that help manage resources more efficiently. Effective smart grid solutions have been achieved by using real-time data to help refine and pinpoint inefficiencies for maintaining secure and reliable operating conditions. This paper proposes the application of an Artificial Neural Network (ANN) for the real-time estimation of the maximum power generation of PV modules of different technologies. An intelligent technique is required in this case because the relationship between the maximum power of PV modules and the open-circuit voltage and temperature is nonlinear and cannot easily be expressed by an analytical expression for each technology. The proposed ANN method uses input signals of open-circuit voltage and cell temperature instead of irradiance and ambient temperature to determine the estimated maximum power generation of PV modules. It is important for the utility to have the capability to perform this estimation for optimal operating points and diagnostic purposes that may be an early indicator of a need for maintenance and optimal energy management. The proposed method is verified accurately through a developed real-time simulator using daily irradiance and cell temperature changes.
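The following sketch (an assumed setup, not the model in the paper) shows the shape of such an estimator: a small feed-forward network that maps an open-circuit voltage and cell temperature pair to an estimated maximum power. The synthetic (Voc, Tcell, Pmax) relation used for training is a placeholder; a real application would train on measured records for each module technology.

```python
"""Minimal ANN regression sketch: (Voc, Tcell) -> estimated maximum power."""
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# Synthetic training data: Voc in volts, cell temperature in deg C.
voc = rng.uniform(30.0, 45.0, size=2000)
tcell = rng.uniform(10.0, 70.0, size=2000)
# Placeholder nonlinear relation standing in for the technology-dependent
# mapping that the abstract says cannot be written analytically.
pmax = (5.5 * voc - 0.4 * tcell + 0.02 * voc * tcell - 0.05 * voc ** 2
        + rng.normal(0.0, 1.0, size=voc.size))

X = np.column_stack([voc, tcell])
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=3000, random_state=0),
)
model.fit(X, pmax)

# Real-time use: one (Voc, Tcell) sample in, one Pmax estimate out.
print("estimated Pmax [W]:", float(model.predict([[38.0, 45.0]])[0]))
```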
Maximum relative speeds of living organisms: Why do bacteria perform as fast as ostriches?
NASA Astrophysics Data System (ADS)
Meyer-Vernet, Nicole; Rospars, Jean-Pierre
2016-12-01
Self-locomotion is central to animal behaviour and survival. It is generally analysed by focusing on preferred speeds and gaits under particular biological and physical constraints. In the present paper we focus instead on the maximum speed and we study its order-of-magnitude scaling with body size, from bacteria to the largest terrestrial and aquatic organisms. Using data for about 460 species of various taxonomic groups, we find a maximum relative speed of the order of magnitude of ten body lengths per second over a 10^20-fold mass range of running and swimming animals. This result implies a locomotor time scale of the order of one tenth of a second, virtually independent of body size, anatomy and locomotion style, whose ubiquity requires an explanation building on basic properties of motile organisms. From first-principle estimates, we relate this generic time scale to other basic biological properties, using in particular the recent generalisation of the muscle specific tension to molecular motors. Finally, we go a step further by relating this time scale to still more basic quantities, such as environmental conditions on Earth in addition to fundamental physical and chemical constants.
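A back-of-the-envelope check of this scaling (illustrative organism sizes, not the paper's 460-species data set) makes the size-independent time scale explicit: at ten body lengths per second, the time to travel one body length is one tenth of a second for a bacterium and for an ostrich alike.

```python
"""Worked arithmetic for the ~10 body lengths per second scaling."""
body_lengths_per_s = 10.0

for name, length_m in [("bacterium", 2e-6), ("ostrich", 2.0), ("large whale", 25.0)]:
    v = body_lengths_per_s * length_m      # absolute maximum speed, m/s
    tau = length_m / v                     # time to travel one body length, s
    # tau = 1 / body_lengths_per_s, so it is 0.1 s regardless of body size.
    print(f"{name:12s} L = {length_m:>8.2e} m   v ≈ {v:8.3g} m/s   "
          f"time per body length ≈ {tau:.2f} s")
```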
The LUX experiment - trigger and data acquisition systems
NASA Astrophysics Data System (ADS)
Druszkiewicz, Eryk
2013-04-01
The Large Underground Xenon (LUX) detector is a two-phase xenon time projection chamber designed to detect interactions of dark matter particles with the xenon nuclei. Signals from the detector PMTs are processed by custom-built analog electronics which provide properly shaped signals for the trigger and data acquisition (DAQ) systems. During calibrations, both systems must be able to handle high rates and have large dynamic ranges; during dark matter searches, maximum sensitivity requires low thresholds. The trigger system uses eight-channel 64-MHz digitizers (DDC-8) connected to a Trigger Builder (TB). The FPGA cores on the digitizers perform real-time pulse identification (discriminating between S1- and S2-like signals) and event localization. The TB uses hit patterns, hit maps, and maximum response detection to make trigger decisions, which are reached within a few microseconds after the occurrence of an event of interest. The DAQ system comprises commercial digitizers with customized firmware. Its real-time baseline suppression allows for a maximum event acquisition rate in excess of 1.5 kHz, which results in virtually no deadtime. The performance of the trigger and DAQ systems during the commissioning runs of LUX will be discussed.
Lift hysteresis at stall as an unsteady boundary-layer phenomenon
NASA Technical Reports Server (NTRS)
Moore, Franklin K
1956-01-01
Analysis of rotating stall of compressor blade rows requires specification of a dynamic lift curve for the airfoil section at or near stall, presumably including the effect of lift hysteresis. Consideration of the Magnus lift of a rotating cylinder suggests performing an unsteady boundary-layer calculation to find the movement of the separation points of an airfoil fixed in a stream of variable incidence. The consideration of the shedding of vorticity into the wake should yield an estimate of lift increment proportional to the time rate of change of angle of attack. This increment is the amplitude of the hysteresis loop. An approximate analysis is carried out according to the foregoing ideas for a 6:1 elliptic airfoil at the angle of attack for maximum lift. The assumptions of small perturbations from maximum lift are made, permitting neglect of distributed vorticity in the wake. The calculated hysteresis loop is counterclockwise. Finally, a discussion of the forms of hysteresis loops is presented; and, for small reduced frequency of oscillation, it is concluded that the concept of a viscous "time lag" is appropriate only for harmonic variations of angle of attack with time at mean conditions other than maximum lift.
Littoral transport in the surf zone elucidated by an Eulerian sediment tracer.
Duane, D.B.; James, W.R.
1980-01-01
An Eulerian, or time integration, sand tracer experiment was designed and carried out in the surf zone near Pt. Mugu, California on April 19, 1972. Data indicate that conditions of stationarity and finite boundaries required for proper application of Eulerian tracer theory exist for short time periods in the surf zone. Grain counts suggest the time required for tracer sand to attain equilibrium concentration is on the order of 30-60 minutes. Grain counts also indicate transport (discharge) was strongly dependent upon grain size, with the maximum rate occurring in the size 2.5-2.75 phi, decreasing to both finer and coarser sizes. The measured instantaneous transport was at the annual rate of 2.4 x 10^6 m^3/yr. - Authors
2015-12-01
response time requirements and in additional calibration requirements for DCFM that may create unexpected latency and latency jitter that can...manage the flight path of the aircraft. For more information about sensor correlation and fusion processes, the Air University New World Vistas ...request/reply actions. We specify its latency as a minimum and maximum of 300 ms. SADataServiceProtocol: an abstraction of the SA data service as a
McMahon, Ryan; Papiez, Lech; Rangaraj, Dharanipathy
2007-08-01
An algorithm is presented that allows for the control of multileaf collimation (MLC) leaves based entirely on real-time calculations of the intensity delivered over the target. The algorithm is capable of efficiently correcting generalized delivery errors without requiring the interruption of delivery (self-correcting trajectories), where a generalized delivery error represents anything that causes a discrepancy between the delivered and intended intensity profiles. The intensity actually delivered over the target is continually compared to its intended value. For each pair of leaves, these comparisons are used to guide the control of the following leaf and keep this discrepancy below a user-specified value. To demonstrate the basic principles of the algorithm, results of corrected delivery are shown for a leading leaf positional error during dynamic-MLC (DMLC) IMRT delivery over a rigid moving target. It is then shown that, with slight modifications, the algorithm can be used to track moving targets in real time. The primary results of this article indicate that the algorithm is capable of accurately delivering DMLC IMRT over a rigid moving target whose motion is (1) completely unknown prior to delivery and (2) not faster than the maximum MLC leaf velocity over extended periods of time. These capabilities are demonstrated for clinically derived intensity profiles and actual tumor motion data, including situations when the target moves in some instances faster than the maximum admissible MLC leaf velocity. The results show that using the algorithm while calculating the delivered intensity every 50 ms will provide a good level of accuracy when delivering IMRT over a rigid moving target translating along the direction of MLC leaf travel. When the maximum velocities of the MLC leaves and target were 4 and 4.2 cm/s, respectively, the resulting error in the two intensity profiles used was 0.1 +/- 3.1% and -0.5 +/- 2.8% relative to the maximum of the intensity profiles. For the same target motion, the error was shown to increase rapidly as (1) the maximum MLC leaf velocity was reduced below 75% of the maximum target velocity and (2) the system response time was increased.
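The toy sketch below (a greatly simplified 1-D stand-in, not the published algorithm, and with a static rather than moving target) illustrates the feedback principle described above: the trailing leaf is advanced from the fluence actually delivered rather than from a precomputed trajectory, so an injected leading-leaf positional error is not allowed to propagate into the final profile. The profile, dose rate, leaf speeds, and the injected error are arbitrary assumptions; the print statement reports the residual discrepancy between delivered and intended fluence.

```python
"""Toy 1-D 'self-correcting' leaf-pair delivery over a static target."""
import numpy as np

nx = 100                                             # positions across the target
x = np.arange(nx)
intended = 1.0 + 0.3 * np.sin(2 * np.pi * x / nx)    # intended fluence profile
dose_rate = 0.1                                      # fluence per step while exposed
v_lead, v_trail = 2, 4                               # max leaf travel per step

delivered = np.zeros(nx)
lead = trail = 0                                     # exposed region: trail <= i < lead

for step in range(100000):
    # Leading leaf sweeps at its maximum speed; an extra jump of 5 positions is
    # injected at step 20 to emulate a generalized delivery error.
    lead = min(nx, lead + v_lead + (5 if step == 20 else 0))
    delivered[trail:lead] += dose_rate               # expose the open aperture
    # Feedback rule: the trailing leaf may pass a point only once the fluence
    # actually delivered there has reached the intended value.
    moved = 0
    while trail < nx and moved < v_trail and delivered[trail] >= intended[trail]:
        trail += 1
        moved += 1
    if trail >= nx:
        break

err = delivered - intended
print(f"finished in {step + 1} steps; delivered minus intended ranges "
      f"from {err.min():.2f} to {err.max():.2f} (fluence units)")
```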
Structures and materials technology issues for reusable launch vehicles
NASA Technical Reports Server (NTRS)
Dixon, S. C.; Tenney, D. R.; Rummler, D. R.; Wieting, A. R.; Bader, R. M.
1985-01-01
Projected space missions for both civil and defense needs require significant improvements in structures and materials technology for reusable launch vehicles: reductions in structural weight compared to the Space Shuttle Orbiter of up to 25% or more, a possible factor of 5 or more increase in mission life, increases in maximum use temperature of the external surface, reusable containment of cryogenic hydrogen and oxygen, significant reductions in operational costs, and possibly less lead time between technology readiness and initial operational capability. In addition, there is increasing interest in hypersonic airbreathing propulsion for launch and transatmospheric vehicles, and such systems require regeneratively cooled structure. The technology issues are addressed, giving brief assessments of the state-of-the-art and proposed activities to meet the technology requirements in a timely manner.
33 CFR 154.814 - Facility requirements for vessel vapor overpressure and vacuum protection.
Code of Federal Regulations, 2013 CFR
2013-07-01
... vapor at a rate of not less than 1.25 times the facility's maximum liquid transfer rate for cargo for... GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) POLLUTION FACILITIES TRANSFERRING OIL OR HAZARDOUS... in the vessel's cargo tanks within this range at any cargo transfer rate less than or equal to the...
46 CFR 162.028-3 - Requirements.
Code of Federal Regulations, 2012 CFR
2012-10-01
... which are subjected to pressure, exclusive of the hose, shall be at least five times the maximum working... which will withstand a minimum bursting pressure of 6,000 p.s.i., and a discharge hose or tube which will withstand a minimum bursting pressure of 5,000 p.s.i. The hose shall be constructed with either a...
46 CFR 162.028-3 - Requirements.
Code of Federal Regulations, 2014 CFR
2014-10-01
... which are subjected to pressure, exclusive of the hose, shall be at least five times the maximum working... which will withstand a minimum bursting pressure of 6,000 p.s.i., and a discharge hose or tube which will withstand a minimum bursting pressure of 5,000 p.s.i. The hose shall be constructed with either a...
46 CFR 162.028-3 - Requirements.
Code of Federal Regulations, 2013 CFR
2013-10-01
... which are subjected to pressure, exclusive of the hose, shall be at least five times the maximum working... which will withstand a minimum bursting pressure of 6,000 p.s.i., and a discharge hose or tube which will withstand a minimum bursting pressure of 5,000 p.s.i. The hose shall be constructed with either a...
One-time pad, complexity of verification of keys, and practical security of quantum cryptography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molotkov, S. N., E-mail: sergei.molotkov@gmail.com
2016-11-15
A direct relation between the complexity of the complete verification of keys, which is one of the main criteria of security in classical systems, and a trace distance used in quantum cryptography is demonstrated. Bounds for the minimum and maximum numbers of verification steps required to determine the actual key are obtained.
33 CFR Schedule I to Subpart A of... - Vessels Transiting U.S. Waters
Code of Federal Regulations, 2011 CFR
2011-07-01
... required to alter the course 90 degrees with maximum rudder angle and constant power settings: (2) The time...: (A) Calm weather—wind 10 knots or less, calm sea; (B) No current; (C) Deep water conditions—water... any of the following conditions, on which the maneuvering is based, are varied: (a) Calm weather—wind...
33 CFR Schedule I to Subpart A of... - Vessels Transiting U.S. Waters
Code of Federal Regulations, 2010 CFR
2010-07-01
... required to alter the course 90 degrees with maximum rudder angle and constant power settings: (2) The time...: (A) Calm weather—wind 10 knots or less, calm sea; (B) No current; (C) Deep water conditions—water... any of the following conditions, on which the maneuvering is based, are varied: (a) Calm weather—wind...
On numbers of clones needed for managing risks in clonal forestry
J. Bishir; J.H. Roberds
1999-01-01
An important question in clonal forestry concerns the number of clones needed in plantations to protect against catastrophic failure while at the same time achieving the uniform stands, high yields, and ease of management associated with this management system. This paper looks at how the number of clones required to achieve a predetermined maximum acceptable...
Disintegration impact on sludge digestion process.
Dauknys, Regimantas; Rimeika, Mindaugas; Jankeliūnaitė, Eglė; Mažeikienė, Aušra
2016-11-01
Anaerobic sludge digestion is a widely used method for sludge stabilization in wastewater treatment plants. This process can be improved by applying sludge disintegration methods. As sludge disintegration has not been investigated sufficiently, an analysis of how the application of thermal hydrolysis affects the sludge digestion process was conducted based on full-scale data. The results showed that the maximum volatile suspended solids (VSS) destruction reached the value of 65% independently of the application of thermal hydrolysis. The average VSS destruction increased by 14% when thermal hydrolysis was applied. In order to have the maximum VSS reduction and biogas production, it is recommended to keep the maximum defined VSS loading of 5.7 kg VSS/m^3/d when thermal hydrolysis is applied and to keep the VSS loading between 2.1-2.4 kg VSS/m^3/d when disintegration of sludge is not applied. The application of thermal hydrolysis allows an approximately 2.5 times higher VSS loading to be maintained compared with operation without disintegration; therefore, digesters with 1.8 times smaller volume are required.
Detrending the realized volatility in the global FX market
NASA Astrophysics Data System (ADS)
Schmidt, Anatoly B.
2009-05-01
There has been growing interest in realized volatility (RV) of financial assets that is calculated using intra-day returns. The choice of optimal time grid for these calculations is not trivial and generally requires analysis of RV dependence on the grid spacing (so-called RV signature). Typical RV signatures have a maximum at the finest time grid spacing available, which is attributed to the microstructure effects. This maximum decays into a plateau at lower frequencies, which implies (almost) stationary return variance. We found that the RV signatures in the modern global FX market may have no plateau or even have a maximum at lower frequencies. Simple averaging methods used to address the microstructure effects in equities have no practical effect on the FX RV signatures. We show that local detrending of the high-frequency FX rate samples yields RV signatures with a pronounced plateau. This implies that FX rates can be described with a Brownian motion having non-stationary trend and stationary variance. We point at a role of algorithmic trading as a possible cause of micro-trends in FX rates.
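A rough numerical sketch of the point (synthetic data, not the FX samples analysed in the paper): a slowly varying micro-trend added to otherwise stationary returns inflates the realized variance at coarse grid spacings, and subtracting a simple moving-average trend estimate, one possible form of local detrending, largely restores the flat plateau (apart from a mild attenuation introduced by the filter itself). The trend shape, noise level, and window length are assumptions for illustration.

```python
"""RV signature of a synthetic price series, raw and locally detrended."""
import numpy as np

rng = np.random.default_rng(2)

n = 1_000_000                        # number of high-frequency samples
sigma = 1e-4                         # stationary per-step return standard deviation
# Slowly varying micro-trend added to the returns (non-stationary drift).
trend = 5e-6 * np.sin(np.arange(n) / 20_000.0)
log_price = np.cumsum(trend + sigma * rng.standard_normal(n))

def realized_variance(series, spacing):
    """Sum of squared returns sampled every `spacing` steps."""
    r = np.diff(series[::spacing])
    return float(np.sum(r ** 2))

def local_detrend(series, window=5_000):
    """Subtract a moving-average estimate of the local trend (numpy only)."""
    c = np.cumsum(np.insert(series, 0, 0.0))
    ma = (c[window:] - c[:-window]) / window          # length n - window + 1
    trend_est = np.empty_like(series)
    half = window // 2
    trend_est[half:half + ma.size] = ma
    trend_est[:half] = ma[0]
    trend_est[half + ma.size:] = ma[-1]
    return series - trend_est

detrended = local_detrend(log_price)
print(" spacing    RV raw     RV detrended")
for s in (1, 10, 60, 300, 600):
    print(f"{s:8d}   {realized_variance(log_price, s):8.5f}   "
          f"{realized_variance(detrended, s):8.5f}")
```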
NASA Technical Reports Server (NTRS)
Brock, T. G.; Kaufman, P. B.
1988-01-01
Starch in pulvinus amyloplasts of barley (Hordeum vulgare cv Larker) disappears when 45-day-old, light-grown plants are given 5 days of continuous darkness. The effect of this loss on the pulvinus graviresponse was evaluated by following changes in the kinetics of response during the 5-day dark period. Over 5 days of dark pretreatment, the lag to initial graviresponse and the subsequent half-time to maximum steady state bending rate increased significantly while the maximum bending rate did not change. The change in response to applied indoleacetic acid (100 micromolar) plus gibberellic acid (10 micromolar) without gravistimulation, under identical dark pretreatments, was used as a model system for the response component of gravitropism. Dark pretreatment did not change the lag to initial response following hormone application to vertical pulvini, but both the maximum bending rate and the half-time to the maximum rate were significantly reduced. Also, after dark pretreatment, significant bending responses following hormone application were observed in vertical segments with or without added sucrose, while gravistimulation produced a response only if segments were given sucrose. These results indicate that starch-filled amyloplasts are required for the graviresponse of barley pulvini and suggest that they function in the stimulus perception and signal transduction components of gravitropism.
NASA Astrophysics Data System (ADS)
Geete, Ankur; Dubey, Akash; Sharma, Ankush; Dubey, Anshul
2018-05-01
In this research work, a compound parabolic solar collector (CPC) with evacuated tubes is fabricated. The main benefit of a CPC is that no solar tracking system is required. With the fabricated CPC, outlet temperatures of the flowing fluid, instantaneous efficiencies, useful heat gain rates, and inlet exergies (with and without considering the Sun's cone angle) are found experimentally. Observations are taken at different times (1200, 1230, 1300, 1330 and 1400 h), mass flow rates (1.15, 0.78, 0.76, 0.86 and 0.89 g/s), ambient temperatures, and collector dimensions. The work concludes that the maximum instantaneous efficiency is 69.87%, obtained with a 0.76 g/s water flow rate at 1300 h, and that the maximum temperature difference of 42°C was found at the same time. The maximum inlet exergies are 139.733 and 139.532 kW with and without considering the Sun's cone angle at 1300 h, respectively. The best thermal performance of the fabricated CPC with evacuated tubes is found at 1300 h. The maximum inlet exergy of 141.365 kW was found at 1300 h with a 0.31 m aperture width and a 1.72 m absorber pipe length.
Sludge stabilization through aerobic digestion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartman, R.B.; Smith, D.G.; Bennett, E.R.
1979-10-01
The aerobic digestion process with certain modifications is evaluated as an alternative for sludge processing capable of developing a product with characteristics required for land application. Environmental conditions, including temperature, solids concentration, and digestion time, that affect the aerobic digestion of a mixed primary sludge-trickling filter humus are investigated. Variations in these parameters that influence the characteristics of digested sludge are determined, and the parameters are optimized to: provide the maximum rate of volatile solids reduction; develop a stable, nonodorous product sludge; and provide the maximum rate of oxidation of the nitrogenous material present in the feed sludge. (3 diagrams, 9 graphs, 15 references, 3 tables)
NASA Technical Reports Server (NTRS)
Gray, D. J.
1978-01-01
Cryogenic transportation methods for providing liquid hydrogen requirements are examined in support of shuttle transportation system launch operations at Kennedy Space Center, Florida, during the time frames 1982-1991 in terms of cost and operational effectiveness. Transportation methods considered included sixteen different options employing mobile semi-trailer tankers, railcars, barges and combinations of each method. The study concludes that the most effective method of delivering liquid hydrogen from the vendor production facility in New Orleans to Kennedy Space Center includes maximum utilization of existing mobile tankers and railcars supplemented by maximum capacity mobile tankers procured incrementally in accordance with shuttle launch rates actually achieved.
Reversion phenomena of Cu-Cr alloys
NASA Technical Reports Server (NTRS)
Nishikawa, S.; Nagata, K.; Kobayashi, S.
1985-01-01
Cu-Cr alloys which were given various aging and reversion treatments were investigated in terms of electrical resistivity and hardness. Transmission electron microscopy was one technique employed. Some results obtained are as follows: the increment of electrical resistivity after the reversion at a constant temperature decreases as the aging temperature rises. In a constant aging condition, the increment of electrical resistivity after the reversion increases, and the time required for a maximum reversion becomes shorter as the reversion temperature rises. The reversion phenomena can be repeated, but their amount decreases rapidly with repetition. At first, the amount of reversion increases with aging time and reaches its maximum, and then tends to decrease again. Hardness changes caused by the reversion are very small, but the alloy tends to soften slightly. No changes due to the reversion treatment could be detected in transmission electron micrographs.
40 CFR 142.40 - Requirements for a variance.
Code of Federal Regulations, 2010 CFR
2010-07-01
... responsibility from any requirement respecting a maximum contaminant level of an applicable national primary... maximum contaminant levels of such drinking water regulations despite application of the best technology...
Sové, Richard J; Drakos, Nicole E; Fraser, Graham M; Ellis, Christopher G
2018-05-25
Red blood cell oxygen saturation is an important indicator of oxygen supply to tissues in the body. Oxygen saturation can be measured by taking advantage of spectroscopic properties of hemoglobin. When this technique is applied to transmission microscopy, the calculation of saturation requires determination of incident light intensity at each pixel occupied by the red blood cell; this value is often approximated from a sequence of images as the maximum intensity over time. This method often fails when the red blood cells are moving too slowly, or if hematocrit is too large since there is not a large enough gap between the cells to accurately calculate the incident intensity value. A new method of approximating incident light intensity is proposed using digital inpainting. This novel approach estimates incident light intensity with an average percent error of approximately 3%, which exceeds the accuracy of the maximum intensity based method in most cases. The error in incident light intensity corresponds to a maximum error of approximately 2% saturation. Therefore, though this new method is computationally more demanding than the traditional technique, it can be used in cases where the maximum intensity-based method fails (e.g. stationary cells), or when higher accuracy is required. This article is protected by copyright. All rights reserved.
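The sketch below (synthetic frames and a crude threshold segmentation, not the authors' pipeline) contrasts the two estimates of incident intensity on a toy capillary image sequence: the per-pixel maximum over time, which fails where slowly moving cells never leave a pixel uncovered, and inpainting of the cell-covered pixels of a single frame with OpenCV's `cv2.inpaint`. The illumination profile, cell spacing, drift speed, and threshold are assumptions for illustration.

```python
"""Toy comparison: max-over-time vs inpainting for incident intensity I0."""
import numpy as np
import cv2

rng = np.random.default_rng(3)

h, w, n_frames = 64, 256, 30
yy, xx = np.mgrid[0:h, 0:w]
# True incident intensity: a smooth illumination profile across the field.
i0_true = 180.0 + 40.0 * np.exp(-((xx - w / 2) ** 2) / (2 * 60.0 ** 2))

frames = []
for t in range(n_frames):
    frame = i0_true + rng.normal(0.0, 2.0, size=(h, w))
    # Dark "red blood cells" drifting very slowly along the capillary axis.
    for k in range(12):
        cx = (20.0 * k + 0.2 * t) % w
        frame[(xx - cx) ** 2 + (yy - h / 2) ** 2 <= 36.0] = 120.0
    frames.append(np.clip(frame, 0, 255).astype(np.uint8))
frames = np.stack(frames)

# Method 1: per-pixel maximum over the sequence (fails where slow, closely
# spaced cells never leave a pixel uncovered during the recording).
i0_max = frames.max(axis=0).astype(np.float64)

# Method 2: inpaint the cell-covered pixels of a single frame (cells located
# here by simple thresholding; a real pipeline would segment them properly).
frame0 = frames[0]
mask = (frame0 < 150).astype(np.uint8)
i0_inpaint = cv2.inpaint(frame0, mask, 5, cv2.INPAINT_TELEA).astype(np.float64)

row = h // 2                                 # capillary centreline
for name, est in (("max over time", i0_max), ("inpainting", i0_inpaint)):
    err = 100.0 * np.abs(est[row] - i0_true[row]) / i0_true[row]
    print(f"{name:14s} mean |error| on centreline: {err.mean():.1f} %")
```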
Fast automated analysis of strong gravitational lenses with convolutional neural networks.
Hezaveh, Yashar D; Levasseur, Laurence Perreault; Marshall, Philip J
2017-08-30
Quantifying image distortions caused by strong gravitational lensing-the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures-and estimating the corresponding matter distribution of these structures (the 'gravitational lens') has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. Here we report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the 'singular isothermal ellipsoid' density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.
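A toy sketch of the approach (random stand-in data and an arbitrary small architecture, not the trained networks of the paper): a convolutional network regresses a handful of lens-model parameters directly from an image, so that inference is a single forward pass rather than an iterative maximum-likelihood fit. The parameter count, layer sizes, and training data here are purely illustrative.

```python
"""Tiny CNN regressor from an image to lens-model parameters."""
import torch
import torch.nn as nn

class LensParamNet(nn.Module):
    def __init__(self, n_params: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 4 * 4, 128), nn.ReLU(),
            nn.Linear(128, n_params),
        )

    def forward(self, x):
        return self.head(self.features(x))

# Stand-in training data: random images and random target parameters. A real
# application would use simulated lensed images with known SIE parameters
# (e.g. Einstein radius, ellipticity components, centre offsets).
images = torch.randn(256, 1, 64, 64)
params = torch.randn(256, 5)

model = LensParamNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(3):                        # token training loop
    opt.zero_grad()
    loss = loss_fn(model(images), params)
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: loss = {loss.item():.3f}")

# Inference is a single forward pass - the source of the speed advantage over
# iterative maximum-likelihood fitting highlighted in the abstract.
with torch.no_grad():
    print("predicted parameters:", model(images[:1]).squeeze().tolist())
```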
NASA Technical Reports Server (NTRS)
Kanning, G.; Cicolani, L. S.; Schmidt, S. F.
1983-01-01
Translational state estimation in terminal area operations, using a set of commonly available position, air data, and acceleration sensors, is described. Kalman filtering is applied to obtain maximum estimation accuracy from the sensors but feasibility in real-time computations requires a variety of approximations and devices aimed at minimizing the required computation time with only negligible loss of accuracy. Accuracy behavior throughout the terminal area, its relation to sensor accuracy, its effect on trajectory tracking errors and control activity in an automatic flight control system, and its adequacy in terms of existing criteria for various terminal area operations are examined. The principal investigative tool is a simulation of the system.
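A minimal 1-D sketch of the underlying idea (not the flight filter described, which handles the full translational state and many more sensors): a Kalman filter that fuses a noisy position sensor with accelerometer data at a fixed update interval. All noise levels and the acceleration profile are assumptions for illustration.

```python
"""1-D Kalman filter fusing a position sensor with an accelerometer."""
import numpy as np

rng = np.random.default_rng(4)

dt = 0.1                                       # update interval, s
F = np.array([[1.0, dt], [0.0, 1.0]])          # constant-velocity state transition
B = np.array([0.5 * dt ** 2, dt])              # how acceleration enters the state
H = np.array([[1.0, 0.0]])                     # we measure position only
Q = 1e-3 * np.eye(2)                           # process-noise covariance
R = np.array([[25.0]])                         # position-sensor variance (5 m std)

x_true = np.array([0.0, 0.0])                  # true position and velocity
x_est = np.array([0.0, 0.0])
P = 100.0 * np.eye(2)

for k in range(300):
    a = 1.0 if k < 150 else -1.0               # true acceleration profile, m/s^2
    # Truth propagation and noisy sensor readings.
    x_true = F @ x_true + B * a
    z = x_true[0] + rng.normal(0.0, 5.0)       # noisy position measurement
    a_meas = a + rng.normal(0.0, 0.05)         # noisy accelerometer reading

    # Predict step, using the measured acceleration as the control input.
    x_est = F @ x_est + B * a_meas
    P = F @ P @ F.T + Q
    # Update step with the position measurement.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + K @ (np.array([z]) - H @ x_est)
    P = (np.eye(2) - K @ H) @ P

print(f"final position error: {abs(x_est[0] - x_true[0]):.2f} m, "
      f"velocity error: {abs(x_est[1] - x_true[1]):.3f} m/s")
```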
Biomechanical Analysis of the Closed Kinetic Chain Upper-Extremity Stability Test.
Tucci, Helga T; Felicio, Lilian R; McQuade, Kevin J; Bevilaqua-Grossi, Debora; Camarini, Paula Maria Ferreira; Oliveira, Anamaria S
2017-01-01
The closed kinetic chain upper-extremity stability (CKCUES) test is a functional test for the upper extremity performed in the push-up position, where individuals support their body weight on 1 hand placed on the ground and swing the opposite hand until touching the hand on the ground, then switch hands and repeat the process as fast as possible for 15 s. Objective: To study scapular kinematic and kinetic measures during the CKCUES test for 3 different distances between hands. Design: Experimental. Setting: Laboratory. Participants: 30 healthy individuals (15 male, 15 female). Participants performed 3 repetitions of the test at 3 distance conditions: original (36 in), interacromial, and 150% interacromial distance between hands. Participants completed a questionnaire on pain intensity and perceived exertion before and after the procedures. Scapular internal/external rotation, upward/downward rotation, and posterior/anterior tilting kinematics and kinetic data on maximum force and time to maximum force were measured bilaterally in all participants. Percentage of body weight on upper extremities was calculated. Data analyses were based on the total numbers of hand touches performed for each distance condition, and scapular kinematics and kinetic values were averaged over the 3 trials. Scapular kinematics, maximum force, and time to maximum force were compared for the 3 distance conditions within each gender. Significance level was set at α = .05. Scapular internal rotation, posterior tilting, and upward rotation were significantly greater in the dominant side for both genders. Scapular upward rotation was significantly greater in original distance than interacromial distance in swing phase. Time to maximum force in women was significantly greater in the dominant side. CKCUES test kinematic and kinetic measures were not different among 3 conditions based on distance between hands. However, the test might not be suitable for initial or mild-level rehabilitation due to its challenging requirements.
Fast maximum likelihood estimation using continuous-time neural point process models.
Lepage, Kyle Q; MacDonald, Christopher J
2015-06-01
A recent report estimates that the number of simultaneously recorded neurons is growing exponentially. A commonly employed statistical paradigm using discrete-time point process models of neural activity involves the computation of a maximum-likelihood estimate. The time to compute this estimate, per neuron, is proportional to the number of bins in a finely spaced discretization of time. By using continuous-time models of neural activity and the optimally efficient Gaussian quadrature, memory requirements and computation times are dramatically decreased in the commonly encountered situation where the number of parameters p is much less than the number of time-bins n. In this regime, with q equal to the quadrature order, memory requirements are decreased from O(np) to O(qp), and the number of floating-point operations is decreased from O(np^2) to O(qp^2). Accuracy of the proposed estimates is assessed based upon physiological consideration, error bounds, and mathematical results describing the relation between numerical integration error and numerical error affecting both parameter estimates and the observed Fisher information. A check is provided which is used to adapt the order of numerical integration. The procedure is verified in simulation and for hippocampal recordings. It is found that in 95% of hippocampal recordings a q of 60 yields numerical error negligible with respect to parameter estimate standard error. Statistical inference using the proposed methodology is a fast and convenient alternative to statistical inference performed using a discrete-time point process model of neural activity. It enables the employment of the statistical methodology available with discrete-time inference, but is faster, uses less memory, and avoids any error due to discretization.
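The computational point can be made concrete with a small sketch (an assumed intensity model, not the authors' code): the data term of the point-process log-likelihood, the sum of log lambda over observed spike times, is the same in either formulation, so the saving comes from evaluating the integral of the conditional intensity with a q-point Gauss-Legendre rule instead of a sum over n >> q fine time bins.

```python
"""Evaluating the integral term of a point-process log-likelihood two ways."""
import numpy as np

# Smooth conditional-intensity model lambda(t) = exp(b0 + b1*cos(w*t)), assumed.
b0, b1, T = 1.0, 0.8, 100.0
w = 2 * np.pi / 50.0
lam = lambda t: np.exp(b0 + b1 * np.cos(w * t))

# Discrete-time approach: midpoint sum over n fine time bins.
n = 100_000
edges = np.linspace(0.0, T, n + 1)
mid = 0.5 * (edges[:-1] + edges[1:])
integral_binned = float(np.sum(lam(mid)) * (T / n))

# Continuous-time approach: q-point Gauss-Legendre quadrature with q << n.
q = 60
nodes, weights = np.polynomial.legendre.leggauss(q)   # nodes/weights on [-1, 1]
t_q = 0.5 * T * (nodes + 1.0)                          # map nodes to [0, T]
integral_gauss = float(0.5 * T * np.sum(weights * lam(t_q)))

print(f"integral of lambda, {n}-bin sum     : {integral_binned:.6f}")
print(f"integral of lambda, {q}-point Gauss : {integral_gauss:.6f}")
```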
NASA Technical Reports Server (NTRS)
Ward, T. L.
1975-01-01
The future development of full capability Space Tug will impose strict requirements upon the thermal design. While requiring a reliable and reusable design, Space Tug must be capable of steady-state and transient thermal operation during any given mission for mission durations of up to seven days and potentially longer periods of time. Maximum flexibility and adaptability of Space Tug to the mission model requires that the vehicle operate within attitude constraints throughout any specific mission. These requirements were translated into a preliminary design study for a geostationary deploy and retrieve mission definition for Space Tug to determine the thermal control design requirements. Results of the study are discussed with emphasis given to some of the unique avenues pursued during the study, as well as the recommended thermal design configuration.
Code of Federal Regulations, 2010 CFR
2010-10-01
... issued to a vessel describes the vessel, the route which it may travel, the minimum manning requirements... required to be carried, the maximum number of sailing school students and instructors and the maximum...
Recent advances in phase shifted time averaging and stroboscopic interferometry
NASA Astrophysics Data System (ADS)
Styk, Adam; Józwik, Michał
2016-08-01
Classical Time Averaging and Stroboscopic Interferometry are widely used for MEMS/MOEMS dynamic behavior investigations. Unfortunately, both methods require extensive measurement and data processing strategies in order to evaluate the information on maximum amplitude at a given load of the vibrating object. In this paper, modified strategies of data processing in both techniques are introduced. These modifications allow for fast and reliable calculation of the searched value, without additional complication of the measurement systems. Both approaches are discussed and experimentally verified.
XPAR-2 Search Mode Initial Design
2013-11-01
by an azimuth sector, an elevation sector, and out to a required maximum range. The frame-time, which is defined as the time it takes the antenna beam...continues its scan, more targets are detected and the measurements are used to form their track files, which are then updated when the beam scans over...every additional target to be tracked. Although the track update rate can be made much faster than that in the TWS mode, it is obvious that there is a
Validation of Model-Based Prognostics for Pneumatic Valves in a Demonstration Testbed
2014-10-02
predict end of life (EOL) and remaining useful life (RUL). The approach still follows the general estimation-prediction framework developed in the...atmosphere, with linearly increasing leak area. dA_leak/dt = c_leak (16) We define valve end of life (EOL) through open/close time limits of the valves, as in...represents end of life (EOL), and ∆kE represents remaining useful life (RUL). For valves, timing requirements are provided that define the maximum
41 CFR 301-31.10 - How will my agency pay my subsistence expenses?
Code of Federal Regulations, 2011 CFR
2011-07-01
... maximum lodging amount applicable to the locality .75 times the maximum lodging amount applicable to the locality .5 times the maximum lodging amount applicable to the locality. Payment for lodging, meals, and other per diem expenses The maximum per diem rate applicable to the locality .75 times the maximum per...
Self-tapping ability of carbon fibre reinforced polyetheretherketone suture anchors.
Feerick, Emer M; Wilson, Joanne; Jarman-Smith, Marcus; Ó'Brádaigh, Conchur M; McGarry, J Patrick
2014-10-01
An experimental and computational investigation of the self-tapping ability of carbon fibre reinforced polyetheretherketone (CFR-PEEK) has been conducted. Six CFR-PEEK suture anchor designs were investigated using PEEK-OPTIMA® Reinforced, a medical grade of CFR-PEEK. Experimental tests were conducted to investigate the maximum axial force and torque required for self-tapping insertion of each anchor design. Additional experimental tests were conducted for some anchor designs using pilot holes. Computational simulations were conducted to determine the maximum stress in each anchor design at various stages of insertion. Simulations also were performed to investigate the effect of wall thickness in the anchor head. The maximum axial force required to insert a self-tapping CFR-PEEK suture anchor did not exceed 150 N for any anchor design. The maximum torque required to insert a self-tapping CFR-PEEK suture anchor did not exceed 0.8 Nm. Computational simulations reveal significant stress concentrations in the region of the anchor tip, demonstrating that a re-design of the tip geometry should be performed to avoid fracture during self-tapping, as observed in the experimental component of this study. This study demonstrates the ability of PEEK-OPTIMA Reinforced suture anchors to self-tap polyurethane foam bone analogue. This provides motivation to further investigate the self-tapping ability of CFR-PEEK suture anchors in animal/cadaveric bone. An optimised design for CFR-PEEK suture anchors offers the advantages of radiolucency, and mechanical properties similar to bone with the ability to self-tap. This may have positive implications for reducing surgery times and the costs associated with the procedure. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
Precision Timing with shower maximum detectors based on pixelated micro-channel plates
NASA Astrophysics Data System (ADS)
Bornheim, A.; Apresyan, A.; Ronzhin, A.; Xie, S.; Spiropulu, M.; Trevor, J.; Pena, C.; Presutti, F.; Los, S.
2017-11-01
Future calorimeters and shower maximum detectors at high luminosity colliders need to be highly radiation resistant and very fast. One exciting option for such a detector is a calorimeter composed of a secondary emitter as the active element. In this report we outline the study and development of a secondary emission calorimeter prototype using micro-channel plates (MCP) as the active element, which directly amplify the electromagnetic shower signal. We demonstrate the feasibility of using a bare MCP within an inexpensive and robust housing without the need for any photo cathode, which is a key requirement for high radiation tolerance. Test beam measurements of the prototype were performed with 120 GeV primary protons and secondary beams at the Fermilab Test Beam Facility, demonstrating basic calorimetric measurements and precision timing capabilities. Using multiple pixel readout on the MCP, we demonstrate a transverse spatial resolution of 0.8 mm, and time resolution better than 40 ps for electromagnetic showers.
Precision Timing with shower maximum detectors based on pixelated micro-channel plates
Bornheim, A.; Apresyan, A.; Ronzhin, A.; ...
2017-11-27
Future calorimeters and shower maximum detectors at high luminosity colliders need to be highly radiation resistant and very fast. One exciting option for such a detector is a calorimeter composed of a secondary emitter as the active element. Here, we outline the study and development of a secondary emission calorimeter prototype using micro-channel plates (MCP) as the active element, which directly amplify the electromagnetic shower signal. We also demonstrate the feasibility of using a bare MCP within an inexpensive and robust housing without the need for any photo cathode, which is a key requirement for high radiation tolerance. Test beam measurements of the prototype were performed with 120 GeV primary protons and secondary beams at the Fermilab Test Beam Facility, demonstrating basic calorimetric measurements and precision timing capabilities. Using multiple pixel readout on the MCP, we demonstrate a transverse spatial resolution of 0.8 mm, and time resolution better than 40 ps for electromagnetic showers.
Precision Timing with shower maximum detectors based on pixelated micro-channel plates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bornheim, A.; Apresyan, A.; Ronzhin, A.
Future calorimeters and shower maximum detectors at high luminosity colliders need to be highly radiation resistant and very fast. One exciting option for such a detector is a calorimeter composed of a secondary emitter as the active element. Here, we outline the study and development of a secondary emission calorimeter prototype using micro-channel plates (MCP) as the active element, which directly amplify the electromagnetic shower signal. We also demonstrate the feasibility of using a bare MCP within an inexpensive and robust housing without the need for any photo cathode, which is a key requirement for high radiation tolerance. Test beam measurements of the prototype were performed with 120 GeV primary protons and secondary beams at the Fermilab Test Beam Facility, demonstrating basic calorimetric measurements and precision timing capabilities. Using multiple pixel readout on the MCP, we demonstrate a transverse spatial resolution of 0.8 mm, and time resolution better than 40 ps for electromagnetic showers.
NASA Astrophysics Data System (ADS)
Merrill, S.; Horowitz, J.; Traino, A. C.; Chipkin, S. R.; Hollot, C. V.; Chait, Y.
2011-02-01
Calculation of the therapeutic activity of radioiodine 131I for individualized dosimetry in the treatment of Graves' disease requires an accurate estimate of the thyroid absorbed radiation dose based on a tracer activity administration of 131I. Common approaches (Marinelli-Quimby formula, MIRD algorithm) use, respectively, the effective half-life of radioiodine in the thyroid and the time-integrated activity. Many physicians perform one, two, or at most three tracer dose activity measurements at various times and calculate the required therapeutic activity by ad hoc methods. In this paper, we study the accuracy of estimates of four 'target variables': time-integrated activity coefficient, time of maximum activity, maximum activity, and effective half-life in the gland. Clinical data from 41 patients who underwent 131I therapy for Graves' disease at the University Hospital in Pisa, Italy, are used for analysis. The radioiodine kinetics are described using a nonlinear mixed-effects model. The distributions of the target variables in the patient population are characterized. Using minimum root mean squared error as the criterion, optimal 1-, 2-, and 3-point sampling schedules are determined for estimation of the target variables, and probabilistic bounds are given for the errors under the optimal times. An algorithm is developed for computing the optimal 1-, 2-, and 3-point sampling schedules for the target variables. This algorithm is implemented in a freely available software tool. Taking into consideration 131I effective half-life in the thyroid and measurement noise, the optimal 1-point time for time-integrated activity coefficient is a measurement 1 week following the tracer dose. Additional measurements give only a slight improvement in accuracy.
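For orientation, the sketch below works through the simplest version of these quantities under an assumed mono-exponential retention model (a simplification; the paper uses a nonlinear mixed-effects kinetic model, and the measurement values here are hypothetical): two tracer uptake measurements determine the effective half-life and extrapolated initial uptake, from which the time-integrated activity coefficient follows as u0 * T_eff / ln 2.

```python
"""Mono-exponential tracer kinetics: effective half-life and time-integrated
activity coefficient from two hypothetical uptake measurements."""
import numpy as np

# Hypothetical tracer measurements: fractional thyroid uptake at two times.
t_meas = np.array([1.0, 7.0])          # days after tracer administration
u_meas = np.array([0.45, 0.22])        # fraction of administered activity

# Model u(t) = u0 * exp(-ln(2) * t / T_eff); two points fix both parameters.
lam = np.log(u_meas[0] / u_meas[1]) / (t_meas[1] - t_meas[0])   # 1/days
T_eff = np.log(2.0) / lam                                       # effective half-life
u0 = u_meas[0] * np.exp(lam * t_meas[0])                        # extrapolated to t = 0

# Time-integrated activity coefficient: integral of u(t) from 0 to infinity.
a_tilde = u0 * T_eff / np.log(2.0)

print(f"effective half-life:                   {T_eff:.2f} days")
print(f"extrapolated initial uptake:           {u0:.3f}")
print(f"time-integrated activity coefficient:  {a_tilde:.2f} days")
```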
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pokharel, S; Rana, S
Purpose: The purpose of this study is to evaluate the effect of grid size in the Eclipse AcurosXB dose calculation algorithm for SBRT lung. Methods: Five cases of SBRT lung previously treated have been chosen for the present study. Four of the plans were 5-field conventional IMRT and one was a Rapid Arc plan. All five cases have been calculated with the five grid sizes (1, 1.5, 2, 2.5 and 3 mm) available for the AXB algorithm with the same plan normalization. Dosimetric indices relevant to SBRT along with MUs and calculation time have been recorded for the different grid sizes. The maximum difference was calculated as a percentage of the mean of all five values. All the plans were IMRT QAed with portal dosimetry. Results: The maximum difference of MUs was within 2%. The calculation time increased by as much as a factor of 7 from the largest (3 mm) to the smallest (1 mm) grid size. The largest differences of PTV minimum, maximum and mean dose were 7.7%, 1.5% and 1.6% respectively. The highest D2-Max difference was 6.1%. The highest differences in ipsilateral lung mean, V5Gy, V10Gy and V20Gy were 2.6%, 2.4%, 1.9% and 3.8% respectively. The maximum differences of heart, cord and esophagus dose were 6.5%, 7.8% and 4.02% respectively. The IMRT Gamma passing rate at 2%/2mm remained within 1.5% with at least 98% of points passing for all grid sizes. Conclusion: This work indicates the lowest grid size of 1 mm available in AXB is not necessarily required for accurate dose calculation. No significant change in the IMRT passing rate was observed when the grid size was reduced below 2 mm. Although the maximum percentage differences of some of the dosimetric indices appear large, most of them are clinically insignificant in absolute dose values. We therefore conclude that the 2 mm grid size is the best compromise between dose calculation accuracy and calculation time.
Thermal energy storage systems using fluidized bed heat exchangers
NASA Technical Reports Server (NTRS)
Weast, T.; Shannon, L.
1980-01-01
A rotary cement kiln and an electric arc furnace were chosen for evaluation to determine the applicability of a fluid bed heat exchanger (FBHX) for thermal energy storage (TES). Multistage shallow bed FBHX's operating with high temperature differences were identified as the most suitable for TES applications. Analysis of the two selected conceptual systems included establishing a plant process flow configuration, an operational scenario, a preliminary FBHX/TES design, and parametric analysis. A computer model was developed to determine the effects of the number of stages, gas temperatures, gas flows, bed materials, charge and discharge time, and parasitic power required for operation. The maximum national energy conservation potential of the cement plant application with TES is 15.4 million barrels of oil or 3.9 million tons of coal per year. For the electric arc furnace application the maximum national conservation potential with TES is 4.5 million barrels of oil or 1.1 million tons of coal per year. Present time of day utility rates are near the breakeven point required for the TES system. Escalation of on-peak energy due to critical fuel shortages could make the FBHX/TES applications economically attractive in the future.
Thermal energy storage systems using fluidized bed heat exchangers
NASA Astrophysics Data System (ADS)
Weast, T.; Shannon, L.
1980-06-01
A rotary cement kiln and an electric arc furnace were chosen for evaluation to determine the applicability of a fluid bed heat exchanger (FBHX) for thermal energy storage (TES). Multistage shallow bed FBHX's operating with high temperature differences were identified as the most suitable for TES applications. Analysis of the two selected conceptual systems included establishing a plant process flow configuration, an operational scenario, a preliminary FBHX/TES design, and parametric analysis. A computer model was developed to determine the effects of the number of stages, gas temperatures, gas flows, bed materials, charge and discharge time, and parasitic power required for operation. The maximum national energy conservation potential of the cement plant application with TES is 15.4 million barrels of oil or 3.9 million tons of coal per year. For the electric arc furnace application the maximum national conservation potential with TES is 4.5 million barrels of oil or 1.1 million tons of coal per year. Present time of day utility rates are near the breakeven point required for the TES system. Escalation of on-peak energy due to critical fuel shortages could make the FBHX/TES applications economically attractive in the future.
Proximity operations analysis: Retrieval of the solar maximum mission observatory
NASA Technical Reports Server (NTRS)
Yglesias, J. A.
1980-01-01
Retrieval of the solar maximum mission (SMM) observatory is feasible in terms of orbiter primary reaction control system (PRCS) plume disturbance of the SMM, orbiter propellant consumed, and flight time required. Man-in-loop simulations will be required to validate these operational techniques before the verification process is complete. Candidate approach and flyaround techniques were developed that allow the orbiter to attain the proper alinement with the SMM for clear access to the grapple fixture (GF) prior to grappling. Because the SMM has very little control authority (approximately 14.8 pound-foot-seconds in two axes and rate-damped in the third) it is necessary to inhibit all +Z (upfiring) PRCS jets on the orbiter to avoid tumbling the SMM. A profile involving a V-bar approach and an out-of-plane flyaround appears to be the best choice and is recommended at this time. The flyaround technique consists of alining the +X-axes of the two vehicles parallel with each other and then flying the orbiter around the SMM until the GF is in view. The out-of-plane flyaround technique is applicable to any inertially stabilized payload, and the entire final approach profile could be considered as standard for most retrieval missions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kang, Moses; Kim, Keonhui; Muljadi, Eduard
This paper proposes a torque limit-based inertial control scheme of a doubly-fed induction generator (DFIG) that supports the frequency control of a power system. If a frequency deviation occurs, the proposed scheme aims to release a large amount of kinetic energy (KE) stored in the rotating masses of a DFIG to raise the frequency nadir (FN). Upon detecting the event, the scheme instantly increases its output to the torque limit and then reduces the output with the rotor speed so that it converges to the stable operating range. To restore the rotor speed while causing a small second frequency dip (SFD), after the rotor speed converges the power reference is reduced by a small amount and maintained until it meets the reference for maximum power point tracking control. The test results demonstrate that the scheme can improve the FN and maximum rate of change of frequency while causing a small SFD in any wind conditions and in a power system that has a high penetration of wind power, and thus the scheme helps maintain the required level of system reliability. The scheme releases the KE from 2.9 times to 3.7 times the Hydro-Quebec requirement depending on the power reference.
Holkar, Somnath Kadappa; Chandra, Ram
2016-01-01
Pleurotus spp. is one of the most important edible mushrooms cultivated in India. The present study was an attempt to compare five Pleurotus species in the context of the actual time required for each growth stage, viz. spawn run period, number of days required for initiation of pinheads of sporophores, average weight of fruiting bodies in all the flushes, and total yield. The spawn run period in all five species was recorded between 18 days and 21 days; similarly, 5 days to 7 days were required for initiation of pinheads after the spawn run period. A total of 24 days to 27 days, 34 days to 37 days and 47 days to 53 days were required for harvesting the I, II and III flushes respectively. An average of 41 to 70 sporophores per bag containing 1 kg of dry substrate was obtained from all the Pleurotus species. The maximum weight of a single sporophore, 14 g, was recorded from P. florida; similarly, an average maximum sporophore diameter of 5.3 cm was observed for P. florida, whereas the diameter of sporophores in the rest of the species ranged from 3.0 cm to 3.2 cm. The maximum number of sporophores was obtained from P. sajor-caju (n = 70), and all the species showed significant differences with respect to the number of sporophores in a bunch at a probability level of P = 0.05. The maximum weight of a single bunch (58 g) and a total yield of 740 g kg(-1) of dry matter were recorded in P. florida.
Maximum entropy analysis of polarized fluorescence decay of (E)GFP in aqueous solution
NASA Astrophysics Data System (ADS)
Novikov, Eugene G.; Skakun, Victor V.; Borst, Jan Willem; Visser, Antonie J. W. G.
2018-01-01
The maximum entropy method (MEM) was used for the analysis of polarized fluorescence decays of enhanced green fluorescent protein (EGFP) in buffered water/glycerol mixtures, obtained with time-correlated single-photon counting (Visser et al 2016 Methods Appl. Fluoresc. 4 035002). To this end, we used a general-purpose software module of MEM that was earlier developed to analyze (complex) laser photolysis kinetics of ligand rebinding reactions in oxygen binding proteins. We demonstrate that the MEM software provides reliable results and is easy to use for the analysis of both total fluorescence decay and fluorescence anisotropy decay of aqueous solutions of EGFP. The rotational correlation times of EGFP in water/glycerol mixtures, obtained by MEM as maxima of the correlation-time distributions, are identical to the single correlation times determined by global analysis of parallel and perpendicular polarized decay components. The MEM software is also able to determine homo-FRET in another dimeric GFP, for which the transfer correlation time is an order of magnitude shorter than the rotational correlation time. One important advantage of MEM analysis is that no initial guesses of parameters are required, since MEM is able to select the least correlated solution from the feasible set of solutions.
Long-range wind monitoring in real time with optimized coherent lidar
NASA Astrophysics Data System (ADS)
Dolfi-Bouteyre, Agnes; Canat, Guillaume; Lombard, Laurent; Valla, Matthieu; Durécu, Anne; Besson, Claudine
2017-03-01
Two important enabling technologies for pulsed coherent detection wind lidar are the laser and real-time signal processing. In particular, fiber lasers are limited in peak power by nonlinear effects, such as stimulated Brillouin scattering (SBS). We report on various technologies that have been developed to mitigate SBS and increase peak power in 1.5-μm fiber lasers, such as special large mode area fiber designs or strain management. Range-resolved wind profiles up to a record range of 16 km within 0.1-s averaging time have been obtained thanks to those high-peak-power fiber lasers. At long range, the lidar signal gets much weaker than the noise and special care is required to extract the Doppler peak from the spectral noise. To optimize real-time processing for weak carrier-to-noise ratio signals, we have studied various Doppler mean frequency estimators (MFE) and the influence of data accumulation on outlier occurrence. Five real-time MFEs (maximum, centroid, matched filter, maximum likelihood, and polynomial fit) have been compared in terms of error and processing time using lidar experimental data. MFE errors and data accumulation limits are established using a spectral method.
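The sketch below (synthetic periodogram draws, not the lidar data) implements two of the real-time mean-frequency estimators named above, the spectral maximum and a centroid computed in a window around that maximum, and shows the effect of accumulating pulse spectra on outlier occurrence at weak carrier-to-noise ratio; the spectrum model, window width, and noise-floor handling are assumptions for illustration.

```python
"""Maximum and windowed-centroid Doppler mean-frequency estimators."""
import numpy as np

rng = np.random.default_rng(5)

freqs = np.linspace(-50.0, 50.0, 512)        # Doppler axis, MHz
f_true, width, peak = 12.3, 2.0, 0.5         # true mean frequency, width, peak height

def spectrum(n_accum):
    """Average of n_accum periodogram draws (exponentially distributed bins)."""
    mean_spec = 1.0 + peak * np.exp(-0.5 * ((freqs - f_true) / width) ** 2)
    draws = rng.exponential(mean_spec, size=(n_accum, freqs.size))
    return draws.mean(axis=0)

def mfe_maximum(spec):
    """Frequency of the spectral maximum."""
    return float(freqs[np.argmax(spec)])

def mfe_centroid(spec, half_window=5.0):
    """Centroid of the floor-subtracted spectrum near the spectral maximum
    (a common refinement of the plain maximum estimator)."""
    f0 = freqs[np.argmax(spec)]
    sel = np.abs(freqs - f0) <= half_window
    s = np.clip(spec[sel] - np.median(spec), 0.0, None)
    return float(f0) if s.sum() == 0.0 else float(np.sum(freqs[sel] * s) / np.sum(s))

for n_accum in (1, 50):
    for name, est in (("maximum", mfe_maximum), ("centroid", mfe_centroid)):
        errs = np.array([est(spectrum(n_accum)) - f_true for _ in range(300)])
        outliers = np.mean(np.abs(errs) > 5.0)
        print(f"accum {n_accum:>3d}  {name:8s}  rms error {np.std(errs):6.2f} MHz"
              f"  outlier fraction {outliers:.2f}")
```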
THERMAL EVALUATION OF CONTAMINATED LIQUID ONTO CELL FLOORS
DOE Office of Scientific and Technical Information (OSTI.GOV)
2009-05-04
For the Salt Disposition Integration Project (SDIP), postulated events in the new Salt Waste Processing Facility (SWPF) can result in spilling liquids that contain Cs-137 and organics onto cell floors. The parameters of concern are the maximum temperature of the fluid following a spill and the time required for the maximum fluid temperature to be reached. Control volume models of the various process cells have been developed using standard conduction and natural convection relationships. The calculations are performed using the Mathcad modeling software. The results are being used in Consolidated Hazards Analysis Planning (CHAP) to determine the controls that may be needed to mitigate the potential impact of liquids containing Cs-137 and flammable organics that spill onto cell floors. Model development techniques and the ease of making model changes within the Mathcad environment are discussed. The results indicate that certain fluid spills result in overheating of the fluid, but the times to reach steady state are several hundred hours. These long times allow spill cleanup without the use of expensive mitigation controls.
Yang, Lili; Suzuki, Eduardo Yugo; Suzuki, Boonsiva
2014-01-01
The purpose of this study was to compare the distraction forces and the biomechanical effects between two different intraoperative surgical procedures (down-fracture [DF] and non-DF [NDF]) for maxillary distraction osteogenesis. Eight patients were assigned into two groups according to the surgical procedure: DF, n = 6 versus NDF, n = 2. Lateral cephalograms taken preoperatively (T1), immediately after removal of the distraction device (T2), and after at least a 6-month follow-up period (T3) were analyzed. Assessment of distraction forces was performed during the distraction period. The Mann-Whitney U-test was used to compare the difference in the amount of advancement, the maximum distraction force and the amount of relapse. Although a significantly greater amount of maxillary movement was observed in the DF group (median 9.5 mm; minimum-maximum 7.9-14.1 mm) than in the NDF group (median 5.9 mm; minimum-maximum 4.4-7.6 mm), significantly lower maximum distraction forces were observed in the DF (median 16.4 N; minimum-maximum 15.1-24.6 N) than in the NDF (median 32.9 N; minimum-maximum 27.6-38.2 N) group. A significantly greater amount of dental anchorage loss was observed in the NDF group. Moreover, the amount of relapse observed in the NDF group was approximately 3.5 times greater than in the DF group. In this study, the use of the NDF procedure appeared to result in lower levels of maxillary mobility at the time of the maxillary distraction, consequently requiring greater amounts of force to advance the maxillary bone. Moreover, it also resulted in a reduced amount of maxillary movement, a greater amount of dental anchorage loss and poor treatment stability.
Effect of sampling rate and record length on the determination of stability and control derivatives
NASA Technical Reports Server (NTRS)
Brenner, M. J.; Iliff, K. W.; Whitman, R. K.
1978-01-01
Flight data from five aircraft were used to assess the effects of sampling rate and record length reductions on estimates of stability and control derivatives produced by a maximum likelihood estimation method. Derivatives could be extracted from flight data with the maximum likelihood estimation method even if there were considerable reductions in sampling rate and/or record length. Small-amplitude pulse maneuvers showed greater degradation of the derivative estimates than large-amplitude pulse maneuvers when these reductions were made. Reducing the sampling rate was found to be more desirable than reducing the record length as a method of lessening the total computation time required without greatly degrading the quality of the estimates.
NASA Technical Reports Server (NTRS)
Sadovsky, A. V.; Davis, D.; Isaacson, D. R.
2012-01-01
We address the problem of navigating a set of moving agents, e.g. automated guided vehicles, through a transportation network so as to bring each agent to its destination at a specified time. Each pair of agents is required to be separated by a minimal distance, generally agent-dependent, at all times. The speed range, initial position, required destination, and required time of arrival at destination for each agent are assumed provided. The movement of each agent is governed by a controlled differential equation (state equation). The problem consists in choosing for each agent a path and a control strategy so as to meet the constraints and reach the destination at the required time. This problem arises in various fields of transportation, including Air Traffic Management and train coordination, and in robotics. The main contribution of the paper is a model that allows this problem to be recast as a decoupled collection of problems in classical optimal control and is easily generalized to the case when inertia cannot be neglected. Some qualitative insight into solution behavior is obtained using the Pontryagin Maximum Principle. Sample numerical solutions are computed using a numerical optimal control solver.
Potential Operating Orbits for Fission Electric Propulsion Systems Driven by the SAFE-400
NASA Technical Reports Server (NTRS)
Houts, Mike; Kos, Larry; Poston, David; Rodgers, Stephen L. (Technical Monitor)
2002-01-01
Safety must be ensured during all phases of space fission system design, development, fabrication, launch, operation, and shutdown. One potential space fission system application is fission electric propulsion (FEP), in which fission energy is converted into electricity and used to power high efficiency (Isp greater than 3000s) electric thrusters. For these types of systems it is important to determine which operational scenarios ensure safety while allowing maximum mission performance and flexibility. Space fission systems are essentially nonradioactive at launch, prior to extended operation at high power. Once high power operation begins, system radiological inventory steadily increases as fission products build up. For a given fission product isotope, the maximum radiological inventory is typically achieved once the system has operated for a length of time equivalent to several half-lives. After that time, the isotope decays at the same rate it is produced, and no further inventory builds in. For an FEP mission beginning in Earth orbit, altitude and orbital lifetime increase as the propulsion system operates. Two simultaneous effects of fission propulsion system operation are thus (1) increasing fission product inventory and (2) increasing orbital lifetime. Phrased differently, as fission products build up, more time is required for the fission products to naturally convert back into non-radioactive isotopes. Simultaneously, as fission products build up, orbital lifetime increases, providing more time for the fission products to naturally convert back into non-radioactive isotopes. Operational constraints required to ensure safety can thus be quantified.
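The saturation behaviour described above follows from the standard buildup relation for a single isotope produced at a constant rate, N(t) = (P/lambda)(1 - exp(-lambda t)). The short Python sketch below, with invented production and half-life values, shows the inventory approaching its equilibrium maximum after several half-lives.

```python
import numpy as np

# Buildup of one fission-product isotope under constant production:
# the inventory approaches its maximum P / lambda after several half-lives.
half_life_days = 8.0                      # illustrative half-life
lam = np.log(2) / half_life_days          # decay constant, 1/day
production_rate = 1.0e15                  # atoms produced per day (made up)

for n_half_lives in (1, 2, 5, 10):
    t = n_half_lives * half_life_days
    inventory = production_rate / lam * (1.0 - np.exp(-lam * t))
    # Printed as a fraction of the equilibrium inventory: 0.5, 0.75, 0.97, 0.999
    print(n_half_lives, "half-lives:", inventory / (production_rate / lam))
```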
Potential operating orbits for fission electric propulsion systems driven by the SAFE-400
NASA Astrophysics Data System (ADS)
Houts, Mike; Kos, Larry; Poston, David
2002-01-01
Safety must be ensured during all phases of space fission system design, development, fabrication, launch, operation, and shutdown. One potential space fission system application is fission electric propulsion (FEP), in which fission energy is converted into electricity and used to power high efficiency (Isp>3000s) electric thrusters. For these types of systems it is important to determine which operational scenarios ensure safety while allowing maximum mission performance and flexibility. Space fission systems are essentially non-radioactive at launch, prior to extended operation at high power. Once high power operation begins, system radiological inventory steadily increases as fission products build up. For a given fission product isotope, the maximum radiological inventory is typically achieved once the system has operated for a length of time equivalent to several half-lives. After that time, the isotope decays at the same rate it is produced, and no further inventory builds in. For an FEP mission beginning in Earth orbit, altitude and orbital lifetime increase as the propulsion system operates. Two simultaneous effects of fission propulsion system operation are thus (1) increasing fission product inventory and (2) increasing orbital lifetime. Phrased differently, as fission products build up, more time is required for the fission products to naturally convert back into non-radioactive isotopes. Simultaneously, as fission products build up, orbital lifetime increases, providing more time for the fission products to naturally convert back into non-radioactive isotopes. Operational constraints required to ensure safety can thus be quantified.
Statistical inferences with jointly type-II censored samples from two Pareto distributions
NASA Astrophysics Data System (ADS)
Abu-Zinadah, Hanaa H.
2017-08-01
In several industries the product comes from more than one production line, and comparative life tests are required. This requires sampling from the different production lines, which gives rise to a joint censoring scheme. In this article we consider the Pareto lifetime distribution with a jointly type-II censoring scheme. The maximum likelihood estimators (MLE) and the corresponding approximate confidence intervals, as well as the bootstrap confidence intervals of the model parameters, are obtained. Bayesian point estimates and credible intervals of the model parameters are also presented. A lifetime data set is analyzed for illustrative purposes. Monte Carlo results from simulation studies are presented to assess the performance of our proposed method.
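For orientation, the sketch below illustrates the simplest related case: the closed-form maximum likelihood estimator of the Pareto shape parameter for a single type-II censored sample with known scale. The joint two-sample censoring scheme analysed in the article is more involved, and the function and parameter names here are illustrative only.

```python
import numpy as np

def pareto_shape_mle_type2(x_observed, n_total, scale):
    """MLE of the Pareto shape parameter alpha from a type-II censored sample.

    x_observed : the r smallest order statistics actually observed
    n_total    : total number of units put on test
    scale      : known Pareto scale (minimum) parameter beta
    """
    x = np.sort(np.asarray(x_observed, dtype=float))
    r = x.size
    # Total "log exposure": observed log-ratios plus the (n - r) censored units,
    # all of which survived past the largest observed failure x[r-1].
    total = np.sum(np.log(x / scale)) + (n_total - r) * np.log(x[-1] / scale)
    return r / total

# Illustrative check against simulated data (hypothetical values).
rng = np.random.default_rng(1)
alpha_true, beta, n = 2.5, 1.0, 50
sample = beta * (1.0 - rng.random(n)) ** (-1.0 / alpha_true)   # Pareto draws
observed = np.sort(sample)[:30]                                # censor after 30 failures
print(pareto_shape_mle_type2(observed, n, beta))               # should be near 2.5
```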
Thromboelastography Values in Hispaniolan Amazon Parrots ( Amazona ventralis ): A Pilot Study.
Keller, Krista A; Sanchez-Migallon Guzman, David; Acierno, Mark J; Beaufrère, Hugues; Sinclair, Kristin M; Owens, Sean D; Paul-Murphy, Joanne; Tully, Thomas N
2015-09-01
Thromboelastography (TEG) provides a global assessment of coagulation, including the rate of clot initiation, clot kinetics, achievement of maximum clot strength, and fibrinolysis. Thromboelastography (TEG) is used with increasing frequency in the field of veterinary medicine, although its usefulness in avian species has not been adequately explored. The purpose of this preliminary study was to assess the applicability of TEG in psittacine birds. Kaolin-activated TEG was used to analyze citrated whole blood collected routinely from 8 healthy adult Hispaniolan Amazon parrots ( Amazona ventralis ). The minimum and maximum TEG values obtained included time to clot initiation (2.6-15 minutes), clot formation time (4.3-20.8 minutes), α angle (12.7°-47.9°), maximum amplitude of clot strength (26.3-46.2 mm), and percentage of lysis 30 minutes after achievement of maximum amplitude (0%-5.3%). The TEG values demonstrated comparative hypocoagulability relative to published values in canine and feline species. Differences may be explained by either the in vitro temperature at which TEG is standardly performed or the method of activation used in this study. Although TEG may have significant advantages over traditional coagulation tests, including lack of need for species-specific reagents, further evaluation is required in a variety of avian species and while exploring various TEG methodologies before this technology can be recommended for use in clinical cases.
Optimization methods and silicon solar cell numerical models
NASA Technical Reports Server (NTRS)
Girardini, K.
1986-01-01
The goal of this project is the development of an optimization algorithm for use with a solar cell model. It is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm has been developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAPID). SCAPID uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the operation of a solar cell. A major obstacle is that the numerical methods used in SCAPID require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the values associated with the maximum efficiency. This problem has been alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution. Adapting SCAPID so that it could be called iteratively by the optimization code provided another means of reducing the CPU time required to complete an optimization. Instead of calculating the entire I-V curve, as is usually done in SCAPID, only the efficiency is calculated (maximum power voltage and current) and the solution from previous calculations is used to initiate the next solution.
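The workflow described above, an optimizer repeatedly calling an expensive simulator with each new solve initialized from the previous solution, can be sketched as follows. The simulator class and its analytic surrogate are hypothetical stand-ins, not the SCAPID interface.

```python
import numpy as np
from scipy.optimize import minimize

class CellSimulator:
    """Hypothetical stand-in for an expensive device simulation such as SCAPID.
    A real solver would integrate the semiconductor equations; a cheap analytic
    surrogate keeps the sketch runnable. The previous converged state is stored
    so each new call could be warm-started, mimicking the CPU-time saving
    described above."""

    def __init__(self):
        self.previous_state = None

    def efficiency(self, design):
        log_doping, junction_depth_um, thickness_um = design
        # Surrogate "efficiency" with a single interior maximum (illustrative only).
        eff = (20.0
               - 2.0 * (log_doping - 17.0) ** 2
               - 5.0 * (junction_depth_um - 0.3) ** 2
               - 0.001 * (thickness_um - 200.0) ** 2)
        self.previous_state = np.array(design)  # a real solver would keep the full solution
        return eff

sim = CellSimulator()
x0 = np.array([16.0, 0.5, 250.0])   # log10 doping, junction depth (um), thickness (um)
result = minimize(lambda x: -sim.efficiency(x), x0, method="Nelder-Mead")
print("optimal design:", result.x, "predicted efficiency:", -result.fun)
```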
A three-dimensional semianalytical model of hydraulic fracture growth through weak barriers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luiskutty, C.T.; Tomutes, L.; Palmer, I.D.
1989-08-01
The goal of this research was to develop a fracture model for length/height ratio ≤ 4 that includes 2D flow (and a line source corresponding to the perforated interval) but makes approximations that allow a semianalytical solution, with large computer-time savings over the fully numerical model. The height, maximum width, and pressure at the wellbore in this semianalytical model are calculated and compared with the results of the fully three-dimensional (3D) model. There is reasonable agreement in all parameters, the maximum discrepancy being 24%. Comparisons of fracture volume and leakoff volume also show reasonable agreement in volume and fluid efficiencies. The values of length/height ratio, in the four cases in which agreement is found, vary from 1.5 to 3.7. The model offers a useful first-order (or screening) calculation of fracture-height growth through weak barriers (e.g., low stress contrasts). When coupled with the model developed for highly elongated fractures of length/height ratio ≥ 4, which are also found to be in basic agreement with the fully numerical model, this new model provides the capability for approximating fracture-height growth through barriers for vertical fracture shapes that vary from penny to highly elongated. The computer time required is estimated to be less than the time required for the fully numerical model by a factor of 10 or more.
Convolutional code performance in planetary entry channels
NASA Technical Reports Server (NTRS)
Modestino, J. W.
1974-01-01
The planetary entry channel is modeled for communication purposes to represent turbulent atmospheric scattering effects. The performance of short and long constraint length convolutional codes is investigated in conjunction with coherent BPSK modulation and Viterbi maximum likelihood decoding. Algorithms for sequential decoding are studied in terms of computation and/or storage requirements as a function of the fading channel parameters. The performance of the coded coherent BPSK system is compared with the coded incoherent MFSK system. Results indicate that: some degree of interleaving is required to combat time-correlated fading of the channel; only modest amounts of interleaving are required to approach the performance of the memoryless channel; additional propagation results are required on the phase perturbation process; and the incoherent MFSK system is superior when phase tracking errors are considered.
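As a reminder of the decoding algorithm referenced above, here is a generic hard-decision Viterbi decoder for a short constraint-length (K = 3, rate 1/2) convolutional code in Python. It is a textbook sketch, not the paper's coherent BPSK soft-decision setup, and the generator polynomials are simply the common (7, 5) pair.

```python
import numpy as np

G = (0b111, 0b101)     # generator polynomials (7, 5) octal, constraint length K = 3
N_STATES = 4           # 2**(K-1) encoder states

def conv_encode(bits):
    """Rate-1/2 convolutional encoder starting from the all-zero state."""
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state                       # newest bit in the high position
        out += [bin(reg & g).count("1") % 2 for g in G]
        state = reg >> 1
    return out

def viterbi_decode(received, n_bits):
    """Hard-decision Viterbi decoding (minimum Hamming-distance path)."""
    INF = 10 ** 9
    metrics = [0] + [INF] * (N_STATES - 1)
    history = []
    for k in range(n_bits):
        r = received[2 * k:2 * k + 2]
        new_metrics, decisions = [INF] * N_STATES, [None] * N_STATES
        for s in range(N_STATES):
            if metrics[s] >= INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | s
                expected = [bin(reg & g).count("1") % 2 for g in G]
                metric = metrics[s] + sum(x != y for x, y in zip(r, expected))
                nxt = reg >> 1
                if metric < new_metrics[nxt]:
                    new_metrics[nxt], decisions[nxt] = metric, (s, b)
        metrics = new_metrics
        history.append(decisions)
    state, bits = int(np.argmin(metrics)), []        # no tail bits, so pick the best end state
    for decisions in reversed(history):
        prev_state, b = decisions[state]
        bits.append(b)
        state = prev_state
    return bits[::-1]

message = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
coded = conv_encode(message)
coded[3] ^= 1                                        # flip one channel bit
print(viterbi_decode(coded, len(message)) == message)
```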
Validation of an "Intelligent Mouthguard" Single Event Head Impact Dosimeter.
Bartsch, Adam; Samorezov, Sergey; Benzel, Edward; Miele, Vincent; Brett, Daniel
2014-11-01
Since the work of Colonel John Paul Stapp, MD, in 1975, scientists have sought to measure live human head impacts with accuracy and precision, but no instrument exists to accurately and precisely quantify single head impact events. Our goal is to develop a practical single-event head impact dosimeter known as the "Intelligent Mouthguard" and quantify its performance on the benchtop, in vitro and in vivo. In the Intelligent Mouthguard hardware, limited gyroscope bandwidth requires an algorithm-based correction as a function of impact duration. After we apply the gyroscope correction algorithm, Intelligent Mouthguard results at the time of CG linear acceleration peak correlate to the Reference Hybrid III within our tested range of pulse durations and impact acceleration profiles in American football and Boxing in vitro tests: American football, IMG=1.00REF-1.1g, R2=0.99; maximum time-of-peak XYZ component imprecision 3.6 g and 370 rad/s2; maximum time-of-peak azimuth and elevation imprecision 4.8° and 2.9°; maximum average XYZ component temporal imprecision 3.3 g and 390 rad/s2. Boxing, IMG=1.00REF-0.9 g, R2=0.99, R2=0.98; maximum time-of-peak XYZ component imprecision 3.9 g and 390 rad/s2; maximum time-of-peak azimuth and elevation imprecision 2.9° and 2.1°; average XYZ component temporal imprecision 4.0 g and 440 rad/s2. In vivo Intelligent Mouthguard true-positive head impacts from American football players and amateur boxers have temporal characteristics (first harmonic frequency from 35 Hz to 79 Hz) within our tested benchtop (first harmonic frequency < 180 Hz) and in vitro (first harmonic frequency < 100 Hz) ranges. Our conclusions apply only to situations where the rigid body assumption is valid, sensor-skull coupling is maintained and the ranges of tested parameters and harmonics fall within the boundaries of harmonics validated in vitro. For these situations, the Intelligent Mouthguard qualifies as a single-event dosimeter in American football and Boxing.
Evaluation of an automatic segmentation algorithm for definition of head and neck organs at risk.
Thomson, David; Boylan, Chris; Liptrot, Tom; Aitkenhead, Adam; Lee, Lip; Yap, Beng; Sykes, Andrew; Rowbottom, Carl; Slevin, Nicholas
2014-08-03
The accurate definition of organs at risk (OARs) is required to fully exploit the benefits of intensity-modulated radiotherapy (IMRT) for head and neck cancer. However, manual delineation is time-consuming and there is considerable inter-observer variability. This is pertinent as function-sparing and adaptive IMRT have increased the number and frequency of delineation of OARs. We evaluated the accuracy and potential time-saving of Smart Probabilistic Image Contouring Engine (SPICE) automatic segmentation to define OARs for salivary-, swallowing- and cochlea-sparing IMRT. Five clinicians recorded the time to delineate five organs at risk (parotid glands, submandibular glands, larynx, pharyngeal constrictor muscles and cochleae) for each of 10 CT scans. SPICE was then used to define these structures. The acceptability of SPICE contours was initially determined by visual inspection and the total time to modify them was recorded per scan. The Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm created a reference standard from all clinician contours. Clinician, SPICE and modified contours were compared against STAPLE by the Dice similarity coefficient (DSC) and mean/maximum distance to agreement (DTA). For all investigated structures, SPICE contours were less accurate than manual contours. However, for parotid/submandibular glands they were acceptable (median DSC: 0.79/0.80; mean, maximum DTA: 1.5 mm, 14.8 mm/0.6 mm, 5.7 mm). Modified SPICE contours were also less accurate than manual contours. The utilisation of SPICE did not result in time-saving or improved efficiency. Improvements in the accuracy of automatic segmentation for head and neck OARs would be worthwhile and are required before its routine clinical implementation.
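A minimal sketch of the two agreement metrics used above (Dice similarity coefficient on binary masks, and mean/maximum distance to agreement between structure point sets) is given below; the helper names and the toy masks are ours, not part of the study.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks (1 = perfect overlap)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

def distance_to_agreement(points_a, points_b):
    """Mean and maximum distance from each point in A to its nearest point in B."""
    diffs = points_a[:, None, :] - points_b[None, :, :]
    nearest = np.sqrt((diffs ** 2).sum(axis=2)).min(axis=1)
    return nearest.mean(), nearest.max()

# Toy example: two slightly shifted square "parotid" masks on a 2D grid,
# using all filled voxels as the point sets for simplicity.
auto = np.zeros((100, 100), dtype=bool)
manual = np.zeros((100, 100), dtype=bool)
auto[30:60, 30:60] = True
manual[32:62, 31:61] = True
print("DSC:", dice_coefficient(auto, manual))
print("mean/max DTA:", distance_to_agreement(np.argwhere(auto).astype(float),
                                             np.argwhere(manual).astype(float)))
```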
[Development of Micro-Spectrometer with a Function of Timely Temperature Compensation].
Bao, Jian-guang; Liu, Zheng-kun; Chen, Huo-yao; Lin, Ji-ping; Fu, Shao-jun
2015-05-01
High-low temperature shock causes temperature drift in a micro-spectrometer used to demodulate a varied line-space (VLS) grating position sensor on aircraft. We successfully built a micro-spectrometer for the VLS grating position sensor that maintains stable output under a temperature-shock environment. To devise a real-time temperature compensation scheme, the effects of temperature change on the micro-spectrometer are analyzed and the traditional crossed Czerny-Turner (C-T) optical structure is optimized. Both optical structures are analyzed with the optical design software ZEMAX, which shows that, compared with the traditional crossed C-T structure, the new one achieves not only a smaller spectrum drift but also a drift with better linearity. Based on the new optical structure, a scheme using a reference wavelength to accomplish real-time temperature compensation was proposed, and a micro-fiber spectrometer was successfully manufactured with a volume of 80 mm x 70 mm x 70 mm, an integration time of 8 to 1000 ms and a full width at half maximum (FWHM) of 2 nm. Experiments show that the new spectrometer meets the design requirements. Over a temperature range of nearly 60 °C, the standard error of wavelength of the new spectrometer is smaller than 0.1 nm, and the maximum wavelength error is 0.14 nm, much smaller than the required 0.3 nm. The innovations of this paper are the real-time temperature compensation scheme, the new crossed C-T optical structure and the micro-fiber spectrometer based on it.
NASA Astrophysics Data System (ADS)
Rotta, Davide; Sebastiano, Fabio; Charbon, Edoardo; Prati, Enrico
2017-06-01
Even the quantum simulation of an apparently simple molecule such as Fe2S2 requires a considerable number of qubits of the order of 106, while more complex molecules such as alanine (C3H7NO2) require about a hundred times more. In order to assess such a multimillion scale of identical qubits and control lines, the silicon platform seems to be one of the most indicated routes as it naturally provides, together with qubit functionalities, the capability of nanometric, serial, and industrial-quality fabrication. The scaling trend of microelectronic devices predicting that computing power would double every 2 years, known as Moore's law, according to the new slope set after the 32-nm node of 2009, suggests that the technology roadmap will achieve the 3-nm manufacturability limit proposed by Kelly around 2020. Today, circuital quantum information processing architectures are predicted to take advantage from the scalability ensured by silicon technology. However, the maximum amount of quantum information per unit surface that can be stored in silicon-based qubits and the consequent space constraints on qubit operations have never been addressed so far. This represents one of the key parameters toward the implementation of quantum error correction for fault-tolerant quantum information processing and its dependence on the features of the technology node. The maximum quantum information per unit surface virtually storable and controllable in the compact exchange-only silicon double quantum dot qubit architecture is expressed as a function of the complementary metal-oxide-semiconductor technology node, so the size scale optimizing both physical qubit operation time and quantum error correction requirements is assessed by reviewing the physical and technological constraints. According to the requirements imposed by the quantum error correction method and the constraints given by the typical strength of the exchange coupling, we determine the workable operation frequency range of a silicon complementary metal-oxide-semiconductor quantum processor to be within 1 and 100 GHz. Such constraint limits the feasibility of fault-tolerant quantum information processing with complementary metal-oxide-semiconductor technology only to the most advanced nodes. The compatibility with classical complementary metal-oxide-semiconductor control circuitry is discussed, focusing on the cryogenic complementary metal-oxide-semiconductor operation required to bring the classical controller as close as possible to the quantum processor and to enable interfacing thousands of qubits on the same chip via time-division, frequency-division, and space-division multiplexing. The operation time range prospected for cryogenic control electronics is found to be compatible with the operation time expected for qubits. By combining the forecast of the development of scaled technology nodes with operation time and classical circuitry constraints, we derive a maximum quantum information density for logical qubits of 2.8 and 4 Mqb/cm2 for the 10 and 7-nm technology nodes, respectively, for the Steane code. The density is one and two orders of magnitude less for surface codes and for concatenated codes, respectively. Such values provide a benchmark for the development of fault-tolerant quantum algorithms by circuital quantum information based on silicon platforms and a guideline for other technologies in general.
Fast automated analysis of strong gravitational lenses with convolutional neural networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hezaveh, Yashar D.; Levasseur, Laurence Perreault; Marshall, Philip J.
Quantifying image distortions caused by strong gravitational lensing—the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures—and estimating the corresponding matter distribution of these structures (the ‘gravitational lens’) has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. We report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the ‘singular isothermal ellipsoid’ density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.
Fast automated analysis of strong gravitational lenses with convolutional neural networks
Hezaveh, Yashar D.; Levasseur, Laurence Perreault; Marshall, Philip J.
2017-08-30
Quantifying image distortions caused by strong gravitational lensing—the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures—and estimating the corresponding matter distribution of these structures (the ‘gravitational lens’) has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. We report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the ‘singular isothermal ellipsoid’ density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.
Fast automated analysis of strong gravitational lenses with convolutional neural networks
NASA Astrophysics Data System (ADS)
Hezaveh, Yashar D.; Levasseur, Laurence Perreault; Marshall, Philip J.
2017-08-01
Quantifying image distortions caused by strong gravitational lensing—the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures—and estimating the corresponding matter distribution of these structures (the ‘gravitational lens’) has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. Here we report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the ‘singular isothermal ellipsoid’ density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.
Maximum demand charge rates for commercial and industrial electricity tariffs in the United States
DOE Office of Scientific and Technical Information (OSTI.GOV)
McLaren, Joyce; Gagnon, Pieter; Zimny-Schmitt, Daniel
NREL has assembled a list of U.S. retail electricity tariffs and their associated demand charge rates for the Commercial and Industrial sectors. The data were obtained from the Utility Rate Database. Keep the following information in mind when interpreting the data: (1) These data were interpreted and transcribed manually from utility tariff sheets, which are often complex. It is a certainty that these data contain errors, and therefore they should only be used as a reference. Actual utility tariff sheets should be consulted if an action requires this type of data. (2) These data only contain tariffs that were entered into the Utility Rate Database. Since not all tariffs are designed in a format that can be entered into the Database, this list is incomplete - it does not contain all tariffs in the United States. (3) These data may have changed since this list was developed. (4) Many of the underlying tariffs have additional restrictions or requirements that are not represented here. For example, they may only be available to the agricultural sector or closed to new customers. (5) If there are multiple demand charge elements in a given tariff, the maximum demand charge is the sum of each of the elements at any point in time. Where tiers were present, the highest rate tier was assumed. The value is a maximum for the year, and may be significantly different from demand charge rates at other times in the year. Utility Rate Database: https://openei.org/wiki/Utility_Rate_Database
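The convention in note (5) can be illustrated with a short calculation: sum whichever demand-charge elements apply at each point in the year, then report the maximum of that sum. All rates and seasons below are made up.

```python
# Illustrative calculation of the reported maximum demand charge for one tariff,
# following note (5) above. Values and element names are invented.
seasonal_elements = {
    # element name: list of ($/kW rate, month) pairs; highest tier assumed
    "facilities": [(5.0, m) for m in range(1, 13)],    # year-round
    "summer_peak": [(12.0, m) for m in (6, 7, 8, 9)],  # June-September only
    "winter_peak": [(7.0, m) for m in (12, 1, 2)],     # December-February only
}

monthly_total = {m: 0.0 for m in range(1, 13)}
for element, schedule in seasonal_elements.items():
    for rate, month in schedule:
        monthly_total[month] += rate

max_demand_charge = max(monthly_total.values())        # $/kW
print(monthly_total)
print("maximum demand charge rate:", max_demand_charge)  # 17.0, reached June-September
```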
Geology and hydrology of the Elk River, Minnesota, nuclear-reactor site
Norvitch, Ralph F.; Schneider, Robert; Godfrey, Richard G.
1963-01-01
The Elk River, Minn., nuclear-reactor site is on the east bluff of the Mississippi River about 35 miles northwest of Minneapolis and St. Paul. The area is underlain by about 70 to 180 feet of glacial drift, including at the top as much as 120 feet of outwash deposits (valley train) of the glacial Mississippi River. The underlying Cambrian bedrock consists of marine sedimentary formations including artesian sandstone aquifers. A hypothetically spilled liquid at the reactor site could follow one or both of two courses, thus: (1) It could flow over the land surface and through an artificial drainage system to the river in a matter of minutes; (2) part or nearly all of it could seep downward to the water table and then move laterally to the river. The time required might range from a few weeks to a year, or perhaps more. The St. Paul and Minneapolis water-supply intakes, 21 and 25 miles downstream, respectively, are the most critical points to be considered in the event of an accidental spill. Based on streamflow and velocity data for the Mississippi River near Anoka, the time required for the maximum concentration of a contaminant to travel from the reactor site to the St. Paul intake was computed to be about 8 hours, at the median annual maximum daily discharge. For this discharge, the maximum concentration at the intake would be about 0.0026 microcurie per cubic foot for the release of 1 curie of activity into the river near the reactor site.
ERIC Educational Resources Information Center
Nemarich, Samuel P.; Velleman, Ruth A.
Designed to suggest solutions to problems of curricula and instructional techniques for physically disabled children, the text considers the nature of the child and discusses these aspects of curriculum and methods: definitions and objectives; teachers and administrators; time requirements and enrichment; grouping; reading instruction; testing,…
29 CFR 4.172 - Meeting requirements for particular fringe benefits-in general.
Code of Federal Regulations, 2010 CFR
2010-07-01
... working on that contract up to a maximum of 40 hours per week and 2,080 (i.e., 52 weeks of 40 hours each) per year, as these are the typical number of nonovertime hours of work in a week, and in a year... benefit in a stated amount per hour, a contractor employing employees part of the time on contract work...
Code of Federal Regulations, 2010 CFR
2010-10-01
... explain the effect of the law in commonly-encountered situations. The Act governs the maximum work hours... transportation are viewed as personal commuting and, thus, off-duty time. A release period is considered off-duty... offenses, ability to pay, effect on ability to continue to do business and such other matters as justice...
Code of Federal Regulations, 2011 CFR
2011-10-01
... explain the effect of the law in commonly-encountered situations. The Act governs the maximum work hours... transportation are viewed as personal commuting and, thus, off-duty time. A release period is considered off-duty... offenses, ability to pay, effect on ability to continue to do business and such other matters as justice...
NASA Astrophysics Data System (ADS)
Handhika, T.; Bustamam, A.; Ernastuti, Kerami, D.
2017-07-01
Multi-thread programming using OpenMP on a shared-memory architecture with hyperthreading technology allows a resource to be accessed by multiple processors simultaneously. Each processor can execute more than one thread over a certain period of time. However, the speedup depends on the ability of the processor to execute threads in limited quantities, especially for sequential algorithms containing a nested loop in which the number of outer loop iterations is greater than the maximum number of threads that can be executed by a processor. The thread distribution technique found previously could only be applied by high-level programmers. This paper presents a parallelization procedure for low-level programmers dealing with 2-level nested loop problems in which the maximum number of threads that can be executed by a processor is smaller than the number of outer loop iterations. Data preprocessing related to the number of outer loop and inner loop iterations, the computational time required to execute each iteration, and the maximum number of threads that can be executed by a processor is used as a strategy to determine which parallel region will produce optimal speedup.
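The paper targets OpenMP in a compiled language; purely to illustrate the bookkeeping, the Python sketch below partitions an outer loop with more iterations than available threads into contiguous chunks and hands each chunk's 2-level nested loop to a worker. The chunking rule is ours, and Python threads are used only to show the distribution, not to claim a real speedup.

```python
from concurrent.futures import ThreadPoolExecutor
import math

def partition(n_iterations, n_threads):
    """Split outer-loop iterations into near-equal contiguous chunks, one per thread."""
    chunk = math.ceil(n_iterations / n_threads)
    return [range(start, min(start + chunk, n_iterations))
            for start in range(0, n_iterations, chunk)]

def work(i, j):
    return (i * 31 + j) % 7              # stand-in for the real per-iteration computation

def run_chunk(outer_range, n_inner):
    # Each worker executes its share of the 2-level nested loop sequentially.
    return sum(work(i, j) for i in outer_range for j in range(n_inner))

n_outer, n_inner, max_threads = 1000, 200, 8   # more outer iterations than threads
chunks = partition(n_outer, max_threads)
with ThreadPoolExecutor(max_workers=max_threads) as pool:
    partial_sums = list(pool.map(run_chunk, chunks, [n_inner] * len(chunks)))
print(sum(partial_sums))
```

In OpenMP itself the analogous distribution is usually expressed with a schedule clause (or a collapse directive) on the outer loop rather than explicit chunking.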
Assessment of Heat Hazard during the Polymerization of Selected Light-Sensitive Dental Materials.
Janeczek, Maciej; Herman, Katarzyna; Fita, Katarzyna; Dudek, Krzysztof; Kowalczyk-Zając, Małgorzata; Czajczyńska-Waszkiewicz, Agnieszka; Piesiak-Pańczyszyn, Dagmara; Kosior, Piotr; Dobrzyński, Maciej
2016-01-01
Introduction. Polymerization of light-cured dental materials used for restoration of hard tooth tissue may lead to an increase in temperature that may have negative consequences for pulp vitality. Aim. The aim of this study was to determine the maximum temperatures reached during the polymerization of selected dental materials, as well as the time needed for samples of sizes similar to those used in clinical practice to reach these temperatures. Materials and Methods. The study involved four composite restorative materials, one lining material and a dentine bonding agent. The polymerization was conducted with the use of a diode light-curing unit. The measurements of the external surface temperature of the samples were carried out using the Thermovision®550 thermal camera. Results. The examined materials differed significantly in the maximum temperature values they reached, as well as the time required to reach those temperatures. A statistically significant positive correlation between the maximum temperature and the sample weight was observed. Conclusions. In clinical practice, it is crucial to bear in mind the risk of thermal damage involved in the application of light-cured materials. It can be reduced by using thin increments of composite materials.
NASA Astrophysics Data System (ADS)
Korelin, Ivan A.; Porshnev, Sergey V.
2018-05-01
A model of a non-stationary queuing system (NQS) is described. The input of this model receives a flow of requests with input rate λ = λdet(t) + λrnd(t), where λdet(t) is a deterministic function of time and λrnd(t) is a random function. The parameters of the functions λdet(t) and λrnd(t) were identified on the basis of statistical information on visitor flows collected from various Russian football stadiums. Statistical modeling of the NQS is carried out and average dependences are obtained for the length of the queue of requests waiting for service, the average waiting time for service, and the number of visitors admitted to the stadium over time. It is shown that these dependencies can be characterized by the following parameters: the number of visitors who entered by the time of the match; the time required to service all incoming visitors; the maximum value; and the argument value at which the studied dependence reaches its maximum. The dependences of these parameters on the energy ratio of the deterministic and random components of the input rate are investigated.
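A time-stepped simulation in the spirit of the model above might look like the following sketch, where the arrival rate is the sum of a deterministic surge and a clipped random component and the outputs are the admitted visitors and the queue-length maximum; every parameter value is invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# One-minute time steps over the 3 hours before kickoff (illustrative horizon).
minutes = np.arange(180)
lam_det = 40.0 * np.exp(-0.5 * ((minutes - 150) / 30.0) ** 2)   # deterministic surge
lam_rnd = rng.normal(0.0, 4.0, minutes.size).clip(min=0.0)      # random component
arrival_rate = lam_det + lam_rnd                                 # visitors per minute

n_gates, service_rate = 10, 3.0       # each gate admits ~3 visitors per minute
queue, admitted, queue_history = 0.0, 0.0, []
for lam in arrival_rate:
    queue += rng.poisson(lam)                     # new arrivals this minute
    served = min(queue, n_gates * service_rate)   # service capacity this minute
    queue -= served
    admitted += served
    queue_history.append(queue)

print("visitors admitted:", admitted)
print("maximum queue length:", max(queue_history))
print("minute of maximum queue:", int(np.argmax(queue_history)))
```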
NASA Astrophysics Data System (ADS)
Bucha, Blažej; Janák, Juraj
2013-07-01
We present a novel graphical user interface program GrafLab (GRAvity Field LABoratory) for spherical harmonic synthesis (SHS) created in MATLAB®. This program allows comfortable computation of 38 various functionals of the geopotential up to ultra-high degrees and orders of spherical harmonic expansion. For the most difficult part of the SHS, namely the evaluation of the fully normalized associated Legendre functions (fnALFs), we used three different approaches according to the required maximum degree: (i) the standard forward column method (up to maximum degree 1800, in some cases up to degree 2190); (ii) the modified forward column method combined with Horner's scheme (up to maximum degree 2700); (iii) the extended-range arithmetic (up to an arbitrary maximum degree). For the maximum degree 2190, the SHS with fnALFs evaluated using the extended-range arithmetic approach takes only approximately 2-3 times longer than its standard arithmetic counterpart, i.e. the standard forward column method. In GrafLab, the functionals of the geopotential can be evaluated on a regular grid or point-wise, while the input coordinates can either be read from a data file or entered manually. For the computation on a regular grid we decided to apply the lumped coefficients approach due to the significant time-efficiency of this method. Furthermore, if a full variance-covariance matrix of spherical harmonic coefficients is available, it is possible to compute the commission errors of the functionals. When computing on a regular grid, the output functionals or their commission errors may be depicted on a map using an automatically selected cartographic projection.
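For reference, a plain-Python sketch of variant (i), the standard forward column recursion for fully normalized associated Legendre functions, is shown below with a spot check against low-degree closed forms. The normalization follows the usual geodetic (fully normalized) convention and is assumed here, not taken from GrafLab.

```python
import numpy as np

def fnalf_forward_column(nmax, theta):
    """Fully normalized associated Legendre functions Pbar(n, m) of cos(theta)
    up to degree nmax, built column by column with the standard forward column
    recursion (stable in double precision only up to degree ~1800-2190, as noted
    above)."""
    t, u = np.cos(theta), np.sin(theta)
    p = np.zeros((nmax + 1, nmax + 1))
    p[0, 0] = 1.0
    if nmax >= 1:
        p[1, 1] = np.sqrt(3.0) * u
    for m in range(2, nmax + 1):                 # sectorial seed of each column
        p[m, m] = u * np.sqrt((2.0 * m + 1.0) / (2.0 * m)) * p[m - 1, m - 1]
    for m in range(0, nmax + 1):                 # march up each column in degree
        for n in range(m + 1, nmax + 1):
            a = np.sqrt((2.0 * n - 1.0) * (2.0 * n + 1.0) / ((n - m) * (n + m)))
            b = 0.0 if n == m + 1 else np.sqrt(
                (2.0 * n + 1.0) * (n + m - 1.0) * (n - m - 1.0)
                / ((n - m) * (n + m) * (2.0 * n - 3.0)))
            p[n, m] = a * t * p[n - 1, m] - b * p[n - 2, m]
    return p

# Spot check: Pbar(1,0) = sqrt(3) cos(theta), Pbar(2,0) = sqrt(5) (3 cos^2(theta) - 1) / 2.
theta = np.radians(37.0)
p = fnalf_forward_column(5, theta)
print(p[1, 0], np.sqrt(3) * np.cos(theta))
print(p[2, 0], np.sqrt(5) * (3 * np.cos(theta) ** 2 - 1) / 2)
```

The higher-degree variants mentioned above (Horner's scheme, extended-range arithmetic) address the underflow of the sectorial seeds near the poles at very high degree, which this plain recursion does not handle.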
Wear of Spur Gears Having a Dithering Motion and Lubricated with a Perfluorinated Polyether Grease
NASA Technical Reports Server (NTRS)
Krantz, Timothy; Oswald, Fred; Handschuh, Robert
2007-01-01
Gear contact surface wear is one of the important failure modes for gear systems. Dedicated experiments are required to enable precise evaluations of gear wear for a particular application. The application of interest for this study required evaluation of wear of gears lubricated with a grade 2 perfluorinated polyether grease and having a dithering (rotation reversal) motion. Experiments were conducted using spur gears made from AISI 9310 steel. Wear was measured using a profilometer at test intervals encompassing 10,000 to 80,000 cycles of dithering motion. The test load level was 1.1 GPa maximum Hertz contact stress at the pitch-line. The trend of total wear as a function of test cycles was linear, and the wear depth rate was approximately 1.2 nm maximum wear depth per gear dithering cycle. The observed wear rate was about 600 times greater than the wear rate for the same gears operated at high speed and lubricated with oil.
Resource utilization model for the algorithm to architecture mapping model
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Patel, Rakesh R.
1993-01-01
The analytical model for resource utilization and the variable node time and conditional node model for the enhanced ATAMM model for a real-time data flow architecture are presented in this research. The Algorithm To Architecture Mapping Model, ATAMM, is a Petri net based graph theoretic model developed at Old Dominion University, and is capable of modeling the execution of large-grained algorithms on a real-time data flow architecture. Using the resource utilization model, the resource envelope may be obtained directly from a given graph and, consequently, the maximum number of required resources may be evaluated. The node timing diagram for one iteration period may be obtained using the analytical resource envelope. The variable node time model, which describes the change in resource requirement for the execution of an algorithm under node time variation, is useful to expand the applicability of the ATAMM model to heterogeneous architectures. The model also describes a method of detecting the presence of resource limited mode and its subsequent prevention. Graphs with conditional nodes are shown to be reduced to equivalent graphs with time varying nodes and, subsequently, may be analyzed using the variable node time model to determine resource requirements. Case studies are performed on three graphs for the illustration of applicability of the analytical theories.
Maslia, Morris L.; Aral, Mustafa M.; Ruckart, Perri Z.; Bove, Frank J.
2017-01-01
A U.S. government health agency conducted epidemiological studies to evaluate whether exposures to drinking water contaminated with volatile organic compounds (VOC) at U.S. Marine Corps Base Camp Lejeune, North Carolina, were associated with increased health risks to children and adults. These health studies required knowledge of contaminant concentrations in drinking water—at monthly intervals—delivered to family housing, barracks, and other facilities within the study area. Because concentration data were limited or unavailable during much of the period of contamination (1950s–1985), the historical reconstruction process was used to quantify estimates of monthly mean contaminant-specific concentrations. This paper integrates many efforts, reports, and papers into a synthesis of the overall approach to, and results from, a drinking-water historical reconstruction study. Results show that at the Tarawa Terrace water treatment plant (WTP) reconstructed (simulated) tetrachloroethylene (PCE) concentrations reached a maximum monthly average value of 183 micrograms per liter (μg/L) compared to a one-time maximum measured value of 215 μg/L and exceeded the U.S. Environmental Protection Agency’s current maximum contaminant level (MCL) of 5 μg/L during the period November 1957–February 1987. At the Hadnot Point WTP, reconstructed trichloroethylene (TCE) concentrations reached a maximum monthly average value of 783 μg/L compared to a one-time maximum measured value of 1400 μg/L during the period August 1953–December 1984. The Hadnot Point WTP also provided contaminated drinking water to the Holcomb Boulevard housing area continuously prior to June 1972, when the Holcomb Boulevard WTP came on line (maximum reconstructed TCE concentration of 32 μg/L) and intermittently during the period June 1972–February 1985 (maximum reconstructed TCE concentration of 66 μg/L). Applying the historical reconstruction process to quantify contaminant-specific monthly drinking-water concentrations is advantageous for epidemiological studies when compared to using the classical exposed versus unexposed approach.
Hydes, Theresa; Hansi, Navjyot; Trebble, Timothy M
2012-01-01
Upper gastrointestinal (UGI) endoscopy is a routine healthcare procedure with a defined patient pathway. The objective of this study was to redesign this pathway for unsedated patients using lean thinking transformation to focus on patient-derived value-adding steps, remove waste and create a more efficient process. This was to form the basis of a pathway template that was transferrable to other endoscopy units. A literature search of patient expectations for UGI endoscopy identified patient-derived value. A value stream map was created of the current pathway. The minimum and maximum time per step, bottlenecks and staff-staff interactions were recorded. This information was used for service transformation using lean thinking. A patient pathway template was created and implemented into a secondary unit. Questionnaire studies were performed to assess patient satisfaction. In the primary unit the patient pathway reduced from 19 to 11 steps with a reduction in the maximum lead time from 375 to 80 min following lean thinking transformation. The minimum value/lead time ratio increased from 24% to 49%. The patient pathway was redesigned as a 'cellular' system with minimised patient and staff travelling distances, waiting times, paperwork and handoffs. Nursing staff requirements reduced by 25%. Patient-prioritised aspects of care were emphasised with increased patient-endoscopist interaction time. The template was successfully introduced into a second unit with an overall positive patient satisfaction rating of 95%. Lean thinking transformation of the unsedated UGI endoscopy pathway results in reduced waiting times, reduced staffing requirements and improved patient flow and can form the basis of a pathway template which may be successfully transferred into alternative endoscopy environments with high levels of patient satisfaction.
A versatile pulse programmer for pulsed nuclear magnetic resonance spectroscopy.
NASA Technical Reports Server (NTRS)
Tarr, C. E.; Nickerson, M. A.
1972-01-01
A digital pulse programmer producing the standard pulse sequences required for pulsed nuclear magnetic resonance spectroscopy is described. In addition, a 'saturation burst' sequence, useful in the measurement of long relaxation times in solids, is provided. Both positive and negative 4 V trigger pulses are produced that are fully synchronous with a crystal-controlled time base, and the pulse programmer may be phase-locked with a maximum pulse jitter of 3 ns to the oscillator of a coherent pulse spectrometer. Medium speed TTL integrated circuits are used throughout.
NASA Technical Reports Server (NTRS)
Spruce, Joseph P.; Hargrove, William; Gasser, Jerry; Smoot, James; Kuper, Philip D.
2014-01-01
This presentation discusses MODIS NDVI change detection methods and products used in the ForWarn Early Warning System (EWS) for near real time (NRT) recognition and tracking of regionally evident forest disturbances throughout the conterminous US (CONUS). The latter has provided NRT forest change products to the forest health protection community since 2010, using temporally processed MODIS Aqua and Terra NDVI time series data to compute and post 6 different forest change products for CONUS every 8 days. Multiple change products are required to improve detectability and to more fully assess the nature of apparent disturbances. Each type of forest change product reports per-pixel percent change in NDVI for a given 24-day interval, comparing the current NDVI with a given historical baseline NDVI. EMODIS 7-day expedited MODIS MOD13 data are used to obtain the current and historical NDVIs. Historical NDVI data are processed with 1) the Time Series Product Tool (TSPT) and 2) the Phenological Parameters Estimation Tool (PPET) software. While all change products employ maximum value compositing (MVC) of NDVI, the designs of specific products differ primarily in terms of the historical baseline. The three main change products use either 1, 3, or all previous years of MVC NDVI as a baseline. Another product uses an Adaptive Length Compositing (ALC) version of MVC to derive an alternative current NDVI that is the freshest quality NDVI, as opposed to merely the MVC NDVI across a 24-day time frame. The ALC approach can improve detection speed by 8 to 16 days. ForWarn also includes 2 change products that improve detectability of forest disturbances in the presence of climatic fluctuations, especially in the spring and fall. One compares the current MVC NDVI to the zonal maximum under-the-curve NDVI per pheno-region cluster class, considering all previous years in the MODIS record. The other compares the current maximum NDVI to the mean of maximum NDVI for all previous MODIS years.
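The core compositing and differencing steps can be sketched in a few lines: take the per-pixel maximum NDVI over a window, then express the current composite as a percent change from a baseline composite. Array shapes and values below are toy examples, not ForWarn data.

```python
import numpy as np

def max_value_composite(ndvi_stack):
    """Per-pixel maximum NDVI over a stack of observations (e.g. a 24-day window)."""
    return np.nanmax(ndvi_stack, axis=0)

def percent_change(current_mvc, baseline_mvc):
    """Percent change in NDVI; negative values indicate apparent disturbance."""
    return 100.0 * (current_mvc - baseline_mvc) / baseline_mvc

# Toy 3-date x 4x4-pixel example: baseline from a prior year, current window with
# a simulated disturbance (NDVI drop) in the lower-right corner.
rng = np.random.default_rng(3)
baseline_stack = rng.uniform(0.6, 0.9, size=(3, 4, 4))
current_stack = rng.uniform(0.6, 0.9, size=(3, 4, 4))
current_stack[:, 2:, 2:] *= 0.5                    # disturbed pixels

change = percent_change(max_value_composite(current_stack),
                        max_value_composite(baseline_stack))
print(np.round(change, 1))
```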
Design and evaluation of a THz time domain imaging system using standard optical design software.
Brückner, Claudia; Pradarutti, Boris; Müller, Ralf; Riehemann, Stefan; Notni, Gunther; Tünnermann, Andreas
2008-09-20
A terahertz (THz) time domain imaging system is analyzed and optimized with standard optical design software (ZEMAX). Special requirements for the illumination optics and imaging optics are presented. In the optimized system, off-axis parabolic mirrors and lenses are combined. The system has a numerical aperture of 0.4 and is diffraction limited for field points up to 4 mm and wavelengths down to 750 μm. ZEONEX is used as the lens material. Higher aspherical coefficients are used for correction of spherical aberration and reduction of lens thickness. The lenses were manufactured by ultraprecision machining. For optimization of the system, ray tracing and wave-optical methods were combined. We show how the ZEMAX Gaussian beam analysis tool can be used to evaluate illumination optics. The resolution of the THz system was tested with a wire and a slit target, line gratings of different period, and a Siemens star. The behavior of the temporal line spread function can be modeled with the polychromatic coherent line spread function feature in ZEMAX. The spectral and temporal resolutions of the line gratings are compared with the respective modulation transfer function of ZEMAX. For maximum resolution, the system has to be diffraction limited down to the smallest wavelength of the spectrum of the THz pulse. Then, the resolution on time domain analysis of the pulse maximum can be estimated with the spectral resolution of the center of gravity wavelength. The system resolution near the optical axis on time domain analysis of the pulse maximum is 1 line pair/mm with an intensity contrast of 0.22. The Siemens star is used for estimation of the resolution of the whole system. An eight-channel electro-optic sampling system was used for detection. The resolution on time domain analysis of the pulse maximum of all eight channels could be determined with the Siemens star to be 0.7 line pairs/mm.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Maximum) Required for use in all highway diesel vehicles and engines. Recommended for use in all diesel vehicles and engines. (b) From June 1, 2010, through September 30, 2012, for pumps dispensing NR diesel... ppm Sulfur Maximum) Required for use in all model year 2011 and later nonroad diesel engines...
An Analysis of Ablation-Shield Requirements for Manned Reentry Vehicles
NASA Technical Reports Server (NTRS)
Roberts, Leonard
1960-01-01
The problem of sublimation of material and accumulation of heat in an ablation shield is analyzed and the results are applied to the reentry of manned vehicles into the earth's atmosphere. The parameters which control the amount of sublimation and the temperature distribution within the ablation shield are determined and presented in a manner useful for engineering calculation. It is shown that the total mass loss from the shield during reentry and the insulation requirements may be given very simply in terms of the maximum deceleration of the vehicle or the total reentry time.
Synchronization for Optical PPM with Inter-Symbol Guard Times
NASA Astrophysics Data System (ADS)
Rogalin, R.; Srinivasan, M.
2017-05-01
Deep space optical communications promises orders of magnitude growth in communication capacity, supporting high data rate applications such as video streaming and high-bandwidth science instruments. Pulse position modulation is the modulation format of choice for deep space applications, and by inserting inter-symbol guard times between the symbols, the signal carries the timing information needed by the demodulator. Accurately extracting this timing information is crucial to demodulating and decoding this signal. In this article, we propose a number of timing and frequency estimation schemes for this modulation format, and in particular highlight a low complexity maximum likelihood timing estimator that significantly outperforms the prior art in this domain. This method does not require an explicit synchronization sequence, freeing up channel resources for data transmission.
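One simple way to exploit the guard times for slot-level alignment is sketched below: for each candidate offset, count the photons that would fall in guard slots and pick the offset that minimizes them. This is an intuitive surrogate for the maximum likelihood estimator discussed in the article, not its exact form, and the frame parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative frame format (not the article's exact parameters): PPM order M = 16
# signal slots followed by 4 guard slots per symbol, Poisson photon counts per slot.
M, guard, n_symbols = 16, 4, 2000
slots_per_symbol = M + guard
signal_mean, background_mean = 2.0, 0.05

true_offset = 7                                     # unknown slot-level timing offset
symbols = rng.integers(0, M, n_symbols)
counts = rng.poisson(background_mean, n_symbols * slots_per_symbol)
pulse_positions = np.arange(n_symbols) * slots_per_symbol + symbols
counts[pulse_positions] += rng.poisson(signal_mean, n_symbols)
counts = np.roll(counts, true_offset)               # received, misaligned slot stream

def estimate_offset(counts, M, guard):
    """Pick the slot offset that pushes the fewest photons into guard slots,
    a simple surrogate for maximum likelihood slot alignment."""
    period = M + guard
    scores = []
    for shift in range(period):
        frames = np.roll(counts, -shift).reshape(-1, period)
        scores.append(frames[:, M:].sum())           # photons landing in guard slots
    return int(np.argmin(scores))

print("estimated offset:", estimate_offset(counts, M, guard), "true:", true_offset)
```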
41 CFR 301-31.10 - How will my agency pay my subsistence expenses?
Code of Federal Regulations, 2010 CFR
2010-07-01
... applicable to the locality .75 times the maximum lodging amount applicable to the locality .5 times the maximum lodging amount applicable to the locality. Payment for lodging, meals, and other per diem expenses The maximum per diem rate applicable to the locality .75 times the maximum per diem rate applicable to...
26 CFR 1.410(a)-4 - Maximum age conditions and time of participation.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 26 Internal Revenue 5 2012-04-01 2011-04-01 true Maximum age conditions and time of participation.... § 1.410(a)-4 Maximum age conditions and time of participation. (a) Maximum age conditions—(1) General...) if the plan excludes from participation (on the basis of age) an employee who has attained an age...
26 CFR 1.410(a)-4 - Maximum age conditions and time of participation.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 26 Internal Revenue 5 2011-04-01 2011-04-01 false Maximum age conditions and time of participation.... § 1.410(a)-4 Maximum age conditions and time of participation. (a) Maximum age conditions—(1) General...) if the plan excludes from participation (on the basis of age) an employee who has attained an age...
26 CFR 1.410(a)-4 - Maximum age conditions and time of participation.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 26 Internal Revenue 5 2010-04-01 2010-04-01 false Maximum age conditions and time of participation... Maximum age conditions and time of participation. (a) Maximum age conditions—(1) General rule. A plan is... excludes from participation (on the basis of age) an employee who has attained an age specified by the plan...
Thermal Design to Meet Stringent Temperature Gradient/Stability Requirements of SWIFT BAT Detectors
NASA Technical Reports Server (NTRS)
Choi, Michael K.
2000-01-01
The Burst Alert Telescope (BAT) is an instrument on the National Aeronautics and Space Administration (NASA) SWIFT spacecraft. It is designed to detect gamma ray burst over a broad region of the sky and quickly align the telescopes on the spacecraft to the gamma ray source. The thermal requirements for the BAT detector arrays are very stringent. The maximum allowable temperature gradient of the 256 cadmium zinc telluride (CZT) detectors is PC. Also, the maximum allowable rate of temperature change of the ASICs of the 256 Detector Modules (DMs) is PC on any time scale. The total power dissipation of the DMs and Block Command & Data Handling (BCDH) is 180 W. This paper presents a thermal design that uses constant conductance heat pipes (CCHPs) to minimize the temperature gradient of the DMs, and loop heat pipes (LHPs) to transport the waste heat to the radiator. The LHPs vary the effective thermal conductance from the DMs to the radiator to minimize heater power to meet the heater power budget, and to improve the temperature stability. The DMs are cold biased, and active heater control is used to meet the temperature gradient and stability requirements.
The memoranda clarify existing EPA regulatory requirements for, and provide guidance on, establishing wasteload allocations (WLAs) for storm water discharges in total maximum daily loads (TMDLs) approved or established by EPA.
Elghafghuf, Adel; Dufour, Simon; Reyher, Kristen; Dohoo, Ian; Stryhn, Henrik
2014-12-01
Mastitis is a complex disease affecting dairy cows and is considered to be the most costly disease of dairy herds. The hazard of mastitis is a function of many factors, both managerial and environmental, making its control a difficult issue to milk producers. Observational studies of clinical mastitis (CM) often generate datasets with a number of characteristics which influence the analysis of those data: the outcome of interest may be the time to occurrence of a case of mastitis, predictors may change over time (time-dependent predictors), the effects of factors may change over time (time-dependent effects), there are usually multiple hierarchical levels, and datasets may be very large. Analysis of such data often requires expansion of the data into the counting-process format - leading to larger datasets - thus complicating the analysis and requiring excessive computing time. In this study, a nested frailty Cox model with time-dependent predictors and effects was applied to Canadian Bovine Mastitis Research Network data in which 10,831 lactations of 8035 cows from 69 herds were followed through lactation until the first occurrence of CM. The model was fit to the data as a Poisson model with nested normally distributed random effects at the cow and herd levels. Risk factors associated with the hazard of CM during the lactation were identified, such as parity, calving season, herd somatic cell score, pasture access, fore-stripping, and proportion of treated cases of CM in a herd. The analysis showed that most of the predictors had a strong effect early in lactation and also demonstrated substantial variation in the baseline hazard among cows and between herds. A small simulation study for a setting similar to the real data was conducted to evaluate the Poisson maximum likelihood estimation approach with both Gaussian quadrature method and Laplace approximation. Further, the performance of the two methods was compared with the performance of a widely used estimation approach for frailty Cox models based on the penalized partial likelihood. The simulation study showed good performance for the Poisson maximum likelihood approach with Gaussian quadrature and biased variance component estimates for both the Poisson maximum likelihood with Laplace approximation and penalized partial likelihood approaches. Copyright © 2014. Published by Elsevier B.V.
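The counting-process (piecewise-exponential) expansion that lets a Cox-type hazard be fitted as a Poisson model can be sketched as follows. The data frame, interval width, and covariates are hypothetical, and the nested cow/herd frailties are omitted for brevity, so this is a minimal illustration of the Poisson representation rather than the authors' full model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hedged sketch of the piecewise-exponential (Poisson) representation used to fit
# Cox-type hazards as Poisson regressions. Nested cow/herd random effects are
# omitted; the data frame and column names are hypothetical.

rng = np.random.default_rng(1)
n = 500
lact = pd.DataFrame({
    "time_to_cm": rng.exponential(120, n).clip(1, 300),   # days to first CM (censored)
    "event": rng.integers(0, 2, n),                        # 1 = clinical mastitis observed
    "parity": rng.integers(1, 5, n),
})

# Expand each lactation into 30-day intervals (counting-process format).
cuts = np.arange(0, 330, 30)
rows = []
for _, r in lact.iterrows():
    for lo, hi in zip(cuts[:-1], cuts[1:]):
        if r.time_to_cm <= lo:
            break
        rows.append({
            "interval": lo,
            "exposure": min(r.time_to_cm, hi) - lo,        # days at risk in interval
            "y": int(r.event and r.time_to_cm <= hi),       # event in this interval?
            "parity": r.parity,
        })
long = pd.DataFrame(rows)

# Poisson GLM with log-exposure offset; interval dummies give the baseline hazard,
# and interval-by-covariate interactions would give time-dependent effects.
X = pd.get_dummies(long["interval"].astype(str), prefix="iv")
X["parity"] = long["parity"]
fit = sm.GLM(long["y"], X.astype(float), family=sm.families.Poisson(),
             offset=np.log(long["exposure"])).fit()
print(fit.params.head())
```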
7 CFR 3565.210 - Maximum interest rate.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 15 2010-01-01 2010-01-01 false Maximum interest rate. 3565.210 Section 3565.210... AGRICULTURE GUARANTEED RURAL RENTAL HOUSING PROGRAM Loan Requirements § 3565.210 Maximum interest rate. The interest rate for a guaranteed loan must not exceed the maximum allowable rate specified by the Agency in...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-03
... P [wt-%] = phosphorus content; Mn [wt-%] = manganese content; Ni [wt-%] = nickel content; Cu [wt-%] = copper content; A = 1... Linde 80 welds; maximum Cu_e = 0.301 for all other materials; g(Cu_e, Ni, Φt_e) = 0.5 + 0.5 × tanh{[log10(Φt_e) + (1.1390 × Cu_e) − (0.448 × Ni) − 18.120]/0.629}. Equation 8: Residual (r...
Use of the Brainlab Disposable Stylet for endoscope and peel-away navigation.
Halliday, Jane; Kamaly, Ian
2016-12-01
Neuronavigation, the ability to perform real-time intra-operative guidance during cranial and/or spinal surgery, has increased both accuracy and safety in neurosurgery [2]. Cranial navigation of existing surgical instruments using Brainlab requires the use of an instrument adapter and clamp, which in our experience renders an endoscope 'top-heavy' and difficult to manipulate, and makes the process of registration of the adapter quite time-consuming. A Brainlab Disposable Stylet was used to navigate fenestration of an entrapped temporal horn in a pediatric case. Accuracy was determined by target visualization relative to neuronavigation targeting. Accuracy was also calculated using basic trigonometry to establish the maximum tool tip inaccuracy for the disposable stylet inserted into a peel-away (Codman) and endoscope. The Brainlab Disposable Stylet was easier to use, more versatile, and as accurate as use of an instrument adapter and clamp. The maximum tool-tip inaccuracy for the endoscope was 0.967 mm, and the Codman peel-away 0.489 mm. A literature review did not reveal any reports of use of the Brainlab Disposable Stylet in this way, and we are unaware of this being used in common neurosurgical practice. We would recommend this technique in endoscopic cases that require use of Brainlab navigation.
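One plausible way such a trigonometric bound might be set up is sketched below. The geometry (angular play allowed by the clearance between stylet and lumen over the engaged length) and every dimension are assumptions made only for illustration, not values from the report.

```python
import math

# Hedged sketch of a basic trigonometric bound on tool-tip inaccuracy for a
# navigated stylet seated inside a peel-away sheath or endoscope channel.
# All dimensions below are hypothetical, not values from the study.

def max_tip_error(lumen_id_mm, stylet_od_mm, engaged_len_mm, tip_dist_mm):
    """Worst-case lateral tip error from angular play inside the lumen."""
    clearance = lumen_id_mm - stylet_od_mm
    tilt = math.atan2(clearance, engaged_len_mm)      # maximum tilt angle [rad]
    return tip_dist_mm * math.sin(tilt)               # lateral error at the tip [mm]

# Purely illustrative numbers:
print(round(max_tip_error(2.3, 2.0, 60.0, 120.0), 3), "mm")
```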
Survivable architectures for time and wavelength division multiplexed passive optical networks
NASA Astrophysics Data System (ADS)
Wong, Elaine
2014-08-01
The increased network reach and customer base of next-generation time and wavelength division multiplexed PON (TWDM-PONs) have necessitated rapid fault detection and subsequent restoration of services to its users. However, direct application of existing solutions for conventional PONs to TWDM-PONs is unsuitable as these schemes rely on the loss of signal (LOS) of upstream transmissions to trigger protection switching. As TWDM-PONs are required to potentially use sleep/doze mode optical network units (ONU), the loss of upstream transmission from a sleeping or dozing ONU could erroneously trigger protection switching. Further, TWDM-PONs require its monitoring modules for fiber/device fault detection to be more sensitive than those typically deployed in conventional PONs. To address the above issues, three survivable architectures that are compliant with TWDM-PON specifications are presented in this work. These architectures combine rapid detection and protection switching against multipoint failure, and most importantly do not rely on upstream transmissions for LOS activation. Survivability analyses as well as evaluations of the additional costs incurred to achieve survivability are performed and compared to the unprotected TWDM-PON. Network parameters that impact the maximum achievable network reach, maximum split ratio, connection availability, fault impact, and the incremental reliability costs for each proposed survivable architecture are highlighted.
Parametric Imaging Of Digital Subtraction Angiography Studies For Renal Transplant Evaluation
NASA Astrophysics Data System (ADS)
Gallagher, Joe H.; Meaney, Thomas F.; Flechner, Stuart M.; Novick, Andrew C.; Buonocore, Edward
1981-11-01
A noninvasive method for diagnosing acute tubular necrosis and rejection would be an important tool for the management of renal transplant patients. From a sequence of digital subtraction angiographic images acquired after an intravenous injection of radiographic contrast material, the parametric images of the maximum contrast, the time when the maximum contrast is reached, and two times the time at which one half of the maximum contrast is reached are computed. The parametric images of the time when the maximum is reached clearly distinguish normal from abnormal renal function. However, it is the parametric image of two times the time when one half of the maximum is reached which provides some assistance in differentiating acute tubular necrosis from rejection.
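The three parametric images described above can be computed per pixel from the contrast-time series roughly as follows; the synthetic enhancement curves and frame timing are illustrative only.

```python
import numpy as np

# Hedged sketch of the three parametric images: the maximum contrast, the time
# at which the maximum is reached, and two times the time at which half of the
# maximum is first reached. Synthetic data stand in for the DSA frames.

rng = np.random.default_rng(0)
n_frames, H, W = 40, 64, 64
t = np.linspace(0.0, 20.0, n_frames)                       # seconds after injection

# Synthetic gamma-variate-like enhancement curves, one per pixel.
t_peak = rng.uniform(5.0, 12.0, (H, W))
series = (t[:, None, None] / t_peak) ** 3 * np.exp(3 * (1 - t[:, None, None] / t_peak))

max_contrast = series.max(axis=0)                          # parametric image 1
time_of_max = t[series.argmax(axis=0)]                     # parametric image 2

# First time the curve reaches half of its maximum, then doubled.
reached = series >= 0.5 * max_contrast                     # (frames, H, W) boolean
first_idx = reached.argmax(axis=0)                         # index of first True
two_t_half = 2.0 * t[first_idx]                            # parametric image 3

print(max_contrast.shape, time_of_max.shape, two_t_half.shape)
```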
A room-temperature phase transition in maximum microcline - Heat capacity measurements
Openshaw, R.E.; Hemingway, B.S.; Robie, R.A.; Krupka, K.M.
1979-01-01
The thermal hysteresis in heat capacity measurements recently reported (Openshaw et al., 1976) for a maximum microcline prepared from Amelia albite by fused-salt ion-exchange is described in detail. The hysteresis is characterized by two limiting and reproducible curves which differ by 1% of the measured heat capacities. The lower curve, denoted curve B, represents the values obtained before the sample had been cooled below 300 K. Measurements made immediately after cooling the sample below 250 K followed a second parallel curve, curve A, to at least 370 K. Values intermediate to the two limiting curves were also obtained. The transitions from the B to the A curve were rapid and observed to occur three times. The time required to complete the transition from the A to the B curve increased from 39 h to 102 h in the two times it was observed to occur. The hysteresis is interpreted as evidence of a phase change in microcline at 300??10 K The heat effect associated with the phase change has not been evaluated. ?? 1979 Springer-Verlag.
NASA Technical Reports Server (NTRS)
Turriziani, R. V.; Lovell, W. A.; Price, J. E.; Quartero, C. B.; Washburn, S. F.
1979-01-01
Two aircraft were evaluated, each using a derated TF34-GE-100 turbofan engine: one with laminar flow control (LFC) and one without. The mission of the remotely piloted vehicles (RPV) is one of high-altitude loiter at maximum endurance. With the LFC system maximum mission time increased by 6.7 percent, L/D in the loiter phase improved 14.2 percent, and the minimum parasite drag of the wing was reduced by 65 percent resulting in a 37 percent reduction for the total airplane. Except for the minimum parasite drag of the wing, the preceding benefits include the offsetting effects of weight increase, suction power requirements, and drag of the wing-mounted suction pods. In a supplementary study using a scaled-down, rather than derated, version of the engine, on the LFC configuration, a 17.6 percent increase in mission time over the airplane without LFC and an incremental time increase of 10.2 percent over the LFC airplane with derated engine were attained. This improvement was due principally to reductions in both weight and drag of the scaled engine.
NASA Astrophysics Data System (ADS)
Di Luzio, Luca; Mescia, Federico; Nardi, Enrico
2017-01-01
A major goal of axion searches is to reach inside the parameter space region of realistic axion models. Currently, the boundaries of this region depend on somewhat arbitrary criteria, and it would be desirable to specify them in terms of precise phenomenological requirements. We consider hadronic axion models and classify the representations RQ of the new heavy quarks Q. By requiring that (i) the Q's are sufficiently short-lived to avoid issues with long-lived strongly interacting relics, and (ii) no Landau poles are induced below the Planck scale, 15 cases are selected which define a phenomenologically preferred axion window bounded by a maximum (minimum) value of the axion-photon coupling about 2 times (4 times) larger than is commonly assumed. Allowing for more than one RQ, larger couplings, as well as complete axion-photon decoupling, become possible.
Cloutier, L; Pomar, C; Létourneau Montminy, M P; Bernier, J F; Pomar, J
2015-04-01
The implementation of precision feeding in growing-finishing facilities requires accurate estimates of the animals' nutrient requirements. The objectives of the current study were to validate a method for estimating the real-time individual standardized ileal digestible (SID) lysine (Lys) requirements of growing-finishing pigs and the ability of this method to estimate the Lys requirements of pigs with different feed intake and growth patterns. Seventy-five pigs from a terminal cross and 72 pigs from a maternal cross were used in two 28-day experimental phases beginning at 25.8 (±2.5) and 73.3 (±5.2) kg BW, respectively. Treatments were randomly assigned to pigs within each experimental phase according to a 2×4 factorial design in which the two genetic lines and four dietary SID Lys levels (70%, 85%, 100% and 115% of the requirements estimated by the factorial method developed for precision feeding) were the main factors. Individual pigs' Lys requirements were estimated daily using a factorial approach based on their feed intake, BW and weight gain patterns. From 25 to 50 kg BW, this method slightly underestimated the pigs' SID Lys requirements, given that maximum protein deposition and weight gain were achieved at 115% of SID Lys requirements. However, the best gain-to-feed ratio (G : F) was obtained at a level of 85% or more of the estimated Lys requirement. From 70 to 100 kg, the method adequately estimated the pigs' individual requirements, given that maximum performance was achieved at 100% of Lys requirements. Terminal line pigs ate more (P=0.04) during the first experimental phase and tended to eat more (P=0.10) during the second phase than the maternal line pigs, but both genetic lines had similar ADG and protein deposition rates during the two phases. The factorial method used in this study to estimate individual daily SID Lys requirements was able to accommodate the small genetic differences in feed intake, and it was concluded that this method can be used in precision feeding systems without adjustments. However, the method's ability to accommodate large genetic differences in feed intake and protein deposition patterns needs to be studied further.
Time evolution of atmospheric particle number concentration during high-intensity pyrotechnic events
NASA Astrophysics Data System (ADS)
Crespo, Javier; Yubero, Eduardo; Nicolás, Jose F.; Caballero, Sandra; Galindo, Nuria
2014-10-01
The Mascletàs are high-intensity pyrotechnic events, typical of eastern Spanish festivals, in which thousands of firecrackers are burnt at ground level in an intense, short-time (<8 min) deafening spectacle that generates short-lived, thick aerosol clouds. In this study, the impact of such events on air quality has been evaluated by means of particle number concentration measurements performed close to the venue during the June festival in Alicante (southeastern Spain). Peak concentrations and dilution times observed throughout the Mascletàs have been compared to those measured when conventional aerial fireworks were launched 2 km away from the monitoring site. The impact of the Mascletàs on the total number concentration of particles larger than 0.3 μm was higher (maximum ∼2·10⁴ cm⁻³) than that of fireworks (maximum ∼2·10³ cm⁻³). The effect of fireworks depended on whether the dominant meteorological conditions favoured the transport of the plume to the measurement location. However, the time required for particle concentrations to return to background levels is longer and more variable for firework displays (minutes to hours) than for the Mascletàs (<25 min).
NASA Astrophysics Data System (ADS)
Ogilvie, K. W.; Coplan, M. A.; Roberts, D. A.; Ipavich, F.
2007-08-01
We calculate the cross-spacecraft maximum lagged-cross-correlation coefficients for 2-hour intervals of solar wind speed and density measurements made by the plasma instruments on the Solar and Heliospheric Observatory (SOHO) and Wind spacecraft over the period from 1996, the minimum of solar cycle 23, through the end of 2005. During this period, SOHO was located at L1, about 200 R E upstream from the Earth, while Wind spent most of the time in the interplanetary medium at distances of more than 100 R E from the Earth. Yearly histograms of the maximum, time-lagged correlation coefficients for both the speed and density are bimodal in shape, suggesting the existence of two distinct solar wind regimes. The larger correlation coefficients we suggest are due to structured solar wind, including discontinuities and shocks, while the smaller are likely due to Alfvénic turbulence. While further work will be required to firmly establish the physical nature of the two populations, the results of the analysis are consistent with a solar wind that consists of turbulence from quiet regions of the Sun interspersed with highly filamentary structures largely convected from regions in the inner solar corona. The bimodal appearance of the distributions is less evident in the solar wind speed than in the density correlations, consistent with the observation that the filamentary structures are convected with nearly constant speed by the time they reach 1 AU. We also find that at solar minimum the fits for the density correlations have smaller high-correlation components than at solar maximum. We interpret this as due to the presence of more relatively uniform Alfvénic regions at solar minimum than at solar maximum.
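A minimal sketch of a maximum lagged-cross-correlation calculation for a 2-hour window is given below; the synthetic series, cadence, and lag range are assumptions made only for illustration, not the instrument data or the exact procedure of the study.

```python
import numpy as np

# Hedged sketch of the maximum time-lagged cross-correlation between two solar
# wind time series (e.g., speed or density seen at two spacecraft). The series,
# 1-minute cadence, and lag range are illustrative assumptions.

rng = np.random.default_rng(2)
n, true_lag = 120, 9                                 # 2-hour window, lag in samples

base = np.cumsum(rng.normal(0, 1, n + true_lag))     # random-walk-like structure
soho = base[true_lag:true_lag + n]                   # upstream monitor (leads)
wind = base[:n] + rng.normal(0, 0.3, n)              # same structure, seen later

def max_lagged_corr(a, b, max_lag):
    """Best (lag, r); positive lag means b lags a by that many samples."""
    best = (0, -np.inf)
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = a[:len(a) - lag], b[lag:]
        else:
            x, y = a[-lag:], b[:len(b) + lag]
        r = np.corrcoef(x, y)[0, 1]
        if r > best[1]:
            best = (lag, r)
    return best

print(max_lagged_corr(soho, wind, max_lag=30))       # expect lag near +9, r near 1
```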
Reference values for rotational thromboelastometry (ROTEM) in clinically healthy cats.
Marly-Voquer, Charlotte; Riond, Barbara; Jud Schefer, Rahel; Kutter, Annette P N
2017-03-01
To establish reference intervals for rotational thromboelastometry (ROTEM) using feline blood. Prospective study. University teaching hospital. Twenty-three clinically healthy cats between 1 and 15 years. For each cat, whole blood was collected via jugular or medial saphenous venipuncture, and blood was placed into a serum tube, a tube containing potassium-EDTA, and tubes containing 3.2% sodium citrate. The tubes were maintained at 37°C for a maximum of 30 minutes before coagulation testing. ROTEM tests included the EXTEM, INTEM, FIBTEM, and APTEM assays. In addition, prothrombin time, activated partial thromboplastin time, thrombin time, and fibrinogen concentration (Clauss method) were analyzed for each cat. Reference intervals for ROTEM were calculated using the 2.5th-97.5th percentiles for each parameter, and correlation with the standard coagulation profile was performed. Compared to people, clinically healthy cats had similar values for the EXTEM and INTEM assays, but had lower plasma fibrinogen concentrations (0.9-2.2 g/L), resulting in weaker maximum clot firmness (MCF, 3-10 mm) in the FIBTEM test. In 18 cats, maximum lysis (ML) values in the APTEM test were higher than in the EXTEM test, which seems unlikely to have occurred in the presence of aprotinin. It is possible that the observed high maximum lysis values were due to clot retraction rather than true clot lysis. Further studies will be required to test this hypothesis. Cats have a weaker clot in the FIBTEM test, but have a similar clot strength to human blood in the other ROTEM assays, which may be due to a stronger contribution of platelets compared to that found in people. In cats, careful interpretation of the results to diagnose hyperfibrinolysis is advised, especially with the APTEM test, until further data are available. © Veterinary Emergency and Critical Care Society 2017.
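The nonparametric reference-interval calculation mentioned above (2.5th-97.5th percentiles) can be sketched as follows, with synthetic placeholder values standing in for the measured ROTEM parameters.

```python
import numpy as np

# Hedged sketch of the nonparametric reference-interval calculation: the
# 2.5th-97.5th percentile of a ROTEM parameter across the healthy cats.
# The values below are synthetic placeholders, not data from the study.

rng = np.random.default_rng(3)
fibtem_mcf_mm = rng.normal(6.0, 1.8, 23).clip(1, None)   # 23 cats, illustrative

low, high = np.percentile(fibtem_mcf_mm, [2.5, 97.5])
print(f"FIBTEM MCF reference interval: {low:.1f}-{high:.1f} mm")
```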
Kakagianni, Myrsini; Gougouli, Maria; Koutsoumanis, Konstantinos P
2016-08-01
The presence of Geobacillus stearothermophilus spores in evaporated milk constitutes an important quality problem for the milk industry. This study was undertaken to provide an approach in modelling the effect of temperature on G. stearothermophilus ATCC 7953 growth and in predicting spoilage of evaporated milk. The growth of G. stearothermophilus was monitored in tryptone soy broth at isothermal conditions (35-67 °C). The data derived were used to model the effect of temperature on G. stearothermophilus growth with a cardinal type model. The cardinal values of the model for the maximum specific growth rate were Tmin = 33.76 °C, Tmax = 68.14 °C, Topt = 61.82 °C and μopt = 2.068/h. The growth of G. stearothermophilus was assessed in evaporated milk at Topt in order to adjust the model to milk. The efficiency of the model in predicting G. stearothermophilus growth at non-isothermal conditions was evaluated by comparing predictions with observed growth under dynamic conditions, and the results showed a good performance of the model. The model was further used to predict the time-to-spoilage (tts) of evaporated milk. The spoilage of this product was caused by acid coagulation when the pH approached a level around 5.2, eight generations after G. stearothermophilus reached the maximum population density (Nmax). Based on the above, the tts was predicted from the growth model as the sum of the time required for the microorganism to multiply from the initial to the maximum level ( [Formula: see text] ), plus the time required after the [Formula: see text] to complete eight generations. The observed tts was very close to the predicted one, indicating that the model is able to describe satisfactorily the growth of G. stearothermophilus and to provide realistic predictions for evaporated milk spoilage. Copyright © 2016 Elsevier Ltd. All rights reserved.
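A hedged sketch of a cardinal-type growth model and the time-to-spoilage logic is given below. The Rosso cardinal temperature model is used as one common "cardinal type" form (an assumption, not necessarily the exact model of the study), and the initial and maximum populations are placeholders.

```python
import numpy as np

# Hedged sketch: Rosso cardinal temperature model (CTMI) with the cardinal
# values quoted in the abstract, plus the time-to-spoilage logic (time to reach
# Nmax from N0, plus eight further generations). N0 and Nmax are placeholders.

T_MIN, T_OPT, T_MAX, MU_OPT = 33.76, 61.82, 68.14, 2.068   # values from the abstract

def mu_max(T):
    """Maximum specific growth rate [1/h] at temperature T (CTMI form)."""
    if T <= T_MIN or T >= T_MAX:
        return 0.0
    num = (T - T_MAX) * (T - T_MIN) ** 2
    den = (T_OPT - T_MIN) * ((T_OPT - T_MIN) * (T - T_OPT)
                             - (T_OPT - T_MAX) * (T_OPT + T_MIN - 2 * T))
    return MU_OPT * num / den

def time_to_spoilage(T, n0=1e2, n_max=1e8, extra_generations=8):
    """Hours to reach Nmax from N0, plus eight further generations (see text)."""
    mu = mu_max(T)                                   # natural-log growth rate [1/h]
    if mu == 0.0:
        return float("inf")
    t_growth = np.log(n_max / n0) / mu               # time from N0 to Nmax
    t_extra = extra_generations * np.log(2) / mu     # eight more generations
    return t_growth + t_extra

for T in (40, 50, 60):
    print(T, "degC ->", round(time_to_spoilage(T), 1), "h")
```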
Magnesium requirement of some of the principal rumen cellulolytic bacteria.
Morales, M S; Dehority, B A
2014-09-01
Information available on the role of Mg for growth and cellulose degradation by rumen bacteria is both limited and inconsistent. In this study, the Mg requirements for two strains each of the cellulolytic rumen species Fibrobacter succinogenes (A3c and S85), Ruminococcus albus (7 and 8) and Ruminococcus flavefaciens (B34b and C94) were investigated. Maximum growth, rate of growth and lag time were all measured using a complete factorial design, 2(3)×6; factors were: strains (2), within species (3) and Mg concentrations (6). R. flavefaciens was the only species that did not grow when Mg was singly deleted from the media, and both strains exhibited a linear growth response to increasing Mg concentrations (P<0.001). The requirement for R. flavefaciens B34b was estimated as 0.54 mM; whereas the requirement for R. flavefaciens C94 was >0.82 as there was no plateau in growth. Although not an absolute requirement for growth, strains of the two other species of cellulolytic bacteria all responded to increasing Mg concentrations. For F. succinogenes S85, R. albus 7 and R. albus 8, their requirement estimated from maximum growth was 0.56, 0.52 and 0.51, respectively. A requirement for F. succinogenes A3c could not be calculated because there was no solution for contrasts. Whether R. flavefaciens had a Mg requirement for cellulose degradation was determined in NH3-free cellulose media, using a 2×4 factorial design, 2 strains and 4 treatments. Both strains of R. flavefaciens were found to have an absolute Mg requirement for cellulose degradation. Based on reported concentrations of Mg in the rumen, 1.0 to 10.1 mM, it seems unlikely that an in vivo deficiency of this element would occur.
29 CFR 778.101 - Maximum nonovertime hours.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 3 2010-07-01 2010-07-01 false Maximum nonovertime hours. 778.101 Section 778.101 Labor... Requirements Introductory § 778.101 Maximum nonovertime hours. As a general standard, section 7(a) of the Act provides 40 hours as the maximum number that an employee subject to its provisions may work for an employer...
Optical switching property of electromagnetically induced transparency in a Λ system
NASA Astrophysics Data System (ADS)
Zhang, Lianshui; Wang, Jian; Feng, Xiaomin; Yang, Lijun; Li, Xiaoli; Zhao, Min
2008-12-01
In this paper we study the coherent transient property of a Λ-three-level system (Ωd = 0) and a quasi-Λ four-level system (Ωd>0). Optical switching of the probe field can be achieved by applying a pulsed coupling field or rf field. In the Λ-shaped three-level system, when the coupling field is switched on, there is an almost total transparency of the probe field, and the time required for the absorption to change from 90% to 10% of the maximum absorption is 2.9Γ0 (Γ0 is the spontaneous emission lifetime). When the coupling field is switched off, there is an initial increase of the probe field absorption, which then gradually evolves to the maximum absorption of the two-level system; the time required for the absorption of the system to change from 10% to 90% is 4.2Γ0. In the four-level system, where the rf driving field is used as the switching field, to achieve the same depth of optical switching, the switching times are 2.5Γ0 and 6.1Γ0, respectively. The results show that with the same depth of optical switching, the switch-on time of the four-level system is shorter than that of the three-level system, while the switch-off time of the four-level system is longer. The depth of the optical switching of the four-level system was much larger than that of the three-level system, where the depth of the optical switching of the latter is merely 14.8% of that of the former. The speed of optical switching of the two systems can be increased by increasing the Rabi frequency of the coupling field or rf field.
[On the maximum population size of a modernized China from the perspective of food resources].
Song, J; Sun, Y
1981-04-25
The yearly per capita amounts of protein consumption in China, France, Japan, and the US are compared. Based on human dietary protein requirements and using quantitative methods and a mathematical model, the maximum population that can be supported by China's 9.6 million sq km of land after 2000 is calculated. The mathematical equation used is: ZGB + DGB = H; where ZGB is the ratio of plant protein over total protein required by the body consumed per unit time, and DGB is the ratio of animal protein over total protein required by the body consumed per unit time. H > 1 means an excessive, H = 1 a balanced, and H < 1 a deficient protein supply. Based on an average per capita daily energy requirement of 2,272 kcal, ZGB, DGB, and H are calculated for China, France, Japan and the US. The values for China and the US represent the extremes, with ZGB = 0.5776, DGB = 0.0813, and H = 0.6589 for China and ZGB = 0.2559, DGB = 0.8266, and H = 1.0825 for the US. Under the same conditions of land mass and unit production, the larger the H and DGB, the smaller the population that can be sustained. Conversely, the smaller the H and DGB, the larger the population that can be sustained. Using the values calculated by this equation, in order for China to attain a dietary level within 100 years comparable to that of the current US level, the Chinese population would have to be controlled to a size of about 680 million. A mathematical model for a balanced diet is given as an appendix.
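The balance index can be recomputed directly from the ratios quoted above, as in this short sketch (no new data; the threshold labels follow the definition of H).

```python
# Hedged sketch of the dietary-balance index defined above, H = ZGB + DGB,
# recomputed from the ratios quoted in the abstract.

diets = {
    "China": (0.5776, 0.0813),
    "US":    (0.2559, 0.8266),
}
for country, (zgb, dgb) in diets.items():
    h = zgb + dgb
    status = "excessive" if h > 1 else "balanced" if h == 1 else "deficient"
    print(f"{country}: H = {h:.4f} ({status} protein supply)")
```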
24 CFR 242.23 - Maximum mortgage amounts and cash equity requirements.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Maximum mortgage amounts and cash equity requirements. 242.23 Section 242.23 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued) OFFICE OF ASSISTANT SECRETARY FOR HOUSING-FEDERAL HOUSING...
Stimulus-secretion coupling in chromaffin cells isolated from bovine adrenal medulla
Schneider, Allan S.; Herz, Ruth; Rosenheck, Kurt
1977-01-01
Bovine adrenal chromaffin cells were isolated by removal of the cortex and sequential collagenase digestion of the medulla. The catecholamine secretory function of these cells was characterized with respect to acetylcholine stimulation, cation requirements, and cytoskeletal elements. The dose-response curve for stimulated release had its half-maximum value at 10⁻⁵ M acetylcholine, and maximum secretion was on the average 7 times that of control basal secretion. The differential release of epinephrine versus norepinephrine after stimulation with 0.1 mM acetylcholine occurred in proportion to their distribution in the cell suspension. The cholinergic receptors were found to be predominantly nicotinic. The kinetics of catecholamine release were rapid, with significant secretion occurring in less than 60 sec and 85% of maximum secretion within 5 min. A critical requirement for calcium in the extracellular medium was demonstrated, and 80% of maximum secretion was achieved at physiologic calcium concentrations. Stimulation by excess potassium (65 mM KCl) also induced catecholamine secretion which differed from acetylcholine stimulation in being less potent, in having a different dependence on calcium concentration, and in its response to the local anesthetic tetracaine. Tetracaine, which is thought to inhibit membrane cation permeability, was able to block acetylcholine-stimulated but not KCl-stimulated secretion. The microtubule disrupting agent vinblastine was able to block catecholamine release whereas the microfilament disrupter cytochalasin B had little effect. The results show the isolated bovine chromaffin cells to be viable, functioning, and available in large quantity. These cells now provide an excellent system for studying cell surface regulation of hormone and neurotransmitter release. PMID:270738
Mission Analysis for LEO Microwave Power-Beaming Station in Orbital Launch of Microwave Lightcraft
NASA Technical Reports Server (NTRS)
Myrabo, L. N.; Dickenson, T.
2005-01-01
A detailed mission analysis study has been performed for a 1 km diameter, rechargeable satellite solar power station (SPS) designed to boost a 20 m diameter, 2400 kg MicroWave Lightcraft (MWLC) into low earth orbit (LEO). Positioned in a 476 km daily-repeating orbit, the 35 GHz microwave power station is configured like a spinning, thin-film bicycle wheel covered by 30% efficient solar cells on one side and billions of solid state microwave transmitter elements on the other. At the rim of this wheel are two superconducting magnets that can store 2000 GJ of energy from the 320 MW solar array over a period of several orbits. In preparation for launch, the entire station rotates to coarsely point at the Lightcraft, and then phases up using fine-pointing information sent from a beacon on board the Lightcraft. Upon demand, the station transmits a 10 gigawatt microwave beam to lift the MWLC from the earth surface into LEO in a flight of several minutes duration. The mission analysis study comprised two parts: a) Power station assessment; and b) Analysis of MWLC dynamics during the ascent to orbit, including the power-beaming relationships. The power station portion addressed eight critical issues: 1) Drag force vs. station orbital altitude; 2) Solar pressure force on the station; 3) Station orbital lifetime; 4) Feasibility of geomagnetic re-boost; 5) Beta angle (i.e., solar alignment) and power station effective area relationship; 6) Power station percent time in sun vs. mission elapsed time; 7) Station beta angle vs. charge time; 8) Stresses in station structures. The launch dynamics portion examined four issues: 1) Ascent mission/trajectory profile; 2) MWLC/power-station mission geometry; 3) MWLC thrust angle vs. time; 4) Power station pitch rate during power beaming. Results indicate that approximately 0.58 N of drag force acts upon the station when rotated edge-on to project the minimum frontal area of 5000 sq m. An ion engine or perhaps an electrodynamic thruster (i.e., geomagnetic re-boost) station-keeping system can maintain the orbit altitude. The rate at which the power station's superconducting magnetic energy storage system (SMES) is charged directly relates to the beta angle since the station operates in the edge-on attitude. The maximum charge rate occurs when the beta angle is at its maximum because time in the sun and projected area of the station are then also at their maximums. For the maximum charge of 2000 GJ with a maximum beta angle of 52 degrees, approximately 3 hours (2 orbital revolutions) are required to reach full charge, while about 16 hours (10.3 revolutions) are required when the beta angle is 10 degrees. Overall, the LEO station concept appears to be a viable candidate for the formidable power-beaming infrastructure needed to boost the MWLC into low earth orbit.
Comparisons of neural networks to standard techniques for image classification and correlation
NASA Technical Reports Server (NTRS)
Paola, Justin D.; Schowengerdt, Robert A.
1994-01-01
Neural network techniques for multispectral image classification and spatial pattern detection are compared to the standard techniques of maximum-likelihood classification and spatial correlation. The neural network produced a more accurate classification than maximum-likelihood of a Landsat scene of Tucson, Arizona. Some of the errors in the maximum-likelihood classification are illustrated using decision region and class probability density plots. As expected, the main drawback to the neural network method is the long time required for the training stage. The network was trained using several different hidden layer sizes to optimize both the classification accuracy and training speed, and it was found that one node per class was optimal. The performance improved when 3x3 local windows of image data were entered into the net. This modification introduces texture into the classification without explicit calculation of a texture measure. Larger windows were successfully used for the detection of spatial features in Landsat and Magellan synthetic aperture radar imagery.
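For reference, a baseline maximum-likelihood classifier of the kind compared above (one multivariate Gaussian per class, assignment by largest likelihood) can be sketched as follows; the three-band synthetic data stand in for Landsat pixels and are purely illustrative.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hedged sketch of a standard maximum-likelihood classifier: each class is
# modelled as a multivariate Gaussian estimated from training pixels, and each
# pixel is assigned to the class with the highest log-likelihood.

rng = np.random.default_rng(4)
n_train = 200

# Two illustrative classes with different means/covariances (3 spectral bands).
means = [np.array([40.0, 60.0, 30.0]), np.array([80.0, 90.0, 70.0])]
covs = [np.diag([25.0, 30.0, 20.0]), np.diag([40.0, 35.0, 45.0])]
train = [rng.multivariate_normal(m, c, n_train) for m, c in zip(means, covs)]

# Fit one Gaussian per class from its training pixels.
models = [multivariate_normal(mean=x.mean(axis=0), cov=np.cov(x, rowvar=False))
          for x in train]

def classify(pixels):
    """Assign each pixel (row) to the class with the largest log-likelihood."""
    scores = np.column_stack([m.logpdf(pixels) for m in models])
    return scores.argmax(axis=1)

test = np.vstack([rng.multivariate_normal(m, c, 50) for m, c in zip(means, covs)])
labels = np.repeat([0, 1], 50)
print("accuracy:", (classify(test) == labels).mean())
```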
3D Finite Element Analysis of Spider Non-isothermal Forging Process
NASA Astrophysics Data System (ADS)
Niu, Ling; Wei, Wei; Wei, Kun Xia; Alexandrov, Igor V.; Hu, Jing
2016-06-01
The differences of effective stress, effective strain, velocity field, and the load-time curves between the spider isothermal and non-isothermal forging processes are investigated by making full use of 3D FEA, and verified by the production experiment of spider forging. Effective stress is mainly concentrated on the pin, and becomes lower closer to the front of the pin. The maximum effective strain in the non-isothermal forging is lower than that in the isothermal. The great majority of strain in the non-isothermal forging process is 1.76, which is larger than the strain of 1.31 in the isothermal forging. The maximum load required in the isothermal forging is higher than that in the non-isothermal. The maximum experimental load and deformation temperature in the spider production are in good agreement with those in the non-isothermal FEA. The results indicate that the non-isothermal 3D FEA results can guide the design of the spider forging process.
Presence of arsenic in pet food: a real hazard?
Squadrone, Stefania; Brizio, Paola; Simone, Giuseppe; Benedetto, Alessandro; Monaco, Gabriella; Abete, Maria Cesarina
2017-12-29
In this study, the arsenic content of 200 cat- and dog-food samples imported or commercialised in Italy from 2007 to 2012 was estimated by means of electrothermal atomic absorption spectrometry (Z-ETA-AAS) after wet digestion. The maximum value of total arsenic (As) in the samples was 12.5 mg kg⁻¹. Some imported pet food was intercepted as a result of the Rapid Alert System for Food and Feed (RASFF) and rejected at the border or withdrawn from the Italian market, because it exceeded the maximum level of arsenic content imposed in Italy at the time of this study (2002/32/EC). All the samples with a significant arsenic level were fish-based. Recently, 2013/1275/EC raised the maximum level of As permitted in fish-based pet food. However, the analysis of As species is required (EFSA 2014) in order to identify correctly the different contributions of dietary exposure to inorganic As and to assure pet food quality.
Late-time Flattening of Type Ia Supernova Light Curves: Constraints from SN 2014J in M82
NASA Astrophysics Data System (ADS)
Yang, Yi; Wang, Lifan; Baade, Dietrich; Brown, Peter. J.; Cikota, Aleksandar; Cracraft, Misty; Höflich, Peter A.; Maund, Justyn R.; Patat, Ferdinando; Sparks, William B.; Spyromilio, Jason; Stevance, Heloise F.; Wang, Xiaofeng; Wheeler, J. Craig
2018-01-01
The very nearby Type Ia supernova 2014J in M82 offers a rare opportunity to study the physics of thermonuclear supernovae at extremely late phases (≳800 days). Using the Hubble Space Telescope, we obtained 6 epochs of high-precision photometry for SN 2014J from 277 days to 1181 days past the B-band maximum light. The reprocessing of electrons and X-rays emitted by the radioactive decay chain 57Co → 57Fe is needed to explain the significant flattening of both the F606W-band and the pseudo-bolometric light curves. The flattening confirms previous predictions that the late-time evolution of type Ia supernova luminosities requires additional energy input from the decay of 57Co. By assuming the F606W-band luminosity scales with the bolometric luminosity at ∼500 days after the B-band maximum light, a mass ratio 57Ni/56Ni ∼ 0.065 (+0.005, −0.004) is required. This mass ratio is roughly ∼3 times the solar ratio and favors a progenitor white dwarf with a mass near the Chandrasekhar limit. A similar fit using the constructed pseudo-bolometric luminosity gives a mass ratio 57Ni/56Ni ∼ 0.066 (+0.009, −0.008). Astrometric tests based on the multi-epoch HST ACS/WFC images reveal no significant circumstellar light echoes between 0.3 and 100 pc from the supernova.
Erbay, Celal; Carreon-Bautista, Salvador; Sanchez-Sinencio, Edgar; Han, Arum
2014-12-02
The microbial fuel cell (MFC), which can directly generate electricity from organic waste or biomass, is a promising renewable and clean technology. However, the low power and low voltage output of MFCs typically do not allow directly operating most electrical applications, whether for supplementing electricity to wastewater treatment plants or for powering autonomous wireless sensor networks. Power management systems (PMSs) can overcome this limitation by boosting the MFC output voltage and managing the power for maximum efficiency. We present a monolithic low-power-consuming PMS integrated circuit (IC) chip capable of dynamic maximum power point tracking (MPPT) to maximize the extracted power from MFCs, regardless of the power and voltage fluctuations from MFCs over time. The proposed PMS continuously detects the maximum power point (MPP) of the MFC and matches the load impedance of the PMS for maximum efficiency. The system also operates autonomously by directly drawing power from the MFC itself without any external power. The overall system efficiency, defined as the ratio between input energy from the MFC and output energy stored into the supercapacitor of the PMS, was 30%. As a demonstration, the PMS connected to a 240 mL two-chamber MFC (generating 0.4 V and 512 μW at MPP) successfully powered a wireless temperature sensor that requires a voltage of 2.5 V and consumes 85 mW each time it transmits the sensor data, and successfully transmitted a sensor reading every 7.5 min. The PMS also efficiently managed the power output of a lower-power-producing MFC, demonstrating that the PMS works efficiently at various MFC power output levels.
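To illustrate the MPPT idea, the sketch below uses perturb-and-observe against a simple Thevenin-equivalent MFC model; this is a common generic MPPT scheme with invented parameters, not the dynamic MPPT circuit implemented on the chip described above.

```python
# Hedged sketch of maximum power point tracking logic. Perturb-and-observe is
# shown as one common MPPT approach; the MFC is modelled as a Thevenin source
# (open-circuit voltage plus internal resistance) chosen only for illustration.

V_OC, R_INT = 0.8, 300.0          # hypothetical MFC open-circuit voltage [V], ohms

def mfc_power(r_load):
    """Power delivered to a resistive load by the Thevenin-equivalent MFC."""
    i = V_OC / (R_INT + r_load)
    return i * i * r_load

def perturb_and_observe(r0=50.0, step=10.0, iters=60):
    """Walk the load resistance toward the maximum power point."""
    r, p_prev, direction = r0, mfc_power(r0), +1.0
    for _ in range(iters):
        r = max(1.0, r + direction * step)
        p = mfc_power(r)
        if p < p_prev:            # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return r, p_prev

r_mpp, p_mpp = perturb_and_observe()
print(f"converged near R_load = {r_mpp:.0f} ohm, P = {p_mpp * 1e6:.0f} uW")
```

For this simple model the theoretical MPP sits at a load equal to the internal resistance, so the tracker should settle near 300 ohm.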
NASA Technical Reports Server (NTRS)
Ofek, E.O; Fox, D.; Cenko, B.; Sullivan, M.; Gnat, O.; Frail A.; Horesh, A.; Corsi, A; Quimby, R. M.; Gehrels, N.;
2012-01-01
The optical light curve of some supernovae (SNe) may be powered by the outward diffusion of the energy deposited by the explosion shock (so-called shock breakout) in optically thick (tau approx > 30) circumstellar matter (CSM). Recently, it was shown that the radiation-mediated and -dominated shock in an optically thick wind must transform into a collisionless shock and can produce hard X-rays. The X-rays are expected to peak at late times, relative to maximum visible light. Here we report on a search, using Swift-XRT and Chandra, for X-ray emission from 28 SNe that belong to classes whose progenitors are suspected to be embedded in dense CSM. Our sample includes 19 type-IIn SNe, one type-Ibn SN, and eight hydrogen-poor super-luminous SNe (SLSN-I; SN 2005ap-like). Two SNe (SN 2006jc and SN 2010jl) have X-ray properties that are roughly consistent with the expectation for X-rays from a collisionless shock in optically thick CSM. Therefore, we suggest that their optical light curves are powered by shock breakout in CSM. We show that two other events (SN 2010al and SN 2011ht) were too X-ray bright during the SN maximum optical light to be explained by the shock breakout model. We conclude that the light curves of some, but not all, type-IIn/Ibn SNe are powered by shock breakout in CSM. For the rest of the SNe in our sample, including all the SLSN-I events, our X-ray limits are not deep enough and were typically obtained at too early times (i.e., near the SN maximum light) to draw conclusions about their nature. Late-time X-ray observations are required in order to further test whether these SNe are indeed embedded in dense CSM. We review the conditions required for a shock breakout in a wind profile. We argue that the time scale, relative to maximum light, for the SN to peak in X-rays is a probe of the column density and the density profile above the shock region. The optical light curves of SNe for which the X-ray emission peaks at late times are likely powered by the diffusion of shock energy from a dense CSM. We note that if the CSM density profile falls faster than a constant-rate wind density profile, then X-rays may escape at earlier times than estimated for the wind profile case. Furthermore, if the CSM has a region in which the density profile is very steep, relative to a steady wind density profile, or the CSM is neutral, then the radio free-free absorption may be low enough, and radio emission may be detected.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, C; Lin, M; Chen, L
Purpose: Recent in vitro and in vivo experimental findings provided strong evidence that pulsed low-dose-rate radiotherapy (PLDR) produced equivalent tumor control as conventional radiotherapy with significantly reduced normal tissue toxicities. This work aimed to implement a PLDR clinical protocol for the management of recurrent cancers utilizing IMRT and VMAT. Methods: Our PLDR protocol requires that the daily 2Gy dose be delivered in 0.2Gy×10 pulses with a 3min interval between the pulses. To take advantage of low-dose hyper-radiosensitivity the mean dose to the target is set at 0.2Gy and the maximum dose is limited to 0.4Gy per pulse. Practical planning strategies were developed for IMRT and VMAT: (1) set 10 ports for IMRT and 10 arcs for VMAT with each angle/arc as a pulse; (2) set the mean dose (0.2Gy) and maximum dose (0.4Gy) to the target per pulse as hard constraints (no constraints to OARs); (3) select optimal port/arc angles to avoid OARs; and (4) use reference structures in or around target/OARs to reduce maximum dose to the target/OARs. IMRT, VMAT and 3DCRT plans were generated for 60 H and N, breast, lung, pancreas and prostate patients and compared. Results: All PLDR treatment plans using IMRT and VMAT met the dosimetry requirements of the PLDR protocol (mean target dose: 0.20Gy±0.01Gy; maximum target dose < 0.4Gy). In comparison with 3DCRT, IMRT and VMAT exhibited improved target dose conformity and OAR dose sparing. A single arc can minimize the difference in the target dose due to multi-angle incidence although the delivery time is longer than 3DCRT and IMRT. Conclusion: IMRT and VMAT are better modalities for PLDR treatment of recurrent cancers with superior target dose conformity and critical structure sparing. The planning strategies/guidelines developed in this work are practical for IMRT/VMAT treatment planning to meet the dosimetry requirements of the PLDR protocol.
Kinyua, Maureen N; Cunningham, Jeffrey; Ergas, Sarina J
2014-06-01
Anaerobic digestion (AD) can be used to stabilize and produce energy from livestock waste; however, digester effluents may require further treatment to remove nitrogen. This paper quantifies the effects of varying solids retention time (SRT) on methane yield, volatile solids (VS) reduction and organic carbon bioavailability for denitrification during swine waste AD. Four bench-scale anaerobic digesters, with SRTs of 14, 21, 28 and 42 days, operated with swine waste feed. Effluent organic carbon bioavailability was measured using anoxic microcosms and respirometry. Excellent performance was observed for all four digesters, with >60% VS removal and CH4 yields between 0.1 and 0.3 (m3 CH4)/(kg VS added). Organic carbon in the centrate, used as an internal carbon source for denitrification, supported maximum specific denitrification rates between 47 and 56 (mg NO3(-)-N)/(g VSS h). The digester with the 21-day SRT had the highest CH4 yield and maximum specific denitrification rates. Copyright © 2014 Elsevier Ltd. All rights reserved.
Optimization of vehicle weight for Mars excursion missions
NASA Technical Reports Server (NTRS)
Ferebee, Melvin J., Jr.
1991-01-01
The excursion class mission to Mars is defined as a mission with a duration of one year coupled with a stay time of up to 30 days. The fuel budget for such a mission is investigated. The overall figure of merit in such an assessment is the vehicle weight ratio, the ratio of the wet vehicle weight to the dry vehicle weight. It is necessary to minimize the overall fuel budget for the mission in order to maximize the benefits that could be obtained by sending humans to Mars. Assumptions used in the analysis are: each mission will depart from and terminate in low-earth-orbit (LEO) (500 km circular), and the maximum stay time at Mars is 30 days. The maximum mission duration is one year (355-375 days). The mass returned to LEO is 135,000 kg; the dropoff mass left at Mars is 168,000 kg. Three propulsive techniques for atmospheric interface are investigated: aerobraking, all-chemical propulsion, and nuclear thermal propulsion. By defining the fuel requirements, the space transfer vehicle's configuration is defined.
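The wet-to-dry weight ratio used as the figure of merit can be related to propulsion performance through the ideal rocket equation, as in the sketch below; the delta-v and specific-impulse numbers are illustrative placeholders, not values from the study.

```python
import math

# Hedged sketch relating the figure of merit above (wet-to-dry vehicle weight
# ratio) to propulsion performance via the ideal rocket equation,
# m_wet / m_dry = exp(dv / (g0 * Isp)). All numbers are placeholders.

G0 = 9.80665                               # standard gravity [m/s^2]

def weight_ratio(delta_v_ms, isp_s):
    return math.exp(delta_v_ms / (G0 * isp_s))

options = {
    "all-chemical (Isp ~ 450 s)": 450.0,
    "nuclear thermal (Isp ~ 900 s)": 900.0,
}
delta_v = 6000.0                           # m/s, placeholder propulsive delta-v
for name, isp in options.items():
    print(f"{name}: wet/dry = {weight_ratio(delta_v, isp):.2f}")
```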
PHOTOSENSITIVE RELAY CONTROL CIRCUIT
Martin, C.F.
1958-01-14
A circuit is described that is adapted for the measurement of the time required for an oscillating member to pass through a preselected number of oscillations, after being damped to a certain maximum amplitude of oscillation. A mirror is attached to the moving member and directs light successively to a photocell which is part of a trigger unit and to first and second photocells which are part of a starter unit, as the member swings to its maximum amplitude. The starter and trigger units comprise thyratrons and relays so interconnected that the trigger circuit, although generating a counter pulse, does not register a count in the counter when the light traverses both photocells of the starter unit. When the amplitude of oscillation of the member decreases to where the second photocell is not traversed, the trigger pulse is received by the counter. The counter then operates to register the desired number of oscillations and initiates and terminates a timer for measuring the time interval for the preselected number of oscillations.
Souza, Gérson F; Moreira, Graciane L; Tufanin, Andréa; Gazzotti, Mariana R; Castro, Antonio A; Jardim, José R; Nascimento, Oliver A
2017-08-01
The Glittre activities of daily living (ADL) test is supposed to evaluate the functional capacity of COPD patients. The physiological requirements of the test and the time taken to perform it by COPD patients in different disease stages are not well known. The objective of this work was to compare the metabolic, ventilatory, and cardiac requirements and the time taken to carry out the Glittre ADL test by COPD subjects with mild, moderate, and severe disease. Spirometry, Medical Research Council questionnaire, cardiopulmonary exercise test, and 2 Glittre ADL tests were evaluated in 62 COPD subjects. Oxygen uptake (V̇O2), carbon dioxide production, pulmonary ventilation, breathing frequency, heart rate, SpO2, and dyspnea were analyzed before and at the end of the tests. Maximum voluntary ventilation, Glittre peak V̇O2/cardiopulmonary exercise test (CPET) peak V̇O2, Glittre V̇E/maximum voluntary ventilation, and Glittre peak heart rate/CPET peak heart rate ratios were calculated to analyze their reserves. Subjects carried out the Glittre ADL test with similar absolute metabolic, ventilatory, and cardiac requirements. Ventilatory reserve decreased progressively from mild to severe COPD subjects (P < .001 for Global Initiative for Chronic Obstructive Lung Disease [GOLD] 1 vs GOLD 2, P < .001 for GOLD 1 vs GOLD 3, and P < .001 for GOLD 2 vs GOLD 3). Severe subjects with COPD presented a significantly lower metabolic reserve than the mild and moderate subjects (P = .006 and P = .043, respectively) and significantly lower Glittre peak heart rate/CPET peak heart rate than mild subjects (P = .01). Time taken to carry out the Glittre ADL test was similar among the groups (P = .82 for GOLD 1 vs GOLD 2, P = .19 for GOLD 1 vs GOLD 3, and P = .45 for GOLD 2 vs GOLD 3). As the degree of air-flow obstruction progresses, the COPD subjects present significantly lower ventilatory reserve to perform the Glittre ADL test. In addition, metabolic and cardiac reserves may differentiate the severe subjects. These variables may be better measures to differentiate functional performance than Glittre ADL time. Copyright © 2017 by Daedalus Enterprises.
Use of iodine for water disinfection: iodine toxicity and maximum recommended dose.
Backer, H; Hollowell, J
2000-01-01
Iodine is an effective, simple, and cost-efficient means of water disinfection for people who vacation, travel, or work in areas where municipal water treatment is not reliable. However, there is considerable controversy about the maximum safe iodine dose and duration of use when iodine is ingested in excess of the recommended daily dietary amount. The major health effect of concern with excess iodine ingestion is thyroid disorders, primarily hypothyroidism with or without iodine-induced goiter. A review of the human trials on the safety of iodine ingestion indicates that neither the maximum recommended dietary dose (2 mg/day) nor the maximum recommended duration of use (3 weeks) has a firm basis. Rather than a clear threshold response level or a linear and temporal dose-response relationship between iodine intake and thyroid function, there appears to be marked individual sensitivity, often resulting from unmasking of underlying thyroid disease. The use of iodine for water disinfection requires a risk-benefit decision based on iodine's benefit as a disinfectant and the changes it induces in thyroid physiology. By using appropriate disinfection techniques and monitoring thyroid function, most people can use iodine for water treatment over a prolonged period of time. PMID:10964787
NASA Astrophysics Data System (ADS)
Ginsburg, B. R.
The design criteria, materials, and initial test results of composite flywheels produced under DOE/Sandia contract are reported. The flywheels were required to store from 1-5 kWh with a total energy density of 80 W-h/kg at the maximum operational speed. The maximum diameter was set at 0.6 m, coupled to a maximum thickness of 0.2 m. A maximum running time at full speed of 1000 hr, in addition to a 10,000-cycle lifetime, was mandated, together with a radial overlap in the material. The unit selected was a circumferentially wound composite rim made of graphite/epoxy mounted on an aluminum mandrel ring connected to an aluminum hub consisting of two constant stress disks. A tangentially wound graphite/epoxy overlap covered the rings. All conditions, i.e., rotation at 22,000 rpm and a measured storage of 1.94 kWh, were verified in the first test series, although a second flywheel failed in subsequent tests when the temperature was inadvertently allowed to rise from 15 F to over 200 F. Retest of the first flywheel again satisfied design goals. The units are considered ideal for coupling with solar energy and wind turbine systems.
Investigations of Novel Surface Modification Techniques for Wear Resistant Al and Mg Based Materials
1994-01-01
microhardness to resist the abrasive wear. Moreover, it is required to form dense or fine-porous uniform layers to provide the antifriction characteristics...technological regimes for production of OCC having maximum thickness, microhardness and uniformity are expediently carried out using the silicate-alkali...includes at the same time both the index of the process effectiveness and the strength and geometrical characteristics of the product. In connection
Lamberti, A; Vanlanduit, S; De Pauw, B; Berghmans, F
2014-03-24
Fiber Bragg Gratings (FBGs) can be used as sensors for strain, temperature and pressure measurements. For this purpose, the ability to determine the Bragg peak wavelength with adequate wavelength resolution and accuracy is essential. However, conventional peak detection techniques, such as the maximum detection algorithm, can yield inaccurate and imprecise results, especially when the Signal to Noise Ratio (SNR) and the wavelength resolution are poor. Other techniques, such as the cross-correlation demodulation algorithm are more precise and accurate but require a considerable higher computational effort. To overcome these problems, we developed a novel fast phase correlation (FPC) peak detection algorithm, which computes the wavelength shift in the reflected spectrum of a FBG sensor. This paper analyzes the performance of the FPC algorithm for different values of the SNR and wavelength resolution. Using simulations and experiments, we compared the FPC with the maximum detection and cross-correlation algorithms. The FPC method demonstrated a detection precision and accuracy comparable with those of cross-correlation demodulation and considerably higher than those obtained with the maximum detection technique. Additionally, FPC showed to be about 50 times faster than the cross-correlation. It is therefore a promising tool for future implementation in real-time systems or in embedded hardware intended for FBG sensor interrogation.
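For context, the two baseline techniques mentioned above can be sketched as follows: maximum detection versus a cross-correlation shift estimate on a synthetic Gaussian FBG reflection spectrum. This is not an implementation of the fast phase correlation algorithm itself, and all parameters are assumptions.

```python
import numpy as np

# Hedged sketch contrasting two baseline FBG peak-detection techniques:
# (1) maximum detection (wavelength of the largest sample) and
# (2) cross-correlation demodulation (lag that best aligns the measured
# spectrum with a reference). The Gaussian spectrum is illustrative only.

rng = np.random.default_rng(5)
wl = np.arange(1549.0, 1551.0, 0.005)                 # wavelength grid [nm]
fwhm, true_shift = 0.2, 0.042                         # nm

def spectrum(center):
    return np.exp(-4 * np.log(2) * (wl - center) ** 2 / fwhm ** 2)

ref = spectrum(1550.0)
meas = spectrum(1550.0 + true_shift) + rng.normal(0, 0.05, wl.size)   # noisy, shifted

# 1) Maximum detection: limited by the grid resolution and the noise.
shift_max = wl[np.argmax(meas)] - wl[np.argmax(ref)]

# 2) Cross-correlation: lag that maximizes correlation with the reference.
lags = np.arange(-wl.size + 1, wl.size)
xc = np.correlate(meas - meas.mean(), ref - ref.mean(), mode="full")
shift_xcorr = lags[np.argmax(xc)] * 0.005

print(f"true {true_shift:.3f} nm, max-detect {shift_max:.3f} nm, "
      f"xcorr {shift_xcorr:.3f} nm")
```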
Design Spectrum Analysis in NASTRAN
NASA Technical Reports Server (NTRS)
Butler, T. G.
1984-01-01
The utility of Design Spectrum Analysis is to give a mode by mode characterization of the behavior of a design under a given loading. The theory of design spectrum is discussed after operations are explained. User instructions are taken up here in three parts: Transient Preface, Maximum Envelope Spectrum, and RMS Average Spectrum, followed by a Summary Table. A single DMAP ALTER packet will provide for all parts of the design spectrum operations. The starting point for getting a modal break-down of the response to acceleration loading is the Modal Transient rigid format. After eigenvalue extraction, modal vectors need to be isolated in the full set of physical coordinates (P-sized as opposed to the D-sized vectors in RF 12). After integration for transient response, the results are scanned over the solution time interval for the peak values and for the times that they occur. A module called SCAN was written to do this job; it organizes these maxima into a diagonal output matrix. The maximum amplifier in each mode is applied to the eigenvector of each mode, which then reveals the maximum displacements, stresses, forces and boundary reactions that the structure will experience for a load history, mode by mode. The standard NASTRAN output processors have been modified for this task. It is required that modes be normalized to mass.
First passage times in homogeneous nucleation: Dependence on the total number of particles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yvinec, Romain; Bernard, Samuel; Pujo-Menjouet, Laurent
2016-01-21
Motivated by nucleation and molecular aggregation in physical, chemical, and biological settings, we present an extension to a thorough analysis of the stochastic self-assembly of a fixed number of identical particles in a finite volume. We study the statistics of times required for maximal clusters to be completed, starting from a pure-monomeric particle configuration. For finite volumes, we extend previous analytical approaches to the case of arbitrary size-dependent aggregation and fragmentation kinetic rates. For larger volumes, we develop a scaling framework to study the first assembly time behavior as a function of the total quantity of particles. We find that the mean time to first completion of a maximum-sized cluster may have a surprisingly weak dependence on the total number of particles. We highlight how higher statistics (variance, distribution) of the first passage time may nevertheless help to infer key parameters, such as the size of the maximum cluster. Finally, we present a framework to quantify formation of macroscopic sized clusters, which are (asymptotically) very unlikely and occur as a large deviation phenomenon from the mean-field limit. We argue that this framework is suitable to describe phase transition phenomena, as inherent infrequent stochastic processes, in contrast to classical nucleation theory.
First passage times in homogeneous nucleation: Dependence on the total number of particles
NASA Astrophysics Data System (ADS)
Yvinec, Romain; Bernard, Samuel; Hingant, Erwan; Pujo-Menjouet, Laurent
2016-01-01
Motivated by nucleation and molecular aggregation in physical, chemical, and biological settings, we present an extension to a thorough analysis of the stochastic self-assembly of a fixed number of identical particles in a finite volume. We study the statistics of times required for maximal clusters to be completed, starting from a pure-monomeric particle configuration. For finite volumes, we extend previous analytical approaches to the case of arbitrary size-dependent aggregation and fragmentation kinetic rates. For larger volumes, we develop a scaling framework to study the first assembly time behavior as a function of the total quantity of particles. We find that the mean time to first completion of a maximum-sized cluster may have a surprisingly weak dependence on the total number of particles. We highlight how higher statistics (variance, distribution) of the first passage time may nevertheless help to infer key parameters, such as the size of the maximum cluster. Finally, we present a framework to quantify formation of macroscopic sized clusters, which are (asymptotically) very unlikely and occur as a large deviation phenomenon from the mean-field limit. We argue that this framework is suitable to describe phase transition phenomena, as inherent infrequent stochastic processes, in contrast to classical nucleation theory.
Frequency-selective quantitation of short-echo time 1H magnetic resonance spectra
NASA Astrophysics Data System (ADS)
Poullet, Jean-Baptiste; Sima, Diana M.; Van Huffel, Sabine; Van Hecke, Paul
2007-06-01
Accurate and efficient filtering techniques are required to suppress large nuisance components present in short-echo time magnetic resonance (MR) spectra. This paper discusses two powerful filtering techniques used in long-echo time MR spectral quantitation, the maximum-phase FIR filter (MP-FIR) and the Hankel-Lanczos Singular Value Decomposition with Partial ReOrthogonalization (HLSVD-PRO), and shows that they can be applied to their more complex short-echo time spectral counterparts. Both filters are validated and compared through extensive simulations. Their properties are discussed. In particular, the capability of MP-FIR for dealing with macromolecular components is emphasized. Although this property does not make a large difference for long-echo time MR spectra, it can be important when quantifying short-echo time spectra.
Maximum kinetic energy considerations in proton stereotactic radiosurgery.
Sengbusch, Evan R; Mackie, Thomas R
2011-04-12
The purpose of this study was to determine the maximum proton kinetic energy required to treat a given percentage of patients eligible for stereotactic radiosurgery (SRS) with coplanar arc-based proton therapy, contingent upon the number and location of gantry angles used. Treatment plans from 100 consecutive patients treated with SRS at the University of Wisconsin Carbone Cancer Center between June of 2007 and March of 2010 were analyzed. For each target volume within each patient, in-house software was used to place proton pencil beam spots over the distal surface of the target volume from 51 equally-spaced gantry angles of up to 360°. For each beam spot, the radiological path length from the surface of the patient to the distal boundary of the target was then calculated along a ray from the gantry location to the location of the beam spot. This data was used to generate a maximum proton energy requirement for each patient as a function of the arc length that would be spanned by the gantry angles used in a given treatment. If only a single treatment angle is required, 100% of the patients included in the study could be treated by a proton beam with a maximum kinetic energy of 118 MeV. As the length of the treatment arc is increased to 90°, 180°, 270°, and 360°, the maximum energy requirement increases to 127, 145, 156, and 179 MeV, respectively. A very high percentage of SRS patients could be treated at relatively low proton energies if the gantry angles used in the treatment plan do not span a large treatment arc. Maximum proton kinetic energy requirements increase linearly with size of the treatment arc.
NASA Astrophysics Data System (ADS)
Yeck, William L.; Block, Lisa V.; Wood, Christopher K.; King, Vanessa M.
2015-01-01
The Paradox Valley Unit (PVU), a salinity control project in southwest Colorado, disposes of brine in a single deep injection well. Since the initiation of injection at the PVU in 1991, earthquakes have been repeatedly induced. PVU closely monitors all seismicity in the Paradox Valley region with a dense surface seismic network. A key factor for understanding the seismic hazard from PVU injection is the maximum magnitude earthquake that can be induced. The estimate of maximum magnitude of induced earthquakes is difficult to constrain as, unlike naturally occurring earthquakes, the maximum magnitude of induced earthquakes changes over time and is affected by injection parameters. We investigate temporal variations in maximum magnitudes of induced earthquakes at the PVU using two methods. First, we consider the relationship between the total cumulative injected volume and the history of observed largest earthquakes at the PVU. Second, we explore the relationship between maximum magnitude and the geometry of individual seismicity clusters. Under the assumptions that: (i) elevated pore pressures must be distributed over an entire fault surface to initiate rupture and (ii) the location of induced events delineates volumes of sufficiently high pore-pressure to induce rupture, we calculate the largest allowable vertical penny-shaped faults, and investigate the potential earthquake magnitudes represented by their rupture. Results from both the injection volume and geometrical methods suggest that the PVU has the potential to induce events up to roughly MW 5 in the region directly surrounding the well; however, the largest observed earthquake to date has been about a magnitude unit smaller than this predicted maximum. In the seismicity cluster surrounding the injection well, the maximum potential earthquake size estimated by these methods and the observed maximum magnitudes have remained steady since the mid-2000s. These observations suggest that either these methods overpredict maximum magnitude for this area or that long time delays are required for sufficient pore-pressure diffusion to occur to cause rupture along an entire fault segment. We note that earthquake clusters can initiate and grow rapidly over the course of 1 or 2 yr, thus making it difficult to predict maximum earthquake magnitudes far into the future. The abrupt onset of seismicity with injection indicates that pore-pressure increases near the well have been sufficient to trigger earthquakes under pre-existing tectonic stresses. However, we do not observe remote triggering from large teleseismic earthquakes, which suggests that the stress perturbations generated from those events are too small to trigger rupture, even with the increased pore pressures.
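A hedged sketch of the second (geometrical) idea: treat the seismicity cluster as bounding a circular, penny-shaped rupture, convert its radius to seismic moment with the standard circular-crack stress-drop relation M0 = (16/7)*delta_sigma*r^3, and convert moment to magnitude with the Hanks-Kanamori relation. The stress drop and radii below are illustrative assumptions, not the values used in the study.

```python
# Hedged sketch: moment magnitude of a penny-shaped (circular) rupture whose radius
# is bounded by a seismicity cluster. Stress drop and radii are illustrative only.
import math

def mw_from_penny_fault(radius_m: float, stress_drop_pa: float = 3e6) -> float:
    """Moment magnitude of a circular rupture of given radius (Eshelby crack model)."""
    m0 = (16.0 / 7.0) * stress_drop_pa * radius_m ** 3   # seismic moment [N m]
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)          # Hanks & Kanamori (1979)

for r_km in (0.5, 1.0, 2.0, 4.0):
    print(f"rupture radius {r_km:4.1f} km -> Mw {mw_from_penny_fault(r_km * 1e3):4.2f}")
```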
Yang, Lili; Suzuki, Eduardo Yugo; Suzuki, Boonsiva
2014-01-01
Purpose: The purpose of this study was to compare the distraction forces and the biomechanical effects between two different intraoperative surgical procedures (down-fracture [DF] and non-DF [NDF]) for maxillary distraction osteogenesis. Materials and Methods: Eight patients were assigned into two groups according to the surgical procedure: DF, n = 6 versus NDF, n = 2. Lateral cephalograms taken preoperatively (T1), immediately after removal of the distraction device (T2), and after at least a 6-month follow-up period (T3) were analyzed. Assessment of distraction forces was performed during the distraction period. The Mann–Whitney U-test was used to compare the difference in the amount of advancement, the maximum distraction force and the amount of relapse. Results: Although a significantly greater amount of maxillary movement was observed in the DF group (median 9.5 mm; minimum-maximum 7.9-14.1 mm) than in the NDF group (median 5.9 mm; minimum-maximum 4.4-7.6 mm), significantly lower maximum distraction forces were observed in the DF group (median 16.4 N; minimum-maximum 15.1-24.6 N) than in the NDF group (median 32.9 N; minimum-maximum 27.6-38.2 N). A significantly greater amount of dental anchorage loss was observed in the NDF group. Moreover, the amount of relapse observed in the NDF group was approximately 3.5 times greater than in the DF group. Conclusions: In this study, it seemed that the use of the NDF procedure resulted in lower levels of maxillary mobility at the time of the maxillary distraction, consequently requiring greater amounts of force to advance the maxillary bone. Moreover, it also resulted in a reduced amount of maxillary movement, a greater amount of dental anchorage loss and poor treatment stability. PMID:25593865
A microprocessor based high speed packet switch for satellite communications
NASA Technical Reports Server (NTRS)
Arozullah, M.; Crist, S. C.
1980-01-01
The architectures of a single processor, a three processor, and a multiple processor system are described. The hardware circuits, and software routines required for implementing the three and multiple processor designs are presented. A bit-slice microprocessor was designed and microprogrammed. Maximum throughput was calculated for all three designs. Queue theoretic models for these three designs were developed and utilized to obtain analytical expressions for the average waiting times, overall average response times and average queue sizes. From these expressions, graphs were obtained showing the effect on the system performance of a number of design parameters.
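A hedged sketch of the kind of queueing-theoretic expressions referred to above, using the textbook M/M/1 formulas for mean waiting time, mean response time and mean queue length. The arrival and service rates are placeholder values, not figures from the report.

```python
# Sketch of textbook M/M/1 queueing expressions for waiting time, response time and
# queue size. Arrival and service rates are placeholder values, not report data.
def mm1_metrics(arrival_rate: float, service_rate: float) -> dict:
    rho = arrival_rate / service_rate          # server utilisation (must be < 1)
    if rho >= 1.0:
        raise ValueError("queue is unstable: utilisation >= 1")
    wq = rho / (service_rate - arrival_rate)   # mean waiting time in queue
    w = 1.0 / (service_rate - arrival_rate)    # mean response (sojourn) time
    lq = rho ** 2 / (1.0 - rho)                # mean number waiting in queue
    return {"utilisation": rho, "mean_wait": wq, "mean_response": w, "mean_queue": lq}

print(mm1_metrics(arrival_rate=800.0, service_rate=1000.0))   # packets/s, illustrative
```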
Parallel Multi-Step/Multi-Rate Integration of Two-Time Scale Dynamic Systems
NASA Technical Reports Server (NTRS)
Chang, Johnny T.; Ploen, Scott R.; Sohl, Garett. A,; Martin, Bryan J.
2004-01-01
Increasing demands on the fidelity of real-time and high-fidelity simulations are stressing the capacity of modern processors. New integration techniques are required that provide maximum efficiency for systems that are parallelizable. However, many current techniques make assumptions that are at odds with non-cascadable systems. A new serial multi-step/multi-rate integration algorithm for dual-timescale continuous state systems is presented which applies to these systems, and is extended to a parallel multi-step/multi-rate algorithm. The superior performance of both algorithms is demonstrated through a representative example.
30 CFR 7.84 - Technical requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Underground Coal Mines § 7.84 Technical requirements. (a) Fuel injection adjustment. The fuel injection system of the engine shall be constructed so that the quantity of fuel injected can be controlled at a... design. (b) Maximum fuel-air ratio. At the maximum fuel-air ratio determined by § 7.87 of this part, the...
30 CFR 7.84 - Technical requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Underground Coal Mines § 7.84 Technical requirements. (a) Fuel injection adjustment. The fuel injection system of the engine shall be constructed so that the quantity of fuel injected can be controlled at a... design. (b) Maximum fuel-air ratio. At the maximum fuel-air ratio determined by § 7.87 of this part, the...
30 CFR 7.84 - Technical requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Underground Coal Mines § 7.84 Technical requirements. (a) Fuel injection adjustment. The fuel injection system of the engine shall be constructed so that the quantity of fuel injected can be controlled at a... design. (b) Maximum fuel-air ratio. At the maximum fuel-air ratio determined by § 7.87 of this part, the...
30 CFR 7.84 - Technical requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Underground Coal Mines § 7.84 Technical requirements. (a) Fuel injection adjustment. The fuel injection system of the engine shall be constructed so that the quantity of fuel injected can be controlled at a... design. (b) Maximum fuel-air ratio. At the maximum fuel-air ratio determined by § 7.87 of this part, the...
30 CFR 7.84 - Technical requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Underground Coal Mines § 7.84 Technical requirements. (a) Fuel injection adjustment. The fuel injection system of the engine shall be constructed so that the quantity of fuel injected can be controlled at a... design. (b) Maximum fuel-air ratio. At the maximum fuel-air ratio determined by § 7.87 of this part, the...
NASA Technical Reports Server (NTRS)
Williams, J. L.; Copeland, R. J.; Nebbon, B. W.
1972-01-01
The most promising closed CO2 control concept identified by this study is the solid pellet, Mg(OH)2 system. Two promising approaches to closed thermal control were identified. The AHS system uses modular fusible heat sinks, with a contingency evaporative mode, to allow maximum EVA mobility. The AHS/refrigerator top-off subsystem requires an umbilical to minimize expendables, but less EVA time is used to operate the system, since there is no requirement to change modules. Both of these subsystems are thought to be practical solutions to the problem of providing closed heat rejection for an EVA system.
Storage peak gas-turbine power unit
NASA Technical Reports Server (NTRS)
Tsinkotski, B.
1980-01-01
A storage gas-turbine power plant using a two-cylinder compressor with intermediate cooling is studied. On the basis of measured characteristics of a 0.25 MW compressor, computer calculations of the parameters of the loading process of a constant capacity storage unit (05.3 million cu m) were carried out. The required compressor power as a function of time, with and without final cooling, was computed. Parameters of maximum loading and discharging of the storage unit were calculated, and it was found that for the complete loading of a fully unloaded storage unit, a capacity of 1 to 1.5 million cubic meters is required, depending on the final cooling.
Surface waters of Elk Creek basin in southwestern Oklahoma
Westfall, A.O.
1963-01-01
The purpose of this study is to (1) determine the average discharge during a period that is representative of average streamflow conditions, (2) determine the range of discharge, and (3) determine the storage required to supplement natural flows during drought periods. Elk Creek drains 587 square miles of the North Fork Red River basin. The climate is subhumid, and precipitation averages about 23 inches per year. The average discharge at the gaging station near Hobart is 50 cfs (cubic feet per second) or 36,200 acre-feet per year during a 19-year base period, water years 1938-56. The yearly average discharge ranged from 4.6 cfs in 1940 to 146 cfs in 1957. Maximum runoff generally occurs during May and June. The maximum monthly runoff was 64,520 acre-feet in May 1957. The maximum yearly runoff was 105,500 acre-feet in 1957. There is no sustained base flow in the basin. Severe droughts occurred in 1938-40 and 1952-56. The most extended drought occurred from June 1951 to March 1957, during which time there was a prolonged period of no flow of 182 days in 1954-55. A usable storage of 28,000 acre-feet would have been required to provide a regulated discharge of 1,500 acre-feet per month throughout these drought periods. (available as photostat copy only)
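A hedged sketch of the standard mass-balance (sequent-peak style) way to estimate the usable storage needed to sustain a target monthly draft from an inflow record, the third objective described above. The inflow series below is synthetic and illustrative, not Elk Creek gaging-station data.

```python
# Hedged sketch: sequent-peak style estimate of the usable storage needed to sustain
# a fixed monthly draft. The monthly inflow series is synthetic, not Elk Creek data.
def required_storage(monthly_inflow_af, demand_af_per_month):
    """Largest cumulative deficit of inflow below demand, in acre-feet."""
    deficit, worst = 0.0, 0.0
    for q in monthly_inflow_af:
        deficit = max(0.0, deficit + demand_af_per_month - q)   # draw down / refill
        worst = max(worst, deficit)
    return worst

inflows = [4000, 200, 0, 0, 100, 50, 0, 300, 6000, 9000, 1200, 800]  # acre-ft/month
print(required_storage(inflows, demand_af_per_month=1500))           # acre-feet
```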
Global discrimination of land cover types from metrics derived from AVHRR pathfinder data
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeFries, R.; Hansen, M.; Townshend, J.
1995-12-01
Global data sets of land cover are a significant requirement for global biogeochemical and climate models. Remotely sensed satellite data is an increasingly attractive source for deriving these data sets due to the resulting internal consistency, reproducibility, and coverage in locations where ground knowledge is sparse. Seasonal changes in the greenness of vegetation, described in remotely sensed data as changes in the normalized difference vegetation index (NDVI) throughout the year, have been the basis for discriminating between cover types in previous attempts to derive land cover from AVHRR data at global and continental scales. This study examines the use of metrics derived from the NDVI temporal profile, as well as metrics derived from observations in red, infrared, and thermal bands, to improve discrimination between 12 cover types on a global scale. According to separability measures calculated from Bhattacharya distances, average separabilities improved by using 12 of the 16 metrics tested (1.97) compared to separabilities using 12 monthly NDVI values alone (1.88). Overall, the most robust metrics for discriminating between cover types were: mean NDVI, maximum NDVI, NDVI amplitude, AVHRR Band 2 (near-infrared reflectance) and Band 1 (red reflectance) corresponding to the time of maximum NDVI, and maximum land surface temperature. Deciduous and evergreen vegetation can be distinguished by mean NDVI, maximum NDVI, NDVI amplitude, and maximum land surface temperature. Needleleaf and broadleaf vegetation can be distinguished by either mean NDVI and NDVI amplitude or maximum NDVI and NDVI amplitude.
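A sketch of the separability measure mentioned above: the Bhattacharyya distance between two land-cover classes modelled as multivariate Gaussians in the metric feature space, and the derived Jeffries-Matusita separability on the 0 to 2 scale that the quoted values (1.88, 1.97) use. The class means and covariances below are synthetic stand-ins, not AVHRR statistics.

```python
# Sketch: Bhattacharyya distance between two Gaussian classes and the corresponding
# Jeffries-Matusita separability. Class statistics below are synthetic stand-ins.
import numpy as np

def bhattacharyya(mu1, cov1, mu2, cov2):
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

mu_a, cov_a = np.array([0.6, 0.3]), np.array([[0.01, 0.0], [0.0, 0.02]])
mu_b, cov_b = np.array([0.3, 0.1]), np.array([[0.02, 0.0], [0.0, 0.02]])
b = bhattacharyya(mu_a, cov_a, mu_b, cov_b)
jm = 2.0 * (1.0 - np.exp(-b))     # Jeffries-Matusita separability, 0 (none) to 2 (full)
print(round(b, 3), round(jm, 3))
```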
NASA Technical Reports Server (NTRS)
Macdoran, P. F. (Inventor)
1984-01-01
The columnar electron content of the ionosphere between a spacecraft and a receiver is measured in realtime by cross correlating two coherently modulated signals transmitted at different frequencies (L1,L2) from the spacecraft to the receiver using a cross correlator. The time difference of arrival of the modulated signals is proportional to electron content of the ionosphere. A variable delay is adjusted relative to a fixed delay in the respective channels (L1,L2) to produce a maximum at the cross correlator output. The difference in delay required to produce this maximum is a measure of the columnar electron content of the ionosphere. A plurality of monitoring stations and spacecraft (Global Positioning System satellites) are employed to locate any terrestrial event that produces an ionospheric disturbance.
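A sketch of the dispersive group-delay relation behind this measurement: the extra delay of the lower-frequency (L2) modulation relative to L1 is proportional to the columnar electron content (TEC), so TEC can be recovered from the differential delay found by cross correlation. The GPS L1/L2 frequencies and dispersion constant are standard; the measured delay value is illustrative.

```python
# Sketch: columnar electron content (TEC) from the L1/L2 differential group delay.
# The 20 ns delay is an illustrative value, not a measurement from the patent.
C = 299_792_458.0                 # speed of light [m/s]
K = 40.3                          # ionospheric dispersion constant [m^3/s^2]
F1, F2 = 1.57542e9, 1.22760e9     # GPS L1 and L2 carrier frequencies [Hz]

def tec_from_delay(delta_t_s: float) -> float:
    """Columnar electron content [el/m^2] from the L1/L2 differential group delay."""
    return delta_t_s * C / (K * (1.0 / F2 ** 2 - 1.0 / F1 ** 2))

delta_t = 20e-9                   # 20 ns differential delay (illustrative)
tec = tec_from_delay(delta_t)
print(f"TEC = {tec:.2e} el/m^2 = {tec / 1e16:.1f} TECU")
```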
Parallel 3D-TLM algorithm for simulation of the Earth-ionosphere cavity
NASA Astrophysics Data System (ADS)
Toledo-Redondo, Sergio; Salinas, Alfonso; Morente-Molinera, Juan Antonio; Méndez, Antonio; Fornieles, Jesús; Portí, Jorge; Morente, Juan Antonio
2013-03-01
A parallel 3D algorithm for solving time-domain electromagnetic problems with arbitrary geometries is presented. The technique employed is the Transmission Line Modeling (TLM) method implemented in Shared Memory (SM) environments. The benchmarking performed reveals that the maximum speedup depends on the memory size of the problem as well as on multiple hardware factors, such as the arrangement of CPUs, cache, and memory. A maximum speedup of 15 has been measured for the largest problem. In certain circumstances of low memory requirements, superlinear speedup is achieved using our algorithm. The algorithm is used to model the Earth-ionosphere cavity, thus enabling a study of the natural electromagnetic phenomena that occur in it. It allows complete 3D simulations of the cavity with a resolution of 10 km within a reasonable timescale.
Robust modular product family design
NASA Astrophysics Data System (ADS)
Jiang, Lan; Allada, Venkat
2001-10-01
This paper presents a modified Taguchi methodology to improve the robustness of modular product families against changes in customer requirements. The general research questions posed in this paper are: (1) How can a product family (PF) be designed so that it is robust enough to accommodate future customer requirements? (2) How far into the future should designers look to design a robust product family? An example of a simplified vacuum product family is used to illustrate our methodology. In the example, customer requirements are selected as signal factors; future changes of customer requirements are selected as noise factors; an index called the quality characteristic (QC) is defined to evaluate the vacuum product family; and the module instance matrix (M) is selected as the control factor. Initially a relation between the objective function (QC) and the control factor (M) is established, and then the feasible M space is systematically explored using a simplex method to determine the optimum M and the corresponding QC values. Next, various noise levels at different time points are introduced into the system. For each noise level, the optimal values of M and QC are computed and plotted on a QC-chart. The tunable time period of the control factor (the module matrix, M) is computed using the QC-chart. The tunable time period represents the maximum time for which a given control factor can be used to satisfy current and future customer needs. Finally, a robustness index is used to break up the tunable time period into suitable time periods that designers should consider while designing product families.
Modular thermal analyzer routine, volume 1
NASA Technical Reports Server (NTRS)
Oren, J. A.; Phillips, M. A.; Williams, D. R.
1972-01-01
The Modular Thermal Analyzer Routine (MOTAR) is a general thermal analysis routine with strong capabilities for performing thermal analysis of systems containing flowing fluids, fluid system controls (valves, heat exchangers, etc.), life support systems, and thermal radiation situations. Its modular organization permits the analysis of a very wide range of thermal problems, from simple problems containing a few conduction nodes to those containing complicated flow and radiation analysis, with each problem type being analyzed with peak computational efficiency and maximum ease of use. The organization and programming methods applied to MOTAR achieved a high degree of computer utilization efficiency in terms of computer execution time and storage space required for a given problem. The computer time required to perform a given problem on MOTAR is approximately 40 to 50 percent of that required by the currently existing, widely used routines. The computer storage requirement for MOTAR is approximately 25 percent more than that of the most commonly used routines for the simplest problems, but the data storage techniques for the more complicated options should save a considerable amount of space.
Gemini primary mirror in situ wash
NASA Astrophysics Data System (ADS)
Vucina, Tomislav; Boccas, Maxime; Araya, Claudio; Ah Hee, Clayton; Cavedoni, Chas
2008-07-01
The Gemini twins were the first large modern telescopes to receive protected silver coatings on their mirrors in 2004. The low emissivity requirement is fundamental for the IR optimization. In the mid-IR a factor of two reduction in telescope emissivity is equivalent to increasing the collecting area by the same factor. Our emissivity maintenance requirement is very stringent: 0.5% maximum degradation during operations, at any single wavelength beyond 2.2 μm. We developed a very rigorous standard to wash the primary mirrors in the telescope without science down time. The in-situ washes are made regularly, and the reflectivity and emissivity gains are significant. The coating lifetime has been extended far more than our original expectations. In this report we describe the in-situ process and hardware, explain our maintenance plan, and show results of the coating performance over time.
Powered Descent Guidance with General Thrust-Pointing Constraints
NASA Technical Reports Server (NTRS)
Carson, John M., III; Acikmese, Behcet; Blackmore, Lars
2013-01-01
The Powered Descent Guidance (PDG) algorithm and software for generating Mars pinpoint or precision landing guidance profiles has been enhanced to incorporate thrust-pointing constraints. Pointing constraints would typically be needed for onboard sensor and navigation systems that have specific field-of-view requirements to generate valid ground proximity and terrain-relative state measurements. The original PDG algorithm was designed to enforce both control and state constraints, including maximum and minimum thrust bounds, avoidance of the ground or descent within a glide slope cone, and maximum speed limits. The thrust-bound and thrust-pointing constraints within PDG are non-convex, which in general requires nonlinear optimization methods to generate solutions. The short duration of Mars powered descent requires guaranteed PDG convergence to a solution within a finite time; however, nonlinear optimization methods have no guarantees of convergence to the global optimal or convergence within finite computation time. A lossless convexification developed for the original PDG algorithm relaxed the non-convex thrust bound constraints. This relaxation was theoretically proven to provide valid and optimal solutions for the original, non-convex problem within a convex framework. As with the thrust bound constraint, a relaxation of the thrust-pointing constraint also provides a lossless convexification that ensures the enhanced relaxed PDG algorithm remains convex and retains validity for the original nonconvex problem. The enhanced PDG algorithm provides guidance profiles for pinpoint and precision landing that minimize fuel usage, minimize landing error to the target, and ensure satisfaction of all position and control constraints, including thrust bounds and now thrust-pointing constraints.
In orbit adiabatic demagnetization refrigeration for bolometric and microcalorimetric detectors
NASA Astrophysics Data System (ADS)
Hepburn, I. D.; Ade, P. A. R.; Davenport, I.; Smith, A.; Sumner, T. J.
1992-12-01
The new generation of photon detectors for satellite based mm/submm and X-ray astronomical observations require cooling to temperatures in the range 60 to 300 mK. At present Adiabatic Demagnetization Refrigeration (ADR) is the best proposed technique for producing these temperatures in orbit due to its inherent simplicity and gravity independent operation. For the efficient utilization of an ADR it is important to realize long operational times at base temperature with short recycle times. These criteria are dependent on several parameters; the required operating temperature, the cryogen bath temperature, the amount of heat leakage to the paramagnetic salt, the volume and type of salt and the maximum obtainable magnetic field. For space application these parameters are restricted by the limitations imposed on the physical size, the mass, the available electrical power and the cooling power available. The design considerations required in order to match these parameters are described and test data from a working laboratory system is presented.
Fetterman, J Gregor; Killeen, P Richard
2010-09-01
Pigeons pecked on three keys, responses to one of which could be reinforced after a few pecks, to a second key after a somewhat larger number of pecks, and to a third key after the maximum pecking requirement. The values of the pecking requirements and the proportion of trials ending with reinforcement were varied. Transits among the keys were an orderly function of peck number, and showed approximately proportional changes with changes in the pecking requirements, consistent with Weber's law. Standard deviations of the switch points between successive keys increased more slowly within a condition than across conditions. Changes in reinforcement probability produced changes in the location of the psychometric functions that were consistent with models of timing. Analyses of the number of pecks emitted and the duration of the pecking sequences demonstrated that peck number was the primary determinant of choice, but that passage of time also played some role. We capture the basic results with a standard model of counting, which we qualify to account for the secondary experiments. Copyright 2010 Elsevier B.V. All rights reserved.
Code of Federal Regulations, 2012 CFR
2012-10-01
... maximum amount of $11.35 for services furnished in a hospital emergency room if those services are not... 42 Public Health 4 2012-10-01 2012-10-01 false Maximum allowable cost-sharing charges on targeted... Requirements: Enrollee Financial Responsibilities § 457.555 Maximum allowable cost-sharing charges on targeted...
24 CFR 200.15 - Maximum mortgage.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Maximum mortgage. 200.15 Section 200.15 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued... Eligibility Requirements for Existing Projects Eligible Mortgage § 200.15 Maximum mortgage. Mortgages must not...
48 CFR 32.503-12 - Maximum unliquidated amount.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Maximum unliquidated amount. 32.503-12 Section 32.503-12 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION GENERAL CONTRACTING REQUIREMENTS CONTRACT FINANCING Progress Payments Based on Costs 32.503-12 Maximum...
Alluri, Nagamalleswara Rao; Vivekananthan, Venkateswaran; Chandrasekhar, Arunkumar; Kim, Sang-Jae
2018-01-18
Contrary to traditional planar flexible piezoelectric nanogenerators (PNGs), highly adaptable hemispherical shape-flexible piezoelectric composite strip (HS-FPCS) based PNGs are required to harness/measure non-linear surface motions. Therefore, a feasible, cost-effective and less time-consuming groove technique was developed to fabricate adaptable HS-FPCSs with multiple lengths. A single HS-CSPNG generates 130 V/0.8 μA and can also work as a self-powered muscle monitoring system (SP-MMS) to measure maximum human body part movements, i.e., spinal cord, throat, jaw, elbow, knee, foot stress, palm hand/finger force and inhale/exhale breath conditions at a time or at variable time intervals.
77 FR 65506 - Airworthiness Directives; The Boeing Company Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-29
...We propose to supersede an existing airworthiness directive (AD) that applies to certain The Boeing Company Model 757-200 and - 200PF series airplanes. The existing AD currently requires modification of the nacelle strut and wing structure, and repair of any damage found during the modification. Since we issued that AD, a compliance time error involving the optional threshold formula was discovered, which could allow an airplane to exceed the acceptable compliance time for addressing the unsafe condition. This proposed AD would specify a maximum compliance time limit that overrides the optional threshold formula results. We are proposing this AD to prevent fatigue cracking in primary strut structure and consequent reduced structural integrity of the strut.
Experimental investigation of drying characteristics of cornelian cherry fruits ( Cornus mas L.)
NASA Astrophysics Data System (ADS)
Ozgen, Filiz
2015-03-01
The major aim of the present paper is to investigate the drying kinetics of cornelian cherry fruits (Cornus mas L.) in a convective dryer by varying the temperature and the velocity of the drying air. Freshly harvested fruits are dried at drying air temperatures of 35, 45 and 55 °C. The drying air velocities considered are V_air = 1 and 1.5 m/s for each temperature. The required drying time is determined from the moisture ratio measurements: the time at which the moisture ratio decreases to 10% at the selected drying air temperature is recorded (t = 40-67 h). The moisture ratio, fruit temperature and energy requirement are presented as functions of drying time. The lowest drying time (40 h) is obtained when the air temperature is 55 °C and the air velocity is 1.5 m/s; the highest drying time (67 h) is found at 35 °C and 1 m/s. Both the drying air temperature and the air velocity significantly affect the energy required by the drying system. The minimum required energy is found to be 51.12 kWh, at 55 °C and 1 m/s, whilst the maximum energy requirement is 106.7 kWh, at 35 °C and 1.5 m/s. Air temperature significantly influences the total drying time, and the energy consumption decreases with increasing air temperature. The effects of the three parameters (air temperature, air velocity and drying time) on the drying characteristics have also been analysed by means of the analysis of variance method to determine their significance levels. The experimental results are in good agreement with the predicted ones.
Hidri, Lotfi; Gharbi, Anis; Louly, Mohamed Aly
2014-01-01
We focus on the two-center hybrid flow shop scheduling problem with identical parallel machines and removal times. The job removal time is the required duration to remove it from a machine after its processing. The objective is to minimize the maximum completion time (makespan). A heuristic and a lower bound are proposed for this NP-Hard problem. These procedures are based on the optimal solution of the parallel machine scheduling problem with release dates and delivery times. The heuristic is composed of two phases. The first one is a constructive phase in which an initial feasible solution is provided, while the second phase is an improvement one. Intensive computational experiments have been conducted to confirm the good performance of the proposed procedures.
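The procedures above build on scheduling with release dates and delivery times. A hedged sketch of the classical single-machine Schrage rule, a common building block for heuristics and lower bounds of this kind: whenever the machine is free, start the released job with the largest delivery time. This is a related illustration under stated assumptions, not the paper's parallel-machine procedure, and the job data are made up.

```python
# Hedged sketch: Schrage rule for one machine with release dates r, processing times p
# and delivery times q. Job data are illustrative, not instances from the paper.
import heapq

def schrage(jobs):
    """jobs: list of (r, p, q). Returns the makespan max_j (C_j + q_j)."""
    pending = sorted(jobs, key=lambda j: j[0])    # jobs ordered by release date
    ready, t, makespan, i = [], 0, 0, 0
    while i < len(pending) or ready:
        if not ready:                             # machine idle: jump to next release
            t = max(t, pending[i][0])
        while i < len(pending) and pending[i][0] <= t:
            r, p, q = pending[i]
            heapq.heappush(ready, (-q, p))        # max-heap on delivery time
            i += 1
        neg_q, p = heapq.heappop(ready)
        t += p                                    # job completes at time t
        makespan = max(makespan, t - neg_q)       # completion time + delivery time
    return makespan

print(schrage([(0, 5, 7), (2, 3, 9), (3, 4, 2), (6, 2, 8)]))
```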
Efficient Bounding Schemes for the Two-Center Hybrid Flow Shop Scheduling Problem with Removal Times
2014-01-01
We focus on the two-center hybrid flow shop scheduling problem with identical parallel machines and removal times. The job removal time is the required duration to remove it from a machine after its processing. The objective is to minimize the maximum completion time (makespan). A heuristic and a lower bound are proposed for this NP-Hard problem. These procedures are based on the optimal solution of the parallel machine scheduling problem with release dates and delivery times. The heuristic is composed of two phases. The first one is a constructive phase in which an initial feasible solution is provided, while the second phase is an improvement one. Intensive computational experiments have been conducted to confirm the good performance of the proposed procedures. PMID:25610911
Cryptographic robustness of a quantum cryptography system using phase-time coding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molotkov, S. N.
2008-01-15
A cryptographic analysis is presented of a new quantum key distribution protocol using phase-time coding. An upper bound is obtained for the error rate that guarantees secure key distribution. It is shown that the maximum tolerable error rate for this protocol depends on the counting rate in the control time slot. When no counts are detected in the control time slot, the protocol guarantees secure key distribution if the bit error rate in the sifted key does not exceed 50%. This protocol partially discriminates between errors due to system defects (e.g., imbalance of a fiber-optic interferometer) and eavesdropping. In the absence of eavesdropping, the counts detected in the control time slot are not caused by interferometer imbalance, which reduces the requirements for interferometer stability.
The Ion Propulsion System on NASA's Space Technology 4/Champollion Comet Rendezvous Mission
NASA Technical Reports Server (NTRS)
Brophy, John R.; Garner, Charles E.; Weiss, Jeffery M.
1999-01-01
The ST4/Champollion mission is designed to rendezvous with and land on the comet Tempel 1 and return data from the first-ever sampling of a comet surface. Ion propulsion is an enabling technology for this mission. The ion propulsion system on ST4 consists of three ion engines each essentially identical to the single engine that flew on the DS1 spacecraft. The ST4 propulsion system will operate at a maximum input power of 7.5 kW (3.4 times greater than that demonstrated on DS1), will produce a maximum thrust of 276 mN, and will provide a total ΔV of 11.4 km/s. To accomplish this the propulsion system will carry 385 kg of xenon. All three engines will be operated simultaneously for the first 168 days of the mission. The nominal mission requires that each engine be capable of processing 118 kg. If one engine fails after 168 days, the remaining two engines can perform the mission, but must be capable of processing 160 kg of xenon, or twice the original thruster design requirement. Detailed analyses of the thruster wear-out failure modes coupled with experience from long-duration engine tests indicate that the thrusters have a high probability of meeting the 160-kg throughput requirement.
Robust cell tracking in epithelial tissues through identification of maximum common subgraphs.
Kursawe, Jochen; Bardenet, Rémi; Zartman, Jeremiah J; Baker, Ruth E; Fletcher, Alexander G
2016-11-01
Tracking of cells in live-imaging microscopy videos of epithelial sheets is a powerful tool for investigating fundamental processes in embryonic development. Characterizing cell growth, proliferation, intercalation and apoptosis in epithelia helps us to understand how morphogenetic processes such as tissue invagination and extension are locally regulated and controlled. Accurate cell tracking requires correctly resolving cells entering or leaving the field of view between frames, cell neighbour exchanges, cell removals and cell divisions. However, current tracking methods for epithelial sheets are not robust to large morphogenetic deformations and require significant manual interventions. Here, we present a novel algorithm for epithelial cell tracking, exploiting the graph-theoretic concept of a 'maximum common subgraph' to track cells between frames of a video. Our algorithm does not require the adjustment of tissue-specific parameters, and scales in sub-quadratic time with tissue size. It does not rely on precise positional information, permitting large cell movements between frames and enabling tracking in datasets acquired at low temporal resolution due to experimental constraints such as phototoxicity. To demonstrate the method, we perform tracking on the Drosophila embryonic epidermis and compare cell-cell rearrangements to previous studies in other tissues. Our implementation is open source and generally applicable to epithelial tissues. © 2016 The Authors.
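A toy sketch of the graph-theoretic idea only: a brute-force search for a largest common induced subgraph between two small cell-neighbourhood graphs. This is exponential in graph size and is not the authors' sub-quadratic tracking algorithm; the adjacency data are made up for illustration.

```python
# Toy sketch: brute-force largest common induced subgraph between two tiny graphs.
# Exponential-time illustration of the concept only; adjacency data are invented.
from itertools import combinations, permutations

def induced_match(adj1, nodes1, adj2, nodes2):
    """Return a node mapping if the induced subgraphs are isomorphic, else None."""
    nodes1 = list(nodes1)
    for perm in permutations(nodes2):
        mapping = dict(zip(nodes1, perm))
        if all((v in adj1[u]) == (mapping[v] in adj2[mapping[u]])
               for u, v in combinations(nodes1, 2)):
            return mapping
    return None

def largest_common_subgraph(adj1, adj2):
    for size in range(min(len(adj1), len(adj2)), 0, -1):
        for s1 in combinations(adj1, size):
            for s2 in combinations(adj2, size):
                mapping = induced_match(adj1, s1, adj2, s2)
                if mapping:
                    return mapping
    return {}

frame_a = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}           # cell adjacencies, frame t
frame_b = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}    # frame t+1 (one cell left view)
print(largest_common_subgraph(frame_a, frame_b))                 # e.g. {1: 'a', 2: 'b', 3: 'c'}
```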
Robust cell tracking in epithelial tissues through identification of maximum common subgraphs
Bardenet, Rémi; Zartman, Jeremiah J.; Baker, Ruth E.
2016-01-01
Tracking of cells in live-imaging microscopy videos of epithelial sheets is a powerful tool for investigating fundamental processes in embryonic development. Characterizing cell growth, proliferation, intercalation and apoptosis in epithelia helps us to understand how morphogenetic processes such as tissue invagination and extension are locally regulated and controlled. Accurate cell tracking requires correctly resolving cells entering or leaving the field of view between frames, cell neighbour exchanges, cell removals and cell divisions. However, current tracking methods for epithelial sheets are not robust to large morphogenetic deformations and require significant manual interventions. Here, we present a novel algorithm for epithelial cell tracking, exploiting the graph-theoretic concept of a ‘maximum common subgraph’ to track cells between frames of a video. Our algorithm does not require the adjustment of tissue-specific parameters, and scales in sub-quadratic time with tissue size. It does not rely on precise positional information, permitting large cell movements between frames and enabling tracking in datasets acquired at low temporal resolution due to experimental constraints such as phototoxicity. To demonstrate the method, we perform tracking on the Drosophila embryonic epidermis and compare cell–cell rearrangements to previous studies in other tissues. Our implementation is open source and generally applicable to epithelial tissues. PMID:28334699
Cognitive task analysis-based design and authoring software for simulation training.
Munro, Allen; Clark, Richard E
2013-10-01
The development of more effective medical simulators requires a collaborative team effort where three kinds of expertise are carefully coordinated: (1) exceptional medical expertise focused on providing complete and accurate information about the medical challenges (i.e., critical skills and knowledge) to be simulated; (2) instructional expertise focused on the design of simulation-based training and assessment methods that produce maximum learning and transfer to patient care; and (3) software development expertise that permits the efficient design and development of the software required to capture expertise, present it in an engaging way, and assess student interactions with the simulator. In this discussion, we describe a method of capturing more complete and accurate medical information for simulators and combine it with new instructional design strategies that emphasize the learning of complex knowledge. Finally, we describe three different types of software support (Development/Authoring, Run Time, and Post Run Time) required at different stages in the development of medical simulations and the instructional design elements of the software required at each stage. We describe the contributions expected of each kind of software and the different instructional control authoring support required. Reprint & Copyright © 2013 Association of Military Surgeons of the U.S.
Effect of power system technology and mission requirements on high altitude long endurance aircraft
NASA Technical Reports Server (NTRS)
Colozza, Anthony J.
1994-01-01
An analysis was performed to determine how various power system components and mission requirements affect the sizing of a solar powered long endurance aircraft. The aircraft power system consists of photovoltaic cells and a regenerative fuel cell. Various characteristics of these components, such as PV cell type, PV cell mass, PV cell efficiency, fuel cell efficiency, and fuel cell specific mass, were varied to determine what effect they had on the aircraft sizing for a given mission. Mission parameters, such as time of year, flight altitude, flight latitude, and payload mass and power, were also altered to determine how mission constraints affect the aircraft sizing. An aircraft analysis method which determines the aircraft configuration, aspect ratio, wing area, and total mass, for maximum endurance or minimum required power based on the stated power system and mission parameters is presented. The results indicate that, for the power system, the greatest benefit can be gained by increasing the fuel cell specific energy. Mission requirements also substantially affect the aircraft size. By limiting the time of year the aircraft is required to fly at high northern or southern latitudes, a significant reduction in aircraft size or increase in payload capacity can be achieved.
FLIS Procedures Manual. Document Identifier Code Input/Output Formats (Fixed Length). Volume 8.
1997-04-01
DATA ELEMENTS. Segment R may be repeated a maximum of three (3) times in order to acquire the required mix of segments or individual data elements... [The remainder of this excerpt is a fragmented table of Document Identifier Codes (QI, QF, KFC, KRP, LFU, SSR) and their descriptions.]
NASA Technical Reports Server (NTRS)
Piersol, Allan G.
1991-01-01
Analytical expressions have been derived to describe the mean square error in the estimation of the maximum rms value computed from a step-wise (or running) time average of a nonstationary random signal. These analytical expressions have been applied to the problem of selecting the optimum averaging times that will minimize the total mean square errors in estimates of the maximum sound pressure levels measured inside the Titan IV payload fairing (PLF) and the Space Shuttle payload bay (PLB) during lift-off. Based on evaluations of typical Titan IV and Space Shuttle launch data, it has been determined that the optimum averaging times for computing the maximum levels are (1) T_o = 1.14 sec for the maximum overall level and T_oi = 4.88 f_i^(-0.2) sec for the maximum 1/3 octave band levels inside the Titan IV PLF, and (2) T_o = 1.65 sec for the maximum overall level and T_oi = 7.10 f_i^(-0.2) sec for the maximum 1/3 octave band levels inside the Space Shuttle PLB, where f_i is the 1/3 octave band center frequency. However, the results for both vehicles indicate that the total rms error in the maximum level estimates will be within 25 percent of the minimum error for all averaging times within plus or minus 50 percent of the optimum averaging time, so a precise selection of the exact optimum averaging time is not critical. Based on these results, linear averaging times (T) are recommended for computing the maximum sound pressure level during lift-off.
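A sketch evaluating the quoted optimum averaging-time relations across 1/3-octave bands; the band centre frequencies are standard values and the coefficients are those reported above for the Titan IV PLF and Shuttle PLB.

```python
# Sketch: optimum averaging times T_oi = c * f_i^(-0.2) for standard 1/3-octave bands.
centre_freqs_hz = [31.5, 63, 125, 250, 500, 1000, 2000]

for f in centre_freqs_hz:
    t_plf = 4.88 * f ** -0.2    # Titan IV payload fairing coefficient
    t_plb = 7.10 * f ** -0.2    # Space Shuttle payload bay coefficient
    print(f"{f:7.1f} Hz  T_PLF = {t_plf:4.2f} s   T_PLB = {t_plb:4.2f} s")
```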
NASA Technical Reports Server (NTRS)
Kasahara, Hironori; Honda, Hiroki; Narita, Seinosuke
1989-01-01
Parallel processing of real-time dynamic systems simulation on a multiprocessor system named OSCAR is presented. In the simulation of dynamic systems, the same calculations are generally repeated every time step. However, the Do-all and Do-across techniques cannot be applied to parallel processing of the simulation, since there are data dependencies from the end of one iteration to the beginning of the next, and furthermore data input and data output are required every sampling period. Therefore, parallelism inside the calculation required for a single time step, i.e., a large basic block consisting of arithmetic assignment statements, must be exploited. In the proposed method, near-fine-grain tasks, each consisting of one or more floating-point operations, are generated to extract parallelism from the calculation and are assigned to processors using optimal static scheduling at compile time, in order to reduce the large run-time overhead that the use of near-fine-grain tasks would otherwise cause. The practicality of the scheme is demonstrated on OSCAR (Optimally SCheduled Advanced multiprocessoR), which has been developed to exploit the advantageous features of static scheduling algorithms to the maximum extent.
NASA Astrophysics Data System (ADS)
Berisford, D. F.; Painter, T. H.; Richardson, M.; Wallach, A.; Deems, J. S.; Bormann, K. J.
2017-12-01
The Airborne Snow Observatory (ASO - http://aso.jpl.nasa.gov) uses an airborne laser scanner to map snow depth, and imaging spectroscopy to map snow albedo in order to estimate snow water equivalent and melt rate over mountainous, hydrologic basin-scale areas. Optimization of planned flight lines requires the balancing of many competing factors, including flying altitude and speed, bank angle limitation, laser pulse rate and power level, flightline orientation relative to terrain, surface optical properties, and data output requirements. These variables generally distill down to cost vs. higher resolution data. The large terrain elevation variation encountered in mountainous terrain introduces the challenge of narrow swath widths over the ridgetops, which drive tight flightline spacing and possible dropouts over the valleys due to maximum laser range. Many of the basins flown by ASO exceed 3,000m of elevation relief, exacerbating this problem. Additionally, sun angle may drive flightline orientations for higher-quality spectrometer data, which may change depending on time of day. Here we present data from several ASO missions, both operational and experimental, showing the lidar performance and accuracy limitations for a variety of operating parameters. We also discuss flightline planning strategies to maximize data density return per dollar, and a brief analysis on the effect of short turn times/steep bank angles on GPS position accuracy.
Biological reduction of chlorinated solvents: Batch-scale geochemical modeling
NASA Astrophysics Data System (ADS)
Kouznetsova, Irina; Mao, Xiaomin; Robinson, Clare; Barry, D. A.; Gerhard, Jason I.; McCarty, Perry L.
2010-09-01
Simulation of biodegradation of chlorinated solvents in dense non-aqueous phase liquid (DNAPL) source zones requires a model that accounts for the complexity of processes involved and that is consistent with available laboratory studies. This paper describes such a comprehensive modeling framework that includes microbially mediated degradation processes, microbial population growth and decay, geochemical reactions, as well as interphase mass transfer processes such as DNAPL dissolution, gas formation and mineral precipitation/dissolution. All these processes can be in equilibrium or kinetically controlled. A batch modeling example was presented where the degradation of trichloroethene (TCE) and its byproducts and concomitant reactions (e.g., electron donor fermentation, sulfate reduction, pH buffering by calcite dissolution) were simulated. Local and global sensitivity analysis techniques were applied to delineate the dominant model parameters and processes. Sensitivity analysis indicated that accurate values for parameters related to dichloroethene (DCE) and vinyl chloride (VC) degradation (i.e., DCE and VC maximum utilization rates, yield due to DCE utilization, decay rate for DCE/VC dechlorinators) are important for prediction of the overall dechlorination time. These parameters influence the maximum growth rate of the DCE and VC dechlorinating microorganisms and, thus, the time required for a small initial population to reach a sufficient concentration to significantly affect the overall rate of dechlorination. Self-inhibition of chlorinated ethenes at high concentrations and natural buffering provided by the sediment were also shown to significantly influence the dechlorination time. Furthermore, the analysis indicated that the rates of the competing, nonchlorinated electron-accepting processes relative to the dechlorination kinetics also affect the overall dechlorination time. Results demonstrated that the model developed is a flexible research tool that is able to provide valuable insight into the fundamental processes and their complex interactions during bioremediation of chlorinated ethenes in DNAPL source zones.
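A minimal sketch of the kinetic core described above: Monod-type utilization of a single chlorinated ethene coupled to growth and first-order decay of the dechlorinating population, illustrating why the maximum utilization rate, yield and decay rate control the overall dechlorination time. All parameter values are illustrative assumptions; the paper's full model couples many more species, geochemistry and interphase mass transfer.

```python
# Minimal sketch: Monod utilisation of one chlorinated ethene (e.g. DCE) coupled to
# growth/decay of the dechlorinating biomass. All rate constants are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

k_max, K_s = 2.0, 0.05      # max specific utilisation rate [1/d], half-saturation [mM]
Y, b = 0.05, 0.03           # yield [biomass per substrate], decay rate [1/d]

def rhs(t, y):
    s, x = y                                    # substrate [mM], biomass [mM-equiv]
    r_util = k_max * s / (K_s + s) * x          # Monod utilisation rate
    return [-r_util, Y * r_util - b * x]

sol = solve_ivp(rhs, (0.0, 200.0), [1.0, 1e-4], max_step=0.5)
t_90 = sol.t[np.argmax(sol.y[0] < 0.1)]         # time at which 90% is degraded
print(f"~90% dechlorinated after {t_90:.1f} days (illustrative parameters)")
```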
Simulation of water-table aquifers using specified saturated thickness
Sheets, Rodney A.; Hill, Mary C.; Haitjema, Henk M.; Provost, Alden M.; Masterson, John P.
2014-01-01
Simulating groundwater flow in a water-table (unconfined) aquifer can be difficult because the saturated thickness available for flow depends on model-calculated hydraulic heads. It is often possible to realize substantial time savings and still obtain accurate head and flow solutions by specifying an approximate saturated thickness a priori, thus linearizing this aspect of the model. This specified-thickness approximation often relies on the use of the “confined” option in numerical models, which has led to confusion and criticism of the method. This article reviews the theoretical basis for the specified-thickness approximation, derives an error analysis for relatively ideal problems, and illustrates the utility of the approximation with a complex test problem. In the transient version of our complex test problem, the specified-thickness approximation produced maximum errors in computed drawdown of about 4% of initial aquifer saturated thickness even when maximum drawdowns were nearly 20% of initial saturated thickness. In the final steady-state version, the approximation produced maximum errors in computed drawdown of about 20% of initial aquifer saturated thickness (mean errors of about 5%) when maximum drawdowns were about 35% of initial saturated thickness. In early phases of model development, such as during initial model calibration efforts, the specified-thickness approximation can be a very effective tool to facilitate convergence. The reduced execution time and increased stability obtained through the approximation can be especially useful when many model runs are required, such as during inverse model calibration, sensitivity and uncertainty analyses, multimodel analysis, and development of optimal resource management scenarios.
A double-gaussian, percentile-based method for estimating maximum blood flow velocity.
Marzban, Caren; Illian, Paul R; Morison, David; Mourad, Pierre D
2013-11-01
Transcranial Doppler sonography allows for the estimation of blood flow velocity, whose maximum value, especially at systole, is often of clinical interest. Given that observed values of flow velocity are subject to noise, a useful notion of "maximum" requires a criterion for separating the signal from the noise. All commonly used criteria produce a point estimate (ie, a single value) of maximum flow velocity at any time and therefore convey no information on the distribution or uncertainty of flow velocity. This limitation has clinical consequences especially for patients in vasospasm, whose largest flow velocities can be difficult to measure. Therefore, a method for estimating flow velocity and its uncertainty is desirable. A gaussian mixture model is used to separate the noise from the signal distribution. The time series of a given percentile of the latter, then, provides a flow velocity envelope. This means of estimating the flow velocity envelope naturally allows for displaying several percentiles (e.g., 95th and 99th), thereby conveying uncertainty in the highest flow velocity. Such envelopes were computed for 59 patients and were shown to provide reasonable and useful estimates of the largest flow velocities compared to a standard algorithm. Moreover, we found that the commonly used envelope was generally consistent with the 90th percentile of the signal distribution derived via the gaussian mixture model. Separating the observed distribution of flow velocity into a noise component and a signal component, using a double-gaussian mixture model, allows for the percentiles of the latter to provide meaningful measures of the largest flow velocities and their uncertainty.
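A hedged sketch of the double-gaussian idea on synthetic data: fit a two-component Gaussian mixture to velocity samples from one time bin, take the higher-mean component as the signal distribution, and report its upper percentiles as envelope values. The synthetic noise/signal parameters are assumptions, not patient data, and this is not the authors' clinical pipeline.

```python
# Hedged sketch: two-component (double-gaussian) mixture separating noise from signal
# in synthetic velocity samples, then upper percentiles of the signal component.
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
noise = rng.normal(15.0, 8.0, 4000)      # low-velocity noise floor [cm/s], assumed
signal = rng.normal(95.0, 20.0, 1000)    # true flow-velocity distribution [cm/s], assumed
samples = np.concatenate([noise, signal])

gmm = GaussianMixture(n_components=2, random_state=0).fit(samples.reshape(-1, 1))
hi = int(np.argmax(gmm.means_.ravel()))                  # index of the signal component
mu = gmm.means_.ravel()[hi]
sigma = np.sqrt(gmm.covariances_.ravel()[hi])

for p in (0.90, 0.95, 0.99):
    print(f"{int(p * 100)}th percentile of signal component: {norm.ppf(p, mu, sigma):6.1f} cm/s")
```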
Langelotz, C; Koplin, G; Pascher, A; Lohmann, R; Köhler, A; Pratschke, J; Haase, O
2017-12-01
Background Between the conflicting requirements of clinic organisation, the European Working Time Directive, patient safety, an increasing lack of junior staff, and competitiveness, the development of ideal duty hour models is vital to ensure maximum quality of care within the legal requirements. To achieve this, it is useful to evaluate the actual effects of duty hour models on staff satisfaction. Materials and Methods After the traditional 24-hour duty shift was given up in a surgical maximum care centre in 2007, an 18-hour duty shift was implemented, followed by a 12-hour shift in 2008, to improve handovers and reduce loss of information. The effects on work organisation, quality of life and salary were analysed in an anonymous survey in 2008. The staff survey was repeated in 2014. Results With a questionnaire response rate of 95% in 2008 and 93% in 2014, the 12-hour duty model received negative ratings due to its high duty frequency and the resulting social strain. The physical strain and chronic tiredness were also rated as most severe in the 12-hour rota. The 18-hour duty shift was the model of choice amongst staff. The 24-hour duty model was rated as the best compromise between the requirements of work organisation and staff satisfaction, and therefore this duty model was adapted accordingly in 2015. Conclusion The essential basis of a surgical department is a duty hour model suited to the requirements of work organisation, the Working Time Directive and the needs of the surgical staff. A 12-hour duty model can be ideal for work organisation, but only if it is augmented with an adequate number of staff members; only then can it be implemented without an excessively high frequency of 12-hour shifts, with the associated strain on surgical staff and perceived deterioration of quality of life. A staff survey should be performed on a regular basis to assess the actual effects of duty hour models and enable further optimisation. The much criticised 24-hour duty model seems to be much better than its reputation, if augmented by additional staff members in the evening hours. Georg Thieme Verlag KG Stuttgart · New York.
Maximum Entropy Method applied to Real-time Time-Dependent Density Functional Theory
NASA Astrophysics Data System (ADS)
Zempo, Yasunari; Toogoshi, Mitsuki; Kano, Satoru S.
Maximum Entropy Method (MEM) is widely used for the analysis of time-series data such as earthquake records, which have fairly long periodicity but only short observable records. We have examined MEM as applied to the optical analysis of time-series data from real-time TDDFT. Fourier Transform (FT) is usually used in this analysis, and particular attention must be paid to the lower-energy part of the spectrum, such as the band gap, which requires long time evolution; the computational cost naturally becomes quite expensive. Since MEM is based on the autocorrelation of the signal, in which periodicity is described through differences of time lags, the information at lower energies is naturally weaker than at higher energies. To overcome this difficulty, our MEM has two features: the raw data are repeated many times and concatenated, which improves the resolution in the lower-energy region; and, together with the repeated data, an appropriate phase for the target frequency is introduced to reduce the side effects of the artificial periodicity. We have compared our improved MEM spectra with FT spectra for small-to-medium-sized molecules. The MEM spectra are clearer than those of FT, and the new technique provides higher resolution in fewer time steps. This work was partially supported by JSPS Grants-in-Aid for Scientific Research (C) Grant number 16K05047, Sumitomo Chemical, Co. Ltd., and Simulatio Corp.
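The authors' modified MEM (with data repetition and phase correction) is not reproduced here; the following is a generic maximum-entropy (autoregressive) spectral estimate obtained from the Yule-Walker equations, as a sketch of the underlying autocorrelation-based idea. The model order and test signal are illustrative assumptions.

```python
# Generic maximum-entropy (all-pole / AR) spectral estimate via the Yule-Walker
# equations; a sketch of the idea, not the authors' modified MEM.
import numpy as np
from scipy.linalg import solve_toeplitz

def mem_spectrum(x, order=30, nfreq=2048, dt=1.0):
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Biased autocorrelation estimates r[0..order]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    # Solve the Toeplitz system R a = r[1:] for the AR coefficients
    a = solve_toeplitz((r[:-1], r[:-1]), r[1:])
    sigma2 = r[0] - np.dot(a, r[1:])          # prediction-error power
    freqs = np.linspace(0.0, 0.5 / dt, nfreq)
    z = np.exp(-2j * np.pi * freqs * dt)
    denom = 1.0 - sum(a[k] * z ** (k + 1) for k in range(order))
    psd = sigma2 * dt / np.abs(denom) ** 2
    return freqs, psd

# Example: two oscillations sampled over a short record
t = np.arange(512) * 0.05
signal = np.sin(2 * np.pi * 1.3 * t) + 0.5 * np.sin(2 * np.pi * 3.7 * t)
f, p = mem_spectrum(signal, order=40, dt=0.05)
print(f[np.argmax(p)])   # should sit near 1.3
```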
Friedman, S N; Bambrough, P J; Kotsarini, C; Khandanpour, N; Hoggard, N
2012-12-01
Despite the established role of MRI in the diagnosis of brain tumours, histopathological assessment remains the clinically used technique, especially for the glioma group. Relative cerebral blood volume (rCBV) is a dynamic susceptibility-weighted contrast-enhanced perfusion MRI parameter that has been shown to correlate to tumour grade, but assessment requires a specialist and is time consuming. We developed analysis software to determine glioma gradings from perfusion rCBV scans in a manner that is quick, easy and does not require a specialist operator. MRI perfusion data from 47 patients with different histopathological grades of glioma were analysed with custom-designed software. Semi-automated analysis was performed with a specialist and non-specialist operator separately determining the maximum rCBV value corresponding to the tumour. Automated histogram analysis was performed by calculating the mean, standard deviation, median, mode, skewness and kurtosis of rCBV values. All values were compared with the histopathologically assessed tumour grade. A strong correlation between specialist and non-specialist observer measurements was found. Significantly different values were obtained between tumour grades using both semi-automated and automated techniques, consistent with previous results. The raw (unnormalised) data single-pixel maximum rCBV semi-automated analysis value had the strongest correlation with glioma grade. Standard deviation of the raw data had the strongest correlation of the automated analysis. Semi-automated calculation of raw maximum rCBV value was the best indicator of tumour grade and does not require a specialist operator. Both semi-automated and automated MRI perfusion techniques provide viable non-invasive alternatives to biopsy for glioma tumour grading.
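A hedged sketch of the automated histogram analysis described above: summary statistics (mean, standard deviation, median, mode, skewness, kurtosis) of the rCBV values inside a tumour region of interest. The ROI array and bin count are assumptions for illustration, not the study's implementation.

```python
# Sketch of the automated histogram analysis: summary statistics of
# (unnormalised) rCBV values inside a tumour region of interest (ROI).
import numpy as np
from scipy import stats

def rcbv_histogram_features(rcbv_roi):
    """rcbv_roi: 1-D array of rCBV values from the tumour ROI."""
    v = np.asarray(rcbv_roi, dtype=float)
    counts, edges = np.histogram(v, bins=64)
    mode_centre = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])
    return {
        "max": v.max(),                 # single-pixel maximum rCBV
        "mean": v.mean(),
        "std": v.std(ddof=1),           # strongest automated correlate reported
        "median": np.median(v),
        "mode": mode_centre,
        "skewness": stats.skew(v),
        "kurtosis": stats.kurtosis(v),
    }

print(rcbv_histogram_features(np.random.default_rng(0).gamma(2.0, 1.5, 5000)))
```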
NASA Astrophysics Data System (ADS)
Fung, Kenneth K. H.; Lewis, Geraint F.; Wu, Xiaofeng
2017-04-01
A vast wealth of literature exists on the topic of rocket trajectory optimisation, particularly in the area of interplanetary trajectories due to its relevance today. Studies on optimising interstellar and intergalactic trajectories are usually performed in flat spacetime using an analytical approach, with very little focus on optimising interstellar trajectories in a general relativistic framework. This paper examines the use of low-acceleration rockets to reach galactic destinations in the least possible time, with a genetic algorithm being employed for the optimisation process. The fuel required for each journey was calculated for various types of propulsion systems to determine the viability of low-acceleration rockets to colonise the Milky Way. The results showed that to limit the amount of fuel carried on board, an antimatter propulsion system would likely be the minimum technological requirement to reach star systems tens of thousands of light years away. However, using a low-acceleration rocket would require several hundreds of thousands of years to reach these star systems, with minimal time dilation effects since maximum velocities only reached about 0.2 c . Such transit times are clearly impractical, and thus, any kind of colonisation using low acceleration rockets would be difficult. High accelerations, on the order of 1 g, are likely required to complete interstellar journeys within a reasonable time frame, though they may require prohibitively large amounts of fuel. So for now, it appears that humanity's ultimate goal of a galactic empire may only be possible at significantly higher accelerations, though the propulsion technology requirement for a journey that uses realistic amounts of fuel remains to be determined.
49 CFR 398.6 - Hours of service of drivers; maximum driving time.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 5 2011-10-01 2011-10-01 false Hours of service of drivers; maximum driving time... REGULATIONS TRANSPORTATION OF MIGRANT WORKERS § 398.6 Hours of service of drivers; maximum driving time. No... or operate for more than 10 hours in the aggregate (excluding rest stops and stops for meals) in any...
49 CFR 398.6 - Hours of service of drivers; maximum driving time.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 5 2010-10-01 2010-10-01 false Hours of service of drivers; maximum driving time... REGULATIONS TRANSPORTATION OF MIGRANT WORKERS § 398.6 Hours of service of drivers; maximum driving time. No... or operate for more than 10 hours in the aggregate (excluding rest stops and stops for meals) in any...
Maximum thrust mode evaluation
NASA Technical Reports Server (NTRS)
Orme, John S.; Nobbs, Steven G.
1995-01-01
Measured reductions in acceleration times which resulted from the application of the F-15 performance seeking control (PSC) maximum thrust mode during the dual-engine test phase are presented as a function of power setting and flight condition. Data were collected at altitudes of 30,000 and 45,000 feet at military and maximum afterburning power settings. The time savings for the supersonic accelerations are less than at subsonic Mach numbers because of the increased modeling and control complexity. In addition, the propulsion system was designed to be optimized at the mid supersonic Mach number range. Recall that even though the engine is at maximum afterburner, PSC does not trim the afterburner for the maximum thrust mode. Subsonically at military power, time to accelerate from Mach 0.6 to 0.95 was cut by between 6 and 8 percent with a single engine application of PSC, and over 14 percent when both engines were optimized. At maximum afterburner, the thrust increases were similar in magnitude to the military power results, but because of the higher thrust levels at maximum afterburner and the higher aircraft drag at supersonic Mach numbers, the percentage thrust increase and the savings in time to accelerate were less than for the subsonic accelerations. Savings in time to accelerate supersonically at maximum afterburner ranged from 4 to 7 percent. In general, the maximum thrust mode has performed well, demonstrating significant thrust increases at military and maximum afterburner power. Increases of up to 15 percent at typical combat-type flight conditions were identified. Thrust increases of this magnitude could be useful in a combat situation.
75 FR 58505 - Regulation Z; Truth in Lending
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-24
... requirement applicable to higher-priced mortgage loans, for loans that exceed the maximum principal balance.... 1639D). For loans that exceed the Freddie Mac maximum principal balance, TILA Section 129D provides that...)). The current maximum principal balance for a mortgage loan to be eligible for purchase by Freddie Mac...
OPENMED: A facility for biomedical experiments based on the CERN Low Energy Ion Ring (LEIR)
NASA Astrophysics Data System (ADS)
Carli, Christian
At present protons and carbon ions are in clinical use for hadron therapy at a growing number of treatment centers all over the world. Nevertheless, only limited direct clinical evidence of their superiority over other forms of radiotherapy is available [1]. Furthermore, fundamental studies on biological effects of hadron beams have been carried out at different times (some a long time ago) in different laboratories and under different conditions. Despite an increased availability of ion beams for hadron therapy, beam time for preclinical studies is expected to remain insufficient as the priority for therapy centers is to treat the maximum number of patients. Most of the remaining beam time is expected to be required for setting up and for measurements to guarantee good-quality beams for treatments. The proposed facility for biomedical research [2] in support of hadron therapy centers would provide ion beams for interested research groups and allow them to carry out basic studies under well-defined conditions. Typical studies would include radiobiological phenomena such as relative biological effectiveness with different energies, ion species, and intensities. Furthermore, possible studies include the development of advanced dosimetry in heterogeneous materials that resemble the human body, imaging techniques and, at a later stage, when the maximum energy with the LEIR magnets can be reached, fragmentation.
Service offerings and interfaces for the ACTS network of Earth stations
NASA Technical Reports Server (NTRS)
Coney, Thom A.
1988-01-01
The Advanced Communications Technology Satellite (ACTS) is capable of two modes of communication. Mode 1 is a mesh network of Earth stations using baseband-switched, time-division multiple-access (BBS-TDMA) and hopping beams. Mode 2 is a mesh network using satellite-switched, time-division multiple-access (SS-TDMA) and fixed (or hopping) beams. The purpose of this paper is to present the functional requirements and the design of the ACTS Mode 1 Earth station terrestrial interface. Included among the requirements are that: (1) the interface support standard telecommunications service offerings (i.e., voice, video and data at rates ranging from 9.6 kbps to 44 Mbps); (2) the interface support the unique design characteristics of the ACTS communications systems (e.g., the real time demand assignment of satellite capacity); and (3) the interface support test hardware capable of validating ACTS communications processes. The resulting interface design makes use of an appropriate combination of T1 or T3 multiplexers and a small central office (maximum capacity 56 subscriber lines per unit).
Ermer, James; Corcoran, Mary; Lasseter, Kenneth; Marbury, Thomas; Yan, Brian
2016-01-01
Background: Lisdexamfetamine (LDX) and d-amphetamine pharmacokinetics were assessed in individuals with normal and impaired renal function after a single LDX dose; LDX and d-amphetamine dialyzability was also examined. Methods: Adults (N = 40; 8/group) were enrolled in 1 of 5 renal function groups [normal function, mild impairment, moderate impairment, severe impairment/end-stage renal disease (ESRD) not requiring hemodialysis, and ESRD requiring hemodialysis] as estimated by glomerular filtration rate (GFR). Participants with normal and mild to severe renal impairment received 30 mg LDX; blood samples were collected predose and serially for 96 hours. Participants with ESRD requiring hemodialysis received 30 mg LDX predialysis and postdialysis separated by a washout period of 7–14 days. Predialysis blood samples were collected predose, serially for 72 hours, and from the dialyzer during hemodialysis; postdialysis blood samples were collected predose and serially for 48 hours. Pharmacokinetic end points included maximum plasma concentration (Cmax) and area under the plasma concentration versus time curve from time 0 to infinity (AUC0–∞) or to last assessment (AUClast). Results: Mean LDX Cmax, AUClast, and AUC0–∞ in participants with mild to severe renal impairment did not differ from those with normal renal function; participants with ESRD had higher mean Cmax and AUClast than those with normal renal function. d-amphetamine exposure (AUClast and AUC0–∞) increased and Cmax decreased as renal impairment increased. Almost no LDX and little d-amphetamine were recovered in the dialyzate. Conclusions: There seems to be prolonged d-amphetamine exposure after 30 mg LDX as renal impairment increases. In individuals with severe renal impairment (GFR: 15 to <30 mL·min−1·1.73 m−2), the maximum LDX dose is 50 mg/d; in patients with ESRD (GFR: <15 mL·min−1·1.73 m−2), the maximum LDX dose is 30 mg/d. Neither LDX nor d-amphetamine is dialyzable. PMID:26926668
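For readers unfamiliar with the pharmacokinetic end points, the following is a generic noncompartmental sketch of how Cmax, AUClast (linear trapezoidal rule) and AUC0–∞ (log-linear tail extrapolation) are typically computed; the concentration-time profile below is made up for illustration, and this is not the study's analysis pipeline.

```python
# Illustrative noncompartmental calculation of Cmax, AUClast and AUC0-inf
# (linear trapezoidal rule with log-linear tail extrapolation).
import numpy as np

def nca(times, conc, n_tail=3):
    t = np.asarray(times, float)
    c = np.asarray(conc, float)
    cmax, tmax = c.max(), t[np.argmax(c)]
    auc_last = np.trapz(c, t)
    # terminal rate constant from a log-linear fit to the last n_tail points
    lam = -np.polyfit(t[-n_tail:], np.log(c[-n_tail:]), 1)[0]
    auc_inf = auc_last + c[-1] / lam
    return {"Cmax": cmax, "Tmax": tmax, "AUClast": auc_last, "AUCinf": auc_inf}

t = [0.5, 1, 2, 4, 8, 12, 24, 48]          # h
c = [5, 22, 40, 35, 20, 12, 4, 0.8]        # ng/mL (made-up concentration profile)
print(nca(t, c))
```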
The physiology of mountain biking.
Impellizzeri, Franco M; Marcora, Samuele M
2007-01-01
Mountain biking is a popular outdoor recreational activity and an Olympic sport. Cross-country circuit races have a winning time of approximately 120 minutes and are performed at an average heart rate close to 90% of the maximum, corresponding to 84% of maximum oxygen uptake (VO2max). More than 80% of race time is spent above the lactate threshold. This very high exercise intensity is related to the fast starting phase of the race; the several climbs, forcing off-road cyclists to expend most of their effort going against gravity; greater rolling resistance; and the isometric contractions of arm and leg muscles necessary for bike handling and stabilisation. Because of the high power output (up to 500W) required during steep climbing and at the start of the race, anaerobic energy metabolism is also likely to be a factor of off-road cycling and deserves further investigation. Mountain bikers' physiological characteristics indicate that aerobic power (VO2max >70 mL/kg/min) and the ability to sustain high work rates for prolonged periods of time are prerequisites for competing at a high level in off-road cycling events. The anthropometric characteristics of mountain bikers are similar to climbers and all-terrain road cyclists. Various parameters of aerobic fitness are correlated to cross-country performance, suggesting that these tests are valid for the physiological assessment of competitive mountain bikers, especially when normalised to body mass. Factors other than aerobic power and capacity might influence off-road cycling performance and require further investigation. These include off-road cycling economy, anaerobic power and capacity, technical ability and pre-exercise nutritional strategies.
Event-driven time-optimal control for a class of discontinuous bioreactors.
Moreno, Jaime A; Betancur, Manuel J; Buitrón, Germán; Moreno-Andrade, Iván
2006-07-05
Discontinuous bioreactors may be further optimized for processing inhibitory substrates using a convenient fed-batch mode. To do so the filling rate must be controlled in such a way as to push the reaction rate to its maximum value, by increasing the substrate concentration just up to the point where inhibition begins. However, an exact optimal controller requires measuring several variables (e.g., substrate concentrations in the feed and in the tank) and also good model knowledge (e.g., yield and kinetic parameters), requirements rarely satisfied in real applications. An environmentally important case, that exemplifies all these handicaps, is toxicant wastewater treatment. There the lack of online practical pollutant sensors may allow unforeseen high shock loads to be fed to the bioreactor, causing biomass inhibition that slows down the treatment process and, in extreme cases, even renders the biological process useless. In this work an event-driven time-optimal control (ED-TOC) is proposed to circumvent these limitations. We show how to detect a "there is inhibition" event by using some computable function of the available measurements. This event drives the ED-TOC to stop the filling. Later, by detecting the symmetric event, "there is no inhibition," the ED-TOC may restart the filling. A fill-react cycling then maintains the process safely hovering near its maximum reaction rate, allowing a robust and practically time-optimal operation of the bioreactor. An experimental study case of a wastewater treatment process application is presented. There the dissolved oxygen concentration was used to detect the events needed to drive the controller. (c) 2006 Wiley Periodicals, Inc.
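A minimal sketch of the fill/react cycling logic of the ED-TOC, assuming the dissolved-oxygen trend is the computable event signal; the thresholds and the rule for declaring the "there is inhibition" and "there is no inhibition" events are illustrative assumptions, not the published controller.

```python
# Minimal sketch of the event-driven time-optimal control (ED-TOC) idea:
# fill until an "inhibition" event is detected from an online signal
# (dissolved oxygen here), pause until the "no inhibition" event, repeat.
# Thresholds and the toy signal below are hypothetical.

def ed_toc_step(filling, do_slope, inhibit_thresh=-0.02, recover_thresh=0.0):
    """Return the new filling state given the dissolved-oxygen trend."""
    if filling and do_slope < inhibit_thresh:      # "there is inhibition" event
        return False                               # stop the feed
    if not filling and do_slope >= recover_thresh: # "there is no inhibition"
        return True                                # restart the feed
    return filling

# Toy usage: a sequence of dissolved-oxygen slopes observed online
filling = True
for slope in [0.01, -0.03, -0.05, -0.01, 0.02, 0.01, -0.04]:
    filling = ed_toc_step(filling, slope)
    print(f"DO slope {slope:+.2f} -> feed {'ON' if filling else 'OFF'}")
```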
NASA Astrophysics Data System (ADS)
Main, Ian; Irving, Duncan; Musson, Roger; Reading, Anya
1999-05-01
Earthquake populations have recently been shown to have many similarities with critical-point phenomena, with fractal scaling of source sizes (energy or seismic moment) corresponding to the observed Gutenberg-Richter (G-R) frequency-magnitude law holding at low magnitudes. At high magnitudes, the form of the distribution depends on the seismic moment release rate Ṁ and the maximum magnitude m_max. The G-R law requires a sharp truncation at an absolute maximum magnitude for finite Ṁ. In contrast, the gamma distribution has an exponential tail which allows a soft or 'credible' maximum to be determined by negligible contribution to the total seismic moment release. Here we apply both distributions to seismic hazard in the mainland UK and its immediate continental shelf, constrained by a mixture of instrumental, historical and neotectonic data. Tectonic moment release rates for the seismogenic part of the lithosphere are calculated from a flexural-plate model for glacio-isostatic recovery, constrained by vertical deformation rates from tide-gauge and geomorphological data. Earthquake focal mechanisms in the UK show near-vertical strike-slip faulting, with implied directions of maximum compressive stress approximately in the NNW-SSE direction, consistent with the tectonic model. Maximum magnitudes are found to be in the range 6.3-7.5 for the G-R law, or 7.0-8.2 m_L for the gamma distribution, which compare with a maximum observed in the time period of interest of 6.1 m_L. The upper bounds are conservative estimates, based on 100 per cent seismic release of the observed vertical neotectonic deformation. Glacio-isostatic recovery is predominantly an elastic rather than a seismic process, so the true value of m_max is likely to be nearer the lower end of the quoted range.
NASA Astrophysics Data System (ADS)
Alsing, Justin; Silva, Hector O.; Berti, Emanuele
2018-04-01
We infer the mass distribution of neutron stars in binary systems using a flexible Gaussian mixture model and use Bayesian model selection to explore evidence for multi-modality and a sharp cut-off in the mass distribution. We find overwhelming evidence for a bimodal distribution, in agreement with previous literature, and report for the first time positive evidence for a sharp cut-off at a maximum neutron star mass. We measure the maximum mass to be 2.0 M⊙ < m_max < 2.2 M⊙ (68%), 2.0 M⊙ < m_max < 2.6 M⊙ (90%), and evidence for a cut-off is robust against the choice of model for the mass distribution and to removing the most extreme (highest mass) neutron stars from the dataset. If this sharp cut-off is interpreted as the maximum stable neutron star mass allowed by the equation of state of dense matter, our measurement puts constraints on the equation of state. For a set of realistic equations of state that support >2 M⊙ neutron stars, our inference of m_max is able to distinguish between models at odds ratios of up to 12:1, whilst under a flexible piecewise polytropic equation of state model our maximum mass measurement improves constraints on the pressure at 3-7 times the nuclear saturation density by ~30-50% compared to simply requiring m_max > 2 M⊙. We obtain a lower bound on the maximum sound speed attained inside the neutron star of c_s^max > 0.63c (99.8%), ruling out c_s^max < c/√3 at high significance. Our constraints on the maximum neutron star mass strengthen the case for neutron star-neutron star mergers as the primary source of short gamma-ray bursts.
NASA Technical Reports Server (NTRS)
Cook, Harvey A; Heinicke, Orville H; Haynie, William H
1947-01-01
An investigation was conducted on a full-scale air-cooled cylinder in order to establish an effective means of maintaining maximum-economy spark timing with varying engine operating conditions. Variable fuel-air-ratio runs were conducted in which relations were determined between maximum-economy spark timing, flame-front travel, and cylinder-pressure rise. An instrument for controlling spark timing was developed that automatically maintained maximum-economy spark timing with varying engine operating conditions. The instrument also indicated the occurrence of preignition.
Wind-influenced projectile motion
NASA Astrophysics Data System (ADS)
Bernardo, Reginald Christian; Perico Esguerra, Jose; Day Vallejos, Jazmine; Jerard Canda, Jeff
2015-03-01
We solved the wind-influenced projectile motion problem with the same initial and final heights and obtained exact analytical expressions for the shape of the trajectory, range, maximum height, time of flight, time of ascent, and time of descent with the help of the Lambert W function. It turns out that the range and maximum horizontal displacement are not always equal. When launched at a critical angle, the projectile will return to its starting position. It turns out that a launch angle of 90° maximizes the time of flight, time of ascent, time of descent, and maximum height and that the launch angle corresponding to maximum range can be obtained by solving a transcendental equation. Finally, we expressed in a parametric equation the locus of points corresponding to maximum heights for projectiles launched from the ground with the same initial speed in all directions. We used the results to estimate how much a moderate wind can modify a golf ball’s range and suggested other possible applications.
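The closed-form Lambert W expressions of the paper are not reproduced here; the sketch below simply integrates the same setup numerically (linear drag, constant horizontal wind, equal launch and landing heights) to obtain the time of flight, range and maximum height. The drag coefficient, wind speed and launch parameters are illustrative assumptions.

```python
# Numerical cross-check of the setup described above (linear drag, constant
# horizontal wind, launch and landing at the same height). Parameters are
# illustrative only; the paper's analytical Lambert-W results are not used.
import numpy as np
from scipy.integrate import solve_ivp

g, k = 9.81, 0.25          # gravity (m/s^2), linear drag coefficient (1/s)
w = 5.0                    # horizontal wind speed (m/s), along +x
v0, angle = 40.0, np.radians(45)

def rhs(t, s):
    x, y, vx, vy = s
    return [vx, vy, -k * (vx - w), -g - k * vy]   # drag relative to the air

def hit_ground(t, s):
    return s[1]
hit_ground.terminal, hit_ground.direction = True, -1

sol = solve_ivp(rhs, [0, 60], [0, 0, v0 * np.cos(angle), v0 * np.sin(angle)],
                events=hit_ground, max_step=0.01)
time_of_flight = sol.t_events[0][0]
range_x = sol.y_events[0][0][0]
max_height = sol.y[1].max()
print(time_of_flight, range_x, max_height)
```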
Simulation and analysis of support hardware for multiple instruction rollback
NASA Technical Reports Server (NTRS)
Alewine, Neil J.
1992-01-01
Recently, a compiler-assisted approach to multiple instruction retry was developed. In this scheme, a read buffer of size 2N, where N represents the maximum instruction rollback distance, is used to resolve one type of data hazard. This hardware support helps to reduce code growth, compilation time, and some of the performance impacts associated with hazard resolution. The 2N read buffer size requirement of the compiler-assisted approach is worst case, assuring data redundancy for all data required but also providing some unnecessary redundancy. By adding extra bits in the operand field for source 1 and source 2 it becomes possible to design the read buffer to save only those values required, thus reducing the read buffer size requirement. This study measures the effect on performance of a DECstation 3100 running 10 application programs using 6 read buffer configurations at varying read buffer sizes.
NASA Technical Reports Server (NTRS)
Wheeler, Raymond M.; Tibbitts, Theodore W.
1987-01-01
Efficient crop production for controlled ecological life support systems requires near-optimal growing conditions with harvests taken when production per unit area per unit time is maximum. This maximum for potato was determined using data on Norland plants which were grown in walk-in growth rooms under 12-h and 24-h photoperiods at 16 C. Results show that high tuber production can be obtained from potatoes grown under a continuous light regime. The dry weights (dwt) of tuber and of the entire plants were found to increase under both photoperiods until the final harvest date (148 days), reaching 5732 g tuber dwt and 704 g total dwt under 12-h, and 791 g tuber dwt and 972 g total dwt under 24-h.
Picosecond and femtosecond lasers for industrial material processing
NASA Astrophysics Data System (ADS)
Mayerhofer, R.; Serbin, J.; Deeg, F. W.
2016-03-01
Cold laser materials processing using ultra short pulsed lasers has become one of the most promising new technologies for high-precision cutting, ablation, drilling and marking of almost all types of material, without causing unwanted thermal damage to the part. These characteristics have opened up new application areas and materials for laser processing, allowing previously impossible features to be created and also reducing the amount of post-processing required to an absolute minimum, saving time and cost. However, short pulse widths are only one part of the story for industrial manufacturing processes, which focus on total costs and maximum productivity and production yield. Like every other production tool, ultra-short pulse lasers have to provide high quality results with maximum reliability. Robustness and global on-site support are vital factors, as well as easy system integration.
Growth of Salmonella on chilled meat.
Mackey, B. M.; Roberts, T. A.; Mansfield, J.; Farkas, G.
1980-01-01
Growth rates of a mixture of Salmonella serotypes inoculated on beef from a commercial abattoir were measured at chill temperatures. The minimum recorded mean generation times were 8.1 h at 10 degrees C; 5.2 h at 12.5 degrees C and 2.9 h at 15 degrees C. Growth did not occur at 7-8 degrees C. From these data the maximum extent of growth of Salmonella during storage of meat for different times at chill temperatures was calculated. Criteria for deciding safe handling temperatures for meat are discussed. Maintaining an internal temperature below 10 degrees C during the boning operation would be sufficient to safeguard public health requirements. PMID:7052227
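A back-of-envelope version of the calculation implied above, assuming exponential growth: the number of doublings during storage is the storage time divided by the mean generation time at that temperature. The storage times below are illustrative.

```python
# Maximum extent of Salmonella growth during chilled storage, from the
# measured mean generation times quoted in the abstract.
generation_time_h = {10.0: 8.1, 12.5: 5.2, 15.0: 2.9}   # from the abstract

def growth_factor(temp_c, storage_h):
    doublings = storage_h / generation_time_h[temp_c]
    return 2 ** doublings

for temp, hours in [(10.0, 24), (12.5, 24), (15.0, 24)]:
    print(f"{temp} C, {hours} h storage: x{growth_factor(temp, hours):.0f} increase")
```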
[Biological properties of bacteriophages, active to Yersinia enterocolitica].
Darsavelidze, M A; Kapanadze, Zh S; Chanishvili, T G
2004-01-01
The biological properties of 16 clones of Y. enterocolitica bacteriophages were tested to select the most active for subsequent use. For the first time Y. enterocolitica virulent phages belonging to the family Podoviridae were described and 7 serological groups of phages with no cross reactions were registered. The technology for the production of a new therapeutic and prophylactic Y. enterocolitica polyvalent bacteriophage under laboratory conditions was developed. The effective multiplicity of contamination ensuring the maximum release of phages from bacterial cells, the optimum incubation temperature and the time of exposure were established. The experimental batches of therapeutic and prophylactic Y. enterocolitica polyvalent bacteriophage thus obtained met the requirements for antibacterial preparations.
Comparison of empirical strategies to maximize GENEHUNTER lod scores.
Chen, C H; Finch, S J; Mendell, N R; Gordon, D
1999-01-01
We compare four strategies for finding the settings of genetic parameters that maximize the lod scores reported in GENEHUNTER 1.2. The four strategies are iterated complete factorial designs, iterated orthogonal Latin hypercubes, evolutionary operation, and numerical optimization. The genetic parameters that are set are the phenocopy rate, penetrance, and disease allele frequency; both recessive and dominant models are considered. We selected the optimization of a recessive model on the Collaborative Study on the Genetics of Alcoholism (COGA) data of chromosome 1 for complete analysis. Convergence to a setting producing a local maximum required the evaluation of over 100 settings (for a time budget of 800 minutes on a Pentium II 300 MHz PC). Two notable local maxima were detected, suggesting the need for a more extensive search before claiming that a global maximum had been found. The orthogonal Latin hypercube design was the best strategy for finding areas that produced high lod scores with small numbers of evaluations. Numerical optimization starting from a region producing high lod scores was the strategy that found the highest maximum observed.
Advanced Supersonic Technology concept AST-100 characteristics developed in a baseline-update study
NASA Technical Reports Server (NTRS)
Baber, H. T., Jr.; Swanson, E. E.
1976-01-01
The advanced supersonic technology configuration, AST-100, is described. The combination of wing thickness reduction, nacelle recontouring for minimum drag at cruise, and the use of the horizontal tail to produce lift during climb and cruise resulted in an increase in maximum lift-to-drag ratio. Lighter engines and lower fuel weight associated with this resizing result in a six percent reduction in takeoff gross weight. The AST-100 takeoff maximum effective perceived noise at the runway centerline and sideline measurement stations was 114.4 decibels. Since a 1.5-decibel tradeoff is available from the approach noise, the required engine noise suppression is 4.9 decibels. The AST-100's largest maximum overpressure would occur during transonic climb acceleration when the aircraft was at relatively low altitude. Calculated standard +8 C day range of the AST-100, with a 292 passenger payload, is 7348 km (3968 n.mi.). Fuel price is the largest contributor to direct operating cost. However, if the AST-100 were flown subsonically (M = 0.9), direct operating costs would increase approximately 50 percent because of time related costs.
Parametric study of the Orbiter rollout using an approximate solution
NASA Technical Reports Server (NTRS)
Garland, B. J.
1979-01-01
An approximate solution to the motion of the Orbiter during rollout is used to perform a parametric study of the rollout distance required by the Orbiter. The study considers the maximum expected dispersions in the landing speed and the touchdown point. These dispersions are assumed to be correlated so that a fast landing occurs before the nominal touchdown point. The maximum rollout distance is required by the maximum landing speed with a 10 knot tailwind and the center of mass at the forward limit of its longitudinal travel. The maximum weight that can be stopped within 15,000 feet on a hot day at Kennedy Space Center is 248,800 pounds. The energy absorbed by the brakes would exceed the limit for reuse of the brakes.
FPGA-based architecture for real-time data reduction of ultrasound signals.
Soto-Cajiga, J A; Pedraza-Ortega, J C; Rubio-Gonzalez, C; Bandala-Sanchez, M; Romero-Troncoso, R de J
2012-02-01
This paper describes a novel method for on-line real-time data reduction of radiofrequency (RF) ultrasound signals. The approach is based on a field programmable gate array (FPGA) system intended mainly for steel thickness measurements. Ultrasound data reduction is desirable when: (1) direct measurements performed by an operator are not accessible; (2) it is required to store a considerable amount of data; (3) the application requires measuring at very high speeds; and (4) the physical space for the embedded hardware is limited. All the aforementioned scenarios can be present in applications such as pipeline inspection where data reduction is traditionally performed on-line using pipeline inspection gauges (PIG). The method proposed in this work consists of identifying and storing in real-time only the time of occurrence (TOO) and the maximum amplitude of each echo present in a given RF ultrasound signal. The method is tested with a dedicated immersion system where a significant data reduction with an average of 96.5% is achieved. Copyright © 2011 Elsevier B.V. All rights reserved.
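A software sketch of the data-reduction rule described above, keeping only the time of occurrence (TOO) and peak amplitude of each echo; the published method runs on an FPGA, and the Hilbert-envelope and threshold choices here are assumptions for illustration.

```python
# Software sketch of the data-reduction rule: keep only the time of occurrence
# (TOO) and peak amplitude of each echo in an RF A-scan.
import numpy as np
from scipy.signal import hilbert, find_peaks

def reduce_ascan(rf, fs, threshold=0.2, min_separation_us=1.0):
    env = np.abs(hilbert(rf))                       # echo envelope
    dist = int(min_separation_us * 1e-6 * fs)
    peaks, _ = find_peaks(env, height=threshold * env.max(), distance=dist)
    return [(p / fs, env[p]) for p in peaks]        # (TOO in s, amplitude)

# Toy A-scan: two echoes at 10 us and 25 us, 100 MHz sampling
fs = 100e6
t = np.arange(0, 40e-6, 1 / fs)
rf = (np.exp(-((t - 10e-6) / 0.5e-6) ** 2) +
      0.6 * np.exp(-((t - 25e-6) / 0.5e-6) ** 2)) * np.sin(2 * np.pi * 5e6 * t)
for too, amp in reduce_ascan(rf, fs):
    print(f"echo at {too * 1e6:.2f} us, amplitude {amp:.2f}")
```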
NASA Astrophysics Data System (ADS)
Meng, X. T.; Levin, D. S.; Chapman, J. W.; Li, D. C.; Yao, Z. E.; Zhou, B.
2017-02-01
The High Performance Time to Digital Converter (HPTDC), a multi-channel ASIC designed by the CERN Microelectronics group, has been proposed for the digitization of the thin-Resistive Plate Chambers (tRPC) in the ATLAS Muon Spectrometer Phase-1 upgrade project. These chambers, to be staged for higher luminosity LHC operation, will increase trigger acceptance and reduce or eliminate the fake muon trigger rates in the barrel-endcap transition region, corresponding to pseudo-rapidity range 1<|η|<1.3. Low level trigger candidates must be flagged within a maximum latency of 1075 ns, thus imposing stringent signal processing time performance requirements on the readout system in general, and on the digitization electronics in particular. This paper investigates the HPTDC signal latency performance based on a specially designed evaluation board coupled with an external FPGA evaluation board, when operated in triggerless mode, and under hit rate conditions expected in Phase-I. This hardware based study confirms previous simulations and demonstrates that the HPTDC in triggerless operation satisfies the digitization timing requirements in both leading edge and pair modes.
Li, Jian; Bloch, Pavel; Xu, Jing; Sarunic, Marinko V; Shannon, Lesley
2011-05-01
Fourier domain optical coherence tomography (FD-OCT) provides faster line rates, better resolution, and higher sensitivity for noninvasive, in vivo biomedical imaging compared to traditional time domain OCT (TD-OCT). However, because the signal processing for FD-OCT is computationally intensive, real-time FD-OCT applications demand powerful computing platforms to deliver acceptable performance. Graphics processing units (GPUs) have been used as coprocessors to accelerate FD-OCT by leveraging their relatively simple programming model to exploit thread-level parallelism. Unfortunately, GPUs do not "share" memory with their host processors, requiring additional data transfers between the GPU and CPU. In this paper, we implement a complete FD-OCT accelerator on a consumer grade GPU/CPU platform. Our data acquisition system uses spectrometer-based detection and a dual-arm interferometer topology with numerical dispersion compensation for retinal imaging. We demonstrate that the maximum line rate is dictated by the memory transfer time and not the processing time due to the GPU platform's memory model. Finally, we discuss how the performance trends of GPU-based accelerators compare to the expected future requirements of FD-OCT data rates.
NASA Technical Reports Server (NTRS)
Rebeske, John J , Jr; Rohlik, Harold E
1953-01-01
An analytical investigation was made to determine from component performance characteristics the effect of air bleed at the compressor outlet on the acceleration characteristics of a typical high-pressure-ratio single-spool turbojet engine. Consideration of several operating lines on the compressor performance map with two turbine-inlet temperatures showed that for a minimum acceleration time the turbine-inlet temperature should be the maximum allowable, and the operating line on the compressor map should be as close to the surge region as possible throughout the speed range. Operation along such a line would require a continuously varying bleed area. A relatively simple two-step area bleed gives only a small increase in acceleration time over a corresponding variable-area bleed. For the modes of operation considered, over 84 percent of the total acceleration time was required to accelerate through the low-speed range; therefore, better low-speed compressor performance (higher pressure ratios and efficiencies) would give a significant reduction in acceleration time.
Initial dynamic load estimates during configuration design
NASA Technical Reports Server (NTRS)
Schiff, Daniel
1987-01-01
This analysis includes the structural response to shock and vibration and evaluates the maximum deflections and material stresses and the potential for the occurrence of elastic instability, fatigue and fracture. The required computations are often performed by means of finite element analysis (FEA) computer programs in which the structure is simulated by a finite element model which may contain thousands of elements. The formulation of a finite element model can be time consuming, and substantial additional modeling effort may be necessary if the structure requires significant changes after initial analysis. Rapid methods for obtaining rough estimates of the structural response to shock and vibration are presented for the purpose of providing guidance during the initial mechanical design configuration stage.
NASA Technical Reports Server (NTRS)
Stubbs, S. M.; Tanner, J. A.
1976-01-01
During maximum braking the average ratio of drag-force friction coefficient developed by the antiskid system to maximum drag-force friction coefficient available at the tire/runway interface was higher on dry surfaces than on wet surfaces. The gross stopping power generated by the brake system on the dry surface was more than twice that obtained on the wet surfaces. With maximum braking applied, the average ratio of side-force friction coefficient developed by the tire under antiskid control to maximum side-force friction available at the tire/runway interface of a free-rolling yawed tire was shown to decrease with increasing yaw angle. Braking reduced the side-force friction coefficient on a dry surface by 75 percent as the wheel slip ratio was increased to 0.3; on a flooded surface the coefficient dropped to near zero for the same slip ratio. Locked wheel skids were observed when the tire encountered a runway surface transition from dry to flooded, due in part to the response time required for the system to sense abrupt changes in the runway friction; however, the antiskid system quickly responded by reducing brake pressure and cycling normally during the remainder of the run on the flooded surface.
Maximum Likelihood Time-of-Arrival Estimation of Optical Pulses via Photon-Counting Photodetectors
NASA Technical Reports Server (NTRS)
Erkmen, Baris I.; Moision, Bruce E.
2010-01-01
Many optical imaging, ranging, and communications systems rely on the estimation of the arrival time of an optical pulse. Recently, such systems have been increasingly employing photon-counting photodetector technology, which changes the statistics of the observed photocurrent. This requires time-of-arrival estimators to be developed and their performances characterized. The statistics of the output of an ideal photodetector, which are well modeled as a Poisson point process, were considered. An analytical model was developed for the mean-square error of the maximum likelihood (ML) estimator, demonstrating two phenomena that cause deviations from the minimum achievable error at low signal power. An approximation was derived to the threshold at which the ML estimator essentially fails to provide better than a random guess of the pulse arrival time. Comparing the analytic model performance predictions to those obtained via simulations, it was verified that the model accurately predicts the ML performance over all regimes considered. There is little prior art that attempts to understand the fundamental limitations to time-of-arrival estimation from Poisson statistics. This work establishes both a simple mathematical description of the error behavior, and the associated physical processes that yield this behavior. Previous work on mean-square error characterization for ML estimators has predominantly focused on additive Gaussian noise. This work demonstrates that the discrete nature of the Poisson noise process leads to a distinctly different error behavior.
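A sketch of the estimator being analysed, under simplifying assumptions: the photon counts in each time bin are Poisson with a rate given by a shifted pulse plus background, and the ML time of arrival is the candidate delay that maximises the Poisson log-likelihood. The pulse shape, rates and bin width are illustrative, not those of the cited work.

```python
# Sketch of maximum-likelihood time-of-arrival estimation from photon counts
# modelled as a Poisson point process: evaluate the Poisson log-likelihood of
# the binned counts for each candidate delay and take the argmax.
import numpy as np

rng = np.random.default_rng(2)
dt, T = 1e-9, 2e-6                       # 1 ns bins over a 2 us window
t = np.arange(0, T, dt)

def intensity(delay, signal=5e7, dark=1e6, width=50e-9):
    # rate (photons/s): Gaussian pulse at `delay` on top of a background rate
    return dark + signal * np.exp(-0.5 * ((t - delay) / width) ** 2)

true_delay = 0.8e-6
counts = rng.poisson(intensity(true_delay) * dt)    # observed photon counts

candidates = np.arange(0, T, 5e-9)
def loglike(d):
    lam = intensity(d) * dt
    return np.sum(counts * np.log(lam) - lam)        # Poisson log-likelihood (no k! term)

ml_delay = candidates[np.argmax([loglike(d) for d in candidates])]
print(f"true {true_delay * 1e9:.0f} ns, ML estimate {ml_delay * 1e9:.0f} ns")
```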
Spline-based high-accuracy piecewise-polynomial phase-to-sinusoid amplitude converters.
Petrinović, Davor; Brezović, Marko
2011-04-01
We propose a method for direct digital frequency synthesis (DDS) using a cubic spline piecewise-polynomial model for a phase-to-sinusoid amplitude converter (PSAC). This method offers maximum smoothness of the output signal. Closed-form expressions for the cubic polynomial coefficients are derived in the spectral domain and the performance analysis of the model is given in the time and frequency domains. We derive the closed-form performance bounds of such DDS using conventional metrics: rms and maximum absolute errors (MAE) and maximum spurious free dynamic range (SFDR) measured in the discrete time domain. The main advantages of the proposed PSAC are its simplicity, analytical tractability, and inherent numerical stability for high table resolutions. Detailed guidelines for a fixed-point implementation are given, based on the algebraic analysis of all quantization effects. The results are verified on 81 PSAC configurations with the output resolutions from 5 to 41 bits by using a bit-exact simulation. The VHDL implementation of a high-accuracy DDS based on the proposed PSAC with 28-bit input phase word and 32-bit output value achieves SFDR of its digital output signal between 180 and 207 dB, with a signal-to-noise ratio of 192 dB. Its implementation requires only one 18 kB block RAM and three 18-bit embedded multipliers in a typical field-programmable gate array (FPGA) device. © 2011 IEEE
Shedding of Soluble Glycoprotein 1 Detected During Acute Lassa Virus Infection in Human Subjects
2010-11-09
Modeling operators' emergency response time for chemical processing operations.
Murray, Susan L; Harputlu, Emrah; Mentzer, Ray A; Mannan, M Sam
2014-01-01
Operators have a crucial role during emergencies at a variety of facilities such as chemical processing plants. When an abnormality occurs in the production process, the operator often has limited time to either take corrective actions or evacuate before the situation becomes deadly. It is crucial that system designers and safety professionals can estimate the time required for a response before procedures and facilities are designed and operations are initiated. There are existing industrial engineering techniques to establish time standards for tasks performed at a normal working pace. However, it is reasonable to expect the time required to take action in emergency situations will be different than working at a normal production pace. It is possible that in an emergency, operators will act faster compared to a normal pace. It would be useful for system designers to be able to establish a time range for operators' response times for emergency situations. This article develops a modeling approach to estimate the time standard range for operators taking corrective actions or following evacuation procedures in emergency situations. This will aid engineers and managers in establishing time requirements for operators in emergency situations. The methodology used for this study combines a well-established industrial engineering technique for determining time requirements (predetermined time standard system) and adjustment coefficients for emergency situations developed by the authors. Numerous videos of workers performing well-established tasks at a maximum pace were studied. As an example, one of the tasks analyzed was pit crew workers changing tires as quickly as they could during a race. The operations in these videos were decomposed into basic, fundamental motions (such as walking, reaching for a tool, and bending over) by studying the videos frame by frame. A comparison analysis was then performed between the emergency pace and the normal working pace operations to determine performance coefficients. These coefficients represent the decrease in time required for various basic motions in emergency situations and were used to model an emergency response. This approach will make hazardous operations requiring operator response, alarm management, and evacuation processes easier to design and predict. An application of this methodology is included in the article. The emergency response time was roughly one-third shorter than the normal response time.
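A toy sketch of the modelling approach just described: a task is decomposed into basic motions with predetermined standard times, and each motion is scaled by an emergency performance coefficient. All motion times and coefficient values below are hypothetical placeholders, not the article's published values.

```python
# Predetermined time standard adjusted by emergency performance coefficients.
# All numbers are hypothetical placeholders for illustration.
normal_time_s = {            # predetermined standard times per basic motion
    "walk_3m": 3.6,
    "reach_for_valve": 1.2,
    "bend_and_arise": 2.0,
    "turn_valve": 4.5,
}
emergency_coeff = {          # fraction of normal time under an emergency pace
    "walk_3m": 0.65,
    "reach_for_valve": 0.80,
    "bend_and_arise": 0.75,
    "turn_valve": 0.90,
}

task = ["walk_3m", "bend_and_arise", "reach_for_valve", "turn_valve"]
normal = sum(normal_time_s[m] for m in task)
emergency = sum(normal_time_s[m] * emergency_coeff[m] for m in task)
print(f"normal pace: {normal:.1f} s, emergency pace: {emergency:.1f} s "
      f"({(1 - emergency / normal):.0%} faster)")
```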
33 CFR Appendix A to Part 154 - Guidelines for Detonation Flame Arresters
Code of Federal Regulations, 2010 CFR
2010-07-01
... (CG-522). 1. Scope 1.1This standard provides the minimum requirements for design, construction.../Circ. 373/Rev. 1—Revised Standards for the Design, Testing and Locating of Devices to Prevent the... maximum design pressure drop for that maximum flow rate. 6.1.10Maximum operating pressure. 7. Materials 7...
49 CFR 178.345-3 - Structural integrity.
Code of Federal Regulations, 2010 CFR
2010-10-01
... requirements and acceptance criteria. (1) The maximum calculated design stress at any point in the cargo tank wall may not exceed the maximum allowable stress value prescribed in Section VIII of the ASME Code (IBR... Code or the ASTM standard to which the material is manufactured. (3) The maximum design stress at any...
46 CFR 52.01-55 - Increase in maximum allowable working pressure.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 2 2011-10-01 2011-10-01 false Increase in maximum allowable working pressure. 52.01-55 Section 52.01-55 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING POWER BOILERS General Requirements § 52.01-55 Increase in maximum allowable working pressure. (a) When...
46 CFR 52.01-55 - Increase in maximum allowable working pressure.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 2 2012-10-01 2012-10-01 false Increase in maximum allowable working pressure. 52.01-55 Section 52.01-55 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING POWER BOILERS General Requirements § 52.01-55 Increase in maximum allowable working pressure. (a) When...
46 CFR 52.01-55 - Increase in maximum allowable working pressure.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 2 2010-10-01 2010-10-01 false Increase in maximum allowable working pressure. 52.01-55 Section 52.01-55 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING POWER BOILERS General Requirements § 52.01-55 Increase in maximum allowable working pressure. (a) When...
46 CFR 52.01-55 - Increase in maximum allowable working pressure.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 2 2014-10-01 2014-10-01 false Increase in maximum allowable working pressure. 52.01-55 Section 52.01-55 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING POWER BOILERS General Requirements § 52.01-55 Increase in maximum allowable working pressure. (a) When...
46 CFR 52.01-55 - Increase in maximum allowable working pressure.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 2 2013-10-01 2013-10-01 false Increase in maximum allowable working pressure. 52.01-55 Section 52.01-55 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING POWER BOILERS General Requirements § 52.01-55 Increase in maximum allowable working pressure. (a) When...
Patient-specific Distraction Regimen to Avoid Growth-rod Failure.
Agarwal, Aakash; Jayaswal, Arvind; Goel, Vijay K; Agarwal, Anand K
2018-02-15
A finite element study to establish the relationship between a patient's curve flexibility (determined using curve correction under gravity) in juvenile idiopathic scoliosis and the required distraction frequency to avoid growth rod fracture, as a function of time. To perform a parametric analysis using a juvenile scoliotic spine model (single mid-thoracic curve with the apex at the eighth thoracic vertebra) and establish the relationship between curve flexibility (determined using curve correction under gravity) and the distraction interval that allows a higher factor of safety for the growth rods. Previous studies have shown that frequent distractions of smaller magnitude are less likely to result in rod failure. However, no methodology or chart has been provided to apply this knowledge to the individual patients who undergo the treatment. This study aims to fill in that gap. The parametric study was performed by varying the material properties of the disc, hence altering the axial stiffness of the scoliotic spine model. The stresses on the rod were found to increase with increased axial stiffness of the spine, and this increased the optimal distraction frequency required to achieve a factor of safety of two for the growth rods. A relationship between the percentage correction in Cobb's angle due to gravity alone, and the required distraction interval for limiting the maximum von Mises stress to 255 MPa on the growth rods was established. The distraction interval required to limit the stresses to the selected nominal value reduces with increase in stiffness of the spine. Furthermore, the appropriate distraction interval reduces for each model as the spine becomes stiffer with time (autofusion). This points to the fact that the optimal distraction frequency is a time-dependent variable that must be achieved to keep the maximum von Mises stress under the specified factor of safety. The current study demonstrates the possibility of translating fundamental information from finite element modeling to the clinical arena, for mitigating the occurrence of growth rod fracture, that is, establishing a relationship between optimal distraction interval and curve flexibility (determined using curve correction under gravity). N/A.
Toler, Julianne D; Petschauer, Meredith A; Mihalik, Jason P; Oyama, Sakiko; Halverson, S Doug; Guskiewicz, Kevin M
2010-03-01
To determine how head movement and time to access airway were affected by 3 emergency airway access techniques used in American football. Prospective counterbalanced design. University research laboratory. Eighteen certified athletic trainers (ATCs) and 18 noncertified students (NCSs). Each participant performed 1 trial of each of the following 3 airway access techniques: quick release mechanism (QRM), cordless screwdriver (CSD), and pocket mask insertion (PMI). Time to task completion in seconds, head movement in each plane (sagittal, frontal, and transverse), maximum head movement in each plane, helmet movement in each plane, and maximum helmet movement in each plane. We observed a significant difference between all 3 techniques with respect to time required to achieve airway access (F(2,68) = 263.88; P < 0.001). The PMI allowed for the quickest access followed by the QRM and CSD techniques, respectively. The PMI technique also resulted in significantly less head movement (F(2,68) = 9.06; P = 0.001) and less maximum head movement (F(2,68) = 13.84; P < 0.001) in the frontal plane compared with the QRM and CSD techniques. The PMI technique should be used to gain rapid airway access when managing a football athlete experiencing respiratory arrest in the presence of a suspected cervical spine injury. In the event the athlete does not present with respiratory arrest, the facemask may be removed carefully with a pocket mask ready. Medical professionals must be familiar with differences in equipment and the effects these may have on the management of the spine-injured athlete.
Kuwayama, Kenji; Miyaguchi, Hajime; Iwata, Yuko T; Kanamori, Tatsuyuki; Tsujikawa, Kenji; Yamamuro, Tadashi; Segawa, Hiroki; Inoue, Hiroyuki
2017-04-01
Hair and nails are often used to prove long-term intake of drugs in forensic drug testing. The aim of this study was to evaluate the effectiveness of drug testing using hair and nails and the feasibility of determining when drugs were ingested by measuring the time-courses of drug concentrations in hair and toenails after single administrations of various drugs. Healthy subjects ingested four pharmaceutical products containing eight active ingredients in single doses. Hair and toenails were collected at predetermined intervals, and drug concentrations in hair and nails were measured for 12 months. The administered drugs and their main metabolites were extracted using micropulverized extraction with a stainless steel bullet and were analyzed using liquid chromatography/tandem mass spectrometry. Acidic compounds such as ibuprofen and its metabolites were not detected in either specimen. Acetaminophen, a weakly acidic compound, was detected in nails more frequently than in hair. The maximum concentration of allyl isopropyl acetylurea, a neutral compound, in nails was significantly higher than in hair. Nails are an effective specimen to detect neutral and weakly acidic compounds. For fexofenadine, a zwitterionic compound, and for most basic compounds, the maximum concentrations in hair segments tended to be higher than those in nails. The hair segments showing the maximum concentrations varied between drugs, samples, and subjects. Drug concentrations in hair segments greatly depended on the selection of the hair. Careful interpretation of analytical results is required to predict the time of drug intake. Copyright © 2016 John Wiley & Sons, Ltd.
Initial operation of the NSTX-Upgrade real-time velocity diagnostic
Podestà, M.; Bell, R. E.
2016-11-03
A real-time velocity (RTV) diagnostic based on active charge-exchange recombination spectroscopy is now operational on the National Spherical Torus Experiment-Upgrade (NSTX-U) spherical torus (Menard et al 2012 Nucl. Fusion 52 083015). We designed the system in order to supply plasma velocity data in real time to the NSTX-U plasma control system, as required for the implementation of toroidal rotation control. Our measurements are available from four radii at a maximum sampling frequency of 5 kHz. Post-discharge analysis of RTV data provides additional information on ion temperature, toroidal velocity and density of carbon impurities. Furthermore, examples of physics studies enabled by RTV measurements from initial operations of NSTX-U are discussed.
Gasohol Quality Control for Real Time Applications by Means of a Multimode Interference Fiber Sensor
Rodríguez Rodríguez, Adolfo J.; Baldovino-Pantaleón, Oscar; Domínguez Cruz, Rene F.; Zamarreño, Carlos R.; Matías, Ignacio R.; May-Arrioja, Daniel A.
2014-01-01
In this work we demonstrate efficient quality control of a variety of gasoline and ethanol (gasohol) blends using a multimode interference (MMI) fiber sensor. The operational principle relies on the fact that the addition of ethanol to the gasohol blend reduces the refractive index (RI) of the gasoline. Since MMI sensors are capable of detecting small RI changes, the ethanol content of the gasohol blend is easily determined by tracking the MMI peak wavelength response. Gasohol blends with ethanol contents ranging from 0% to 50% has been clearly identified using this device, which provides a linear response with a maximum sensitivity of 0.270 nm/% EtOH. The sensor can also distinguish when water incorporated in the blend has exceeded the maximum volume tolerated by the gasohol blend, which is responsible for phase separation of the ethanol and gasoline and could cause serious engine failures. Since the MMI sensor is straightforward to fabricate and does not require any special coating it is a cost effective solution for real time and in-situ monitoring of the quality of gasohol blends. PMID:25256111
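A sketch of the calibration step implied above, assuming the reported linear response (about 0.270 nm per % EtOH): fit the MMI peak wavelength against ethanol content and invert the fit to estimate the ethanol fraction of an unknown blend. The wavelength readings below are synthetic illustrations, not measured data.

```python
# Linear calibration of MMI peak wavelength versus ethanol content, then
# inversion to estimate the ethanol fraction of an unknown gasohol blend.
import numpy as np

ethanol_pct = np.array([0, 10, 20, 30, 40, 50])
peak_nm = 1550.0 + 0.270 * ethanol_pct + np.random.default_rng(3).normal(0, 0.05, 6)

slope, intercept = np.polyfit(ethanol_pct, peak_nm, 1)   # calibration line
print(f"sensitivity: {slope:.3f} nm/% EtOH")

def estimate_ethanol(measured_peak_nm):
    return (measured_peak_nm - intercept) / slope

print(f"unknown blend at {estimate_ethanol(1556.8):.1f} % EtOH")
```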
Overload characteristics of paper-polypropylene-paper cable
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ernst, A.
1990-09-01
The short-time rating of PPP pipe-type cable may be lower than the equivalent paper cable sized to carry the same normal load. The ratings depend on the relative conductor sizes and the maximum allowable conductor temperatures of the insulation. The insulation thermal resistivity may be a significant parameter for overload times of approximately one hour and should be verified for PPP insulation. The thermal capacitance temperature characteristic of PPP insulation is not known. However, the overload ratings are not very sensitive to this parameter. Overload ratings are given for maximum conductor temperatures from 105 C to 130 C. Use of ratings based on temperatures greater than 105 C would require testing to determine the extent of degradation of the insulation at these higher temperatures. PPP-insulated cable will be thermally stable over a wider range of operating conditions (voltage and current) compared with paper-insulated cable. The short-circuit ratings of PPP- and paper-insulated cable systems and the positive/negative and zero sequence impedances are compared. 21 refs., 22 figs., 5 tabs.
NASA Astrophysics Data System (ADS)
Arfani, Nurfitri; Nur, Fatmawati; Hafsan, Azrianingsih, Rodiyati
2017-05-01
Bacteriocin is a peptide that is easily degraded by proteolytic enzymes in the digestive systems of animals, including humans. It has antimicrobial activity against pathogenic bacteria. Lactobacillus sp. is one type of lactic acid bacteria (LAB) that occupies the intestines of ducks (Anas domesticus L.). The purpose of this research was to determine the time of maximum protein production by Lactobacillus sp. and to determine the inhibitory activity of bacteriocin against pathogenic bacteria (Escherichia coli and Staphylococcus aureus). Using the Bradford method, the results showed that the optimum time of highest bacteriocin production was after 36 hours of incubation, with a protein content of 0.93 mg/ml. The bacteriocin inhibitory activity against Escherichia coli showed that a protein concentration of 30% gave a maximum inhibition index of 1.1 mm, while for Staphylococcus aureus, a concentration of 70% gave a maximum inhibition index of 0.3 mm. Further research is required to determine the stationary state of bacteriocin production under these conditions.
Using optimal transport theory to estimate transition probabilities in metapopulation dynamics
Nichols, Jonathan M.; Spendelow, Jeffrey A.; Nichols, James D.
2017-01-01
This work considers the estimation of transition probabilities associated with populations moving among multiple spatial locations based on numbers of individuals at each location at two points in time. The problem is generally underdetermined as there exists an extremely large number of ways in which individuals can move from one set of locations to another. A unique solution therefore requires a constraint. The theory of optimal transport provides such a constraint in the form of a cost function, to be minimized in expectation over the space of possible transition matrices. We demonstrate the optimal transport approach on marked bird data and compare to the probabilities obtained via maximum likelihood estimation based on marked individuals. It is shown that by choosing the squared Euclidean distance as the cost, the estimated transition probabilities compare favorably to those obtained via maximum likelihood with marked individuals. Other implications of this cost are discussed, including the ability to accurately interpolate the population's spatial distribution at unobserved points in time and the more general relationship between the cost and minimum transport energy.
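As a rough illustration of the approach described above, the sketch below sets up a small transport problem in Python and row-normalizes the resulting plan into transition probabilities. The site coordinates, counts, and the use of SciPy's linear-programming solver are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: estimate transition probabilities between spatial
# locations from counts at two time points, using an optimal-transport plan
# with squared Euclidean distance as the cost (all data are illustrative).
import numpy as np
from scipy.optimize import linprog

coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])   # site coordinates
n_t0   = np.array([40.0, 35.0, 25.0])                      # counts at time t
n_t1   = np.array([30.0, 30.0, 40.0])                      # counts at time t+1 (same total)

k = len(coords)
# Cost matrix: squared Euclidean distance between every pair of sites.
cost = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(axis=2)

# Decision variables x[i, j] = individuals moving from site i to j, flattened
# row-major. Row sums must match n_t0, column sums must match n_t1.
A_eq, b_eq = [], []
for i in range(k):                      # outflow constraints
    row = np.zeros(k * k); row[i * k:(i + 1) * k] = 1.0
    A_eq.append(row); b_eq.append(n_t0[i])
for j in range(k):                      # inflow constraints
    col = np.zeros(k * k); col[j::k] = 1.0
    A_eq.append(col); b_eq.append(n_t1[j])

res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0, None)] * (k * k), method="highs")
plan = res.x.reshape(k, k)

# Transition probabilities: normalize each origin row of the transport plan.
P = plan / plan.sum(axis=1, keepdims=True)
print(np.round(P, 3))
```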
Description and performance analysis of a generalized optimal algorithm for aerobraking guidance
NASA Technical Reports Server (NTRS)
Evans, Steven W.; Dukeman, Greg A.
1993-01-01
A practical real-time guidance algorithm has been developed for aerobraking vehicles which nearly minimizes the maximum heating rate, the maximum structural loads, and the post-aeropass delta V requirement for orbit insertion. The algorithm is general and reusable in the sense that a minimum of assumptions is made, thus greatly reducing the number of parameters that must be determined prior to a given mission. A particularly interesting feature is that in-plane guidance performance is tuned by adjusting one mission-dependent parameter, the bank margin; similarly, the out-of-plane guidance performance is tuned by adjusting a plane controller time constant. Other features of the algorithm are simplicity, efficiency and ease of use. The vehicle is assumed to be trimmed, with bank angle modulation as the method of trajectory control. Performance of this guidance algorithm is examined by its use in an aerobraking testbed program. The performance inquiry extends to a wide range of entry speeds covering a number of potential mission applications. Favorable results have been obtained with a minimum of development effort, and directions for improvement of performance are indicated.
A comparison of the wavelet and short-time fourier transforms for Doppler spectral analysis.
Zhang, Yufeng; Guo, Zhenyu; Wang, Weilian; He, Side; Lee, Ting; Loew, Murray
2003-09-01
Doppler spectrum analysis provides a non-invasive means to measure blood flow velocity and to diagnose arterial occlusive disease. The time-frequency representation of the Doppler blood flow signal is normally computed by using the short-time Fourier transform (STFT). This transform requires stationarity of the signal during a finite time interval, and thus imposes some constraints on the representation estimate. In addition, the STFT has a fixed time-frequency window, making it inaccurate for analyzing signals with relatively wide bandwidths that change rapidly with time. In the present study, the wavelet transform (WT), which has a flexible time-frequency window, was used to investigate its advantages and limitations for the analysis of the Doppler blood flow signal. Representations computed using the WT with a modified Morlet wavelet were investigated and compared with the theoretical representation and those computed using the STFT with a Gaussian window. The time and frequency resolutions of these two approaches were compared. Three indices, the normalized root-mean-squared errors of the minimum, the maximum and the mean frequency waveforms, were used to evaluate the performance of the WT. Results showed that the WT can not only be used as an alternative signal processing tool to the STFT for Doppler blood flow signals, but can also generate a time-frequency representation with better resolution than the STFT. In addition, the WT method can provide both satisfactory mean frequencies and maximum frequencies. This technique is expected to be useful for the analysis of Doppler blood flow signals to quantify arterial stenoses.
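To make the STFT-versus-wavelet comparison concrete, the following Python sketch computes both representations for a synthetic frequency-swept signal: a fixed-window STFT via SciPy and a Morlet wavelet transform by direct convolution. The signal, sampling rate, and window parameters are illustrative assumptions, not the clinical Doppler data analyzed in the study.

```python
# Illustrative comparison (not the paper's data): STFT with a Gaussian window
# versus a Morlet continuous wavelet transform on a synthetic swept-frequency signal.
import numpy as np
from scipy.signal import stft, get_window

fs = 4000.0                                         # sampling rate, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * (200 * t + 400 * t ** 2))    # chirp, ~200 -> ~1000 Hz

# Short-time Fourier transform: fixed time-frequency window.
f_stft, t_stft, Z = stft(x, fs=fs, window=get_window(("gaussian", 16), 128),
                         nperseg=128, noverlap=96)

# Morlet wavelet transform by direct convolution: the analysis window shortens
# as the centre frequency rises, giving a flexible time-frequency window.
def morlet_cwt(sig, freqs, fs, w0=6.0):
    out = np.empty((len(freqs), len(sig)), dtype=complex)
    for i, f in enumerate(freqs):
        s = w0 / (2 * np.pi * f)                    # wavelet time scale
        tt = np.arange(-4 * s, 4 * s, 1 / fs)
        psi = np.exp(1j * w0 * tt / s) * np.exp(-(tt / s) ** 2 / 2) / np.sqrt(s)
        out[i] = np.convolve(sig, np.conj(psi[::-1]), mode="same")
    return out

W = morlet_cwt(x, np.linspace(100, 1200, 60), fs)

print("STFT magnitude shape:", np.abs(Z).shape)
print("CWT magnitude shape: ", np.abs(W).shape)
```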
GrammarViz 3.0: Interactive Discovery of Variable-Length Time Series Patterns
Senin, Pavel; Lin, Jessica; Wang, Xing; ...
2018-02-23
The problems of recurrent and anomalous pattern discovery in time series, e.g., motifs and discords, respectively, have received a lot of attention from researchers in the past decade. However, since the pattern search space is usually intractable, most existing detection algorithms require that the patterns have discriminative characteristics and have their lengths known in advance and provided as input, which is an unreasonable requirement for many real-world problems. In addition, patterns of similar structure, but of different lengths may co-exist in a time series. In order to address these issues, we have developed algorithms for variable-length time series pattern discovery that are based on symbolic discretization and grammar inference—two techniques whose combination enables the structured reduction of the search space and discovery of the candidate patterns in linear time. In this work, we present GrammarViz 3.0—a software package that provides implementations of the proposed algorithms and a graphical user interface for interactive variable-length time series pattern discovery. The current version of the software provides an alternative grammar inference algorithm that improves the time series motif discovery workflow, and introduces an experimental procedure for automated discretization parameter selection that builds upon the minimum cardinality maximum cover principle and aids the time series recurrent and anomalous pattern discovery.
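The symbolic-discretization step that approaches of this kind build on can be sketched as follows: z-normalize a sliding window, reduce it with piecewise aggregate approximation (PAA), and map each segment to a letter using Gaussian breakpoints. This is a minimal SAX-style illustration with assumed window, word, and alphabet sizes, not the GrammarViz 3.0 implementation.

```python
# Minimal sketch of SAX-style symbolic discretization (z-normalize, piecewise
# aggregate approximation, then map to letters via Gaussian breakpoints).
# Window/word/alphabet sizes are illustrative choices, not GrammarViz defaults.
import numpy as np
from scipy.stats import norm

def sax_word(segment, word_len=4, alphabet_size=4):
    seg = (segment - segment.mean()) / (segment.std() + 1e-12)   # z-normalize
    paa = seg.reshape(word_len, -1).mean(axis=1)                 # PAA reduction
    # Breakpoints split the standard normal into equiprobable regions.
    breakpoints = norm.ppf(np.linspace(0, 1, alphabet_size + 1)[1:-1])
    letters = "abcdefghijklmnopqrstuvwxyz"
    return "".join(letters[np.searchsorted(breakpoints, v)] for v in paa)

series = np.sin(np.linspace(0, 20, 400)) + 0.1 * np.random.randn(400)
window = 40
words = [sax_word(series[i:i + window]) for i in range(0, len(series) - window, window // 2)]
print(words[:8])   # the word sequence would then feed a grammar-inference step
```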
Characterization of 3-Dimensional PET Systems for Accurate Quantification of Myocardial Blood Flow.
Renaud, Jennifer M; Yip, Kathy; Guimond, Jean; Trottier, Mikaël; Pibarot, Philippe; Turcotte, Eric; Maguire, Conor; Lalonde, Lucille; Gulenchyn, Karen; Farncombe, Troy; Wisenberg, Gerald; Moody, Jonathan; Lee, Benjamin; Port, Steven C; Turkington, Timothy G; Beanlands, Rob S; deKemp, Robert A
2017-01-01
Three-dimensional (3D) mode imaging is the current standard for PET/CT systems. Dynamic imaging for quantification of myocardial blood flow with short-lived tracers, such as 82Rb-chloride, requires accuracy to be maintained over a wide range of isotope activities and scanner counting rates. We proposed new performance standard measurements to characterize the dynamic range of PET systems for accurate quantitative imaging. 82Rb or 13N-ammonia (1,100-3,000 MBq) was injected into the heart wall insert of an anthropomorphic torso phantom. A decaying isotope scan was obtained over 5 half-lives on 9 different 3D PET/CT systems and 1 3D/2-dimensional PET-only system. Dynamic images (28 × 15 s) were reconstructed using iterative algorithms with all corrections enabled. Dynamic range was defined as the maximum activity in the myocardial wall with less than 10% bias, from which corresponding dead-time, counting rates, and/or injected activity limits were established for each scanner. Scatter correction residual bias was estimated as the maximum cavity blood-to-myocardium activity ratio. Image quality was assessed via the coefficient of variation measuring nonuniformity of the left ventricular myocardium activity distribution. Maximum recommended injected activity/body weight, peak dead-time correction factor, counting rates, and residual scatter bias for accurate cardiac myocardial blood flow imaging were 3-14 MBq/kg, 1.5-4.0, 22-64 Mcps singles and 4-14 Mcps prompt coincidence counting rates, and 2%-10% on the investigated scanners. Nonuniformity of the myocardial activity distribution varied from 3% to 16%. Accurate dynamic imaging is possible on the 10 3D PET systems if the maximum injected MBq/kg values are respected to limit peak dead-time losses during the bolus first-pass transit. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
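The arithmetic behind a decaying-source characterization can be illustrated as follows: track the insert concentration over successive half-lives, flag when it falls below a scanner's accuracy limit, and check an injection against a per-weight recommendation. The accuracy limit, body weight, and injected activity below are assumed values chosen only to show the calculation.

```python
# Illustrative arithmetic for a decaying-source characterization: follow the
# myocardial-insert activity concentration over successive half-lives and flag
# the point at which it drops below a scanner's accuracy limit (value assumed).
half_life_s = 75.0                 # 82Rb half-life, about 75 s
c0 = 165.0                         # starting concentration in the insert, kBq/mL
accuracy_limit = 60.0              # hypothetical max concentration with <10% bias

for n in range(10):                # the scan spans several half-lives
    c = c0 * 0.5 ** n
    status = "within dynamic range" if c <= accuracy_limit else "dead-time-limited"
    print(f"after {n} half-lives: {c:7.2f} kBq/mL  ({status})")

# Corresponding injected-activity check against a per-weight recommendation.
weight_kg, injected_MBq = 80.0, 900.0
max_MBq_per_kg = 10.0              # within the 3-14 MBq/kg range reported above
print("within recommendation:", injected_MBq / weight_kg <= max_MBq_per_kg)
```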
NASA Technical Reports Server (NTRS)
Wallace, G. R.; Weathers, G. D.; Graf, E. R.
1973-01-01
The statistics of filtered pseudorandom digital sequences called hybrid-sum sequences, formed from the modulo-two sum of several maximum-length sequences, are analyzed. The results indicate that a relation exists between the statistics of the filtered sequence and the characteristic polynomials of the component maximum length sequences. An analysis procedure is developed for identifying a large group of sequences with good statistical properties for applications requiring the generation of analog pseudorandom noise. By use of the analysis approach, the filtering process is approximated by the convolution of the sequence with a sum of unit step functions. A parameter reflecting the overall statistical properties of filtered pseudorandom sequences is derived. This parameter is called the statistical quality factor. A computer algorithm to calculate the statistical quality factor for the filtered sequences is presented, and the results for two examples of sequence combinations are included. The analysis reveals that the statistics of the signals generated with the hybrid-sum generator are potentially superior to the statistics of signals generated with maximum-length generators. Furthermore, fewer calculations are required to evaluate the statistics of a large group of hybrid-sum generators than are required to evaluate the statistics of the same size group of approximately equivalent maximum-length sequences.
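A minimal sketch of the sequence-generation step is shown below: two maximum-length sequences are produced by linear-feedback shift registers with different primitive characteristic polynomials and combined by a modulo-two sum to form a hybrid-sum sequence. The degree-5 polynomials and register seeds are illustrative choices, not those analyzed in the report.

```python
# Sketch: generate two maximum-length (m-) sequences with linear-feedback shift
# registers and form a hybrid-sum sequence as their modulo-two sum.
# The characteristic polynomials below are standard degree-5 primitive examples.
import numpy as np

def m_sequence(taps, n_bits, seed=1):
    """Binary m-sequence from an n_bits-stage Fibonacci LFSR with the given taps."""
    state = [(seed >> i) & 1 for i in range(n_bits)]
    out = []
    for _ in range(2 ** n_bits - 1):        # full period of an m-sequence
        out.append(state[-1])
        fb = 0
        for t in taps:                      # feedback = XOR of the tapped stages
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return np.array(out, dtype=int)

# x^5 + x^3 + 1 and x^5 + x^4 + x^3 + x^2 + 1 are both primitive over GF(2).
seq_a = m_sequence(taps=[5, 3], n_bits=5, seed=0b00001)
seq_b = m_sequence(taps=[5, 4, 3, 2], n_bits=5, seed=0b00001)

hybrid = seq_a ^ seq_b                      # modulo-two (XOR) sum
analog = 2 * hybrid - 1                     # map {0,1} -> {-1,+1} before filtering
print("period:", len(hybrid), "balance:", analog.sum())
```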
Analysis of error in TOMS total ozone as a function of orbit and attitude parameters
NASA Technical Reports Server (NTRS)
Gregg, W. W.; Ardanuy, P. E.; Braun, W. C.; Vallette, B. J.; Bhartia, P. K.; Ray, S. N.
1991-01-01
Computer simulations of orbital scenarios were performed to examine the effects of orbital altitude, equator crossing time, attitude uncertainty, and orbital eccentricity on ozone observations by future satellites. These effects were assessed by determining changes in solar and viewing geometry and earth daytime coverage loss. The importance of these changes on ozone retrieval was determined by simulating uncertainties in the TOMS ozone retrieval algorithm. The major findings are as follows: (1) Drift of equator crossing time from local noon would have the largest effect on the quality of ozone derived from TOMS. The most significant effect of this drift is the loss of earth daytime coverage in the winter hemisphere. The loss in coverage increases from 1 degree latitude for ±1 hour from noon, 6 degrees for ±3 hours from noon, to 53 degrees for ±6 hours from noon. An additional effect is the increase in ozone retrieval errors due to high solar zenith angles. (2) To maintain contiguous earth coverage, the maximum scan angle of the sensor must be increased with decreasing orbital altitude. The maximum scan angle required for full coverage at the equator varies from 60 degrees at 600 km altitude to 45 degrees at 1200 km. This produces an increase in spacecraft zenith angle, theta, which decreases the ozone retrieval accuracy. The range in theta was from approximately 72 degrees at 600 km to approximately 57 degrees at 1200 km. (3) The effect of elliptical orbits is to create gaps in coverage along the subsatellite track. An elliptical orbit with a 200 km perigee and 1200 km apogee produced a maximum earth coverage gap of about 45 km at the perigee at nadir. (4) An attitude uncertainty of 0.1 degree in each axis (pitch, roll, yaw) affected the maximum scan angle required to view the pole and the maximum solar zenith angle.
Nowak, Dennis A; Hermsdörfer, Joachim
2003-09-01
Persons with impaired manual sensibility frequently report problems using the hand in manipulative tasks, such as using tools or buttoning a shirt. At least two control processes determine grip forces during voluntary object manipulation. Anticipatory force control specifies the motor commands on the basis of predictions about physical object properties and the consequences of our own actions. Feedback sensory information from the grasping digits, representing mechanical events at the skin-object interface, automatically modifies grip force according to the actual loading requirements and updates sensorimotor memories to support anticipatory grip force control. We investigated grip force control in nine patients with moderately impaired tactile sensibility of the grasping digits and in nine sex- and age-matched healthy controls lifting and holding an instrumented object. In healthy controls grip force was adequately scaled to the weight of the object to be lifted. The grip force was programmed to smoothly change in parallel with load force over the entire lifting movement. In particular, the grip force level was regulated in an economical way to be always slightly higher than the minimum required to prevent the object slipping. The temporal coupling between the grip and load force profiles achieved a high precision with the maximum grip and load forces coinciding closely in time. For the temporal regulation of the grip force profile, patients with impaired tactile sensibility maintained the close co-ordination between the proximal arm muscles responsible for the lifting movement and the fingers stabilising the grasp. Maximum grip force coincided with maximum acceleration of the lifting movement. However, patients employed greater maximum grip forces and greater grip forces to hold the object unsupported when compared with controls. Our results provide further evidence for the suggestion that during manipulation of objects with known physical properties the anticipatory temporal regulation of the grip force profile is centrally processed and less under sensory feedback control. In contrast, sensory afferent information from the grasping fingers plays a dominant role in the efficient scaling of the grip force level according to actual loading requirements.
MIMO equalization with adaptive step size for few-mode fiber transmission systems.
van Uden, Roy G H; Okonkwo, Chigo M; Sleiffer, Vincent A J M; de Waardt, Hugo; Koonen, Antonius M J
2014-01-13
Optical multiple-input multiple-output (MIMO) transmission systems generally employ minimum mean squared error (MMSE) time or frequency domain equalizers. Using an experimental 3-mode dual polarization coherent transmission setup, we show that the convergence time of the MMSE time domain equalizer (TDE) and frequency domain equalizer (FDE) can be reduced by approximately 50% and 30%, respectively. The criterion used to estimate the system convergence time is the time it takes for the MIMO equalizer to reach an average output error which is within a margin of 5% of the average output error after 50,000 symbols. The convergence reduction difference between the TDE and FDE is attributed to the limited maximum step size for stable convergence of the frequency domain equalizer. The adaptive step size requires a small overhead in the form of a lookup table. It is highlighted that the convergence time reduction is achieved without sacrificing optical signal-to-noise ratio performance.
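The trade-off controlled by the step size can be illustrated with a toy single-channel adaptive equalizer: a large step size converges quickly but leaves a higher residual error, so the step is reduced once the filter has roughly converged. The channel, noise level, and reduction schedule below are assumptions for illustration; the paper's system is a coherent MIMO MMSE equalizer, not this scalar LMS sketch.

```python
# Toy illustration of the adaptive-step-size idea with a single-channel complex
# normalized-LMS equalizer (not the paper's 3-mode MIMO TDE/FDE).
import numpy as np

rng = np.random.default_rng(0)
n_sym = 20000
symbols = (rng.integers(0, 2, n_sym) * 2 - 1) + 1j * (rng.integers(0, 2, n_sym) * 2 - 1)

channel = np.array([0.9, 0.35 - 0.2j, 0.1j])            # assumed dispersive channel
rx = np.convolve(symbols, channel)[:n_sym]
rx = rx + 0.02 * (rng.standard_normal(n_sym) + 1j * rng.standard_normal(n_sym))

n_taps, delay = 7, 3
w = np.zeros(n_taps, dtype=complex)
mu = 0.05                                               # initial step size
errs = []
for k in range(n_taps - 1, n_sym):
    x = rx[k - n_taps + 1:k + 1][::-1]                  # regressor, most recent first
    y = np.vdot(w, x)                                   # equalizer output, w^H x
    e = symbols[k - delay] - y                          # data-aided (training) error
    w = w + mu * np.conj(e) * x / (np.vdot(x, x).real + 1e-9)   # NLMS update
    errs.append(abs(e) ** 2)
    if k == 2000:
        mu *= 0.25                                      # shrink step after rough convergence

print("early MSE:", round(float(np.mean(errs[500:1500])), 4),
      "late MSE:", round(float(np.mean(errs[-2000:])), 4))
```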
NASA Astrophysics Data System (ADS)
Carricart-Ganivet, J. P.; Vásquez-Bedoya, L. F.; Cabanillas-Terán, N.; Blanchon, P.
2013-09-01
Density banding in skeletons of reef-building corals is a valuable source of proxy environmental data. However, skeletal growth strategy has a significant impact on the apparent timing of density-band formation. Some corals employ a strategy where the tissue occupies previously formed skeleton as the new band forms, which leads to differences between the actual and apparent band timing. To investigate this effect, we collected cores from female and male colonies of Siderastrea siderea and report tissue thicknesses and density-related growth parameters over a 17-yr interval. Correlating these results with monthly sea surface temperature (SST) shows that maximum skeletal density in the female coincides with low winter SSTs, whereas in the male, it coincides with high summer SSTs. Furthermore, maximum skeletal densities in the female coincide with peak Sr/Ca values, whereas in the male, they coincide with low Sr/Ca values. Both results indicate a 6-month difference in the apparent timing of density-band formation between genders. Examination of skeletal extension rates also shows that the male has thicker tissue and extends faster, whereas the female has thinner tissue and a denser skeleton—but both calcify at the same rate. The correlation between extension and calcification, combined with the fact that density banding arises from thickening of the skeleton throughout the depth reached by the tissue layer, implies that S. siderea has the same growth strategy as massive Porites, investing its calcification resources into linear extension. In addition, differences in tissue thicknesses suggest that females offset the greater energy requirements of gamete production by generating less tissue, resulting in differences in the apparent timing of density-band formation. Such gender-related offsets may be common in other corals and require that environmental reconstructions be made from sexed colonies and that, in fossil corals where sex cannot be determined, reconstructions must be duplicated in different colonies.
10 CFR 71.55 - General requirements for fissile material packages.
Code of Federal Regulations, 2011 CFR
2011-01-01
... system so that, under the following conditions, maximum reactivity of the fissile material would be... to cause maximum reactivity consistent with the chemical and physical form of the material; and (4...
10 CFR 71.55 - General requirements for fissile material packages.
Code of Federal Regulations, 2013 CFR
2013-01-01
... system so that, under the following conditions, maximum reactivity of the fissile material would be... to cause maximum reactivity consistent with the chemical and physical form of the material; and (4...
10 CFR 71.55 - General requirements for fissile material packages.
Code of Federal Regulations, 2014 CFR
2014-01-01
... system so that, under the following conditions, maximum reactivity of the fissile material would be... to cause maximum reactivity consistent with the chemical and physical form of the material; and (4...
Combinatorial pulse position modulation for power-efficient free-space laser communications
NASA Technical Reports Server (NTRS)
Budinger, James M.; Vanderaar, M.; Wagner, P.; Bibyk, Steven
1993-01-01
A new modulation technique called combinatorial pulse position modulation (CPPM) is presented as a power-efficient alternative to quaternary pulse position modulation (QPPM) for direct-detection, free-space laser communications. The special case of 16C4PPM is compared to QPPM in terms of data throughput and bit error rate (BER) performance for similar laser power and pulse duty cycle requirements. The increased throughput from CPPM enables the use of forward error correction (FEC) encoding for a net decrease in the amount of laser power required for a given data throughput compared to uncoded QPPM. A specific, practical case of coded CPPM is shown to reduce the amount of power required to transmit and receive a given data sequence by at least 4.7 dB. Hardware techniques for maximum likelihood detection and symbol timing recovery are presented.
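A back-of-envelope comparison of the two formats' throughput, under the same 1-in-4 pulse duty cycle, can be written down directly from the combinatorics; the figures below are my own illustration of the proportions, not numbers quoted from the paper.

```python
# Throughput comparison sketch: bits per slot for quaternary PPM versus
# 16-slot / 4-pulse combinatorial PPM (my illustration, not the paper's figures).
from math import comb, log2

# QPPM: one pulse in 4 slots -> 2 bits per 4-slot symbol.
qppm_bits_per_slot = log2(4) / 4

# 16C4PPM: four pulses in 16 slots -> C(16,4) = 1820 patterns (~10.8 bits);
# truncating the alphabet to 1024 codewords gives 10 usable bits per 16 slots
# while keeping the same 1-in-4 pulse duty cycle as QPPM.
cppm_patterns = comb(16, 4)
cppm_bits_per_slot = 10 / 16

print(f"C(16,4) = {cppm_patterns}, raw {log2(cppm_patterns):.2f} bits/symbol")
print(f"QPPM    : {qppm_bits_per_slot:.3f} bits/slot")
print(f"16C4PPM : {cppm_bits_per_slot:.3f} bits/slot "
      f"(~{cppm_bits_per_slot / qppm_bits_per_slot:.2f}x QPPM)")
```

The extra throughput per slot is what leaves room for an FEC code without lowering the net data rate below that of uncoded QPPM.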
Ideal evolution of magnetohydrodynamic turbulence when imposing Taylor-Green symmetries.
Brachet, M E; Bustamante, M D; Krstulovic, G; Mininni, P D; Pouquet, A; Rosenberg, D
2013-01-01
We investigate the ideal and incompressible magnetohydrodynamic (MHD) equations in three space dimensions for the development of potentially singular structures. The methodology consists in implementing the fourfold symmetries of the Taylor-Green vortex generalized to MHD, leading to substantial computer time and memory savings at a given resolution; we also use a regridding method that allows for lower-resolution runs at early times, with no loss of spectral accuracy. One magnetic configuration is examined at an equivalent resolution of 6144³ points and three different configurations on grids of 4096³ points. At the highest resolution, two different current and vorticity sheet systems are found to collide, producing two successive accelerations in the development of small scales. At the latest time, a convergence of magnetic field lines to the location of maximum current is probably leading locally to a strong bending and directional variability of such lines. A novel analytical method, based on sharp analysis inequalities, is used to assess the validity of the finite-time singularity scenario. This method allows one to rule out spurious singularities by evaluating the rate at which the logarithmic decrement of the analyticity-strip method goes to zero. The result is that the finite-time singularity scenario cannot be ruled out, and the singularity time could be somewhere between t=2.33 and t=2.70. More robust conclusions will require higher resolution runs and grid-point interpolation measurements of maximum current and vorticity.
Kinetic approach to the study of froth flotation applied to a lepidolite ore
NASA Astrophysics Data System (ADS)
Vieceli, Nathália; Durão, Fernando O.; Guimarães, Carlos; Nogueira, Carlos A.; Pereira, Manuel F. C.; Margarido, Fernanda
2016-07-01
The number of published studies related to the optimization of lithium extraction from low-grade ores has increased as the demand for lithium has grown. However, no study related to the kinetics of the concentration stage of lithium-containing minerals by froth flotation has yet been reported. To establish a factorial design of batch flotation experiments, we conducted a set of kinetic tests to determine the most selective alternative collector, define a range of pulp pH values, and estimate a near-optimum flotation time. Both collectors (Aeromine 3000C and Armeen 12D) provided the required flotation selectivity, although this selectivity was lost at pulp pH values outside the range of 2 to 4. Cumulative mineral recovery curves were used to adjust a classical kinetic model that was modified with a non-negative parameter representing a delay time. The computation of the near-optimum flotation time as the maximizer of a separation efficiency (SE) function must be performed with caution. We instead propose to define the near-optimum flotation time as the time interval required to achieve 95%-99% of the maximum value of the SE function.
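A kinetic model of the kind described, a classical first-order recovery curve modified with a non-negative delay time, can be fitted to cumulative recovery data along the following lines. The model form, data points, and starting values are illustrative assumptions, not the study's measurements.

```python
# Sketch of fitting a first-order flotation kinetic model with a delay time:
# R(t) = R_inf * (1 - exp(-k * (t - tau))) for t >= tau, else 0.
# Times and recoveries below are made-up illustration data, not the paper's results.
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, r_inf, k, tau):
    return r_inf * (1.0 - np.exp(-k * np.clip(t - tau, 0.0, None)))

t_min = np.array([0.5, 1, 2, 4, 6, 8, 12, 16])                    # flotation time, min
r_obs = np.array([2.0, 8.0, 22.0, 45.0, 58.0, 66.0, 74.0, 77.0])  # cumulative recovery, %

popt, _ = curve_fit(recovery, t_min, r_obs,
                    p0=[80.0, 0.3, 0.3],                          # initial guesses
                    bounds=([0, 0, 0], [100, 5, 5]))
r_inf, k, tau = popt
print(f"R_inf={r_inf:.1f}%  k={k:.2f} 1/min  delay tau={tau:.2f} min")

# A separation-efficiency style criterion could then be maximized over t to pick
# a near-optimum flotation time (e.g., mineral recovery minus gangue recovery).
```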
Desensitization and recovery of phototropic responsiveness in Arabidopsis thaliana
NASA Technical Reports Server (NTRS)
Janoudi, A. K.; Poff, K. L.
1993-01-01
Phototropism is induced by blue light, which also induces desensitization, a partial or total loss of phototropic responsiveness. The fluence and fluence-rate dependence of desensitization and recovery from desensitization have been measured for etiolated and red light (669-nm) preirradiated Arabidopsis thaliana seedlings. The extent of desensitization increased as the fluence of the desensitizing 450-nm light was increased from 0.3 to 60 micromoles m-2 s-1. At equal fluences, blue light caused more desensitization when given at a fluence rate of 1.0 micromole m-2 s-1 than at 0.3 micromole m-2 s-1. In addition, seedlings irradiated with blue light at the higher fluence rate required a longer recovery time than seedlings irradiated at the lower fluence rate. A red light preirradiation, probably mediated via phytochrome, decreased the time required for recovery from desensitization. The minimum time for detectable recovery was about 65 s, and the maximum time observed was about 10 min. It is proposed that the descending arm of the fluence-response relationship for first positive phototropism is a consequence of desensitization, and that the time threshold for second positive phototropism establishes a period during which recovery from desensitization occurs.
An operating system for future aerospace vehicle computer systems
NASA Technical Reports Server (NTRS)
Foudriat, E. C.; Berman, W. J.; Will, R. W.; Bynum, W. L.
1984-01-01
The requirements for future aerospace vehicle computer operating systems are examined in this paper. The computer architecture is assumed to be distributed with a local area network connecting the nodes. Each node is assumed to provide a specific functionality. The network provides for communication so that the overall tasks of the vehicle are accomplished. The O/S structure is based upon the concept of objects. The mechanisms for integrating node-unique objects with node-common objects in order to implement both the autonomy of and the cooperation between nodes are developed. The requirements for time critical performance and reliability and recovery are discussed. Time critical performance impacts all parts of the distributed operating system; e.g., its structure, the functional design of its objects, the language structure, etc. Throughout the paper the tradeoffs - concurrency, language structure, object recovery, binding, file structure, communication protocol, programmer freedom, etc. - are considered to arrive at a feasible, maximum performance design. Reliability of the network system is considered. A parallel multipath bus structure is proposed for the control of delivery time for time critical messages. The architecture also supports immediate recovery for the time critical message system after a communication failure.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 6 2014-07-01 2014-07-01 false Test Concentration Ranges, Number of Measurements Required, and Maximum Discrepancy Specifications C Table C-1 to Subpart C of Part 53 Protection of... Reference Methods Pt. 53, Subpt. C, Table C-1 Table C-1 to Subpart C of Part 53—Test Concentration Ranges...
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 6 2012-07-01 2012-07-01 false Test Concentration Ranges, Number of Measurements Required, and Maximum Discrepancy Specifications C Table C-1 to Subpart C of Part 53 Protection of... Reference Methods Pt. 53, Subpt. C, Table C-1 Table C-1 to Subpart C of Part 53—Test Concentration Ranges...
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 6 2013-07-01 2013-07-01 false Test Concentration Ranges, Number of Measurements Required, and Maximum Discrepancy Specifications C Table C-1 to Subpart C of Part 53 Protection of... Reference Methods Pt. 53, Subpt. C, Table C-1 Table C-1 to Subpart C of Part 53—Test Concentration Ranges...
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 5 2011-07-01 2011-07-01 false Test Concentration Ranges, Number of Measurements Required, and Maximum Discrepancy Specifications C Table C-1 to Subpart C of Part 53 Protection of... Reference Methods Pt. 53, Subpt. C, Table C-1 Table C-1 to Subpart C of Part 53—Test Concentration Ranges...
Engineering Design of ITER Prototype Fast Plant System Controller
NASA Astrophysics Data System (ADS)
Goncalves, B.; Sousa, J.; Carvalho, B.; Rodrigues, A. P.; Correia, M.; Batista, A.; Vega, J.; Ruiz, M.; Lopez, J. M.; Rojo, R. Castro; Wallander, A.; Utzel, N.; Neto, A.; Alves, D.; Valcarcel, D.
2011-08-01
The ITER control, data access and communication (CODAC) design team identified the need for two types of plant systems. A slow control plant system is based on industrial automation technology with maximum sampling rates below 100 Hz, and a fast control plant system is based on embedded technology with higher sampling rates and more stringent real-time requirements than those required for slow controllers. The latter is applicable to diagnostics and plant systems in closed-control loops whose cycle times are below 1 ms. Fast controllers will be dedicated industrial controllers with the ability to supervise other fast and/or slow controllers, interface to actuators and sensors and, if necessary, high performance networks. Two prototypes of a fast plant system controller specialized for data acquisition and constrained by ITER technological choices are being built using two different form factors. This prototyping activity contributes to the Plant Control Design Handbook effort of standardization, specifically regarding fast controller characteristics. With a general-purpose fast controller design in mind, diagnostic use cases with specific requirements were analyzed and will be presented along with the interface with CODAC and sensors. The requirements and constraints that real-time plasma control imposes on the design were also taken into consideration. Functional specifications and technology neutral architecture, together with its implications on the engineering design, were considered. The detailed engineering design compliant with ITER standards was performed and will be discussed in detail. Emphasis will be given to the integration of the controller in the standard CODAC environment. Requirements for the EPICS IOC providing the interface to the outside world, the prototype decisions on form factor, real-time operating system, and high-performance networks will also be discussed, as well as the requirements for data streaming to CODAC for visualization and archiving.
A criterion for maximum resin flow in composite materials curing process
NASA Astrophysics Data System (ADS)
Lee, Woo I.; Um, Moon-Kwang
1993-06-01
On the basis of Springer's resin flow model, a criterion for maximum resin flow in autoclave curing is proposed. Validity of the criterion was proved for two resin systems (Fiberite 976 and Hercules 3501-6 epoxy resin). The parameter required for the criterion can be easily estimated from the measured resin viscosity data. The proposed criterion can be used in establishing the proper cure cycle to ensure maximum resin flow and, thus, the maximum compaction.
Models of compacted fine-grained soils used as mineral liner for solid waste
NASA Astrophysics Data System (ADS)
Sivrikaya, Osman
2008-02-01
To prevent the leakage of pollutant liquids into groundwater and sublayers, compacted fine-grained soils are commonly utilized as mineral liners or a sealing system constructed under municipal solid waste and other containment hazardous materials. This study presents the correlation equations of the compaction parameters required for construction of a mineral liner system. The determination of the characteristic compaction parameters, maximum dry unit weight (γdmax) and optimum water content (wopt), requires considerable time and great effort. In this study, empirical models are described and examined to find which of the index properties correlate well with the compaction characteristics for estimating γdmax and wopt of fine-grained soils at the standard compactive effort. The compaction data are correlated with different combinations of gravel content (G), sand content (S), fine-grained content (FC = clay + silt), plasticity index (Ip), liquid limit (wL) and plastic limit (wP) by performing multilinear regression (MLR) analyses. The obtained correlations with statistical parameters are presented and compared with the previous studies. It is found that the maximum dry unit weight and optimum water content have a considerably good correlation with plastic limit in comparison with liquid limit and plasticity index.
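The MLR step itself reduces to an ordinary least-squares fit of each compaction parameter on the chosen index properties, as in the sketch below. The sample values and resulting coefficients are placeholders for illustration and are not the correlations reported in this study.

```python
# Sketch of the multilinear-regression (MLR) step: regress maximum dry unit
# weight and optimum water content on index properties. Sample values and the
# resulting coefficients are placeholders, not the correlations from this study.
import numpy as np

# columns: gravel G (%), sand S (%), fines FC (%), plasticity index Ip (%), plastic limit wP (%)
X = np.array([
    [2, 35, 63, 18, 20],
    [5, 40, 55, 12, 18],
    [0, 25, 75, 25, 24],
    [8, 45, 47, 10, 16],
    [3, 30, 67, 20, 22],
    [1, 28, 71, 22, 23],
    [4, 38, 58, 15, 19],
])
y_gamma = np.array([17.2, 17.9, 16.4, 18.3, 16.9, 16.6, 17.5])   # gamma_dmax, kN/m^3
y_wopt  = np.array([16.5, 14.8, 19.2, 13.6, 17.5, 18.4, 15.6])   # w_opt, %

A = np.column_stack([np.ones(len(X)), X])                         # add intercept term
coef_gamma, *_ = np.linalg.lstsq(A, y_gamma, rcond=None)
coef_wopt, *_ = np.linalg.lstsq(A, y_wopt, rcond=None)

print("gamma_dmax coefficients:", np.round(coef_gamma, 3))
print("w_opt coefficients:     ", np.round(coef_wopt, 3))
```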
An 8-PSK TDMA uplink modulation and coding system
NASA Technical Reports Server (NTRS)
Ames, S. A.
1992-01-01
The combination of 8-phase shift keying (8PSK) modulation and a spectral efficiency greater than 2 bits/sec/Hz drove the design of the Nyquist filter to one specified to have a rolloff factor of 0.2. This filter, when built and tested, was found to produce too much intersymbol interference and was abandoned for a design with a rolloff factor of 0.4. The preamble is limited to 100 bit periods of the uncoded bit period of 5 ns for a maximum preamble length of 500 ns or 40 8PSK symbol times at 12.5 ns per symbol. For 8PSK modulation, the required maximum degradation of 1 dB in -20 dB cochannel interference (CCI) drove the requirement for forward error correction coding. In this contract, the funding was not sufficient to develop the proposed codec so the codec was limited to a paper design during the preliminary design phase. The mechanization of the demodulator is digital, starting from the output of the analog to digital converters which quantize the outputs of the quadrature phase detectors. This approach is amenable to an application specific integrated circuit (ASIC) replacement in the next phase of development.
Schomberg, Dominic; Wang, Anyi; Marshall, Hope; Miranpuri, Gurwattan; Sillay, Karl
2013-04-01
Convection enhanced delivery (CED) is a technique using infusion convection currents to deliver therapeutic agents into targeted regions of the brain. Recently, CED is gaining significant acceptance for use in gene therapy of Parkinson's disease (PD) employing direct infusion into the brain. CED offers advantages in that it targets local areas of the brain, bypasses the blood-brain barrier (BBB), minimizes systemic toxicity of the therapeutics, and allows for delivery of larger molecules that diffusion driven methods cannot achieve. Investigating infusion characteristics such as backflow and morphology is important in developing standard and effective protocols in order to successfully deliver treatments into the brain. Optimizing clinical infusion protocols may reduce backflow, improve final infusion cloud morphology, and maximize infusate penetrance into targeted tissue. The purpose of the current study was to compare metrics during ramped-rate and continuous-rate infusions using two different catheters in order to optimize current infusion protocols. Occasionally, the infusate refluxes proximally along the catheter, known as backflow, and minimizing this can potentially reduce undesirable effects in the clinical setting. Traditionally, infusions are performed at a constant rate throughout the entire duration, and backflow is minimized only by slow infusion rates, which increases the time required to deliver the desired amount of infusate. In this study, we investigate the effects of ramping and various infusion rates on backflow and infusion cloud morphology. The independent parameters in the study are: ramping, maximum infusion rate, time between rate changes, and increments of rate changes. Backflow was measured using two methods: i) at the point of pressure stabilization within the catheter, and ii) maximum backflow as shown by video data. Infusion cloud morphology was evaluated based on the height-to-width ratio of each infusion cloud at the end of each experiment. Results were tabulated and statistically analyzed to identify any significant differences between protocols. The experimental results show that CED ramped-rate infusion protocols result in smaller backflow distances and more spherical cloud morphologies compared to continuous-rate infusion protocols ending at the same maximum infusion rate. Our results also suggest internal-line pressure measurements can approximate the time-point at which backflow ceases. Our findings indicate that ramping CED infusion protocols can potentially minimize backflow and produce more spherical infusion clouds. However, further research is required to determine the strength of this correlation, especially in relation to maximum infusion rates.
The Impacts of Rising Temperatures on Aircraft Takeoff Performance
NASA Technical Reports Server (NTRS)
Coffel, Ethan; Thompson, Terence R.; Horton, Radley M.
2017-01-01
Steadily rising mean and extreme temperatures as a result of climate change will likely impact the air transportation system over the coming decades. As air temperatures rise at constant pressure, air density declines, resulting in less lift generation by an aircraft wing at a given airspeed and potentially imposing a weight restriction on departing aircraft. This study presents a general model to project future weight restrictions across a fleet of aircraft with different takeoff weights operating at a variety of airports. We construct performance models for five common commercial aircraft and 19 major airports around the world and use projections of daily temperatures from the CMIP5 model suite under the RCP 4.5 and RCP 8.5 emissions scenarios to calculate required hourly weight restriction. We find that on average, 10 - 30% of annual flights departing at the time of daily maximum temperature may require some weight restriction below their maximum takeoff weights, with mean restrictions ranging from 0.5 to 4% of total aircraft payload and fuel capacity by mid- to late century. Both mid-sized and large aircraft are affected, and airports with short runways and high temperatures, or those at high elevations, will see the largest impacts. Our results suggest that weight restriction may impose a non-trivial cost on airlines and impact aviation operations around the world and that adaptation may be required in aircraft design, airline schedules, and/or runway lengths.
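The physical mechanism invoked here follows directly from the ideal gas law: at constant pressure, air density falls as temperature rises, and lift at a fixed airspeed scales with density. The short sketch below illustrates the proportion with assumed reference and extreme-heat temperatures; it is not the paper's performance model.

```python
# Simple physical illustration (my own numbers): at constant pressure, air density
# falls as temperature rises, and lift at a fixed airspeed scales with density, so
# the lift-limited takeoff weight falls roughly in proportion.
p = 101325.0          # sea-level pressure, Pa
R = 287.05            # specific gas constant for dry air, J/(kg K)

def density(temp_c):
    return p / (R * (temp_c + 273.15))

base_t, hot_t = 30.0, 45.0                       # reference vs extreme-heat day, deg C
rho_ratio = density(hot_t) / density(base_t)
print(f"density ratio {rho_ratio:.3f} "
      f"-> ~{(1 - rho_ratio) * 100:.1f}% less lift at the same airspeed, "
      "which must be recovered by higher speed, a longer runway, or reduced weight")
```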
The Impact of Rising Temperatures on Aircraft Takeoff Performance
NASA Astrophysics Data System (ADS)
Coffel, E.; Horton, R. M.; Thompson, T. R.
2017-12-01
Steadily rising mean and extreme temperatures as a result of climate change will likely impact the air transportation system over the coming decades. As air temperatures rise at constant pressure, air density declines, resulting in less lift generation by an aircraft wing at a given airspeed and potentially imposing a weight restriction on departing aircraft. This study presents a general model to project future weight restrictions across a fleet of aircraft with different takeoff weights operating at a variety of airports. We construct performance models for five common commercial aircraft and 19 major airports around the world and use projections of daily temperatures from the CMIP5 model suite under the RCP 4.5 and RCP 8.5 emissions scenarios to calculate required hourly weight restriction. We find that on average, 10-30% of annual flights departing at the time of daily maximum temperature may require some weight restriction below their maximum takeoff weights, with mean restrictions ranging from 0.5 to 4% of total aircraft payload and fuel capacity by mid- to late century. Both mid-sized and large aircraft are affected, and airports with short runways and high temperatures, or those at high elevations, will see the largest impacts. Our results suggest that weight restriction may impose a non-trivial cost on airlines and impact aviation operations around the world and that adaptation may be required in aircraft design, airline schedules, and/or runway lengths.
SU-E-T-197: Helical Cranial-Spinal Treatments with a Linear Accelerator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, J; Bernard, D; Liao, Y
2014-06-01
Purpose: Craniospinal irradiation (CSI) of systemic disease requires a high level of beam intensity modulation to reduce dose to bone marrow and other critical structures. Current helical delivery machines can take 30 minutes or more of beam-on time to complete these treatments. This pilot study aims to test the feasibility of performing helical treatments with a conventional linear accelerator using longitudinal couch travel during multiple gantry revolutions. Methods: The VMAT optimization package of the Eclipse 10.0 treatment planning system was used to optimize pseudo-helical CSI plans of 5 clinical patient scans. Each gantry revolution was divided into three 120° arcs with each isocenter shifted longitudinally. Treatments requiring more than the maximum 10 arcs used multiple plans with each plan after the first being optimized including the dose of the others (Figure 1). The beam pitch was varied between 0.2 and 0.9 (couch speed 5-20 cm/revolution and field width of 22 cm) and dose-volume histograms of critical organs were compared to tomotherapy plans. Results: Viable pseudo-helical plans were achieved using Eclipse. Decreasing the pitch from 0.9 to 0.2 lowered the maximum lens dose by 40%, the mean bone marrow dose by 2.1% and the maximum esophagus dose by 17.5% (Figure 2). Linac-based helical plans showed dose results comparable to tomotherapy delivery for both target coverage and critical organ sparing, with the D50 of bone marrow and esophagus respectively 12% and 31% lower in the helical linear accelerator plan (Figure 3). Total mean beam-on time for the linear accelerator plan was 8.3 minutes, 54% faster than the tomotherapy average for the same plans. Conclusions: This pilot study has demonstrated the feasibility of planning pseudo-helical treatments for CSI targets using a conventional linac and dynamic couch movement, and supports the ongoing development of true helical optimization and delivery.
Chantre, Guillermo R; Batlla, Diego; Sabbatini, Mario R; Orioli, Gustavo
2009-06-01
Models based on thermal-time approaches have been a useful tool for characterizing and predicting seed germination and dormancy release in relation to time and temperature. The aims of the present work were to evaluate the relative accuracy of different thermal-time approaches for the description of germination in Lithospermum arvense and to develop an after-ripening thermal-time model for predicting seed dormancy release. Seeds were dry-stored at constant temperatures of 5, 15 or 24 degrees C for up to 210 d. After different storage periods, batches of 50 seeds were incubated at eight constant temperature regimes of 5, 8, 10, 13, 15, 17, 20 or 25 degrees C. Experimentally obtained cumulative-germination curves were analysed using a non-linear regression procedure to obtain optimal population thermal parameters for L. arvense. Changes in these parameters were described as a function of after-ripening thermal-time and storage temperature. The most accurate approach for simulating the thermal-germination response of L. arvense was achieved by assuming a normal distribution of both base and maximum germination temperatures. The results contradict the widely accepted assumption of a single Tb value for the entire seed population. The after-ripening process was characterized by a progressive increase in the mean maximum germination temperature and a reduction in the thermal-time requirements for germination at sub-optimal temperatures. The after-ripening thermal-time model developed here gave an acceptable description of the observed field emergence patterns, thus indicating its usefulness as a predictive tool to enhance weed management tactics.
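The sub-optimal-temperature part of such a model can be sketched as a thermal-time threshold with a normally distributed base temperature: a seed germinates once (T - Tb) multiplied by elapsed time exceeds a fixed thermal-time requirement. The parameter values below are placeholders for illustration, not the fitted values for L. arvense.

```python
# Sketch of a sub-optimal-temperature thermal-time germination model in which the
# base temperature Tb is normally distributed across the seed population:
# a seed germinates once (T - Tb) * t exceeds a fixed thermal-time requirement.
# Parameter values are placeholders, not the fitted values for L. arvense.
import numpy as np
from scipy.stats import norm

theta_T = 80.0              # thermal time to germination, deg C * days (assumed)
tb_mean, tb_sd = 1.5, 1.2   # base-temperature distribution, deg C (assumed)

def germinated_fraction(T, t_days):
    """Cumulative germination fraction at constant incubation temperature T."""
    # Seeds with Tb <= T - theta_T / t have accumulated enough thermal time.
    return norm.cdf((T - theta_T / t_days - tb_mean) / tb_sd)

for T in (5, 10, 15, 20):
    frac = [round(float(germinated_fraction(T, t)), 2) for t in (5, 10, 20, 40)]
    print(f"T={T:2d} C  germination at 5/10/20/40 d: {frac}")
```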
Chen, Ming-Kai; Menard, David H; Cheng, David W
2016-03-01
In pursuit of as-low-as-reasonably-achievable (ALARA) doses, this study investigated the minimal required radioactivity and corresponding imaging time for reliable semiquantification in PET/CT imaging. Using a phantom containing spheres of various diameters (3.4, 2.1, 1.5, 1.2, and 1.0 cm) filled with a fixed 18F-FDG concentration of 165 kBq/mL and a background concentration of 23.3 kBq/mL, we performed PET/CT at multiple time points over 20 h of radioactive decay. The images were acquired for 10 min at a single bed position for each of 10 half-lives of decay using 3-dimensional list mode and were reconstructed into 1-, 2-, 3-, 4-, 5-, and 10-min acquisitions per bed position using an ordered-subsets expectation maximization algorithm with 24 subsets and 2 iterations and a 2-mm Gaussian filter. SUVmax and SUVavg were measured for each sphere. The minimal required activity (±10%) for precise SUVmax semiquantification in the spheres was 1.8 kBq/mL for an acquisition of 10 min, 3.7 kBq/mL for 3-5 min, 7.9 kBq/mL for 2 min, and 17.4 kBq/mL for 1 min. The minimal required activity concentration-acquisition time product per bed position was 10-15 kBq/mL⋅min for reproducible SUV measurements within the spheres without overestimation. Using the total radioactivity and counting rate from the entire phantom, we found that the minimal required total activity-time product was 17 MBq⋅min and the minimal required counting rate-time product was 100 kcps⋅min. Our phantom study determined a threshold for minimal radioactivity and acquisition time for precise semiquantification in 18F-FDG PET imaging that can serve as a guide in pursuit of achieving ALARA doses. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
Al-Shargabi, Mohammed A; Shaikh, Asadullah; Ismail, Abdulsamad S
2016-01-01
Optical burst switching (OBS) networks have attracted much attention as a promising approach to building the next-generation optical Internet. Current OBS QoS schemes lack a solution that enhances quality of service (QoS) for high-priority real-time traffic while maintaining fairness among traffic types. In this paper we present a novel Real Time Quality of Service with Fairness Ratio (RT-QoSFR) scheme that adapts the burst assembly parameters according to traffic QoS needs in order to meet real-time traffic QoS requirements and to ensure fairness for other traffic. The results show that the RT-QoSFR scheme is able to fulfill the real-time traffic requirements (end-to-end delay and loss rate) while ensuring fairness for other traffic under various conditions, such as the type of real-time traffic and the traffic load. RT-QoSFR can guarantee that the delay of real-time traffic packets does not exceed the maximum packet transfer delay value. Furthermore, it can reduce real-time traffic packet loss while guaranteeing fairness for non-real-time traffic packets by setting the ratio of real-time traffic inside the burst to 50-60%, 30-40%, and 10-20% for high, normal, and low traffic loads, respectively.
NASA Technical Reports Server (NTRS)
Ohri, A. K.; Wilson, T. G.; Owen, H. A., Jr.
1977-01-01
A procedure is presented for designing air-gapped energy-storage reactors for nine different dc-to-dc converters resulting from combinations of three single-winding power stages for voltage stepup, current stepup and voltage stepup/current stepup and three controllers with control laws that impose constant-frequency, constant transistor on-time and constant transistor off-time operation. The analysis, based on the energy-transfer requirement of the reactor, leads to a simple relationship for the required minimum volume of the air gap. Determination of this minimum air gap volume then permits the selection of either an air gap or a cross-sectional core area. Having picked one parameter, the minimum value of the other immediately leads to selection of the physical magnetic structure. Other analytically derived equations are used to obtain values for the required turns, the inductance, and the maximum rms winding current. The design procedure is applicable to a wide range of magnetic material characteristics and physical configurations for the air-gapped magnetic structure.
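The energy-transfer argument behind the minimum gap volume can be sketched in a few lines: with a gapped core, essentially all of the peak stored energy resides in the gap, whose energy density is limited by the maximum allowable flux density. The converter values below (inductance, peak current, flux limit, core area) are assumed for illustration and do not come from the report.

```python
# Sketch of the energy-storage sizing idea: in a gapped core nearly all of the
# stored energy W = 0.5 * L * I_peak^2 resides in the gap, whose energy density is
# B^2 / (2*mu0), so the gap volume must satisfy V_gap >= 2 * mu0 * W / B_max^2.
# All numerical values below are illustrative assumptions.
from math import pi

mu0 = 4 * pi * 1e-7        # permeability of free space, H/m
L = 150e-6                 # required inductance, H (assumed)
i_peak = 8.0               # peak reactor current, A (assumed)
b_max = 0.3                # maximum allowed flux density, T (assumed, ferrite-like)

energy = 0.5 * L * i_peak ** 2                    # peak stored energy, J
v_gap_min = 2 * mu0 * energy / b_max ** 2         # minimum air-gap volume, m^3
print(f"stored energy {energy * 1e3:.2f} mJ -> minimum gap volume {v_gap_min * 1e9:.0f} mm^3")

# Choosing a core cross-section then fixes the gap length, and the turns follow
# from the flux-density limit at peak current.
A_c = 1.2e-4                                      # assumed core cross-section, m^2
l_g = v_gap_min / A_c                             # corresponding gap length, m
N = b_max * l_g / (mu0 * i_peak)                  # turns to reach B_max at I_peak
print(f"gap length {l_g * 1e3:.2f} mm, about {N:.0f} turns")
```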
Structural Analysis for the American Airlines Flight 587 Accident Investigation: Global Analysis
NASA Technical Reports Server (NTRS)
Young, Richard D.; Lovejoy, Andrew E.; Hilburger, Mark W.; Moore, David F.
2005-01-01
NASA Langley Research Center (LaRC) supported the National Transportation Safety Board (NTSB) in the American Airlines Flight 587 accident investigation due to LaRC's expertise in high-fidelity structural analysis and testing of composite structures and materials. A Global Analysis Team from LaRC reviewed the manufacturer's design and certification procedures, developed finite element models and conducted structural analyses, and participated jointly with the NTSB and Airbus in subcomponent tests conducted at Airbus in Hamburg, Germany. The Global Analysis Team identified no significant or obvious deficiencies in the Airbus certification and design methods. Analysis results from the LaRC team indicated that the most-likely failure scenario was failure initiation at the right rear main attachment fitting (lug), followed by an unstable progression of failure of all fin-to-fuselage attachments and separation of the vertical tail plane (VTP) from the aircraft. Additionally, analysis results indicated that failure initiates at the final observed maximum fin loading condition in the accident, when the VTP was subjected to loads that were at minimum 1.92 times the design limit load condition for certification. For certification, the VTP is only required to support loads of 1.5 times design limit load without catastrophic failure. The maximum loading during the accident was shown to significantly exceed the certification requirement. Thus, the structure appeared to perform in a manner consistent with its design and certification, and failure is attributed to VTP loads greater than expected.
Healy, G M; Teleki, S; von Seefried, A; Walton, M J; Macmorine, H G
1971-01-01
An improved tissue culture basal medium, CMRL-1969, supplemented with serum, has been evaluated by measuring the growth responses of primary cultures of trypsin-dispersed monkey kidney cells (PMKC) and of an established culture of a human diploid cell strain (HDCS). Medium H597, an early modification of medium 199 which has been used successfully in the preparation of poliomyelitis vaccine for 15 years, was used for comparison. In addition, parallel testing was done with Basal Medium Eagle (BME) widely used for the growth of HDCS. The improvements in basal medium CMRL-1969 are attributed to changes in amino acid concentrations, in vitamin composition, and, in particular, to enhanced buffering capacity. The latter has been achieved by the use of free-base amino acids and by increasing the dibasic sodium phosphate. The new medium has already been used to advantage for the production of polioviruses in PMKC where equivalent titers were obtained from cultures initiated with 70% of the number of cells required with earlier media. The population-doubling time was reduced in this system. Also, with small inocula of HDCS, the time required to obtain maximum cell yield was shorter with CMRL-1969 than with BME. Both media were supplemented with 10% calf serum. Maximum cell yields after repeated subcultivation in the new basal medium were greatly increased and the stability of the strain, as shown by chromosomal analysis, was not affected. Basal medium CMRL-1969 can be prepared easily in liquid or powdered form.
Craig, Darren G; Kitto, Laura; Zafar, Sara; Reid, Thomas W D J; Martin, Kirsty G; Davidson, Janice S; Hayes, Peter C; Simpson, Kenneth J
2014-09-01
The innate immune system is profoundly dysregulated in paracetamol (acetaminophen)-induced liver injury. The neutrophil-lymphocyte ratio (NLR) is a simple bedside index with prognostic value in a number of inflammatory conditions. To evaluate the prognostic accuracy of the NLR in patients with significant liver injury following single time-point and staggered paracetamol overdoses. Time-course analysis of 100 single time-point and 50 staggered paracetamol overdoses admitted to a tertiary liver centre. Timed laboratory samples were correlated with time elapsed after overdose or admission, respectively, and the NLR was calculated. A total of 49/100 single time-point patients developed hepatic encephalopathy (HE). Median NLRs were higher at both 72 (P=0.0047) and 96 h after overdose (P=0.0041) in single time-point patients who died or were transplanted. Maximum NLR values by 96 h were associated with increasing HE grade (P=0.0005). An NLR of more than 16.7 during the first 96 h following overdose was independently associated with the development of HE [odds ratio 5.65 (95% confidence interval 1.67-19.13), P=0.005]. Maximum NLR values by 96 h were strongly associated with the requirement for intracranial pressure monitoring (P<0.0001), renal replacement therapy (P=0.0002) and inotropic support (P=0.0005). In contrast, in the staggered overdose cohort, the NLR was not associated with adverse outcomes or death/transplantation either at admission or subsequently. The NLR is a simple test which is strongly associated with adverse outcomes following single time-point, but not staggered, paracetamol overdoses. Future studies should assess the value of incorporating the NLR into existing prognostic and triage indices of single time-point paracetamol overdose.
Maximum Achievable Control Technology Standards in Region 7
Maximum Achievable Control Technology Standards (MACTs) are applicable requirements under the Title V operating permit program. This is a resource for permit writers and reviewers to learn about the rules and explore other helpful tools.
NASA Astrophysics Data System (ADS)
Sharma, Pankaj; Jain, Ajai
2014-12-01
Stochastic dynamic job shop scheduling problems with sequence-dependent setup times are among the most difficult classes of scheduling problems. This paper assesses the performance of nine dispatching rules in such a shop from the viewpoint of the makespan, mean flow time, maximum flow time, mean tardiness, maximum tardiness, number of tardy jobs, total setups and mean setup time performance measures. A discrete event simulation model of a stochastic dynamic job shop manufacturing system is developed for investigation purposes. Nine dispatching rules identified from the literature are incorporated in the simulation model. The simulation experiments are conducted under a due date tightness factor of 3, a shop utilization percentage of 90% and setup times less than processing times. Results indicate that the shortest setup time (SIMSET) rule provides the best performance for the mean flow time and number of tardy jobs measures. The job with similar setup and modified earliest due date (JMEDD) rule provides the best performance for the makespan, maximum flow time, mean tardiness, maximum tardiness, total setups and mean setup time measures.
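The SIMSET rule referred to above reduces to selecting, from the queue, the job whose sequence-dependent setup from the job type currently on the machine is shortest; a minimal sketch with an assumed setup-time matrix is shown below.

```python
# Minimal sketch of the SIMSET (shortest setup time) dispatching rule with
# sequence-dependent setups: pick the queued job whose setup from the job type
# currently on the machine is smallest. Setup matrix and queue are illustrative.
setup_time = {                      # setup_time[from_type][to_type], minutes
    "A": {"A": 0, "B": 12, "C": 20},
    "B": {"A": 15, "B": 0, "C": 8},
    "C": {"A": 18, "B": 10, "C": 0},
}

def simset_next(queue, current_type):
    """Return the queued job with the shortest setup from the current job type."""
    return min(queue, key=lambda job: setup_time[current_type][job["type"]])

queue = [
    {"id": 1, "type": "A", "due": 480},
    {"id": 2, "type": "C", "due": 300},
    {"id": 3, "type": "B", "due": 420},
]
print(simset_next(queue, current_type="B"))   # -> job 2 (setup B->C = 8 min)
```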
Vibration control by limiting the maximum axial forces in space trusses
NASA Technical Reports Server (NTRS)
Chawla, Vikas; Utku, Senol; Wada, Ben K.
1993-01-01
Proposed here is a method of vibration control based on limiting the maximum axial forces in the active members of an adaptive truss. The actuators simulate elastic rigid-plastic behavior and consume the vibrational energy as work. The method is applicable to both statically determinate and statically indeterminate truss structures. However, for energy-efficient control of statically indeterminate trusses, extra actuators may be provided on the redundant bars. An energy formulation relating the various control parameters is derived to obtain an estimate of the control time. Since the simulation of elastic rigid-plastic behavior requires a piecewise linear control law, a general analytical solution is not possible. Numerical simulation by step-by-step integration is performed to simulate the control of an example truss structure. The problems of application to statically indeterminate trusses and optimal actuator placement are identified for future work.
Losses in chopper-controlled DC series motors
NASA Technical Reports Server (NTRS)
Hamilton, H. B.
1982-01-01
Motors for electric vehicle (EV) applications must have different features than dc motors designed for industrial applications. The EV motor application is characterized by the following requirements: (1) the highest possible efficiency from light load to overload, for maximum EV range; (2) large short-time overload capability (the ratio of peak to average power varies from 5/1 in heavy city traffic to 3/1 in suburban driving); and (3) operation from power supply voltage levels of 84 to 144 volts (probably 120 volts maximum). A test facility utilizing a dc generator as a substitute for a battery pack was designed and utilized. Criteria for the design of such a facility are presented. Two motors, differing in design detail and commercially available for EV use, were tested. The measured losses are discussed, as are waveforms and their harmonic content, measurements of resistance and inductance, EV motor/chopper application criteria, and motor design considerations.
Maximum likelihood estimates, from censored data, for mixed-Weibull distributions
NASA Astrophysics Data System (ADS)
Jiang, Siyuan; Kececioglu, Dimitri
1992-06-01
A new algorithm for estimating the parameters of mixed-Weibull distributions from censored data is presented. The algorithm follows the principle of maximum likelihood estimate (MLE) through the expectation and maximization (EM) algorithm, and it is derived for both postmortem and nonpostmortem time-to-failure data. It is concluded that the concept of the EM algorithm is easy to understand and apply (only elementary statistics and calculus are required). The log-likelihood function cannot decrease after an EM sequence; this important feature was observed in all of the numerical calculations. The MLEs of the nonpostmortem data were obtained successfully for mixed-Weibull distributions with up to 14 parameters in a 5-subpopulation, mixed-Weibull distribution. Numerical examples indicate that some of the log-likelihood functions of the mixed-Weibull distributions have multiple local maxima; therefore, the algorithm should start at several initial guesses of the parameter set.
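The abstract above describes an EM-based MLE for mixed-Weibull parameters. The sketch below illustrates the EM idea for the simplest case only: a two-component mixture fitted to uncensored data, with the weighted Weibull fits in the M-step done by direct minimization of the weighted negative log-likelihood. The censored-data likelihood and the 5-subpopulation case treated in the paper are not reproduced; all implementation choices here are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def weighted_weibull_nll(params, x, w):
    """Weighted negative log-likelihood of a two-parameter Weibull."""
    shape, scale = params
    if shape <= 0 or scale <= 0:
        return np.inf
    return -np.sum(w * weibull_min.logpdf(x, shape, scale=scale))

def em_mixed_weibull(x, n_iter=50):
    """EM for a 2-component Weibull mixture on uncensored data (illustrative only)."""
    x = np.sort(np.asarray(x, dtype=float))
    # crude initialization: split the sorted sample in half
    comps = [(1.5, np.median(x[: len(x) // 2])), (1.5, np.median(x[len(x) // 2 :]))]
    mix = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each observation
        dens = np.column_stack([mix[k] * weibull_min.pdf(x, c, scale=s)
                                for k, (c, s) in enumerate(comps)])
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update mixing proportions and weighted Weibull parameters
        mix = resp.mean(axis=0)
        comps = [minimize(weighted_weibull_nll, comps[k], args=(x, resp[:, k]),
                          method="Nelder-Mead").x for k in range(2)]
    return mix, comps

rng = np.random.default_rng(0)
sample = np.concatenate([weibull_min.rvs(1.2, scale=100, size=300, random_state=rng),
                         weibull_min.rvs(3.0, scale=500, size=300, random_state=rng)])
print(em_mixed_weibull(sample))
```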
Reproduction of the cold-water coral Primnoella chilensis (Philippi, 1894)
NASA Astrophysics Data System (ADS)
Rossin, Ashley M.; Waller, Rhian G.; Försterra, Gunter
2017-07-01
This study examined the reproduction of a cold-water coral, Primnoella chilensis (Philippi, 1894), from the Comau and Reñihué fjords in Chilean Patagonia. Samples were collected in September and November of 2012 and April, June, and September of 2013 from three sites within the two fjords. The sexuality, reproductive mode, spermatocyst stage, oocyte size, and fecundity were determined using histological techniques. This species is gonochoristic, with one aberrant hermaphrodite identified in this study. Reproduction was found to be seasonal, with oogenesis initiating in September, and a broadcast spawning event between June and September is suggested. The maximum oocyte size was 752.96 μm, suggesting lecithotrophic larvae. The maximum fecundity was 36 oocytes per polyp. Male individuals were only found in April and June. In June, all four spermatocyst stages were present. This suggests that spermatogenesis requires less time than oogenesis in P. chilensis.
Refractory metal alloys and composites for space power systems
NASA Technical Reports Server (NTRS)
Stephens, Joseph R.; Petrasek, Donald W.; Titran, Robert H.
1988-01-01
Space power requirements for future NASA and other U.S. missions will range from a few kilowatts to megawatts of electricity. Maximum efficiency is a key goal of any power system in order to minimize weight and size so that the space shuttle may be used a minimum number of times to put the power supply into orbit. Nuclear power has been identified as the primary source to meet these high levels of electrical demand. One way to achieve maximum efficiency is to operate the power supply, energy conversion system, and related components at relatively high temperatures. NASA Lewis Research Center has undertaken a research program on advanced technology of refractory metal alloys and composites that will provide baseline information for space power systems in the 1990s and the 21st century. Basic research on the tensile and creep properties of fibers, matrices, and composites is discussed.
NASA Technical Reports Server (NTRS)
Boland, J. S., III
1973-01-01
The derivation of an approximate error characteristic equation describing the transient system error response is given, along with a procedure for selecting adaptive gain parameters so as to relate to the transient error response. A detailed example of the application and implementation of these methods for a space shuttle type vehicle is included. An extension of the characteristic equation technique is used to provide an estimate of the magnitude of the maximum system error and an estimate of the time of occurrence of this maximum after a plant parameter disturbance. Techniques for relaxing certain stability requirements and the conditions under which this can be done and still guarantee asymptotic stability of the system error are discussed. Such conditions are possible because the Lyapunov methods used in the stability derivation allow for overconstraining a problem in the process of insuring stability.
Importance of limiting hohlraum leaks at cryogenic temperatures on NIF targets
Bhandarkar, Suhas; Teslich, Nick; Haid, Ben; ...
2017-08-18
Inertial confinement fusion targets are complex systems designed to allow fine control of temperature and pressure for making precise spherical ice layers of hydrogen isotopes at cryogenic temperatures. We discuss the various technical considerations for a maximum leak rate based on heat load considerations. This maximum flow rate turns out to be 5 × 10^-6 standard cc per second, which can be caused by an orifice less than half a micron in diameter. This makes the identification of the location and resolution of the leak a significant challenge. To illustrate this, we showcase one example of a peculiar failure mode that appeared suddenly but persisted whereby target production yield was severely lowered. Identification of the leak source and the root cause requires very careful analysis of multiple thermomechanical aspects to ensure that the end solution is indeed the right remedy and is robust.
McCarthy, Peter M.
2006-01-01
The Yellowstone River is very important in a variety of ways to the residents of southeastern Montana; however, it is especially vulnerable to spilled contaminants. In 2004, the U.S. Geological Survey, in cooperation with Montana Department of Environmental Quality, initiated a study to develop a computer program to rapidly estimate instream travel times and concentrations of a potential contaminant in the Yellowstone River using regression equations developed in 1999 by the U.S. Geological Survey. The purpose of this report is to describe these equations and their limitations, describe the development of a computer program to apply the equations to the Yellowstone River, and provide detailed instructions on how to use the program. This program is available online at [http://pubs.water.usgs.gov/sir2006-5057/includes/ytot.xls]. The regression equations provide estimates of instream travel times and concentrations in rivers where little or no contaminant-transport data are available. Equations were developed and presented for the most probable flow velocity and the maximum probable flow velocity. These velocity estimates can then be used to calculate instream travel times and concentrations of a potential contaminant. The computer program was developed so estimation equations for instream travel times and concentrations can be solved quickly for sites along the Yellowstone River between Corwin Springs and Sidney, Montana. The basic types of data needed to run the program are spill data, streamflow data, and data for locations of interest along the Yellowstone River. Data output from the program includes spill location, river mileage at specified locations, instantaneous discharge, mean-annual discharge, drainage area, and channel slope. Travel times and concentrations are provided for estimates of the most probable velocity of the peak concentration and the maximum probable velocity of the peak concentration. Verification of estimates of instream travel times and concentrations for the Yellowstone River requires information about the flow velocity throughout the 520 mi of river in the study area. Dye-tracer studies would provide the best data about flow velocities and would provide the best verification of instream travel times and concentrations estimated from this computer program; however, data from such studies does not currently (2006) exist and new studies would be expensive and time-consuming. An alternative approach used in this study for verification of instream travel times is based on the use of flood-wave velocities determined from recorded streamflow hydrographs at selected mainstem streamflow-gaging stations along the Yellowstone River. The ratios of flood-wave velocity to the most probable velocity for the base flow estimated from the computer program are within the accepted range of 2.5 to 4.0 and indicate that flow velocities estimated from the computer program are reasonable for the Yellowstone River. The ratios of flood-wave velocity to the maximum probable velocity are within a range of 1.9 to 2.8 and indicate that the maximum probable flow velocities estimated from the computer program, which corresponds to the shortest travel times and maximum probable concentrations, are conservative and reasonable for the Yellowstone River.
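The actual regression equations are given in the cited USGS reports and are not reproduced here; the sketch below only shows the final step the spreadsheet program performs, converting an estimated peak-concentration velocity into a travel time between two river miles. The velocities and river mileages are placeholder values, not results from the study.

```python
def travel_time_hours(upstream_mile, downstream_mile, velocity_mph):
    """Travel time of the concentration peak between two river miles,
    given an estimated flow velocity in miles per hour (placeholder value)."""
    distance = downstream_mile - upstream_mile      # river miles
    return distance / velocity_mph

# Hypothetical spill 120 river miles upstream of a point of interest
print(travel_time_hours(0, 120, 3.0))   # ~40 h at a most-probable peak velocity of 3.0 mi/h
print(travel_time_hours(0, 120, 4.5))   # ~26.7 h at a maximum-probable (conservative) 4.5 mi/h
```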
Novel high-frequency energy-efficient pulsed-dc generator for capacitively coupled plasma discharge
NASA Astrophysics Data System (ADS)
Mamun, Md Abdullah Al; Furuta, Hiroshi; Hatta, Akimitsu
2018-03-01
The circuit design, assembly, and operating tests of a high-frequency and high-voltage (HV) pulsed dc generator (PDG) for capacitively coupled plasma (CCP) discharge inside a vacuum chamber are reported. For capacitive loads, it is challenging to obtain sharp rectangular pulses with fast rising and falling edges, requiring intense current for quick charging and discharging. The requirement of intense current generally limits the pulse operation frequency. In this study, we present a new type of PDG consisting of a pair of half-resonant converters and a constant current-controller circuit connected with HV solid-state power switches that can deliver almost rectangular high voltage pulses with fast rising and falling edges for CCP discharge. A prototype of the PDG is assembled to modulate from a high-voltage direct current (HVdc) input into a pulsed HVdc output, while following an input pulse signal and a set current level. The pulse rise time and fall time are less than 500 ns and 800 ns, respectively, and the minimum pulse width is 1 µs. The maximum voltage for a negative pulse is 1000 V, and the maximum repetition frequency is 500 kHz. During the pulse on time, the plasma discharge current is controlled steadily at the set value. The half-resonant converters in the PDG perform recovery of the remaining energy from the capacitive load at every termination of pulse discharge. The PDG performed with a high energy efficiency of 85% from the HVdc input to the pulsed dc output at a repetition rate of 1 kHz and with stable plasma operation in various discharge conditions. The results suggest that the developed PDG can be considered to be more efficient for plasma processing by CCP.
NASA Astrophysics Data System (ADS)
Taoka, Hidekazu; Higuchi, Kenichi; Sawahashi, Mamoru
This paper presents experimental results in real propagation channel environments of real-time 1-Gbps packet transmission using antenna-dependent adaptive modulation and channel coding (AMC) with 4-by-4 MIMO multiplexing in the downlink Orthogonal Frequency Division Multiplexing (OFDM) radio access. In the experiment, Maximum Likelihood Detection employing QR decomposition and the M-algorithm (QRM-MLD) with adaptive selection of the surviving symbol replica candidates (ASESS) is employed to achieve such a high data rate at a lower received signal-to-interference plus background noise power ratio (SINR). The field experiments, which are conducted at the average moving speed of 30km/h, show that real-time packet transmission of greater than 1Gbps in a 100-MHz channel bandwidth (i.e., 10bits/second/Hz) is achieved at the average received SINR of approximately 13.5dB using 16QAM modulation and turbo coding with the coding rate of 8/9. Furthermore, we show that the measured throughput of greater than 1Gbps is achieved at the probability of approximately 98% in a measurement course, where the maximum distance from the cell site was approximately 300m with the respective transmitter and receiver antenna separation of 1.5m and 40cm with the total transmission power of 10W. The results also clarify that the minimum required receiver antenna spacing is approximately 10cm (1.5 carrier wave length) to suppress the loss in the required received SINR at 1-Gbps throughput to within 1dB compared to that assuming the fading correlation between antennas of zero both under non-line-of-sight (NLOS) and line-of-sight (LOS) conditions.
Visually guided male urinary catheterization: a feasibility study.
Willette, Paul A; Banks, Kevin; Shaffer, Lynn
2013-01-01
Ten percent to 15% of urinary catheterizations involve complications. New techniques to reduce risks and pain are indicated. This study examines the feasibility and safety of male urinary catheterization by nursing personnel using a visually guided device in a clinical setting. The device, a 0.6-mm fiber-optic bundle inside a 14F triple-lumen flexible urinary catheter with a lubricious coating, irrigation port, and angled tip, connects to a camera, allowing real-time viewing of progress on a color monitor. Two emergency nurses were trained to use the device. Male patients 18 years or older presenting to the emergency department with an indication for urinary catheterization using a standard Foley or Coudé catheter were eligible to participate in the study. Exclusion criteria were a current suprapubic tube or gross hematuria prior to the procedure. Twenty-five patients were enrolled. Data collected included success of placement, total procedure time, pre-procedure pain and maximum pain during the procedure, gross hematuria, abnormalities or injuries identified if catheterization failed, occurrence of and reason for equipment failures, and number of passes required for placement. All catheters were successfully placed. The median number of passes required was 1. For all but one patient, procedure time was ≤ 17 minutes. A median increase in pain scores of 1 point from baseline to the maximum was reported. Gross hematuria was observed in 2 patients. The success rate for placement of a Foley catheter with the visually guided device was 100%, indicating its safety, accuracy, and feasibility in a clinical setting. Minimal pain was associated with the procedure. Copyright © 2013 Emergency Nurses Association. Published by Mosby, Inc. All rights reserved.
47 CFR 15.407 - General technical requirements.
Code of Federal Regulations, 2014 CFR
2014-10-01
... maximum antenna gain does not exceed 6 dBi. In addition, the maximum power spectral density shall not exceed 17 dBm in any 1 megahertz band. If transmitting antennas of directional gain greater than 6 dBi... reduced by the amount in dB that the directional gain of the antenna exceeds 6 dBi. The maximum e.i.r.p...
Time-of-flight PET time calibration using data consistency
NASA Astrophysics Data System (ADS)
Defrise, Michel; Rezaei, Ahmadreza; Nuyts, Johan
2018-05-01
This paper presents new data-driven methods for the time of flight (TOF) calibration of positron emission tomography (PET) scanners. These methods are derived from the consistency condition for TOF PET; they can be applied to data measured with an arbitrary tracer distribution and are numerically efficient because they do not require a preliminary image reconstruction from the non-TOF data. Two-dimensional simulations are presented for one of the methods, which only involves the first two moments of the data with respect to the TOF variable. The numerical results show that this method estimates the detector timing offsets with errors that are larger than those obtained via an initial non-TOF reconstruction, but remain smaller than the TOF resolution and thereby have a limited impact on the quantitative accuracy of the activity image estimated with standard maximum likelihood reconstruction algorithms.
Capabilities of GRO/OSSE for observing solar flares
NASA Technical Reports Server (NTRS)
Kurfess, J. D.; Johnson, W. N.; Share, G. H.; Hulburt, E. O.; Matz, S. M.; Murphy, R. J.
1989-01-01
The launch of the Gamma Ray Observatory (GRO) near solar maximum makes solar flare studies early in the mission particularly advantageous. The Oriented Scintillation Spectrometer Experiment (OSSE) on GRO, covering the energy range 0.05 to 150 MeV, has some significant advantages over the previous generation of satellite-borne gamma-ray detectors for solar observations. The OSSE detectors will have about 10 times the effective area of the Gamma-Ray Spectrometer (GRS) on Solar Maximum Mission (SMM) for both photons and high-energy neutrons. The OSSE also has the added capability of distinguishing between high-energy neutrons and photons directly. The OSSE spectral accumulation time (approx. 4 s) is four times faster than that of the SMM/GRS; much better time resolution is available in selected energy ranges. These characteristics will allow the investigation of particle acceleration in flares based on the evolution of the continuum and nuclear line components of flare spectra, nuclear emission in small flares, the anisotropy of continuum emission in small flares, and the relative intensities of different nuclear lines. The OSSE observational program will be devoted primarily to non-solar sources. Therefore, solar observations require planning and special configurations. The instrumental and operational characteristics of OSSE are discussed in the context of undertaking solar observations. The opportunities for guest investigators to participate in solar flare studies with OSSE are also presented.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Measurements Required, and Maximum Discrepancy Specification C Table C-1 to Subpart C of Part 53 Protection of... Reference Methods Pt. 53, Subpt. C, Table C-1 Table C-1 to Subpart C of Part 53—Test Concentration Ranges..., June 22, 2010, table C-1 to subpart C was revised, effective Aug. 23, 2010. For the convenience of the...
Microprocessor-controlled step-down maximum-power-point tracker for photovoltaic systems
NASA Astrophysics Data System (ADS)
Mazmuder, R. K.; Haidar, S.
1992-12-01
An efficient maximum power point tracker (MPPT) has been developed for use with a photovoltaic (PV) array and a load that requires a lower operating voltage than the PV array voltage. The MPPT makes the PV array operate at its maximum power point (MPP) under all insolation and temperature conditions, ensuring that the maximum amount of available PV power is delivered to the load. The performance of the MPPT has been studied under different insolation levels.
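The abstract does not state which tracking algorithm the microprocessor runs; a common choice for a step-down MPPT is perturb-and-observe, sketched below under that assumption. `read_panel_power` and `set_duty_cycle` are hypothetical hardware-interface callbacks, not part of the described system.

```python
def perturb_and_observe(read_panel_power, set_duty_cycle, duty=0.5,
                        step=0.01, iterations=1000):
    """Perturb-and-observe MPPT: nudge the converter duty cycle and keep the
    direction that increases the extracted PV power (illustrative sketch only)."""
    last_power = read_panel_power()
    direction = +1
    for _ in range(iterations):
        duty = min(max(duty + direction * step, 0.05), 0.95)
        set_duty_cycle(duty)
        power = read_panel_power()
        if power < last_power:      # power dropped: reverse the perturbation direction
            direction = -direction
        last_power = power
    return duty
```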
Das, Somak; Swain, Sudeepta Kumar; Addala, Pavan Kumar; Balasubramaniam, Ramakrishnan; Gopakumar, C V; Zirpe, Dinesh; Renganathan, Kirubakaran; Kollu, Harsha; Patel, Darshan; Vibhute, Bipin B; Rao, Prashantha S; Krishnan, Elankumaran; Gopasetty, Mahesh; Khakhar, Anand K; Vaidya, Anil; Ramamurthy, Anand
2016-12-01
Nations with emerging deceased-donor liver transplantation programs, such as India, face problems associated with poor donor maintenance. Cold ischemic time (CIT) is typically kept short by matching donor organ recovery and recipient hepatectomy to achieve the most favorable outcome. We analyzed different extended criteria donor factors, including donor acidosis, which may act as a surrogate marker of poor donor maintenance, to quantify the risk of primary nonfunction (PNF) or initial poor function (IPF). A single-center retrospective outcome analysis of prospectively collected data of patients undergoing deceased-donor liver transplantation over 2 years was performed to determine the impact of different extended criteria donor factors on IPF and PNF. From March 2013 to February 2015, a total of 84 patients underwent deceased-donor liver transplantation. None developed PNF. Thirteen (15.5%) patients developed IPF. Only graft macrosteatosis and donor acidosis were related to IPF (P = .002 and P = .032, respectively). Cold ischemic time was kept short (81 cases ≤8 hours, maximum 11 hours) in all cases. Poor donor maintenance, as evidenced by donor acidosis and graft macrosteatosis, had a significant impact on the development of IPF even when CIT is kept short. A similar study with a larger sample size is required to establish extended criteria cutoff values.
Liu, Nuo; Jiang, Jianguo; Yan, Feng; Gao, Yuchen; Meng, Yuan; Aihemaiti, Aikelaimu; Ju, Tongyao
2018-07-01
The positive effect of sonication on volatile fatty acid (VFA) and hydrogen production was investigated by batch experiments. Several sonication densities (2, 1.6, and 1.2 W/mL) and times (5, 10, and 15 min) were tested. The optimal sonication condition was an ultrasonic density of 2 W/mL and an ultrasonic time of 15 min (2-U15). FW particles larger than 50 μm (d > 50 μm) were more susceptible to the sonication treatment than smaller particles (d ≤ 50 μm). The SCOD increased and VS reduction accelerated under sonication treatment. The maximum VFA production and the highest proportion of hydrogen in the biogas increased by 65.3% and 59.1%, respectively, under the optimal sonication conditions compared to the unsonicated batch. Moreover, a reduction of over 50% in the time required to reach maximum production was also observed. Butyric acid fermentation was the dominant fermentation type with or without sonication treatment. The composition of the key microbial community differed under the various sonication conditions. The genera Clostridium and Parabacteroides are predominantly responsible for VFA generation, and both were found to be abundant under the optimal condition. Copyright © 2018 Elsevier Ltd. All rights reserved.
Fractionation of distillers dried grains with solubles (DDGS) by sieving and winnowing.
Liu, KeShun
2009-12-01
Four commercial samples of distillers dried grains with solubles (DDGS) were sieved. All sieved fractions except for the pan fraction, constituting about 90% of original mass, were then winnowed with an air blast seed cleaner. Sieving was effective in producing fractions with varying composition. As the particle size decreased, protein and ash contents increased, and total carbohydrate (CHO) decreased. Winnowing sieved fractions was also effective in shifting composition, particularly for larger particle classes. Heavy sub-fractions were enriched in protein, oil and ash, while light sub-fractions were enriched for CHO. For protein, the combination of the two procedures resulted in a maximum 56.4% reduction in a fraction and maximum 60.2% increase in another fraction. As airflow velocity increased, light sub-fraction mass increased, while the compositional difference between the heavy and light sub-fractions decreased. Winnowing three times at a lower velocity was as effective as winnowing one time at a medium velocity. Winnowing the whole DDGS was much less effective than winnowing sieved fractions in changing composition, but sieving winnowed fractions was more effective than sieving whole DDGS. The two combination sequences gave comparable overall effects but sieving followed by winnowing is recommended because it requires less time. Regardless of combinational sequence, the second procedure was more effective in shifting composition than the first procedure.
Lu, Xiaofeng; Song, Li; Shen, Sumin; He, Kang; Yu, Songyu; Ling, Nam
2013-01-01
Hough Transform has been widely used for straight line detection in low-definition and still images, but it suffers from execution time and resource requirements. Field Programmable Gate Arrays (FPGA) provide a competitive alternative for hardware acceleration to reap tremendous computing performance. In this paper, we propose a novel parallel Hough Transform (PHT) and FPGA architecture-associated framework for real-time straight line detection in high-definition videos. A resource-optimized Canny edge detection method with enhanced non-maximum suppression conditions is presented to suppress most possible false edges and obtain more accurate candidate edge pixels for subsequent accelerated computation. Then, a novel PHT algorithm exploiting spatial angle-level parallelism is proposed to upgrade computational accuracy by improving the minimum computational step. Moreover, the FPGA based multi-level pipelined PHT architecture optimized by spatial parallelism ensures real-time computation for 1,024 × 768 resolution videos without any off-chip memory consumption. This framework is evaluated on ALTERA DE2-115 FPGA evaluation platform at a maximum frequency of 200 MHz, and it can calculate straight line parameters in 15.59 ms on the average for one frame. Qualitative and quantitative evaluation results have validated the system performance regarding data throughput, memory bandwidth, resource, speed and robustness. PMID:23867746
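The FPGA architecture itself cannot be reproduced here, but the underlying straight-line Hough accumulation over (θ, ρ) that the hardware parallelizes can be sketched in a few lines of NumPy. The angle and distance resolutions below are arbitrary choices for illustration, not the paper's design parameters.

```python
import numpy as np

def hough_lines(edge_mask, n_theta=180, rho_res=1.0):
    """Accumulate votes in (theta, rho) space for a binary edge image.
    This is the sequential computation that the FPGA design parallelizes."""
    h, w = edge_mask.shape
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = np.hypot(h, w)
    n_rho = int(2 * rho_max / rho_res) + 1
    acc = np.zeros((n_rho, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edge_mask)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in zip(xs, ys):
        rho = x * cos_t + y * sin_t                    # one rho per theta
        rho_idx = np.round((rho + rho_max) / rho_res).astype(int)
        acc[rho_idx, np.arange(n_theta)] += 1
    return acc, thetas

# A diagonal line of edge pixels should produce one dominant accumulator peak
img = np.zeros((64, 64), dtype=bool)
np.fill_diagonal(img, True)
acc, thetas = hough_lines(img)
peak = np.unravel_index(acc.argmax(), acc.shape)
print(peak, np.degrees(thetas[peak[1]]))               # theta near 135 degrees for the line y = x
```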
Ichnological evidence of jökulhlaup deposit recolonization from the Touchet Beds, Mabton, WA, USA
NASA Astrophysics Data System (ADS)
MacEachern, James A.; Roberts, Michael C.
2013-01-01
The late Wisconsinan Touchet Beds section at Mabton, Washington reveals at least seven stacked jökulhlaup deposits, five showing evidence of post-flood recolonization by vertebrates. Tracemakers are attributed to voles or pocket mice (1-3 cm diameter burrows) and pocket gophers or ground squirrels (3-6 cm diameter burrows). The Mount St. Helens S tephra deposited between flood beds contains the invertebrate-generated burrows Naktodemasis and Macanopsis. Estimates of times between floods are based on natal dispersal distances of the likely vertebrate tracemakers (30-50 m median distances; 127-525 m maximum distances) from upland areas containing surviving populations to the Mabton area, a distance of about 7.9 km. Tetrapods would have required at least two to three decades to recolonize these flood beds, based on maximum dispersal distances. Invertebrate recolonization was limited by secondary succession and estimated at only a few years to a decade. These ichnological data support multiple floods from failure of the ice dam at glacial Lake Missoula, separated by hiatal surfaces on the order of decades in duration. Ichnological recolonization times are consistent with published estimates of refill times for glacial Lake Missoula, and complement the other field evidence that points to repeated, autogenically induced flood discharge.
Interpreting the Need for Initial Support to Perform Tandem Stance Tests of Balance
Brach, Jennifer S.; Perera, Subashan; Wert, David M.; VanSwearingen, Jessie M.; Studenski, Stephanie A.
2012-01-01
Background Geriatric rehabilitation reimbursement increasingly requires documented deficits on standardized measures. Tandem stance performance can characterize balance, but protocols are not standardized. Objective The purpose of this study was to explore the impact of: (1) initial support to stabilize in position and (2) maximum hold time on tandem stance tests of balance in older adults. Design A cross-sectional secondary analysis of observational cohort data was conducted. Methods One hundred seventeen community-dwelling older adults (71% female, 12% black) were assigned to 1 of 3 groups based on the need for initial support to perform tandem stance: (1) unable even with support, (2) able only with support, and (3) able without support. The able without support group was further stratified on hold time in seconds: (1) <10 (low), (2) 10 to 29 (medium), and (3) ≥30 (high). Groups were compared on primary outcomes (gait speed, Timed "Up & Go" Test performance, and balance confidence) using analysis of variance. Results Twelve participants were unable to perform tandem stance, 14 performed tandem stance only with support, and 91 performed tandem stance without support. Compared with the able without support group, the able with support group had statistically or clinically worse performance and balance confidence. No significant differences were found between the able with support group and the unable even with support group on these same measures. Extending the hold time to 30 seconds in a protocol without initial support eliminated ceiling effects for 16% of the study sample. Limitations Small comparison groups, use of a secondary analysis, and lack of generalizability of results were limitations of the study. Conclusions Requiring initial support to stabilize in tandem stance appears to reflect meaningful deficits in balance-related mobility measures, so failing to consider support may inflate balance estimates and confound hold time comparisons. Additionally, 10-second maximum hold times limit discrimination of balance in adults with a higher level of function. For community-dwelling older adults, we recommend timing for at least 30 seconds and documenting initial support for consideration when interpreting performance. PMID:22745198
NASA Technical Reports Server (NTRS)
Mckhann, G.
1977-01-01
Solar array power systems for the space construction base are discussed. Nickel cadmium and nickel hydrogen batteries are equally attractive relative to regenerative fuel cell systems at 5 years life. Further evaluation of energy storage system life (low orbit conditions) is required. Shuttle and solid polymer electrolyte fuel cell technology appears adequate; large units (approximately four times shuttle) are most appropriate and should be studied for a 100 kWe SCB system. A conservative NiH2 battery DOD (18.6%) was selected due to the lack of test data and offers considerable improvement potential. Multiorbit load averaging and reserve capacity requirements limit nominal DOD to 30% to 50% maximum, independent of life considerations.
Seals/Secondary Fluid Flows Workshop 1997; Volume II: HSR Engine Special Session
NASA Technical Reports Server (NTRS)
Hendricks, Robert C. (Editor)
2006-01-01
The High Speed Civil Transport (HSCT) engine will be the largest engine ever built and will be operated at maximum conditions for long periods of time. It is being developed collaboratively by NASA, the FAA, Boeing-McDonnell Douglas, Pratt & Whitney, and General Electric. This document provides an initial step toward defining high speed research (HSR) sealing needs. The overview for HSR seals includes defining objectives, summarizing sealing and material requirements, presenting relevant seal cross-sections, and identifying technology needs. Overview presentations are given for the inlet, turbomachinery, combustor, and nozzle. The HSCT and HSR seal issues center on durability and efficiency of rotating equipment seals, structural seals, and high speed bearing and sump seals. Tighter clearances, propulsion system size, and thermal requirements challenge component designers.
The discrete prolate spheroidal filter as a digital signal processing tool
NASA Technical Reports Server (NTRS)
Mathews, J. D.; Breakall, J. K.; Karawas, G. K.
1983-01-01
The discrete prolate spheroidal (DPS) filter is one of the class of nonrecursive finite impulse response (FIR) filters. The DPS filter is superior to other filters in this class in that it has maximum energy concentration in the frequency passband and minimum ringing in the time domain. A mathematical development of the DPS filter properties is given, along with the information required to construct the filter. The properties of this filter were compared with those of the more commonly used filters of the same class. Use of the DPS filter allows for particularly meaningful statements of data time/frequency resolution cell values. The filter forms an especially useful tool for digital signal processing.
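SciPy exposes the discrete prolate spheroidal (Slepian) sequences directly, so a DPS-style low-pass FIR filter can be sketched by taking the zeroth-order sequence, which maximizes energy concentration in the design band, and normalizing it to unit DC gain. The length and time-bandwidth product below are arbitrary example values, not the paper's design.

```python
import numpy as np
from scipy.signal import freqz, windows

# Zeroth-order DPSS of length M with time-half-bandwidth product NW:
# the length-M FIR impulse response with maximum energy in |f| <= W = NW/M cycles/sample.
M, NW = 101, 4.0
h = windows.dpss(M, NW)            # single 0th-order Slepian taper, shape (M,)
h = h / h.sum()                    # normalize to unit gain at DC

w, H = freqz(h, worN=2048)
W = NW / M                         # design half-bandwidth in cycles per sample
stop = int(2 * (2 * W) * 2048)     # frequency index at twice the design bandwidth
print(f"half-bandwidth W = {W:.4f} cycles/sample, gain at DC = {abs(H[0]):.3f}")
print(f"max sidelobe beyond 2W: {20*np.log10(np.max(np.abs(H[stop:])) / abs(H[0])):.1f} dB")
```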
An analysis and demonstration of clock synchronization by VLBI
NASA Technical Reports Server (NTRS)
Hurd, W. J.
1972-01-01
A prototype of a semireal-time system for synchronizing the DSN station clocks by radio interferometry was successfully demonstrated. The system utilized an approximate maximum likelihood estimation procedure for processing the data, thereby achieving essentially optimum time synchronization estimates for a given amount of data, or equivalently, minimizing the amount of data required for reliable estimation. Synchronization accuracies as good as 100 nsec rms were achieved between DSS 11 and DSS 12, both at Goldstone, California. The accuracy can be improved by increasing the system bandwidth until the fundamental limitations due to position uncertainties of baseline and source and atmospheric effects are reached. These limitations are under ten nsec for transcontinental baselines.
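The DSN processing used an approximate maximum likelihood estimator; for a common signal observed in white noise at two stations, the maximum likelihood delay estimate reduces to the lag of the peak cross-correlation. The sketch below demonstrates that idea on synthetic data and is not the actual DSN implementation; sample rate, noise level, and offset are made up.

```python
import numpy as np
from scipy.signal import correlate

def delay_estimate(x, y, fs):
    """Clock offset of recording x relative to y, taken as the lag that
    maximizes their cross-correlation (ML estimate for a common signal
    in white noise)."""
    corr = correlate(x - x.mean(), y - y.mean(), mode="full", method="fft")
    lag = int(np.argmax(corr)) - (len(y) - 1)
    return lag / fs

fs = 1e6                                    # 1 MHz sampling (illustrative)
rng = np.random.default_rng(1)
s = rng.standard_normal(100_000)            # common "source" signal
true_offset = 137                           # samples, i.e., 137 microseconds
x = np.roll(s, true_offset) + 0.5 * rng.standard_normal(s.size)
y = s + 0.5 * rng.standard_normal(s.size)
print(delay_estimate(x, y, fs))             # approximately 1.37e-4 s
```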
How can we maximize the diagnostic utility of uroflow?: ICI-RS 2017.
Gammie, Andrew; Rosier, Peter; Li, Rui; Harding, Chris
2018-01-09
To gauge the current level of diagnostic utility of uroflowmetry and to suggest areas needing research to improve this. A summary of the debate held at the 2017 meeting of the International Consultation on Incontinence Research Society, with subsequent analysis by the authors. Limited diagnostic sensitivity and specificity exist for maximum flow rates, multiple uroflow measurements, and flow-volume nomograms. There is a lack of clarity in flow rate curve shape description and uroflow time measurement. There is a need for research to combine uroflowmetry with other non-invasive indicators. Better standardizations of test technique, flow-volume nomograms, uroflow shape descriptions, and time measurements are required. © 2017 Wiley Periodicals, Inc.
Temperature Histories in Ceramic-Insulated Heat-Sink Nozzle
NASA Technical Reports Server (NTRS)
Ciepluch, Carl C.
1960-01-01
Temperature histories were calculated for a composite nozzle wall by a simplified numerical integration calculation procedure. These calculations indicated that there is a unique ratio of insulation and metal heat-sink thickness that will minimize total wall thickness for a given operating condition and required running time. The optimum insulation and metal thickness will vary throughout the nozzle as a result of the variation in heat-transfer rate. The use of low chamber pressure results in a significant increase in the maximum running time of a given weight nozzle. Experimentally measured wall temperatures were lower than those calculated. This was due in part to the assumption of one-dimensional or slab heat flow in the calculation procedure.
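A minimal sketch of the kind of simplified one-dimensional (slab) numerical integration described above is given below: explicit finite-difference conduction through an insulation layer backed by a metal heat sink, with a constant heat flux applied at the hot face and an adiabatic back face. All material properties, layer thicknesses, and the applied flux are placeholder values, not those of the nozzle study.

```python
import numpy as np

# 1-D explicit (FTCS) conduction: 10 insulation nodes on 20 metal heat-sink nodes.
k      = np.array([1.5] * 10 + [40.0] * 20)       # thermal conductivity, W/m-K
rho_cp = np.array([2.0e6] * 10 + [3.6e6] * 20)    # volumetric heat capacity, J/m^3-K
dx, q_hot = 0.5e-3, 2.0e5                         # node size (m), hot-gas-side flux (W/m^2)
T = np.full(k.size, 300.0)                        # initial wall temperature (K)
dt = 0.4 * dx**2 * rho_cp.min() / k.max()         # stable explicit time step

for _ in range(int(30.0 / dt)):                   # 30 s of running time
    q = -0.5 * (k[:-1] + k[1:]) * np.diff(T) / dx # interface heat fluxes (W/m^2)
    dT = np.zeros_like(T)
    dT[0]    = (q_hot - q[0]) * dt / (rho_cp[0] * dx)    # heated front face
    dT[1:-1] = (q[:-1] - q[1:]) * dt / (rho_cp[1:-1] * dx)
    dT[-1]   = q[-1] * dt / (rho_cp[-1] * dx)            # adiabatic back face
    T += dT

print(f"hot-face temperature after 30 s: {T[0]:.0f} K, back face: {T[-1]:.0f} K")
```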
Nigg, Claudio R; Motl, Robert W; Horwath, Caroline; Dishman, Rod K
2012-01-01
Objectives Physical activity (PA) research applying the Transtheoretical Model (TTM) to examine group differences and/or change over time requires preliminary evidence of factorial validity and invariance. The current study examined the factorial validity and longitudinal invariance of TTM constructs recently revised for PA. Method Participants from an ethnically diverse sample in Hawaii (N=700) completed questionnaires capturing each TTM construct. Results Factorial validity was confirmed for each construct using confirmatory factor analysis with full-information maximum likelihood. Longitudinal invariance was evidenced across a shorter (3-month) and longer (6-month) time period via nested model comparisons. Conclusions The questionnaires for each validated TTM construct are provided, and can now be generalized across similar subgroups and time points. Further validation of the provided measures is suggested in additional populations and across extended time points. PMID:22778669
A technique for transferring a patient's smile line to a cone beam computed tomography (CBCT) image.
Bidra, Avinash S
2014-08-01
Fixed implant-supported prosthodontic treatment for patients requiring a gingival prosthesis often demands that bone and implant levels be apical to the patient's maximum smile line. This is to avoid the display of the prosthesis-tissue junction (the junction between the gingival prosthesis and natural soft tissues) and prevent esthetic failures. Recording a patient's lip position during maximum smile is invaluable for the treatment planning process. This article presents a simple technique for clinically recording and transferring the patient's maximum smile line to cone beam computed tomography (CBCT) images for analysis. The technique can help clinicians accurately determine the need for and amount of bone reduction required with respect to the maximum smile line and place implants in optimal positions. Copyright © 2014 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Intravenous lipid emulsion alters the hemodynamic response to epinephrine in a rat model.
Carreiro, Stephanie; Blum, Jared; Jay, Gregory; Hack, Jason B
2013-09-01
Intravenous lipid emulsion (ILE) is an adjunctive antidote used in selected critically ill poisoned patients. These patients may also require administration of advanced cardiac life support (ACLS) drugs. Limited data are available describing interactions of ILE with standard ACLS drugs, specifically epinephrine. Twenty rats with intra-arterial and intravenous access were sedated with isoflurane and split into ILE or normal saline (NS) pretreatment groups. All received epinephrine 15 μg/kg intravenously (IV). Continuous mean arterial pressure (MAP) and heart rate (HR) were monitored until both indices returned to baseline. Standardized t tests were used to compare peak MAP, time to peak MAP, maximum change in HR, time to maximum change in HR, and time to return to baseline MAP/HR. There was a significant difference (p = 0.023) in time to peak MAP in the ILE group (54 s, 95% CI 44-64) versus the NS group (40 s, 95% CI 32-48) and a significant difference (p = 0.004) in time to return to baseline MAP in the ILE group (171 s, 95% CI 148-194) versus the NS group (130 s, 95% CI 113-147). There were no significant differences in the peak change in MAP, peak change in HR, time to minimum HR, or time to return to baseline HR between groups. ILE-pretreated rats had a significantly different MAP response to epinephrine; ILE delayed the peak effect and prolonged the duration of effect of epinephrine on MAP, but did not alter the peak increase in MAP or the HR response.
A stochastic maximum principle for backward control systems with random default time
NASA Astrophysics Data System (ADS)
Shen, Yang; Kuen Siu, Tak
2013-05-01
This paper establishes a necessary and sufficient stochastic maximum principle for backward systems, where the state processes are governed by jump-diffusion backward stochastic differential equations with random default time. An application of the sufficient stochastic maximum principle to an optimal investment and capital injection problem in the presence of default risk is discussed.
Scaling Analysis of Alloy Solidification and Fluid Flow in a Rectangular Cavity
NASA Astrophysics Data System (ADS)
Plotkowski, A.; Fezi, K.; Krane, M. J. M.
A scaling analysis was performed to predict trends in alloy solidification in a side-cooled rectangular cavity. The governing equations for energy and momentum were scaled in order to determine the dependence of various aspects of solidification on the process parameters for a uniform initial temperature and an isothermal boundary condition. This work improved on previous analyses by adding considerations for the cooling bulk fluid flow. The analysis predicted the time required to extinguish the superheat, the maximum local solidification time, and the total solidification time. The results were compared to a numerical simulation for a Al-4.5 wt.% Cu alloy with various initial and boundary conditions. Good agreement was found between the simulation results and the trends predicted by the scaling analysis.
Note: Tesla based pulse generator for electrical breakdown study of liquid dielectrics
NASA Astrophysics Data System (ADS)
Veda Prakash, G.; Kumar, R.; Patel, J.; Saurabh, K.; Shyam, A.
2013-12-01
In the process of studying charge-holding capability and breakdown delay time in liquids on nanosecond (ns) time scales, a Tesla-based pulse generator has been developed. The pulse generator is a combination of a Tesla transformer, a pulse forming line, a fast closing switch, and a test chamber. Use of a Tesla transformer instead of a conventional Marx generator makes the pulse generator very compact and cost effective, and reduces maintenance requirements. The system has been designed and developed to deliver a maximum output voltage of 300 kV with a rise time of the order of tens of nanoseconds. The paper deals with the system design parameters, the breakdown test procedure, and various experimental results. To validate the pulse generator performance, experimental results have been compared with PSPICE simulation software and are in good agreement with the simulation results.
Motion mitigation for lung cancer patients treated with active scanning proton therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grassberger, Clemens, E-mail: Grassberger.Clemens@mgh.harvard.edu; Dowdell, Stephen; Sharp, Greg
2015-05-15
Purpose: Motion interplay can affect the tumor dose in scanned proton beam therapy. This study assesses the ability of rescanning and gating to mitigate interplay effects during lung treatments. Methods: The treatments of five lung cancer patients [48 Gy(RBE)/4fx] with varying tumor size (21.1–82.3 cm³) and motion amplitude (2.9–30.6 mm) were simulated employing 4D Monte Carlo. The authors investigated two spot sizes (σ ∼ 12 and ∼3 mm), three rescanning techniques (layered, volumetric, breath-sampled volumetric) and respiratory gating with a 30% duty cycle. Results: For 4/5 patients, layered rescanning 6/2 times (for the small/large spot size) maintains equivalent uniform dose within the target >98% for a single fraction. Breath sampling the timing of rescanning is ∼2 times more effective than the same number of continuous rescans. Volumetric rescanning is sensitive to synchronization effects, which were observed in 3/5 patients, though not for layered rescanning. For the large spot size, rescanning compared favorably with gating in terms of time requirements, i.e., 2x-rescanning is on average a factor ∼2.6 faster than gating for this scenario. For the small spot size, however, 6x-rescanning takes on average 65% longer compared to gating. Rescanning has no effect on normal lung V20 and mean lung dose (MLD), though it reduces the maximum lung dose by on average 6.9 ± 2.4/16.7 ± 12.2 Gy(RBE) for the large and small spot sizes, respectively. Gating leads to a similar reduction in maximum dose and additionally reduces V20 and MLD. Breath-sampled rescanning is most successful in reducing the maximum dose to the normal lung. Conclusions: Both rescanning (2–6 times, depending on the beam size) as well as gating was able to mitigate interplay effects in the target for 4/5 patients studied. Layered rescanning is superior to volumetric rescanning, as the latter suffers from synchronization effects in 3/5 patients studied. Gating minimizes the irradiated volume of normal lung more efficiently, while breath-sampled rescanning is superior in reducing maximum doses to organs at risk.
Current Pulses Momentarily Enhance Thermoelectric Cooling
NASA Technical Reports Server (NTRS)
Snyder, G. Jeffrey; Fleurial, Jean-Pierre; Caillat, Thierry; Chen, Gang; Yang, Rong Gui
2004-01-01
The rates of cooling afforded by thermoelectric (Peltier) devices can be increased for short times by applying pulses of electric current greater than the currents that yield maximum steady-state cooling. It has been proposed to utilize such momentary enhancements of cooling in applications in which diode lasers and other semiconductor devices are required to operate for times of the order of milliseconds at temperatures too low to be easily obtainable in the steady state. In a typical contemplated application, a semiconductor device would be in contact with the final (coldest) somewhat taller stage of a multistage thermoelectric cooler. Steady current would be applied to the stages to produce steady cooling. Pulsed current would then be applied, enhancing the cooling of the top stage momentarily. The principles of operation are straightforward: In a thermoelectric device, the cooling occurs only at a junction at one end of the thermoelectric legs, at a rate proportional to the applied current. However, Joule heating occurs throughout the device at a rate proportional to the current squared. Hence, in the steady state, the steady temperature difference that the device can sustain increases with current only to the point beyond which the Joule heating dominates. If a pulse of current greater than the optimum current (the current for maximum steady cooling) is applied, then the junction becomes momentarily cooled below its lowest steady temperature until thermal conduction brings the resulting pulse of Joule heat to the junction and thereby heats the junction above its lowest steady temperature. A theoretical and experimental study of such transient thermoelectric cooling followed by transient Joule heating in response to current pulses has been performed. The figure presents results from one of the experiments. The study established the essential parameters that characterize the pulse cooling effect, including the minimum temperature achieved, the maximum temperature overshoot, the time to reach minimum temperature, the time while cooled, and the time between pulses. It was found that at large pulse amplitude, the amount of pulse supercooling is about a fourth of the maximum steady-state temperature difference. For the particular thermoelectric device used in one set of the experiments, the practical optimum pulse amplitude was found to be about 3 times the optimum steady-state current. In a further experiment, a pulse cooler was integrated into a small commercial thermoelectric three-stage cooler and found to provide several degrees of additional cooling for a time long enough to operate a semiconductor laser in a gas sensor.
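The steady-state relations invoked above (cooling proportional to current, Joule heating proportional to current squared) give a closed-form optimum current, Q_c(I) = S·I·T_c − ½I²R − K·ΔT with I_opt = S·T_c/R. The sketch below evaluates that curve and the roughly 3×I_opt pulse amplitude mentioned in the passage; the device parameters (Seebeck coefficient S, resistance R, thermal conductance K) are illustrative values, not measurements from the study.

```python
# Steady-state Peltier cooling power and the current that maximizes it.
S, R, K = 0.05, 2.0, 0.02          # Seebeck coeff (V/K), resistance (ohm), conductance (W/K)
T_c, dT = 250.0, 30.0              # cold-junction temperature (K), temperature lift (K)

def q_cold(i):
    return S * i * T_c - 0.5 * i**2 * R - K * dT

i_opt = S * T_c / R                # dQ/dI = S*T_c - I*R = 0  ->  optimum steady current
print(f"I_opt = {i_opt:.2f} A, Q_c(I_opt) = {q_cold(i_opt):.1f} W")
print(f"3*I_opt = {3*i_opt:.1f} A, steady Q_c there = {q_cold(3*i_opt):.1f} W (net heating: pulses help only transiently)")
```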
NASA Astrophysics Data System (ADS)
Zhu, Kai-Jian; Li, Jun-Feng; Baoyin, He-Xi
2010-01-01
In case of an emergency like the Wenchuan earthquake, it is impossible to observe a given target on earth by immediately launching new satellites. There is an urgent need for efficient satellite scheduling within a limited time, so a way must be found to make reasonable use of existing satellites to rapidly image the affected area over a short period. Generally, the main consideration in orbit design is satellite coverage, with the subsatellite nadir point as the standard of reference. Two factors must be considered simultaneously in orbit design: the maximum observation coverage time and the minimum orbital transfer fuel cost. The local time of visiting the given observation sites must satisfy the solar radiation requirement. Because this paper considers an operational orbit for observing the disaster area using impulsive maneuvers, the operational orbit elements are taken as the parameters to be optimized, and the minimum objective function is obtained by comparing the results derived from primer vector theory with those derived from the Hohmann transfer. Primer vector theory is utilized to optimize the transfer trajectory with three impulses, and the Hohmann transfer is utilized for coplanar cases and non-coplanar cases with small inclination differences. Finally, we applied this method in a simulation of the rescue mission at Wenchuan city. The results of optimizing the orbit design with a hybrid PSO and DE algorithm show that primer vector theory and the Hohmann transfer are effective methods for multi-objective orbit optimization.
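The Hohmann transfer used above for the coplanar cases has a simple closed form for the two impulses between circular orbits. The sketch below computes those impulses; the gravitational parameter and orbit radii are example values, not the mission's actual orbits.

```python
import math

MU_EARTH = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2

def hohmann_delta_v(r1, r2, mu=MU_EARTH):
    """Two-impulse Hohmann transfer between coplanar circular orbits of radii r1, r2 (m)."""
    a_t = 0.5 * (r1 + r2)                                      # transfer-ellipse semi-major axis
    dv1 = abs(math.sqrt(mu * (2 / r1 - 1 / a_t)) - math.sqrt(mu / r1))   # departure burn
    dv2 = abs(math.sqrt(mu / r2) - math.sqrt(mu * (2 / r2 - 1 / a_t)))   # circularization burn
    return dv1, dv2

# Example: raise a 500 km circular orbit to 800 km (radii include Earth's ~6378 km radius)
dv1, dv2 = hohmann_delta_v(6378e3 + 500e3, 6378e3 + 800e3)
print(f"dv1 = {dv1:.1f} m/s, dv2 = {dv2:.1f} m/s, total = {dv1 + dv2:.1f} m/s")
```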
Engineering design constraints of the lunar surface environment
NASA Technical Reports Server (NTRS)
Morrison, D. A.
1992-01-01
Living and working on the lunar surface will be difficult. Design of habitats, machines, tools, and operational scenarios in order to allow maximum flexibility in human activity will require paying attention to certain constraints imposed by conditions at the surface and the characteristics of lunar material. Primary design drivers for habitat, crew health and safety, and crew equipment are: ionizing radiation, the meteoroid flux, and the thermal environment. Secondary constraints for engineering derive from: the physical and chemical properties of lunar surface materials, rock distributions and regolith thicknesses, topography, electromagnetic properties, and seismicity. Protection from ionizing radiation is essential for crew health and safety. The total dose acquired by a crew member will be the sum of the dose acquired during EVA time (when shielding will be least) plus the dose acquired during time spent in the habitat (when shielding will be maximum). Minimizing the dose acquired in the habitat extends the time allowable for EVA's before a dose limit is reached. Habitat shielding is enabling, and higher precision in predicting secondary fluxes produced in shielding material would be desirable. Means for minimizing dose during a solar flare event while on extended EVA will be essential. Early warning of the onset of flare activity (at least a half-hour is feasible) will dictate the time available to take mitigating steps. Warning capability affects design of rovers (or rover tools) and site layout. Uncertainty in solar flare timing is a design constraint that points to the need for quickly accessible or constructible safe havens.
Impact Damage and Strain Rate Effects for Toughened Epoxy Composite Structures
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Minnetyan, Levon
2006-01-01
Structural integrity of composite systems under dynamic impact loading is investigated herein. The GENOA virtual testing software environment is used to implement the effects of dynamic loading on fracture progression and damage tolerance. Combinations of graphite and glass fibers with a toughened epoxy matrix are investigated. The effect of a ceramic coating for the absorption of impact energy is also included. Impact and post-impact simulations include verification and prediction of (1) Load and Impact Energy, (2) Impact Damage Size, (3) Maximum Impact Peak Load, (4) Residual Strength, (5) Maximum Displacement, (6) Contribution of Failure Modes to Failure Mechanisms, (7) Impact Load Versus Time, and (8) Damage and Fracture Pattern. A computer model is utilized for the assessment of structural response, progressive fracture, and defect/damage tolerance characteristics. Results show the damage progression sequence and the changes in the structural response characteristics due to dynamic impact. The fundamental premise of computational simulation is that the complete evaluation of composite fracture requires an assessment of ply and subply level damage/fracture processes as the structure is subjected to loads. Simulation results for the graphite/epoxy composite were compared with impact and tension failure test data; correlation and verification were obtained for (1) impact energy, (2) damage size, (3) maximum impact peak load, (4) residual strength, (5) maximum displacement, and (6) failure mechanisms of the composite structure.
Huang, Yize; Jivraj, Jamil; Zhou, Jiaqi; Ramjist, Joel; Wong, Ronnie; Gu, Xijia; Yang, Victor X D
2016-07-25
A surgical laser soft tissue ablation system based on an adjustable 1942 nm single-mode all-fiber Tm-doped fiber laser operating in pulsed or CW mode with nitrogen assistance is demonstrated. Ex vivo ablation on soft tissue targets such as muscle (chicken breast) and spinal cord (porcine) with intact dura is performed at different ablation conditions to examine the relationship between the system parameters and ablation outcomes. The maximum laser average power is 14.4 W, and its maximum peak power is 133.1 W with 21.3 μJ pulse energy. The maximum CW power density is 2.33 × 10⁶ W/cm² and the maximum pulsed peak power density is 2.16 × 10⁷ W/cm². The system parameters examined include the average laser power in CW or pulsed operation mode, gain-switching frequency, total ablation exposure time, and the input gas flow rate. The ablation effects were measured by microscopy and optical coherence tomography (OCT) to evaluate the ablation depth, superficial heat-affected zone diameter (HAZD) and charring diameter (CD). Our results show that the system parameters can be tailored to meet different clinical requirements such as ablation for soft tissue cutting or thermal coagulation for future applications of hemostasis.
First Trial of Real-time Poloidal Beta Control in KSTAR
NASA Astrophysics Data System (ADS)
Han, Hyunsun; Hahn, S. H.; Bak, J. G.; Walker, M. L.; Woo, M. H.; Kim, J. S.; Kim, Y. J.; Bae, Y. S.; KSTAR Team
2014-10-01
Sustaining the plasma in a stable, high-performance condition is one of the important control issues for future steady-state tokamaks. In the 2014 KSTAR campaign, we developed a real-time poloidal beta (βp) control technique and carried out preliminary experiments to assess its feasibility. In the control system, βp is calculated in real time from the measured diamagnetic loop signal (DLM03) with coil pickup corrections and compared with the target value; the resulting error drives a change in the neutral beam (NB) heating power through a feedback PID control algorithm. To deliver the required NB power with a beam operated at constant voltage, the duty cycle of the modulation is set to the ratio of the required power to the maximum achievable power. This paper presents the overall procedure of the βp control, the βp estimation process implemented in the plasma control system, and an analysis of the preliminary experimental results. This work is supported by the KSTAR research project funded by the Ministry of Science, ICT & Future Planning of Korea.
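A minimal sketch of the feedback idea in that abstract is given below: a PID controller converts the βp error into a requested NB power, and the modulation duty cycle is the ratio of requested to maximum achievable power. The gains, power limit, and cycle time are hypothetical placeholders, not values from the KSTAR control system.

```python
# Minimal PID + duty-cycle sketch of the control loop described above.
# Gains, limits and scaling are assumed values, not KSTAR parameters.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, error):
        self.integral += error * self.dt
        deriv = (error - self.prev_err) / self.dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

pid = PID(kp=5.0e6, ki=1.0e6, kd=0.0, dt=0.01)   # assumed gains [W per unit beta_p], 10 ms cycle
P_NB_MAX = 3.0e6                                 # assumed maximum NB power [W]

def control_step(beta_p_target, beta_p_measured):
    p_request = pid.update(beta_p_target - beta_p_measured)   # requested NB power [W]
    p_request = min(max(p_request, 0.0), P_NB_MAX)
    duty_cycle = p_request / P_NB_MAX    # modulation ratio for the constant-voltage beam
    return duty_cycle
```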
Source modeling and inversion with near real-time GPS: a GITEWS perspective for Indonesia
NASA Astrophysics Data System (ADS)
Babeyko, A. Y.; Hoechner, A.; Sobolev, S. V.
2010-07-01
We present the GITEWS approach to source modeling for tsunami early warning in Indonesia. A near-field tsunami imposes special requirements on both warning time and the detail of source characterization. To meet these requirements, we employ geophysical and geological information to predefine as many rupture parameters as possible. We discretize the tsunamigenic Sunda plate interface into an ordered grid of patches (150×25) and employ the concept of Green's functions for forward and inverse rupture modeling. Rupture Generator, a forward modeling tool, additionally employs different scaling laws and slip shape functions to construct physically reasonable source models using basic seismic information only (magnitude and epicenter location). GITEWS runs a library of semi- and fully-synthetic scenarios that is used extensively for system testing and for teaching and training warning center personnel. Near real-time GPS observations are a very valuable complement to the local tsunami warning system. Their inversion provides a quick (within a few minutes of an event) estimate of the earthquake magnitude, rupture position and, with sufficient station coverage, details of the slip distribution.
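The Green's-function inversion mentioned above amounts to a linear least-squares problem: GPS displacements d are modeled as d = G s, where each column of G is the station response to unit slip on one fault patch. The sketch below uses random placeholder matrices and a simple damping term; a real system would precompute G for the discretized plate interface and add positivity and smoothing constraints.

```python
# Damped least-squares slip inversion sketch (synthetic G, slip and noise).
import numpy as np

rng = np.random.default_rng(0)
n_patches, n_obs = 60, 90                         # assumed problem size
G = rng.normal(size=(n_obs, n_patches))           # placeholder Green's functions
true_slip = np.clip(rng.normal(2.0, 1.0, n_patches), 0, None)
d = G @ true_slip + 0.05 * rng.normal(size=n_obs) # noisy "observations"

# Minimize ||G s - d||^2 + lam^2 ||s||^2
lam = 0.1
A = np.vstack([G, lam * np.eye(n_patches)])
b = np.concatenate([d, np.zeros(n_patches)])
slip_est, *_ = np.linalg.lstsq(A, b, rcond=None)

# A seismic moment (and hence magnitude) estimate would follow from the summed
# slip times patch area and rigidity; omitted here.
print("recovered mean slip:", slip_est.mean())
```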
[Research on the stability of teaching robots of rotation-traction manipulation].
Feng, Min-Shan; Zhu, Li-Guo; Wang, Shang-Quan; Yu, Jie; Chen, Ming; Li, Ling-Hui; Wei, Xu
2017-03-25
To evaluate the stability of a teaching robot for rotation-traction manipulation. Operators were required to be proficient in rotation-traction manipulation and to have more than 5 years of clinical experience. Measurements from the ten operators in this study were collected by the teaching robot for rotation-traction manipulation. Traction, pulling force, maximum force, pulling time, rotational amplitude and pitch range were recorded and compared over five sessions (G1, G2, G3, G4 and G5). Qualification rates were analyzed to evaluate the stability of the teaching robot. Nonconforming items were found in G1 and G2, for instance pulling force (P=0.074), maximum force (P=0.264) and rotational amplitude (P=0.531), although these differences were not statistically significant. No nonconforming items were found in G3, G4 or G5. All data were processed in SPSS and analyzed by one-way ANOVA. Pulling force in G1 differed significantly from G4 and G5 (P=0.015, P=0.006), as did maximum force (P=0.021, P=0.012). No differences were found in the other comparisons (P>0.05). The teaching robot for rotation-traction manipulation used in this study provides objective and quantitative indices and is considered an effective tool for assessing rotation-traction manipulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murton, Mark; Bouchier, Francis A.; vanDongen, Dale T.
2013-08-01
Although technological advances provide new capabilities to increase the robustness of security systems, they also potentially introduce new vulnerabilities. New capability sometimes requires new performance requirements. This paper outlines an approach to establishing a key performance requirement for an emerging intrusion detection sensor: the sensored net. Throughout the security industry, the commonly adopted standard for maximum opening size through barriers is a requirement based on square inches, typically 96 square inches. Unlike a standard rigid opening, the dimensions of a flexible aperture are not fixed, but variable and conformable. It is demonstrably simple for a human intruder to move through a 96-square-inch opening that is conformable to the human body. The longstanding 96-square-inch requirement itself, though firmly embedded in policy and best practice, lacks a documented empirical basis. This analysis concluded that the traditional 96-square-inch standard for openings is insufficient for flexible openings that are conformable to the human body. Instead, a circumference standard is recommended for these newer types of sensored barriers. The recommended maximum circumference for a flexible opening should be no more than 26 inches, as measured on the inside of the netting material.
BLIPPED (BLIpped Pure Phase EncoDing) high resolution MRI with low amplitude gradients
NASA Astrophysics Data System (ADS)
Xiao, Dan; Balcom, Bruce J.
2017-12-01
MRI image resolution is proportional to the maximum k-space value, i.e. the temporal integral of the magnetic field gradient. High resolution imaging usually requires high gradient amplitudes and/or long spatial encoding times. Special gradient hardware is often required for high amplitudes and fast switching. We propose a high resolution imaging sequence that employs low amplitude gradients. This method was inspired by the previously proposed PEPI (π Echo Planar Imaging) sequence, which replaced EPI gradient reversals with multiple RF refocusing pulses. It has been shown that when the refocusing RF pulse is of high quality, i.e. sufficiently close to 180°, the magnetization phase introduced by the spatial encoding magnetic field gradient can be preserved and transferred to the following echo signal without phase rewinding. This phase encoding scheme requires blipped gradients that are identical for each echo, with low and constant amplitude, providing opportunities for high resolution imaging. We now extend the sequence to 3D pure phase encoding with low amplitude gradients. The method is compared with the Hybrid-SESPI (Spin Echo Single Point Imaging) technique to demonstrate the advantages in terms of low gradient duty cycle, compensation of concomitant magnetic field effects and minimal echo spacing, which lead to superior image quality and high resolution. The 3D imaging method was then applied with a parallel plate resonator RF probe, achieving a nominal spatial resolution of 17 μm in one dimension in the 3D image, requiring a maximum gradient amplitude of only 5.8 Gauss/cm.
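As a rough check of the resolution/gradient trade-off stated above, resolution scales as 1/k_max, with k_max the gyromagnetic ratio times the time integral of the gradient. The sketch below uses the 5.8 Gauss/cm figure from the abstract, but the total accumulated encoding time and the factor-of-two resolution convention are assumptions for illustration only.

```python
# Back-of-the-envelope resolution estimate from low-amplitude blipped encoding.
GAMMA_BAR = 4258.0        # 1H gyromagnetic ratio [Hz/Gauss]

def nominal_resolution_cm(grad_gauss_per_cm, total_encode_time_s):
    k_max = GAMMA_BAR * grad_gauss_per_cm * total_encode_time_s  # cycles/cm
    return 1.0 / (2.0 * k_max)      # assuming k-space spans -k_max .. +k_max

dx = nominal_resolution_cm(grad_gauss_per_cm=5.8,       # value quoted in the abstract
                           total_encode_time_s=0.012)   # assumed, accumulated over many blips
print(f"nominal resolution ~ {dx * 1e4:.1f} um")        # on the order of 17 um
```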
Code of Federal Regulations, 2010 CFR
2010-07-01
... current and include enough qualified sources to ensure maximum open and free competition. Recipients must... transactions in a manner providing maximum full and open competition. (a) Restrictions on competition... bonding requirements; (3) Noncompetitive pricing practices between firms or between affiliated companies...
[Estimation of Maximum Entrance Skin Dose during Cerebral Angiography].
Kawauchi, Satoru; Moritake, Takashi; Hayakawa, Mikito; Hamada, Yusuke; Sakuma, Hideyuki; Yoda, Shogo; Satoh, Masayuki; Sun, Lue; Koguchi, Yasuhiro; Akahane, Keiichi; Chida, Koichi; Matsumaru, Yuji
2015-09-01
Using a radio-photoluminescence glass dosimeter, we measured the entrance skin dose (ESD) in 46 cases and analyzed the correlations between maximum ESD and angiographic parameters [total fluoroscopic time (TFT), number of digital subtraction angiography (DSA) frames, air kerma at the interventional reference point (AK), and dose-area product (DAP)] to estimate the maximum ESD in real time. Mean (± standard deviation) maximum ESD, dose of the right lens, and dose of the left lens were 431.2 ± 135.8 mGy, 33.6 ± 15.5 mGy, and 58.5 ± 35.0 mGy, respectively. Correlation coefficients (r) between maximum ESD and TFT, number of DSA frames, AK, and DAP were r=0.379 (P<0.01), r=0.702 (P<0.001), r=0.825 (P<0.001), and r=0.709 (P<0.001), respectively. AK was identified as the most useful parameter for real-time prediction of maximum ESD. This study should contribute to the development of new diagnostic reference levels in our country.
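The real-time estimation idea above can be illustrated by regressing maximum ESD on the best-correlated parameter (AK, r = 0.825) and using the fitted line prospectively. The data in this sketch are synthetic placeholders, not the study's measurements, and a simple straight-line fit stands in for whatever calibration the authors would use.

```python
# Linear-regression sketch: predict maximum ESD from displayed air kerma (AK).
import numpy as np

rng = np.random.default_rng(1)
air_kerma = rng.uniform(200, 1500, 46)                # mGy, synthetic
max_esd = 0.4 * air_kerma + rng.normal(0, 60, 46)     # mGy, synthetic

slope, intercept = np.polyfit(air_kerma, max_esd, 1)

def predict_max_esd(ak_mGy):
    """Real-time estimate of maximum entrance skin dose from the displayed AK."""
    return slope * ak_mGy + intercept

print(f"ESD ~ {slope:.2f} * AK + {intercept:.1f} mGy")
```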
Optimal Diabatic Dynamics of Majorana-based Topological Qubits
NASA Astrophysics Data System (ADS)
Seradjeh, Babak; Rahmani, Armin; Franz, Marcel
In topological quantum computing, unitary operations on qubits are performed by adiabatic braiding of non-Abelian quasiparticles such as Majorana zero modes and are protected from local environmental perturbations. This scheme requires slow operations. Using Pontryagin's maximum principle, we show here that the same quantum gates can be implemented in much shorter times through optimal diabatic pulses. While our fast diabatic gates do not enjoy topological protection, they provide significant practical advantages due to their optimal speed and remarkable robustness to calibration errors and noise. NSERC, CIfAR, NSF DMR-1350663, BSF 2014345.
Natural and Induced Environment in Low Earth Orbit
NASA Technical Reports Server (NTRS)
Wilson, John W.; Badavi, Francis F.; Kim, Myung-Hee Y.; Clowdsley, Martha S.; Heinbockel, John H.; Cucinotta, Francis A.; Badhwar, Gautam D.; Atwell, William; Huston, Stuart L.
2002-01-01
The long-term exposure of astronauts on the developing International Space Station (ISS) requires an accurate knowledge of the internal exposure environment for human risk assessment and other onboard processes. The natural environment is moderated by the solar wind which varies over the solar cycle. The neutron environment within the Shuttle in low Earth orbit has two sources. A time dependent model for the ambient environment is used to evaluate the natural and induced environment. The induced neutron environment is evaluated using measurements on STS-31 and STS-36 near the 1990 solar maximum.
Magnetic refrigeration using flux compression in superconductors
NASA Technical Reports Server (NTRS)
Israelsson, U. E.; Strayer, D. M.; Jackson, H. W.; Petrac, D.
1990-01-01
The feasibility of using flux compression in high-temperature superconductors to produce the large time-varying magnetic fields required in a field cycled magnetic refrigerator operating between 20 K and 4 K is presently investigated. This paper describes the refrigerator concept and lists limitations and advantages in comparison with conventional refrigeration techniques. The maximum fields obtainable by flux compression in high-temperature superconductor materials, as presently prepared, are too low to serve in such a refrigerator. However, reports exist of critical current values that are near usable levels for flux pumps in refrigerator applications.
A study of optical scattering methods in laboratory plasma diagnosis
NASA Technical Reports Server (NTRS)
Phipps, C. R., Jr.
1972-01-01
Electron velocity distributions are deduced along axes parallel and perpendicular to the magnetic field in a pulsed, linear Penning discharge in hydrogen by means of a laser Thomson scattering experiment. Results obtained are numerical averages of many individual measurements made at specific space-time points in the plasma evolution. Because of the high resolution in k-space and the relatively low maximum electron density (2 × 10¹³ cm⁻³), special techniques were required to obtain measurable scattering signals. These techniques are discussed and experimental results are presented.
Design and Implementation of the PALM-3000 Real-Time Control System
NASA Technical Reports Server (NTRS)
Truong, Tuan N.; Bouchez, Antonin H.; Burruss, Rick S.; Dekany, Richard G.; Guiwits, Stephen R.; Roberts, Jennifer E.; Shelton, Jean C.; Troy, Mitchell
2012-01-01
This paper reflects, from a computational perspective, on the experience gathered in designing and implementing real-time control of the PALM-3000 adaptive optics system currently in operation at the Palomar Observatory. We review the algorithms that serve as functional requirements driving the architecture developed, and describe key design issues and solutions that contributed to the system's low compute latency. Additionally, we describe an implementation of dense matrix-vector multiplication for wavefront reconstruction that exceeds 95% of the maximum sustained achievable bandwidth on an NVIDIA GeForce 8800GTX GPU.
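The latency-critical operation described above is, computationally, one dense matrix-vector product per frame: actuator commands = R @ slopes. The sketch below only illustrates that operation with NumPy on the CPU; the matrix shapes and the timing loop are assumed placeholders, not the paper's CUDA implementation or the actual PALM-3000 dimensions.

```python
# Dense matrix-vector wavefront reconstruction sketch (CPU/NumPy placeholder).
import numpy as np
import time

n_slopes, n_actuators = 4096, 3388          # assumed sizes
R = np.random.randn(n_actuators, n_slopes).astype(np.float32)   # reconstructor matrix
slopes = np.random.randn(n_slopes).astype(np.float32)

t0 = time.perf_counter()
for _ in range(1000):                        # emulate 1000 frames
    commands = R @ slopes                    # the latency-critical MVM
dt_us = (time.perf_counter() - t0) / 1000 * 1e6
print(f"~{dt_us:.0f} us per reconstruction on this machine")
```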
Kim, Sang M; Brannan, Kevin M; Zeckoski, Rebecca W; Benham, Brian L
2014-01-01
The objective of this study was to develop bacteria total maximum daily loads (TMDLs) for the Hardware River watershed in the Commonwealth of Virginia, USA. The TMDL program is an integrated watershed management approach required by the Clean Water Act. The TMDLs were developed to meet Virginia's water quality standard for bacteria at the time, which stated that the calendar-month geometric mean concentration of Escherichia coli should not exceed 126 cfu/100 mL, and that no single sample should exceed a concentration of 235 cfu/100 mL. The bacteria impairment TMDLs were developed using the Hydrological Simulation Program-FORTRAN (HSPF). The hydrology and water quality components of HSPF were calibrated and validated using data from the Hardware River watershed to ensure that the model adequately simulated runoff and bacteria concentrations. The calibrated and validated HSPF model was used to estimate the contributions from the various bacteria sources in the Hardware River watershed to the in-stream concentration. Bacteria loads were estimated through an extensive source characterization process. Simulation results for existing conditions indicated that the majority of the bacteria came from livestock and wildlife direct deposits and pervious lands. Different source reduction scenarios were evaluated to identify scenarios that meet both the geometric mean and single sample maximum E. coli criteria with zero violations. The resulting scenarios required extreme and impractical reductions from livestock and wildlife sources. Results from studies similar to this across Virginia partially contributed to a reconsideration of the standard's applicability to TMDL development.
Analyzing Quadratic Unconstrained Binary Optimization Problems Via Multicommodity Flows
Wang, Di; Kleinberg, Robert D.
2009-01-01
Quadratic Unconstrained Binary Optimization (QUBO) problems concern the minimization of quadratic polynomials in n {0, 1}-valued variables. These problems are NP-complete, but prior work has identified a sequence of polynomial-time computable lower bounds on the minimum value, denoted by C2, C3, C4,…. It is known that C2 can be computed by solving a maximum-flow problem, whereas the only previously known algorithms for computing Ck (k > 2) require solving a linear program. In this paper we prove that C3 can be computed by solving a maximum multicommodity flow problem in a graph constructed from the quadratic function. In addition to providing a lower bound on the minimum value of the quadratic function on {0, 1}n, this multicommodity flow problem also provides some information about the coordinates of the point where this minimum is achieved. By looking at the edges that are never saturated in any maximum multicommodity flow, we can identify relational persistencies: pairs of variables that must have the same or different values in any minimizing assignment. We furthermore show that all of these persistencies can be detected by solving single-commodity flow problems in the same network. PMID:20161596
Atmospheric cloud physics laboratory project study
NASA Technical Reports Server (NTRS)
Schultz, W. E.; Stephen, L. A.; Usher, L. H.
1976-01-01
Engineering studies were performed for the Zero-G Cloud Physics Experiment liquid cooling and air pressure control systems. A total of four concepts for the liquid cooling system was evaluated, two of which were found to closely approach the systems requirements. Thermal insulation requirements, system hardware, and control sensor locations were established. The reservoir sizes and initial temperatures were defined as well as system power requirements. In the study of the pressure control system, fluid analyses by the Atmospheric Cloud Physics Laboratory were performed to determine flow characteristics of various orifice sizes, vacuum pump adequacy, and control systems performance. System parameters predicted in these analyses as a function of time include the following for various orifice sizes: (1) chamber and vacuum pump mass flow rates, (2) the number of valve openings or closures, (3) the maximum cloud chamber pressure deviation from the allowable, and (4) cloud chamber and accumulator pressure.
Heliocentric phasing performance of electric sail spacecraft
NASA Astrophysics Data System (ADS)
Mengali, Giovanni; Quarta, Alessandro A.; Aliasi, Generoso
2016-10-01
We investigate the heliocentric in-orbit repositioning problem of a spacecraft propelled by an Electric Solar Wind Sail. Given an initial circular parking orbit, we look for the heliocentric trajectory that minimizes the time required for the spacecraft to change its azimuthal position along the initial orbit by a (prescribed) phasing angle. The in-orbit repositioning problem can be solved using either a drift ahead or a drift behind maneuver and, in general, the flight times for the two cases are different for a given value of the phasing angle. However, there exists a critical azimuthal position, found numerically, which uniquely determines whether a drift ahead or a drift behind trajectory is superior in terms of the flight time required to complete the maneuver. We solve the optimization problem using an indirect approach for different values of both the spacecraft maximum propulsive acceleration and the phasing angle, and the solution is then specialized to a repositioning problem along the Earth's heliocentric orbit. Finally, we use the simulation results to obtain a first-order estimate of the minimum flight times for a scientific mission towards triangular Lagrangian points of the Sun-[Earth+Moon] system.
Robustness Analysis and Optimally Robust Control Design via Sum-of-Squares
NASA Technical Reports Server (NTRS)
Dorobantu, Andrei; Crespo, Luis G.; Seiler, Peter J.
2012-01-01
A control analysis and design framework is proposed for systems subject to parametric uncertainty. The underlying strategies are based on sum-of-squares (SOS) polynomial analysis and nonlinear optimization to design an optimally robust controller. The approach determines a maximum uncertainty range for which the closed-loop system satisfies a set of stability and performance requirements. These requirements, defined as inequality constraints on several metrics, are restricted to polynomial functions of the uncertainty. To quantify robustness, SOS analysis is used to prove that the closed-loop system complies with the requirements for a given uncertainty range. The maximum uncertainty range, calculated by assessing a sequence of increasingly larger ranges, serves as a robustness metric for the closed-loop system. To optimize the control design, nonlinear optimization is used to enlarge the maximum uncertainty range by tuning the controller gains. Hence, the resulting controller is optimally robust to parametric uncertainty. This approach balances the robustness margins corresponding to each requirement in order to maximize the aggregate system robustness. The proposed framework is applied to a simple linear short-period aircraft model with uncertain aerodynamic coefficients.
Flint, L.E.; Flint, A.L.
2008-01-01
Stream temperature is an important component of salmonid habitat and is often above levels suitable for fish survival in the Lower Klamath River in northern California. The objective of this study was to provide boundary conditions for models that are assessing stream temperature on the main stem for the purpose of developing strategies to manage stream conditions using Total Maximum Daily Loads. For model input, hourly stream temperatures for 36 tributaries were estimated for 1 Jan. 2001 through 31 Oct. 2004. A basin-scale approach incorporating spatially distributed energy balance data was used to estimate the stream temperatures with measured air temperature and relative humidity data and simulated solar radiation, including topographic shading and corrections for cloudiness. Regression models were developed on the basis of available stream temperature data to predict temperatures for unmeasured periods of time and for unmeasured streams. The most significant factor in matching measured minimum and maximum stream temperatures was the seasonality of the estimate. Adding minimum and maximum air temperature to the regression model improved the estimate, and air temperature data over the region are available and easily distributed spatially. The addition of simulated solar radiation and vapor saturation deficit to the regression model significantly improved predictions of maximum stream temperature but was not required to predict minimum stream temperature. The average SE in estimated maximum daily stream temperature for the individual basins was 0.9 ± 0.6°C at the 95% confidence interval. Copyright © 2008 by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America. All rights reserved.
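The regression approach described above can be sketched as an ordinary least-squares fit of daily maximum stream temperature on seasonality, air temperature, solar radiation, and vapor saturation deficit. All data and coefficients in the sketch are synthetic placeholders, not the Klamath tributary values, and the predictor set is an assumption loosely based on the abstract.

```python
# OLS sketch: maximum stream temperature from seasonal and meteorological predictors.
import numpy as np

rng = np.random.default_rng(2)
n = 365
doy = np.arange(n)
season = np.sin(2 * np.pi * (doy - 100) / 365.0)                 # seasonality term
air_tmax = 15 + 10 * season + rng.normal(0, 2, n)                # degC, synthetic
solar = 250 + 150 * season + rng.normal(0, 30, n)                # W/m2, synthetic
vpd = np.clip(0.8 + 0.6 * season + rng.normal(0, 0.2, n), 0.05, None)  # kPa, synthetic

stream_tmax = 10 + 6 * season + 0.3 * air_tmax + 0.005 * solar + rng.normal(0, 0.7, n)

X = np.column_stack([np.ones(n), season, air_tmax, solar, vpd])
coef, *_ = np.linalg.lstsq(X, stream_tmax, rcond=None)
resid = stream_tmax - X @ coef
print("coefficients:", np.round(coef, 3))
print("SE of estimate:", round(resid.std(ddof=X.shape[1]), 2), "degC")
```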
Automated Escape Guidance Algorithms for An Escape Vehicle
NASA Technical Reports Server (NTRS)
Flanary, Ronald; Hammen, David; Ito, Daigoro; Rabalais, Bruce; Rishikof, Brian; Siebold, Karl
2002-01-01
An escape vehicle was designed to provide an emergency evacuation for crew members living on a space station. For maximum escape capability, the escape vehicle needs to have the ability to safely evacuate a station in a contingency scenario such as an uncontrolled (e.g., tumbling) station. This emergency escape sequence will typically be divided into three events: the first separation event (SEP1), the navigation reconstruction event, and the second separation event (SEP2). SEP1 is responsible for taking the spacecraft from its docking port to a distance greater than the maximum radius of the rotating station. The navigation reconstruction event takes place prior to the SEP2 event and establishes the orbital state to within the tolerance limits necessary for SEP2. The SEP2 event calculates and performs an avoidance burn to prevent station recontact during the next several orbits. This paper presents the tools and results for the whole separation sequence with an emphasis on the two separation events. The first challenge includes collision avoidance during the escape sequence while the station is in an uncontrolled rotational state, with rotation rates of up to 2 degrees per second. The task of avoiding a collision may require the use of the vehicle's de-orbit propulsion system for maximum thrust and minimum dwell time within the vicinity of the station. The thrust of the propulsion system is in a single direction, and can be controlled only by the attitude of the spacecraft. Escape algorithms based on a look-up table or analytical guidance can be implemented since the rotation rate and the angular momentum vector can be sensed onboard and a priori knowledge of the position and relative orientation is available. In addition, crew intervention has been provided for in the event of unforeseen obstacles in the escape path. The purpose of the SEP2 burn is to avoid re-contact with the station over an extended period of time. Performing this maneuver properly requires knowledge of the orbital state, which is obtained during the navigation state reconstruction event. Since the direction of the delta-v of the SEP1 maneuver is a random variable with respect to the Local Vertical Local Horizontal (LVLH) coordinate system, calculating the required SEP2 burn is a challenge. This problem was solved using a neural network as a model-free function approximation technique.
NASA Astrophysics Data System (ADS)
Alsing, Justin; Silva, Hector O.; Berti, Emanuele
2018-07-01
We infer the mass distribution of neutron stars in binary systems using a flexible Gaussian mixture model and use Bayesian model selection to explore evidence for multimodality and a sharp cut-off in the mass distribution. We find overwhelming evidence for a bimodal distribution, in agreement with previous literature, and report for the first time positive evidence for a sharp cut-off at a maximum neutron star mass. We measure the maximum mass to be 2.0 M⊙ < m_max < 2.2 M⊙ (68 per cent), 2.0 M⊙ < m_max < 2.6 M⊙ (90 per cent), and evidence for a cut-off is robust against the choice of model for the mass distribution and to removing the most extreme (highest mass) neutron stars from the data set. If this sharp cut-off is interpreted as the maximum stable neutron star mass allowed by the equation of state of dense matter, our measurement puts constraints on the equation of state. For a set of realistic equations of state that support >2 M⊙ neutron stars, our inference of m_max is able to distinguish between models at odds ratios of up to 12:1, whilst under a flexible piecewise polytropic equation-of-state model our maximum mass measurement improves constraints on the pressure at 3-7× the nuclear saturation density by ~30-50 per cent compared to simply requiring m_max > 2 M⊙. We obtain a lower bound on the maximum sound speed attained inside the neutron star of c_s^max > 0.63c (99.8 per cent), ruling out c_s^max < c/√3 at high significance. Our constraints on the maximum neutron star mass strengthen the case for neutron star-neutron star mergers as the primary source of short gamma-ray bursts.
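A minimal illustration of the first step of that analysis is a two-component Gaussian mixture fit to a set of neutron-star masses. The sketch below uses synthetic draws from a bimodal distribution, not the published catalogue, and it deliberately omits the measurement errors, the truncation at m_max, and the Bayesian model selection that the paper performs; it only shows how bimodality would show up in the fitted component means.

```python
# Point-estimate Gaussian mixture fit to synthetic neutron-star masses.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
masses = np.concatenate([rng.normal(1.37, 0.07, 40),    # narrow low-mass peak (synthetic)
                         rng.normal(1.80, 0.20, 20)])   # broader high-mass peak (synthetic)
masses = masses[masses < 2.1]                           # crude stand-in for a sharp cut-off

gmm = GaussianMixture(n_components=2, random_state=0).fit(masses.reshape(-1, 1))
print("component means [Msun]:", np.round(gmm.means_.ravel(), 2))
print("weights:", np.round(gmm.weights_, 2))
```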
Estimating the maximum potential revenue for grid connected electricity storage :
DOE Office of Scientific and Technical Information (OSTI.GOV)
Byrne, Raymond Harry; Silva Monroy, Cesar Augusto.
2012-12-01
The valuation of an electricity storage device is based on the expected future cash flow generated by the device. Two potential sources of income for an electricity storage system are energy arbitrage and participation in the frequency regulation market. Energy arbitrage refers to purchasing (storing) energy when electricity prices are low, and selling (discharging) energy when electricity prices are high. Frequency regulation is an ancillary service geared towards maintaining system frequency, and is typically procured by the independent system operator in some type of market. This paper outlines the calculations required to estimate the maximum potential revenue from participating in these two activities. First, a mathematical model is presented for the state of charge as a function of the storage device parameters and the quantities of electricity purchased/sold as well as the quantities offered into the regulation market. Using this mathematical model, we present a linear programming optimization approach to calculating the maximum potential revenue from an electricity storage device. The calculation of the maximum potential revenue is critical in developing an upper bound on the value of storage, as a benchmark for evaluating potential trading strategies, and a tool for capital finance risk assessment. Then, we use historical California Independent System Operator (CAISO) data from 2010-2011 to evaluate the maximum potential revenue from the Tehachapi wind energy storage project, an American Recovery and Reinvestment Act of 2009 (ARRA) energy storage demonstration project. We investigate the maximum potential revenue from two different scenarios: arbitrage only and arbitrage combined with the regulation market. Our analysis shows that participation in the regulation market produces four times the revenue compared to arbitrage in the CAISO market using 2010 and 2011 data. Then we evaluate several trading strategies to illustrate how they compare to the maximum potential revenue benchmark. We conclude with a sensitivity analysis with respect to key parameters.
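The arbitrage-only part of the linear program described above can be sketched compactly: choose hourly charge and discharge power to maximize revenue subject to state-of-charge limits. Prices, efficiencies, and device ratings below are hypothetical placeholders (not CAISO or Tehachapi data), and the regulation-market co-optimization is omitted.

```python
# Arbitrage-only LP sketch for the maximum-revenue calculation.
import numpy as np
from scipy.optimize import linprog

T = 24
rng = np.random.default_rng(4)
price = 30 + 20 * np.sin(2 * np.pi * (np.arange(T) - 6) / 24) + rng.normal(0, 3, T)  # $/MWh, synthetic

P_MAX, E_MAX, SOC0 = 10.0, 40.0, 20.0     # MW, MWh, MWh (assumed device ratings)
eta_c = eta_d = 0.92                      # one-way efficiencies (assumed)

# Decision vector x = [charge_0..charge_{T-1}, discharge_0..discharge_{T-1}] in MW.
cost = np.concatenate([price, -price])    # minimize (cost of charging) - (discharge revenue)
L = np.tril(np.ones((T, T)))              # cumulative-sum operator (1-hour steps)
soc_op = np.hstack([eta_c * L, -(1.0 / eta_d) * L])   # SOC_t - SOC0
A_ub = np.vstack([soc_op, -soc_op])       # enforce 0 <= SOC_t <= E_MAX
b_ub = np.concatenate([np.full(T, E_MAX - SOC0), np.full(T, SOC0)])

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, P_MAX)] * (2 * T))
print(f"maximum arbitrage revenue over the day: ${-res.fun:,.0f}")
```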
NASA Astrophysics Data System (ADS)
Nelke, M.; Selker, J. S.; Udell, C.
2017-12-01
Reliable automatic water samplers allow repetitive sampling of various water sources over long periods of time without requiring a researcher on site, reducing human error as well as the monetary and time costs of traveling to the field, particularly when the scale of the sample period is hours or days. The high fixed cost of buying a commercial sampler with little customizability can be a barrier to research requiring repetitive samples, such as the analysis of septic water pre- and post-treatment. DIY automatic samplers proposed in the past sacrifice maximum volume, customizability, or scope of applications, among other features, in exchange for a lower net cost. The purpose of this project was to develop a low-cost, highly customizable, robust water sampler that is capable of sampling many sources of water for various analytes. A lightweight aluminum-extrusion frame was designed and assembled, chosen for its mounting system, strength, and low cost. Water is drawn from two peristaltic pumps through silicone tubing and directed into 24 foil-lined 250mL bags using solenoid valves. A programmable Arduino Uno microcontroller connected to a circuit board communicates with a battery operated real-time clock, initiating sampling stages. Period and volume settings are programmable in-field by the user via serial commands. The OPEnSampler is an open design, allowing the user to decide what components to use and the modular theme of the frame allows fast mounting of new manufactured or 3D printed components. The 24-bag system weighs less than 10kg and the material cost is under $450. Up to 6L of sample water can be drawn at a rate of 100mL/minute in either direction. Faster flowrates are achieved by using more powerful peristaltic pumps. Future design changes could allow a greater maximum volume by filling the unused space with more containers and adding GSM communications to send real time status information.
Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors
Langbein, John O.
2017-01-01
Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi:10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
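The covariance-based MLE idea can be sketched for the simplest power-law case, a random walk (spectral index 2) plus white noise: build the data covariance from the two components and maximize the Gaussian log-likelihood over their amplitudes. This is only an illustration under those assumptions; the paper's method handles general power-law indices, missing data, and the efficiency improvements discussed above. All data here are synthetic.

```python
# Direct (brute-force) MLE sketch for white + random-walk noise amplitudes.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
N = 300
idx = np.arange(1, N + 1)
K = np.minimum.outer(idx, idx).astype(float)       # random-walk covariance: K_ij = min(i, j)
true_white, true_rw = 1.0, 0.2
r = true_white * rng.normal(size=N) + true_rw * np.cumsum(rng.normal(size=N))

def neg_loglik(log_params):
    a, b = np.exp(log_params)                      # white and random-walk amplitudes (>0)
    C = a**2 * np.eye(N) + b**2 * K
    _, logdet = np.linalg.slogdet(C)
    quad = r @ np.linalg.solve(C, r)
    return 0.5 * (N * np.log(2 * np.pi) + logdet + quad)

fit = minimize(neg_loglik, x0=np.log([0.5, 0.5]), method="Nelder-Mead")
print("estimated amplitudes (white, random walk):", np.round(np.exp(fit.x), 3))
```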
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-07
... [Docket No. NHTSA-2012-0131; Notice 1] RIN 2127-AL16 Civil Penalties AGENCY: National Highway Traffic... proposes to increase the maximum civil penalty amounts for violations of motor vehicle safety requirements... and consumer information provisions. Specifically, this proposes increases in maximum civil penalty...
NASA Astrophysics Data System (ADS)
Brenders, A. J.; Pratt, R. G.
2007-01-01
We provide a series of numerical experiments designed to test waveform tomography under (i) a reduction in the number of input data frequency components (`efficient' waveform tomography), (ii) sparse spatial subsampling of the input data and (iii) an increase in the minimum data frequency used. These results extend the waveform tomography results of a companion paper, using the same third-party, 2-D, wide-angle, synthetic viscoelastic seismic data, computed in a crustal geology model 250 km long and 40 km deep, with heterogeneous P-velocity, S-velocity, density and Q-factor structure. Accurate velocity models were obtained using efficient waveform tomography and only four carefully selected frequency components of the input data: 0.8, 1.7, 3.6 and 7.0 Hz. This strategy avoids the spectral redundancy present in `full' waveform tomography, and yields results that are comparable with those in the companion paper for an 88 per cent decrease in total computational cost. Because we use acoustic waveform tomography, the results further justify the use of the acoustic wave equation in calculating P-wave velocity models from viscoelastic data. The effect of using sparse survey geometries with efficient waveform tomography were investigated for both increased receiver spacing, and increased source spacing. Sampling theory formally requires spatial sampling at maximum interval of one half-wavelength (2.5 km at 0.8 Hz): For data with receivers every 0.9 km (conforming to this criterion), artefacts in the tomographic images were still minimal when the source spacing was as large as 7.6 km (three times the theoretical maximum). Larger source spacings led to an unacceptable degradation of the results. When increasing the starting frequency, image quality was progressively degraded. Acceptable image quality within the central portion of the model was nevertheless achieved using starting frequencies up to 3.0 Hz. At 3.0 Hz the maximum theoretical sample interval is reduced to 0.67 km due to the decreased wavelengths; the available sources were spaced every 5.0 km (more than seven times the theoretical maximum), and receivers were spaced every 0.9 km (1.3 times the theoretical maximum). Higher starting frequencies than 3.0 Hz again led to unacceptable degradation of the results.
Wanyonyi, Kristina L; Radford, David R; Harper, Paul R; Gallagher, Jennifer E
2015-09-15
In primary care dentistry, strategies to reconfigure the traditional boundaries of various dental professional groups by task sharing and role substitution have been encouraged in order to meet changing oral health needs. The aim of this research was to investigate the potential for skill mix use in primary dental care in England based on the undergraduate training experience in a primary care team training centre for dentists and mid-level dental providers. An operational research model and four alternative scenarios to test the potential for skill mix use in primary care in England were developed, informed by the model of care at a primary dental care training centre in the south of England, professional policy including scope of practice and contemporary evidence-based preventative practice. The model was developed in Excel and drew on published national timings and salary costs. The scenarios included the following: "No Skill Mix", "Minimal Direct Access", "More Prevention" and "Maximum Delegation". The scenario outputs comprised clinical time, workforce numbers and salary costs required for state-funded primary dental care in England. The operational research model suggested that 73% of clinical time in England's state-funded primary dental care in 2011/12 was spent on tasks that may be delegated to dental care professionals (DCPs), and 45- to 54-year-old patients received the most clinical time overall. Using estimated National Health Service (NHS) clinical working patterns, the model suggested alternative NHS workforce numbers and salary costs to meet the dental demand based on each developed scenario. For scenario 1:"No Skill Mix", the dentist-only scenario, 81% of the dentists currently registered in England would be required to participate. In scenario 2: "Minimal Direct Access", where 70% of examinations were delegated and the primary care training centre delegation patterns for other treatments were practised, 40% of registered dentists and eight times the number of dental therapists currently registered would be required; this would save 38% of current salary costs cf. "No Skill Mix". Scenario 3: "More Prevention", that is, the current model with no direct access and increasing fluoride varnish from 13.1% to 50% and maintaining the same model of delegation as scenario 2 for other care, would require 57% of registered dentists and 4.7 times the number of dental therapists. It would achieve a 1% salary cost saving cf. "No Skill Mix". Scenario 4 "Maximum Delegation" where all care within dental therapists' jurisdiction is delegated at 100%, together with 50% of restorations and radiographs, suggested that only 30% of registered dentists would be required and 10 times the number of dental therapists registered; this scenario would achieve a 52% salary cost saving cf. "No Skill Mix". Alternative scenarios based on wider expressed treatment need in national primary dental care in England, changing regulations on the scope of practice and increased evidence-based preventive practice suggest that the majority of care in primary dental practice may be delegated to dental therapists, and there is potential time and salary cost saving if the majority of diagnostic tasks and prevention are delegated. However, this would require an increase in trained DCPs, including role enhancement, as part of rebalancing the dental workforce.
Farzandipour, Mehrdad; Meidani, Zahra; Riazi, Hossein; Sadeqi Jabali, Monireh
2016-12-01
Considering the integral role of understanding users' requirements in information system success, this research aimed to determine functional requirements of nursing information systems through a national survey. The Delphi technique was applied to conduct this study through three phases: a focus group method, a modified Delphi technique and a classic Delphi technique. A cross-sectional study was conducted to evaluate the proposed requirements within 15 general hospitals in Iran. Forty-three of 76 approved requirements were clinical, and 33 were administrative ones. Nurses' mean agreements for clinical requirements were higher than those for administrative requirements; minimum and maximum means of clinical requirements were 3.3 and 3.88, respectively. Minimum and maximum means of administrative requirements were 3.1 and 3.47, respectively. Research findings indicated that those information system requirements that support nurses in doing tasks including direct care, medicine prescription, patient treatment management, and patient safety have been the target of special attention. As nurses' requirements deal directly with patient outcome and patient safety, nursing information systems requirements should not only address automation but also nurses' tasks and work processes based on work analysis.
NASA Technical Reports Server (NTRS)
Baumeister, K. J.
1979-01-01
A time dependent numerical solution of the linearized continuity and momentum equation was developed for sound propagation in a two dimensional straight hard or soft wall duct with a sheared mean flow. The time dependent governing acoustic difference equations and boundary conditions were developed along with a numerical determination of the maximum stable time increments. A harmonic noise source radiating into a quiescent duct was analyzed. This explicit iteration method then calculated stepwise in real time to obtain the transient as well as the steady state solution of the acoustic field. Example calculations were presented for sound propagation in hard and soft wall ducts, with no flow and plug flow. Although the problem with sheared flow was formulated and programmed, sample calculations were not examined. The time dependent finite difference analysis was found to be superior to the steady state finite difference and finite element techniques because of shorter solution times and the elimination of large matrix storage requirements.
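A one-dimensional analogue of the explicit time-marching scheme described above is sketched below: the linearized continuity and momentum equations are stepped forward with a time increment limited by the stability (CFL) condition dt <= dx/c, with a harmonic source at the inlet and a rigid termination. The geometry, grid, and source frequency are arbitrary placeholders, and the sheared-flow and soft-wall cases are omitted.

```python
# Explicit 1-D linearized-acoustics time marching with a stability-limited dt.
import numpy as np

c, rho = 343.0, 1.2            # sound speed [m/s], density [kg/m^3]
L, nx = 1.0, 200               # duct length [m], number of cells (assumed)
dx = L / nx
dt = 0.9 * dx / c              # 90% of the maximum stable time increment
freq = 1000.0                  # harmonic source frequency [Hz] (assumed)

p = np.zeros(nx)               # pressure at cell centers
u = np.zeros(nx + 1)           # velocity at cell faces (staggered grid)

for n in range(2000):
    # momentum equation: du/dt = -(1/rho) dp/dx  (interior faces; rigid ends keep u = 0)
    u[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])
    # continuity equation: dp/dt = -rho c^2 du/dx
    p -= dt * rho * c**2 / dx * (u[1:] - u[:-1])
    p[0] = np.sin(2 * np.pi * freq * (n + 1) * dt)   # harmonic pressure source at the inlet

print("max |p| after transient:", round(np.abs(p).max(), 3))
```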
Leung, Joseph; Mann, Surinder; Siao-Salera, Rodelei; Ransibrahmanakul, Kanat; Lim, Brian; Canete, Wilhelmina; Samson, Laramie; Gutierrez, Rebeck; Leung, Felix W
2011-01-01
Sedation for colonoscopy discomfort imposes a recovery-time burden on patients. The water method permitted 52% of patients accepting on-demand sedation to complete colonoscopy without sedation. On-site and at-home recovery times were not reported. To confirm the beneficial effect of the water method and document the patient recovery-time burden. Randomized, controlled trial, with single-blinded, intent-to-treat analysis. Veterans Affairs outpatient endoscopy unit. This study involved veterans accepting on-demand sedation for screening and surveillance colonoscopy. Air versus water method for colonoscope insertion. Proportion of patients completing colonoscopy without sedation, cecal intubation rate, medication requirement, maximum discomfort (0 = none, 10 = severe), procedure-related and patient-related outcomes. One hundred veterans were randomized to the air (n = 50) or water (n = 50) method. The proportions of patients who could complete colonoscopy without sedation in the water group (78%) and the air group (54%) were significantly different (P = .011, Fisher exact test), but the cecal intubation rate was similar (100% in both groups). Secondary analysis (data as Mean [SD]) shows that the water method produced a reduction in medication requirement: fentanyl, 12.5 (26.8) μg versus 24.0 (30.7) μg; midazolam, 0.5 (1.1) mg versus 0.94 (1.20) mg; maximum discomfort, 2.3 (1.7) versus 4.9 (2.0); recovery time on site, 8.4 (6.8) versus 12.3 (9.4) minutes; and recovery time at home, 4.5 (9.2) versus 10.9 (14.0) hours (P = .049; P = .06; P = .0012; P = .0199; and P = .0048, respectively, t test). Single Veterans Affairs site, predominantly male population, unblinded examiners. This randomized, controlled trial confirms the reported beneficial effects of the water method. The combination of the water method with on-demand sedation minimizes the patient recovery-time burden. ( NCT00920751.). Copyright © 2011 American Society for Gastrointestinal Endoscopy. Published by Mosby, Inc. All rights reserved.
Potvin, Jean; Goldbogen, Jeremy A; Shadwick, Robert E
2012-01-01
Bulk-filter feeding is an energetically efficient strategy for resource acquisition and assimilation, and facilitates the maintenance of extreme body size as exemplified by baleen whales (Mysticeti) and multiple lineages of bony and cartilaginous fishes. Among mysticetes, rorqual whales (Balaenopteridae) exhibit an intermittent ram filter feeding mode, lunge feeding, which requires the abandonment of body-streamlining in favor of a high-drag, mouth-open configuration aimed at engulfing a very large amount of prey-laden water. Particularly while lunge feeding on krill (the most widespread prey preference among rorquals), the effort required during engulfment involves short bouts of high-intensity muscle activity that demand high metabolic output. We used computational modeling together with morphological and kinematic data on humpback (Megaptera novaeangliae), fin (Balaenoptera physalus), blue (Balaenoptera musculus) and minke (Balaenoptera acutorostrata) whales to estimate engulfment power output in comparison with standard metrics of metabolic rate. The simulations reveal that engulfment metabolism increases across the full body size of the larger rorqual species to nearly 50 times the basal metabolic rate of terrestrial mammals of the same body mass. Moreover, they suggest that the metabolism of the largest body sizes runs with significant oxygen deficits during mouth opening, namely, 20% over maximum VO2 at the size of the largest blue whales, thus requiring significant contributions from anaerobic catabolism during a lunge and significant recovery after a lunge. Our analyses show that engulfment metabolism is also significantly lower for smaller adults, typically one-tenth to one-half VO2max. These results not only point to a physiological limit on maximum body size in this lineage, but also have major implications for the ontogeny of extant rorquals as well as the evolutionary pathways used by ancestral toothed whales to transition from hunting individual prey items to filter feeding on prey aggregations.
Usability of prostaglandin monotherapy eye droppers.
Drew, Tom; Wolffsohn, James S
2015-09-01
To determine the force needed to extract a drop from a range of current prostaglandin monotherapy eye droppers and how this related to the comfortable and maximum pressure subjects could exert. The comfortable and maximum pressure subjects could apply to an eye dropper constructed around a set of cantilevered pressure sensors and mounted above their eye was assessed in 102 subjects (mean 51.2±18.7 years), repeated three times. A load cell amplifier, mounted on a stepper motor controlled linear slide, was constructed and calibrated to test the force required to extract the first three drops from 13 multidose or unidose latanoprost medication eye droppers. The pressure that could be exerted on a dropper comfortably (25.9±17.7 Newtons, range 1.2-87.4) could be exceeded with effort (to 64.8±27.1 Newtons, range 19.9-157.8; F=19.045, p<0.001), and did not differ between repeats (F=0.609, p=0.545). Comfortable and maximum pressures exerted were correlated (r=0.618, p<0.001), neither were influenced strongly by age (r=0.138, p=0.168; r=-0.118, p=0.237, respectively), but were lower in women than in men (F=12.757, p=0.001). The force required to expel a drop differed between dropper designs (F=22.528, p<0.001), ranging from 6.4 Newtons to 23.4 Newtons. The force needed to exert successive drops increased (F=36.373, p<0.001) and storing droppers in the fridge further increased the force required (F=7.987, p=0.009). Prostaglandin monotherapy droppers for glaucoma treatment vary in their resistance to extract a drop and with some a drop could not be comfortably achieved by half the population, which may affect compliance and efficacy. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Chantre, Guillermo R.; Batlla, Diego; Sabbatini, Mario R.; Orioli, Gustavo
2009-01-01
Background and Aims Models based on thermal-time approaches have been a useful tool for characterizing and predicting seed germination and dormancy release in relation to time and temperature. The aims of the present work were to evaluate the relative accuracy of different thermal-time approaches for the description of germination in Lithospermum arvense and to develop an after-ripening thermal-time model for predicting seed dormancy release. Methods Seeds were dry-stored at constant temperatures of 5, 15 or 24 °C for up to 210 d. After different storage periods, batches of 50 seeds were incubated at eight constant temperature regimes of 5, 8, 10, 13, 15, 17, 20 or 25 °C. Experimentally obtained cumulative-germination curves were analysed using a non-linear regression procedure to obtain optimal population thermal parameters for L. arvense. Changes in these parameters were described as a function of after-ripening thermal-time and storage temperature. Key Results The most accurate approach for simulating the thermal-germination response of L. arvense was achieved by assuming a normal distribution of both base and maximum germination temperatures. The results contradict the widely accepted assumption of a single Tb value for the entire seed population. The after-ripening process was characterized by a progressive increase in the mean maximum germination temperature and a reduction in the thermal-time requirements for germination at sub-optimal temperatures. Conclusions The after-ripening thermal-time model developed here gave an acceptable description of the observed field emergence patterns, thus indicating its usefulness as a predictive tool to enhance weed management tactics. PMID:19332426
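The sub-optimal-temperature thermal-time model underlying that analysis can be sketched directly: each seed germinates once it accumulates a fixed thermal time above its base temperature Tb, and Tb is normally distributed across the population, so the germinated fraction at time t is the fraction of seeds whose Tb lies below T minus theta/t. The parameter values below are illustrative assumptions, not the fitted values for L. arvense, and the supra-optimal (maximum-temperature) branch of the model is omitted.

```python
# Thermal-time germination sketch with a normally distributed base temperature.
import numpy as np
from scipy.stats import norm

THETA_T = 80.0               # thermal time to germination [degC * day] (assumed)
TB_MEAN, TB_SD = 2.0, 1.5    # base-temperature distribution [degC] (assumed)

def germinated_fraction(t_days, temp_c):
    """Cumulative germination after t_days of incubation at temp_c."""
    t = np.asarray(t_days, dtype=float)
    tb_required = temp_c - THETA_T / t     # a seed germinates if its Tb <= this value
    return norm.cdf((tb_required - TB_MEAN) / TB_SD)

for temp in (5, 10, 15):
    print(temp, "degC:", np.round(germinated_fraction([5, 10, 20, 40], temp), 2))
```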
Berg, Eric; Roncali, Emilie; Hutchcroft, Will; Qi, Jinyi; Cherry, Simon R.
2016-01-01
In a scintillation detector, the light generated in the scintillator by a gamma interaction is converted to photoelectrons by a photodetector and produces a time-dependent waveform, the shape of which depends on the scintillator properties and the photodetector response. Several depth-of-interaction (DOI) encoding strategies have been developed that manipulate the scintillator’s temporal response along the crystal length and therefore require pulse shape discrimination techniques to differentiate waveform shapes. In this work, we demonstrate how maximum likelihood (ML) estimation methods can be applied to pulse shape discrimination to better estimate deposited energy, DOI and interaction time (for time-of-flight (TOF) PET) of a gamma ray in a scintillation detector. We developed likelihood models based on either the estimated detection times of individual photoelectrons or the number of photoelectrons in discrete time bins, and applied to two phosphor-coated crystals (LFS and LYSO) used in a previously developed TOF-DOI detector concept. Compared with conventional analytical methods, ML pulse shape discrimination improved DOI encoding by 27% for both crystals. Using the ML DOI estimate, we were able to counter depth-dependent changes in light collection inherent to long scintillator crystals and recover the energy resolution measured with fixed depth irradiation (~11.5% for both crystals). Lastly, we demonstrated how the Richardson-Lucy algorithm, an iterative, ML-based deconvolution technique, can be applied to the digitized waveforms to deconvolve the photodetector’s single photoelectron response and produce waveforms with a faster rising edge. After deconvolution and applying DOI and time-walk corrections, we demonstrated a 13% improvement in coincidence timing resolution (from 290 to 254 ps) with the LFS crystal and an 8% improvement (323 to 297 ps) with the LYSO crystal. PMID:27295658
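A minimal sketch of the binned-photoelectron maximum likelihood idea described above, not the authors' implementation: depth of interaction is estimated by choosing, among a few depth-dependent template waveforms, the one with the highest Poisson log-likelihood for the observed photoelectron counts per time bin. Template shapes, bin widths and photoelectron yields are assumed for illustration.

```python
# Sketch: ML pulse shape discrimination over discrete time bins.
import numpy as np

def ml_doi_estimate(observed_counts, templates):
    """Return the index of the template (reference depth) with the highest
    Poisson log-likelihood for the observed per-bin photoelectron counts.

    observed_counts : (n_bins,) array of photoelectron counts
    templates       : (n_depths, n_bins) array of expected counts per bin
    """
    lam = np.clip(templates, 1e-12, None)            # avoid log(0)
    # Poisson log-likelihood up to a constant: sum_i [k_i * log(lam_i) - lam_i]
    loglik = (observed_counts * np.log(lam) - lam).sum(axis=1)
    return int(np.argmax(loglik))

# Illustrative templates for a phosphor-coated crystal: deeper interactions are
# assumed here to carry a larger slow (delayed) component in the light pulse.
bins = np.arange(0, 200, 5)                          # ns
def template(tau_fast, tau_slow, frac_slow, total_pe=500):
    shape = (1 - frac_slow) * np.exp(-bins / tau_fast) + frac_slow * np.exp(-bins / tau_slow)
    return total_pe * shape / shape.sum()

templates = np.stack([template(40, 300, f) for f in (0.1, 0.3, 0.5)])  # 3 reference depths
observed = np.random.default_rng(1).poisson(templates[1])              # simulate a mid-depth event
print("estimated depth index:", ml_doi_estimate(observed, templates))
```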
Basler, J.A.
1983-01-01
Requirements for testing hydrologic test wells at the proposed Waste Isolation Pilot Plant near Carlsbad, New Mexico, necessitated the use of inflatable formation packers and pressure transducers. Observations during drilling and initial development indicated small formation yields which would require considerable test times by conventional open-casing methods. A pressure-monitoring system was assembled for performance evaluation utilizing commercially available components. Formation pressures were monitored with a down-hole strain-gage transducer. An inflatable packer equipped with a 1/4-inch-diameter steel tube extending through the inflation element permitted sensing formation pressures in isolated test zones. Surface components of the monitoring system provided AC transducer excitation, signal conditioning for recording directly in engineering units, and both analog and digital recording. Continuous surface monitoring of formation pressures provided a means of determining test status and projecting completion times during any phase of testing. Maximum portability was afforded by battery operation with all surface components mounted in a small self-contained trailer. (USGS)
Nakamura, N; Tanaka, M; Tsukamoto, M; Shimano, Y; Yasuhira, M; Ashikari, J
Kidneys from non-heart-beating donors are thought to be marginal, and careful evaluation is required. Analyses of pooled data are limited, and each transplant surgeon must evaluate these organs on the basis of their own experience. We analyzed the data of 589 kidneys used for kidney transplantation from 304 non-heart-beating donors from January 2002 through December 2013 at the Japan Organ Transplant Network West Japan Division. Donor age, cause of death, and a total ischemic time of more than 24 hours were factors that influenced graft survival. On the other hand, the final serum creatinine level before donation (maximum, 12.4 mg/dL), the presence and duration of anuria (maximum, 92 hours), and the presence of cannulation did not influence the graft survival rate. In multivariate Cox proportional hazards regression analysis, graft survival was significantly related to donor age (over 70 years), cause of death (atherosclerotic disease), and a total ischemic time of more than 24 hours.
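A minimal sketch of a multivariate Cox proportional hazards analysis of this kind, using the lifelines Python package on entirely synthetic data; the column names and covariate coding are assumptions, not the registry's variables.

```python
# Sketch: Cox proportional hazards model of graft survival against three binary covariates.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 589  # number of transplanted kidneys in the report

df = pd.DataFrame({
    "graft_survival_months": rng.exponential(60, n),   # follow-up / time to graft loss (synthetic)
    "graft_lost": rng.integers(0, 2, n),                # 1 = graft loss observed, 0 = censored
    "donor_age_over_70": rng.integers(0, 2, n),
    "atherosclerotic_death": rng.integers(0, 2, n),
    "ischemic_time_over_24h": rng.integers(0, 2, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="graft_survival_months", event_col="graft_lost")
cph.print_summary()   # hazard ratios and p-values for each covariate
```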
HEUS, Ronald; DENHARTOG, Emiel A.
2017-01-01
To determine safe working conditions in emergency situations at petrochemical plants in the Netherlands, a study was performed on three protective clothing combinations (operator’s, firefighter’s and aluminized). The clothing was evaluated at four heat radiation levels (3.0, 4.6, 6.3 and 10.0 kW·m−2) in standing and walking posture with a thermal manikin RadMan™. The time until the pain threshold (43°C) was reached was set as the cut-off criterion for regular activities. Operator’s clothing did not fulfil the requirements to serve as protective clothing for necessary activities at heat radiation levels above 1.5 kW·m−2, as was stated earlier by Den Hartog and Heus. With firefighter’s clothing it was possible to work for almost three minutes at levels up to 4.6 kW·m−2. At higher heat radiation levels firefighter’s clothing gave insufficient protection and aluminized clothing should be used. The maximum working time in aluminized clothing at 6.3 kW·m−2 was about five minutes. At levels of 10.0 kW·m−2 (emergency conditions) emergency responders should move immediately to lower heat radiation levels. PMID:28978903
Bhattacharya, Tinish; Gupta, Ankesh; Singh, Salam ThoiThoi; Roy, Sitikantha; Prasad, Anamika
2017-07-01
Cuff-less and non-invasive methods of blood pressure (BP) monitoring have faced many challenges, such as stability, noise, motion artefacts and the need for calibration. These factors are the major reasons why such devices do not easily gain acceptance from the medical community. One such method calculates blood pressure indirectly from the pulse transit time (PTT) obtained from the electrocardiogram (ECG) and photoplethysmogram (PPG). In this paper we propose two novel analog signal-conditioning circuits for ECG and PPG that increase stability, remove motion artefacts, remove the sinusoidal wavering of the ECG baseline due to respiration, and provide consistent digital pulses corresponding to blood pulses/heart-beats. We combined these two systems to obtain the PTT and then correlated it with the mean arterial pressure (MAP). The aim was to perform the major part of the processing in the analog domain to reduce the processing load on the microcontroller, thereby lowering cost and keeping the system simple and robust. We found from our experiments that the proposed circuits can calculate the heart rate (HR) with a maximum error of ~3.0% and the MAP with a maximum error of ~2.4% at rest and ~4.6% in motion.
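A minimal sketch of the digital back-end step implied above, assuming the analog front end already delivers clean ECG and PPG waveforms: heart rate from R-peak spacing, PTT as the delay from each R-peak to the next PPG pulse peak, and MAP from a per-subject linear calibration. The calibration constants a and b and the peak-detection settings are assumptions, not the authors' values.

```python
# Sketch: heart rate, PTT and MAP estimation from digitized ECG and PPG.
import numpy as np
from scipy.signal import find_peaks

def hr_ptt_map(ecg, ppg, fs, a=-0.3, b=160.0):
    """Return heart rate (bpm), mean PTT (s) and estimated MAP (mmHg).

    a, b are illustrative calibration constants for MAP ~ a * PTT_ms + b.
    """
    r_peaks, _ = find_peaks(ecg, distance=int(0.4 * fs), prominence=np.std(ecg))
    ppg_peaks, _ = find_peaks(ppg, distance=int(0.4 * fs), prominence=np.std(ppg))

    hr = 60.0 * fs / np.mean(np.diff(r_peaks))

    # For each R-peak, take the first PPG pulse peak that follows it.
    ptts = []
    for r in r_peaks:
        later = ppg_peaks[ppg_peaks > r]
        if later.size:
            ptts.append((later[0] - r) / fs)
    ptt = float(np.mean(ptts))

    map_est = a * (ptt * 1000.0) + b
    return hr, ptt, map_est
```

In practice a and b would be fitted per subject against cuff-based reference readings before the PTT-derived MAP could be trusted.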