DOE Office of Scientific and Technical Information (OSTI.GOV)
Bechtel Nevada
2005-09-01
A new, revised three-dimensional (3-D) hydrostratigraphic framework model for Frenchman Flat was completed in 2004. The area of interest includes Frenchman Flat, a former nuclear testing area at the Nevada Test Site, and proximal areas. Internal and external reviews of an earlier (Phase I) Frenchman Flat model recommended additional data collection to address uncertainties. Subsequently, additional data were collected for this Phase II initiative, including five new drill holes and a 3-D seismic survey.
Farnham, Irene
This Closure Report (CR) has been prepared for Corrective Action Unit (CAU) 98, Frenchman Flat, Nevada National Security Site (NNSS), Nevada. The Frenchman Flat CAU was the site of 10 underground nuclear tests, some of which have impacted groundwater near the tests. This work was performed as part of the U.S. Department of Energy, National Nuclear Security Administration Nevada Field Office (NNSA/NFO) Underground Test Area (UGTA) Activity in accordance with the Federal Facility Agreement and Consent Order (FFACO). This CR describes the selected corrective action to be implemented during closure to protect human health and the environment from the impacted groundwater.
Greg Ruskauff
This document, the Phase II Frenchman Flat transport report, presents the results of radionuclide transport simulations that incorporate groundwater radionuclide transport model statistical and structural uncertainty, and lead to forecasts of the contaminant boundary (CB) for a set of representative models from an ensemble of possible models. This work, as described in the Federal Facility Agreement and Consent Order (FFACO) Underground Test Area (UGTA) strategy (FFACO, 1996; amended 2010), forms an essential part of the technical basis for subsequent negotiation of the compliance boundary of the Frenchman Flat corrective action unit (CAU) by the Nevada Division of Environmental Protection (NDEP) and the National Nuclear Security Administration Nevada Site Office (NNSA/NSO). Underground nuclear testing via deep vertical shafts was conducted at the Nevada Test Site (NTS) from 1951 until 1992. The Frenchman Flat area, the subject of this report, was used for seven years, with 10 underground nuclear tests being conducted. The U.S. Department of Energy (DOE), NNSA/NSO initiated the UGTA Project to assess and evaluate the effects of underground nuclear tests on groundwater at the NTS and vicinity through the FFACO (1996, amended 2010). The processes that will be used to complete UGTA corrective actions are described in the "Corrective Action Strategy" in the FFACO Appendix VI, Revision No. 2 (February 20, 2008).
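The ensemble idea behind a contaminant-boundary forecast can be illustrated with a deliberately minimal Monte Carlo sketch. This is not the UGTA model: it assumes 1-D plug flow with first-order tritium decay and a hypothetical lognormal velocity distribution, and it reads a boundary distance percentile off the resulting ensemble. All parameter values are illustrative assumptions.

```python
import numpy as np

# Hypothetical parameters, for illustration only (not UGTA values).
rng = np.random.default_rng(0)
n = 10_000                       # ensemble size
half_life = 12.32                # tritium half-life, years
lam = np.log(2) / half_life      # decay constant, 1/yr
c0 = 1.0e6                       # assumed source concentration, pCi/L
mcl = 2.0e4                      # SDWA tritium MCL, pCi/L

# Uncertain groundwater velocity, lognormally distributed (assumed).
v = rng.lognormal(mean=np.log(1.0), sigma=1.0, size=n)   # m/yr

# Plug flow with decay: C(x) = c0 * exp(-lam * x / v).
# Boundary distance where concentration decays to the MCL:
x_cb = (v / lam) * np.log(c0 / mcl)

# Cap at the distance traveled within the 1,000-year compliance period.
x_cb = np.minimum(x_cb, v * 1000.0)

print(f"median CB distance:          {np.median(x_cb):8.1f} m")
print(f"95th-percentile CB distance: {np.percentile(x_cb, 95):8.1f} m")
```

The percentile chosen from the ensemble (median versus an upper percentile) is exactly the kind of representative-model choice the report negotiates; the sketch only shows the mechanics of propagating parameter uncertainty into a boundary forecast.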
External Peer Review Team Report Underground Testing Area Subproject for Frenchman Flat, Revision 1
Sam Marutzky
2010-09-01
An external peer review was conducted to review the groundwater models used in the corrective action investigation stage of the Underground Test Area (UGTA) subproject to forecast zones of potential contamination in 1,000 years for the Frenchman Flat area. The goal of the external peer review was to provide a technical evaluation of the studies and to assist in assessing the readiness of the UGTA subproject to progress to monitoring activities for further model evaluation. The external peer review team consisted of six independent technical experts with expertise in geology, hydrogeology, groundwater modeling, and radiochemistry. The peer review team was tasked with addressing the following questions:
1. Are the modeling approaches, assumptions, and model results for Frenchman Flat consistent with the use of modeling studies as a decision tool for resolution of environmental and regulatory requirements?
2. Do the modeling results adequately account for uncertainty in models of flow and transport in the Frenchman Flat hydrological setting?
   a. Are the models of sufficient scale/resolution to adequately predict contaminant transport in the Frenchman Flat setting?
   b. Have all key processes been included in the model?
   c. Are the methods used to forecast contaminant boundaries from the transport modeling studies reasonable and appropriate?
   d. Are the assessments of uncertainty technically sound and consistent with state-of-the-art approaches currently used in the hydrological sciences?
3. Are the datasets and modeling results adequate for a transition to Corrective Action Unit monitoring studies, the next stage in the UGTA strategy for Frenchman Flat?
The peer review team is of the opinion that, with some limitations, the modeling approaches, assumptions, and model results are consistent with the use of modeling studies for resolution of environmental and regulatory requirements.
The peer review team further finds that the modeling studies have accounted for uncertainty in models of flow and transport in the Frenchman Flat setting, except for a few deficiencies described in the report. Finally, the peer review team concludes that the UGTA subproject has explored a wide range of variations in assumptions, methods, and data, and should proceed to the next stage with an emphasis on monitoring studies. The corrective action strategy, as described in the Federal Facility Agreement and Consent Order, states that the groundwater flow and transport models for each corrective action unit will consider, at a minimum, the following:
• Alternative hydrostratigraphic framework models of the modeling domain.
• Uncertainty in the radiological and hydrological source terms.
• Alternative models of recharge.
• Alternative boundary conditions and groundwater flows.
• Multiple permissive sets of calibrated flow models.
• Probabilistic simulations of transport using plausible sets of alternative framework and recharge models, and boundary and groundwater flows from calibrated flow models.
• Ensembles of forecasts of contaminant boundaries.
• Sensitivity and uncertainty analyses of model outputs.
The peer review team finds that these minimum requirements have been met. While the groundwater modeling and uncertainty analyses have been quite detailed, the peer review team has identified several modeling-related issues that should be addressed in the next phase of the corrective action activities:
• Evaluating and using water-level gradients from the pilot wells at the Area 5 Radioactive Waste Management Site in model calibration.
• Re-evaluating the use of geochemical age-dating data to constrain model calibrations.
• Developing water budgets for the alluvial and upper volcanic aquifer systems in Frenchman Flat.
• Considering modeling approaches in which calculated groundwater flow directions near the water table are not predetermined by model boundary conditions and areas of recharge, all of which are very uncertain.
• Evaluating the effects of local-scale variations in hydraulic conductivity on the calculated contaminant boundaries.
• Evaluating the effects of non-steady-state flow conditions on calculated contaminant boundaries, including the effects of long-term declines in water levels, climatic change, and disruption of the groundwater system by potential earthquake faulting along either of the two major controlling fault zones in the flow system (the Cane Spring and Rock Valley faults).
• Considering the use of less-complex modeling approaches.
• Evaluating the large change in water levels in the vicinity of the Frenchman Flat playa and developing a conceptual model to explain these water-level changes.
• Developing a long-term groundwater-level monitoring program for Frenchman Flat with regular monitoring of water levels at key monitoring wells.
Despite these reservations, the peer review team strongly believes that the UGTA subproject should proceed to the next stage.
Shirley, C.; Pohlmann, K.; Andricevic, R.
1996-09-01
Geological and geophysical data are used with the sequential indicator simulation algorithm of Gomez-Hernandez and Srivastava to produce multiple, equiprobable, three-dimensional maps of informal hydrostratigraphic units at the Frenchman Flat Corrective Action Unit, Nevada Test Site. The upper 50 percent of the Tertiary volcanic lithostratigraphic column comprises the study volume. Semivariograms are modeled from indicator-transformed geophysical tool signals. Each equiprobable study volume is subdivided into discrete classes using the ISIM3D implementation of the sequential indicator simulation algorithm. Hydraulic conductivity is assigned within each class using the sequential Gaussian simulation method of Deutsch and Journel. The resulting maps show the contiguity of high and low hydraulic conductivity regions.
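The two-stage workflow described above (indicator classification of a geophysical log, then within-class conductivity assignment) can be sketched in a toy form. This sketch deliberately omits the spatial-correlation machinery (semivariogram modeling and sequential kriging) that ISIM3D and the Deutsch-Journel method provide: classes are thresholded directly and conductivities are drawn independently, with all distribution parameters assumed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Step 1: indicator transform of a synthetic geophysical tool signal.
# Values above the threshold are coded 1 ("aquifer"), else 0 ("confining").
log_signal = rng.normal(0.0, 1.0, size=200)       # synthetic tool response
threshold = np.median(log_signal)
indicator = (log_signal > threshold).astype(int)

# Step 2: assign hydraulic conductivity (m/day) within each indicator class.
# Real SIS/SGS honors spatial correlation; here each class simply draws
# from its own lognormal distribution (parameters are assumptions).
k = np.where(
    indicator == 1,
    rng.lognormal(mean=np.log(1.0), sigma=0.5, size=200),    # aquifer class
    rng.lognormal(mean=np.log(1e-3), sigma=0.5, size=200),   # confining class
)

print("fraction classed as aquifer:", indicator.mean())
print("median K, aquifer class:   ", np.median(k[indicator == 1]))
print("median K, confining class: ", np.median(k[indicator == 0]))
```

Because the classes are built from the indicator transform, the resulting conductivity field inherits the contiguity of the classified units, which is the property the mapped realizations are meant to display.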
7. VIEW OF BOOSTER STATION 3, FACING NORTHWEST - Nevada Test Site, Frenchman Flat Test Facility, Well Five Booster Stations, Intersection of 5-03 Road & Short Pole Line Road, Area 5, Frenchman Flat, Mercury, Nye County, NV
2. VIEW OF BOOSTER STATION 1, FACING NORTHEAST - Nevada Test Site, Frenchman Flat Test Facility, Well Five Booster Stations, Intersection of 5-03 Road & Short Pole Line Road, Area 5, Frenchman Flat, Mercury, Nye County, NV
11. VIEW OF BOOSTER STATION 4, FACING SOUTHEAST - Nevada Test Site, Frenchman Flat Test Facility, Well Five Booster Stations, Intersection of 5-03 Road & Short Pole Line Road, Area 5, Frenchman Flat, Mercury, Nye County, NV
10. VIEW OF BOOSTER STATION 4, FACING NORTHWEST - Nevada Test Site, Frenchman Flat Test Facility, Well Five Booster Stations, Intersection of 5-03 Road & Short Pole Line Road, Area 5, Frenchman Flat, Mercury, Nye County, NV
1. VIEW OF BOOSTER STATION 1, FACING SOUTHWEST - Nevada Test Site, Frenchman Flat Test Facility, Well Five Booster Stations, Intersection of 5-03 Road & Short Pole Line Road, Area 5, Frenchman Flat, Mercury, Nye County, NV
8. VIEW OF BOOSTER STATION 3, FACING SOUTHEAST - Nevada Test Site, Frenchman Flat Test Facility, Well Five Booster Stations, Intersection of 5-03 Road & Short Pole Line Road, Area 5, Frenchman Flat, Mercury, Nye County, NV
4. VIEW OF BOOSTER STATION 2, FACING NORTHWEST - Nevada Test Site, Frenchman Flat Test Facility, Well Five Booster Stations, Intersection of 5-03 Road & Short Pole Line Road, Area 5, Frenchman Flat, Mercury, Nye County, NV
5. VIEW OF BOOSTER STATION 2, FACING SOUTHEAST - Nevada Test Site, Frenchman Flat Test Facility, Well Five Booster Stations, Intersection of 5-03 Road & Short Pole Line Road, Area 5, Frenchman Flat, Mercury, Nye County, NV
9. VIEW OF BOOSTER STATION 3 INTERIOR, FACING NORTHEAST - Nevada Test Site, Frenchman Flat Test Facility, Well Five Booster Stations, Intersection of 5-03 Road & Short Pole Line Road, Area 5, Frenchman Flat, Mercury, Nye County, NV
6. VIEW OF BOOSTER STATION 2 INTERIOR, FACING WEST - Nevada Test Site, Frenchman Flat Test Facility, Well Five Booster Stations, Intersection of 5-03 Road & Short Pole Line Road, Area 5, Frenchman Flat, Mercury, Nye County, NV
12. VIEW OF BOOSTER STATION 4 INTERIOR, FACING SOUTHWEST - Nevada Test Site, Frenchman Flat Test Facility, Well Five Booster Stations, Intersection of 5-03 Road & Short Pole Line Road, Area 5, Frenchman Flat, Mercury, Nye County, NV
3. VIEW OF BOOSTER STATION 1 INTERIOR, FACING EAST - Nevada Test Site, Frenchman Flat Test Facility, Well Five Booster Stations, Intersection of 5-03 Road & Short Pole Line Road, Area 5, Frenchman Flat, Mercury, Nye County, NV
13. VIEW OF BOOSTER STATION 4 CHLORINATOR INTERIOR, FACING NORTH - Nevada Test Site, Frenchman Flat Test Facility, Well Five Booster Stations, Intersection of 5-03 Road & Short Pole Line Road, Area 5, Frenchman Flat, Mercury, Nye County, NV
Preliminary gravity inversion model of Frenchman Flat Basin, Nevada Test Site, Nevada
Phelps, Geoffrey A.; Graham, Scott E.
2002-01-01
The depth of the basin beneath Frenchman Flat is estimated using a gravity inversion method. Gamma-gamma density logs from two wells in Frenchman Flat constrained the density profiles used to create the gravity inversion model. Three initial models were considered using data from one well; a final model was then proposed based on new information from the second well. The preferred model indicates that a northeast-trending, oval-shaped basin at least 2,100 m deep underlies Frenchman Flat, with a maximum depth of 2,400 m at its northeast end. No major horst and graben structures are predicted. Sensitivity analysis of the model indicates that each parameter contributes a change of the same magnitude to the model, up to 30 meters of change in depth for a 1% change in density, but some parameters affect a broader area of the basin. The horizontal resolution of the model was determined by examining the spacing between data stations, and was set to 500 square meters.
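A first-order feel for the depth-density trade-off in such an inversion comes from the infinite Bouguer slab approximation, Δg = 2πGΔρh. The anomaly and density-contrast values below are hypothetical, chosen only so the sketch lands near the reported basin depth; note that a 1% density change shifts the depth by about 1%, the same order as the up-to-30 m sensitivity quoted in the abstract.

```python
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
MGAL = 1.0e-5            # 1 mGal expressed in m/s^2

def slab_depth(dg_mgal, drho):
    """Infinite-slab basin depth (m) from a gravity anomaly (mGal)
    and a fill-vs-basement density contrast (kg/m^3)."""
    return (dg_mgal * MGAL) / (2.0 * math.pi * G * drho)

# Hypothetical inputs for illustration (not the report's actual values).
dg = 40.0                # observed anomaly, mGal
drho = 450.0             # density contrast, kg/m^3

h = slab_depth(dg, drho)
h_pert = slab_depth(dg, drho * 1.01)   # 1% larger density contrast

print(f"estimated slab depth:        {h:.0f} m")
print(f"depth shift for +1% density: {h - h_pert:.0f} m")
```

A full inversion replaces the uniform slab with layered, laterally varying densities constrained by the gamma-gamma logs, which is why the two-well constraint mattered.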
Analysis of water levels in the Frenchman Flat area, Nevada Test Site
Bright, D.J.; Watkins, S.A.; Lisle, B.A.
2001-01-01
Analysis of water levels in 21 wells in the Frenchman Flat area, Nevada Test Site, provides information on the accuracy of hydraulic-head calculations, temporal water-level trends, and potential causes of water-level fluctuations. Accurate hydraulic heads are particularly important in Frenchman Flat where the hydraulic gradients are relatively flat (less than 1 foot per mile) in the alluvial aquifer. Temporal water-level trends with magnitudes near or exceeding the regional hydraulic gradient may have a substantial effect on ground-water flow directions. Water-level measurements can be adjusted for the effects of barometric pressure, formation water density (from water-temperature measurements), borehole deviation, and land-surface altitude in selected wells in the Frenchman Flat area. Water levels in one well were adjusted for the effect of density; this adjustment was significantly greater (about 17 feet) than the adjustment of water levels for barometric pressure, borehole deviation, or land-surface altitude (less than about 4 feet). Water-level measurements from five wells exhibited trends that were statistically and hydrologically significant. Statistically significant water-level trends were observed for three wells completed in the alluvial aquifer (WW-5a, UE-5n, and PW-3), for one well completed in the carbonate aquifer (SM-23), and for one well completed in the quartzite confining unit (Army-6a). Potential causes of water-level fluctuations in wells in the Frenchman Flat area include changes in atmospheric conditions (precipitation and barometric pressure), Earth tides, seismic activity, past underground nuclear testing, and nearby pumping. Periodic water-level measurements in some wells completed in the carbonate aquifer indicate cyclic-type water-level fluctuations that generally correlate with longer term changes (more than 5 years) in precipitation. 
Ground-water pumping from the alluvial aquifer at well WW-5c and pumping and discharge from well RNM-2s appear to cause water-level fluctuations in nearby observation wells. The remaining known sources of water-level fluctuations either do not appear to substantially affect water levels (seismic activity and underground nuclear testing) or do not affect changes over periods of more than 1 year (barometric pressure and Earth tides) in wells in the Frenchman Flat area.
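The density adjustment described above (converting a measured water level to an equivalent freshwater head so that heads in warm or saline wells are comparable) can be sketched with the standard point-head formula. The well geometry and density value below are hypothetical; the report's roughly 17-foot adjustment came from temperature-based density corrections on an actual well.

```python
def freshwater_head(wl_elev, screen_elev, rho_w, rho_f=1000.0):
    """Equivalent freshwater head, in the same units as the elevations.

    wl_elev:     measured water-level elevation in the well
    screen_elev: elevation of the well-screen midpoint
    rho_w:       average density of the borehole water column, kg/m^3
    rho_f:       freshwater reference density, kg/m^3
    """
    column = wl_elev - screen_elev          # height of the water column
    return screen_elev + (rho_w / rho_f) * column

# Hypothetical deep well with warm (less dense) water in the column.
h_meas = 2400.0      # ft, measured water-level elevation
z_screen = 1000.0    # ft, screen midpoint elevation
rho = 995.0          # kg/m^3, warm-water density below the 1000 reference

h_fw = freshwater_head(h_meas, z_screen, rho)
print(f"measured head:   {h_meas:.1f} ft")
print(f"freshwater head: {h_fw:.1f} ft (adjustment {h_fw - h_meas:+.1f} ft)")
```

With gradients of less than a foot per mile, adjustments of even a few feet can reverse an apparent flow direction, which is why the abstract emphasizes these corrections.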
Irene Farnham and Sam Marutzky
2011-07-01
This CADD/CAP follows the Corrective Action Investigation (CAI) stage, which results in development of a set of contaminant boundary forecasts produced from groundwater flow and contaminant transport modeling of the Frenchman Flat CAU. The Frenchman Flat CAU is located in the southeastern portion of the NNSS and comprises 10 underground nuclear tests. The tests were conducted between 1965 and 1971 and resulted in the release of radionuclides in the subsurface in the vicinity of the test cavities. Two important aspects of the corrective action process are presented within this CADD/CAP. The CADD portion describes the results of the Frenchman Flat CAU data-collection and modeling activities completed during the CAI stage. The corrective action objectives and the actions recommended to meet the objectives are also described. The CAP portion describes the corrective action implementation plan. The CAP begins with the presentation of CAU regulatory boundary objectives and initial use restriction boundaries that are identified and negotiated by NNSA/NSO and the Nevada Division of Environmental Protection (NDEP). The CAP also presents the model evaluation process designed to build confidence that the flow and contaminant transport modeling results can be used for the regulatory decisions required for CAU closure. The first two stages of the strategy have been completed for the Frenchman Flat CAU. A value of information analysis and a CAIP were developed during the CAIP stage. During the CAI stage, a CAIP addendum was developed, and the activities proposed in the CAIP and addendum were completed. These activities included hydrogeologic investigation of the underground testing areas, aquifer testing, isotopic and geochemistry-based investigations, and integrated geophysical investigations.
After these investigations, a groundwater flow and contaminant transport model was developed to forecast contaminant boundaries that enclose areas potentially exceeding the Safe Drinking Water Act radiological standards at any time within 1,000 years. An external peer review of the groundwater flow and contaminant transport model was completed, and the model was accepted by NDEP to allow advancement to the CADD/CAP stage. The CADD/CAP stage focuses on model evaluation to ensure that existing models provide adequate guidance for the regulatory decisions regarding monitoring and institutional controls. Data-collection activities are identified and implemented to address key uncertainties in the flow and contaminant transport models. During the CR stage, final use restriction boundaries and CAU regulatory boundaries are negotiated and established; a long-term closure monitoring program is developed and implemented; and the approaches and policies for institutional controls are initiated. The model evaluation process described in this plan consists of an iterative series of five steps designed to build confidence in the site conceptual model and model forecasts. These steps are designed to identify data-collection activities (Step 1), document the data-collection activities in the CADD/CAP (Step 2), and perform the activities (Step 3). The new data are then assessed; the model is refined, if necessary; the modeling results are evaluated; and a model evaluation report is prepared (Step 4). The assessments are made by the modeling team and presented to the pre-emptive review committee. The decision is made by the modeling team with the assistance of the pre-emptive review committee and concurrence of NNSA/NSO to continue data and model assessment/refinement, recommend additional data collection, or recommend advancing to the CR stage.
A recommendation to advance to the CR stage is based on whether the model is considered to be sufficiently reliable for designing a monitoring system and developing effective institutional controls. The decision to advance to the CR stage or to return to Step 1 of the process is then made by NDEP (Step 5).
Magnetotelluric Data, Northern Frenchman Flat, Nevada Test Site, Nevada
J.M. Williams, B.D. Rodriguez, and T.H. Asch
2005-11-23
Nuclear weapons are integral to the defense of the United States. The U.S. Department of Energy, as the steward of these devices, must continue to gauge the efficacy of the individual weapons. This could be accomplished by occasional testing at the Nevada Test Site (NTS) in Nevada, northwest of Las Vegas. Yucca Flat Basin is one of the testing areas at the NTS. One issue of concern is the nature of the somewhat poorly constrained pre-Tertiary geology and its effects on ground-water flow in the area subsequent to a nuclear test. Ground-water modelers would like to know more about the hydrostratigraphy and geologic structure to support a hydrostratigraphic framework model that is under development for the Yucca Flat Corrective Action Unit (CAU). During 2003, the U.S. Geological Survey (USGS) collected and processed magnetotelluric (MT) and audio-magnetotelluric (AMT) data at the Nevada Test Site in and near Yucca Flat to help characterize this pre-Tertiary geology. That work will help to define the character, thickness, and lateral extent of pre-Tertiary confining units. In particular, a major goal has been to define the upper clastic confining unit (UCCU) in the Yucca Flat area. Interpretation will include a three-dimensional (3-D) character analysis and a two-dimensional (2-D) resistivity model. The purpose of this report is to release the MT sounding data for Frenchman Flat Profile 3, as shown in Figure 1. No interpretation of the data is included here.
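The basic reduction applied to MT sounding data of this kind converts a measured complex impedance into apparent resistivity and phase. A minimal sketch using the standard half-space relations follows; the resistivity and frequency values are arbitrary test inputs, not values from the released profile.

```python
import cmath
import math

MU0 = 4.0e-7 * math.pi           # vacuum permeability, H/m

def apparent_resistivity(z, freq):
    """Apparent resistivity (ohm-m) from a complex MT impedance Z (ohms)."""
    omega = 2.0 * math.pi * freq
    return abs(z) ** 2 / (omega * MU0)

def impedance_phase_deg(z):
    """Impedance phase in degrees."""
    return math.degrees(cmath.phase(z))

# Consistency check on a uniform half-space: there Z = sqrt(i*omega*mu0*rho),
# so the phase is 45 degrees and the apparent resistivity recovers rho.
rho_true = 100.0                 # ohm-m (assumed test value)
freq = 10.0                      # Hz
omega = 2.0 * math.pi * freq
z = cmath.sqrt(1j * omega * MU0 * rho_true)

print(f"apparent resistivity: {apparent_resistivity(z, freq):.1f} ohm-m")
print(f"impedance phase:      {impedance_phase_deg(z):.1f} deg")
```

Departures of the phase from 45 degrees across frequency are what a 2-D resistivity model, like the one planned for the interpretation stage, inverts for layered and lateral structure.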
Phillips, Jeffrey D.; Burton, Bethany L.; Curry-Elrod, Erika; Drellack, Sigmund
2014-01-01
Question 2—Does basin and range normal faulting observed in the hills north of Frenchman Flat continue southward under alluvium and possibly disrupt the Topopah Spring Tuff of the Paintbrush Group (the Topopah Spring welded tuff aquifer or TSA) east of the Pin Stripe underground nuclear test, which was conducted in Emplacement hole U11b?
Ruskauff, Greg; Marutzky, Sam
Model evaluation focused solely on the PIN STRIPE and MILK SHAKE underground nuclear tests' contaminant boundaries (CBs) because they had the largest extent, uncertainty, and potential consequences. The CAMBRIC radionuclide migration experiment also had a relatively large CB, but because it was constrained by transport data (notably Well UE-5n), there was little uncertainty, and radioactive decay reduced concentrations before much migration could occur. Each evaluation target and the associated data-collection activity were assessed in turn to determine whether the new data support, or demonstrate conservatism of, the CB forecasts. The modeling team, in this case the same team that developed the Frenchman Flat geologic, source term, and groundwater flow and transport models, analyzed the new data and presented the results to a PER committee. Existing site understanding and its representation in numerical groundwater flow and transport models was evaluated in light of the new data and the ability to proceed to the CR stage of long-term monitoring and institutional control.
Lessons Learned from the Frenchman Flat Flow and Transport Modeling External Peer Review
NASA Astrophysics Data System (ADS)
Becker, N. M.; Crowe, B. M.; Ruskauff, G.; Kwicklis, E. M.; Wilborn, B.
2011-12-01
The objective of the U.S. Department of Energy's Underground Test Area Program is to forecast, using computer modeling, the contaminant boundary at the Nevada National Security Site within which radionuclide concentrations in groundwater exceed Safe Drinking Water Act standards at any time within 1,000 years. This objective is defined within the Federal Facilities Agreement and Consent Order between the Department of Energy, the Department of Defense, and the State of Nevada Division of Environmental Protection. At one of the Corrective Action Units, Frenchman Flat, a Phase I flow and transport model underwent peer review in 1999 to determine whether the model approach, assumptions, and results were adequate for use as a decision tool in negotiating a compliance boundary with the Nevada Division of Environmental Protection. The external peer review decision was that the model had not been fully tested against a full suite of possible conceptual models, including boundary conditions, flow mechanisms, other transport processes, hydrological framework models, and sensitivity and uncertainty analyses. The program went back to collect more data, expanded the modeling to consider alternatives that had not been adequately tested, and conducted sensitivity and uncertainty analyses. A second external peer review was held in August 2010. Its conclusion was that the new Frenchman Flat flow and transport modeling analyses were adequate as a decision tool and that the model was ready to advance to the next step in the Federal Facilities Agreement and Consent Order strategy. We discuss the changes to the modeling that occurred between the first and second peer reviews and then present the second peer review's general comments. Finally, we present the lessons learned from the overall model acceptance process required for federal regulatory compliance.
Rehfeldt, Ken; Haight, Brian
Corrective Action Unit (CAU) 98: Frenchman Flat on the Nevada National Security Site was the location of 10 underground nuclear tests. CAU 98 underwent a series of investigations and actions in accordance with the Federal Facility Agreement and Consent Order to assess contamination of groundwater by radionuclides from the tests. A Closure Report completed that process in 2016 and called for long-term monitoring, use restrictions (URs), and institutional controls to protect the public and environment from potential exposure to contaminated groundwater. Three types of monitoring are performed for CAU 98: water quality, water level, and institutional control. These are monitored to determine whether the URs remain protective of human health and the environment, and to ensure that the regulatory boundary objectives are being met. Monitoring data will be used in the future, once multiple years of data are available, to evaluate consistency with the groundwater flow and contaminant transport models because the contaminant boundaries calculated with the models are the primary basis of the UR boundaries. Six wells were sampled for water-quality monitoring in 2017. Contaminants of concern were detected only in the two source/plume wells already known to contain contamination as a result of a radionuclide migration experiment. The 86,000-picocuries-per-liter (pCi/L) tritium concentration in one of the wells is about 12 percent higher than measured in 2016 but is over an order of magnitude less than the peak value measured in the well in 1980. The concentration in the other source/plume well is lower than measured in 2016. The water-level monitoring network includes 16 wells. Depth to water measured in 2017 is generally consistent with recent measurements for most wells. Water-level declines differing from long-term trends were observed in four wells. Three of these (WW-4, WW-4A, and WW-5B) are water-supply wells that experienced increases in pumping during the year.
No definitive cause is yet known for the sharp decline observed in the fourth well (ER-5-3-2) in 2016. Institutional control monitoring confirmed that the URs are recorded in U.S. Department of Energy and U.S. Air Force land management systems, and that no activities are occurring within Frenchman Flat basin that could potentially affect the contaminant boundaries. A survey of groundwater resources in basins surrounding Frenchman Flat similarly identified no current or pending development that would indicate a need to increase monitoring activities or otherwise cause concern for the closure decision. The URs continue to prevent exposure of the public, workers, and the environment to contaminants of concern by preventing use of potentially contaminated groundwater.
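The long time horizon of the monitoring program rests partly on radioactive decay: tritium, the dominant detected contaminant here, has a half-life of about 12.32 years. A short sketch projects decay alone (ignoring transport, dilution, and dispersion, which reduce concentrations further) from the 86,000 pCi/L value reported for 2017 against the 20,000 pCi/L Safe Drinking Water Act limit.

```python
import math

T_HALF = 12.32                   # tritium half-life, years
MCL = 2.0e4                      # SDWA tritium limit, pCi/L

def decayed(c0, years):
    """Tritium concentration after radioactive decay alone (no transport)."""
    return c0 * 0.5 ** (years / T_HALF)

# Project forward from the 2017 source/plume-well value in the report.
c_2017 = 8.6e4                   # pCi/L
for dt in (0.0, 12.32, 25.0, 50.0):
    c = decayed(c_2017, dt)
    flag = "above" if c > MCL else "below"
    print(f"t+{dt:5.2f} yr: {c:9.0f} pCi/L ({flag} the 20,000 pCi/L MCL)")
```

Under decay alone the concentration falls below the MCL within roughly three half-lives; actual well concentrations would decline faster because migration and dilution act as well.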
Phelps, Geoffrey A.; Justet, Leigh; Moring, Barry C.; Roberts, Carter W.
2006-01-01
New gravity and magnetic data collected in the vicinity of Massachusetts Mountain and CP basin (Nevada Test Site, NV) provide a more complex view of the structural relationships present in the vicinity of CP basin than previous geologic models, help define the position and extent of structures in southern Yucca Flat and CP basin, and better constrain the configuration of the basement structure separating CP basin and Frenchman Flat. The density and gravity modeling indicates that CP basin is a shallow, oval-shaped basin that trends north-northeast and contains ~800 m of basin-filling rocks and sediment at its deepest point in the northeast. CP basin is separated from the deeper Frenchman Flat basin by a subsurface ridge that may represent a Tertiary erosion surface at the top of the Paleozoic strata. The magnetic modeling indicates that the Cane Spring fault appears to merge with faults in northwest Massachusetts Mountain, rather than cut through to Yucca Flat basin, and that the basin is down-dropped relative to Massachusetts Mountain. The magnetic modeling indicates that volcanic units within Yucca Flat basin are down-dropped on the west and supports the interpretations of Phelps and McKee (1999). The magnetic data indicate that the only faults that appear to be through-going from Yucca Flat into either Frenchman Flat or CP basin are the faults that bound the CP hogback. In general, the north-trending faults present along the length of Yucca Flat bend, merge, and disappear before reaching CP hogback and Massachusetts Mountain or French Peak.
Handbook: Collecting Groundwater Samples from Monitoring Wells in Frenchman Flat, CAU 98
Chapman, Jenny; Lyles, Brad; Cooper, Clay
Frenchman Flat basin on the Nevada National Security Site (NNSS) contains Corrective Action Unit (CAU) 98, which comprises ten underground nuclear test locations. Environmental management of these test locations is part of the Underground Test Area (UGTA) Activity conducted by the U.S. Department of Energy (DOE) under the Federal Facility Agreement and Consent Order (FFACO) (1996, as amended) with the U.S. Department of Defense (DOD) and the State of Nevada. A Corrective Action Decision Document (CADD)/Corrective Action Plan (CAP) has been approved for CAU 98 (DOE, 2011). The CADD/CAP reports on the Corrective Action Investigation that was conducted for the CAU, which included characterization and modeling. It also presents the recommended corrective actions to address the objective of protecting human health and the environment. The recommended corrective action alternative is "Closure in Place with Modeling, Monitoring, and Institutional Controls." The role of monitoring is to verify that Contaminants of Concern (COCs) have not exceeded the Safe Drinking Water Act (SDWA) limits (Code of Federal Regulations, 2014) at the regulatory boundary, to ensure that institutional controls are adequate, and to monitor for changed conditions that could affect the closure conditions. The long-term closure monitoring program will be planned and implemented as part of the Closure Report stage after activities specified in the CADD/CAP are complete. Groundwater at the NNSS has been monitored for decades through a variety of programs. Current activities were recently consolidated in an NNSS Integrated Sampling Plan (DOE, 2014). Although monitoring directed by the plan is not intended to meet the FFACO long-term monitoring requirements for a CAU (which will be defined in the Closure Report), the objective to ensure public health protection is similar.
It is expected that data collected in accordance with the plan will support the transition to long-term monitoring at each CAU. The sampling plan is designed to ensure that monitoring activities occur in compliance with the UGTA Quality Assurance Plan (DOE, 2012). The sampling plan should be referenced for Quality Assurance (QA) elements and procedures governing sampling activities. The NNSS Integrated Sampling Plan specifies the groundwater monitoring that will occur in CAU 98 until the long-term monitoring program is approved in the Closure Report. The plan specifies the wells that must be monitored and categorizes them by their sampling objective with the associated analytical requirements and frequency. Possible sample collection methods and required standard operating procedures are also presented. The intent of this handbook is to augment the NNSS Integrated Sampling Plan by providing well-specific details for the sampling professional implementing the Sampling Plan in CAU 98, Frenchman Flat. This handbook includes each CAU 98 well designated for sampling in the NNSS Integrated Sampling Plan. The following information is provided in the individual well sections: 1. The purpose of sampling. 2. A physical description of the well. 3. The chemical characteristics of the formation water. 4. Recommended protocols for purging and sampling. The well-specific information has been gathered from numerous historical and current sources cited in each section, but two particularly valuable resources merit special mention. These are the USGS NNSS website (http://nevada.usgs.gov/doe_nv/ntsarea5.cfm) and the UGTA Field Operations website (https://ugta.nv.doe.gov/sites/Field%20Operations/default.aspx). Land surface elevation and measuring point for water level measurements in Frenchman Flat were a focus during CAU investigations (see Appendix B, Attachment 1 in Navarro-Intera, 2014). Both websites listed above provide information on the accepted datum for each well.
A summary is found on the home page for the well on the USGS website. Additional information is available through a link in the “Available Data” section to an “MP diagram” with a photo annotated with the datum information. On the UGTA Field Operations well page, the same information is in the “Wellhead Diagram” link. Well RNM-2s does not have an annotated photo at this time. All of the CAU 98 monitoring wells are located within Area 5 of Frenchman Flat, with the exception of ER-11-2 in Area 11 (Figure 1). The wells are clustered in two areas: the northern area (Figure 2) and the central area (Figure 3). Each well is discussed below in geographic order from north to south as follows: ER-11-2, ER-5-3 shallow piezometer, ER-5-3-2, ER-5-5, RNM-1, RNM-2s, and UE-5n.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hershey, Ronald; Cablk, Mary; LeFebre, Karen
2013-08-01
Atmospheric tests and other experiments with nuclear materials were conducted on the Frenchman Flat playa at the Nevada National Security Site, Nye County, Nevada; residual radionuclides are known to exist in Frenchman Flat playa soils. Although the playa is typically dry, extended periods of winter precipitation or large single-event rainstorms can inundate the playa. When Frenchman Flat playa is inundated, residual radionuclides on the typically dry playa surface may become submerged, allowing water-soil interactions that could provide a mechanism for transport of radionuclides away from known areas of contamination. The potential for radionuclide transport by occasional inundation of the Frenchman Flat playa was examined using geographic information systems and satellite imagery to delineate the timing and areal extent of inundation; collecting water samples during inundation and analyzing them for chemical and isotopic content; characterizing suspended/precipitated materials and archived soil samples; modeling water-soil geochemical reactions; and modeling the mobility of select radionuclides under aqueous conditions. The physical transport of radionuclides by water was not evaluated in this study. Frenchman Flat playa was inundated with precipitation during two consecutive winters in 2009-2010 and 2010-2011. Inundation allowed for collection of multiple water samples through time as the areal extent of inundation changed and ultimately receded. During these two winters, precipitation records from a weather station in Frenchman Flat (Well 5b) provided information that was used in combination with geographic information systems, Landsat imagery, and image processing techniques to identify and quantify the areal extent of inundation. After inundation, water on the playa disappeared quickly; for example, between January 25 and February 10, 2011 (a period of 16 days), 92 percent of the areal extent of inundation receded (2,062,800 m2).
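The recession figure above is simple areal arithmetic on the Landsat-derived extents. The sketch below reproduces it; the initial extent used here is a back-calculated, illustrative value, since the abstract reports only the receded area and percentage.

```python
# Fraction of the inundated playa area that receded between two Landsat scenes.
# The initial extent below is back-calculated from the reported values
# (2,062,800 m2 receded = 92 percent) and is illustrative, not a measured value.

def fraction_receded(initial_m2: float, final_m2: float) -> float:
    """Return the fraction of the initial inundated area that receded."""
    return (initial_m2 - final_m2) / initial_m2

initial = 2_242_174.0      # assumed initial inundation extent, m2 (back-calculated)
receded = 2_062_800.0      # receded area reported in the study, m2
final = initial - receded  # remaining inundated extent, m2

print(f"{fraction_receded(initial, final):.2f}")  # → 0.92
```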
Water sampling provided valuable information about chemical processes occurring during inundation as the water disappeared. Important observations from water-chemistry analyses included: 1) total dissolved solids (TDS) and chloride ion (Cl-) concentrations were very low (TDS < 200 mg/L and Cl- < 3.0 mg/L, respectively) for all water samples regardless of time or areal extent; 2) all dissolved constituents were at concentrations well below what might be expected for evaporating shallow surface waters on a playa, even when 98 to 99 percent of the water had disappeared; 3) the amount of evaporation for the last water samples collected at the end of inundation, estimated with the stable isotopic ratios δ2H or δ18O, was approximately 60 percent; and 4) water samples analyzed by gamma spectroscopy did not show any man-made radioactivity; however, the short scanning time (24 hours) and the relative chemical diluteness of the water samples (TDS ranged between 39 and 190 mg/L) may have prevented detection. Additionally, any low-energy beta-emitting radionuclides would not have been detected by gamma spectroscopy. From these observations, it was apparent that a significant portion of water on the playa (approximately 40 percent) did not evaporate, but rather infiltrated into the subsurface. Consistent with this water chemistry-based conclusion is particle-size analysis of two archived Frenchman Flat playa soil samples, which showed low clay content in the near-surface soil, also suggesting infiltration. Infiltration of water from the playa during inundation into the subsurface does not necessarily imply that groundwater recharge is occurring, but it does provide a mechanism for moving residual radionuclides downward into the subsurface of Frenchman Flat playa. Water-mineral geochemical reactions were modeled so that changes in the water chemistry could be identified and the extent of reactions quantified.
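Evaporated fractions inferred from δ2H or δ18O are commonly estimated with a Rayleigh-type relation. The sketch below uses the simplified form δ ≈ δ0 + ε·ln(f), where f is the fraction of water remaining; the δ0 and ε values are illustrative assumptions, not the study's actual parameters, which the abstract does not report.

```python
import math

# Simplified Rayleigh model for evaporative isotopic enrichment:
#   delta ≈ delta0 + eps * ln(f), where f is the fraction of water remaining.
# delta0 (initial d18O) and eps (effective enrichment factor) are illustrative
# assumptions; the study's actual values are not given in the abstract.

def remaining_fraction(delta: float, delta0: float, eps: float) -> float:
    """Invert the Rayleigh relation to recover the fraction of water remaining."""
    return math.exp((delta - delta0) / eps)

delta0 = -13.5  # assumed initial d18O of playa water, per mil
eps = -5.0      # assumed effective enrichment factor, per mil

# Forward-model a sample that has lost 60 percent of its water to evaporation,
# then invert it to recover the evaporated fraction.
delta = delta0 + eps * math.log(0.4)
f = remaining_fraction(delta, delta0, eps)
print(f"evaporated fraction: {1.0 - f:.2f}")  # → 0.60
```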
Geochemical modeling showed that evaporation; equilibrium with atmospheric carbon dioxide and calcite; dissolution of sodium chloride, gypsum, and composite volcanic glass; and precipitation of composite clay and quartz represented changes in water as it disappeared from the playa. This modeling provided an understanding of the water-soil geochemical environment, which was then used to evaluate the potential mobility of residual radionuclides into the playa soils by water. Because there is no information on the chemical forms of anthropogenic radionuclides in Frenchman Flat playa soil, it was assumed that soil radionuclides go into solution when the playa is inundated. In mobility modeling, a select group of radionuclides was allowed to sorb onto, or exchange with, playa soil minerals to evaluate the likelihood that the radionuclides would be removed from water during playa inundation. Radionuclide mobility modeling suggested that there would be minimal sorption or exchange of several important radionuclides (uranium, cesium, and technetium) with playa minerals, such that they may be mobile in water when the playa is inundated and could infiltrate into the subsurface. Mobility modeling also showed that plutonium may be much less mobile because of sorption onto calcite, but the amount of reactive surface area of playa soil calcite is highly uncertain. Plutonium is also known to sorb onto colloidal particles suspended in water; these suspended colloidal particles move with the water, providing a mechanism to redistribute plutonium when Frenchman Flat playa is inundated. Water chemistry, stable isotopes, and geochemical modeling showed that residual radionuclides in Frenchman Flat playa soils could be mobilized in water when the playa is inundated with precipitation. Also, there is potential for these radionuclides to infiltrate into the subsurface with water.
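Sorption-controlled mobility of the kind described above is often summarized with a linear-isotherm retardation factor, R = 1 + (ρb/θ)·Kd. The sketch below is a generic illustration with assumed bulk density, moisture content, and Kd values; it is not the study's reaction-based geochemical model, which tracked sorption and exchange explicitly.

```python
# Linear-isotherm retardation factor for solute transport:
#   R = 1 + (rho_b / theta) * Kd
# rho_b: soil bulk density (g/cm3), theta: volumetric water content (-),
# Kd: distribution coefficient (mL/g). All parameter values below are
# assumptions for illustration only.

def retardation(rho_b: float, theta: float, kd: float) -> float:
    """Return the factor by which sorption slows a solute relative to water."""
    return 1.0 + (rho_b / theta) * kd

# A non-sorbing species (Kd = 0) moves with the water (R = 1), while a
# strongly sorbed species (e.g., Kd = 10 mL/g) is greatly retarded.
print(retardation(1.5, 0.3, 0.0))   # → 1.0
print(retardation(1.5, 0.3, 10.0))  # → 51.0
```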
As a result of the information obtained during this study and the conclusions drawn from it, additional data collection, investigation, and modeling are recommended. Specifically: sampling the playa soil to search for evidence of surface-water infiltration and the presence of radionuclides; developing a preliminary unsaturated flow and transport model to guide soil sampling; characterizing the chemical forms of radionuclides on the playa surface and any radionuclides that might have migrated into the subsurface; and refining the unsaturated flow and transport model with data obtained from sampling and analysis of soil samples to guide any future sampling, development of remediation strategies, and defining risk-based boundaries for Frenchman Flat playa.
Jones, B.F.
1982-01-01
The mineralogy of matrix fines in alluvium from borehole U11g, expl. 1, north of Frenchman Flat, Nevada Test Site, has been examined for evidence of past variations in water table elevation. Although greater abundance of zeolite and slightly more expanded basal spacings in smectite clays suggest effects of increased hydration of material up to 50 m above the present water table, these differences might also be related to provenance or environment of deposition. The relative uniformity of clay hydration properties in the 50 meters above the current water table suggests long-term stability near the present level. (USGS)
Completion Report for Model Evaluation Well ER-5-5: Corrective Action Unit 98: Frenchman Flat
DOE Office of Scientific and Technical Information (OSTI.GOV)
NSTec Underground Test Area and Boreholes Programs and Operations
2013-01-18
Model Evaluation Well ER-5-5 was drilled for the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office in support of Nevada Environmental Management Operations at the Nevada National Security Site (formerly known as the Nevada Test Site). The well was drilled in July and August 2012 as part of a model evaluation well program in the Frenchman Flat area of Nye County, Nevada. The primary purpose of the well was to provide detailed geologic, hydrogeologic, chemical, and radiological data that can be used to test and build confidence in the applicability of the Frenchman Flat Corrective Action Unit flow and transport models for their intended purpose. In particular, this well was designed to obtain data to evaluate the uncertainty in model forecasts of contaminant migration from the upgradient underground nuclear test MILK SHAKE, conducted in Emplacement Hole U-5k in 1968; these forecasts were considered uncertain due to the unknown extent of a basalt lava-flow aquifer present in this area. Well ER-5-5 is expected to provide information to refine the Phase II Frenchman Flat hydrostratigraphic framework model, if necessary, as well as to support future groundwater flow and transport modeling. The 31.1-centimeter (cm) diameter hole was drilled to a total depth of 331.3 meters (m). The completion string, set at the depth of 317.2 m, consists of 16.8-cm stainless-steel casing hanging from 19.4-cm carbon-steel casing. The 16.8-cm stainless-steel casing has one slotted interval open to the basalt lava-flow aquifer and limited intervals of the overlying and underlying alluvial aquifer. A piezometer string was also installed in the annulus between the completion string and the borehole wall. The piezometer is composed of 7.3-cm stainless-steel tubing suspended from 6.0-cm carbon-steel tubing. The piezometer string was landed at 319.2 m, to monitor the basalt lava-flow aquifer.
Data collected during and shortly after hole construction include composite drill cuttings samples collected every 3.0 m, various geophysical logs, preliminary water quality measurements, and water-level measurements. The well penetrated 331.3 m of Quaternary–Tertiary alluvium, including an intercalated layer of saturated basalt lava rubble. No well development or hydrologic testing was conducted in this well immediately after completion; however, a preliminary water level was measured in the piezometer string at the depth of 283.4 m on September 25, 2012. No tritium above the minimum detection limit of the field instruments was detected in this hole. Future well development, sampling, and hydrologic testing planned for this well will provide more accurate hydrologic information for this site. The stratigraphy, general lithology, and water level were as expected, though the expected basalt lava-flow aquifer is basalt rubble and not the dense, fractured lava as modeled. The lack of tritium transport is likely due to the difference in hydraulic properties of the basalt lava-flow rubble encountered in the well, compared to those of the fractured aquifer used in the flow and transport models.
Cole, James C.; Harris, Anita G.; Wahl, Ronald R.
1997-01-01
This map displays interpreted structural and stratigraphic relations among the Paleozoic and older rocks of the Nevada Test Site region beneath the Miocene volcanic rocks and younger alluvium in the Yucca Flat and northern Frenchman Flat basins. These interpretations are based on a comprehensive examination and review of data for more than 77 drillholes that penetrated part of the pre-Tertiary basement beneath these post-middle Miocene structural basins. Biostratigraphic data from conodont fossils were newly obtained for 31 of these holes, and a thorough review of all prior microfossil paleontologic data is incorporated in the analysis. Subsurface relationships are interpreted in light of a revised regional geologic framework synthesized from detailed geologic mapping in the ranges surrounding Yucca Flat, from comprehensive stratigraphic studies in the region, and from additional detailed field studies on and around the Nevada Test Site. All available data indicate the subsurface geology of Yucca Flat is considerably more complicated than previous interpretations have suggested. The western part of the basin, in particular, is underlain by relics of the eastward-vergent Belted Range thrust system that are folded back toward the west and thrust by local, west-vergent contractional structures of the CP thrust system. Field evidence from the ranges surrounding the north end of Yucca Flat indicates that two significant strike-slip faults track southward beneath the post-middle Miocene basin fill, but their subsurface traces cannot be closely defined from the available evidence. In contrast, the eastern part of the Yucca Flat basin is interpreted to be underlain by a fairly simple north-trending, broad syncline in the pre-Tertiary units.
Far fewer data are available for the northern Frenchman Flat basin, but regional analysis indicates the pre-Tertiary structure there should also be relatively simple and not affected by thrusting. This new interpretation has implications for ground water flow through pre-Tertiary rocks beneath the Yucca Flat and northern Frenchman Flat areas, and has consequences for ground water modeling and model validation. Our data indicate that the Mississippian Chainman Shale is not a laterally extensive confining unit in the western part of the basin because it is folded back onto itself by the convergent structures of the Belted Range and CP thrust systems. Early and Middle Paleozoic limestone and dolomite are present beneath most of both basins and, regardless of structural complications, are interpreted to form a laterally continuous and extensive carbonate aquifer. Structural culmination that marks the French Peak accommodation zone along the topographic divide between the two basins provides a lateral pathway through highly fractured rock between the volcanic aquifers of Yucca Flat and the regional carbonate aquifer. This pathway may accelerate the migration of ground-water contaminants introduced by underground nuclear testing toward discharge areas beyond the Nevada Test Site boundaries. Predictive three-dimensional models of hydrostratigraphic units and ground-water flow in the pre-Tertiary rocks of subsurface Yucca Flat are likely to be unrealistic due to the extreme structural complexities. The interpretation of hydrologic and geochemical data obtained from monitoring wells will be difficult to extrapolate through the flow system until more is known about the continuity of hydrostratigraphic units.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patrick Matthews and Dawn Peterson
2011-09-01
Corrective Action Unit 106 comprises four corrective action sites (CASs): (1) 05-20-02, Evaporation Pond; (2) 05-23-05, Atmospheric Test Site - Able; (3) 05-45-04, 306 GZ Rad Contaminated Area; (4) 05-45-05, 307 GZ Rad Contaminated Area. The purpose of this CADD/CR is to provide justification and documentation supporting the recommendation that no further corrective action is needed for CAU 106 based on the implementation of corrective actions. The corrective action of clean closure was implemented at CASs 05-45-04 and 05-45-05, while no corrective action was necessary at CASs 05-20-02 and 05-23-05. Corrective action investigation (CAI) activities were performed from October 20, 2010, through June 1, 2011, as set forth in the Corrective Action Investigation Plan for Corrective Action Unit 106: Areas 5, 11 Frenchman Flat Atmospheric Sites. The approach for the CAI was divided into two facets: investigation of the primary release of radionuclides, and investigation of other releases (mechanical displacement and chemical releases). The purpose of the CAI was to fulfill data needs as defined during the data quality objective (DQO) process. The CAU 106 dataset of investigation results was evaluated based on a data quality assessment. This assessment demonstrated the dataset is complete and acceptable for use in fulfilling the DQO data needs. Investigation results were evaluated against final action levels (FALs) established in this document. A radiological dose FAL of 25 millirem per year was established based on the Industrial Area exposure scenario (2,250 hours of annual exposure). The only radiological dose exceeding the FAL was at CAS 05-45-05 and was associated with potential source material (PSM). It is also assumed that additional PSM in the form of depleted uranium (DU) and DU-contaminated debris at CASs 05-45-04 and 05-45-05 exceed the FAL.
Therefore, corrective actions were undertaken at these CASs that consisted of removing PSM and collecting verification samples. Results of verification samples show that remaining soil does not contain contamination exceeding the FALs. Therefore, the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office (NNSA/NSO) provides the following recommendations: (1) No further corrective actions are necessary for CAU 106. (2) A Notice of Completion to NNSA/NSO is requested from the Nevada Division of Environmental Protection for closure of CAU 106. (3) Corrective Action Unit 106 should be moved from Appendix III to Appendix IV of the FFACO.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farnham, Irene
Corrective Action Unit (CAU) 98: Frenchman Flat on the Nevada National Security Site was the location of 10 underground nuclear tests. CAU 98 underwent a series of investigations and actions in accordance with the Federal Facility Agreement and Consent Order to assess contamination of groundwater by radionuclides from the tests. A Closure Report completed that process in 2016 and called for long-term monitoring, use restrictions (URs), and institutional controls to protect the public and environment from potential exposure to contaminated groundwater. Three types of monitoring are performed for CAU 98: water quality, water level, and institutional control. These are evaluated to determine whether the UR boundaries remain protective of human health and the environment, and to ensure that the regulatory boundary objectives are being met. Additionally, monitoring data are used to evaluate consistency with the groundwater flow and contaminant transport models because the contaminant boundaries (CBs) calculated with the models are the primary basis of the UR boundaries. In summary, the monitoring results from 2016 indicate the regulatory controls on the closure of CAU 98 remain effective in protecting human health and the environment. Recommendations resulting from this first year of monitoring activities include formally incorporating wells UE-5 PW-1, UE-5 PW-2, and UE-5 PW-3 into the groundwater-level monitoring network given their strategic location in the basin; and early development of a basis for trigger levels for the groundwater-level monitoring given the observed trends. Additionally, it is recommended to improve the Real Estate/Operations Permit process for capturing information important for evaluating the impact of activities on groundwater resources, and to shift the reporting requirement for this annual report from the second quarter of the federal fiscal year (end of March) to the second quarter of the calendar year (end of June).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthews, Patrick
Corrective Action Unit (CAU) 576 is located in Areas 2, 3, 5, 8, and 9 of the Nevada National Security Site, which is approximately 65 miles northwest of Las Vegas, Nevada. CAU 576 is a grouping of sites where there has been a suspected release of contamination associated with nuclear testing. This document describes the planned investigation of CAU 576, which comprises the following corrective action sites (CASs): 00-99-01, Potential Source Material; 02-99-12, U-2af (Kennebec) Surface Rad-Chem Piping; 03-99-20, Area 3 Subsurface Rad-Chem Piping; 05-19-04, Frenchman Flat Rad Waste Dump; 09-99-08, U-9x (Allegheny) Subsurface Rad-Chem Piping; 09-99-09, U-9itsu24 (Avens-Alkermes) Surface Contaminated Flex Line. These sites are being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives (CAAs). Additional information will be obtained by conducting a corrective action investigation before evaluating CAAs and selecting the appropriate corrective action for each CAS. The results of the field investigation will support a defensible evaluation of viable CAAs that will be presented in the Corrective Action Decision Document (CADD).
Nevada National Security Site Integrated Groundwater Sampling Plan, Revision 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilborn, Bill R.; Boehlecke, Robert F.
The purpose is to provide a comprehensive, integrated approach for collecting and analyzing groundwater samples to meet the needs and objectives of the DOE/EM Nevada Program’s UGTA Activity. Implementation of this Plan will provide high-quality data required by the UGTA Activity for ensuring public protection in an efficient and cost-effective manner. The Plan is designed to ensure compliance with the UGTA Quality Assurance Plan (QAP) (NNSA/NFO, 2015); Federal Facility Agreement and Consent Order (FFACO) (1996, as amended); and DOE Order 458.1, Radiation Protection of the Public and the Environment (DOE, 2013). The Plan’s scope comprises sample collection and analysis requirements relevant to assessing both the extent of groundwater contamination from underground nuclear testing and impact of testing on water quality in downgradient communities. This Plan identifies locations to be sampled by CAU and location type, sampling frequencies, sample collection methodologies, and the constituents to be analyzed. In addition, the Plan defines data collection criteria such as well purging, detection levels, and accuracy requirements/recommendations; identifies reporting and data management requirements; and provides a process to ensure coordination between NNSS groundwater sampling programs for sampling analytes of interest to UGTA. Information used in the Plan development—including the rationale for selection of wells, sampling frequency, and the analytical suite—is discussed under separate cover (N-I, 2014) and is not reproduced herein. This Plan does not address compliance for those wells involved in a permitted activity. Sampling and analysis requirements associated with these wells are described in their respective permits and are discussed in NNSS environmental reports (see Section 5.2). In addition, sampling for UGTA CAUs that are in the Closure Report (CR) stage are not included in this Plan. Sampling requirements for these CAUs are described in the CR.
Frenchman Flat is currently the only UGTA CAU in the CR stage. Sampling requirements for this CAU are described in Underground Test Area (UGTA) Closure Report for Corrective Action Unit 98: Frenchman Flat Nevada National Security Site, Nevada (NNSA/NFO, 2016).
Winograd, Isaac J.
2001-01-01
In their response to the comments by Thomas [1999], Davisson et al. [1999a] dismiss a large set of potentiometric measurements pertinent to an understanding of the hydrogeology of Yucca and Frenchman Flats in south-central Nevada. This commentary is submitted to demonstrate, first, that their dismissal of this data set is unfounded and, second, that these potentiometric data call into question the central thesis of the original paper by Davisson et al. [1999b].
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramspott, L.D.; McArthur, R.D.
1977-02-18
Exploratory hole Ue5n was drilled to a depth of 514 m in central Frenchman Flat, Nevada Test Site, as part of a program sponsored by the Nuclear Monitoring Office (NMO) of the Advanced Research Projects Agency (ARPA) to determine the geologic and geophysical parameters of selected locations with anomalous seismic signals. The specific goal of drilling Ue5n was to provide the site characteristics for emplacement sites U5b and U5e. We present here data on samples, geophysical logs, lithology and stratigraphy, and depth to the water table. From an analysis of the measurements of the physical properties, a set of recommended values is given.
Winograd, Isaac Judah; Doty, Gene C.
1980-01-01
Knowledge of the magnitude of water-table rise during Pleistocene pluvial climates, and of the resultant shortening of groundwater flow path and reduction in unsaturated zone thickness, is mandatory for a technical evaluation of the Nevada Test Site (NTS) or other arid zone sites as repositories for high-level or transuranic radioactive wastes. The distribution of calcitic veins filling fractures in alluvium, and of tufa deposits between the Ash Meadows spring discharge area and the Nevada Test Site, indicates that discharge from the regional Paleozoic carbonate aquifer during the Late(?) Pleistocene pluvial periods may have occurred at an altitude about 50 meters higher than at present and 14 kilometers northeast of Ash Meadows. Use of the underflow equation (relating discharge to transmissivity, aquifer width, and hydraulic gradient), and various assumptions regarding pluvial recharge, transmissivity, and altitude of groundwater base level, suggest possible rises in potentiometric level in the carbonate aquifer of about 90 meters beneath central Frenchman Flat. During Wisconsin time the rise probably did not exceed 30 meters. Water-level rises beneath Frenchman Flat during future pluvials are unlikely to exceed 30 meters and might even be 10 meters lower than modern levels. Neither the cited rise in potentiometric level in the regional carbonate aquifer, nor the shortened flow path during the Late(?) Pleistocene, precludes utilization of the NTS as a repository for high-level or transuranic-element radioactive wastes, provided other requisite conditions are met at this site. Deep water tables, attendant thick (up to several hundred meters) unsaturated zones, and long groundwater flow paths characterized the region during the Wisconsin Stage and probably throughout the Pleistocene Epoch, and are likely to so characterize it during future glacial periods. (USGS)
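The underflow equation referenced above is the Darcy-type relation Q = T·W·i (discharge equals transmissivity times aquifer width times hydraulic gradient). The sketch below evaluates it for illustrative parameter values; these are assumptions for demonstration, not the values used in the original analysis.

```python
# Underflow (Darcy) equation: Q = T * W * i
#   T: transmissivity (m2/day), W: aquifer width (m), i: hydraulic gradient (-)
# All parameter values below are illustrative assumptions.

def underflow_discharge(T: float, W: float, i: float) -> float:
    """Return the discharge (m3/day) through an aquifer cross-section."""
    return T * W * i

Q = underflow_discharge(T=1000.0, W=5000.0, i=0.001)
print(Q)  # → 5000.0 m3/day
```

Because Q scales linearly with each parameter, the assumed pluvial recharge and base-level altitude translate directly into the range of possible potentiometric rises discussed in the abstract.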
Geology Report: Area 3 Radioactive Waste Management Site DOE/Nevada Test Site, Nye County, Nevada
DOE Office of Scientific and Technical Information (OSTI.GOV)
NSTec Environmental Management
2006-07-01
Surficial geologic studies near the Area 3 Radioactive Waste Management Site (RWMS) were conducted as part of a site characterization program. Studies included evaluation of the potential for future volcanism and Area 3 fault activity that could impact waste disposal operations at the Area 3 RWMS. Future volcanic activity could lead to disruption of the Area 3 RWMS. Local and regional studies of volcanic risk indicate that major changes in regional volcanic activity within the next 1,000 years are not likely. Mapped basalts of Paiute Ridge, Nye Canyon, and nearby Scarp Canyon are Miocene in age. There is a lack of evidence for post-Miocene volcanism in the subsurface of Yucca Flat, and the hazard of basaltic volcanism at the Area 3 RWMS, within the 1,000-year regulatory period, is very low and not a foreseeable future event. Studies included a literature review and data analysis to evaluate unclassified published and unpublished information regarding the Area 3 and East Branch Area 3 faults mapped in Area 3 and southern Area 7. Two trenches were excavated along the Area 3 fault to search for evidence of near-surface movement prior to nuclear testing. Allostratigraphic units and fractures were mapped in Trenches ST02 and ST03. The Area 3 fault is a plane of weakness that has undergone strain resulting from stress imposed by natural events and underground nuclear testing. No major vertical displacement on the Area 3 fault since the Early Holocene, and probably since the Middle Pleistocene, can be demonstrated. The lack of major displacement within this time frame and the minimal vertical extent of minor fractures suggest that waste disposal operations at the Area 3 RWMS will not be impacted substantially by the Area 3 fault within the regulatory compliance period. A geomorphic surface map of Yucca Flat utilizes the recent geomorphology and soil characterization work done in adjacent northern Frenchman Flat.
The approach taken was to adopt the map unit boundaries (line work) of Swadley and Hoover (1990) and re-label these with map unit designations like those in northern Frenchman Flat (Huckins-Gang et al., 1995a,b,c; Snyder et al., 1995a,b,c,d).
Status of the flora and fauna on the Nevada Test Site, 1989--1991
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunter, R.B.
1994-03-01
This volume includes six reports of monitoring work to determine the status of and trends in flora and fauna populations on the Nevada Test Site (NTS) from 1989 through 1991. The Nevada Operations Office of the US Department of Energy has supported monitoring under its Basic Environmental Compliance and Monitoring Program (BECAMP) since 1987. Under this program several undisturbed baseline plots, and numerous plots in disturbed areas, are sampled on annual or three-year cycles. Perennial plant populations, ephemeral plants, small mammals, reptiles, birds, and large mammals were monitored. Monitoring results are reported for five baseline sites, one from each major landform on the NTS (Jackass Flats, Frenchman Flat, Yucca Flat, Pahute Mesa, and Rainier Mesa), and for areas cleared of vegetation by fires, atmospheric nuclear weapons tests, construction, and gophers. Roadside flora and fauna were studied at two locations, and several historical study plots around the NTS were recensused to determine vegetation changes over long time spans. Three subsidence craters resulting from below-ground nuclear weapons tests were also studied. A major influence on plants and animals during the report period was a severe drought during 1989 and 1990, followed by more moderate drought in 1991.
Grasso, Dennis N.
2003-01-01
Surface effects maps were produced for 72 of 89 underground detonations conducted at the Frenchman Flat, Rainier Mesa and Aqueduct Mesa, Climax Stock, Shoshone Mountain, Buckboard Mesa, and Dome Mountain testing areas of the Nevada Test Site between August 10, 1957 (Saturn detonation, Area 12), and September 18, 1992 (Hunters Trophy detonation, Area 12). The "Other Areas" Surface Effects Map Database, which was used to construct the maps shown in this report, contains digital reproductions of these original maps. The database is provided in both ArcGIS (v. 8.2) geodatabase format and ArcView (v. 3.2) shapefile format. It contains sinks, cracks, faults, and other surface effects having a combined (cumulative) length of 136.38 km (84.74 mi). In GIS digital format, the user can view all surface effects maps simultaneously, select and view the surface effects of one or more sites of interest, or view specific surface effects by area or site. The database comprises three map layers: (1) the surface effects maps layer (oase_n27f), (2) the bar symbols layer (oase_bar_n27f), and (3) the ball symbols layer (oase_ball_n27f). Additionally, an annotation layer, named 'Ball_and_Bar_Labels,' and a polygon features layer, named 'Area12_features_poly_n27f,' are contained in the geodatabase version of the database. The annotation layer automatically labels all 295 ball-and-bar symbols shown on these maps. The polygon features layer displays areas of ground disturbances, such as rock spall and disturbed ground caused by the detonations. Shapefile versions of the polygon features layer in Nevada State Plane and Universal Transverse Mercator projections, named 'area12_features_poly_n27f.shp' and 'area12_features_poly_u83m.shp,' are also provided in the archive.
Magnetotelluric Data, Rainier Mesa/Shoshone Mountain, Nevada Test Site, Nevada.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jackie M. Williams; Jay A. Sampson; Brian D. Rodriguez
2006-11-03
The United States Department of Energy (DOE) and the National Nuclear Security Administration (NNSA) at their Nevada Site Office (NSO) are addressing ground-water contamination resulting from historical underground nuclear testing through the Environmental Management (EM) program and, in particular, the Underground Test Area (UGTA) project. From 1951 to 1992, 828 underground nuclear tests were conducted at the Nevada Test Site northwest of Las Vegas. Most of these tests were conducted hundreds of feet above the ground-water table; however, more than 200 of the tests were near or within the water table. This underground testing was limited to specific areas of the Nevada Test Site, including Pahute Mesa, Rainier Mesa/Shoshone Mountain, Frenchman Flat, and Yucca Flat. One issue of concern is the nature of the somewhat poorly constrained pre-Tertiary geology, and its effects on ground-water flow. Ground-water modelers would like to know more about the hydrostratigraphy and geologic structure to support a hydrostratigraphic framework model that is under development for the Rainier Mesa/Shoshone Mountain Corrective Action Unit (Bechtel Nevada, 2006). During 2005, the U.S. Geological Survey (USGS), in cooperation with the DOE and NNSA-NSO, collected and processed data from twenty-six magnetotelluric (MT) and audio-magnetotelluric (AMT) sites at the Nevada Test Site. The 2005 data stations were located on and near Rainier Mesa and Shoshone Mountain to assist in characterizing the pre-Tertiary geology in those areas. These new stations extend the area of the hydrogeologic study previously conducted in Yucca Flat. This work will help refine what is known about the character, thickness, and lateral extent of pre-Tertiary confining units.
In particular, a major goal has been to define the upper clastic confining unit (UCCU; late Devonian to Mississippian-age siliciclastic rocks assigned to the Eleana Formation and Chainman Shale) from the Yucca Flat area westward toward Shoshone Mountain, south to Buckboard Mesa, and north onto Rainier Mesa. Subsequent interpretation will include a three-dimensional (3-D) character analysis and a two-dimensional (2-D) resistivity model. The purpose of this report is to release the MT sounding data for the twenty-six stations shown in figure 1. No interpretation of the data is included here.
Why Are We So Punitive? Some Observations on Recent Incarceration Trends
ERIC Educational Resources Information Center
Shelden, Randall G.
2004-01-01
In the early 19th century, the famous Frenchman Alexis de Tocqueville spent a considerable amount of time touring America and writing about what he saw. He is, of course, most famous for his book Democracy in America (1961), but he also wrote, along with a fellow Frenchman, Gustave de Beaumont, a book called On the Penitentiary System in the United…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patrick Matthews
2011-07-01
Corrective Action Unit 106 comprises the four corrective action sites (CASs) listed below:
• 05-20-02, Evaporation Pond
• 05-23-05, Atmospheric Test Site - Able
• 05-45-04, 306 GZ Rad Contaminated Area
• 05-45-05, 307 GZ Rad Contaminated Area
These sites are being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives (CAAs). Additional information will be obtained by conducting a corrective action investigation before evaluating CAAs and selecting the appropriate corrective action for each CAS. The results of the field investigation will support a defensible evaluation of viable CAAs that will be presented in the Corrective Action Decision Document. The sites will be investigated based on the data quality objectives (DQOs) developed on January 19, 2010, by representatives of the Nevada Division of Environmental Protection and the U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Site Office. The DQO process was used to identify and define the type, amount, and quality of data needed to develop and evaluate appropriate corrective actions for CAU 106. The presence and nature of contamination at CAU 106 will be evaluated based on information collected from a field investigation. The CAU includes land areas impacted by the release of radionuclides from groundwater pumping during the Radionuclide Migration study program (CAS 05-20-02), a weapons-related airdrop test (CAS 05-23-05), and unknown support activities at two sites (CAS 05-45-04 and CAS 05-45-05).
The presence and nature of contamination from surface-deposited radiological contamination from CAS 05-23-05, Atmospheric Test Site - Able, and other types of releases (such as migration and excavation, as well as any potential releases discovered during the investigation) from the remaining three CASs will be evaluated using soil samples collected from the locations most likely containing contamination, if present. Appendix A provides a detailed discussion of the DQO methodology and the DQOs specific to each CAS. The scope of the corrective action investigation for CAU 106 includes the following activities:
• Conduct radiological surveys.
• Collect and submit environmental samples for laboratory analysis to determine internal dose rates and the presence of contaminants of concern.
• If contaminants of concern are present, collect additional samples to define the extent of the contamination and determine the area where the total effective dose at the site exceeds final action levels (i.e., the corrective action boundary).
• Collect samples of investigation-derived waste, as needed, for waste management purposes.
Evapotranspiration Cover for the 92-Acre Area Retired Mixed Waste Pits:Interim CQA Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
The Delphi Groupe, Inc., and J. A. Cesare and Associates, Inc.
This Interim Construction Quality Assurance (CQA) Report is for the 92-Acre Evapotranspiration Cover, Area 5 Waste Management Division (WMD) Retired Mixed Waste Pits, Nevada National Security Site, Nevada, for the period of January 20, 2011, to May 12, 2011. Construction was approved by the Nevada Division of Environmental Protection (NDEP) under the Approval of Corrective Action Decision Document/Corrective Action Plan (CADD/CAP) for Corrective Action Unit (CAU) 111: Area 5 WMD Retired Mixed Waste Pits, Nevada National Security Site, Nevada, on January 6, 2011, pursuant to Subpart XII.8a of the Federal Facility Agreement and Consent Order. The project is located in Area 5 of the Radioactive Waste Management Complex (RWMC) at the Nevada National Security Site (NNSS), formerly known as the Nevada Test Site, in southern Nevada, approximately 65 miles northwest of Las Vegas, Nevada, in Nye County. The project site, in Area 5, is located in a topographically closed basin approximately 14 miles north of Mercury, Nevada, in the north-central part of Frenchman Flat. The Area 5 RWMS uses engineered shallow-land burial cells to dispose of packaged waste. The 92-Acre Area encompasses the southern portion of the Area 5 RWMS, which has been designated for the first final closure operations. This area contains 13 Greater Confinement Disposal (GCD) boreholes, 16 narrow trenches, and 9 broader pits. With the exception of two active pits (P03 and P06), all trenches and pits in the 92-Acre Area had operational covers approximately 2.4 meters thick, at a minimum, in most areas when this project began.
The units within the 92-Acre Area are grouped into the following six informal categories based on physical location, waste types, and regulatory requirements: (1) Pit 3 Mixed Waste Disposal Unit (MWDU); (2) Corrective Action Unit (CAU) 111; (3) CAU 207; (4) low-level waste disposal units; (5) asbestiform low-level waste disposal units; and (6) one transuranic (TRU) waste trench.
Deep Resistivity Structure of Mid Valley, Nevada Test Site, Nevada
Wallin, Erin L.; Rodriguez, Brian D.; Williams, Jackie M.
2009-01-01
The U.S. Department of Energy (DOE) and the National Nuclear Security Administration (NNSA) at their Nevada Site Office (NSO) are addressing ground-water contamination resulting from historical underground nuclear testing through the Environmental Management (EM) program and, in particular, the Underground Test Area (UGTA) project. From 1951 to 1992, 828 underground nuclear tests were conducted at the Nevada Test Site northwest of Las Vegas (DOE UGTA, 2003). Most of these tests were conducted hundreds of feet above the ground-water table; however, more than 200 of the tests were near, or within, the water table. This underground testing was limited to specific areas of the Nevada Test Site, including Pahute Mesa, Rainier Mesa/Shoshone Mountain (RM-SM), Frenchman Flat, and Yucca Flat. One issue of concern is the nature of the somewhat poorly constrained pre-Tertiary geology and its effects on ground-water flow in the area subsequent to a nuclear test. Ground-water modelers would like to know more about the hydrostratigraphy and geologic structure to support a hydrostratigraphic framework model that is under development for the Rainier Mesa/Shoshone Mountain (RM-SM) Corrective Action Unit (CAU) (National Security Technologies, 2007). During 2003, the U.S. Geological Survey (USGS), in cooperation with the DOE and NNSA-NSO, collected and processed data at the Nevada Test Site in and near Yucca Flat (YF) to help define the character, thickness, and lateral extent of the pre-Tertiary confining units. We collected 51 magnetotelluric (MT) and audio-magnetotelluric (AMT) stations for that research (Williams and others, 2005a, 2005b, 2005c, 2005d, 2005e, and 2005f). In early 2005 we extended that research with 26 additional MT data stations (Williams and others, 2006) located on and near Rainier Mesa and Shoshone Mountain (RM-SM).
The new stations extended the area of the hydrogeologic study previously conducted in Yucca Flat, further refining what is known about the pre-Tertiary confining units. In particular, a major goal was to define the extent of the upper clastic confining unit (UCCU). The UCCU is composed of late Devonian to Mississippian siliciclastic rocks assigned to the Eleana Formation and Chainman Shale (National Security Technologies, 2007). The UCCU underlies the Yucca Flat area and extends southwestward toward Shoshone Mountain, westward toward Buckboard Mesa, and northwestward toward Rainier Mesa. Late in 2005 we collected data at an additional 14 MT stations in Mid Valley, CP Hills, and northern Yucca Flat. That work was done to better determine the extent and thickness of the UCCU near the boundary between the southeastern RM-SM CAU and the southwestern YF CAU, and also in the northern YF CAU. The MT data have been released in a separate U.S. Geological Survey report (Williams and others, 2007). The Nevada Test Site magnetotelluric data interpretation presented in this report includes the results of detailed two-dimensional (2-D) resistivity modeling for each profile and inferences on the three-dimensional (3-D) character of the geology within the region.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konstantinidis, Anastasios C.; Olivo, Alessandro; Speller, Robert D.
2011-12-15
Purpose: The x-ray performance evaluation of digital x-ray detectors is based on the calculation of the modulation transfer function (MTF), the noise power spectrum (NPS), and the resultant detective quantum efficiency (DQE). The flat images used for the extraction of the NPS should not contain any fixed-pattern noise (FPN), to avoid contamination from nonstochastic processes. The "gold standard" method used for the reduction of the FPN (i.e., the different gain between pixels) in linear x-ray detectors is based on normalization with an average reference flat-field. However, the noise in the corrected image depends on the number of flat frames used for the average flat image. The aim of this study is to modify the standard gain correction algorithm to make it independent of the number of reference flat frames used. Methods: Many publications suggest the use of 10-16 reference flat frames, while other studies use higher numbers (e.g., 48 frames) to reduce the propagated noise from the average flat image. This study quantifies experimentally the effect of the number of reference flat frames used on the NPS and DQE values and appropriately modifies the gain correction algorithm to compensate for this effect. Results: It is shown that, using the suggested gain correction algorithm, a minimum number of reference flat frames (i.e., down to one frame) can be used to eliminate the FPN from the raw flat image. This saves computer memory and time during the x-ray performance evaluation. Conclusions: The authors show that the method presented in the study (a) leads to the maximum DQE value that one would have by using the conventional method and a very large number of frames, and (b) has been compared to an independent gain correction method based on the subtraction of flat-field images, leading to identical DQE values. They believe this provides robust validation of the proposed method.
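The dependence described above can be sketched numerically: residual noise after conventional gain correction shrinks as more reference flat frames are averaged. Below is a minimal, self-contained simulation; all detector parameters (gain spread, signal level, frame size) are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative detector model: a fixed per-pixel gain map (the FPN source)
# plus stochastic noise. These numbers are assumptions for the sketch.
gain = rng.normal(1.0, 0.05, size=(64, 64))
mean_signal = 1000.0

def acquire_flat():
    """One flat-field frame: gain-modulated signal with random noise."""
    return gain * rng.normal(mean_signal, np.sqrt(mean_signal), size=gain.shape)

def gain_correct(raw, n_ref_flats):
    """Conventional correction: normalize by the average of N reference flats.
    Residual noise in the corrected image depends on N, which is the
    dependence the paper's modified algorithm removes."""
    ref = np.mean([acquire_flat() for _ in range(n_ref_flats)], axis=0)
    return raw / ref * ref.mean()

raw = acquire_flat()
few = gain_correct(raw, n_ref_flats=4)
many = gain_correct(raw, n_ref_flats=64)

# More reference flats -> less noise propagated from the average flat,
# so the corrected image is more uniform.
assert few.std() > many.std()
```

With only a handful of reference frames, the noise of the average flat propagates into every corrected image, which is why the study's goal of a correction that works down to a single reference frame matters for routine DQE measurements.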
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-19
...-Rolled Carbon-Quality Steel Products From Brazil: Correction to Notice of Antidumping Duty Order AGENCY... certain hot-rolled flat-rolled carbon-quality steel products from Brazil. See Antidumping Duty Order: Certain Hot-Rolled Flat-Rolled Carbon-Quality Steel Products From Brazil, 67 FR 11093 (March 12, 2002...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, Robert; Marutzky, Sam
2000-09-01
This Corrective Action Investigation Plan contains the U.S. Department of Energy, Nevada Operations Office's (DOE/NV's) approach to collect the data necessary to evaluate Corrective Action Alternatives (CAAs) appropriate for the closure of Corrective Action Unit (CAU) 97 under the Federal Facility Agreement and Consent Order (FFACO). Corrective Action Unit 97, collectively known as the Yucca Flat/Climax Mine CAU, consists of 720 Corrective Action Sites (CASs). The Yucca Flat/Climax Mine CAU extends over several areas of the NTS and constitutes one of several areas used for underground nuclear testing in the past. The nuclear tests resulted in groundwater contamination in the vicinity as well as downgradient of the underground test areas. Based on site history, the Yucca Flat underground nuclear tests were conducted in alluvial, volcanic, and carbonate rocks, whereas the Climax Mine tests were conducted in an igneous intrusion located in northern Yucca Flat. Particle-tracking simulations performed during the regional evaluation indicate that the local Climax Mine groundwater flow system merges into the much larger Yucca Flat groundwater flow system during the 1,000-year time period of interest. Addressing these two areas jointly and investigating them simultaneously as a combined CAU has been determined to be the best way to proceed with corrective action investigation (CAI) activities. The purpose and scope of the CAI include characterization activities and model development conducted in five major sequential steps designed to be consistent with the FFACO Underground Test Area Project's strategy to predict the location of the contaminant boundary, develop and implement a corrective action, and close each CAU. The results of this field investigation will support a defensible evaluation of CAAs in the subsequent corrective action decision document.
Restoration of Lumbar Lordosis in Flat Back Deformity: Optimal Degree of Correction
Kim, Ki-Tack; Lee, Sang-Hun; Kim, Hyo-Jong; Kim, Jung-Youn; Lee, Jung-Hee
2015-01-01
Study Design A retrospective comparative study. Purpose To provide an ideal correction angle of lumbar lordosis (LL) in degenerative flat back deformity. Overview of Literature The degree of correction in degenerative flat back in relation to pelvic incidence (PI) remains controversial. Methods Forty-nine patients with flat back deformity who underwent corrective surgery were enrolled. A posterior-anterior-posterior sequential operation was performed. Mean age and mean follow-up period were 65.6 years and 24.2 months, respectively. We divided the patients into two groups based on immediate postoperative radiographs: the optimal correction (OC) group (PI-9°≤LL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthews, Patrick
2014-01-01
The purpose of this Corrective Action Decision Document/Closure Report is to provide justification and documentation supporting the recommendation that no further corrective action is needed for CAU 105 based on the implementation of the corrective actions. Corrective action investigation (CAI) activities were performed from October 22, 2012, through May 23, 2013, as set forth in the Corrective Action Investigation Plan for Corrective Action Unit 105: Area 2 Yucca Flat Atmospheric Test Sites; and in accordance with the Soils Activity Quality Assurance Plan, which establishes requirements, technical planning, and general quality practices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foley, T.A. Jr.
The primary objective of this report is to compare the results of delta surface interpolation with kriging on four large sets of radiological data sampled in the Frenchman Lake region at the Nevada Test Site. The results of kriging, described in Barnes, Giacomini, Reiman, and Elliott, are very similar to those using the delta surface interpolant. The other topic studied is reducing the number of sample points while obtaining results similar to those using all of the data. The positive results here suggest that great savings of time and money can be made. Furthermore, the delta surface interpolant is viewed as a contour map and as a three-dimensional surface. These graphical representations help in the analysis of the large sets of radiological data.
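The two ideas in this abstract, interpolating scattered field samples onto a surface and thinning the sample set, can be illustrated with a short sketch. Inverse-distance weighting is used here as a generic stand-in (the report's delta surface method is a different, triangulation-based interpolant), and the data are synthetic:

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Estimate z at query points as a distance-weighted mean of known samples
    (inverse-distance weighting, a simple scattered-data interpolator)."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)          # avoid division by zero at sample points
    w = d ** -power
    return (w @ z_known) / w.sum(axis=1)

rng = np.random.default_rng(2)
pts = rng.uniform(0.0, 1.0, size=(200, 2))      # synthetic sample locations
z = np.sin(3.0 * pts[:, 0]) + pts[:, 1]         # smooth synthetic field

# The interpolant honors the data: querying at a sample location returns
# (essentially) the sampled value.
assert abs(idw(pts, z, pts[:1])[0] - z[0]) < 1e-6

# Data reduction in the spirit of the report: estimates from every fourth
# sample can be compared against estimates from the full set.
query = np.array([[0.5, 0.5]])
full_est = idw(pts, z, query)[0]
sub_est = idw(pts[::4], z[::4], query)[0]
```

For a smooth field, the thinned-sample estimate tracks the full-sample estimate closely, which is the kind of comparison the report used to argue that fewer sample points can suffice.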
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ragsdale, H.L.; Rhoads, W.A.
1974-01-01
This report illustrates the feasibility of using temporally delayed vegetation assays to determine radiation damage, by documenting the radiation damage resulting from the accidental venting of radioactive materials during Project Pinstripe, Frenchman's Flat, Nevada Test Site, in April 1966. Evidence of desert shrub radiation damage was first observed and photographed in April 1968. Systematic study of the vegetation was initiated in October 1970, and evidence of radiation damage was documented over 72.9 hectares adjacent to the vent. Beta doses were estimated at 15-21 krads based on gamma exposure dose measurements. The minimum beta dose estimate was substantially greater than the theoretical lethal dose for the shrub Larrea divaricata. Radiation damage to the shrubs Larrea divaricata, Ephedra funerea, and Atriplex confertifolia was expressed as differential bud mortality, partial death of shrub crowns with and without crown regrowth, and total shrub crown death without crown regrowth. Each of the shrub populations was statistically different from its control population with respect to the distribution of individuals among damage classes. Generally, damage patterns were similar to those observed at two previously studied Plowshare events.
4. Light tower and keeper's house, view west, southeast and ...
4. Light tower and keeper's house, view west, southeast and northeast sides - Baker Island Light, Lightkeeper's House, Just east of Cranberry Isles, at entrance to Frenchman Bay, Bar Harbor, Hancock County, ME
1. Keeper's house and light tower, view northeast, northwest and ...
1. Keeper's house and light tower, view northeast, northwest and southwest sides - Baker Island Light, Lightkeeper's House, Just east of Cranberry Isles, at entrance to Frenchman Bay, Bar Harbor, Hancock County, ME
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farnham, Irene
This corrective action decision document (CADD)/corrective action plan (CAP) has been prepared for Corrective Action Unit (CAU) 97, Yucca Flat/Climax Mine, Nevada National Security Site (NNSS), Nevada. The Yucca Flat/Climax Mine CAU is located in the northeastern portion of the NNSS and comprises 720 corrective action sites. A total of 747 underground nuclear detonations took place within this CAU between 1957 and 1992 and resulted in the release of radionuclides (RNs) in the subsurface in the vicinity of the test cavities. The CADD portion describes the Yucca Flat/Climax Mine CAU data-collection and modeling activities completed during the corrective action investigation (CAI) stage, presents the corrective action objectives, and describes the actions recommended to meet the objectives. The CAP portion describes the corrective action implementation plan. The CAP presents CAU regulatory boundary objectives and initial use-restriction boundaries identified and negotiated by DOE and the Nevada Division of Environmental Protection (NDEP). The CAP also presents the model evaluation process designed to build confidence that the groundwater flow and contaminant transport modeling results can be used for the regulatory decisions required for CAU closure. The UGTA strategy assumes that active remediation of subsurface RN contamination is not feasible with current technology. As a result, the corrective action is based on a combination of characterization and modeling studies, monitoring, and institutional controls. The strategy is implemented through a four-stage approach that comprises the following: (1) corrective action investigation plan (CAIP), (2) CAI, (3) CADD/CAP, and (4) closure report (CR) stages.
McCaw, Travis J; Micka, John A; Dewerd, Larry A
2011-10-01
Gafchromic® EBT2 film has a yellow marker dye incorporated into the active layer of the film that can be used to correct the film response for small variations in thickness. This work characterizes the effect of the marker-dye correction on the uniformity and uncertainty of dose measurements with EBT2 film. The effect of variations in time postexposure on the uniformity of EBT2 is also investigated. EBT2 films were used to measure the flatness of a ⁶⁰Co field to provide a high-spatial-resolution evaluation of the film uniformity. As a reference, the flatness of the ⁶⁰Co field was also measured with Kodak EDR2 films. The EBT2 films were digitized with a flatbed document scanner 24, 48, and 72 h postexposure, and the images were analyzed using three methods: (1) the manufacturer-recommended marker-dye correction, (2) an in-house marker-dye correction, and (3) a net optical density (OD) measurement in the red color channel. The field flatness was calculated from orthogonal profiles through the center of the field using each analysis method, and the results were compared with the EDR2 measurements. Uncertainty was propagated through a dose calculation for each analysis method. The change in the measured field flatness for increasing times postexposure was also determined. Both marker-dye correction methods improved the field flatness measured with EBT2 film relative to the net OD method, with a maximum improvement of 1% using the manufacturer-recommended correction. However, the manufacturer-recommended correction also resulted in a dose uncertainty an order of magnitude greater than that of the other two methods. The in-house marker-dye correction lowered the dose uncertainty relative to the net OD method. The measured field flatness did not exhibit any unidirectional change with increasing time postexposure and showed a maximum change of 0.3%. The marker dye in EBT2 can be used to improve the response uniformity of the film.
Depending on the film analysis method used, however, application of a marker-dye correction can improve or degrade the dose uncertainty relative to the net OD method. The uniformity of EBT2 was found to be independent of the time postexposure.
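The two film-analysis quantities compared above can be sketched as follows. The function names and pixel values are illustrative assumptions, and the ratio-based thickness correction is a deliberately simplified stand-in for the vendor's marker-dye algorithm (the yellow dye mainly affects the blue channel, so the blue signal carries thickness information):

```python
import numpy as np

def net_od(exposed, unexposed):
    """Net optical density in a color channel: log10 of the
    unexposed-to-exposed pixel-value (transmission) ratio."""
    return np.log10(np.asarray(unexposed, dtype=float) /
                    np.asarray(exposed, dtype=float))

def thickness_corrected_response(red, blue):
    """Simplified marker-dye correction: normalize the dose-sensitive red
    channel by the blue channel, which encodes active-layer thickness
    via the yellow marker dye (a stand-in for the vendor algorithm)."""
    return np.asarray(red, dtype=float) / np.asarray(blue, dtype=float)

# Two pixels; the second sits on a thicker spot that attenuates both
# channels by the same factor, so the ratio is uniform while raw red is not.
red = np.array([20000.0, 19000.0])
blue = np.array([30000.0, 28500.0])
corrected = thickness_corrected_response(red, blue)
assert np.allclose(corrected[0], corrected[1])

# Net OD from unexposed and exposed red-channel readings.
assert abs(net_od(40000.0, 50000.0) - np.log10(1.25)) < 1e-12
```

This toy case shows why a marker-dye correction can improve uniformity: thickness variations modulate both channels together, so a ratio cancels them, while the uncertainty cost of the correction depends on how the channels are combined, as the study found.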
DOE Office of Scientific and Technical Information (OSTI.GOV)
Altunbas, Cem, E-mail: caltunbas@gmail.com; Lai, Chao-Jen; Zhong, Yuncheng
Purpose: In using flat panel detectors (FPDs) for cone beam computed tomography (CBCT), pixel gain variations may lead to structured nonuniformities in projections and ring artifacts in CBCT images. Such gain variations can be caused by changes in detector entrance exposure levels or beam hardening, and they are not accounted for by conventional flat field correction methods. In this work, the authors presented a method to identify isolated pixel clusters that exhibit gain variations and proposed a pixel gain correction (PGC) method to suppress both beam hardening and exposure-level-dependent gain variations. Methods: To modulate both beam spectrum and entrance exposure, flood field FPD projections were acquired using beam filters with varying thicknesses. "Ideal" pixel values were estimated by performing polynomial fits in both raw and flat field corrected projections. Residuals were calculated by taking the difference between measured and ideal pixel values to identify clustered image and FPD artifacts in flat field corrected and raw images, respectively. To correct clustered image artifacts, the ratios of ideal to measured pixel values in filtered images were utilized as pixel-specific gain correction factors, referred to as the PGC method, and they were tabulated as a function of pixel value in a look-up table. Results: 0.035% of detector pixels led to clustered image artifacts in flat field corrected projections, where 80% of these pixels were traced back and linked to artifacts in the FPD. The performance of the PGC method was tested in a variety of imaging conditions and phantoms. The PGC method reduced clustered image artifacts and fixed-pattern noise in projections, and ring artifacts in CBCT images. Conclusions: Clustered projection image artifacts that lead to ring artifacts in CBCT can be better identified with our artifact detection approach.
When compared to the conventional flat field correction method, the proposed PGC method enables characterization of nonlinear pixel gain variations as a function of change in x-ray spectrum and intensity. Hence, it can better suppress image artifacts due to beam hardening as well as artifacts that arise from detector entrance exposure variation.
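The residual-based detection step described in the Methods can be sketched as follows: fit a smooth polynomial along each detector row of a flood-field frame, take measured-minus-ideal residuals, and flag pixels whose residuals stand out. The fit order, threshold, and data here are assumptions for illustration, not the authors' parameters:

```python
import numpy as np

def flag_outlier_pixels(flood, order=3, k=5.0):
    """Flag pixels whose residuals from a smooth row-wise polynomial fit
    exceed k times the row's residual standard deviation."""
    flagged = np.zeros(flood.shape, dtype=bool)
    x = np.arange(flood.shape[1])
    for i, row in enumerate(flood):
        coeffs = np.polyfit(x, row, order)
        ideal = np.polyval(coeffs, x)    # "ideal" pixel values from the fit
        resid = row - ideal              # measured minus ideal
        flagged[i] = np.abs(resid) > k * resid.std()
    return flagged

# Synthetic flood field: smooth signal plus noise, with a small cluster
# of pixels given a ~10% gain error (illustrative values only).
rng = np.random.default_rng(1)
flood = rng.normal(1000.0, 3.0, size=(32, 256))
flood[10, 100:103] *= 1.10

flags = flag_outlier_pixels(flood)
assert flags[10, 100:103].all()          # the seeded cluster is detected
```

In the paper this detection feeds the correction: the ratio of ideal to measured values at flagged pixels, tabulated against pixel value across filter thicknesses, becomes the gain look-up table applied to subsequent projections.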
Correcting Thermal Deformations in an Active Composite Reflector
NASA Technical Reports Server (NTRS)
Bradford, Samuel C.; Agnes, Gregory S.; Wilkie, William K.
2011-01-01
Large, high-precision composite reflectors for future space missions are costly to manufacture, and heavy. An active composite reflector capable of adjusting shape in situ to maintain required tolerances can be lighter and cheaper to manufacture. An active composite reflector testbed was developed that uses an array of piezoelectric composite actuators embedded in the back face sheet of a 0.8-m reflector panel. Each individually addressable actuator can be commanded from −500 to +1,500 V, and the flatness of the panel can be controlled to tolerances of 100 nm. Measuring the surface flatness at this resolution required the use of a speckle holography interferometer system in the Precision Environmental Test Enclosure (PETE) at JPL. The existing testbed combines the PETE for test environment stability, the speckle holography system for measuring out-of-plane deformations, the active panel including an array of individually addressable actuators, a FLIR thermal camera to measure thermal profiles across the reflector, and a heat source. Use of an array of flat piezoelectric actuators to correct thermal deformations is a promising new application for these actuators, as is the use of this actuator technology for surface flatness and wavefront control. An isogrid of these actuators is moving one step closer to a fully active face sheet, with the significant advantage of ease in manufacturing. No extensive rib structure or other actuation backing structure is required, as these actuators can be applied directly to an easy-to-manufacture flat surface. Any mission with a surface flatness requirement for a panel or reflector structure could adopt this actuator array concept to create lighter structures and enable improved performance on orbit. The thermal environment on orbit tends to include variations in temperature during shadowing or changes in angle.
Because of this, a purely passive system is not an effective way to maintain flatness at the scale of microns over several meters. This technology specifically addresses correcting thermal deformations of a large, flat structure to a specified tolerance. However, the underlying concept (an array of actuators on the back face of a panel for correcting the flatness of the front face) could be extended to many applications, including energy harvesting, changing the wavefront of an optical system, and correcting the flatness of an array of segmented deployable panels.
6. Light tower, detail of stairs leading from first landing ...
6. Light tower, detail of stairs leading from first landing to cupola, looking east - Baker Island Light, Lightkeeper's House, Just east of Cranberry Isles, at entrance to Frenchman Bay, Bar Harbor, Hancock County, ME
5. Light tower and corner of keeper's house, view northeast, ...
5. Light tower and corner of keeper's house, view northeast, northwest and southwest sides - Baker Island Light, Lightkeeper's House, Just east of Cranberry Isles, at entrance to Frenchman Bay, Bar Harbor, Hancock County, ME
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthews, Patrick
2013-09-01
This Corrective Action Decision Document/Closure Report presents information supporting the closure of Corrective Action Unit (CAU) 105: Area 2 Yucca Flat Atmospheric Test Sites, Nevada National Security Site, Nevada. CAU 105 comprises the following five corrective action sites (CASs): -02-23-04 Atmospheric Test Site - Whitney Closure In Place -02-23-05 Atmospheric Test Site T-2A Closure In Place -02-23-06 Atmospheric Test Site T-2B Clean Closure -02-23-08 Atmospheric Test Site T-2 Closure In Place -02-23-09 Atmospheric Test Site - Turk Closure In Place The purpose of this Corrective Action Decision Document/Closure Report is to provide justification and documentation supporting the recommendation that no further corrective action is needed for CAU 105 based on the implementation of the corrective actions. Corrective action investigation (CAI) activities were performed from October 22, 2012, through May 23, 2013, as set forth in the Corrective Action Investigation Plan for Corrective Action Unit 105: Area 2 Yucca Flat Atmospheric Test Sites; and in accordance with the Soils Activity Quality Assurance Plan, which establishes requirements, technical planning, and general quality practices.
Completion Report for Well ER-3-3 Corrective Action Unit 97: Yucca Flat/Climax Mine, Revision 0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wurtz, Jeffrey; Rehfeldt, Ken
Well ER-3-3 was drilled for the U.S. Department of Energy, National Nuclear Security Administration Nevada Field Office in support of the Underground Test Area (UGTA) Activity. The well was drilled and completed from February 21 to March 15, 2016, as part of the Corrective Action Investigation Plan (CAIP) for Yucca Flat/Climax Mine Corrective Action Unit (CAU) 97. The primary purpose of the well was to collect hydrogeologic data to assist in validating concepts of the flow system within the Yucca Flat/Climax Mine CAU, and to test for potential radionuclides in groundwater from the WAGTAIL (U3an) underground test.
Propulsion of a fin whale (Balaenoptera physalus): why the fin whale is a fast swimmer.
Bose, N; Lien, J
1989-07-22
Measurements of an immature fin whale (Balaenoptera physalus), which died as a result of entrapment in fishing gear near Frenchmans Cove, Newfoundland (47 degrees 9' N, 55 degrees 25' W), were made to obtain estimates of volume and surface area of the animal. Detailed measurements of the flukes, both planform and sections, were also obtained. A strip theory was developed to calculate the hydrodynamic performance of the whale's flukes as an oscillating propeller. This method is based on linear, two-dimensional, small-amplitude, unsteady hydrofoil theory with correction factors used to account for the effects of finite span and finite amplitude motion. These correction factors were developed from theoretical results of large-amplitude heaving motion and unsteady lifting-surface theory. A model that makes an estimate of the effects of viscous flow on propeller performance was superimposed on the potential-flow results. This model estimates the drag of the hydrofoil sections by assuming that the drag is similar to that of a hydrofoil section in steady flow. The performance characteristics of the flukes of the fin whale were estimated by using this method. The effects of the different correction factors, and of the frictional drag of the fluke sections, are emphasized. Frictional effects in particular were found to reduce the hydrodynamic efficiency of the flukes significantly. The results are discussed and compared with the known characteristics of fin-whale swimming.
System for photometric calibration of optoelectronic imaging devices especially streak cameras
Boni, Robert; Jaanimagi, Paul
2003-11-04
A system for the photometric calibration of streak cameras and similar imaging devices provides a precise knowledge of the camera's flat-field response as well as a mapping of the geometric distortions. The system provides the flat-field response, representing the spatial variations in the sensitivity of the recorded output, with a signal-to-noise ratio (SNR) greater than can be achieved in a single submicrosecond streak record. The measurement of the flat-field response is carried out by illuminating the input slit of the streak camera with a signal that is uniform in space and constant in time. This signal is generated by passing a continuous-wave source through an optical homogenizer made up of a light pipe or pipes in which the illumination typically makes several bounces before exiting as a spatially uniform source field. The rectangular cross section of the homogenizer is matched to the usable photocathode area of the streak tube. The flat-field data set is obtained by using a slow streak ramp with a period from one millisecond to ten seconds, nominally one second in duration. The system also provides a mapping of the geometric distortions by spatially and temporally modulating the output of the homogenizer and obtaining a data set using the slow streak ramps. All data sets are acquired using a CCD camera and stored on a computer, which is used to calculate all relevant corrections to the signal data sets. The signal and flat-field data sets are both corrected for geometric distortions prior to applying the flat-field correction. Absolute photometric calibration is obtained by measuring the output fluence of the homogenizer with a "standard-traceable" meter and relating that to the CCD pixel values for a self-corrected flat-field data set.
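The core flat-field operation described here (normalize the flat-field record, then divide it out of the signal data) can be sketched as follows. The function name, the optional dark frame, and the synthetic gain ripple are illustrative, not part of the calibration system described above:

```python
import numpy as np

def flat_field_correct(signal, flat, dark=None):
    """Divide out the normalized flat-field response from a signal image.

    signal: raw signal image (2-D array)
    flat:   flat-field image recorded under spatially uniform illumination
    dark:   optional dark frame subtracted from both images
    """
    signal = signal.astype(float)
    flat = flat.astype(float)
    if dark is not None:
        signal = signal - dark
        flat = flat - dark
    # Normalize the flat field so the correction preserves mean intensity
    flat_norm = flat / flat.mean()
    return signal / flat_norm

# Example: a uniform scene viewed through a detector with ~10% gain ripple
gain = 1.0 + 0.1 * np.sin(np.linspace(0.0, 6.28, 64))[None, :]
scene = np.full((64, 64), 100.0)
raw = scene * gain                      # ripple imprinted on the signal
flat = np.full((64, 64), 50.0) * gain   # same ripple in the flat field
corrected = flat_field_correct(raw, flat)
```

After correction the ripple cancels and the recovered scene is flat to within the normalization of the flat field.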
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthews, Patrick
2013-11-01
This Corrective Action Decision Document/Closure Report presents information supporting the closure of Corrective Action Unit (CAU) 570: Area 9 Yucca Flat Atmospheric Test Sites, Nevada National Security Site, Nevada. This complies with the requirements of the Federal Facility Agreement and Consent Order (FFACO) that was agreed to by the State of Nevada; U.S. Department of Energy (DOE), Environmental Management; U.S. Department of Defense; and DOE, Legacy Management. The purpose of the CADD/CR is to provide justification and documentation supporting the recommendation that no further corrective action is needed.
Removal of ring artifacts in microtomography by characterization of scintillator variations.
Vågberg, William; Larsson, Jakob C; Hertz, Hans M
2017-09-18
Ring artifacts reduce image quality in tomography and arise from faulty detector calibration. In microtomography, we have identified that ring artifacts can arise from high-spatial-frequency variations in the scintillator thickness. Such variations are normally removed by a flat-field correction. However, as the spectrum changes, e.g., due to beam hardening, the detector response varies non-uniformly, introducing ring artifacts that persist after flat-field correction. In this paper, we present a method to correct for ring artifacts from variations in scintillator thickness by using a simple method to characterize the local scintillator response. The method addresses the actual physical cause of the ring artifacts, contrary to many other ring artifact removal methods that rely only on image post-processing. By applying the technique to an experimental phantom tomography, we show that ring artifacts are strongly reduced compared to making only a flat-field correction.
SU-F-T-67: Correction Factors for Monitor Unit Verification of Clinical Electron Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haywood, J
Purpose: Monitor units calculated by electron Monte Carlo treatment planning systems are often higher than TG-71 hand calculations for a majority of patients. Here I’ve calculated tables of geometry and heterogeneity correction factors for correcting electron hand calculations. Method: A flat water phantom with spherical volumes having radii ranging from 3 to 15 cm was created. The spheres were centered with respect to the flat water phantom, and all shapes shared a surface at 100 cm SSD. Dmax dose at 100 cm SSD was calculated for each cone and energy on the flat phantom and for the spherical volumes in the absence of the flat phantom. The ratio of dose in the sphere to dose in the flat phantom defined the geometrical correction factor. The heterogeneity factors were then calculated from the unrestricted collisional stopping power for tissues encountered in electron beam treatments. These factors were then used in patient second check calculations. Patient curvature was estimated by the largest sphere that aligns to the patient contour, and appropriate tissue density was read from the physical properties provided by the CT. The resulting MU were compared to those calculated by the treatment planning system and TG-71 hand calculations. Results: The geometry and heterogeneity correction factors range from ∼(0.8–1.0) and ∼(0.9–1.01), respectively, for the energies and cones presented. Percent differences for TG-71 hand calculations drop from ∼(3–14)% to ∼(0–2)%. Conclusion: Monitor units calculated with the correction factors typically decrease the percent difference to under actionable levels, < 5%. While these correction factors work for a majority of patients, there are some patient anatomies that do not fit the assumptions made. Using these factors in hand calculations is a first step in bringing the verification monitor units into agreement with the treatment planning system MU.
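Since each factor is a dose ratio (sphere or tissue relative to the flat water phantom) and is less than one, applying them to a hand calculation plausibly means dividing the TG-71 MU by their product, which raises the MU toward the Monte Carlo value. The sketch below rests on that assumption; the numbers are illustrative values from the reported ranges, not tabulated factors:

```python
def corrected_mu(mu_tg71, f_geometry, f_heterogeneity):
    """Scale a TG-71 hand-calculated MU by curvature and tissue factors.

    Each factor expresses a dose reduction relative to the flat water
    phantom, so dividing the hand-calculated MU by the product raises
    the MU toward the treatment-planning-system value (an assumed
    convention, not the abstract's explicit formula).
    """
    return mu_tg71 / (f_geometry * f_heterogeneity)

# Illustrative factors from the reported ranges (~0.8-1.0 and ~0.9-1.01)
mu = corrected_mu(100.0, f_geometry=0.92, f_heterogeneity=0.97)
```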
NASA Astrophysics Data System (ADS)
Bajaj, Akash; Janet, Jon Paul; Kulik, Heather J.
2017-11-01
The flat-plane condition is the union of two exact constraints in electronic structure theory: (i) energetic piecewise linearity with fractional electron removal or addition and (ii) invariant energetics with change in electron spin in a half-filled orbital. Semi-local density functional theory (DFT) fails to recover the flat plane, exhibiting convex fractional charge errors (FCE) and concave fractional spin errors (FSE) that are related to delocalization and static correlation errors. We previously showed that DFT+U eliminates FCE but now demonstrate that, like other widely employed corrections (i.e., Hartree-Fock exchange), it worsens FSE. To find an alternative strategy, we examine the shape of semi-local DFT deviations from the exact flat plane and find this shape to be remarkably consistent across ions and molecules. We introduce the judiciously modified DFT (jmDFT) approach, wherein corrections are constructed from few-parameter, low-order functional forms that fit the shape of semi-local DFT errors. We select one such physically intuitive form and incorporate it self-consistently to correct semi-local DFT. We demonstrate on model systems that jmDFT represents the first easy-to-implement, no-overhead approach to recovering the flat plane from semi-local DFT.
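The two constraints that make up the flat plane can be stated compactly; this is the standard textbook form, with N_0 the integer electron number and q in [0,1] the fractional charge:

```latex
% (i) Piecewise linearity in fractional electron number:
E(N_0 + q) = (1 - q)\,E(N_0) + q\,E(N_0 + 1)

% (ii) Constancy under fractional spin in a half-filled level: the
%      energy is independent of how one electron is split between the
%      up and down occupations,
E[n_\uparrow,\, n_\downarrow] = E[1,\, 0]
\quad \text{for all } n_\uparrow + n_\downarrow = 1 .
```

Convex deviations from (i) are the fractional charge errors and concave deviations from (ii) are the fractional spin errors discussed above.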
33 CFR 110.130 - Bar Harbor, Maine.
Code of Federal Regulations, 2010 CFR
2010-07-01
Title 33, Navigation and Navigable Waters; ANCHORAGE REGULATIONS, Anchorage Grounds; § 110.130 Bar Harbor, Maine. (a) Anchorage grounds. (1) Anchorage “A” is that portion of Frenchman Bay, Bar Harbor, ME enclosed by a rhumb line connecting the following...
Projective flatness in the quantisation of bosons and fermions
NASA Astrophysics Data System (ADS)
Wu, Siye
2015-07-01
We compare the quantisation of linear systems of bosons and fermions. We recall the appearance of projectively flat connection and results on parallel transport in the quantisation of bosons. We then discuss pre-quantisation and quantisation of fermions using the calculus of fermionic variables. We define a natural connection on the bundle of Hilbert spaces and show that it is projectively flat. This identifies, up to a phase, equivalent spinor representations constructed by various polarisations. We introduce the concept of metaplectic correction for fermions and show that the bundle of corrected Hilbert spaces is naturally flat. We then show that the parallel transport in the bundle of Hilbert spaces along a geodesic is a rescaled projection provided that the geodesic lies within the complement of a cut locus. Finally, we study the bundle of Hilbert spaces when there is a symmetry.
NASA Astrophysics Data System (ADS)
Medjoubi, K.; Dawiec, A.
2017-12-01
A simple method is proposed in this work for quantitative evaluation of the quality of the threshold adjustment and the flat-field correction of Hybrid Photon Counting pixel (HPC) detectors. This approach is based on the Photon Transfer Curve (PTC), corresponding to the measurement of the standard deviation of the signal in flat-field images. Fixed pattern noise (FPN), easily identifiable in the curve, is linked to the residual threshold dispersion, sensor inhomogeneity, and the remnant errors in flat-fielding techniques. The analytical expression of the signal-to-noise ratio curve is developed for HPC detectors and successfully used as a fit function applied to experimental data obtained with the XPAD detector. The FPN, quantified by the photon response non-uniformity (PRNU), is measured for different configurations (threshold adjustment method and flat-fielding technique) and is shown to identify the settings that yield the best image quality from a commercial or R&D detector.
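The PTC idea, reading the PRNU off the noise-versus-signal curve in the regime where fixed-pattern noise dominates shot noise, can be illustrated with synthetic flat-field frames. The 2% PRNU, the frame size, and the signal levels are made up for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(1)
prnu_true = 0.02                       # 2% pixel response non-uniformity
gain_map = 1.0 + prnu_true * rng.standard_normal((128, 128))

def flat_frame(mean_signal):
    """Simulate one flat-field frame: Poisson shot noise times a fixed
    per-pixel gain pattern (the source of FPN)."""
    photons = rng.poisson(mean_signal, gain_map.shape)
    return photons * gain_map

means, variances = [], []
for s in [100, 1000, 10000, 100000]:   # points along the PTC
    f = flat_frame(float(s))
    means.append(f.mean())
    variances.append(f.var())

# At high signal the FPN term (PRNU * S)^2 dominates the shot term S,
# so sigma/S approaches the PRNU.
prnu_est = np.sqrt(variances[-1]) / means[-1]
```

On real detector data one would fit the full analytical SNR curve rather than take a single high-signal point, but the limiting behavior is the same.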
Filli, Lukas; Marcon, Magda; Scholz, Bernhard; Calcagni, Maurizio; Finkenstädt, Tim; Andreisek, Gustav; Guggenberger, Roman
2014-12-01
The aim of this study was to evaluate a prototype correction algorithm to reduce metal artefacts in flat detector computed tomography (FDCT) of scaphoid fixation screws. FDCT has gained interest in imaging small anatomic structures of the appendicular skeleton. Angiographic C-arm systems with flat detectors allow fluoroscopy and FDCT imaging in a one-stop procedure emphasizing their role as an ideal intraoperative imaging tool. However, FDCT imaging can be significantly impaired by artefacts induced by fixation screws. Following ethical board approval, commercially available scaphoid fixation screws were inserted into six cadaveric specimens in order to fix artificially induced scaphoid fractures. FDCT images corrected with the algorithm were compared to uncorrected images both quantitatively and qualitatively by two independent radiologists in terms of artefacts, screw contour, fracture line visibility, bone visibility, and soft tissue definition. Normal distribution of variables was evaluated using the Kolmogorov-Smirnov test. In case of normal distribution, quantitative variables were compared using paired Student's t tests. The Wilcoxon signed-rank test was used for quantitative variables without normal distribution and all qualitative variables. A p value of < 0.05 was considered to indicate statistically significant differences. Metal artefacts were significantly reduced by the correction algorithm (p < 0.001), and the fracture line was more clearly defined (p < 0.01). The inter-observer reliability was "almost perfect" (intra-class correlation coefficient 0.85, p < 0.001). The prototype correction algorithm in FDCT for metal artefacts induced by scaphoid fixation screws may facilitate intra- and postoperative follow-up imaging. Flat detector computed tomography (FDCT) is a helpful imaging tool for scaphoid fixation. The correction algorithm significantly reduces artefacts in FDCT induced by scaphoid fixation screws. 
This may facilitate intra- and postoperative follow-up imaging.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, X; Rosenfield, J; Dong, X
2016-06-15
Purpose: Rotational total skin electron irradiation (RTSEI) is used in the treatment of cutaneous T-cell lymphoma. Because of inter-film uniformity variations, dosimetry of a very large, low-energy electron beam is challenging. This work provides a method to improve the accuracy of flatness and symmetry measurements for the very large, low-energy treatment field used in dual-beam RTSEI. Methods: RTSEI is delivered using dual fields at gantry angles of 270 ± 20 degrees to cover the upper and lower halves of the patient body with acceptable beam uniformity. The field size is on the order of 230 cm in vertical height and 120 cm in horizontal width, and the beam energy is a degraded 6 MeV (6 mm of PMMA spoiler). We utilized parallel-plate chambers, Gafchromic films, and OSLDs as measuring devices for absolute dose, B-factor, stationary and rotational percent depth dose, and beam uniformity. To reduce inter-film dosimetric variation, we introduced a new correction method for analyzing beam uniformity. This correction method uses image processing techniques that combine film values before and after the radiation dose to compensate for the dose-response differences among films. Results: Stationary and rotational depth-dose measurements demonstrated that Rp is 2 cm for rotational delivery and that the maximum dose is shifted toward the surface (3 mm). The phantom dosimetry showed that dose uniformity was reduced to 3.01% vertical flatness and 2.35% horizontal flatness after correction, thus achieving better flatness and uniformity. The absolute dose readings of calibrated films after our correction matched the readings from OSLDs. Conclusion: The proposed correction method for Gafchromic films will be a useful tool to correct inter-film dosimetric variation in future clinical film dosimetry verification of very large fields, allowing the optimization of other parameters.
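One plausible reading of the before/after film combination is a per-film normalization: each film's post-irradiation scan is referenced to its own pre-irradiation scan, so batch-to-batch sensitivity offsets cancel. The sketch below is an assumption about the method, not the authors' exact algorithm:

```python
import numpy as np

def net_response(pre_scan, post_scan):
    """Per-film normalization (assumed scheme): express the irradiated
    film's darkening as a fraction of its own pre-irradiation scan,
    removing film-to-film baseline differences."""
    pre = pre_scan.astype(float)
    post = post_scan.astype(float)
    return (pre - post) / pre   # fractional darkening of this film

# Two films with different base sensitivities but the same delivered dose
pre_a = np.full((8, 8), 200.0)
pre_b = np.full((8, 8), 180.0)   # a darker film batch
darkening = 0.25                 # same fractional darkening from the dose
resp_a = net_response(pre_a, pre_a * (1 - darkening))
resp_b = net_response(pre_b, pre_b * (1 - darkening))
```

With this normalization the two films report the same response despite their different baselines, which is the behavior a flatness analysis across many film strips needs.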
NASA Astrophysics Data System (ADS)
Morbec, Juliana M.; Kratzer, Peter
2017-01-01
Using first-principles calculations based on density-functional theory (DFT), we investigated the effects of the van der Waals (vdW) interactions on the structural and electronic properties of anthracene and pentacene adsorbed on the Ag(111) surface. We found that the inclusion of vdW corrections strongly affects the binding of both anthracene/Ag(111) and pentacene/Ag(111), yielding adsorption heights and energies more consistent with the experimental results than standard DFT calculations with generalized gradient approximation (GGA). For anthracene/Ag(111) the effect of the vdW interactions is even more dramatic: we found that "pure" DFT-GGA calculations (without including vdW corrections) result in preference for a tilted configuration, in contrast to the experimental observations of flat-lying adsorption; including vdW corrections, on the other hand, alters the binding geometry of anthracene/Ag(111), favoring the flat configuration. The electronic structure obtained using a self-consistent vdW scheme was found to be nearly indistinguishable from the conventional DFT electronic structure once the correct vdW geometry is employed for these physisorbed systems. Moreover, we show that a vdW correction scheme based on a hybrid functional DFT calculation (HSE) results in an improved description of the highest occupied molecular level of the adsorbed molecules.
Zhu, Timothy C; Friedberg, Joseph S; Dimofte, Andrea; Miles, Jeremy; Metz, James; Glatstein, Eli; Hahn, Stephen M
2002-06-06
An isotropic detector-based system was compared with a flat photodiode-based system in patients undergoing pleural photodynamic therapy. Isotropic and flat detectors were placed side by side in the chest cavity for simultaneous in vivo dosimetry at surface locations for twelve patients. The treatment used a 630 nm laser to a total light irradiance of 30 J/cm² (measured with the flat photodiodes) with Photofrin® IV as the photosensitizer. Since the flat detectors were calibrated at 532 nm, wavelength correction factors (WCF) were used to convert the calibration to 630 nm (WCF between 0.542 and 0.703). The mean ratio between isotropic and flat detectors for all sites was linear in the accumulated fluence and was 3.4 ± 0.6 or 2.1 ± 0.4, with or without the wavelength correction for the flat detectors, respectively. The μeff of the tissues was estimated to vary between 0.5 and 4.3 cm⁻¹ for four sites (Apex, Posterior Sulcus, Anterior Chest Wall, and Posterior Mediastinum), assuming μs′ = 7 cm⁻¹. Insufficient information was available to estimate μeff directly for three other sites (Anterior Sulcus, Posterior Chest Wall, and Pericardium), primarily due to limited sample size, although one may assume the optical penetration in all sites to vary over the same range (0.5 to 4.3 cm⁻¹).
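The wavelength correction itself is a simple multiplicative rescaling of the flat-detector reading; a sketch, with an illustrative WCF chosen from the reported 0.542–0.703 range:

```python
def apply_wavelength_correction(flat_reading, wcf):
    """Convert a flat-photodiode reading calibrated at 532 nm into an
    equivalent reading at the 630 nm treatment wavelength by applying
    the wavelength correction factor (WCF)."""
    return flat_reading * wcf

# Illustrative value; the paper reports WCF between 0.542 and 0.703.
# A WCF < 1 lowers the flat-detector value, which is why the
# isotropic/flat ratio rises from ~2.1 (uncorrected) to ~3.4 (corrected).
corrected = apply_wavelength_correction(10.0, wcf=0.65)
```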
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sloop, Christy
2013-04-01
This Corrective Action Decision Document/Closure Report presents information supporting the closure of Corrective Action Unit (CAU) 569: Area 3 Yucca Flat Atmospheric Test Sites, Nevada National Security Site, Nevada. CAU 569 comprises the following nine corrective action sites (CASs): • 03-23-09, T-3 Contamination Area • 03-23-10, T-3A Contamination Area • 03-23-11, T-3B Contamination Area • 03-23-12, T-3S Contamination Area • 03-23-13, T-3T Contamination Area • 03-23-14, T-3V Contamination Area • 03-23-15, S-3G Contamination Area • 03-23-16, S-3H Contamination Area • 03-23-21, Pike Contamination Area The purpose of this Corrective Action Decision Document/Closure Report is to provide justification and documentation supportingmore » the recommendation that no further corrective action is needed for CAU 569 based on the implementation of the corrective actions listed in Table ES-2.« less
Dilatation-dissipation corrections for advanced turbulence models
NASA Technical Reports Server (NTRS)
Wilcox, David C.
1992-01-01
This paper analyzes dilatation-dissipation based compressibility corrections for advanced turbulence models. Numerical computations verify that the dilatation-dissipation corrections devised by Sarkar and Zeman greatly improve the effect of Mach number on spreading rate predicted by both the k-omega and k-epsilon models. However, computations with the k-omega model also show that the Sarkar/Zeman terms cause an undesired reduction in skin friction for the compressible flat-plate boundary layer. A perturbation solution for the compressible wall layer shows that the Sarkar and Zeman terms reduce the effective von Karman constant in the law of the wall. This is the source of the inaccurate k-omega model skin-friction predictions for the flat-plate boundary layer. The perturbation solution also shows that the k-epsilon model has an inherent flaw for compressible boundary layers that is not compensated for by the dilatation-dissipation corrections. A compressibility modification for k-omega and k-epsilon models is proposed that is similar to those of Sarkar and Zeman. The new compressibility term permits accurate predictions for the compressible mixing layer, flat-plate boundary layer, and a shock-separated flow with the same values for all closure coefficients.
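For reference, a Sarkar-type dilatation-dissipation correction of the kind discussed above augments the solenoidal dissipation with a term scaled by the turbulence Mach number; the closure coefficient and the exact functional form of the Mach-number dependence differ between the Sarkar, Zeman, and Wilcox variants:

```latex
% Total dissipation split into solenoidal and dilatational parts, with
% the dilatational part modeled from the turbulence Mach number M_t
% (k = turbulence kinetic energy, a = local speed of sound,
%  \xi^* = closure coefficient):
\varepsilon = \varepsilon_s + \varepsilon_d,
\qquad
\varepsilon_d = \xi^{*}\, M_t^{2}\, \varepsilon_s,
\qquad
M_t^{2} = \frac{2k}{a^{2}} .
```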
Prell, Daniel; Kyriakou, Yiannis; Beister, Marcel; Kalender, Willi A
2009-11-07
Metallic implants generate streak-like artifacts in flat-detector computed tomography (FD-CT) reconstructed volumetric images. This study presents a novel method for reducing these disturbing artifacts by inserting discarded information into the original rawdata using a three-step correction procedure and working directly with each detector element. Computation times are minimized by completely implementing the correction process on graphics processing units (GPUs). First, the original volume is corrected using a three-dimensional interpolation scheme in the rawdata domain, followed by a second reconstruction. This metal artifact-reduced volume is then segmented into three materials, i.e. air, soft-tissue and bone, using a threshold-based algorithm. Subsequently, a forward projection of the obtained tissue-class model substitutes the missing or corrupted attenuation values directly for each flat detector element that contains attenuation values corresponding to metal parts, followed by a final reconstruction. Experiments using tissue-equivalent phantoms showed a significant reduction of metal artifacts (deviations of CT values after correction compared to measurements without metallic inserts reduced typically to below 20 HU, differences in image noise to below 5 HU) caused by the implants and no significant resolution losses even in areas close to the inserts. To cover a variety of different cases, cadaver measurements and clinical images in the knee, head and spine region were used to investigate the effectiveness and applicability of our method. A comparison to a three-dimensional interpolation correction showed that the new approach outperformed interpolation schemes. Correction times are minimized, and initial and corrected images are made available at almost the same time (12.7 s for the initial reconstruction, 46.2 s for the final corrected image compared to 114.1 s and 355.1 s on central processing units (CPUs)).
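The first of the three steps, interpolating across detector elements flagged as metal in the rawdata, can be sketched per projection row as follows. This 1-D linear interpolation is a stand-in for the paper's three-dimensional GPU scheme, and the mask/row names are illustrative:

```python
import numpy as np

def interpolate_metal_trace(projection_row, metal_mask):
    """First correction step (sketch): replace detector readings flagged
    as metal with values linearly interpolated from the neighboring
    unflagged elements of the same row."""
    row = projection_row.astype(float).copy()
    idx = np.arange(row.size)
    good = ~metal_mask
    row[metal_mask] = np.interp(idx[metal_mask], idx[good], row[good])
    return row

# A smooth attenuation profile with a metal spike across elements 40-49
row = np.linspace(1.0, 2.0, 100)
corrupted = row.copy()
corrupted[40:50] = 10.0                  # metal-corrupted readings
mask = np.zeros(100, dtype=bool)
mask[40:50] = True
repaired = interpolate_metal_trace(corrupted, mask)
```

In the full method this interpolated volume is only an intermediate: it is segmented into air/soft-tissue/bone, and a forward projection of that tissue-class model supplies the final replacement values.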
Extracting flat-field images from scene-based image sequences using phase correlation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caron, James N., E-mail: Caron@RSImd.com; Montes, Marcos J.; Obermark, Jerome L.
Flat-field image processing is an essential step in producing high-quality and radiometrically calibrated images. Flat-fielding corrects for variations in the gain of focal plane array electronics and unequal illumination from the system optics. Typically, a flat-field image is captured by imaging a radiometrically uniform surface. The flat-field image is normalized and removed from the images. There are circumstances, such as with remote sensing, where a flat-field image cannot be acquired in this manner. For these cases, we developed a phase-correlation method that allows the extraction of an effective flat-field image from a sequence of scene-based displaced images. The method uses sub-pixel phase-correlation image registration to align the sequence and estimate the static scene. The scene is removed from the sequence, producing a sequence of misaligned flat-field images. An average flat-field image is derived from the realigned flat-field sequence.
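The alignment step rests on phase correlation: the inverse FFT of the normalized cross-power spectrum peaks at the translation between two frames. A minimal integer-pixel version is sketched below (sub-pixel refinement and the scene/flat separation of the full method are omitted):

```python
import numpy as np

def phase_correlation_shift(ref, img):
    """Estimate the integer-pixel translation taking `ref` to `img` from
    the peak of the inverse FFT of the normalized cross-power spectrum."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(img)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12       # keep only the phase information
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shifts = np.array(peak)
    # Wrap shifts beyond half the frame to negative displacements
    half = np.array(ref.shape) // 2
    big = shifts > half
    shifts[big] -= np.array(ref.shape)[big]
    return int(shifts[0]), int(shifts[1])

# A random 'scene' and a copy displaced by (3, -5) pixels
rng = np.random.default_rng(2)
scene = rng.random((64, 64))
shifted = np.roll(scene, (3, -5), axis=(0, 1))
dy, dx = phase_correlation_shift(scene, shifted)
```

With the displacements known, the frames can be registered and averaged to estimate the static scene, and dividing the scene out of each frame leaves the per-frame flat-field pattern.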
Happily, Ever After: The Resilience of the Fairy Tale, Part 1.
ERIC Educational Resources Information Center
Hearn, Michael Patrick
1998-01-01
Discusses the work of Frenchman Charles Perrault, the seminal figure in fairy tales, and puts it in context of the French fairy tale fashion. Describes how the fairy tale came to England. Describes how the Germans revived the fairy tale at the end of the 18th century, and discusses the work of the Brothers Grimm. (SR)
Rodriguez, Brian D.; Sweetkind, Don; Burton, Bethany L.
2010-01-01
The U.S. Department of Energy (DOE) and the National Nuclear Security Administration (NNSA) at their Nevada Site Office (NSO) are addressing groundwater contamination resulting from historical underground nuclear testing through the Environmental Management program and, in particular, the Underground Test Area (UGTA) project. From 1951 to 1992, 828 underground nuclear tests were conducted at the Nevada Test Site (NTS) northwest of Las Vegas (DOE UGTA, 2003). Most of these tests were conducted hundreds of feet above the groundwater table; however, more than 200 of the tests were near, or within, the water table. This underground testing was limited to specific areas of the NTS including Pahute Mesa, Rainier Mesa/Shoshone Mountain, Frenchman Flat, and Yucca Flat. Volcanic composite units make up much of the area within the Pahute Mesa Corrective Action Unit (CAU) at the NTS, Nevada. The extent of many of these volcanic composite units extends throughout and south of the primary areas of past underground testing at Pahute and Rainier Mesas. As situated, these units likely influence the rate and direction of groundwater flow and radionuclide transport. Currently, these units are poorly resolved in terms of their hydrologic properties introducing large uncertainties into current CAU-scale flow and transport models. In 2007, the U.S. Geological Survey (USGS), in cooperation with DOE and NNSA-NSO acquired three-dimensional (3-D) tensor magnetotelluric data at the NTS in Area 20 of Pahute Mesa CAU. A total of 20 magnetotelluric recording stations were established at about 600-m spacing on a 3-D array and were tied to ER20-6 well and other nearby well control (fig. 1). 
The purpose of this survey was to determine if closely spaced 3-D resistivity measurements can be used to characterize the distribution of shallow (600- to 1,500-m-depth range) devitrified rhyolite lava-flow aquifers (LFA) and zeolitic tuff confining units (TCU) in areas of limited drill hole control on Pahute Mesa within the Calico Hills zeolitic volcanic composite unit (VCU), an important hydrostratigraphic unit in Area 20. The resistivity response was evaluated and compared with existing well data and hydrogeologic unit tops from the current Pahute Mesa framework model. In 2008, the USGS processed and inverted the magnetotelluric data into a 3-D resistivity model. We interpreted nine depth slices and four west-east profile cross sections of the 3-D resistivity inversion model. This report documents the geologic interpretation of the 3-D resistivity model. Expectations are that spatial variations in the electrical properties of the Calico Hills zeolitic VCU can be detected and mapped with 3-D resistivity, and that these changes correlate to differences in rock permeability. With regard to LFA and TCU, electrical resistivity and permeability are typically related. Tuff confining units will typically have low electrical resistivity and low permeability, whereas LFA will have higher electrical resistivity and zones of higher fracture-related permeability. If expectations are shown to be correct, the method can be utilized by the UGTA scientists to refine the hydrostratigraphic unit (HSU) framework in an effort to more accurately predict radionuclide transport away from test areas on Pahute and Rainier Mesas.
The effect of a scanning flat fold mirror on a cosmic microwave background B-mode experiment.
Grainger, William F; North, Chris E; Ade, Peter A R
2011-06-01
We investigate the possibility of using a flat-fold beam-steering mirror for a cosmic microwave background B-mode experiment. An aluminium flat-fold mirror is found to add ∼0.075% polarization, which varies in a scan-synchronous way. Time-domain simulations of a realistic scanning pattern are performed, the effect on the power spectrum is illustrated, and a possible method of correction is applied. © 2011 American Institute of Physics
NASA Technical Reports Server (NTRS)
Kandula, M.; Haddad, G. F.; Chen, R.-H.
2006-01-01
Three-dimensional Navier-Stokes computational fluid dynamics (CFD) analysis has been performed in an effort to determine thermal boundary layer correction factors for circular convective heat flux gauges (such as Schmidt-Boelter and plug type) mounted flush in a flat plate subjected to a stepwise surface temperature discontinuity. Turbulent flow solutions with temperature-dependent properties are obtained for a freestream Reynolds number of 1E6 and freestream Mach numbers of 2 and 4. The effects of gauge diameter and plate surface temperature have been investigated. The 3-D CFD results for the heat flux correction factors are compared to quasi-2-D results deduced from constant-property integral solutions and also to 2-D CFD analysis with both constant and variable properties. The role of three-dimensionality and of property variations on the heat flux correction factors has been demonstrated.
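The classic constant-property integral result for turbulent flow over a step change in wall temperature is the unheated-starting-length correction to the Stanton number. It is offered here only as a sketch of the kind of quasi-2-D baseline such gauge analyses compare against; the paper's own correction factors come from the CFD solutions, not this formula:

```python
def unheated_starting_length_factor(x, xi):
    """Classic turbulent-flow correction St/St0 downstream of a step
    change in wall temperature at x = xi (constant-property integral
    result; an assumed baseline, not the paper's computed factors):

        St/St0 = [1 - (xi/x)**(9/10)]**(-1/9)
    """
    if not x > xi >= 0:
        raise ValueError("need x > xi >= 0")
    return (1.0 - (xi / x) ** 0.9) ** (-1.0 / 9.0)

# The factor is large just downstream of the temperature discontinuity
# and relaxes toward 1 far downstream.
near = unheated_starting_length_factor(x=1.05, xi=1.0)
far = unheated_starting_length_factor(x=10.0, xi=1.0)
```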
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shott, Gregory
2014-03-01
The Maintenance Plan for the Performance Assessments and Composite Analyses for the Area 3 and Area 5 Radioactive Waste Management Sites at the Nevada Test Site (National Security Technologies, LLC 2007a) requires an annual review to assess the adequacy of the performance assessments (PAs) and composite analyses (CAs), with the results submitted to the U.S. Department of Energy (DOE) Office of Environmental Management. The Disposal Authorization Statements for the Area 3 and Area 5 Radioactive Waste Management Sites (RWMSs) also require that such reviews be made and that secondary or minor unresolved issues be tracked and addressed as part of the maintenance plan (DOE 1999a, 2000). The U.S. Department of Energy, National Nuclear Security Administration Nevada Field Office performed an annual review of the Area 3 and Area 5 RWMS PAs and CAs for fiscal year (FY) 2013. This annual summary report presents data and conclusions from the FY 2013 review, and determines the adequacy of the PAs and CAs. Operational factors (e.g., waste forms and containers, facility design, and waste receipts), closure plans, monitoring results, and research and development (R&D) activities were reviewed to determine the adequacy of the PAs. Likewise, the environmental restoration activities at the Nevada National Security Site (NNSS) relevant to the sources of residual radioactive material that are considered in the CAs, the land-use planning, and the results of the environmental monitoring and R&D activities were reviewed to determine the adequacy of the CAs. Important developments in FY 2013 include the following: • Development of a new Area 5 RWMS closure inventory estimate based on disposals through FY 2013 • Evaluation of new or revised waste streams by special analysis • Development of version 4.115 of the Area 5 RWMS GoldSim PA/CA model The Area 3 RWMS has been in inactive status since July 1, 2006, with the last shipment received in April 2006.
The FY 2013 review of operations, facility design, closure plans, monitoring results, and R&D results for the Area 3 RWMS indicates no changes that would impact PA validity. The conclusion of the annual review is that all performance objectives can be met and the Area 3 RWMS PA remains valid. There is no need to revise the Area 3 RWMS PA. Review of Area 5 RWMS operations, design, closure plans, monitoring results, and R&D activities indicates that no significant changes have occurred. The FY 2013 PA results, generated with the Area 5 RWMS v4.115 GoldSim PA model, indicate that there continues to be a reasonable expectation of meeting all performance objectives. The results and conclusions of the Area 5 RWMS PA are judged valid, and there is no need to revise the PA. A review of changes potentially impacting the CAs indicates that no significant changes occurred in FY 2013. The continuing adequacy of the CAs was evaluated with the new models, and no significant changes that would alter the CA results or conclusions were found. The revision of the Area 3 RWMS CA, which will include the Yucca Flat Underground Test Area (Corrective Action Unit [CAU] 97) source term, is scheduled for FY 2024, following the completion of the Corrective Action Decision Document/Corrective Action Plan in FY 2015. Inclusion of the Frenchman Flat Underground Test Area (CAU 98) results in the Area 5 RWMS CA is scheduled for FY 2016, pending the completion of the CAU 98 Closure Report in FY 2015. Near-term R&D efforts will focus on continuing development of the PA, CA, and inventory models for the Area 3 and Area 5 RWMSs.
Stidd, D A; Theessen, H; Deng, Y; Li, Y; Scholz, B; Rohkohl, C; Jhaveri, M D; Moftakhar, R; Chen, M; Lopes, D K
2014-01-01
Flat panel detector CT images are degraded by streak artifacts caused by radiodense implanted materials such as coils or clips. A new metal artifacts reduction prototype algorithm has been used to minimize these artifacts. The application of this new metal artifacts reduction algorithm was evaluated for flat panel detector CT imaging performed in a routine clinical setting. Flat panel detector CT images were obtained from 59 patients immediately following cerebral endovascular procedures or as surveillance imaging for cerebral endovascular or surgical procedures previously performed. The images were independently evaluated by 7 physicians for metal artifacts reduction on a 3-point scale at 2 locations: immediately adjacent to the metallic implant and 3 cm away from it. The number of visible vessels before and after metal artifacts reduction correction was also evaluated within a 3-cm radius around the metallic implant. The metal artifacts reduction algorithm was applied to the 59 flat panel detector CT datasets without complications. The metal artifacts in the reduction-corrected flat panel detector CT images were significantly reduced in the area immediately adjacent to the implanted metal object (P = .05) and in the area 3 cm away from the metal object (P = .03). The average number of visible vessel segments increased from 4.07 to 5.29 (P = .1235) after application of the metal artifacts reduction algorithm to the flat panel detector CT images. Metal artifacts reduction is an effective method to improve flat panel detector CT images degraded by metal artifacts. Metal artifacts are significantly decreased by the metal artifacts reduction algorithm, and there was a trend toward increased vessel-segment visualization. © 2014 by American Journal of Neuroradiology.
A simple method for estimating frequency response corrections for eddy covariance systems
W. J. Massman
2000-01-01
A simple analytical formula is developed for estimating the frequency attenuation of eddy covariance fluxes due to sensor response, path-length averaging, sensor separation, signal processing, and flux averaging periods. Although it is an approximation based on flat terrain cospectra, this analytical formula should have broader applicability than just flat-terrain...
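Massman's closed-form expression itself is not reproduced in the excerpt above, but the quantity such a formula approximates — the ratio of the filtered to the unfiltered cospectral integral — can be sketched numerically. The first-order sensor response, the model cospectrum shape, and all parameter values below are illustrative assumptions, not the paper's formula:

```python
import numpy as np

def _trapz(y, x):
    # simple trapezoidal rule (kept explicit to avoid NumPy version differences)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def attenuation_factor(tau_sensor, u, z):
    """Ratio of the filtered to the unfiltered cospectral integral for a
    first-order sensor response -- a numerical stand-in for the kind of
    analytical correction factor the paper derives."""
    f = np.logspace(-4, 2, 4000)                        # frequency grid [Hz]
    n = f * z / u                                       # normalized frequency
    cospec = (n / (1.0 + 1.5 * n) ** (7.0 / 3.0)) / f   # model flat-terrain cospectrum
    gain = 1.0 / (1.0 + (2.0 * np.pi * f * tau_sensor) ** 2)  # squared first-order gain
    return _trapz(gain * cospec, f) / _trapz(cospec, f)

# A slower sensor attenuates more of the flux, so its factor is smaller;
# the measured covariance would be divided by this factor to correct it.
fast = attenuation_factor(tau_sensor=0.1, u=2.0, z=3.0)
slow = attenuation_factor(tau_sensor=2.0, u=2.0, z=3.0)
```

The correction is then applied by dividing the measured flux by the attenuation factor for the instrument configuration in question.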
Analysis of interacting entropy-corrected holographic and new agegraphic dark energies
NASA Astrophysics Data System (ADS)
Ranjit, Chayan; Debnath, Ujjal
In the present work, we assume the flat FRW model of the universe is filled with dark matter and dark energy, which interact with each other. For the dark energy model, we consider the entropy-corrected HDE (ECHDE) model and the entropy-corrected NADE (ECNADE) model. For the entropy-corrected models, we assume a logarithmic correction and a power-law correction. For the ECHDE model, the length scale L is assumed to be the Hubble horizon and the future event horizon. The ωde-ωde′ analysis for the different horizons is discussed.
NASA Astrophysics Data System (ADS)
Gladiné, Kilian; Muyshondt, Pieter; Dirckx, Joris
2016-06-01
Laser Doppler Vibrometry is an intrinsically highly linear measurement technique, which makes it a great tool for measuring extremely small nonlinearities in the vibration response of a system. Although the measurement technique is highly linear, other components in the experimental setup may introduce nonlinearities. An important source of artificially introduced nonlinearities is the speaker, which generates the stimulus. In this work, two correction methods to remove the effects of stimulus nonlinearity are investigated. Both correction methods were found to give similar results but have different pros and cons. The aim of this work is to investigate the importance of the conical shape of the eardrum as a source of nonlinearity in hearing. We present measurements on flat and indented membranes. The data show that the curved membranes exhibit slightly higher levels of nonlinearity compared to the flat membranes.
Removing ring artefacts from synchrotron radiation-based hard x-ray tomography data
NASA Astrophysics Data System (ADS)
Thalmann, Peter; Bikis, Christos; Schulz, Georg; Paleo, Pierre; Mirone, Alessandro; Rack, Alexander; Siegrist, Stefan; Cörek, Emre; Huwyler, Jörg; Müller, Bert
2017-09-01
In hard X-ray microtomography, ring artefacts regularly originate from incorrectly functioning pixel elements on the detector or from particles and scratches on the scintillator. We show that, owing to the high sensitivity of contemporary beamline setups, further causes that induce inhomogeneities in the impinging wavefronts have to be considered. We propose in this study a method to correct the resulting failure of simple flat-field approaches. The main steps of the pipeline are (i) registration of the reference images with the radiographs (projections), (ii) integration of the flat-field corrected projections over the acquisition angle, (iii) high-pass filtering of the integrated projection, and (iv) subtraction of the filtered data from the flat-field corrected projections. The performance of the protocol is tested on data sets acquired at the beamline ID19 at ESRF using single-distance phase tomography.
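Steps (ii)-(iv) of that pipeline can be sketched in a few lines of NumPy. The registration step (i) is omitted, and the toy data, window size, and variable names are illustrative assumptions, not taken from the study:

```python
import numpy as np

def remove_residual_rings(projs, flat, dark, win=31):
    """(ii) flat-field correct and average over the acquisition angle,
    (iii) high-pass filter the average, (iv) subtract that stationary
    residual from every projection. Registration (i) is assumed done."""
    corr = (projs - dark) / np.clip(flat - dark, 1e-6, None)  # flat-field correction
    mean_proj = corr.mean(axis=0)                     # integrate over angle
    kernel = np.ones(win) / win
    norm = np.convolve(np.ones_like(mean_proj), kernel, mode="same")
    smooth = np.convolve(mean_proj, kernel, mode="same") / norm
    highpass = mean_proj - smooth                     # angle-independent ring residual
    return corr - highpass[None, :]

# Toy data: flat object plus one misbehaving detector column that would
# otherwise reconstruct as a ring.
rng = np.random.default_rng(0)
n_ang, n_det = 180, 256
flat = np.full(n_det, 1000.0)
dark = np.zeros(n_det)
projs = 500.0 + rng.normal(0.0, 1.0, (n_ang, n_det))
projs[:, 100] += 50.0                                 # fixed-pattern error
clean = remove_residual_rings(projs, flat, dark)
```

The key assumption, as in the paper, is that ring-producing errors are stationary over the acquisition angle while the real object signal is not.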
Turbine Engine Component Analysis: Cantilevered Composite Flat Plate Analysis
1989-11-01
4/5 element, which translates into the ADINA shell element (Type 7) with thickness correction. PATADI automatically generates midsurface normal vectors for each node referenced by a shell element. Using thickness correction, the element thickness will be oriented along the midsurface direction. If no
Hydrologic resources management program and underground test area FY 1999 progress report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, D K; Eaton, G F; Rose, T P
2000-07-01
This report presents the results from fiscal year (FY) 1999 technical studies conducted by Lawrence Livermore National Laboratory (LLNL) as part of the Hydrology and Radionuclide Migration Program (HRMP) and Underground Test Area (UGTA) work-for-others project. This report is the latest in a series of annual reports published by LLNL to document the migration of radionuclides and controls of radionuclide movement at the Nevada Test Site. The FY 1999 studies highlighted in this report are: (1) Chapter 1 provides the results from flow-through leaching of nuclear melt glasses at 25 C and near-neutral pH using dilute bicarbonate groundwaters. (2) Chapter 2 reports on a summary of the size and concentration of colloidal material in NTS groundwaters. (3) Chapter 3 discusses the collaboration between LLNL/ANCD (Analytical and Nuclear Chemistry Division) and the Center for Accelerator Mass Spectrometry (CAMS) to develop a technique for analyzing NTS groundwater for technetium-99 ({sup 99}Tc) using accelerator mass spectrometry (AMS). Since {sup 99}Tc is conservative like tritium in groundwater systems, and is not sorbed to geologic material, it has the potential for being an important tool for radionuclide migration studies. (4) Chapter 4 presents the results of secondary ion mass spectrometry measurements of the in-situ distribution of radionuclides in zeolitized tuffs from cores taken adjacent to nuclear test cavities and chimneys. In-situ measurements provide insight to the distribution of specific radionuclides on a micro-scale, mineralogical controls of radionuclide sorption, and identification of migration pathways (i.e., matrix diffusion, fractures). (5) Chapter 5 outlines new analytical techniques developed in LLNL/ANCD to study hydrologic problems at the NTS using inductively coupled plasma mass spectrometry (ICP-MS).
With costs for thermal-ionization mass spectrometry (TIMS) increasing relative to sample preparation time and facility support, ICP-MS technology provides a means for rapidly measuring dilute concentrations of radionuclides with precision and abundance sensitivity comparable to TIMS. (6) Chapter 6 provides results of a characterization study of alluvium collected from the U-1a complex approximately 300 meters below ground surface in Yucca Flat. The purpose of this investigation was to provide information on particle size, mineralogical context, the proportion of primary and secondary minerals, and the texture of the reactive surface area that could be used to accurately model radionuclide interactions within Nevada Test Site alluvial basins (i.e., Frenchman Flat and Yucca Flat).
NASA Astrophysics Data System (ADS)
Abu Anas, Emran Mohammad; Kim, Jae Gon; Lee, Soo Yeol; Kamrul Hasan, Md
2011-10-01
The use of an x-ray flat panel detector is increasingly becoming popular in 3D cone beam volume CT machines. Due to the deficient semiconductor array manufacturing process, the cone beam projection data are often corrupted by different types of abnormalities, which cause severe ring and radiant artifacts in a cone beam reconstruction image, and as a result, the diagnostic image quality is degraded. In this paper, a novel technique is presented for the correction of error in the 2D cone beam projections due to abnormalities often observed in 2D x-ray flat panel detectors. Template images are derived from the responses of the detector pixels using their statistical properties and then an effective non-causal derivative-based detection algorithm in 2D space is presented for the detection of defective and mis-calibrated detector elements separately. An image inpainting-based 3D correction scheme is proposed for the estimation of responses of defective detector elements, and the responses of the mis-calibrated detector elements are corrected using the normalization technique. For real-time implementation, a simplification of the proposed off-line method is also suggested. Finally, the proposed algorithms are tested using different real cone beam volume CT images and the experimental results demonstrate that the proposed methods can effectively remove ring and radiant artifacts from cone beam volume CT images compared to other reported techniques in the literature.
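The detect-then-correct idea can be illustrated with a much simpler stand-in than the paper's template and derivative-based scheme: flag detector elements whose response deviates strongly from their local 3×3 median, then replace them with that median. The threshold, image sizes, and function names below are illustrative assumptions:

```python
import numpy as np

def detect_and_inpaint(img, thresh=10.0):
    """Sketch only (not the published algorithm): flag detector elements
    that deviate strongly from the local 3x3 median of a template image,
    then replace them with that median value."""
    p = np.pad(img, 1, mode="edge")
    # stack the 3x3 neighborhood (center included) -> local median per pixel
    shifts = [p[i:i + img.shape[0], j:j + img.shape[1]]
              for i in range(3) for j in range(3)]
    med = np.median(np.stack(shifts), axis=0)
    defective = np.abs(img - med) > thresh        # detection
    out = img.copy()
    out[defective] = med[defective]               # inpainting by local median
    return out, defective

template = np.full((64, 64), 100.0)               # idealized uniform response
template[10, 20] = 0.0                            # dead element
template[40, 40] = 400.0                          # hot element
fixed, mask = detect_and_inpaint(template)
```

A real implementation would, as the abstract notes, treat defective and mis-calibrated elements separately and estimate responses with inpainting rather than a plain median.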
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthews, Patrick
2014-08-01
The purpose of this CADD/CR is to provide documentation and justification that no further corrective action is needed for the closure of CAU 571 based on the implementation of corrective actions. This includes a description of investigation activities, an evaluation of the data, and a description of corrective actions that were performed. The CAIP provides information relating to the scope and planning of the investigation. Therefore, that information will not be repeated in this document.
Large numbers hypothesis. IV - The cosmological constant and quantum physics
NASA Technical Reports Server (NTRS)
Adams, P. J.
1983-01-01
In standard physics quantum field theory is based on a flat vacuum space-time. This quantum field theory predicts a nonzero cosmological constant. Hence the gravitational field equations do not admit a flat vacuum space-time. This dilemma is resolved using the units covariant gravitational field equations. This paper shows that the field equations admit a flat vacuum space-time with nonzero cosmological constant if and only if the canonical LNH is valid. This allows an interpretation of the LNH phenomena in terms of a time-dependent vacuum state. If this is correct then the cosmological constant must be positive.
Color quality management in advanced flat panel display engines
NASA Astrophysics Data System (ADS)
Lebowsky, Fritz; Neugebauer, Charles F.; Marnatti, David M.
2003-01-01
During recent years color reproduction systems for consumer needs have experienced various difficulties. In particular, flat panels and printers could not reach a satisfactory color match. The RGB image stored on an Internet server of a retailer did not show the desired colors on a consumer display device or printer device. STMicroelectronics addresses this important color reproduction issue inside their advanced display engines using novel algorithms targeted for low cost consumer flat panels. Using a new and genuine RGB color space transformation, which combines a gamma correction Look-Up-Table, tetrahedrization, and linear interpolation, we satisfy market demands.
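The gamma-LUT-plus-tetrahedral-interpolation idea can be sketched for a single pixel. A 1D gamma look-up table would precede this stage; the LUT size, the six-tetrahedron case split, and all names below are illustrative assumptions, not STMicroelectronics' implementation:

```python
import numpy as np

def tetrahedral_lookup(lut, rgb):
    """Look up one RGB value in a 3D LUT using tetrahedral interpolation:
    the unit cube around the sample is split into 6 tetrahedra selected by
    the ordering of the fractional coordinates."""
    n = lut.shape[0] - 1
    v = np.clip(np.asarray(rgb, float), 0.0, 1.0) * n
    i = np.minimum(v.astype(int), n - 1)          # lower lattice corner
    f = v - i                                     # fractional position in cube
    r, g, b = f

    def C(dr, dg, db):                            # corner fetch
        return lut[i[0] + dr, i[1] + dg, i[2] + db]

    if r >= g >= b:
        out = C(0,0,0) + r*(C(1,0,0)-C(0,0,0)) + g*(C(1,1,0)-C(1,0,0)) + b*(C(1,1,1)-C(1,1,0))
    elif r >= b >= g:
        out = C(0,0,0) + r*(C(1,0,0)-C(0,0,0)) + b*(C(1,0,1)-C(1,0,0)) + g*(C(1,1,1)-C(1,0,1))
    elif b >= r >= g:
        out = C(0,0,0) + b*(C(0,0,1)-C(0,0,0)) + r*(C(1,0,1)-C(0,0,1)) + g*(C(1,1,1)-C(1,0,1))
    elif g >= r >= b:
        out = C(0,0,0) + g*(C(0,1,0)-C(0,0,0)) + r*(C(1,1,0)-C(0,1,0)) + b*(C(1,1,1)-C(1,1,0))
    elif g >= b >= r:
        out = C(0,0,0) + g*(C(0,1,0)-C(0,0,0)) + b*(C(0,1,1)-C(0,1,0)) + r*(C(1,1,1)-C(0,1,1))
    else:
        out = C(0,0,0) + b*(C(0,0,1)-C(0,0,0)) + g*(C(0,1,1)-C(0,0,1)) + r*(C(1,1,1)-C(0,1,1))
    return out

# Identity 17^3 LUT: interpolation should reproduce the input.
N = 17
grid = np.linspace(0.0, 1.0, N)
lut = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1)
out = tetrahedral_lookup(lut, (0.3, 0.7, 0.2))
```

Tetrahedral interpolation needs only 4 LUT fetches per pixel instead of the 8 required by trilinear interpolation, which is one reason it suits low-cost display hardware.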
Metrology of flat mirrors with a computer generated hologram
NASA Astrophysics Data System (ADS)
Pariani, Giorgio; Tresoldi, Daniela; Moschetti, Manuele; Riva, Marco; Bianco, Andrea; Zerbi, Filippo Maria
2014-07-01
We designed the interferometric test of a 300 mm flat mirror, based on a spherical mirror and a dedicated CGH. The spherical beam of the interferometer is quasi-collimated to the desired diameter by the spherical mirror, used slightly off-axis, and the CGH performs the residual wavefront correction. We performed tests on 200 mm and 300 mm flat mirrors, and compared the results to the ones obtained by stitching, showing an accuracy well within the designed value. The possibility of calibrating the cavity by subtracting out the figure errors of the spherical mirror has also been evaluated.
Comparison of ring artifact removal methods using flat panel detector based CT images
2011-01-01
Background Ring artifacts are the concentric rings superimposed on the tomographic images, often caused by defective and insufficiently calibrated detector elements as well as by damaged scintillator crystals of the flat panel detector. They may also be generated by objects that attenuate X-rays very differently in different projection directions. Ring artifact reduction techniques so far reported in the literature can be broadly classified into two groups. One category of approaches is based on sinogram processing, also known as pre-processing techniques, and the other category performs processing on the 2-D reconstructed images, recognized as post-processing techniques in the literature. The strengths and weaknesses of these categories of approaches are yet to be explored from a common platform. Method In this paper, a comparative study of the two categories of ring artifact reduction techniques, basically designed for multi-slice CT instruments, is presented from a common platform. For comparison, two representative algorithms from each of the two categories are selected from the published literature. A very recently reported state-of-the-art sinogram-domain ring artifact correction method that classifies the ring artifacts according to their strength and then corrects the artifacts using class-adaptive correction schemes is also included in this comparative study. The first sinogram-domain correction method uses a wavelet-based technique to detect the corrupted pixels and then estimates the responses of the bad pixels using a simple linear interpolation technique. The second sinogram-based correction method performs all the filtering operations in the transform domain, i.e., in the wavelet and Fourier domains. On the other hand, the two post-processing based correction techniques actually operate on the polar transform domain of the reconstructed CT images.
The first method extracts the ring artifact template vector using a homogeneity test and then corrects the CT images by subtracting the artifact template vector from the uncorrected images. The second post-processing based correction technique performs median and mean filtering on the reconstructed images to produce the corrected images. Results The performance of the compared algorithms has been tested by using both quantitative and perceptual measures. For quantitative analysis, two different numerical performance indices are chosen. On the other hand, different types of artifact patterns, e.g., single/band rings, artifacts from defective and mis-calibrated detector elements, rings in highly structured objects and also in hard objects, and rings from different flat-panel detectors, are analyzed to perceptually investigate the strengths and weaknesses of the five methods. An investigation has also been carried out to compare the efficacy of these algorithms in correcting the volume images from a cone beam CT with the parameters determined from one particular slice. Finally, the capability of each correction technique in retaining the image information (e.g., a small object at the iso-center) accurately in the corrected CT image has also been tested. Conclusions The results show that the performances of the algorithms are limited and none is fully suitable for correcting different types of ring artifacts without introducing processing distortion to the image structure. To achieve diagnostic quality of the corrected slices, a combination of the two approaches (sinogram- and post-processing) can be used. The compared methods are also not suitable for correcting the volume images from a cone beam flat-panel detector based CT. PMID:21846411
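The template-vector idea behind the post-processing methods can be illustrated in the polar domain, where rings become constant-radius stripes. The toy data, the smoothing window, and the use of a per-radius median are illustrative assumptions, not the published algorithm:

```python
import numpy as np

def subtract_ring_template(polar_img):
    """Sketch of polar-domain template subtraction: rings appear as
    vertical stripes (constant radius), so a robust per-radius statistic
    of the high-frequency residual yields the artifact template."""
    kernel = np.ones(15) / 15
    norm = np.convolve(np.ones(polar_img.shape[1]), kernel, mode="same")
    # low-frequency structure along the radius, estimated per angular row
    smooth = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, polar_img) / norm
    residual = polar_img - smooth
    template = np.median(residual, axis=0)        # one value per radius
    return polar_img - template[None, :]

rng = np.random.default_rng(1)
polar = rng.normal(100.0, 1.0, (360, 200))        # rows: angle, cols: radius
polar[:, 80] += 25.0                              # a ring: constant-radius stripe
corrected = subtract_ring_template(polar)
```

The median over the angle axis is what keeps genuine image structure (which varies with angle) out of the template, at the cost of the iso-center issues the comparison above discusses.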
NASA Technical Reports Server (NTRS)
1975-01-01
A general description of the leading edge/flat surface heating array is presented along with its components, assembly instructions, installation instructions, operation procedures, maintenance instructions, repair procedures, schematics, spare parts lists, engineering drawings of the array, and functional acceptance test log sheets. The proper replacement of components, correct torque values, step-by-step maintenance instructions, and pretest checkouts are described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
John McCord
2006-06-01
The U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Site Office (NNSA/NSO) initiated the Underground Test Area (UGTA) Project to assess and evaluate the effects of the underground nuclear weapons tests on groundwater beneath the Nevada Test Site (NTS) and vicinity. The framework for this evaluation is provided in Appendix VI, Revision No. 1 (December 7, 2000) of the Federal Facility Agreement and Consent Order (FFACO, 1996). Section 3.0 of Appendix VI, "Corrective Action Strategy," of the FFACO describes the process that will be used to complete corrective actions specifically for the UGTA Project. The objective of the UGTA corrective action strategy is to define contaminant boundaries for each UGTA corrective action unit (CAU) where groundwater may have become contaminated from the underground nuclear weapons tests. The contaminant boundaries are determined based on modeling of groundwater flow and contaminant transport. A summary of the FFACO corrective action process and the UGTA corrective action strategy is provided in Section 1.5. The FFACO (1996) corrective action process for the Yucca Flat/Climax Mine CAU 97 was initiated with the Corrective Action Investigation Plan (CAIP) (DOE/NV, 2000a). The CAIP included a review of existing data on the CAU and proposed a set of data collection activities to collect additional characterization data. These recommendations were based on a value of information analysis (VOIA) (IT, 1999), which evaluated the value of different possible data collection activities, with respect to reduction in uncertainty of the contaminant boundary, through simplified transport modeling. The Yucca Flat/Climax Mine CAIP identifies a three-step model development process to evaluate the impact of underground nuclear testing on groundwater to determine a contaminant boundary (DOE/NV, 2000a).
The three steps are as follows: (1) Data compilation and analysis, which provides the necessary modeling data and is completed in two parts: the first addressing the groundwater flow model, and the second the transport model. (2) Development of a groundwater flow model. (3) Development of a groundwater transport model. This report presents the results of the first part of the first step, documenting the data compilation, evaluation, and analysis for the groundwater flow model. The second part, documentation of the transport model data, will be the subject of a separate report. The purpose of this document is to present the compilation and evaluation of the available hydrologic data and information relevant to the development of the Yucca Flat/Climax Mine CAU groundwater flow model, which is a fundamental tool in the prediction of the extent of contaminant migration. Where appropriate, data and information documented elsewhere are summarized with reference to the complete documentation. The specific task objectives for hydrologic data documentation are as follows: (1) Identify and compile available hydrologic data and supporting information required to develop and validate the groundwater flow model for the Yucca Flat/Climax Mine CAU. (2) Assess the quality of the data and associated documentation, and assign qualifiers to denote levels of quality. (3) Analyze the data to derive expected values or spatial distributions and estimates of the associated uncertainty and variability.
[Early flat colorectal cancer].
Castelletto, R H; Chiarenza, C; Ottino, A; Garay, M L
1991-01-01
We report three cases of flat early colorectal carcinoma which were detected during the examination of 51 surgical specimens of colorectal resection. Two of them were endoscopically diagnosed, but the smallest one was not seen in the luminal instrumental examination. From the bibliographic analysis and our own experience we deduce the importance of flat lesions in the development of early colorectal carcinoma, whether originated from a pre-existing adenoma or arising de novo. Flat variants of adenoma, and presumably flush or depressed ones, must be considered as important factors in the early adenoma-cancer sequence. Appropriate endoscopic equipment, the use of additional staining techniques (such as indigo carmine and methylene blue), and correct investigation during inflation-deflation procedures facilitate the identification of small lesions, their eradication, and the prevention of advanced forms of colorectal carcinoma.
Buhk, J-H; Groth, M; Sehner, S; Fiehler, J; Schmidt, N O; Grzyska, U
2013-09-01
To evaluate a novel algorithm for correcting beam hardening artifacts caused by metal implants in computed tomography performed on a C-arm angiography system equipped with a flat panel (FP-CT). 16 datasets of cerebral FP-CT acquisitions after coil embolization of brain aneurysms in the context of acute subarachnoid hemorrhage have been reconstructed by applying a soft tissue kernel with and without a novel reconstruction filter for metal artifact correction. Image reading was performed in multiplanar reformations (MPR) in average mode on a dedicated radiological workplace in comparison to the preinterventional native multisection CT (MS-CT) scan serving as the anatomic gold standard. Two independent radiologists performed image scoring following a defined scale in direct comparison of the image data with and without artifact correction. For statistical analysis, a random intercept model was calculated. The inter-rater agreement was very high (ICC = 86.3 %). The soft tissue image quality and visualization of the CSF spaces at the level of the implants was substantially improved. The additional metal artifact correction algorithm did not induce impairment of the subjective image quality in any other brain regions. Adding metal artifact correction to FP-CT in an acute postinterventional setting helps to visualize the close vicinity of the aneurysm at a generally consistent image quality. © Georg Thieme Verlag KG Stuttgart · New York.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, Robert
The Underground Test Area (UGTA) Corrective Action Unit (CAU) 97, Yucca Flat/Climax Mine, in the northeast part of the Nevada National Security Site (NNSS) requires environmental corrective action activities to assess contamination resulting from underground nuclear testing. These activities are necessary to comply with the UGTA corrective action strategy (referred to as the UGTA strategy). The corrective action investigation phase of the UGTA strategy requires the development of groundwater flow and contaminant transport models whose purpose is to identify the lateral and vertical extent of contaminant migration over the next 1,000 years. In particular, the goal is to calculate the contaminant boundary, which is defined as a probabilistic model-forecast perimeter and a lower hydrostratigraphic unit (HSU) boundary that delineate the possible extent of radionuclide-contaminated groundwater from underground nuclear testing. Because of structural uncertainty in the contaminant boundary, a range of potential contaminant boundaries was forecast, resulting in an ensemble of contaminant boundaries. The contaminant boundary extent is determined by the volume of groundwater that has at least a 5 percent chance of exceeding the radiological standards of the Safe Drinking Water Act (SDWA) (CFR, 2012).
Liang, Jinyang; Kohn, Rudolph N; Becker, Michael F; Heinzen, Daniel J
2009-04-01
We demonstrate a digital micromirror device (DMD)-based optical system that converts a spatially noisy quasi-Gaussian beam to an eighth-order super-Lorentzian flat-top beam. We use an error-diffusion algorithm to design the binary pattern for the Texas Instruments DLP device. Following the DMD, a telescope with a pinhole low-pass filters the beam and scales it to the desired image size. Experimental measurements show a 1% root-mean-square (RMS) flatness over a diameter of 0.28 mm in the center of the flat-top beam and better than 1.5% RMS flatness over its entire 1.43 mm diameter. The power conversion efficiency is 37%. We develop an alignment technique to ensure that the DMD pattern is correctly positioned on the incident beam. An interferometric measurement of the DMD surface flatness shows that phase uniformity is maintained in the output beam. Our approach is highly flexible and is able to produce not only flat-top beams with different parameters, but also any slowly varying target beam shape. It can be used to generate the homogeneous optical lattice required for Bose-Einstein condensate cold atom experiments.
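Error diffusion of the kind used to design the binary DMD pattern can be sketched with the classic Floyd-Steinberg kernel. The paper's exact kernel, scan order, and target profile are not given above, so the kernel choice, sizes, and target below are illustrative assumptions:

```python
import numpy as np

def error_diffusion(target):
    """Floyd-Steinberg error diffusion: binarize a target intensity map
    (values in [0, 1]) into an on/off micromirror pattern whose local
    duty cycle tracks the target brightness."""
    img = target.astype(float)            # working copy that absorbs errors
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = int(new)
            err = old - new               # push quantization error forward
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * 1 / 16
    return out

# Binary pattern for a 60%-intensity disc: the local mirror duty cycle
# should match the target brightness after low-pass filtering.
yy, xx = np.mgrid[:64, :64]
target = 0.6 * (((xx - 32) ** 2 + (yy - 32) ** 2) < 20 ** 2)
pattern = error_diffusion(target)
```

The pinhole telescope described in the abstract plays the role of the low-pass filter that turns this binary pattern back into a smooth intensity profile.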
Quantum corrections for spinning particles in de Sitter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fröb, Markus B.; Verdaguer, Enric, E-mail: mbf503@york.ac.uk, E-mail: enric.verdaguer@ub.edu
We compute the one-loop quantum corrections to the gravitational potentials of a spinning point particle in a de Sitter background, due to the vacuum polarisation induced by conformal fields in an effective field theory approach. We consider arbitrary conformal field theories, assuming only that the theory contains a large number N of fields in order to separate their contribution from the one induced by virtual gravitons. The corrections are described in a gauge-invariant way, classifying the induced metric perturbations around the de Sitter background according to their behaviour under transformations on equal-time hypersurfaces. There are six gauge-invariant modes: two scalar Bardeen potentials, one transverse vector and one transverse traceless tensor, of which one scalar and the vector couple to the spinning particle. The quantum corrections consist of three different parts: a generalisation of the flat-space correction, which is only significant at distances of the order of the Planck length; a constant correction depending on the undetermined parameters of the renormalised effective action; and a term which grows logarithmically with the distance from the particle. This last term is the most interesting, and when resummed gives a modified power law, enhancing the gravitational force at large distances. As a check on the accuracy of our calculation, we recover the linearised Kerr-de Sitter metric in the classical limit and the flat-space quantum correction in the limit of vanishing Hubble constant.
Communication: Two types of flat-planes conditions in density functional theory.
Yang, Xiaotian Derrick; Patel, Anand H G; Miranda-Quintana, Ramón Alain; Heidar-Zadeh, Farnaz; González-Espinoza, Cristina E; Ayers, Paul W
2016-07-21
Using results from atomic spectroscopy, we show that there are two types of flat-planes conditions. The first type of flat-planes condition occurs when the energy as a function of the number of electrons of each spin, Nα and Nβ, has a derivative discontinuity on a line segment where the number of electrons, Nα + Nβ, is an integer. The second type of flat-planes condition occurs when the energy has a derivative discontinuity on a line segment where the spin polarization, Nα - Nβ, is an integer, but does not have a discontinuity associated with an integer number of electrons. Type 2 flat planes are rare: we observed just 15 type 2 flat-planes conditions out of the 4884 cases we tested, but their mere existence has implications for the design of exchange-correlation energy density functionals. To facilitate the development of functionals that have the correct behavior with respect to both fractional number of electrons and fractional spin polarization, we present a dataset for the chromium atom and its ions that can be used to test new functionals.
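In equations (a paraphrase of the standard exact conditions, not taken verbatim from the paper), the one-dimensional fractional-charge condition reads

```latex
% Exact energy for a fractional electron number \nu between integers:
E(N_0 + \nu) = (1 - \nu)\, E(N_0) + \nu\, E(N_0 + 1), \qquad 0 \le \nu \le 1,
% so dE/dN is piecewise constant and jumps only at integer N.
% The flat-planes conditions generalize this to the (N_\alpha, N_\beta) plane:
% E(N_\alpha, N_\beta) is piecewise planar, with slope discontinuities across
%   N_\alpha + N_\beta \in \mathbb{Z}   (type 1, integer electron number)
%   N_\alpha - N_\beta \in \mathbb{Z}   (type 2, integer spin polarization).
```

An approximate functional that smooths out either family of slope discontinuities will misbehave for fractional charges or fractional spins, which is why both types matter for functional design.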
Evaluation of photomask flatness compensation for extreme ultraviolet lithography
NASA Astrophysics Data System (ADS)
Ballman, Katherine; Lee, Christopher; Zimmerman, John; Dunn, Thomas; Bean, Alexander
2016-10-01
As the semiconductor industry continues to strive towards high-volume manufacturing for EUV, flatness specifications for photomasks have decreased to below 10 nm for 2018 production; however, the current champion masks being produced report P-V flatness values of roughly 50 nm. Write compensation presents the promising opportunity to mitigate pattern placement errors through the use of geometrically adjusted target patterns which counteract the reticle's flatness-induced distortions and address the differences in chucking mechanisms between e-beam write and electrostatic clamping during scan. Compensation relies on high-accuracy flatness data which provides the critical topographical components of the reticle to the write tool. Any errors included in the flatness data file are translated to the pattern during the write process, which has now driven flatness measurement tools to target a 6σ reproducibility <1 nm. Using data collected from a 2011 Sematech study on the Alpha Demo Tool, the proposed methodology for write compensation is validated against printed wafer results. Topographic features which lack compensation capability must then be held to stringent specifications in order to limit their contributions to the final image placement error (IPE) at wafer. By understanding the capabilities and limitations of write compensation, it is then possible to shift flatness requirements towards the "non-correctable" portion of the reticle's profile, potentially relieving polishers from having to adhere to the current single-digit flatness specifications.
Selective adsorption of a supramolecular structure on flat and stepped gold surfaces
NASA Astrophysics Data System (ADS)
Peköz, Rengin; Donadio, Davide
2018-04-01
Halogenated aromatic molecules assemble on surfaces forming both hydrogen and halogen bonds. Even though these systems have been intensively studied on flat metal surfaces, high-index vicinal surfaces remain challenging, as they may induce complex adsorbate structures. The adsorption of 2,6-dibromoanthraquinone (2,6-DBAQ) on flat and stepped gold surfaces is studied by means of van der Waals corrected density functional theory. Equilibrium geometries and corresponding adsorption energies are systematically investigated for various adsorption configurations. It is shown that bridge sites and step edges are the preferred adsorption sites for single molecules on flat and stepped surfaces, respectively. The roles of van der Waals interactions, halogen bonds, and hydrogen bonds are explored for a monolayer coverage of 2,6-DBAQ molecules, revealing that molecular flexibility and intermolecular interactions stabilize two-dimensional networks on both flat and stepped surfaces. Our results provide a rationale for the experimental observation of molecular carpeting on high-index vicinal surfaces of transition metals.
32. AERIAL VIEW OF THE ROCKY FLATS PLANT LOOKING NORTHWEST. ...
32. AERIAL VIEW OF THE ROCKY FLATS PLANT LOOKING NORTHWEST. DURING THE 1980S, A NUMBER OF COMPLAINTS CONCERNING SAFETY AND ENVIRONMENTAL ERRORS SURFACED, CULMINATING IN THE 1989 RAID ON THE PLANT BY THE FBI FOR ALLEGED ENVIRONMENTAL INFRACTIONS. THAT SAME YEAR, PRODUCTION AT THE PLANT WAS HALTED FOR CORRECTION OF SAFETY DEFICIENCIES. BY 1991, A SERIES OF EVENTS WORLDWIDE REDUCED THE COLD WAR THREAT, AND IN 1992, THE SECRETARY OF ENERGY ANNOUNCED THAT THE MISSION AT THE PLANT WOULD BE CHANGED TO ENVIRONMENTAL RESTORATION AND WASTE MANAGEMENT, WITH THE GOAL OF CLEANING UP THE PLANT AND SITE (1989). - Rocky Flats Plant, Bounded by Indiana Street & Routes 93, 128 & 72, Golden, Jefferson County, CO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shott, G.
2013-03-18
The Maintenance Plan for the Performance Assessments and Composite Analyses for the Area 3 and Area 5 Radioactive Waste Management Sites at the Nevada Test Site (National Security Technologies, LLC 2007a) requires an annual review to assess the adequacy of the performance assessments (PAs) and composite analyses (CAs), with the results submitted to the U.S. Department of Energy (DOE) Office of Environmental Management. The Disposal Authorization Statements for the Area 3 and Area 5 Radioactive Waste Management Sites (RWMSs) also require that such reviews be made and that secondary or minor unresolved issues be tracked and addressed as part of the maintenance plan (DOE 1999a, 2000). The U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office performed an annual review of the Area 3 and Area 5 RWMS PAs and CAs for fiscal year (FY) 2012. This annual summary report presents data and conclusions from the FY 2012 review, and determines the adequacy of the PAs and CAs. Operational factors (e.g., waste forms and containers, facility design, and waste receipts), closure plans, monitoring results, and research and development (R&D) activities were reviewed to determine the adequacy of the PAs. Likewise, the environmental restoration activities at the Nevada National Security Site (NNSS) relevant to the sources of residual radioactive material that are considered in the CAs, the land-use planning, and the results of the environmental monitoring and R&D activities were reviewed to determine the adequacy of the CAs. Important developments in FY 2012 include the following: release of a special analysis for the Area 3 RWMS assessing the continuing validity of the PA and CA; development of a new Area 5 RWMS closure inventory estimate based on disposals through FY 2012; evaluation of new or revised waste streams by special analysis; and development of version 4.114 of the Area 5 RWMS GoldSim PA model.
The Area 3 RWMS has been in inactive status since July 1, 2006, with the last shipment received in April 2006. The FY 2012 review of operations, facility design, closure plans, monitoring results, and R&D results for the Area 3 RWMS indicates no changes that would impact PA validity. A special analysis using the Area 3 RWMS v2.102 GoldSim PA model was prepared to update the PA results for the Area 3 RWMS in FY 2012. The special analysis concludes that all performance objectives can be met and the Area 3 RWMS PA remains valid. There is no need to revise the Area 3 RWMS PA. Review of Area 5 RWMS operations, design, closure plans, monitoring results, and R&D activities indicates no significant changes other than an increase in the inventory disposed. The FY 2012 PA results, generated with the Area 5 RWMS v4.114 GoldSim PA model, indicate that there continues to be a reasonable expectation of meeting all performance objectives. The results and conclusions of the Area 5 RWMS PA are judged valid, and there is no need to revise the PA. A review of changes potentially impacting the CAs indicates that no significant changes occurred in FY 2012. The continuing adequacy of the CAs was evaluated with the new models, and no significant changes that would alter CA results or conclusions were found. The revision of the Area 3 RWMS CA, which will include the Underground Test Area source term (Corrective Action Unit [CAU] 97), is scheduled for FY 2024, following the completion of the Yucca Flat CAU 97 Corrective Action Decision Document/Corrective Action Plan in FY 2016. Inclusion of the Frenchman Flat CAU 98 results in the Area 5 RWMS CA is scheduled for FY 2016, pending the completion of the CAU 98 closure report in FY 2015. Near-term R&D efforts will focus on continuing development of the Area 3 and Area 5 RWMS GoldSim PA/CA and inventory models.
Optical design with Wood lenses
NASA Astrophysics Data System (ADS)
Caldwell, J. Brian
1991-01-01
Spherical aberration in a flat-surfaced radial gradient-index lens (a Wood lens) with a parabolic index profile can be corrected by altering the profile to include higher-order terms. However, this results in a large amount of third-order coma. This paper presents an alternative method of aberration correction similar to that used in the catadioptric Schmidt system. A Wood lens with a parabolic profile is used to provide all or most of the optical power. Coma is corrected by stop shifting, and spherical aberration is corrected by placing a powerless Wood lens corrector plate at the stop.
Two-Dimensional Thermal Boundary Layer Corrections for Convective Heat Flux Gauges
NASA Technical Reports Server (NTRS)
Kandula, Max; Haddad, George
2007-01-01
This work presents a CFD (Computational Fluid Dynamics) study of two-dimensional thermal boundary layer correction factors for convective heat flux gauges mounted in a flat plate subjected to a surface temperature discontinuity, with variable properties taken into account. A two-equation k-omega turbulence model is considered. Results are obtained for a wide range of Mach numbers (1 to 5), gauge radius ratio, and wall temperature discontinuity. Comparisons are made for correction factors with constant properties and variable properties. It is shown that the variable-property effects on the heat flux correction factors become significant.
Testing large flats with computer generated holograms
NASA Astrophysics Data System (ADS)
Pariani, Giorgio; Tresoldi, Daniela; Spanò, Paolo; Bianco, Andrea
2012-09-01
We describe the optical test of a large flat based on a spherical mirror and a dedicated CGH. The spherical mirror, which can be accurately manufactured and tested in an absolute way, allows us to obtain a quasi-collimated light beam, and the hologram performs the residual wavefront correction. Alignment tools for the spherical mirror and the hologram itself are encoded in the CGH. Sensitivity to fabrication errors and alignment has been evaluated. Tests to verify the effectiveness of our approach are now in progress.
Self-Referencing Hartmann Test for Large-Aperture Telescopes
NASA Technical Reports Server (NTRS)
Korechoff, Robert P.; Oseas, Jeffrey M.
2010-01-01
A method is proposed for end-to-end, full aperture testing of large-aperture telescopes using an innovative variation of a Hartmann mask. This technique is practical for telescopes with primary mirrors tens of meters in diameter and of any design. Furthermore, it is applicable to the entire optical band (near IR, visible, ultraviolet), relatively insensitive to environmental perturbations, and suitable for ambient laboratory as well as thermal-vacuum environments. The only restriction is that the telescope optical axis must be parallel to the local gravity vector during testing. The standard Hartmann test utilizes an array of pencil beams that are cut out of a well-corrected wavefront using a mask. The pencil beam array is expanded to fill the full aperture of the telescope. The detector plane of the telescope is translated back and forth along the optical axis in the vicinity of the nominal focal plane, and the centroid of each pencil beam image is recorded. Standard analytical techniques are then used to reconstruct the telescope wavefront from the centroid data. The expansion of the array of pencil beams is usually accomplished by double passing the beams through the telescope under test. However, this requires a well-corrected autocollimation flat, the diameter of which is approximately equal to that of the telescope aperture. Thus, the standard Hartmann method does not scale well because of the difficulty and expense of building and mounting a well-corrected, large aperture flat. The innovation in the testing method proposed here is to replace the large-aperture, well-corrected, monolithic autocollimation flat with an array of small-aperture mirrors. In addition to eliminating the need for a large optic, the surface figure requirement for the small mirrors is relaxed compared to that required of the large autocollimation flat.
The key point that allows this method to work is that the small mirrors need to operate as a monolithic flat only with regard to tip/tilt and not piston, because in collimated space piston has no effect on the image centroids. The problem of aligning the small mirrors in tip/tilt requires a two-part solution. First, each mirror is suspended from a two-axis gimbal. The orientation of the gimbal is maintained by gravity. Second, the mirror is aligned such that the mirror normal is parallel to the gravity vector. This is accomplished interferometrically in a test fixture. Of course, the test fixture itself needs to be calibrated with respect to gravity.
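The centroid-based reconstruction in the standard Hartmann test described above can be sketched in one dimension: the transverse ray slope of each pencil beam is the least-squares slope of its centroid position versus detector plane position, and the slope field across the pupil is then fit to a low-order aberration model. This is an illustrative sketch with hypothetical function names and a simple tilt-plus-defocus model, not the analysis actually used in the proposal:

```python
import numpy as np

def ray_slopes(z, centroids):
    """Transverse ray slope of each pencil beam from centroids recorded at
    several detector-plane positions along the optical axis.
    z: (nz,) plane positions; centroids: (nz, nbeams) transverse positions."""
    # least-squares slope of centroid vs. z, fitted for all beams at once
    return np.polyfit(z, centroids, 1)[0]

def fit_tilt_defocus(pupil_x, slopes):
    """Fit the 1-D slope field s(x) = tilt + defocus * x; the wavefront
    gradient equals the measured ray slope (simplified aberration model)."""
    defocus, tilt = np.polyfit(pupil_x, slopes, 1)
    return tilt, defocus
```

In practice the reconstruction is two-dimensional and uses a richer basis (e.g., Zernike polynomials), but the slope-fit structure is the same.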
VizieR Online Data Catalog: Optical/NIR light curves of SN 2009ib (Takats+, 2015)
NASA Astrophysics Data System (ADS)
Takats, K.; Pignata, G.; Pumo, M. L.; Paillas, E.; Zampieri, L.; Elias-Rosa, N.; Benetti, S.; Bufano, F.; Cappellaro, E.; Ergon, M.; Fraser, M.; Hamuy, M.; Inserra, C.; Kankare, E.; Smartt, S. J.; Stritzinger, M. D.; van Dyk, S. D.; Haislip, J. B.; Lacluyze, A. P.; Moore, J. P.; Reichart, D.
2017-11-01
Optical photometry was collected using multiple telescopes with UBVRI and u'g'r'i'z' filters, covering the phases between 13 and 262d after explosion. The basic reduction steps of the images (such as bias subtraction, overscan correction, and flat-fielding) were carried out using the standard IRAF tasks. The photometric measurement of the SN was performed using the point-spread function (PSF) fitting technique via the SNOOPY package in IRAF. Near-infrared photometry was obtained using the Rapid Eye Mount (REM) telescope in JH bands. Dithered images of the SN field were taken in multiple sequences of five. The object images were dark- and flat-field corrected and combined to create sky images; the sky images were then subtracted from the object images. The images were then registered and combined. (3 data files).
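The dark/flat correction and sky subtraction steps described above can be sketched in a few lines of NumPy. This is an illustrative sketch of the generic operations (array names and functions are hypothetical), not the IRAF/SNOOPY pipeline actually used:

```python
import numpy as np

def reduce_frame(raw, dark, flat):
    """Dark-subtract and flat-field a single frame."""
    corrected = raw - dark
    norm_flat = flat / np.median(flat)   # normalize the flat to unit median
    return corrected / norm_flat

def sky_subtract(frames):
    """Build a sky image from a dither sequence and subtract it from each
    frame; the median rejects the (moving) source from the sky estimate."""
    sky = np.median(np.stack(frames), axis=0)
    return [f - sky for f in frames]
```

After sky subtraction, the frames would be registered and combined, as in the record above.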
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKee, E.H.
Ground water flow through the region south and west of Frenchman Flat, in the Ash Meadows subbasin of the Death Valley ground water flow system, is controlled mostly by the distribution of permeable and impermeable rocks. Geologic structures such as faults are instrumental in arranging the distribution of the aquifer and aquitard rock units. Most permeability is in fractures caused by faulting in carbonate rocks. Large faults are more likely to reach the potentiometric surface, about 325 meters below the ground surface, and are more likely to affect the flow path than small faults. Thus, field work concentrated on identifying large faults, especially where they cut carbonate rocks. Small faults, however, may develop as much permeability as large faults. Faults that are penetrative and are part of an anastomosing fault zone are particularly important. The overall pattern of faults and joints at the ground surface in the Spotted and Specter Ranges is an indication of the fracture system at the depth of the water table. Most of the faults in these ranges are west-southwest-striking, high-angle faults, 100 to 3500 meters long, with 10 to 300 meters of displacement. Many of them, such as those in the Spotted Range and Rock Valley, are left-lateral strike-slip faults that are conjugate to the NW-striking right-lateral faults of the Las Vegas Valley shear zone. These faults control the ground water flow path, which runs west-southwest beneath the Spotted Range, Mercury Valley and the Specter Range. The Specter Range thrust is a significant geologic structure with respect to ground water flow. This regional thrust fault emplaces siliceous clastic strata into the north central and western parts of the Specter Range.
Temperature-profile methods for estimating percolation rates in arid environments
Constantz, Jim; Tyler, Scott W.; Kwicklis, Edward
2003-01-01
Percolation rates are estimated using vertical temperature profiles from sequentially deeper vadose environments, progressing from sediments beneath stream channels, to expansive basin-fill materials, and finally to deep fractured bedrock underlying mountainous terrain. Beneath stream channels, vertical temperature profiles vary over time in response to downward heat transport, which is generally controlled by conductive heat transport during dry periods, or by advective transport during channel infiltration. During periods of stream-channel infiltration, two relatively simple approaches are possible: a heat-pulse technique, or a heat and liquid-water transport simulation code. Focused percolation rates beneath stream channels are examined for perennial, seasonal, and ephemeral channels in central New Mexico, with estimated percolation rates ranging from 100 to 2100 mm d−1. Deep within basin-fill and underlying mountainous terrain, vertical temperature gradients are dominated by the local geothermal gradient, which creates a profile with decreasing temperatures toward the surface. If simplifying assumptions are employed regarding stratigraphy and vapor fluxes, an analytical solution to the heat transport problem can be used to generate temperature profiles at specified percolation rates for comparison to the observed geothermal gradient. Comparisons to an observed temperature profile in the basin-fill sediments beneath Frenchman Flat, Nevada, yielded water fluxes near zero, with absolute values <10 mm yr−1. For the deep vadose environment beneath Yucca Mountain, Nevada, the complexities of stratigraphy and vapor movement are incorporated into a more elaborate heat and water transport model to compare simulated and observed temperature profiles for a pair of deep boreholes. Best matches resulted in a percolation rate near zero for one borehole and 11 mm yr−1 for the second borehole.
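The conduction-advection comparison underlying the basin-fill estimates can be illustrated with the classic one-dimensional steady-state solution for a column carrying a downward water flux (a Bredehoeft-Papadopulos-type profile): at zero flux the geotherm is linear, and downward percolation bows the profile toward the surface temperature. Parameter values and the function name below are assumptions for illustration, not the model actually fitted in the study:

```python
import numpy as np

def temperature_profile(z, L, T_top, T_bottom, q, lam=1.5, rho_c_w=4.18e6):
    """Steady 1-D conductive-advective temperature profile between fixed
    boundary temperatures (Bredehoeft-Papadopulos-type solution).

    z        depth below surface [m], 0 <= z <= L
    L        column length [m]
    q        downward percolation (Darcy) flux [m/s]; q = 0 gives a linear geotherm
    lam      bulk thermal conductivity [W/m/K] (assumed value)
    rho_c_w  volumetric heat capacity of water [J/m^3/K]
    """
    pe = rho_c_w * q * L / lam            # Peclet number of the column
    if abs(pe) < 1e-12:                   # pure-conduction limit
        return T_top + (T_bottom - T_top) * z / L
    return T_top + (T_bottom - T_top) * np.expm1(pe * z / L) / np.expm1(pe)
```

Comparing such curves for a range of q against a measured profile is what yields the near-zero flux estimates quoted for Frenchman Flat.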
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shott, G.; Yucel, V.; Desotell, L.
2006-07-01
The long-term safety of U.S. Department of Energy (DOE) low-level radioactive waste disposal facilities is assessed by conducting a performance assessment: a systematic analysis that compares estimated risks to the public and the environment with performance objectives contained in DOE Manual 435.1-1, Radioactive Waste Management Manual. Before site operations, facility design features such as final inventory, waste form characteristics, and closure cover design may be uncertain. Site operators need a modeling tool that can be used throughout the operational life of the disposal site to guide decisions regarding the acceptance of problematic waste streams, new disposal cell design, environmental monitoring program design, and final site closure. In response to these needs, the National Nuclear Security Administration Nevada Site Office (NNSA/NSO) has developed a decision support system for the Area 5 Radioactive Waste Management Site in Frenchman Flat on the Nevada Test Site. The core of the system is a probabilistic inventory and performance assessment model implemented in the GoldSim® simulation platform. The modeling platform supports multiple graphic capabilities that allow clear documentation of the model data sources, conceptual model, mathematical implementation, and results. The combined models have the capability to estimate disposal site inventory, contaminant concentrations in environmental media, and radiological doses to members of the public engaged in various activities at multiple locations. The model allows rapid assessment and documentation of the consequences of waste management decisions using the most current site characterization information, radionuclide inventory, and conceptual model.
The model is routinely used to provide annual updates of site performance, evaluate the consequences of disposal of new waste streams, develop waste concentration limits, optimize the design of new disposal cells, and assess the adequacy of environmental monitoring programs. (authors)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Techane, Sirnegeda D.; Baer, Donald R.; Castner, David G.
2011-09-01
Quantitative analysis of the 16-mercaptohexadecanoic acid self-assembled monolayer (C16 COOH-SAM) layer thickness on gold nanoparticles (AuNPs) was performed using simulation of electron spectra for surface analysis (SESSA) and X-ray photoelectron spectroscopy (XPS). XPS measurements of C16 COOH-SAMs on flat gold surfaces were made at 9 different photoelectron take-off angles (5° to 85° in 5° increments), corrected using geometric weighting factors, and then summed together to approximate spherical AuNPs. The SAM thickness and relative surface roughness (RSA) in SESSA were optimized to determine the best agreement between simulated and experimental surface compositions. Based on the glancing-angle results, it was found that inclusion of a hydrocarbon contamination layer on top of the C16 COOH-SAM was necessary to improve the agreement between the SESSA and XPS results. For the C16 COOH-SAMs on flat Au surfaces, using a SAM thickness of 1.1 Å/CH2 group, an RSA of 1.05, and a 1.5 Å CH2-contamination overlayer (total film thickness = 21.5 Å) for the SESSA calculations provided the best agreement with the experimental XPS data. After applying the appropriate geometric corrections and summing the SESSA flat-surface compositions, the best-fit results for the C16 COOH-SAM thickness and surface roughness on the AuNPs were determined to be 0.9 Å/CH2 group and 1.06 RSA with a 1.5 Å CH2-contamination overlayer (total film thickness = 18.5 Å). The three-angstrom difference in SAM thickness between the flat Au and AuNP surfaces suggests the alkyl chains of the SAM are slightly more tilted or disordered on the AuNP surfaces.
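The "weight and sum" step described above can be sketched as follows: flat-surface compositions measured at several take-off angles are combined with geometric weights to mimic a sphere, over which the local take-off angle varies continuously. The specific weight form used here (projected annular area of the sphere zone seen at each local take-off angle) is an assumption for illustration, not the exact factors used in the study:

```python
import numpy as np

def spherical_average(flat_compositions, angles_deg):
    """Approximate a nanoparticle XPS composition by a weighted sum of
    flat-surface compositions measured at several take-off angles.

    flat_compositions: (n_angles, n_elements) atomic fractions
    angles_deg:        (n_angles,) take-off angles in degrees
    """
    theta = np.radians(np.asarray(angles_deg, float))
    w = np.sin(theta) * np.cos(theta)      # assumed geometric weights
    w /= w.sum()                           # normalize weights to unity
    return (np.asarray(flat_compositions) * w[:, None]).sum(axis=0)
```

A sanity check on any weighting scheme is that an angle-independent composition must be returned unchanged.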
Slow-roll corrections in multi-field inflation: a separate universes approach
NASA Astrophysics Data System (ADS)
Karčiauskas, Mindaugas; Kohri, Kazunori; Mori, Taro; White, Jonathan
2018-05-01
In view of cosmological parameters being measured to ever higher precision, theoretical predictions must also be computed to an equally high level of precision. In this work we investigate the impact on such predictions of relaxing some of the simplifying assumptions often used in these computations. In particular, we investigate the importance of slow-roll corrections in the computation of multi-field inflation observables, such as the amplitude of the scalar spectrum Pζ, its spectral tilt ns, the tensor-to-scalar ratio r and the non-Gaussianity parameter fNL. To this end we use the separate universes approach and δN formalism, which allows us to consider slow-roll corrections to the non-Gaussianity of the primordial curvature perturbation as well as corrections to its two-point statistics. In the context of the δN expansion, we divide slow-roll corrections into two categories: those associated with calculating the correlation functions of the field perturbations on the initial flat hypersurface and those associated with determining the derivatives of the e-folding number with respect to the field values on the initial flat hypersurface. Using the results of Nakamura & Stewart '96, corrections of the first kind can be written in a compact form. Corrections of the second kind arise from using different levels of slow-roll approximation in solving for the super-horizon evolution, which in turn corresponds to using different levels of slow-roll approximation in the background equations of motion. We consider four different levels of approximation and apply the results to a few example models. The various approximations are also compared to exact numerical solutions.
Solar cooling system performance, Frenchman's Reef Hotel, Virgin Islands
NASA Astrophysics Data System (ADS)
Harber, H.
1981-09-01
The operational and thermal performance of a variety of solar systems is described. The Solar Cooling System was installed in a hotel at St. Thomas, U.S. Virgin Islands. The system consists of evacuated glass-tube collectors, two 2500-gallon tanks, pumps, a computerized controller, a large solar-optimized, industrial-sized lithium bromide absorption chiller, and associated plumbing. Solar-heated water is pumped through the system to designated public areas such as the lobby, lounges, restaurant, and hallways. Auxiliary heat is provided by steam and a heat exchanger to supplement the solar heat.
Solar cooling system performance, Frenchman's Reef Hotel, Virgin Islands
NASA Technical Reports Server (NTRS)
Harber, H.
1981-01-01
The operational and thermal performance of a variety of solar systems is described. The Solar Cooling System was installed in a hotel at St. Thomas, U.S. Virgin Islands. The system consists of evacuated glass-tube collectors, two 2500-gallon tanks, pumps, a computerized controller, a large solar-optimized, industrial-sized lithium bromide absorption chiller, and associated plumbing. Solar-heated water is pumped through the system to designated public areas such as the lobby, lounges, restaurant, and hallways. Auxiliary heat is provided by steam and a heat exchanger to supplement the solar heat.
Communication: Two types of flat-planes conditions in density functional theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xiaotian Derrick; Patel, Anand H. G.; González-Espinoza, Cristina E.
Using results from atomic spectroscopy, we show that there are two types of flat-planes conditions. The first type of flat-planes condition occurs when the energy as a function of the number of electrons of each spin, Nα and Nβ, has a derivative discontinuity on a line segment where the number of electrons, Nα + Nβ, is an integer. The second type of flat-planes condition occurs when the energy has a derivative discontinuity on a line segment where the spin polarization, Nα − Nβ, is an integer, but does not have a discontinuity associated with an integer number of electrons. Type 2 flat planes are rare (we observed just 15 type 2 flat-planes conditions out of the 4884 cases we tested), but their mere existence has implications for the design of exchange-correlation energy density functionals. To facilitate the development of functionals that have the correct behavior with respect to both fractional number of electrons and fractional spin polarization, we present a dataset for the chromium atom and its ions that can be used to test new functionals.
Effects of Transducer Installation on Unsteady Pressure Measurements on Oscillating Blades
NASA Technical Reports Server (NTRS)
Lepicovsky, Jan
2006-01-01
Unsteady pressures were measured above the suction side of a blade that was oscillated to simulate blade stall flutter. Measurements were made at blade oscillation frequencies up to 500 Hz. Two types of miniature pressure transducers were used: flat, custom-made transducers mounted on the blade surface, and conventional miniature transducers mounted in the blade body. The signals of the surface-mounted transducers are significantly affected by blade acceleration, whereas the signals of body-mounted transducers are practically free of this distortion. A procedure was introduced to correct the signals of surface-mounted transducers to rectify the signal distortion due to blade acceleration. The signals from body-mounted transducers and corrected signals from surface-mounted transducers represent true unsteady pressure signals on the surface of a blade subjected to forced oscillations. However, the use of body-mounted conventional transducers is preferred for the following reasons: no signal corrections are needed for blade acceleration, the conventional transducers are noticeably less expensive than custom-made flat transducers, the survival rate of body-mounted transducers is much higher, and finally, installation of body-mounted transducers does not disturb the blade surface of interest.
DOE Office of Scientific and Technical Information (OSTI.GOV)
John McCord
2007-09-01
This report documents transport data and data analyses for Yucca Flat/Climax Mine CAU 97. The purpose of the data compilation and related analyses is to provide the primary reference to support parameterization of the Yucca Flat/Climax Mine CAU transport model. Specific task objectives were as follows: • Identify and compile currently available transport parameter data and supporting information that may be relevant to the Yucca Flat/Climax Mine CAU. • Assess the level of quality of the data and associated documentation. • Analyze the data to derive expected values and estimates of the associated uncertainty and variability. The scope of this document includes the compilation and assessment of data and information relevant to transport parameters for the Yucca Flat/Climax Mine CAU subsurface within the context of unclassified source-term contamination. Data types of interest include mineralogy, aqueous chemistry, matrix and effective porosity, dispersivity, matrix diffusion, matrix and fracture sorption, and colloid-facilitated transport parameters.
Open-loop correction for an eddy current dominated beam-switching magnet.
Koseki, K; Nakayama, H; Tawada, M
2014-04-01
A beam-switching magnet and the pulsed power supply it requires have been developed for the Japan Proton Accelerator Research Complex. To switch bunched proton beams, the dipole magnetic field must reach its maximum value within 40 ms. In addition, the field flatness should be less than 5 × 10⁻⁴ to guide each bunched beam to the designed orbit. From a magnetic field measurement by using a long search coil, it was found that an eddy current in the thick endplates and laminated core disturbs the rise of the magnetic field. The eddy current also deteriorates the field flatness over the required flat-top period. The measured field flatness was 5 × 10⁻³. By using a double-exponential equation to approximate the measured magnetic field, a compensation pattern for the eddy current was calculated. The integrated magnetic field was measured while using the newly developed open-loop compensation system. A field flatness of less than 5 × 10⁻⁴, which is an acceptable value, was achieved.
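The open-loop scheme described above can be sketched as: fit the measured normalized field rise with a double exponential (two eddy-current time constants), then pre-distort the flat-top current command by the inverse of the fitted response. The product model used here (field ≈ command × response) is a simplification for illustration; the real magnet involves a convolution with the eddy-current dynamics, and all function names and parameters are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

def field_response(t, a, tau1, b, tau2):
    """Normalized field rise with two eddy-current time constants."""
    return 1.0 - a * np.exp(-t / tau1) - b * np.exp(-t / tau2)

def fit_response(t, b_measured, p0=(0.2, 1e-2, 0.02, 3e-2)):
    """Fit the double-exponential model to a measured normalized rise."""
    popt, _ = curve_fit(field_response, t, b_measured, p0=p0)
    return popt

def compensation_pattern(t, popt, i_flat=1.0):
    """Open-loop current pattern: pre-distort the flat-top command by the
    inverse of the fitted response so command x response stays flat."""
    resp = field_response(t, *popt)
    return i_flat / np.clip(resp, 1e-3, None)   # guard against divide-by-zero
```

Driving the magnet with the compensated pattern is what flattened the measured field from 5 × 10⁻³ to below 5 × 10⁻⁴ in the study.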
Open-loop correction for an eddy current dominated beam-switching magnet
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koseki, K., E-mail: kunio.koseki@kek.jp; Nakayama, H.; Tawada, M.
2014-04-15
A beam-switching magnet and the pulsed power supply it requires have been developed for the Japan Proton Accelerator Research Complex. To switch bunched proton beams, the dipole magnetic field must reach its maximum value within 40 ms. In addition, the field flatness should be less than 5 × 10⁻⁴ to guide each bunched beam to the designed orbit. From a magnetic field measurement by using a long search coil, it was found that an eddy current in the thick endplates and laminated core disturbs the rise of the magnetic field. The eddy current also deteriorates the field flatness over the required flat-top period. The measured field flatness was 5 × 10⁻³. By using a double-exponential equation to approximate the measured magnetic field, a compensation pattern for the eddy current was calculated. The integrated magnetic field was measured while using the newly developed open-loop compensation system. A field flatness of less than 5 × 10⁻⁴, which is an acceptable value, was achieved.
Gravitational Lensing Corrections in Flat ΛCDM Cosmology
NASA Astrophysics Data System (ADS)
Kantowski, Ronald; Chen, Bin; Dai, Xinyu
2010-08-01
We compute the deflection angle to order (m/r₀)² and (m/r₀) × Λr₀² for a light ray traveling in a flat ΛCDM cosmology that encounters a completely condensed mass region. We use a Swiss cheese model for the inhomogeneities and find that the most significant correction to the Einstein angle occurs not because of the nonlinear terms but instead because the condensed mass is embedded in a background cosmology. The Swiss cheese model predicts a decrease in the deflection angle of ~2% for weakly lensed galaxies behind the rich cluster A1689 and that the reduction can be as large as ~5% for similar rich clusters at z ≈ 1. Weak-lensing deflection angles caused by galaxies can likewise be reduced by as much as ~4%. We show that the lowest-order correction in which Λ appears is proportional to (m/r₀) × √(Λr₀²) and could cause as much as a ~0.02% increase in the deflection angle for light that passes through a rich cluster. The lowest-order nonlinear correction in the mass is proportional to (m/r₀) × √(m/r₀) and can increase the deflection angle by ~0.005% for weak lensing by galaxies.
NASA Astrophysics Data System (ADS)
Hendi, S. H.; Panahiyan, S.
2014-12-01
Motivated by the string corrections on the gravity and electrodynamics sides, we consider a quadratic Maxwell invariant term as a correction to the Maxwell Lagrangian to obtain exact solutions of higher-dimensional topological black holes in Gauss-Bonnet gravity. We first investigate the asymptotically flat solutions and obtain conserved and thermodynamic quantities which satisfy the first law of thermodynamics. We also analyze the thermodynamic stability of the solutions by calculating the heat capacity and the Hessian matrix. Then, we focus on horizon-flat solutions with an anti-de Sitter (AdS) asymptote and produce a rotating spacetime with a suitable transformation. In addition, we calculate the conserved and thermodynamic quantities for asymptotically AdS black branes, which satisfy the first law of thermodynamics. Finally, we apply a thermodynamic instability criterion in the canonical and grand canonical ensembles to investigate the effects of nonlinear electrodynamics.
Comparison of various techniques for calibration of AIS data
NASA Technical Reports Server (NTRS)
Roberts, D. A.; Yamaguchi, Y.; Lyon, R. J. P.
1986-01-01
The Airborne Imaging Spectrometer (AIS) samples a spectral region that is strongly influenced by decreasing solar irradiance at longer wavelengths and by strong atmospheric absorptions. Four techniques (the Log Residual, the Least Upper Bound Residual, the Flat Field Correction, and calibration using field reflectance measurements) were investigated as means for removing these two features. Of the four techniques, field reflectance calibration proved to be superior in terms of noise and normalization. Of the other three techniques, the Log Residual was superior when applied to areas which did not contain one dominant cover type. In heavily vegetated areas, the Log Residual proved to be ineffective. After removing anomalously bright data values, the Least Upper Bound Residual proved to be almost as effective as the Log Residual in sparsely vegetated areas and much more effective in heavily vegetated areas. Of all the techniques, the Flat Field Correction was the noisiest.
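Two of the scene-based normalizations compared above can be sketched compactly. The Flat Field Correction divides every pixel spectrum by the mean spectrum of a spectrally bland, bright area; the Log Residual removes each pixel's spectral mean (albedo/topography) and each band's scene mean (irradiance/atmosphere) in log space. This is a minimal NumPy illustration with an assumed (rows, cols, bands) cube layout and hypothetical function names:

```python
import numpy as np

def flat_field_correction(cube, mask):
    """Divide each pixel spectrum by the mean spectrum of a spectrally
    'flat' (bright, featureless) region selected by a boolean mask."""
    field = cube[mask].mean(axis=0)            # mean spectrum of the flat field
    return cube / field

def log_residual(cube):
    """Log Residual: subtract per-pixel spectral mean and per-band scene
    mean in log space, then restore the overall mean."""
    logc = np.log(np.clip(cube, 1e-10, None))
    pix_mean = logc.mean(axis=2, keepdims=True)        # per-pixel mean
    band_mean = logc.mean(axis=(0, 1), keepdims=True)  # per-band scene mean
    return np.exp(logc - pix_mean - band_mean + logc.mean())
```

For a purely multiplicative scene (albedo × irradiance), both normalizations remove the band-dependent irradiance term exactly, which is why they suppress the solar and atmospheric features described above.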
Mökkönen, Harri; Ala-Nissila, Tapio; Jónsson, Hannes
2016-09-07
The recrossing correction to the transition state theory estimate of a thermal rate can be difficult to calculate when the energy barrier is flat. This problem arises, for example, in polymer escape if the polymer is long enough to stretch between the initial and final state energy wells while the polymer beads undergo diffusive motion back and forth over the barrier. We present an efficient method for evaluating the correction factor by constructing a sequence of hyperplanes starting at the transition state and calculating the probability that the system advances from one hyperplane to another towards the product. This is analogous to what is done in forward flux sampling except that there the hyperplane sequence starts at the initial state. The method is applied to the escape of polymers with up to 64 beads from a potential well. For high temperature, the results are compared with direct Langevin dynamics simulations as well as forward flux sampling and excellent agreement between the three rate estimates is found. The use of a sequence of hyperplanes in the evaluation of the recrossing correction speeds up the calculation by an order of magnitude as compared with the traditional approach. As the temperature is lowered, the direct Langevin dynamics simulations as well as the forward flux simulations become computationally too demanding, while the harmonic transition state theory estimate corrected for recrossings can be calculated without significant increase in the computational effort.
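The staged-hyperplane idea can be illustrated with a 1D toy: replace the polymer by a single diffusing coordinate, and at each plane estimate the probability of reaching the next plane before retreating to the first one. All parameters below are illustrative; this is a simplified sketch of the staged scheme, not the authors' implementation.

```python
import math
import random

def stage_probability(x_start, x_succeed, x_fail, force, dt=1e-3, kT=0.25,
                      n_traj=200, rng=None):
    """Fraction of overdamped Langevin trajectories launched on one
    hyperplane (a point, in this 1D toy) that reach the next hyperplane
    x_succeed before falling all the way back to x_fail."""
    rng = rng or random.Random(0)
    sigma = math.sqrt(2.0 * kT * dt)          # thermal noise amplitude
    hits = 0
    for _ in range(n_traj):
        x = x_start
        while x_fail < x < x_succeed:
            x += force(x) * dt + sigma * rng.gauss(0.0, 1.0)
        hits += x >= x_succeed
    return hits / n_traj

# Flat barrier top: zero force between the planes, i.e. free diffusion.
# For free diffusion P(hit b before 0 | start at a) = a/b, so the staged
# product telescopes to planes[1]/planes[-1] = 0.25 exactly.
planes = [0.0, 0.25, 0.5, 0.75, 1.0]
rng = random.Random(7)
p = 1.0
for i in range(1, len(planes) - 1):
    p *= stage_probability(planes[i], planes[i + 1], planes[0],
                           lambda x: 0.0, rng=rng)
print(round(p, 2))   # statistical estimate of 0.25
```

The payoff of the staging is that each stage only simulates short hops between neighboring planes, while the product still recovers the full crossing probability, which is why the flat-barrier case becomes tractable.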
NASA Astrophysics Data System (ADS)
Guelton, Nicolas; Lopès, Catherine; Sordini, Henri
2016-08-01
In hot dip galvanizing lines, strip bending around the sink roll generates a flatness defect called crossbow. This defect affects the cross coating weight distribution by changing the knife-to-strip distance along the strip width, and it requires a significant increase in the coating target to prevent any risk of undercoating. The existing coating weight control system succeeds in eliminating both average and skew coating errors but cannot do anything against crossbow coating errors. It has therefore been upgraded with a flatness correction function which takes advantage of the possibility of controlling the electromagnetic stabilizer. The basic principle is to split, for every gage scan, the coating weight cross profile of the top and bottom sides into two components, one linear and one non-linear. The linear component is used to correct the skew error by realigning the knives with the strip, while the non-linear component is used to distort the strip in the stabilizer in such a way that the strip is kept flat between the knives. Industrial evaluation is currently in progress, but the first results have already shown that the strip can be significantly flattened between the knives and the production tolerances subsequently tightened without compromising quality.
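The split of a measured cross profile into linear and non-linear components is, in essence, a least-squares line fit across the strip width: the fitted line carries the average and skew errors, and the residual carries the crossbow signature. A minimal sketch with a hypothetical five-point profile:

```python
def split_profile(y):
    """Split a cross-width coating profile into its best-fit linear part
    (mean + skew, used to realign the knives with the strip) and the
    non-linear residual (the crossbow signature fed to the stabilizer)."""
    n = len(y)
    xs = [i - (n - 1) / 2 for i in range(n)]        # centred width coordinate
    mean = sum(y) / n
    slope = sum(x * v for x, v in zip(xs, y)) / sum(x * x for x in xs)
    linear = [mean + slope * x for x in xs]
    residual = [v - l for v, l in zip(y, linear)]
    return linear, residual

# Hypothetical five-point profile: a skewed baseline plus a centre bump
profile = [9.0, 11.0, 13.0, 12.0, 11.0]
linear, crossbow = split_profile(profile)
print([round(v, 2) for v in crossbow])  # → [-1.2, 0.3, 1.8, 0.3, -1.2]
```

The symmetric residual peaking at mid-width is the kind of shape the stabilizer would be asked to flatten, while the fitted slope drives the knife realignment.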
NASA Astrophysics Data System (ADS)
Marazuela, M. A.; Vázquez-Suñé, E.; Custodio, E.; Palma, T.; García-Gil, A.; Ayora, C.
2018-06-01
Salt flat brines are a major source of minerals and especially lithium. Moreover, valuable wetlands with delicate ecologies are also commonly present at the margins of salt flats. Therefore, the efficient and sustainable exploitation of the brines they contain requires detailed knowledge about the hydrogeology of the system. A critical issue is the freshwater-brine mixing zone, which develops as a result of the mass balance between the recharged freshwater and the evaporating brine. The complex processes occurring in salt flats require a three-dimensional (3D) approach to assess the mixing zone geometry. In this study, a 3D map of the mixing zone in a salt flat is presented, using the Salar de Atacama as an example. This mapping procedure is proposed as the basis of computationally efficient three-dimensional numerical models, provided that the hydraulic heads of freshwater and mixed waters are corrected based on their density variations to convert them into brine heads. After this correction, the locations of lagoons and wetlands that are characteristic of the marginal zones of the salt flats coincide with the regional minimum water (brine) heads. The different morphologies of the mixing zone resulting from this 3D mapping have been interpreted using a two-dimensional (2D) flow and transport numerical model of an idealized cross-section of the mixing zone. The result of the model shows a slope of the mixing zone that is similar to that obtained by 3D mapping and lower than in previous models. To explain this geometry, the 2D model was used to evaluate the effects of heterogeneity in the mixing zone geometry. The higher the permeability of the upper aquifer is, the lower the slope and the shallower the mixing zone become. This occurs because most of the freshwater lateral recharge flows through the upper aquifer due to its much higher transmissivity, thus reducing the freshwater head. 
The presence of a few meters of highly permeable materials in the upper part of these hydrogeological systems, such as alluvial fans or karstified evaporites that are frequently associated with the salt flats, is enough to greatly modify the geometry of the saline interface.
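The head correction mentioned above is, in standard variable-density hydrogeology, a point-wise conversion of a head measured in water of one density into the head of an equivalent column of brine. A minimal sketch, with illustrative densities rather than the values used in the study:

```python
def equivalent_brine_head(h_meas, z_point, rho_meas, rho_brine=1230.0):
    """Convert a head h_meas (m) measured in water of density rho_meas
    (kg/m3) at a point of elevation z_point (m) into the equivalent head of
    a brine column of density rho_brine. Standard variable-density
    correction; the densities are illustrative, not site-specific values."""
    return z_point + (rho_meas / rho_brine) * (h_meas - z_point)

# A freshwater head of 105 m measured at an elevation of 100 m maps to a
# lower equivalent brine head, since brine is ~23% denser than freshwater.
h_b = equivalent_brine_head(105.0, 100.0, 1000.0)
print(round(h_b, 3))
```

Applying this conversion to all freshwater and mixed-water heads puts them on a common brine-head datum, which is why the corrected regional minima line up with the marginal lagoons and wetlands.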
NASA Astrophysics Data System (ADS)
Kegerise, Michael A.; Rufer, Shann J.
2016-08-01
In this paper, we report on the application of the atomic layer thermopile (ALTP) heat-flux sensor to the measurement of laminar-to-turbulent transition in a hypersonic flat-plate boundary layer. The centerline of the flat-plate model was instrumented with a streamwise array of ALTP sensors, and the flat-plate model was exposed to a Mach 6 freestream over a range of unit Reynolds numbers. Here, we observed an unstable band of frequencies that are associated with second-mode instability waves in the laminar boundary layer that forms on the flat-plate surface. The measured frequencies, group velocities, phase speeds, and wavelengths of these instability waves are consistent with data previously reported in the literature. Heat flux time series, and the Morlet wavelet transforms of them, revealed the wave-packet nature of the second-mode instability waves. In addition, a laser-based radiative heating system was used to measure the frequency response functions (FRF) of the ALTP sensors used in the wind tunnel test. These measurements were used to assess the stability of the sensor FRFs over time and to correct spectral estimates for any attenuation caused by the finite sensor bandwidth.
AXAF Alignment Test System Autocollimating Flat Error Correction
NASA Technical Reports Server (NTRS)
Lewis, Timothy S.
1995-01-01
The alignment test system for the Advanced X-ray Astrophysics Facility (AXAF) high-resolution mirror assembly (HRMA) determines the misalignment of the HRMA by measuring the displacement of a beam of light reflected by the HRMA mirrors and an autocollimating flat (ACF). This report shows how to calibrate the system to compensate for errors introduced by the ACF, using measurements taken with the ACF in different positions. It also shows what information can be obtained from alignment test data regarding errors in the shapes of the HRMA mirrors. Simulated results based on measured ACF surface data are presented.
Study of a wide-aperture combined deformable mirror for high-power pulsed phosphate glass lasers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samarkin, V V; Aleksandrov, A G; Romanov, P N
2015-12-31
A deformable mirror with a size of 410 × 468 mm, controlled by bimorph piezoceramic plates and multilayer piezo stacks, is developed. The response functions of individual actuators and measurements of the flatness of the deformable mirror surface are presented. The study of the mirror with an interferometer and a wavefront sensor has shown that it is possible to improve the surface flatness down to a residual roughness of 0.033 μm (RMS). The possibility of correcting beam aberrations in an ultra-high-power laser using the created bimorph mirror is demonstrated.
Takahata, Masahiko; Ito, Manabu; Abumi, Kuniyoshi; Kotani, Yoshihisa; Sudo, Hideki; Ohshima, Shigeki; Minami, Akio
2007-08-01
Retrospective study. To compare the surgical outcomes of posterior translational correction and fusion using hybrid instrumentation systems with either sublaminar Nesplon tape or sublaminar metal wire to treat adolescent idiopathic scoliosis (AIS). Nesplon tape, which consists of a thread of ultra-high molecular weight polyethylene fibers, has advantages over metal wire: (1) its soft and flexible properties avoid neural damage and (2) its flat configuration avoids focal distribution of stresses to the lamina; however, the efficacy of Nesplon tape in the correction of spinal deformity is, as yet, unclear. Thirty AIS patients at a single institution underwent posterior correction and fusion using hybrid instrumentation containing hook, pedicle screw, and either sublaminar polyethylene taping (15) or sublaminar metal wiring (15). Patients were evaluated preoperatively, immediately after surgery, and at a 2-year follow-up according to the radiographic changes in curve correction, operating time, intraoperative blood loss, complications, and the Scoliosis Research Society patient questionnaire (SRS-24) score. The average correction rate was 63.0% in the Nesplon tape group and 59.9% in the metal wire group immediately after surgery (P = 0.62). Fusion was obtained in all the patients without significant correction loss in both groups. There was no significant difference in operative time, intraoperative blood loss, or postoperative SRS-24 scores between the 2 groups. Complications were superficial skin infection in a single patient in the Nesplon tape group, and transient sensory disturbance in 1 patient and temporary superior mesenteric artery syndrome in another patient in the metal wire group. The efficacy of Nesplon tape in correction of deformity is equivalent to that of metal wire, and fusion was completed without significant correction loss. 
The soft and flexible properties and flat configuration of Nesplon tape make this a safe application for the treatment of AIS with bone fragility or with the fusion areas containing the spinal cord.
Chordwise and compressibility corrections to slender-wing theory
NASA Technical Reports Server (NTRS)
Lomax, Harvard; Sluder, Loma
1952-01-01
Corrections to slender-wing theory are obtained by assuming a spanwise distribution of loading and determining the chordwise variation which satisfies the appropriate integral equation. Such integral equations are set up in terms of the given vertical induced velocity on the center line or, depending on the type of wing plan form, its average value across the span at a given chord station. The chordwise distribution is then obtained by solving these integral equations. Results are shown for flat-plate rectangular and triangular wings.
Lightweight, Economical Device Alleviates Drop Foot
NASA Technical Reports Server (NTRS)
Deis, B. C.
1983-01-01
Corrective apparatus alleviates difficulties in walking for victims of drop foot. Elastic line attached to legband provides flexible support to toe of shoe. Device used with flat (heelless) shoes, sneakers, crepe-soled shoes, canvas shoes, and many other types of shoes not usable with short leg brace.
Ultra-Light Precision Membrane Optics
NASA Technical Reports Server (NTRS)
Moore, Jim; Gunter, Kent; Patrick, Brian; Marty, Dave; Bates, Kevin; Gatlin, Romona; Clayton, Bill; Rood, Bob; Brantley, Whitt (Technical Monitor)
2001-01-01
SRS Technologies and NASA Marshall Space Flight Center have conducted a research effort to explore the possibility of developing ultra-lightweight membrane optics for future imaging applications. High precision optical flats and spherical mirrors were produced under this research effort. The thin film mirrors were manufactured using surface replication casting of CPI(Trademark), a polyimide material developed specifically for UV hardness and thermal stability. In the course of this program, numerous polyimide films were cast with surface finishes better than 1.5 nanometers rms and thickness variation of less than 63 nanometers. Precision membrane optical flats were manufactured demonstrating better than 1/13 wave figure error when measured at 633 nanometers. The areal density of these films is 0.037 kilograms per square meter. Several 0.5-meter spherical mirrors were also manufactured. These mirrors had excellent surface finish (1.5 nanometers rms) and figure error on the order of tens of microns. This places their figure error within the demonstrated correctability of advanced wavefront correction technologies such as real time holography.
One-loop quantum gravity repulsion in the early Universe.
Broda, Bogusław
2011-03-11
Perturbative quantum gravity formalism is applied to compute the lowest-order corrections to the classical spatially flat cosmological Friedmann-Lemaître-Robertson-Walker solution (for radiation). The presented approach is analogous to that used to compute quantum corrections to the Coulomb potential in electrodynamics, or rather to that used to compute quantum corrections to the Schwarzschild solution in gravity. In the framework of standard perturbative quantum gravity, it is shown that the corrections to the classical deceleration, coming from the one-loop graviton vacuum polarization (self-energy), are free of the UV cutoff and have repulsive properties, opposite to the classical behavior, which are not negligible in the very early Universe. These repulsive "quantum forces" resemble those known from loop quantum cosmology.
Effect of monitor display on detection of approximal caries lesions in digital radiographs.
Isidor, S; Faaborg-Andersen, M; Hintze, H; Kirkevang, L-L; Frydenberg, M; Haiter-Neto, F; Wenzel, A
2009-12-01
The aim was to compare the accuracy of five flat panel monitors for detection of approximal caries lesions. Five flat panel monitors, Mermaid Ventura (15 inch, colour flat panel, 1024 x 768, 32 bit, analogue), Olórin VistaLine (19 inch, colour, 1280 x 1024, 32 bit, digital), Samsung SyncMaster 203B (20 inch, colour, 1024 x 768, 32 bit, analogue), Totoku ME251i (21 inch, greyscale, 1400 x 1024, 32 bit, digital) and Eizo FlexScan MX190 (19 inch, colour, 1280 x 1024, 32 bit, digital), were assessed. 160 approximal surfaces of human teeth were examined with a storage phosphor plate system (Digora FMX, Soredex) and assessed by seven observers for the presence of caries lesions. Microscopy of the teeth served as validation for the presence/absence of a lesion. The sensitivities varied between observers (range 7-25%) but the variation between the monitors was not large. The Samsung monitor obtained a significantly higher sensitivity than the Mermaid and Olórin monitors (P<0.02) and a lower specificity than the Eizo and Totoku monitors (P<0.05). There were no significant differences between any other monitors. The percentage of correct scores was highest for the Eizo monitor and significantly higher than for the Mermaid and Olórin monitors (P<0.03). There was no clear relationship between the diagnostic accuracy and the resolution or price of the monitor. The Eizo monitor was associated with the overall highest percentage of correct scores. The standard analogue flat panel monitor, Samsung, had higher sensitivity and lower specificity than some of the other monitors, but did not differ in overall accuracy for detection of carious lesions.
NASA Astrophysics Data System (ADS)
Hieber, Simone E.; Khimchenko, Anna; Kelly, Christopher; Mariani, Luigi; Thalmann, Peter; Schulz, Georg; Schmitz, Rüdiger; Greving, Imke; Dominietto, Marco; Müller, Bert
2014-09-01
Hippocampal sclerosis is a common cause of epilepsy, in which a neuronal cell loss of more than 50% is characteristic. If medication fails, the best available treatment is surgical removal of the diseased tissue. To analyze the microanatomy of the diseased tissue, we scanned a human hippocampus extracted from an epilepsy patient. After identification of the degenerated tissue using magnetic resonance imaging, the specimen was reduced in size to fit into a cylindrical container with a diameter of 6 mm. Using synchrotron radiation and grating interferometry, we acquired micro-computed tomography datasets of the specimen. The present study was one of the first successful phase tomography measurements at the imaging beamline P05 (operated by HZG at the PETRA III storage ring, DESY, Hamburg, Germany). Ring and streak artefacts were reduced by enhanced flat-field corrections, combined wavelet-Fourier filters and bilateral filtering. We improved the flat-field correction by considering the correlation between the projections and the flat-field images. In the present study, a correlation measure based on mean squared differences, evaluated on manually determined reference regions, led to the best artefact reduction. A preliminary segmentation of the abnormal tissue reveals that a clinically relevant study requires the development of even more sophisticated artefact-reduction tools or a phase-contrast measurement of higher quality.
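The correlation-based flat-field selection can be sketched as follows: for each projection, pick the flat-field image that minimizes the mean squared difference over a reference region assumed to see only unobstructed beam, then divide. The data and region below are hypothetical, and this is a simplification of the published filtering pipeline.

```python
def best_flat(projection, flats, region):
    """Pick the flat-field frame most similar to a projection, judged by
    the mean squared difference over a reference region (detector indices
    assumed to see only unobstructed beam)."""
    def msd(flat):
        return sum((projection[i] - flat[i]) ** 2 for i in region) / len(region)
    return min(flats, key=msd)

def flat_field_correct(projection, flat, eps=1e-12):
    """Standard flat-field division, guarded against zero flat values."""
    return [p / max(f, eps) for p, f in zip(projection, flat)]

# Toy data: the beam profile drifted between the two recorded flats;
# the reference region (detector edge, pixels 0-1) sees no specimen.
flats = [[1.0, 1.0, 1.0, 1.0], [1.2, 1.2, 1.2, 1.2]]
proj = [1.2, 1.2, 0.6, 0.6]     # edge pixels match the second flat
chosen = best_flat(proj, flats, region=[0, 1])
corrected = flat_field_correct(proj, chosen)
print(corrected)  # → [1.0, 1.0, 0.5, 0.5]
```

Matching each projection to its most-correlated flat compensates for beam drift between flat-field acquisitions, which is the mechanism behind the reduced ring artefacts reported above.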
Kroeck, Marina A; Montes, Jaime
2005-02-28
Culture of native flat oysters Ostrea puelchana d'Orbigny in San Antonio Bay (San Matías Gulf, Argentina) began in 1995. After elevated mortality (33%) occurred in September 1996, 18 mo after immersion, histopathological analysis and evaluation of parasitic prevalence were carried out. In October 1997, after 31 mo of cultivation, cumulative mortality was 80%, and in December of the same year, when individuals reached marketable size, mortality was 95% and culture was discontinued. The present study describes the haemocytic parasitism that affected O. puelchana, and suggests that a Bonamia sp. was the etiological agent. This parasite should be considered a species distinct from the Bonamia sp. detected in Australia and New Zealand until further studies determine its correct taxonomy. This work constitutes the first record of this haemocyte parasite in flat oysters from the Argentinean coast.
Wilmarth, Verl Richard; Healey, D.L.; Clebsch, Alfred; Winograd, I.J.; Zietz, Isadore; Oliver, H.W.
1959-01-01
This report summarizes an interpretation of the geology of Yucca Valley to depths of about 2,300 feet below the surface, the characteristic features of ground water in Yucca and Frenchman Valleys, and the seismic, gravity, and magnetic data for these valleys. Compilation of data, preparation of illustrations, and writing of the report were completed during the period December 26, 1958 to January 10, 1959. Some of the general conclusions must be considered tentative until more data are available. This work was done by the U.S. Geological Survey on behalf of the Albuquerque Operations Office, U.S. Atomic Energy Commission.
NASA Astrophysics Data System (ADS)
Pueyo-Anchuela, Ó.; Casas-Sainz, A. M.; Soriano, M. A.; Pocoví-Juan, A.
Complex shallow subsurface geology represents an important handicap for urban and building projects. The geological features of the Central Ebro Basin, with sharp lateral changes in Quaternary deposits, alluvial karst phenomena and anthropic activity, can prevent future urban areas from being characterized using only isolated geomechanical tests or incorrectly dimensioned geophysical surveys. This complexity is analyzed here in two different test fields: (i) one linked to flat-bottomed valleys with an irregular distribution of Quaternary deposits related to sharp lateral facies changes and an irregular position of the preconsolidated substratum, and (ii) a second with similar complexities in the alluvial deposits plus karst activity linked to dissolution of the underlying evaporite substratum. The results show that different geophysical techniques yield similar geological models in the first case (flat-bottomed valleys), whereas only the combined application of several geophysical techniques can correctly capture the complexities of the geological model in the second case (alluvial karst). In this second case, the geological and surface information makes it possible to refine the sensitivity of the applied geophysical techniques to different indicators of karst activity. In both cases, 3D models are needed to correctly distinguish lateral alluvial sedimentary changes from superimposed karst activity.
Nikolić, Biljana; Martinović, Jelena; Matić, Milan; Stefanović, Đorđe
2018-05-29
Different variables determine the performance of cyclists, which raises the question of how these parameters may help in classifying them by specialty. The aim of the study was to determine differences in cardiorespiratory parameters of male cyclists according to their specialty, flat rider (N=21), hill rider (N=35) and sprinter (N=20), and to obtain a multivariate model for further classification of cyclists by specialty based on the selected variables. Seventeen variables were measured at submaximal and maximum load on the cycle ergometer Cosmed E 400HK (Cosmed, Rome, Italy) (initial 100 W with 25 W increments, 90-100 rpm). Multivariate discriminant analysis was used to determine which variables group cyclists within their specialty, and to predict which variables can direct cyclists to a particular specialty. Among the nine variables that statistically contributed to the discriminant power of the model, power achieved at the anaerobic threshold and CO2 production had the biggest impact. The obtained discriminant model correctly classified 91.43% of flat riders and 85.71% of hill riders, while sprinters were classified completely correctly (100%); overall, 92.10% of examinees were correctly classified, which points to the strength of the discriminant model. Respiratory indicators contributed most to the discriminant power of the model, which may significantly benefit training practice and laboratory testing in the future.
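As a rough illustration of how measured variables can separate specialties, the sketch below uses a nearest-centroid rule on standardized features. This is a simplified stand-in for the multivariate discriminant analysis in the study, and the two features, values and labels are entirely hypothetical.

```python
import statistics

def standardize(rows):
    """Z-score each feature so power (W) and VCO2 (L/min) are comparable."""
    cols = list(zip(*rows))
    mu = [statistics.mean(c) for c in cols]
    sd = [statistics.stdev(c) for c in cols]
    return [[(v - m) / s for v, m, s in zip(r, mu, sd)] for r in rows], mu, sd

def nearest_centroid(train_z, labels, sample, mu, sd):
    """Assign the class whose centroid (in standardized space) is closest."""
    z = [(v - m) / s for v, m, s in zip(sample, mu, sd)]
    groups = {}
    for row, lab in zip(train_z, labels):
        groups.setdefault(lab, []).append(row)
    def dist(lab):
        centroid = [statistics.mean(col) for col in zip(*groups[lab])]
        return sum((a - b) ** 2 for a, b in zip(z, centroid))
    return min(groups, key=dist)

# Hypothetical (power at anaerobic threshold [W], VCO2 [L/min]) pairs
data = [[360, 4.1], [370, 4.2], [300, 3.4], [310, 3.5], [420, 5.0], [430, 5.1]]
labels = ["flat", "flat", "hill", "hill", "sprinter", "sprinter"]
train_z, mu, sd = standardize(data)
pred = nearest_centroid(train_z, labels, [415, 4.9], mu, sd)
print(pred)
```

Standardizing first keeps the high-magnitude power values from swamping the respiratory feature, mirroring why the discriminant analysis weighs variables rather than raw magnitudes.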
DOE Office of Scientific and Technical Information (OSTI.GOV)
James B. Paces; Zell E. Peterman; Kiyoto Futa
2007-08-07
Ground water moving through permeable Paleozoic carbonate rocks represents the most likely pathway for migration of radioactive contaminants from nuclear weapons testing at the Nevada Test Site, Nye County, Nevada. The strontium isotopic composition (87Sr/86Sr) of ground water offers a useful means of testing hydrochemical models of regional flow involving advection and reaction. However, reaction models require knowledge of 87Sr/86Sr data for carbonate rock in the Nevada Test Site vicinity, which is scarce. To fill this data gap, samples of core or cuttings were selected from 22 boreholes at depth intervals from which water samples had been obtained previously around the Nevada Test Site at Yucca Flat, Frenchman Flat, Rainier Mesa, and Mercury Valley. Dilute acid leachates of these samples were analyzed for a suite of major- and trace-element concentrations (MgO, CaO, SiO2, Al2O3, MnO, Rb, Sr, Th, and U) as well as for 87Sr/86Sr. Also presented are unpublished analyses of 114 Paleozoic carbonate samples from outcrops, road cuts, or underground sites in the Funeral Mountains, Bare Mountain, Striped Hills, Specter Range, Spring Mountains, and ranges east of the Nevada Test Site measured in the early 1990's. These data originally were collected to evaluate the potential for economic mineral deposition at the potential high-level radioactive waste repository site at Yucca Mountain and adjacent areas (Peterman and others, 1994). Samples were analyzed for a suite of trace elements (Rb, Sr, Zr, Ba, La, and Ce) in bulk-rock powders, and 87Sr/86Sr in partial digestions of carbonate rock using dilute acid or total digestions of silicate-rich rocks. Pre-Tertiary core samples from two boreholes in the central or western part of the Nevada Test Site also were analyzed. Data are presented in tables and summarized in graphs; however, no attempt is made to interpret results with respect to ground-water flow paths in this report.
Present-day 87Sr/86Sr values are compared to values for Paleozoic seawater present at the time of deposition. Many of the samples have 87Sr/86Sr compositions that remain relatively unmodified from expected seawater values. However, rocks underlying the northern Nevada Test Site as well as rocks exposed at Bare Mountain commonly have elevated 87Sr/86Sr values derived from post-depositional addition of radiogenic Sr, most likely from fluids circulating through rubidium-rich Paleozoic strata or Precambrian basement rocks.
Niedzielski, K; Zwierzchowski, H
1993-01-01
The 3-year study included 469 preschool- and school-age children with flat feet from a district of the city of Lodz. In 2 separate age groups, the influence of exercises and/or hindfoot-supinating inserts on regression of the deformity was assessed. The results were compared at every stage of the study with deformity evaluations in a control group of untreated children. The best results were recorded in children doing exercises and wearing inserts: in 50 percent the deformity regressed. The limited potential for self-correction of this deformity indicates that treatment of all children with flat feet is mandatory.
NASA Technical Reports Server (NTRS)
Gezari, D.; Lyon, R.; Woodruff, R.; Labeyrie, A.; Oegerle, William (Technical Monitor)
2002-01-01
A concept is presented for a large (10-30 meter) sparse-aperture hypertelescope to image extrasolar Earth-like planets from the ground in the presence of atmospheric seeing. The telescope achieves high dynamic range very close to bright stellar sources with good image quality using pupil densification techniques. Active correction of the perturbed wavefront is simplified by using 36 small flat mirrors arranged in a parabolic steerable array structure, eliminating the need for large delay lines, and by operating at near-infrared (1-3 micron) wavelengths with flats comparable in size to the seeing cells.
The Partition Function in the Four-Dimensional Schwarz-Type Topological Half-Flat Two-Form Gravity
NASA Astrophysics Data System (ADS)
Abe, Mitsuko
We derive the partition functions of the Schwarz-type four-dimensional topological half-flat two-form gravity model on a K3 surface or T^4, up to on-shell one-loop corrections. In this model the bosonic moduli spaces describe an equivalence class of a trio of Einstein-Kähler forms (the hyper-Kähler forms). The integrand of the partition function is represented by a product of ∂̄-torsions; the ∂̄-torsion is the extension of the R-torsion for the de Rham complex to that for the ∂̄-complex of a complex analytic manifold.
Clinical implementation of photon beam flatness measurements to verify beam quality.
Goodall, Simon; Harding, Nicholas; Simpson, Jake; Alexander, Louise; Morgan, Steve
2015-11-08
This work describes the replacement of Tissue Phantom Ratio (TPR) measurements with beam profile flatness measurements to determine photon beam quality during routine quality assurance (QA) measurements. To achieve this, a relationship was derived between the existing TPR15/5 energy metric and beam flatness, to provide baseline values and clinically relevant tolerances. The beam quality was varied around two nominal beam energy values for four matched Elekta linear accelerators (linacs) by varying the bending magnet currents and reoptimizing the beam. For each adjusted beam quality the TPR15/5 was measured using an ionization chamber and Solid Water phantom. Two metrics of beam flatness were evaluated using two identical commercial ionization chamber arrays. A linear relationship was found between TPR15/5 and both metrics of flatness, for both nominal energies and on all linacs. Baseline diagonal flatness (FDN) values were measured to be 103.0% (ranging from 102.5% to 103.8%) for 6 MV and 102.7% (ranging from 102.6% to 102.8%) for 10 MV across all four linacs. Clinically acceptable tolerances of ± 2% for 6 MV, and ± 3% for 10 MV, were derived to equate to the current TPR15/5 clinical tolerance of ± 0.5%. Small variations in the baseline diagonal flatness values were observed between ionization chamber arrays; however, the rate of change of TPR15/5 with diagonal flatness was found to remain within experimental uncertainty. Measurements of beam flatness were shown to display an increased sensitivity to variations in the beam quality when compared to TPR measurements. This effect is amplified for higher nominal energy photons. The derivation of clinical baselines and associated tolerances has allowed this method to be incorporated into routine QA, streamlining the process whilst also increasing versatility. In addition, the effect of beam adjustment can be observed in real time, allowing increased practicality during corrective and preventive maintenance interventions.
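Deriving a flatness tolerance from the TPR15/5 tolerance amounts to a linear fit followed by rescaling by the fitted slope. A sketch with hypothetical commissioning points (not the paper's measurements):

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

# Hypothetical commissioning points: diagonal flatness (%) vs TPR15/5
flatness = [102.0, 102.5, 103.0, 103.5, 104.0]
tpr      = [0.6400, 0.6387, 0.6375, 0.6362, 0.6350]
slope, intercept = linear_fit(flatness, tpr)

# Map the clinical TPR15/5 tolerance (+/-0.5% of a 0.6375 baseline) onto an
# equivalent diagonal-flatness tolerance in percentage points.
tol_flatness = 0.005 * 0.6375 / abs(slope)
print(round(slope, 6), round(tol_flatness, 3))
```

A shallow slope (TPR changes little per percent of flatness) yields a wide flatness tolerance, and vice versa, which is why the derived tolerances differ between the 6 MV and 10 MV beams.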
Is flat fielding safe for precision CCD astronomy?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baumer, Michael; Davis, Christopher P.; Roodman, Aaron
The ambitious goals of precision cosmology with wide-field optical surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST) demand precision CCD astronomy as their foundation. This in turn requires an understanding of previously uncharacterized sources of systematic error in CCD sensors, many of which manifest themselves as static effective variations in pixel area. Such variation renders a critical assumption behind the traditional procedure of flat fielding—that a sensor's pixels comprise a uniform grid—invalid. In this work, we present a method to infer a curl-free model of a sensor's underlying pixel grid from flat-field images, incorporating the superposition of all electrostatic sensor effects—both known and unknown—present in flat-field data. We use these pixel grid models to estimate the overall impact of sensor systematics on photometry, astrometry, and PSF shape measurements in a representative sensor from the Dark Energy Camera (DECam) and a prototype LSST sensor. Applying the method to DECam data recovers known significant sensor effects for which corrections are currently being developed within DES. For an LSST prototype CCD with pixel-response non-uniformity (PRNU) of 0.4%, we find the impact of "improper" flat fielding on these observables is negligible in nominal 0.7'' seeing conditions. Furthermore, these errors scale linearly with the PRNU, so for future LSST production sensors, which may have larger PRNU, our method provides a way to assess whether pixel-level calibration beyond flat fielding will be required.
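The reason flat fielding mis-corrects pixel-area variation can be seen in a one-pixel toy model: a uniform flat field fills an enlarged pixel area, but a point source's photon total does not change with pixel area, so dividing by the flat biases the photometry. The bias is linear in the PRNU to first order. A sketch under these simplifying assumptions:

```python
def photometry_error(prnu):
    """Toy model: a pixel whose effective area is (1 + prnu) times nominal
    collects proportionally more light from uniform illumination, so flat
    fielding divides the science frame by (1 + prnu). A point source whose
    photons all land in that pixel is unaffected by the area change, so its
    flat-fielded flux is biased by a factor 1 / (1 + prnu)."""
    flat_response = 1.0 + prnu      # uniform flat fills the larger area
    star_counts = 1000.0            # point source: fixed photon total
    measured = star_counts / flat_response
    return (measured - star_counts) / star_counts

# The fractional bias roughly doubles when the PRNU doubles, illustrating
# the linear scaling noted in the abstract.
for prnu in (0.004, 0.008):
    print(round(photometry_error(prnu), 5))
```

The real analysis models a full curl-free pixel grid rather than a single pixel, but the same area-versus-flux mismatch is the source of the photometric systematic.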
Magnetotelluric Data, Central Yucca Flat, Nevada Test Site, Nevada
DOE Office of Scientific and Technical Information (OSTI.GOV)
J.M. Williams; B.D. Rodriguez, and T.H. Asch
2005-11-23
Nuclear weapons are integral to the defense of the United States. The U.S. Department of Energy, as the steward of these devices, must continue to gauge the efficacy of the individual weapons. This could be accomplished by occasional testing at the Nevada Test Site (NTS) in Nevada, northwest of Las Vegas. Yucca Flat Basin is one of the testing areas at the NTS. One issue of concern is the nature of the somewhat poorly constrained pre-Tertiary geology and its effects on ground-water flow in the area subsequent to a nuclear test. Ground-water modelers would like to know more about the hydrostratigraphy and geologic structure to support a hydrostratigraphic framework model that is under development for the Yucca Flat Corrective Action Unit (CAU). During 2003, the U.S. Geological Survey (USGS) collected and processed magnetotelluric (MT) and audio-magnetotelluric (AMT) data at the Nevada Test Site in and near Yucca Flat to help characterize this pre-Tertiary geology. That work will help to define the character, thickness, and lateral extent of pre-Tertiary confining units. In particular, a major goal has been to define the upper clastic confining unit (UCCU) in the Yucca Flat area. Interpretation will include a three-dimensional (3-D) character analysis and two-dimensional (2-D) resistivity model. The purpose of this report is to release the MT sounding data for Central Yucca Flat, Profile 1, as shown in figure 1. No interpretation of the data is included here.
Magnetotelluric Data, North Central Yucca Flat, Nevada Test Site, Nevada
DOE Office of Scientific and Technical Information (OSTI.GOV)
J.M. Williams; B.D. Rodriguez, and T.H. Asch
2005-11-23
Nuclear weapons are integral to the defense of the United States. The U.S. Department of Energy, as the steward of these devices, must continue to gauge the efficacy of the individual weapons. This could be accomplished by occasional testing at the Nevada Test Site (NTS) in Nevada, northwest of Las Vegas. Yucca Flat Basin is one of the testing areas at the NTS. One issue of concern is the nature of the somewhat poorly constrained pre-Tertiary geology and its effects on ground-water flow in the area subsequent to a nuclear test. Ground-water modelers would like to know more about the hydrostratigraphy and geologic structure to support a hydrostratigraphic framework model that is under development for the Yucca Flat Corrective Action Unit (CAU). During 2003, the U.S. Geological Survey (USGS) collected and processed magnetotelluric (MT) and audio-magnetotelluric (AMT) data at the Nevada Test Site in and near Yucca Flat to help characterize this pre-Tertiary geology. That work will help to define the character, thickness, and lateral extent of pre-Tertiary confining units. In particular, a major goal has been to define the upper clastic confining unit (UCCU) in the Yucca Flat area. Interpretation will include a three-dimensional (3-D) character analysis and two-dimensional (2-D) resistivity model. The purpose of this report is to release the MT sounding data for north central Yucca Flat, Profile 7, as shown in Figure 1. No interpretation of the data is included here.
Magnetotelluric Data, Southern Yucca Flat, Nevada Test Site, Nevada
DOE Office of Scientific and Technical Information (OSTI.GOV)
J.M. Williams; B.D. Rodriguez, and T.H. Asch
2005-11-23
Nuclear weapons are integral to the defense of the United States. The U.S. Department of Energy, as the steward of these devices, must continue to gauge the efficacy of the individual weapons. This could be accomplished by occasional testing at the Nevada Test Site (NTS) in Nevada, northwest of Las Vegas. Yucca Flat Basin is one of the testing areas at the NTS. One issue of concern is the nature of the somewhat poorly constrained pre-Tertiary geology and its effects on ground-water flow in the area subsequent to a nuclear test. Ground-water modelers would like to know more about the hydrostratigraphy and geologic structure to support a hydrostratigraphic framework model that is under development for the Yucca Flat Corrective Action Unit (CAU). During 2003, the U.S. Geological Survey (USGS) collected and processed magnetotelluric (MT) and audio-magnetotelluric (AMT) data at the Nevada Test Site in and near Yucca Flat to help characterize this pre-Tertiary geology. That work will help to define the character, thickness, and lateral extent of pre-Tertiary confining units. In particular, a major goal has been to define the upper clastic confining unit (UCCU) in the Yucca Flat area. Interpretation will include a three-dimensional (3-D) character analysis and two-dimensional (2-D) resistivity model. The purpose of this report is to release the MT sounding data for Southern Yucca Flat, Profile 4, as shown in Figure 1. No interpretation of the data is included here.
Magnetotelluric Data, Northern Yucca Flat, Nevada Test Site, Nevada
DOE Office of Scientific and Technical Information (OSTI.GOV)
J.M. Williams; B.D. Rodriguez, and T.H. Asch
2005-11-23
Nuclear weapons are integral to the defense of the United States. The U.S. Department of Energy, as the steward of these devices, must continue to gauge the efficacy of the individual weapons. This could be accomplished by occasional testing at the Nevada Test Site (NTS) in Nevada, northwest of Las Vegas. Yucca Flat Basin is one of the testing areas at the NTS. One issue of concern is the nature of the somewhat poorly constrained pre-Tertiary geology and its effects on ground-water flow in the area subsequent to a nuclear test. Ground-water modelers would like to know more about the hydrostratigraphy and geologic structure to support a hydrostratigraphic framework model that is under development for the Yucca Flat Corrective Action Unit (CAU). During 2003, the U.S. Geological Survey (USGS) collected and processed magnetotelluric (MT) and audio-magnetotelluric (AMT) data at the Nevada Test Site in and near Yucca Flat to help characterize this pre-Tertiary geology. That work will help to define the character, thickness, and lateral extent of pre-Tertiary confining units. In particular, a major goal has been to define the upper clastic confining unit (UCCU) in the Yucca Flat area. Interpretation will include a three-dimensional (3-D) character analysis and two-dimensional (2-D) resistivity model. The purpose of this report is to release the MT sounding data for Profile 2 (fig. 1), located in the northern Yucca Flat area. No interpretation of the data is included here.
NASA Astrophysics Data System (ADS)
Han, Yu; Liu, Molin
2018-05-01
In the spatially flat case of loop quantum cosmology, the connection is usually replaced by the holonomy in the effective theory. In this paper, instead of the standard scheme, we use a generalised, undetermined function to represent the holonomy, and by requiring an anomaly-free constraint algebra we fix all the counter terms in the constraints and find the restriction on the form of this function. We then derive the gauge-invariant equations of motion of the scalar, tensor and vector perturbations and study the inflationary power spectra with the generalised holonomy correction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moser, Duane P.; Hamilton-Brehm, Scott D.; Fisher, Jenny C.
Due to the legacy of Cold War nuclear weapons testing, the Nevada National Security Site (NNSS, formerly known as the Nevada Test Site (NTS)) contains millions of curies of radioactive contamination. Presented here is a summary of the results of the first comprehensive study of subsurface microbial communities of radioactive and nonradioactive aquifers at this site. To achieve the objectives of this project, cooperative actions between the Desert Research Institute (DRI), the Nevada Field Office of the National Nuclear Security Administration (NNSA), the Underground Test Area Activity (UGTA), and contractors such as Navarro-Interra (NI), were required. Ultimately, fluids from 17 boreholes and two water-filled tunnels were sampled (sometimes on multiple occasions and from multiple depths) from the NNSS, the adjacent Nevada Test and Training Range (NTTR), and a reference hole in the Amargosa Valley near Death Valley. The sites sampled ranged from highly-radioactive nuclear device test cavities to uncontaminated perched and regional aquifers. Specific areas sampled included recharge, intermediate, and discharge zones of a 100,000-km² internally-draining province, known as the Death Valley Regional Flow System (DVRFS), which encompasses the entirety of the NNSS/NTTR and surrounding areas. Specific geological features sampled included: West Pahute and Rainier Mesas (recharge zone), Yucca and Frenchman Flats (transitional zone), and the western edge of the Amargosa Valley near Death Valley (discharge zone). The original overarching question underlying the proposal supporting this work was stated as: Can radiochemically-produced substrates support indigenous microbial communities and subsequently stimulate biocolloid formation that can affect radionuclides in NNSS subsurface nuclear test/detonation sites?
Radioactive and non-radioactive groundwater samples were thus characterized for physical parameters, aqueous geochemistry, and microbial communities using both DNA- and cultivation-based tools in an effort to understand the drivers of microbial community structure (including radioactivity) and microbial interactions with select radionuclides and other factors across the range of habitats surveyed.
NASA Technical Reports Server (NTRS)
Gordon, Howard R.; Wang, Menghua
1992-01-01
The first step in the Coastal Zone Color Scanner (CZCS) atmospheric-correction algorithm is the computation of the Rayleigh-scattering (RS) contribution, L_r, to the radiance leaving the top of the atmosphere over the ocean. In the present algorithm, L_r is computed by assuming that the ocean surface is flat. Calculations of the radiance leaving an RS atmosphere overlying a rough Fresnel-reflecting ocean are presented to evaluate the radiance error caused by the flat-ocean assumption. Simulations are carried out to evaluate the error incurred when the CZCS-type algorithm is applied to a realistic ocean in which the surface is roughened by the wind. In situations where there is no direct sun glitter, it is concluded that the error induced by ignoring the Rayleigh-aerosol interaction is usually larger than that caused by ignoring the surface roughness. This suggests that, in refining algorithms for future sensors, more effort should be focused on dealing with the Rayleigh-aerosol interaction than on the roughness of the sea surface.
NASA Astrophysics Data System (ADS)
Limbacher, J.; Kahn, R. A.
2015-12-01
MISR aerosol optical depth retrievals are fairly robust to small radiometric calibration artifacts, due to the multi-angle observations. However, even small errors in the MISR calibration, especially structured artifacts in the imagery, have a disproportionate effect on the retrieval of aerosol properties from these data. Using MODIS, POLDER-3, AERONET, MAN, and MISR lunar images, we diagnose and correct various calibration and radiometric artifacts found in the MISR radiance (Level 1) data, using empirical image analysis. The calibration artifacts include temporal trends in MISR top-of-atmosphere reflectance at relatively stable desert sites and flat-fielding artifacts detected by comparison to MODIS over bright, low-contrast scenes. The radiometric artifacts include ghosting (as compared to MODIS, POLDER-3, and forward model results) and point-spread function mischaracterization (using the MISR lunar data and MODIS). We minimize the artifacts to the extent possible by parametrically modeling the artifacts and then removing them from the radiance (reflectance) data. Validation is performed using non-training scenes (reflectance comparison), and also by using the MISR Research Aerosol retrieval algorithm results compared to MAN and AERONET.
Hwang, Jae Youn; Wachsmann-Hogiu, Sebastian; Ramanujan, V Krishnan; Nowatzyk, Andreas G.; Koronyo, Yosef; Medina-Kauwe, Lali K.; Gross, Zeev; Gray, Harry B.; Farkas, Daniel L.
2011-01-01
We report fast, non-scanning, wide-field two-photon fluorescence excitation with spectral and lifetime detection for in vivo biomedical applications. We determined the optical characteristics of the technique, developed a Gaussian flat-field correction method to reduce artifacts resulting from non-uniform excitation such that contrast is enhanced, and showed that it can be used for ex vivo and in vivo cellular-level imaging. Two applications were demonstrated: (i) ex vivo measurements of beta-amyloid plaques in retinas of transgenic mice, and (ii) in vivo imaging of sulfonated gallium(III) corroles injected into tumors. We demonstrate that wide-field two-photon fluorescence excitation with flat-field correction provides greater penetration depth as well as better contrast and axial resolution than the corresponding one-photon wide-field excitation for the same dye. Importantly, when this technique is used together with spectral and fluorescence lifetime detection modules, it offers improved discrimination between fluorescence from molecules of interest and autofluorescence, with higher sensitivity and specificity for in vivo applications. PMID:21339880
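The Gaussian flat-field correction mentioned above can be sketched as a division of each image by a modeled 2-D Gaussian excitation profile. This is a minimal illustration under assumed parameters (the image size, sigma, and function names are mine, not the authors'):

```python
# Minimal sketch of Gaussian flat-field correction: model the non-uniform
# wide-field excitation as a centred 2-D Gaussian and divide it out.
# The sigma value and test image are illustrative assumptions.
import math

def gaussian_profile(h, w, sigma):
    """2-D Gaussian centred on the image, peak normalised to 1."""
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    return [[math.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
             for x in range(w)] for y in range(h)]

def flat_field_correct(image, sigma):
    h, w = len(image), len(image[0])
    g = gaussian_profile(h, w, sigma)
    return [[image[y][x] / g[y][x] for x in range(w)] for y in range(h)]

# A uniform fluorophore imaged through a Gaussian excitation profile
# should come out flat again after correction.
truth = [[100.0] * 5 for _ in range(5)]
sigma = 2.0
g = gaussian_profile(5, 5, sigma)
observed = [[truth[y][x] * g[y][x] for x in range(5)] for y in range(5)]
corrected = flat_field_correct(observed, sigma)
print(corrected[0][0], corrected[2][2])
```

In practice the profile would be fitted to a reference image of a uniform sample rather than assumed, but the division step is the same.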
NASA Astrophysics Data System (ADS)
Meyer, Michael; Kalender, Willi A.; Kyriakou, Yiannis
2010-01-01
Scattered radiation is a major source of artifacts in flat detector computed tomography (FDCT) due to the increased irradiated volumes. We propose a fast projection-based algorithm for correction of scatter artifacts. The presented algorithm combines a convolution method to determine the spatial distribution of the scatter intensity distribution with an object-size-dependent scaling of the scatter intensity distributions using a priori information generated by Monte Carlo simulations. A projection-based (PBSE) and an image-based (IBSE) strategy for size estimation of the scanned object are presented. Both strategies provide good correction and comparable results; the faster PBSE strategy is recommended. Even with such a fast and simple algorithm that in the PBSE variant does not rely on reconstructed volumes or scatter measurements, it is possible to provide a reasonable scatter correction even for truncated scans. For both simulations and measurements, scatter artifacts were significantly reduced and the algorithm showed stable behavior in the z-direction. For simulated voxelized head, hip and thorax phantoms, a figure of merit Q of 0.82, 0.76 and 0.77 was reached, respectively (Q = 0 for uncorrected, Q = 1 for ideal). For a water phantom with 15 cm diameter, for example, a cupping reduction from 10.8% down to 2.1% was achieved. The performance of the correction method has limitations in the case of measurements using non-ideal detectors, intensity calibration, etc. An iterative approach to overcome most of these limitations was proposed. This approach is based on root finding of a cupping metric and may be useful for other scatter correction methods as well. By this optimization, cupping of the measured water phantom was further reduced down to 0.9%. The algorithm was evaluated on a commercial system including truncated and non-homogeneous clinically relevant objects.
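The core idea of a projection-based convolution scatter estimate can be sketched in a few lines: convolve each projection with a broad kernel to approximate the spatial scatter distribution, scale it by an object-size-dependent factor, and subtract. The kernel, scale factor, and data below are stand-ins, not the published algorithm's Monte Carlo-derived values:

```python
# Illustrative convolution-based scatter correction for one detector row.
# Kernel and size_scale are assumptions standing in for the a priori
# Monte Carlo-derived quantities described in the abstract.
def convolve1d(signal, kernel):
    """'Same'-size 1-D convolution with zero padding at the edges."""
    n, k = len(signal), len(kernel)
    half = k // 2
    out = []
    for i in range(n):
        acc = 0.0
        for j in range(k):
            idx = i + j - half
            if 0 <= idx < n:
                acc += signal[idx] * kernel[j]
        out.append(acc)
    return out

def scatter_correct(projection, kernel, size_scale):
    scatter = [size_scale * s for s in convolve1d(projection, kernel)]
    # Clamp so the corrected projection stays non-negative.
    return [max(p - s, 0.0) for p, s in zip(projection, scatter)]

proj = [0.0, 1.0, 4.0, 9.0, 4.0, 1.0, 0.0]   # one detector row
kernel = [0.05, 0.2, 0.5, 0.2, 0.05]         # broad scatter kernel
corrected = scatter_correct(proj, kernel, size_scale=0.3)
print(corrected)
```

The projection-based size estimation (PBSE) strategy in the abstract amounts to choosing `size_scale` from the projections themselves rather than from a reconstructed volume.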
SU-E-J-45: The Correlation Between CBCT Flat Panel Misalignment and 3D Image Guidance Accuracy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kenton, O; Valdes, G; Yin, L
Purpose: To simulate the impact of CBCT flat panel misalignment on image quality and on the calculated correction vectors in 3D image-guided proton therapy, and to determine whether these calibration errors can be caught in our QA process. Methods: The X-ray source and detector geometrical calibration (flexmap) file of the CBCT system in the AdaPTinsight software (IBA proton therapy) was edited to induce known changes in the rotational and translational calibrations of the imaging panel. Translations of up to ±10 mm in the x, y and z directions (see supplemental) and rotational errors of up to ±3° were induced. The calibration files were then used to reconstruct the CBCT image of a pancreatic patient and a CatPhan phantom. Correction vectors were calculated for the patient using the software's auto-match system and compared to baseline values. The CatPhan CBCT images were used for quantitative evaluation of image quality for each type of induced error. Results: Translations of 1 to 3 mm in the x and y calibration resulted in corresponding correction vector errors of equal magnitude. Similar 10-mm shifts were seen in the y-direction; however, in the x-direction, the image quality was too degraded for a match. These translational errors can be identified through differences in isocenter from orthogonal kV images taken during routine QA. Errors in the z-direction had no effect on the correction vector or image quality. Rotations of the imaging panel calibration resulted in corresponding correction vector rotations of the patient images. These rotations also resulted in degraded image quality, which can be identified through quantitative image quality metrics. Conclusion: Misalignment of CBCT geometry can lead to incorrect translational and rotational patient correction vectors. These errors can be identified through QA of the imaging isocenter as compared to orthogonal images, combined with monitoring of CBCT image quality.
Unsteady Heat-Flux Measurements of Second-Mode Instability Waves in a Hypersonic Boundary Layer
NASA Technical Reports Server (NTRS)
Kergerise, Michael A.; Rufer, Shann J.
2016-01-01
In this paper we report on the application of the atomic layer thermopile (ALTP) heat-flux sensor to the measurement of laminar-to-turbulent transition in a hypersonic flat plate boundary layer. The centerline of the flat-plate model was instrumented with a streamwise array of ALTP sensors and the flat-plate model was exposed to a Mach 6 freestream over a range of unit Reynolds numbers. Here, we observed an unstable band of frequencies that are associated with second-mode instability waves in the laminar boundary layer that forms on the flat-plate surface. The measured frequencies, group velocities, phase speeds, and wavelengths of these instability waves are in agreement with data previously reported in the literature. Heat flux time series, and the Morlet-wavelet transforms of them, revealed the wave-packet nature of the second-mode instability waves. In addition, a laser-based radiative heating system was developed to measure the frequency response functions (FRF) of the ALTP sensors used in the wind tunnel test. These measurements were used to assess the stability of the sensor FRFs over time and to correct spectral estimates for any attenuation caused by the finite sensor bandwidth.
Dosimetric characteristics of fabricated silica fibre for postal radiotherapy dose audits
NASA Astrophysics Data System (ADS)
Fadzil, M. S. Ahmad; Ramli, N. N. H.; Jusoh, M. A.; Kadni, T.; Bradley, D. A.; Ung, N. M.; Suhairul, H.; Mohd Noor, N.
2014-11-01
The present investigation aims to establish the dosimetric characteristics of a novel fabricated flat fibre TLD system for postal radiotherapy dose audits. Various thermoluminescence (TL) properties have been investigated for five sizes of 6 mol% Ge-doped optical fibres. Key dosimetric characteristics including reproducibility, linearity, fading and energy dependence have been established. Irradiations were carried out using a linear accelerator (linac) and a Cobalt-60 machine. For doses from 0.5 Gy up to 10 Gy, Ge-doped flat fibres exhibit linearity between TL yield and dose, reproducible to better than 8% standard deviation (SD) following repeat measurements (n = 3). For photon energies from 1.25 MeV (Cobalt-60) up to 10 MV, an energy-dependent response is noted, with a coefficient of variation (CV) of less than 40% over the range of energies investigated. For 6.0-mm-long flat fibres, 100 μm thick × 350 μm wide, the TL fading loss following 30 days of storage at room temperature was < 8%. The Ge-doped flat fibre system represents a viable basis for use in postal radiotherapy dose audits, corrections being made for the various factors influencing the TL yield.
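The acceptance checks described above — reproducibility as percent SD over repeat readings, and linearity of TL yield with dose — are simple to express in code. The readings below are made-up numbers for illustration, not the fibres' measured values:

```python
# Sketch of the two dosimetric acceptance checks: reproducibility
# (percent SD over n = 3 repeat readings, criterion < 8%) and linearity
# of TL yield with dose. All readings are illustrative stand-ins.
import statistics

def percent_sd(readings):
    """Sample standard deviation as a percentage of the mean."""
    return 100.0 * statistics.stdev(readings) / statistics.mean(readings)

# Repeat TL readings (arbitrary units) of one fibre at a fixed dose.
repeats = [1480.0, 1512.0, 1465.0]
assert percent_sd(repeats) < 8.0, "fails the 8% reproducibility criterion"

# Linearity: fit TL yield = m * dose (through the origin) and inspect
# the relative residuals over the 0.5-10 Gy range.
doses = [0.5, 1.0, 2.0, 5.0, 10.0]
yields = [370.0, 742.0, 1486.0, 3705.0, 7430.0]
m = sum(d * y for d, y in zip(doses, yields)) / sum(d * d for d in doses)
residuals = [abs(y - m * d) / y for d, y in zip(doses, yields)]
print(f"sensitivity ~ {m:.1f} counts/Gy, worst residual {max(residuals):.3%}")
```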
Optomechanical design and testing of the VLT tertiary mirrors
NASA Astrophysics Data System (ADS)
Bollinger, Wolfgang; Juranek, Hans J.; Schulte, Stefan; May, K.; Michel, Alain
2000-07-01
The tertiary mirrors for the ESO Very Large Telescope project consist of four optical flats (elliptical, 890 × 1260 mm²). The opto-mechanical design is challenging: it provides high overall optical quality combined with high stiffness (70 Hz eigenfrequency) and low mass (180 kg for the complete unit). Schott (Mainz, Germany) produced the lightweight Zerodur blanks; Carl Zeiss designed and manufactured the mirror and its support cell. Finally, it became necessary to install the largest testing equipment for flats in Europe to guarantee scientifically correct verification of the quality of the complete unit. All four mirrors have been delivered to ESO.
Does the Hertz solution estimate pressures correctly in diamond indentor experiments?
NASA Astrophysics Data System (ADS)
Bruno, M. S.; Dunn, K. J.
1986-05-01
The Hertz solution has been widely used to estimate pressures in spherical-indentor-against-flat, matrix-type high-pressure experiments. It is usually assumed that the pressure generated when compressing a sample between the indentor and substrate is the same as that generated when compressing an indentor against a flat surface with no sample present. A non-linear finite element analysis of this problem has shown that the situation is far more complex. The actual peak pressure in the sample is highly dependent on plastic deformation and the change in material properties due to hydrostatic pressure. An analysis with two material models is presented and compared with the Hertz solution.
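For reference, the Hertz solution the abstract questions is, in its textbook form for an elastic sphere pressed against an elastic flat: contact radius a = (3FR/4E*)^(1/3) and peak pressure p0 = 3F/(2πa²). The sketch below evaluates it with illustrative material values (a diamond indentor on a steel flat), not the parameters of the paper's experiments:

```python
# Textbook Hertz contact for a sphere on a flat. Material constants are
# illustrative (diamond vs. steel), chosen only to exercise the formulas.
import math

def effective_modulus(e1, nu1, e2, nu2):
    """Contact modulus E* from the two bodies' Young's moduli and Poisson ratios."""
    return 1.0 / ((1 - nu1 ** 2) / e1 + (1 - nu2 ** 2) / e2)

def hertz_peak_pressure(force, radius, e_star):
    """Return (peak pressure p0, contact radius a) for load F and tip radius R."""
    a = (3.0 * force * radius / (4.0 * e_star)) ** (1.0 / 3.0)
    p0 = 3.0 * force / (2.0 * math.pi * a ** 2)
    return p0, a

e_star = effective_modulus(1140e9, 0.07, 200e9, 0.30)   # Pa
p0, a = hertz_peak_pressure(force=50.0, radius=100e-6, e_star=e_star)
print(f"contact radius {a * 1e6:.2f} um, peak pressure {p0 / 1e9:.1f} GPa")
```

The paper's point is precisely that a sample layer, plasticity, and pressure-dependent material properties make the real peak pressure deviate from this idealized estimate.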
A useful approximation for the flat surface impulse response
NASA Technical Reports Server (NTRS)
Brown, Gary S.
1989-01-01
The flat surface impulse response (FSIR) is a very useful quantity in computing the mean return power for near-nadir-oriented short-pulse radar altimeters. However, for very small antenna beamwidths and relatively large pointing angles, previous analytical descriptions become very difficult to compute accurately. An asymptotic approximation is developed to overcome these computational problems. Since accuracy is of key importance, a condition is developed under which this solution is within 2 percent of the exact answer. The asymptotic solution is shown to be in functional agreement with a conventional clutter power result and gives a 1.25-dB correction to this formula to account properly for the antenna-pattern variation over the illuminated area.
Geometric Corrections for Topographic Distortion from Side Scan Sonar Data Obtained by ANKOU System
NASA Astrophysics Data System (ADS)
Yamamoto, Fujio; Kato, Yukihiro; Ogasawara, Shohei
The ANKOU is a newly developed, full ocean depth, long-range vector side scan sonar system. The system provides real-time vector side scan sonar data to produce backscattering images and bathymetric maps for seafloor swaths up to 10 km on either side of the ship's centerline. Complete geometric corrections are made using towfish attitude data and by removing cross-track distortions known as foreshortening and layover, which are caused by violation of the flat-bottom assumption. Foreshortening and layover refer to pixels which have been placed at an incorrect cross-track distance. Our correction of this topographic distortion is accomplished by interpolating a bathymetric profile and ANKOU phase data. We applied these processing techniques to ANKOU backscattering data obtained off the Boso Peninsula, and confirmed their efficiency and utility for making geometric corrections of side scan sonar data.
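The simplest piece of this geometry is the flat-bottom slant-to-ground range conversion — the very baseline whose violation by real topography produces foreshortening and layover. A minimal sketch (illustrative values, not ANKOU's bathymetry-driven correction, which interpolates a measured depth profile instead of assuming a flat bottom):

```python
# Flat-bottom slant-range to ground-range conversion for side scan sonar.
# Real topographic correction replaces the flat-bottom assumption with an
# interpolated bathymetric profile; this shows only the baseline geometry.
import math

def ground_range(slant_range, towfish_altitude):
    """Cross-track ground range of a pixel under the flat-bottom assumption."""
    if slant_range < towfish_altitude:
        return 0.0   # echo arrives before the first bottom return
    return math.sqrt(slant_range ** 2 - towfish_altitude ** 2)

altitude = 100.0     # towfish height above the seafloor, m
for sr in (100.0, 125.0, 200.0, 500.0):
    print(f"slant {sr:6.1f} m -> ground {ground_range(sr, altitude):6.1f} m")
```

Near nadir the conversion stretches pixels strongly (125 m slant maps to only 75 m ground range here), which is why uncorrected imagery is compressed toward the track line.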
[Astigmatism correction with Excimer laser].
Gauthier, L
2012-03-01
Excimer laser is the best and most widely used technique for astigmatism correction. LASIK is generally preferred to PRK and should be the choice for hyperopic and mixed astigmatisms. Myopic astigmatisms are the easiest cases to treat: the long axis of the photoablation is placed on the flat meridian. Hyperopic and mixed astigmatisms are more difficult to correct because they are more technically demanding and because the optical zone of the photoablation must be large. Flying-spot lasers are the best for these cases. The most important point is to place the photoablation very precisely on the astigmatism axis. The use of eye trackers with iris recognition, or preoperative marking of a reference axis, avoids errors from cyclotorsion or a wrong position of the head. Irregular astigmatisms are better corrected with topography-guided or wavefront-guided photoablations. Copyright © 2012 Elsevier Masson SAS. All rights reserved.
Aging of imaging properties of a CMOS flat-panel detector for dental cone-beam computed tomography
NASA Astrophysics Data System (ADS)
Kim, D. W.; Han, J. C.; Yun, S.; Kim, H. K.
2017-01-01
We have experimentally investigated the long-term stability of imaging properties of a flat-panel detector in conditions used for dental x-ray imaging. The detector consists of a CsI:Tl layer and CMOS photodiode pixel arrays. Aging simulations were carried out using an 80-kVp x-ray beam at an air-kerma rate of approximately 5 mGy s-1 at the entrance surface of the detector with a total air kerma of up to 0.6 kGy. Dark and flood-field images were periodically obtained during irradiation, and the mean signal and noise levels were evaluated for each image. We also evaluated the modulation-transfer function (MTF), noise-power spectrum (NPS), and detective quantum efficiency (DQE). The aging simulation showed a decrease in both the signal and noise of the gain-offset-corrected images, but there was negligible change in the signal-to-noise performance as a function of the accumulated dose. The gain-offset correction for analyzing images resulted in negligible changes in MTF, NPS, and DQE results over the total dose. Continuous x-ray exposure to a detector can cause degradation in physical performance factors such as the detector sensitivity, but linear analysis of the gain-offset-corrected images can assure integrity of the imaging properties of a detector during its lifetime.
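The gain-offset correction referred to above is usually the two-point dark/flood normalization: subtract the dark (offset) frame and divide by each pixel's gain as measured from a flood-field frame. A minimal sketch with made-up pixel values (not the detector data from this study):

```python
# Two-point gain-offset correction from a dark frame and a flood-field
# frame: corrected = (raw - dark) / (flood - dark), rescaled to the mean
# gain so corrected images keep physically sensible signal levels.
def gain_offset_correct(raw, dark, flood):
    n = len(raw)
    gain = [f - d for f, d in zip(flood, dark)]
    mean_gain = sum(gain) / n
    # Normalise each pixel by its own gain; guard against dead pixels.
    return [(r - d) / g * mean_gain if g != 0 else 0.0
            for r, d, g in zip(raw, dark, gain)]

dark  = [10.0, 12.0, 11.0, 9.0]        # offset frame
flood = [510.0, 492.0, 531.0, 489.0]   # flat-exposure frame
raw   = [260.0, 252.0, 271.0, 249.0]   # scene frame
print(gain_offset_correct(raw, dark, flood))
```

Here the raw frame was built with pixel-to-pixel gain variation on top of a uniform scene, so the corrected output comes back uniform — the behavior the aging study relies on when it analyzes gain-offset-corrected images.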
NASA Astrophysics Data System (ADS)
Tansella, Vittorio; Bonvin, Camille; Durrer, Ruth; Ghosh, Basundhara; Sellentin, Elena
2018-03-01
We derive an exact expression for the correlation function in redshift shells including all the relativistic contributions. This expression, which does not rely on the distant-observer or flat-sky approximation, is valid at all scales and includes both local relativistic corrections and integrated contributions, like gravitational lensing. We present two methods to calculate this correlation function, one which makes use of the angular power spectrum C_l(z1, z2) and a second method which evades the costly calculations of the angular power spectra. The correlation function is then used to define the power spectrum as its Fourier transform. In this work, theoretical aspects of this procedure are presented, together with quantitative examples. In particular, we show that gravitational lensing modifies the multipoles of the correlation function and of the power spectrum by a few percent at redshift z=1 and by up to 30% and more at z=2. We also point out that large-scale relativistic effects and wide-angle corrections generate contributions of the same order of magnitude and have consequently to be treated in conjunction. These corrections are particularly important at small redshift, z=0.1, where they can reach 10%. This means in particular that a flat-sky treatment of relativistic effects, using for example the power spectrum, is not consistent.
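The step of defining the power spectrum as the Fourier transform of the correlation function is the standard pair; for an isotropic correlation function it reduces to a one-dimensional integral (a textbook relation stated here for orientation, not the paper's full anisotropic expression):

```latex
P(k) = \int \xi(\mathbf{r})\, e^{-i\mathbf{k}\cdot\mathbf{r}}\, \mathrm{d}^3 r
     \;\xrightarrow{\ \xi(\mathbf{r}) = \xi(r)\ }\;
     4\pi \int_0^\infty \xi(r)\, \frac{\sin kr}{kr}\, r^2\, \mathrm{d}r .
```

Redshift-space distortions and the relativistic terms break isotropy, which is why the paper works with multipoles of both ξ and P rather than this single monopole integral.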
NASA Astrophysics Data System (ADS)
Tanaka, M.; Katsuya, Y.; Matsushita, Y.
2013-03-01
The focused-beam flat-sample method (FFM), a method for high-resolution and rapid synchrotron X-ray powder diffraction measurements combining beam focusing optics, a flat-shaped sample and an area detector, was applied to diffraction experiments exploiting the anomalous scattering effect. The advantages of FFM for anomalous diffraction are absorption correction without approximation, rapid data collection with an area detector, and good signal-to-noise ratio data from the focusing optics. In FFM X-ray diffraction experiments on CoFe2O4 and Fe3O4 using X-rays near the Fe K absorption edge, the anomalous scattering effect between Fe/Co or Fe2+/Fe3+ could be clearly detected through the change of diffraction intensity. The change of observed diffraction intensity with incident X-ray energy was consistent with the calculation. The FFM is expected to become a method for anomalous powder diffraction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tanaka, Masahiko, E-mail: masahiko@spring8.or.jp; Katsuya, Yoshio, E-mail: katsuya@spring8.or.jp; Sakata, Osami, E-mail: SAKATA.Osami@nims.go.jp
2016-07-27
The focused-beam flat-sample method (FFM) is a new approach to synchrotron powder diffraction that combines beam focusing optics, a flat-shaped powder sample and area detectors. The method has advantages for X-ray diffraction experiments applying the anomalous scattering effect (anomalous diffraction): (1) absorption correction without approximation, (2) high-intensity focused incident beams and a high signal-to-noise ratio in the diffracted X-rays, and (3) rapid data collection with area detectors. We applied the FFM to anomalous diffraction experiments and collected synchrotron X-ray powder diffraction data of CoFe{sub 2}O{sub 4} (inverse spinel structure) using X-rays near the Fe K absorption edge, which can distinguish Co and Fe by the anomalous scattering effect. We conducted Rietveld analyses with the obtained powder diffraction data and successfully determined the distribution of Co and Fe ions in the CoFe{sub 2}O{sub 4} crystal structure.
Dynamic metasurface lens based on MEMS technology
NASA Astrophysics Data System (ADS)
Roy, Tapashree; Zhang, Shuyan; Jung, Il Woong; Troccoli, Mariano; Capasso, Federico; Lopez, Daniel
2018-02-01
In recent years, metasurfaces, being flat and lightweight, have been designed to replace bulky optical components with various functions. We demonstrate a monolithic Micro-Electro-Mechanical System (MEMS) integrated with a metasurface-based flat lens that focuses light in the mid-infrared spectrum. A two-dimensional scanning MEMS platform controls the angle of the lens along two orthogonal axes by ±9°, thus enabling dynamic beam steering. The device could be used to compensate for off-axis incident light and thus correct for aberrations such as coma. We show that for low angular displacements, the integrated lens-on-MEMS system does not affect the mechanical performance of the MEMS actuators and preserves the focused beam profile as well as the measured full width at half maximum. We envision a new class of flat optical devices with active control provided by the combination of metasurfaces and MEMS for a wide range of applications, such as miniaturized MEMS-based microscope systems, LIDAR scanners, and projection systems.
Sawicki, R.H.; Sweatt, W.
1985-11-21
A technique for adjustably correcting for astigmatism in a light beam is disclosed herein. This technique defines a flat, rectangular light reflecting surface having opposite reinforced side edges and which is resiliently bendable, to a limited extent, into different concave and/or convex cylindrical curvatures about a particular axis and provides for adjustably bending the light reflecting surface into one of different curvatures depending upon the astigmatism to be corrected and for fixedly maintaining the curvature selected. In the embodiment disclosed, the light reflecting surface is adjustably bendable into the selected cylindrical curvature by application of a particular bending moment to the reinforced side edges of the light reflecting surface.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-21
... First-Class Mail®, Standard Mail®, and Bound Printed Matter (BPM) pieces that are eligible... First-Class Mail, Standard Mail, or BPM prices. This change will coincide with the current Move Update... First-Class Mail, Standard Mail, and BPM full-service pieces. Prices for notices provided after this...
USDA-ARS?s Scientific Manuscript database
This study was designed to determine if the present USDA ARS Spray Nozzle models based on water plus non-ionic surfactant spray solutions could be used to estimate spray droplet size data for different spray formulations through use of experimentally determined correction factors or if full spray fo...
Conformal invariance of (0, 2) sigma models on Calabi-Yau manifolds
NASA Astrophysics Data System (ADS)
Jardine, Ian T.; Quigley, Callum
2018-03-01
Long ago, Nemeschansky and Sen demonstrated that the Ricci-flat metric on a Calabi-Yau manifold could be corrected, order by order in perturbation theory, to produce a conformally invariant (2, 2) nonlinear sigma model. Here we extend this result to (0, 2) sigma models for stable holomorphic vector bundles over Calabi-Yaus.
NASA Astrophysics Data System (ADS)
Mishra, Hiranmaya; Mohanty, Subhendra; Nautiyal, Akhilesh
2012-04-01
In warm inflation models there is the requirement of generating large dissipative couplings of the inflaton with radiation while, at the same time, not destabilising the flatness of the inflaton potential through radiative corrections. One way to achieve this without fine-tuning unrelated couplings is by supersymmetry. In this Letter we show that if the inflaton and other light fields are pseudo-Nambu-Goldstone bosons, then the radiative corrections to the potential are suppressed and the thermal corrections are small as long as the temperature is below the symmetry breaking scale. In such models it is possible to fulfil the contrary requirements of an inflaton potential that is stable under radiative corrections and the generation of a large dissipative coupling of the inflaton field with other light fields. We construct a warm inflation model that gives the observed CMB-anisotropy amplitude and spectral index, with symmetry breaking at the GUT scale.
Ultra-low roughness magneto-rheological finishing for EUV mask substrates
NASA Astrophysics Data System (ADS)
Dumas, Paul; Jenkins, Richard; McFee, Chuck; Kadaksham, Arun J.; Balachandran, Dave K.; Teki, Ranganath
2013-09-01
EUV mask substrates, made of titania-doped fused silica, ideally require sub-Angstrom surface roughness, sub-30 nm flatness, and no bumps/pits larger than 1 nm in height/depth. To achieve these specifications, substrates must undergo iterative global and local polishing processes. Magnetorheological finishing (MRF) is a local polishing technique that can accurately and deterministically correct substrate figure, but it typically results in a higher surface roughness than the current requirements for EUV substrates. We describe a new super-fine MRF® polishing fluid which is able to meet both flatness and roughness specifications for EUV mask blanks. This eases the burden on the subsequent global polishing process by decreasing the polishing time, and hence the defectivity and extent of figure distortion.
Seismic migration in generalized coordinates
NASA Astrophysics Data System (ADS)
Arias, C.; Duque, L. F.
2017-06-01
Reverse time migration (RTM) is a technique widely used nowadays to obtain images of the earth’s sub-surface using artificially produced seismic waves. The technique was developed for zones with a flat surface; when applied to zones with rugged topography, corrections must be introduced to adapt it, which can produce defects in the final image called artifacts. We introduce a simple mathematical map that transforms a scenario with rugged topography into a flat one. The three steps of RTM can then be applied much as in the conventional case, simply by replacing the Laplacian in the acoustic wave equation with a generalized one. We present a test of this technique using the Canadian foothills SEG velocity model.
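A map of the kind described, taking rugged topography to a flat computational domain, can be illustrated with a sigma-type coordinate stretch commonly used for irregular boundaries. This is a generic sketch under assumed names (surface elevation h, model depth H), not the paper's actual transformation:

```python
def to_flat(z, h, H):
    # Map depth z in a column whose rugged free surface sits at z = h
    # (flat model bottom at z = H) onto a flat domain: the surface maps
    # to z' = 0 and the bottom stays at z' = H (sigma-type stretch).
    return H * (z - h) / (H - h)

def to_rugged(zp, h, H):
    # Inverse map, back to physical (rugged) coordinates; needed when
    # transforming the wavefield or the generalized Laplacian back.
    return h + zp * (H - h) / H

# surface point of a column with 120 m of topographic relief
print(to_flat(120.0, 120.0, 3000.0))  # -> 0.0
```

After such a change of variables, the acoustic wave equation keeps its form except that the flat-domain Laplacian acquires metric terms, which is the "generalized Laplacian" the abstract refers to.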
Bypass Transitional Flow Calculations Using a Navier-Stokes Solver and Two-Equation Models
NASA Technical Reports Server (NTRS)
Liuo, William W.; Shih, Tsan-Hsing; Povinelli, L. A. (Technical Monitor)
2000-01-01
Bypass transitional flows over a flat plate were simulated using a Navier-Stokes solver and two-equation models. A new model for bypass transition, which occurs in cases with high free-stream turbulence intensity (TI), is described. The new transition model is developed by adding an intermittency correction function to an existing two-equation turbulence model. The advantages of using Navier-Stokes equations, as opposed to boundary-layer equations, in bypass transition simulations are also illustrated. The results for two test flows over a flat plate with different levels of free-stream turbulence intensity are reported. Comparisons with the experimental measurements show that the new model can capture very well both the onset and the length of bypass transition.
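An intermittency correction function of the kind described typically blends laminar and fully turbulent stresses through the transition region. A hedged sketch using the classical Dhawan-Narasimha distribution; the constant 0.412 and the simple eddy-viscosity scaling are illustrative, not the paper's actual model:

```python
import math

def intermittency(x, x_onset, x_len):
    # Dhawan-Narasimha-style intermittency: gamma = 0 upstream of
    # transition onset, rising toward 1 over a characteristic length.
    if x <= x_onset:
        return 0.0
    xi = (x - x_onset) / x_len
    return 1.0 - math.exp(-0.412 * xi * xi)

def effective_eddy_viscosity(mu_t, gamma):
    # Simplest correction: scale the fully turbulent eddy viscosity of
    # the two-equation model by the local intermittency factor.
    return gamma * mu_t
```

Upstream of onset the flow is treated as laminar (gamma = 0); far downstream gamma approaches 1 and the baseline two-equation model is recovered.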
NASA Astrophysics Data System (ADS)
Freudling, W.; Møller, P.; Patat, F.; Moehler, S.; Romaniello, M.; Jehin, E.; O'Brien, K.; Izzo, C.; Pompei, E.
Photometric calibration observations are routinely carried out with all ESO imaging cameras on every clear night. The nightly zeropoints derived from these observations are accurate to about 10%. Recently, we started the FORS Absolute Photometry Project (FAP) to investigate whether, and how, percent-level absolute photometric accuracy can be achieved with FORS1, and how such photometric calibration can be offered to observers. We found that there are significant differences between the sky-flats and the true photometric response of the instrument, which partially depend on the rotator angle. A second-order correction to the sky-flat significantly improves the relative photometry within the field. We demonstrate the feasibility of percent-level photometry and describe the calibrations necessary to achieve that level of accuracy.
Internal (Annular) and Compressible External (Flat Plate) Turbulent Flow Heat Transfer Correlations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dechant, Lawrence; Smith, Justin
Here we provide a discussion regarding the applicability of a family of traditional heat transfer correlation-based models for several (unit-level) heat transfer problems associated with flight heat transfer estimates and internal flow heat transfer associated with an experimental simulation design (Dobranich 2014). Variability between semi-empirical free-flight models suggests relative differences for heat transfer coefficients on the order of 10%, while the internal annular flow behavior is larger, with differences on the order of 20%. We emphasize that these expressions are strictly valid only for the geometries for which they have been derived, e.g., fully developed annular flow or simple external flow problems. Although the application of flat-plate skin friction estimates to cylindrical bodies is a traditional procedure for estimating skin friction and heat transfer, an over-prediction bias is often observed when using these approximations for missile-type bodies. As a correction for this over-prediction trend, we discuss a simple scaling reduction factor for flat-plate turbulent skin friction and heat transfer solutions (correlations) applied to blunt bodies of revolution at zero angle of attack. The method estimates the ratio between the axisymmetric and 2-D stagnation-point skin friction and Stanton number solution expressions for Reynolds numbers below 1x10^4. This factor is assumed to also directly influence the flat-plate results applied to the cylindrical portion of the flow, and the flat-plate correlations are modified by this factor.
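Flat-plate correlations of the family discussed above are classical power laws, with the Colburn analogy tying the Stanton number to skin friction. This is a generic sketch of such correlations with a placeholder reduction factor, not the report's exact expressions:

```python
def cf_turbulent_flat_plate(re_x):
    # Classical local skin-friction correlation for a turbulent
    # flat-plate boundary layer, roughly valid for 5e5 < Re_x < 1e7.
    return 0.0592 * re_x ** -0.2

def stanton_colburn(re_x, pr):
    # Colburn analogy relating heat transfer to skin friction:
    # St * Pr**(2/3) = Cf / 2.
    return 0.5 * cf_turbulent_flat_plate(re_x) / pr ** (2.0 / 3.0)

def corrected_stanton(st_flat_plate, reduction):
    # The report's idea in miniature: scale the flat-plate result by a
    # geometric reduction factor (value here is a placeholder) before
    # applying it to a blunt body of revolution.
    return reduction * st_flat_plate
```

The over-prediction bias mentioned in the abstract corresponds to a reduction factor below 1 when the flat-plate value is transferred to the cylindrical portion of a missile-type body.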
A nonlinear lag correction algorithm for a-Si flat-panel x-ray detectors
Starman, Jared; Star-Lack, Josh; Virshup, Gary; Shapiro, Edward; Fahrig, Rebecca
2012-01-01
Purpose: Detector lag, or residual signal, in a-Si flat-panel (FP) detectors can cause significant shading artifacts in cone-beam computed tomography reconstructions. To date, most correction models have assumed a linear, time-invariant (LTI) model and correct lag by deconvolution with an impulse response function (IRF). However, the lag correction is sensitive to both the exposure intensity and the technique used for determining the IRF. Even when the LTI correction that produces the minimum error is found, residual artifact remains. A new non-LTI method was developed to take into account the IRF measurement technique and exposure dependencies. Methods: First, a multiexponential (N = 4) LTI model was implemented for lag correction. Next, a non-LTI lag correction, known as the nonlinear consistent stored charge (NLCSC) method, was developed based on the LTI multiexponential method. It differs from other nonlinear lag correction algorithms in that it maintains a consistent estimate of the amount of charge stored in the FP and it does not require intimate knowledge of the semiconductor parameters specific to the FP. For the NLCSC method, all coefficients of the IRF are functions of exposure intensity. Another nonlinear lag correction method that only used an intensity weighting of the IRF was also compared. The correction algorithms were applied to step-response projection data and CT acquisitions of a large pelvic phantom and an acrylic head phantom. The authors collected rising and falling edge step-response data on a Varian 4030CB a-Si FP detector operating in dynamic gain mode at 15 fps at nine incident exposures (2.0%–92% of the detector saturation exposure). For projection data, 1st and 50th frame lag were measured before and after correction. For the CT reconstructions, five pairs of ROIs were defined and the maximum and mean signal differences within a pair were calculated for the different exposures and step-response edge techniques. 
Results: The LTI corrections left residual 1st and 50th frame lag up to 1.4% and 0.48%, while the NLCSC lag correction reduced 1st and 50th frame residual lags to less than 0.29% and 0.0052%. For CT reconstructions, the NLCSC lag correction gave an average error of 11 HU for the pelvic phantom and 3 HU for the head phantom, compared to 14–19 HU and 2–11 HU for the LTI corrections and 15 HU and 9 HU for the intensity weighted non-LTI algorithm. The maximum ROI error was always smallest for the NLCSC correction. The NLCSC correction was also superior to the intensity weighting algorithm. Conclusions: The NLCSC lag algorithm corrected for the exposure dependence of lag, provided superior image improvement for the pelvic phantom reconstruction, and gave similar results to the best case LTI results for the head phantom. The blurred ring artifact that is left over in the LTI corrections was better removed by the NLCSC correction in all cases. PMID:23039642
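The LTI correction described in the Methods above amounts to deconvolving the frame sequence with a known multi-exponential impulse response. A minimal sketch of that step with invented coefficients (the paper's NLCSC method additionally makes the IRF coefficients functions of exposure intensity):

```python
import numpy as np

def lag_correct(y, h):
    # Direct recursive deconvolution of measured frames y with a known
    # IRF h: x[n] = (y[n] - sum_{k>=1} h[k] * x[n-k]) / h[0].
    x = np.zeros(len(y))
    for n in range(len(y)):
        acc = sum(h[k] * x[n - k] for k in range(1, min(n, len(h) - 1) + 1))
        x[n] = (y[n] - acc) / h[0]
    return x

# hypothetical IRF: a prompt term plus two slow exponential "lag" tails
k = np.arange(50)
h = 0.02 * np.exp(-k / 3.0) + 0.005 * np.exp(-k / 20.0)
h[0] = 1.0 - h[1:].sum()                 # trapped charge is released later
x_true = np.where(k < 25, 100.0, 0.0)    # step exposure: beam on, beam off
y = np.convolve(x_true, h)[:len(k)]      # frames contaminated by lag
x_rec = lag_correct(y, h)                # lag-free frames recovered
```

For a time-invariant IRF this inversion is exact; the residual artifacts reported above arise precisely because a real a-Si panel's response is not exposure-independent, which is what motivates the nonlinear NLCSC model.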
NASA Astrophysics Data System (ADS)
Choi, Jang-Hwan; Muller, Kerstin; Hsieh, Scott; Maier, Andreas; Gold, Garry; Levenston, Marc; Fahrig, Rebecca
2016-03-01
C-arm-based cone-beam CT (CBCT) systems with flat-panel detectors are suitable for diagnostic knee imaging due to their potentially flexible selection of CT trajectories and wide volumetric beam coverage. In knee CT imaging, over-exposure artifacts can occur because of limitations in the dynamic range of the flat-panel detectors present on most CBCT systems. We developed a straightforward but effective method for detection and correction of over-exposure for an Automatic Exposure Control (AEC)-enabled standard knee scan, incorporating a prior low-dose scan. The radiation dose associated with the low-dose scan was negligible (0.0042 mSv, a 2.8% increase), which was enabled by partially sampling the projection images, considering the geometry of the knees, and lowering the dose further so that the skin-air interface was just visible. We combined the line integrals from the AEC and low-dose scans after detecting over-exposed regions by comparing the line profiles of the two scans detector row-wise. The combined line integrals were reconstructed into a volumetric image using filtered back projection. We evaluated our method using in vivo human subject knee data. The proposed method effectively detected and corrected over-exposure, and thus recovered the visibility of exterior tissues (e.g., the shape and density of the patella, and the patellar tendon), incorporating a prior low-dose scan with a negligible increase in radiation exposure.
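The combination of line integrals from the two scans can be sketched as a saturation-masked replacement. Names and the threshold are hypothetical, and the actual method compares full detector-row line profiles rather than applying a fixed cutoff:

```python
import numpy as np

def combine_projections(p_standard, p_low_dose, threshold):
    # Line integrals (-log attenuation) that fall below `threshold` in
    # the standard scan are taken to be saturation-truncated (saturated
    # intensity -> too-small line integral) and are replaced by the
    # corresponding values from the prior low-dose scan.
    overexposed = p_standard < threshold
    return np.where(overexposed, p_low_dose, p_standard), overexposed

p_std = np.array([2.1, 1.8, 0.02, 0.01, 1.9])  # two saturated detector pixels
p_low = np.array([2.1, 1.8, 0.90, 0.70, 1.9])  # low-dose scan, not saturated
fixed, mask = combine_projections(p_std, p_low, threshold=0.1)
```

The combined sinogram is then passed unchanged to filtered back projection, as in the pipeline described above.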
NASA Technical Reports Server (NTRS)
Thomas, J. L.; Diskin, B.; Brandt, A.
1999-01-01
The distributed-relaxation multigrid and defect-correction methods are applied to the two-dimensional compressible Navier-Stokes equations. The formulation is intended for high Reynolds number applications and several applications are made at a laminar Reynolds number of 10,000. A staggered-grid arrangement of variables is used; the coupled pressure and internal energy equations are solved together with multigrid, requiring a block 2x2 matrix solution. Textbook multigrid efficiencies are attained for incompressible and slightly compressible simulations of the boundary layer on a flat plate. Textbook efficiencies are obtained for compressible simulations up to Mach numbers of 0.7 for a viscous wake simulation.
NASA Astrophysics Data System (ADS)
Abdelsalam, D. G.; Shaalan, M. S.; Eloker, M. M.; Kim, Daesuk
2010-06-01
In this paper a method is presented to accurately measure the radius of curvature of different types of curved surfaces, with radii of curvature of 38 000, 18 000 and 8000 mm, using multiple-beam interference fringes in reflection. The images captured by the digital detector were corrected by the flat-fielding method. The corrected images were analyzed and the form of the surfaces was obtained. A 3D profile for the three types of surfaces was obtained using Zernike polynomial fitting. Some sources of uncertainty in the measurement were calculated by means of ray-tracing simulations, and the uncertainty budget was estimated to within λ/40.
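Flat fielding of the kind applied to the captured fringe images is conventionally a dark-frame subtraction followed by division with a mean-normalized gain map. A generic sketch on synthetic data (not the paper's actual pipeline or values):

```python
import numpy as np

def flat_field(raw, flat, dark):
    # Classic flat-field correction: subtract the dark frame, then divide
    # by the normalized (flat - dark) gain map to remove pixel-to-pixel
    # response variation of the detector.
    gain = flat.astype(float) - dark
    gain /= gain.mean()                  # normalize gain map to unit mean
    return (raw.astype(float) - dark) / gain

rng = np.random.default_rng(0)
true_gain = 1.0 + 0.2 * rng.random((16, 16))  # non-uniform pixel response
dark = np.full((16, 16), 5.0)                 # dark/offset frame
raw = 100.0 * true_gain + dark                # uniform scene, seen distorted
flat = 200.0 * true_gain + dark               # uniformly illuminated exposure
img = flat_field(raw, flat, dark)             # uniformity restored
```

After correction, a uniformly illuminated scene comes out flat, so residual intensity structure in the fringe images reflects the surface under test rather than the detector.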
Kory Westlund, Jacqueline M; Jeong, Sooyeon; Park, Hae W; Ronfard, Samuel; Adhikari, Aradhana; Harris, Paul L; DeSteno, David; Breazeal, Cynthia L
2017-01-01
Prior research with preschool children has established that dialogic or active book reading is an effective method for expanding young children's vocabulary. In this exploratory study, we asked whether similar benefits are observed when a robot engages in dialogic reading with preschoolers. Given the established effectiveness of active reading, we also asked whether this effectiveness was critically dependent on the expressive characteristics of the robot. For approximately half the children, the robot's active reading was expressive; the robot's voice included a wide range of intonation and emotion (Expressive). For the remaining children, the robot read and conversed with a flat voice, which sounded similar to a classic text-to-speech engine and had little dynamic range (Flat). The robot's movements were kept constant across conditions. We performed a verification study using Amazon Mechanical Turk (AMT) to confirm that the Expressive robot was viewed as significantly more expressive, more emotional, and less passive than the Flat robot. We invited 45 preschoolers with an average age of 5 years who were either English Language Learners (ELL), bilingual, or native English speakers to engage in the reading task with the robot. The robot narrated a story from a picture book, using active reading techniques and including a set of target vocabulary words in the narration. Children were post-tested on the vocabulary words and were also asked to retell the story to a puppet. A subset of 34 children performed a second story retelling 4-6 weeks later. Children reported liking and learning from the robot a similar amount in the Expressive and Flat conditions. However, as compared to children in the Flat condition, children in the Expressive condition were more concentrated and engaged as indexed by their facial expressions; they emulated the robot's story more in their story retells; and they told longer stories during their delayed retelling.
Furthermore, children who responded to the robot's active reading questions were more likely to correctly identify the target vocabulary words in the Expressive condition than in the Flat condition. Taken together, these results suggest that children may benefit more from the expressive robot than from the flat robot.
Exploring the Brighter-fatter Effect with the Hyper Suprime-Cam
NASA Astrophysics Data System (ADS)
Coulton, William R.; Armstrong, Robert; Smith, Kendrick M.; Lupton, Robert H.; Spergel, David N.
2018-06-01
The brighter-fatter effect has been postulated to arise due to the build up of a transverse electric field, produced as photocharges accumulate in the pixels’ potential wells. We investigate the brighter-fatter effect in the Hyper Suprime-Cam by examining flat fields and moments of stars. We observe deviations from the expected linear relation in the photon transfer curve (PTC), luminosity-dependent correlations between pixels in flat-field images, and a luminosity-dependent point-spread function (PSF) in stellar observations. Under the key assumptions of translation invariance and Maxwell’s equations in the quasi-static limit, we give a first-principles proof that the effect can be parameterized by a translationally invariant scalar kernel. We describe how this kernel can be estimated from flat fields and discuss how this kernel has been used to remove the brighter-fatter distortions in Hyper Suprime-Cam images. We find that our correction restores the expected linear relation in the PTCs and significantly reduces, but does not completely remove, the luminosity dependence of the PSF over a wide range of magnitudes.
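The deviation from the expected linear PTC described above is measured from pairs of flat fields; differencing two nominally identical flats cancels fixed-pattern noise. A minimal sketch of one PTC point for an ideal detector (Poisson statistics, unit gain, no brighter-fatter effect), where variance equals mean; the deficit of variance below this line at high flux is the brighter-fatter signature:

```python
import numpy as np

def ptc_point(flat_a, flat_b):
    # One photon-transfer-curve point from a flat pair: the difference
    # image removes fixed-pattern noise, and var(a - b) equals twice the
    # single-frame shot-noise variance.
    mean_signal = 0.5 * (flat_a.mean() + flat_b.mean())
    variance = np.var(flat_a - flat_b) / 2.0
    return mean_signal, variance

rng = np.random.default_rng(42)
a = rng.poisson(2000.0, (512, 512)).astype(float)  # synthetic ideal flats
b = rng.poisson(2000.0, (512, 512)).astype(float)
m, v = ptc_point(a, b)  # for Poisson data, v is approximately m
```

Repeating this over a ladder of exposure levels traces the full PTC; in real CCD data the measured variance falls progressively below the Poisson line as charge accumulates, which is the nonlinearity the kernel correction restores.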
Management of the flexible flat foot in the child: a focus on the use of osteotomies for correction.
Kwon, John Y; Myerson, Mark S
2010-06-01
Pes planus, commonly referred to as flat foot, is a combination of foot and ankle deformities. When faced with this deformity in children, the treating surgeon should use a systematic method of evaluation to distinguish normal variation from true pathology, as well as conditions that have a benign natural history from those that may lead to significant disability if left untreated. Certain deformities will inevitably worsen and therefore require surgery. Common sense clearly supports the indication for a simple procedure, such as an arthroereisis or an osteotomy, performed in the young child, as opposed to an arthrodesis in later adolescence or adulthood as the foot becomes more rigid. Such approaches and other issues are discussed in this article. Copyright 2010 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Maslenikov, I.; Useinov, A.; Birykov, A.; Reshetov, V.
2017-10-01
The instrumented indentation method requires the sample surface to be flat and smooth; thus, measured hardness and elastic modulus values are affected by roughness. A model that accounts for isotropic surface roughness and can be used to correct the data in two limiting cases is proposed. The suggested approach requires the surface roughness parameters to be known.
X-Ray Phase Imaging for Breast Cancer Detection
2012-09-01
the Gerchberg-Saxton algorithm in the Fresnel diffraction regime, and is much more robust against image noise than the TIE-based method. For details... developed efficient coding with the software modules for image registration, flat-field correction, and phase retrievals. In addition, we... X, Liu H. 2010. Performance analysis of the attenuation-partition based iterative phase retrieval algorithm for in-line phase-contrast imaging
NASA Astrophysics Data System (ADS)
Jimenez, H.; Dumas, P.; Ponton, D.; Ferraris, J.
2012-03-01
Invertebrates represent an essential component of coral reef ecosystems; they are ecologically important and a major resource, but their assemblages remain largely unknown, particularly on Pacific islands. Understanding their distribution and building predictive models of community composition as a function of environmental variables therefore constitutes a key issue for resource management. The goal of this study was to define and classify the main environmental factors influencing tropical invertebrate distributions in New Caledonian reef flats and to test the resulting predictive model. Invertebrate assemblages were sampled by visual counting during 2 years and 2 seasons, then coupled to different environmental conditions (habitat composition, hydrodynamics and sediment characteristics) and harvesting status (MPA vs. non-MPA and islets vs. coastal flats). Environmental conditions were described by a principal component analysis (PCA), and contributing variables were selected. Permutational analysis of variance (PERMANOVA) was used to test the effects of different factors (status, flat, year and season) on the invertebrate assemblage composition. Multivariate regression trees (MRT) were then used to hierarchically classify the effects of environmental and harvesting variables. MRT model explained at least 60% of the variation in structure of invertebrate communities. Results highlighted the influence of status (MPA vs. non-MPA) and location (islet vs. coastal flat), followed by habitat composition, organic matter content, hydrodynamics and sampling year. Predicted assemblages defined by indicator families were very different for each environment-exploitation scenario and correctly matched a calibration data matrix. Predictions from MRT including both environmental variables and harvesting pressure can be useful for management of invertebrates in coral reef environments.
Gordon, H R; Wang, M
1992-07-20
The first step in the coastal zone color scanner (CZCS) atmospheric-correction algorithm is the computation of the Rayleigh-scattering contribution, Lr(r), to the radiance leaving the top of the atmosphere over the ocean. In the present algorithm, Lr(r) is computed by assuming that the ocean surface is flat. Computations of the radiance leaving a Rayleigh-scattering atmosphere overlying a rough Fresnel-reflecting ocean are presented to assess the radiance error caused by the flat-ocean assumption. The surface-roughness model is described in detail for both scalar and vector (including polarization) radiative transfer theory. The computations utilizing the vector theory show that the magnitude of the error significantly depends on the assumptions made in regard to the shadowing of one wave by another. In the case of the coastal zone color scanner bands, we show that for moderate solar zenith angles the error is generally below the 1 digital count level, except near the edge of the scan for high wind speeds. For larger solar zenith angles, the error is generally larger and can exceed 1 digital count at some wavelengths over the entire scan, even for light winds. The error in Lr(r) caused by ignoring surface roughness is shown to be the same order of magnitude as that caused by uncertainties of +/- 15 mb in the surface atmospheric pressure or of +/- 50 Dobson units in the ozone concentration. For future sensors, which will have greater radiometric sensitivity, the error caused by the flat-ocean assumption in the computation of Lr(r) could be as much as an order of magnitude larger than the noise-equivalent spectral radiance in certain situations.
Artifact reduction of different metallic implants in flat detector C-arm CT.
Hung, S-C; Wu, C-C; Lin, C-J; Guo, W-Y; Luo, C-B; Chang, F-C; Chang, C-Y
2014-07-01
Flat detector CT has been increasingly used as a follow-up examination after endovascular intervention. Metal artifact reduction has been successfully demonstrated in coil mass cases, but only in a small series. We attempted to objectively and subjectively evaluate the feasibility of metal artifact reduction with various metallic objects and coil lengths. We retrospectively reprocessed the flat detector CT data of 28 patients (15 men, 13 women; mean age, 55.6 years) after they underwent endovascular treatment (20 coiling ± stent placement, 6 liquid embolizers) or shunt drainage (n = 2) between January 2009 and November 2011 by using a metal artifact reduction correction algorithm. We measured CT value ranges and noise by using region-of-interest methods, and 2 experienced neuroradiologists rated the degrees of improved imaging quality and artifact reduction by comparing uncorrected and corrected images. After we applied the metal artifact reduction algorithm, the CT value ranges and the noise were substantially reduced (1815.3 ± 793.7 versus 231.7 ± 95.9 and 319.9 ± 136.6 versus 45.9 ± 14.0; both P < .001) regardless of the types of metallic objects and various sizes of coil masses. The rater study achieved an overall improvement of imaging quality and artifact reduction (85.7% and 78.6% of cases by 2 raters, respectively), with the greatest improvement in the coiling group, moderate improvement in the liquid embolizers, and the smallest improvement in ventricular shunting (overall agreement, 0.857). The metal artifact reduction algorithm substantially reduced artifacts and improved the objective image quality in every studied case. It also allowed improved diagnostic confidence in most cases. © 2014 by American Journal of Neuroradiology.
Sawicki, Richard H.; Sweatt, William
1987-01-01
A technique for adjustably correcting for astigmatism in a light beam is disclosed herein. This technique utilizes first means which defines a flat, rectangular light reflecting surface having opposite reinforced side edges and which is resiliently bendable, to a limited extent, into different concave and/or convex cylindrical curvatures about a particular axis and second means acting on the first means for adjustably bending the light reflecting surface into a particular selected one of the different curvatures depending upon the astigmatism to be corrected for and for fixedly maintaining the curvature selected. In the embodiment disclosed, the light reflecting surface is adjustably bendable into the selected cylindrical curvature by application of a particular bending moment to the reinforced side edges of the light reflecting surface.
Reflectance calibration and shadow effect of VNIS spectra acquired by the Yutu rover
NASA Astrophysics Data System (ADS)
Hu, Sen; Lin, Yang-Ting; Liu, Bin; Yang, Wei; He, Zhi-Ping; Xing, Wei-Fan
2015-09-01
Yutu is the first lunar rover after the Apollo program and Luna missions. One of the payloads on the Yutu rover, the Visible and Near-infrared Imaging Spectrometer (VNIS), has acquired four VIS/NIR images and SWIR spectra near its landing site in Mare Imbrium. The radiance images were reduced by repairing bad lines and bad points and applying a flat-field correction, and were then converted into reflectance values based on the solar irradiance and angles of incidence. A significant shadow effect was observed in the VIS/NIR images. The shadowed regions show lower reflectance, with a darkening trend compared with illuminated regions. The reflectance increased by up to 24% for entire images and 17% for the VIS/NIR-SWIR overlapping regions after shadow correction. The correction for the shadow effect will remarkably decrease the estimate of FeO content, by up to 4.9 wt.% in this study. The derived FeO contents of CD-005∼008 after shadow correction are around 18.0 wt.%.
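The radiance-to-reflectance conversion described is conventionally the Lambertian relation R = pi * L / (E * cos i). The sketch below shows that generic formula only; it is not necessarily the exact VNIS calibration pipeline, and the numbers are illustrative:

```python
import math

def radiance_to_reflectance(radiance, solar_irradiance, incidence_deg):
    # Apparent (Lambertian-equivalent) reflectance from calibrated
    # radiance, solar irradiance at the target, and incidence angle.
    return (math.pi * radiance
            / (solar_irradiance * math.cos(math.radians(incidence_deg))))

# consistency check: a Lambertian surface of reflectance 0.18
E, i = 1360.0, 40.0
L = 0.18 * E * math.cos(math.radians(i)) / math.pi
print(round(radiance_to_reflectance(L, E, i), 6))  # -> 0.18
```

The cos(i) term is also why shadowed pixels, which are lit only by scattered light rather than direct solar illumination, come out with artificially low reflectance until a shadow correction is applied.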
Schaufele, Fred
2013-01-01
Förster resonance energy transfer (FRET) between fluorescent proteins (FPs) provides insights into the proximities and orientations of FPs as surrogates of the biochemical interactions and structures of the factors to which the FPs are genetically fused. As powerful as FRET methods are, technical issues have impeded their broad adoption in the biologic sciences. One hurdle to accurate and reproducible FRET microscopy measurement stems from variable fluorescence backgrounds both within a field and between different fields. Those variations introduce errors into the precise quantification of fluorescence levels on which the quantitative accuracy of FRET measurement is highly dependent. This measurement error is particularly problematic for screening campaigns since minimal well-to-well variation is necessary to faithfully identify wells with altered values. High content screening depends also upon maximizing the numbers of cells imaged, which is best achieved by low magnification high throughput microscopy. But, low magnification introduces flat-field correction issues that degrade the accuracy of background correction to cause poor reproducibility in FRET measurement. For live cell imaging, fluorescence of cell culture media in the fluorescence collection channels for the FPs commonly used for FRET analysis is a high source of background error. These signal-to-noise problems are compounded by the desire to express proteins at biologically meaningful levels that may only be marginally above the strong fluorescence background. Here, techniques are presented that correct for background fluctuations. Accurate calculation of FRET is realized even from images in which a non-flat background is 10-fold higher than the signal. PMID:23927839
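A non-flat background correction of the kind described can be illustrated by fitting a low-order surface to signal-free pixels and subtracting it everywhere. This sketch fits a tilted plane to synthetic data; the mask, shapes, and values are invented for illustration:

```python
import numpy as np

def plane_background(img, mask):
    # Least-squares fit of a tilted plane a*x + b*y + c to pixels flagged
    # True in `mask` (assumed free of cells/signal), evaluated over the
    # whole field; subtracting it flattens a non-uniform background.
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    A = np.column_stack([xx[mask], yy[mask], np.ones(int(mask.sum()))])
    coef, *_ = np.linalg.lstsq(A, img[mask], rcond=None)
    return coef[0] * xx + coef[1] * yy + coef[2]

yy, xx = np.mgrid[0:64, 0:64]
background = 0.05 * xx - 0.02 * yy + 30.0      # slowly varying background
signal = np.zeros((64, 64))
signal[20:30, 20:30] = 3.0                     # dim "cell" barely above background
img = background + signal
mask = signal == 0.0                           # background-only pixels
corrected = img - plane_background(img, mask)  # signal recovered on flat zero
```

For strongly curved backgrounds a higher-order polynomial can replace the plane, but the fit-and-subtract structure, and the need for reliable signal-free pixels, stays the same.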
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marutzky, Sam J.; Andrews, Robert
The peer review team commends the Navarro-Intera, LLC (N-I), team for its efforts in using limited data to model the fate of radionuclides in groundwater at Yucca Flat. Recognizing the key uncertainties and related recommendations discussed in Section 6.0 of this report, the peer review team has concluded that the U.S. Department of Energy (DOE) is ready to transition to model evaluation studies in the corrective action decision document (CADD)/corrective action plan (CAP) stage. The DOE, National Nuclear Security Administration Nevada Field Office (NNSA/NFO) clarified the charge to the peer review team in a letter dated October 9, 2014, from Bill R. Wilborn, NNSA/NFO Underground Test Area (UGTA) Activity Lead, to Sam J. Marutzky, N-I UGTA Project Manager: “The model and supporting information should be sufficiently complete that the key uncertainties can be adequately identified such that they can be addressed by appropriate model evaluation studies. The model evaluation studies may include data collection and model refinements conducted during the CADD/CAP stage. One major input to identifying ‘key uncertainties’ is the detailed peer review provided by independent qualified peers.” The key uncertainties that the peer review team recognized, and the potential concerns associated with each, are outlined in Section 6.0, along with recommendations corresponding to each uncertainty. The uncertainties, concerns, and recommendations are summarized in Table ES-1. The number associated with each concern refers to the section in this report where the concern is discussed in detail.
NASA Astrophysics Data System (ADS)
Kurzweil, P.
In 1860, the Frenchman Gaston Planté (1834-1889) invented the first practical version of a rechargeable battery based on lead-acid chemistry, the most successful secondary battery of all time. This article outlines Planté's fundamental concepts that were decisive for the later development of practical lead-acid batteries. The 'pile secondaire' was indeed ahead of its time, in that no appropriate appliance for charging the accumulator was yet available; industrial success came only after the invention of the Gramme machine. In 1879, Planté obtained acceptance for his work by publishing a book entitled Recherches sur l'Electricité. He never protected his inventions by patents, and he spent much of his fortune assisting impoverished scientists.
Steelhead Supplementation Studies; Steelhead Supplementation in Idaho Rivers, Annual Report 2002.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Byrne, Alan
The Steelhead Supplementation Study (SSS) has two broad objectives: (1) investigate the feasibility of supplementing depressed wild and natural steelhead populations using hatchery populations, and (2) describe the basic life history and genetic characteristics of wild and natural steelhead populations in the Salmon and Clearwater Basins. Idaho Department of Fish and Game (IDFG) personnel stocked adult steelhead from Sawtooth Fish Hatchery into Frenchman and Beaver creeks and have estimated the number of age-1 parr produced from the outplants since 1993. On May 2, 2002, both Beaver and Frenchman creeks were stocked with hatchery adult steelhead. An SSS crew snorkeled the creeks in August 2002 to estimate the abundance of age-1 parr from brood year (BY) 2001. I estimated that the yield of age-1 parr per female stocked in 2001 was 7.3 and 6.7 in Beaver and Frenchman creeks, respectively. SSS crews stocked Dworshak hatchery stock fingerlings and smolts from 1993 to 1999 in the Red River drainage to assess which life stage produces more progeny when the adults return to spawn. In 2002, Clearwater Fish Hatchery personnel operated the Red River weir to trap adults that returned from these stockings. Twelve PIT-tagged adults from the smolt releases and one PIT-tagged adult from the fingerling releases were detected during their migration up the mainstem Columbia and Snake rivers, but none from either group were caught at the weir. The primary focus of the study has been monitoring and collecting life history information from wild steelhead populations. An adult weir has been operated annually since 1992 in Fish Creek, a tributary of the Lochsa River. The weir was damaged by a rain-on-snow event in April 2002; although the weir remained intact, some adults were able to swim through it undetected. Despite the damage, trap tenders captured 167 adult steelhead, the most fish since 1993. The maximum likelihood estimate of adult steelhead escapement was 242.
A screw trap has been operated annually in Fish Creek since 1994 to estimate the number of emigrating parr and smolts. I estimated that 18,687 juvenile steelhead emigrated from Fish Creek in 2002, the lowest number of migrants since 1998. SSS crews snorkeled three streams in the Selway River drainage and 10 streams in the Lochsa River drainage to estimate juvenile steelhead densities. The densities of age-1 steelhead parr declined in all streams compared to the densities observed in 2001. The age-1 densities in Fish Creek and Gedney Creek were the lowest observed since this project began monitoring those populations in 1994. The SSS crews and other cooperators tagged more than 12,000 juvenile steelhead with passive integrated transponder (PIT) tags in 2002. In 2002, technicians mounted and aged steelhead scales that were collected from 1998 to 2001. A consensus was reached among technicians on the ages of steelhead juveniles from Fish Creek. Scales collected in other streams were aged by at least one reader; however, before a final age is assigned to these fish, the age needs to be verified by another reader and any age differences among readers resolved. Dr. Jennifer Nielsen, at the U.S. Geological Survey Alaska Biological Science Center, Anchorage, continued the microsatellite analysis of the steelhead tissue samples that were collected from Idaho streams in 2000. A total of 2,018 samples from 40 populations were analyzed. The analysis of the remaining 39 populations is continuing.
OMV: A simplified mathematical model of the orbital maneuvering vehicle
NASA Technical Reports Server (NTRS)
Teoh, W.
1984-01-01
A model of the orbital maneuvering vehicle (OMV) is presented which contains several simplifications. A set of hand-controller signals may be used to control the motion of the OMV. Model verification is carried out using a sequence of tests in which the dynamic variables generated by the model are compared, whenever possible, with the corresponding analytical variables. The results of the tests show conclusively that the present model behaves correctly. Further, this model interfaces properly with the state vector transformation module (SVX) developed previously. Correct command sentence sequences are generated by the OMV and SVX system, and these command sequences can be used to drive the flat-floor simulation system at MSFC.
On the accuracy of Whitham's method. [for steady ideal gas flow past cones
NASA Technical Reports Server (NTRS)
Zahalak, G. I.; Myers, M. K.
1974-01-01
The steady flow of an ideal gas past a conical body is studied by the method of matched asymptotic expansions and by Whitham's method in order to assess the accuracy of the latter. It is found that while Whitham's method does not yield a correct asymptotic representation of the perturbation field to second order in regions where the flow ahead of the Mach cone of the apex is disturbed, it does correctly predict the changes of the second-order perturbation quantities across a shock (the first-order shock strength). The results of the analysis are illustrated by a special case of a flat, rectangular plate at incidence.
Electron-beam lithography data preparation based on multithreading MGS/PROXECCO
NASA Astrophysics Data System (ADS)
Eichhorn, Hans; Lemke, Melchior; Gramss, Juergen; Buerger, B.; Baetz, Uwe; Belic, Nikola; Eisenmann, Hans
2001-04-01
This paper highlights an enhanced MGS layout data post-processor and the results of its industrial application. Besides the preparation of hierarchical GDS layout data, the processing of flat data has been drastically accelerated. Proximity correction in conjunction with the OEM version of PROXECCO proved successful for the data preparation of mask sets featuring 0.25-micrometer/0.18-micrometer integration levels.
The Aquarius Salinity Retrieval Algorithm
NASA Technical Reports Server (NTRS)
Meissner, Thomas; Wentz, Frank; Hilburn, Kyle; Lagerloef, Gary; Le Vine, David
2012-01-01
The first part of this presentation gives an overview of the Aquarius salinity retrieval algorithm. The instrument calibration [2] converts Aquarius radiometer counts into antenna temperatures (TA). The salinity retrieval algorithm converts those TA into brightness temperatures (TB) at a flat ocean surface. As a first step, contributions arising from the intrusion of solar, lunar, and galactic radiation are subtracted. The antenna pattern correction (APC) then removes the effects of cross-polarization contamination and spillover. The Aquarius radiometer measures the third Stokes parameter in addition to the vertical (v) and horizontal (h) polarizations, which allows for easy removal of ionospheric Faraday rotation. The atmospheric absorption at L-band is almost entirely due to molecular oxygen; it can be calculated from auxiliary input fields of numerical weather prediction models and then removed from the TB. The final step in the TA-to-TB conversion is the correction for the roughness of the sea surface due to wind, which is addressed in more detail in section 3. The TB of the flat ocean surface can then be matched to a salinity value using a surface emission model based on a model for the dielectric constant of sea water [3], [4] and an auxiliary field for the sea surface temperature. In the current processing, only v-pol TB are used for this last step.
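The TA-to-TB conversion described above is, in essence, a sequence of subtractive and multiplicative corrections. The sketch below is a deliberately simplified, hypothetical rendering of that chain: the operational APC is a full matrix operation across polarizations, the Faraday-rotation step (which uses the third Stokes parameter) is omitted, and all inputs are assumed to be precomputed per-channel corrections (in kelvin, or as a spillover fraction).

```python
def ta_to_flat_surface_tb(ta_v, ta_h, celestial_tb, spillover,
                          atmosphere_tb, roughness_tb):
    """Toy version of the Aquarius TA -> flat-surface TB chain.

    Hypothetical function for illustration; inputs are assumed
    precomputed corrections, not radiative-transfer calculations.
    """
    # 1. Subtract reflected solar/lunar/galactic intrusion
    tb_v = ta_v - celestial_tb
    tb_h = ta_h - celestial_tb
    # 2. Antenna pattern correction (spillover fraction only, simplified;
    #    the real APC also removes cross-polarization contamination)
    tb_v = tb_v / (1.0 - spillover)
    tb_h = tb_h / (1.0 - spillover)
    # 3. Remove atmospheric (molecular oxygen) contribution
    tb_v -= atmosphere_tb
    tb_h -= atmosphere_tb
    # 4. Remove wind-driven surface-roughness excess emission
    tb_v -= roughness_tb
    tb_h -= roughness_tb
    return tb_v, tb_h
```

The design point worth noting is the ordering: each correction is defined at the stage where its physics applies, so antenna-level effects (intrusion, spillover) are removed before surface-level effects (atmosphere, roughness).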
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2013-06-27
This Closure Report (CR) presents information supporting closure of Corrective Action Unit (CAU) 104, Area 7 Yucca Flat Atmospheric Test Sites, and provides documentation supporting the completed corrective actions and confirmation that closure objectives for CAU 104 were met. This CR complies with the requirements of the Federal Facility Agreement and Consent Order (FFACO) that was agreed to by the State of Nevada; the U.S. Department of Energy (DOE), Environmental Management; the U.S. Department of Defense; and DOE, Legacy Management. CAU 104 consists of the following 15 Corrective Action Sites (CASs), located in Area 7 of the Nevada National Security Site: · CAS 07-23-03, Atmospheric Test Site T-7C · CAS 07-23-04, Atmospheric Test Site T7-1 · CAS 07-23-05, Atmospheric Test Site · CAS 07-23-06, Atmospheric Test Site T7-5a · CAS 07-23-07, Atmospheric Test Site - Dog (T-S) · CAS 07-23-08, Atmospheric Test Site - Baker (T-S) · CAS 07-23-09, Atmospheric Test Site - Charlie (T-S) · CAS 07-23-10, Atmospheric Test Site - Dixie · CAS 07-23-11, Atmospheric Test Site - Dixie · CAS 07-23-12, Atmospheric Test Site - Charlie (Bus) · CAS 07-23-13, Atmospheric Test Site - Baker (Buster) · CAS 07-23-14, Atmospheric Test Site - Ruth · CAS 07-23-15, Atmospheric Test Site T7-4 · CAS 07-23-16, Atmospheric Test Site B7-b · CAS 07-23-17, Atmospheric Test Site - Climax. Closure activities began in October 2012 and were completed in April 2013. Activities were conducted according to the Corrective Action Decision Document/Corrective Action Plan for CAU 104. The corrective actions included No Further Action and Clean Closure. Closure activities generated sanitary waste, mixed waste, and recyclable material. Some wastes exceeded land disposal limits and required treatment prior to disposal. Other wastes met land disposal restrictions and were disposed of in appropriate onsite landfills. The U.S. Department of Energy, National Nuclear Security Administration Nevada Field Office (NNSA/NFO) requests the following: · A Notice of Completion from the Nevada Division of Environmental Protection to NNSA/NFO for closure of CAU 104 · The transfer of CAU 104 from Appendix III to Appendix IV, Closed Corrective Action Units, of the FFACO
Quantum Hall states and conformal field theory on a singular surface
NASA Astrophysics Data System (ADS)
Can, T.; Wiegmann, P.
2017-12-01
In Can et al (2016 Phys. Rev. Lett. 117), quantum Hall states on singular surfaces were shown to possess an emergent conformal symmetry. In this paper, we develop this idea further and flesh out details on the emergent conformal symmetry in holomorphic adiabatic states, which we define in the paper. We highlight the connection between the universal features of geometric transport of quantum Hall states and holomorphic dimension of primary fields in conformal field theory. In parallel we compute the universal finite-size corrections to the free energy of a critical system on a hyperbolic sphere with conical and cusp singularities, thus extending the result of Cardy and Peschel for critical systems on a flat cone (Cardy and Peschel 1988 Nucl. Phys. B 300 377-92), and the known results for critical systems on polyhedra and flat branched Riemann surfaces.
Yamauchi, Kazuto; Yamamura, Kazuya; Mimura, Hidekazu; Sano, Yasuhisa; Saito, Akira; Endo, Katsuyoshi; Souvorov, Alexei; Yabashi, Makina; Tamasaku, Kenji; Ishikawa, Tetsuya; Mori, Yuzo
2005-11-10
The intensity flatness and wavefront shape in a coherent hard-x-ray beam totally reflected by flat mirrors that have surface bumps modeled by Gaussian functions were investigated by use of a wave-optical simulation code. Simulated results revealed the necessity for peak-to-valley height accuracy of better than 1 nm at a lateral resolution near 0.1 mm to remove high-contrast interference fringes and appreciable wavefront phase errors. Three mirrors that had different surface qualities were tested at the 1 km-long beam line at the SPring-8/Japan Synchrotron Radiation Research Institute. Interference fringes faded when the surface figure was corrected below the subnanometer level to a spatial resolution close to 0.1 mm, as indicated by the simulated results.
NASA Technical Reports Server (NTRS)
Manro, M. E.; Manning, K. J. R.; Hallstaff, T. H.; Rogers, J. T.
1975-01-01
A wind tunnel test of an arrow-wing-body configuration consisting of flat and twisted wings, as well as a variety of leading- and trailing-edge control surface deflections, was conducted at Mach numbers from 0.4 to 1.1 to provide an experimental pressure data base for comparison with theoretical methods. Theory-to-experiment comparisons of detailed pressure distributions were made using current state-of-the-art attached and separated flow methods. The purpose of these comparisons was to delineate conditions under which these theories are valid for both flat and twisted wings and to explore the use of empirical methods to correct the theoretical methods where theory is deficient.
Time domain numerical calculations of unsteady vortical flows about a flat plate airfoil
NASA Technical Reports Server (NTRS)
Hariharan, S. I.; Yu, Ping; Scott, J. R.
1989-01-01
A time domain numerical scheme is developed to solve for the unsteady flow about a flat plate airfoil due to imposed upstream, small amplitude, transverse velocity perturbations. The governing equation for the resulting unsteady potential is a homogeneous, constant coefficient, convective wave equation. Accurate solution of the problem requires the development of approximate boundary conditions which correctly model the physics of the unsteady flow in the far field. A uniformly valid far field boundary condition is developed, and numerical results are presented using this condition. The stability of the scheme is discussed, and the stability restriction for the scheme is established as a function of the Mach number. Finally, comparisons are made with the frequency domain calculation by Scott and Atassi, and the relative strengths and weaknesses of each approach are assessed.
Bayesian correction of H(z) data uncertainties
NASA Astrophysics Data System (ADS)
Jesus, J. F.; Gregório, T. M.; Andrade-Oliveira, F.; Valentim, R.; Matos, C. A. O.
2018-07-01
We compile 41 H(z) data points from the literature and use them to constrain OΛCDM and flat ΛCDM parameters. We show that the available H(z) data suffer from overestimated uncertainties, and we propose a Bayesian method to reduce them. As a result of this method, using H(z) only, we find, in the context of OΛCDM, H0 = 69.5 ± 2.5 km s-1 Mpc-1, Ωm = 0.242 ± 0.036, and Ω_Λ = 0.68 ± 0.14. In the context of the flat ΛCDM model, we find H0 = 70.4 ± 1.2 km s-1 Mpc-1 and Ωm = 0.256 ± 0.014. This corresponds to an uncertainty reduction of up to ≈30 per cent compared to the uncorrected analysis in both cases.
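For the flat ΛCDM constraints quoted above, the model being fit is H(z) = H0·sqrt(Ωm(1+z)³ + 1 − Ωm). A minimal sketch of the model and the chi-square statistic commonly used for such constraints follows; the function names are illustrative, and the paper's Bayesian uncertainty-correction method is not reproduced here.

```python
import numpy as np

def hz_flat_lcdm(z, h0, omega_m):
    """Expansion rate in flat LCDM: H(z) = H0 * sqrt(Om*(1+z)^3 + 1 - Om)."""
    return h0 * np.sqrt(omega_m * (1.0 + z)**3 + 1.0 - omega_m)

def chi2(z, hz_obs, sigma, h0, omega_m):
    """Standard chi-square comparing observed H(z) with the flat-LCDM model,
    weighted by the quoted (possibly overestimated) uncertainties."""
    residuals = (hz_obs - hz_flat_lcdm(z, h0, omega_m)) / sigma
    return np.sum(residuals**2)
```

Minimizing this chi-square over (H0, Ωm) yields the best-fit parameters; if the quoted sigmas are inflated, the resulting parameter uncertainties are inflated too, which is the motivation for the paper's Bayesian correction.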
Fire Impacts on the Mojave Desert Ecosystem: Literature Review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fenstermaker Lynn
2012-01-01
The Nevada National Security Site (NNSS) is located within the Mojave Desert, the driest region in North America. Over the past 15-year period, annual precipitation on the NNSS has ranged from an average of 130 millimeters (mm; 5.1 inches), with a minimum of 47 mm (1.9 inches) and a maximum of 328 mm (12.9 inches), at a Frenchman Flat location at 970 meters (m; 3,182 feet), to an average of 205 mm (8.1 inches), with a minimum of 89 mm (3.5 inches) and a maximum of 391 mm (15.4 inches), at a Pahute Mesa location at 1,986 m (6,516 feet). The combination of aridity and temperature extremes has resulted in sparsely vegetated basins (desert shrub plant communities) and moderately vegetated mountains (mixed coniferous forest plant communities); both plant density and precipitation increase with increasing elevation. Whereas some plant communities have evolved under fire regimes and are dependent upon fire for seed germination, plant communities within the Mojave Desert are not dependent on a fire regime and are therefore highly impacted by fire (Brown and Minnich, 1986; Brooks, 1999). As noted by Johansen (2003), natural range fires are not prevalent in the Mojave and Sonoran Deserts because there is not enough vegetation present (too many shrub interspaces) to sustain a fire. Fire research, and hence publications addressing fires in the Southwestern United States (U.S.), has therefore focused on forest, shrub-steppe, and grassland fires caused by both natural and anthropogenic ignition sources. In the last few decades, however, invasion of mid-elevation shrublands by non-native Bromus madritensis ssp. rubens and Bromus tectorum (Hunter, 1991) has been highly correlated with increased fire frequency (Brooks and Berry, 2006; Brooks and Matchett, 2006). Coupled with the impact of climate change, which has already been shown to play a role in increased forest fires (Westerling et al., 2006), it is likely that fire frequency will further increase in the Mojave Desert (Knapp, 1998; Smith et al., 1987; Smith et al., 2000).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geotechnical Sciences Group Bechtel Nevada
2006-01-01
A new three-dimensional hydrostratigraphic framework model for the Yucca Flat-Climax Mine Corrective Action Unit was completed in 2005. The model area includes Yucca Flat and Climax Mine, former nuclear testing areas at the Nevada Test Site, and proximal areas. The model area is approximately 1,250 square kilometers in size and is geologically complex. Yucca Flat is a topographically closed basin typical of many valleys in the Basin and Range province. Faulted and tilted blocks of Tertiary-age volcanic rocks and underlying Proterozoic and Paleozoic sedimentary rocks form low ranges around the structural basin. During the Cretaceous Period, a granitic intrusive was emplaced at the north end of Yucca Flat. A diverse set of geological and geophysical data collected over the past 50 years was used to develop a structural model and hydrostratigraphic system for the basin. These were integrated using EarthVision software to develop the three-dimensional hydrostratigraphic framework model. Fifty-six stratigraphic units in the model area were grouped into 25 hydrostratigraphic units based on each unit's propensity toward aquifer or aquitard characteristics. The authors organized the alluvial section into 3 hydrostratigraphic units, including 2 aquifers and 1 confining unit. The volcanic units in the model area are organized into 13 hydrostratigraphic units that include 8 aquifers and 5 confining units. The underlying pre-Tertiary rocks are divided into 7 hydrostratigraphic units, including 3 aquifers and 4 confining units. Other units include 1 Tertiary-age sedimentary confining unit and 1 Mesozoic-age granitic confining unit. The model depicts the thickness, extent, and geometric relationships of these hydrostratigraphic units ("layers" in the model) along with the major structural features (i.e., faults). The model incorporates 178 high-angle normal faults of Tertiary age and 2 low-angle thrust faults of Mesozoic age. The complexity of the model area and the non-uniqueness of some of the interpretations incorporated into the base model made it necessary to formulate alternative interpretations for some of the major features in the model. Five of these alternatives were developed so they could be modeled in the same fashion as the base model. This work was done for the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office in support of the Underground Test Area subproject of the Environmental Restoration Project.
Freshwater-Brine Mixing Zone Hydrodynamics in Salt Flats (Salar de Atacama)
NASA Astrophysics Data System (ADS)
Marazuela, M. A.; Vázquez-Suñé, E.; Custodio, E.; Palma, T.; García-Gil, A.
2017-12-01
The increasing demand for strategic minerals used in medicines and batteries requires detailed knowledge of the freshwater-brine interface of salt flats to make their exploitation efficient. The interface zone is the result of a physical balance between recharged and evaporated water. The sharp-interface approach assumes the immiscibility of the fluids and thus neglects mixing between them. For miscible fluids, it is therefore more accurate, and often necessary, to use the mixing-zone concept, which results from the dynamic equilibrium of flowing freshwater and brine. In this study, we consider two- and three-dimensional scale approaches for the management of the mixing zone. The two-dimensional approach is used to understand the dynamics and characteristics of the salt flat mixing zone, especially in the Salar de Atacama (Atacama salt flat) case. Using this model, we analyze and quantify the effects of the aquitards on the mixing-zone geometry. However, understanding the complex physical processes occurring in salt flats and managing these environments requires the adoption of three-dimensional regional-scale numerical models. Models that take into account the effects of variable density represent the best management tool, but they require large computational resources, especially in the three-dimensional case. To avoid these computational limitations in the modeling of salt flats and their valuable ecosystems, we propose a three-step methodology consisting of: (1) collection, validation, and interpretation of the hydrogeochemical data; (2) identification and three-dimensional mapping of the mixing zone at the land surface and at depth; and (3) application of a water head correction to the freshwater and mixed-water heads in order to compensate for the density variations and transform them to brine water heads.
Finally, an evaluation of the sensitivity of the mixing zone to anthropogenic and climate changes is included.
Apparatus for and method of correcting for aberrations in a light beam
Sawicki, Richard H.
1996-01-01
A technique for adjustably correcting for aberrations in a light beam is disclosed herein. This technique utilizes first means which defines a flat, circular light reflecting surface having opposite reinforced circumferential edges and a central post and which is resiliently distortable, to a limited extent, into different concave and/or convex curvatures, which may be Gaussian-like, about the central axis, and second means acting on the first means for adjustably distorting the light reflecting surface into a particular selected one of the different curvatures depending upon the aberrations to be corrected for and for fixedly maintaining the curvature selected. In the embodiment disclosed, the light reflecting surface is adjustably distorted into the selected curvature by application of particular axial moments to the central post on the opposite side from the light reflecting surface and lateral moments to the circumference of the reflecting surface.
Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; Horie, Yu; Han, Seunghoon; Faraon, Andrei
2016-01-01
Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision. PMID:27892454
NASA Astrophysics Data System (ADS)
Liu, Yu; He, Chuanbo
2015-12-01
In this discussion, corrections to errors found in the derivations and the numerical code of a recent analytical study (Zhou et al., Journal of Sound and Vibration 333 (7) (2014) 1972-1990) on sound transmission through double-walled cylindrical shells lined with poroelastic material are presented and discussed, along with the further effect of an external mean flow on the transmission loss. After applying the corrections, the locations of the characteristic frequencies of thin shells remain unchanged, as do the TL results above the ring frequency, where BU and UU remain the best configurations in sound insulation performance. In the low-frequency region below the ring frequency, however, the corrections attenuate the TL amplitude significantly for BU and UU, and hence the BB configuration exhibits the best performance, which is consistent with previous observations for flat sandwich panels.
NASA Astrophysics Data System (ADS)
Cheriton, O. M.; Storlazzi, C. D.; Rosenberger, K. J.; Quataert, E.; van Dongeren, A.
2014-12-01
The Republic of the Marshall Islands comprises 1,156 islands on 29 low-lying atolls with a mean elevation of 2 m that are susceptible to sea-level rise and often subjected to overwash during large wave events. A 6-month deployment of wave and tide gauges across two shore-normal sections of north-facing coral reef on Roi-Namur Island on Kwajalein Atoll was conducted during 2013-2014 to quantify wave dynamics and wave-driven water levels on the fringing coral reef. Wave heights and periods on the reef flat were strongly correlated with water levels. On the fore reef, the majority of wave energy was concentrated in the incident band (5-25 s); due to breaking at the reef crest, however, wave energy over the reef flat was dominated by infragravity-band (25-250 s) motions. Two large wave events, with heights of 6-8 m and periods of 15 s over the fore reef, were observed. During these events, infragravity-band wave heights exceeded incident-band wave heights, and approximately 1.0 m of set-up was established over the innermost reef flat. This set-up enabled the propagation of large waves across the reef flat, reaching maximum heights of nearly 2 m on the innermost reef flat adjacent to the toe of the beach. XBeach models of the instrument transects were able to replicate the incident waves, infragravity waves, and wave-driven set-up across the reef when the hydrodynamic roughness of the reef was correctly parameterized. These events led to more than 3 m of wave-driven run-up and inundation of the island, driving substantial morphological change to the beach face.
Kory Westlund, Jacqueline M.; Jeong, Sooyeon; Park, Hae W.; Ronfard, Samuel; Adhikari, Aradhana; Harris, Paul L.; DeSteno, David; Breazeal, Cynthia L.
2017-01-01
Prior research with preschool children has established that dialogic or active book reading is an effective method for expanding young children’s vocabulary. In this exploratory study, we asked whether similar benefits are observed when a robot engages in dialogic reading with preschoolers. Given the established effectiveness of active reading, we also asked whether this effectiveness was critically dependent on the expressive characteristics of the robot. For approximately half the children, the robot’s active reading was expressive; the robot’s voice included a wide range of intonation and emotion (Expressive). For the remaining children, the robot read and conversed with a flat voice, which sounded similar to a classic text-to-speech engine and had little dynamic range (Flat). The robot’s movements were kept constant across conditions. We performed a verification study using Amazon Mechanical Turk (AMT) to confirm that the Expressive robot was viewed as significantly more expressive, more emotional, and less passive than the Flat robot. We invited 45 preschoolers with an average age of 5 years who were either English Language Learners (ELL), bilingual, or native English speakers to engage in the reading task with the robot. The robot narrated a story from a picture book, using active reading techniques and including a set of target vocabulary words in the narration. Children were post-tested on the vocabulary words and were also asked to retell the story to a puppet. A subset of 34 children performed a second story retelling 4–6 weeks later. Children reported liking and learning from the robot a similar amount in the Expressive and Flat conditions. However, as compared to children in the Flat condition, children in the Expressive condition were more concentrated and engaged as indexed by their facial expressions; they emulated the robot’s story more in their story retells; and they told longer stories during their delayed retelling. 
Furthermore, children who responded to the robot’s active reading questions were more likely to correctly identify the target vocabulary words in the Expressive condition than in the Flat condition. Taken together, these results suggest that children may benefit more from the expressive robot than from the flat robot. PMID:28638330
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Y; Sharp, G
2014-06-15
Purpose: Gain calibration for X-ray imaging systems with movable flat panel detectors (FPD) and intrinsic crosshairs is a challenge due to the geometry dependence of the heel effect and crosshair artifact. This study aims to develop a gain correction method for such systems by implementing the multi-acquisition gain image correction (MAGIC) technique. Methods: Raw flat-field images containing crosshair shadows and heel effect were acquired in 4 different FPD positions with fixed exposure parameters. The crosshair region was automatically detected and substituted with interpolated values from nearby exposed regions, generating a conventional single-image gain-map for each FPD position. Large kernel-based correction was applied to these images to correct the heel effect. A mask filter was used to invalidate the original crosshair regions previously filled with the interpolated values. A final, seamless gain-map was created from the processed images by either the sequential filling (SF) or selective averaging (SA) techniques developed in this study. Quantitative evaluation was performed based on the detective quantum efficiency improvement factor (DQEIF) for gain-corrected images using the conventional and proposed techniques. Results: Qualitatively, the MAGIC technique was more effective in eliminating crosshair artifacts than the conventional single-image method. The mean DQEIF over the frequency range 0.5 to 3.5 mm-1 was 1.09±0.06, 2.46±0.32, and 3.34±0.36 in the crosshair-artifact region and 2.35±0.31, 2.33±0.31, and 3.09±0.34 in the normal region for the conventional, MAGIC-SF, and MAGIC-SA techniques, respectively. Conclusion: The introduced MAGIC technique is appropriate for gain calibration of an imaging system with a moving FPD and an intrinsic crosshair. It outperformed a conventional single-image technique by successfully reducing residual crosshair artifacts and providing higher image quality with respect to DQE.
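The selective averaging (SA) idea can be illustrated with a short sketch. This is a minimal stand-in, not the authors' implementation: the crosshair masks and the heel-effect correction are assumed already available, and only the per-pixel combination of multi-position flat fields is shown.

```python
import numpy as np

def selective_average_gain_map(flats, masks):
    """Combine flat-field images taken at different detector positions.

    flats: list of 2-D arrays (raw flat-field images)
    masks: list of boolean arrays, True where the crosshair shadow
           invalidates the pixel in that acquisition.
    At each pixel, only acquisitions not covered by the crosshair
    contribute to the average (the 'selective averaging' idea).
    """
    flats = np.asarray(flats, dtype=float)
    valid = ~np.asarray(masks)                      # True = usable pixel
    weight = valid.sum(axis=0)                      # acquisitions per pixel
    summed = np.where(valid, flats, 0.0).sum(axis=0)
    gain = np.divide(summed, weight,
                     out=np.zeros_like(summed), where=weight > 0)
    return gain / gain[gain > 0].mean()             # normalise to mean 1

# toy example: two acquisitions with the crosshair in different columns
f1 = np.full((4, 4), 100.0); m1 = np.zeros((4, 4), bool); m1[:, 1] = True
f2 = np.full((4, 4), 100.0); m2 = np.zeros((4, 4), bool); m2[:, 2] = True
g = selective_average_gain_map([f1, f2], [m1, m2])
```

Because every pixel is seen crosshair-free in at least one position, the resulting gain map has no masked gaps, which is the point of acquiring flats at multiple FPD positions.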
Visual acuity outcomes in eyes with flat corneas after PRK.
Varssano, David; Waisbourd, Michael; Minkev, Liza; Sela, Tzahi; Neudorfer, Meira; Binder, Perry S
2013-06-01
To evaluate the impact of corneal curvatures less than 35 diopters (D) after photorefractive keratectomy (PRK) on visual acuity outcomes. Visual acuity outcomes of 5,410 eyes that underwent PRK from January 2006 to November 2010 were retrospectively analyzed for the impact of postoperative corneal curvatures on visual outcomes. All procedures were performed on a single platform (Allegretto 200Hz excimer laser; Alcon Laboratories, Inc., Irvine, CA). Main outcome measures were postoperative corrected distance visual acuity (CDVA) and loss of CDVA. Corneas with a measured or a calculated postoperative flat meridian less than 35 D and those with a measured postoperative steep meridian less than 35 D had worse postoperative CDVA than corneas with meridians of either 35 D or more (P ≤ .021). However, the preoperative CDVA was worse in the flatter curvatures in all comparisons performed (P ≤ .024). Consequently, the measured or calculated meridian curvature had no effect on CDVA loss (P ≥ .074). Postoperative corneal keratometry values (flat and steep meridians) less than 35 D did not have a predictive effect on the risk of losing visual acuity following myopic PRK performed on the Allegretto 200Hz excimer laser. Copyright 2013, SLACK Incorporated.
Kolditz, Daniel; Meyer, Michael; Kyriakou, Yiannis; Kalender, Willi A
2011-01-07
In C-arm-based flat-detector computed tomography (FDCT) it frequently happens that the patient exceeds the scan field of view (SFOV) in the transaxial direction because of the limited detector size. This results in data truncation and CT image artefacts. In this work three truncation correction approaches for extended field-of-view (EFOV) reconstructions have been implemented and evaluated. An FDCT-based method estimates the patient size and shape from the truncated projections by fitting an elliptical model to the raw data in order to apply an extrapolation. In a camera-based approach the patient is sampled with an optical tracking system and this information is used to apply an extrapolation. In a CT-based method the projections are completed by artificial projection data obtained from the CT data acquired in an earlier exam. For all methods the extended projections are filtered and backprojected with a standard Feldkamp-type algorithm. Quantitative evaluations have been performed by simulations of voxelized phantoms on the basis of the root mean square deviation and a quality factor Q (Q = 1 represents the ideal correction). Measurements with a C-arm FDCT system have been used to validate the simulations and to investigate the practical applicability using anthropomorphic phantoms which caused truncation in all projections. The proposed approaches enlarged the FOV to cover wider patient cross-sections. Thus, image quality inside and outside the SFOV has been improved. Best results have been obtained using the CT-based method, followed by the camera-based and the FDCT-based truncation correction. For simulations, quality factors up to 0.98 have been achieved. Truncation-induced cupping artefacts have been reduced, e.g., from 218% to less than 1% for the measurements. The proposed truncation correction approaches for EFOV reconstructions are an effective way to ensure accurate CT values inside the SFOV and to recover peripheral information outside the SFOV.
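The extrapolation idea behind such truncation corrections can be sketched as follows. This is a deliberately simplified stand-in for the FDCT-based method: instead of fitting an elliptical patient model to the raw data, a fixed-width cosine taper carries each truncated row smoothly to zero (the pad width is a hypothetical parameter that the elliptical fit would normally supply).

```python
import numpy as np

def extend_row(row, pad):
    """Extrapolate one truncated projection row on both sides.

    A half-cosine taper carries the edge value smoothly down to zero
    over 'pad' samples, suppressing the step discontinuity that causes
    cupping artefacts after filtered backprojection.
    """
    taper = 0.5 * (1 + np.cos(np.linspace(0, np.pi, pad)))  # 1 -> 0
    left = row[0] * taper[::-1]                              # 0 -> edge value
    right = row[-1] * taper                                  # edge value -> 0
    return np.concatenate([left, row, right])

row = np.full(8, 5.0)            # truncated row: attenuation hits the edge
ext = extend_row(row, pad=4)     # extended row for EFOV reconstruction
```

A full implementation would choose the pad width per projection from the estimated patient extent and then filter and backproject the extended projections with the standard Feldkamp algorithm, as described in the abstract.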
Comparison and Analysis of Geometric Correction Models of Spaceborne SAR
Jiang, Weihao; Yu, Anxi; Dong, Zhen; Wang, Qingsong
2016-01-01
Following the development of synthetic aperture radar (SAR), SAR images have become increasingly common. Many researchers have conducted extensive studies on geolocation models, but little work has been done on the models available for the geometric correction of SAR images over different terrain. To address the terrain issue, four different models were compared and are described in this paper: a rigorous range-Doppler (RD) model, a rational polynomial coefficients (RPC) model, a revised polynomial (PM) model, and an elevation derivation (EDM) model. The results of comparisons of the geolocation capabilities of the models show that a proper model for a SAR image of a specific terrain can be determined. A solution table was obtained to recommend a suitable model to users. Three TerraSAR-X images, two ALOS-PALSAR images, and one Envisat-ASAR image were used for the experiment, including flat-terrain and mountain-terrain SAR images as well as two large-area images. Geolocation accuracies of the models for the different terrain SAR images were computed and analyzed. The comparisons show that the RD model was accurate but the least efficient; therefore, it is not the ideal model for real-time implementations. The RPC model is sufficiently accurate and efficient for the geometric correction of SAR images of flat terrain, with precision below 0.001 pixels. The EDM model is suitable for the geolocation of SAR images of mountainous terrain, and its precision can reach 0.007 pixels. Although the PM model does not produce results as precise as the other models, its efficiency is excellent and its potential should not be underestimated. With respect to the geometric correction of SAR images over large areas, the EDM model achieves higher accuracy, below one pixel, whereas the RPC model consumes one third of the time of the EDM model. PMID:27347973
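The Doppler equation at the heart of the rigorous range-Doppler (RD) model can be isolated in a small sketch. The straight-line sensor track, the numbers, and the bisection solver are all illustrative assumptions; a real RD geolocation solver couples this equation with the range equation and an Earth-ellipsoid model.

```python
import numpy as np

def zero_doppler_time(sensor_pos, sensor_vel, target, t_lo, t_hi, iters=60):
    """Solve the Doppler equation of the range-Doppler model.

    For a sensor on a straight track p(t) = p0 + v*t, find the azimuth
    time at which the Doppler centroid is zero, i.e. v . (p(t) - target) = 0,
    by bisection (assumes the sign of f differs at t_lo and t_hi).
    """
    def f(t):
        return float(np.dot(sensor_vel, sensor_pos + sensor_vel * t - target))
    for _ in range(iters):
        mid = 0.5 * (t_lo + t_hi)
        if f(t_lo) * f(mid) <= 0:
            t_hi = mid
        else:
            t_lo = mid
    return 0.5 * (t_lo + t_hi)

p0 = np.array([0.0, 0.0, 700e3])        # hypothetical sensor start (m)
v = np.array([7500.0, 0.0, 0.0])        # along-track velocity (m/s)
tgt = np.array([15000.0, 50e3, 0.0])    # hypothetical ground target
t = zero_doppler_time(p0, v, tgt, 0.0, 10.0)
```

For this straight track the zero-Doppler condition reduces to the sensor being abeam of the target, so the solver recovers the crossing time directly; the slant range at that time then fixes the target's position on the chosen terrain model.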
Holographic dark energy with varying gravitational constant in Hořava-Lifshitz cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Setare, M.R.; Jamil, Mubasher, E-mail: rezakord@ipm.ir, E-mail: mjamil@camp.nust.edu.pk
2010-02-01
We investigate the holographic dark energy scenario with a varying gravitational constant in a flat background in the context of Hořava-Lifshitz gravity. We extract the exact differential equation determining the evolution of the dark energy density parameter, which includes a G-variation term. We also discuss a cosmological implication of our work by evaluating the dark energy equation of state for low redshifts, including varying-G corrections.
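The structure of such a model can be sketched schematically (these are generic holographic dark energy relations, not the paper's exact Hořava-Lifshitz expressions):

```latex
\rho_\Lambda = \frac{3c^2}{8\pi G(t)\,L^2}, \qquad
H^2 = \frac{8\pi G(t)}{3}\,\left(\rho_m + \rho_\Lambda\right),
```

where $L$ is the infrared cutoff (typically the future event horizon) and $c$ is the holographic parameter. Differentiating $\rho_\Lambda$ with a time-dependent $G$ introduces a term proportional to $\dot G/G$ into the evolution equation for $\Omega_\Lambda$, which is the G-variation term mentioned in the abstract.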
Study Of Pre-Shaped Membrane Mirrors And Electrostatic Mirrors With Nonlinear-Optical Correction
2002-01-01
mirrors have been manufactured of glass-like material Zerodur with very low coefficient of linear expansion. They have a more light cellular construction...primary and flat secondary mirrors are both segmented ones. In the case of the primary mirror made of traditional materials such as Zerodur or fused...FINAL REPORT ISTC Project #2103p “Study of Pre-Shaped Membrane Mirrors and Electrostatic Mirrors with Nonlinear-Optical Correction” Manager
[Intoxication with botulinum toxin].
Chudzicka, Aleksandra
2015-09-01
Botulinum toxin is an exotoxin produced by the Gram-positive bacterium Clostridium botulinum. It is among the most potent toxins known. The 3 main clinical presentations of botulism are foodborne botulism, infant botulism, and wound botulism. The main symptom of intoxication is flaccid muscle paralysis. Treatment consists of supportive care and administration of antitoxin. For prevention, the correct preparation of canned food is most important. Botulinum toxin is also recognized as a potential biological weapon. © 2015 MEDPRESS.
Magnetotelluric Data, Across Quartzite Ridge, Nevada Test Site, Nevada
DOE Office of Scientific and Technical Information (OSTI.GOV)
J.M. Williams; B.D. Rodriguez; T.H. Asch
2005-11-23
Nuclear weapons are integral to the defense of the United States. The U.S. Department of Energy, as the steward of these devices, must continue to gauge the efficacy of the individual weapons. This could be accomplished by occasional testing at the Nevada Test Site (NTS) in Nevada, northwest of Las Vegas. Yucca Flat Basin is one of the testing areas at the NTS. One issue of concern is the nature of the somewhat poorly constrained pre-Tertiary geology and its effects on ground-water flow in the area subsequent to a nuclear test. Ground-water modelers would like to know more about the hydrostratigraphy and geologic structure to support a hydrostratigraphic framework model that is under development for the Yucca Flat Corrective Action Unit (CAU). During 2003, the U.S. Geological Survey (USGS) collected and processed Magnetotelluric (MT) and Audio-magnetotelluric (AMT) data at the Nevada Test Site in and near Yucca Flat to help characterize this pre-Tertiary geology. That work will help to define the character, thickness, and lateral extent of pre-Tertiary confining units. In particular, a major goal has been to define the upper clastic confining unit (UCCU) in the Yucca Flat area. Interpretation will include a three-dimensional (3-D) character analysis and two-dimensional (2-D) resistivity model. The purpose of this report is to release the MT soundings across Quartzite Ridge, Profiles 5, 6a, and 6b, as shown in Figure 1. No interpretation of the data is included here.
DOE Office of Scientific and Technical Information (OSTI.GOV)
DiSalvo, Rick; Surovchak, Scott; Spreng, Carl
2013-07-01
Cleanup and closure of DOE's Rocky Flats Site in Colorado, which was placed on the CERCLA National Priority List in 1989, was accomplished under CERCLA, RCRA, and the Colorado Hazardous Waste Act (CHWA). The physical cleanup work was completed in late 2005 and all buildings and other structures that composed the Rocky Flats industrial complex were removed from the surface, but remnants remain in the subsurface. Other remaining features include two landfills closed in place with covers, four groundwater treatment systems, and surface water and groundwater monitoring systems. Under the 2006 Corrective Action Decision/Record of Decision for the Rocky Flats Plant (US DOE) Peripheral Operable Unit and the Central Operable Unit (CAD/ROD), the response actions selected for the Central Operable Unit (OU) are institutional controls (ICs), physical controls, and continued monitoring and maintenance. The objectives of these ICs were to prevent unacceptable exposure to remaining subsurface contamination, to prevent contaminants from mobilizing to surface water, and to prevent interference with the proper functioning of the engineered components of the remedy. A 2011 amendment of the 2006 CAD/ROD clarified the ICs to prevent misinterpretations that would prohibit work to manage and maintain the Central OU property. The 2011 amendment incorporated a protocol for a Soil Disturbance Review Plan for work subject to ICs that requires approval from the State and public notification by DOE prior to conducting approved soil-disturbing work. (authors)
Three-dimensional surface profile intensity correction for spatially modulated imaging
NASA Astrophysics Data System (ADS)
Gioux, Sylvain; Mazhar, Amaan; Cuccia, David J.; Durkin, Anthony J.; Tromberg, Bruce J.; Frangioni, John V.
2009-05-01
We describe a noncontact profile correction technique for quantitative, wide-field optical measurement of tissue absorption (μa) and reduced scattering (μs') coefficients, based on geometric correction of the sample's Lambertian (diffuse) reflectance intensity. Because the projection of structured light onto an object is the basis for both phase-shifting profilometry and modulated imaging, we were able to develop a single instrument capable of performing both techniques. In so doing, the surface of the three-dimensional object could be acquired and used to extract the object's optical properties. The optical properties of flat polydimethylsiloxane (silicone) phantoms with homogenous tissue-like optical properties were extracted, with and without profilometry correction, after vertical translation and tilting of the phantoms at various angles. Objects having a complex shape, including a hemispheric silicone phantom and human fingers, were acquired and similarly processed, with vascular constriction of a finger being readily detectable through changes in its optical properties. Using profilometry correction, the accuracy of extracted absorption and reduced scattering coefficients improved from two- to ten-fold for surfaces having height variations as much as 3 cm and tilt angles as high as 40 deg. These data lay the foundation for employing structured light for quantitative imaging during surgery.
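The geometric part of such a correction reduces, for an ideal Lambertian reflector, to a cosine factor. The sketch below shows only that factor; the height-dependent terms and the full profilometry-based model of the paper are omitted, and the illumination direction is an assumed constant.

```python
import numpy as np

def lambertian_correction(reflectance, surface_normals, illum_dir=(0.0, 0.0, 1.0)):
    """Correct measured diffuse reflectance for surface orientation.

    For a Lambertian surface the detected intensity scales with the
    cosine of the angle between the surface normal and the illumination
    direction; dividing by that cosine maps a tilted patch back to the
    flat-calibration geometry.
    """
    n = np.asarray(surface_normals, dtype=float)
    n = n / np.linalg.norm(n, axis=-1, keepdims=True)
    cos_t = np.clip(n @ np.asarray(illum_dir, dtype=float), 1e-3, 1.0)
    return reflectance / cos_t

# a patch tilted 40 degrees appears dimmer by cos(40 deg); correction restores it
theta = np.deg2rad(40.0)
normals = np.array([[0.0, np.sin(theta), np.cos(theta)]])
measured = 0.8 * np.cos(theta)            # what the camera would record
corrected = lambertian_correction(np.array([measured]), normals)
```

In the instrument described above, the per-pixel surface normals would come from the phase-shifting profilometry measurement made with the same structured-light projector.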
Instant preheating in quintessential inflation with α -attractors
NASA Astrophysics Data System (ADS)
Dimopoulos, Konstantinos; Wood, Leonora Donaldson; Owen, Charlotte
2018-03-01
We investigate a compelling model of quintessential inflation in the context of α -attractors, which naturally result in a scalar potential featuring two flat regions; the inflationary plateau and the quintessential tail. The "asymptotic freedom" of α -attractors, near the kinetic poles, suppresses radiative corrections and interactions, which would otherwise threaten to lift the flatness of the quintessential tail and cause a 5th-force problem respectively. Since this is a nonoscillatory inflation model, we reheat the Universe through instant preheating. The parameter space is constrained by both inflation and dark energy requirements. We find an excellent correlation between the inflationary observables and model predictions, in agreement with the α -attractors setup. We also obtain successful quintessence for natural values of the parameters. Our model predicts potentially sizeable tensor perturbations (at the level of 1%) and a slightly varying equation of state for dark energy, to be probed in the near future.
Modeling the VARTM Composite Manufacturing Process
NASA Technical Reports Server (NTRS)
Song, Xiao-Lan; Loos, Alfred C.; Grimsley, Brian W.; Cano, Roberto J.; Hubert, Pascal
2004-01-01
A comprehensive simulation model of the Vacuum Assisted Resin Transfer Molding (VARTM) composite manufacturing process has been developed. For isothermal resin infiltration, the model incorporates submodels which describe cure of the resin and changes in resin viscosity due to cure, resin flow through the reinforcement preform and distribution medium, and compaction of the preform during the infiltration. The accuracy of the model was validated by measuring the flow patterns during resin infiltration of flat preforms. The modeling software was used to evaluate the effects of the distribution medium on resin infiltration of a flat preform. Different distribution medium configurations were examined using the model and the results were compared with data collected during resin infiltration of a carbon fabric preform. The results of the simulations show that the approach used to model the distribution medium can significantly affect the predicted resin infiltration times. Resin infiltration into the preform can be accurately predicted only when the distribution medium is modeled correctly.
Measurement of the noise power spectrum in digital x-ray detectors
NASA Astrophysics Data System (ADS)
Aufrichtig, Richard; Su, Yu; Cheng, Yu; Granfors, Paul R.
2001-06-01
The noise power spectrum, NPS, is a key imaging property of a detector and one of the principal quantities needed to compute the detective quantum efficiency. NPS is measured by computing the Fourier transform of flat field images. Different measurement methods are investigated and evaluated with images obtained from an amorphous silicon flat panel x-ray imaging detector. First, the influence of fixed pattern structures is minimized by appropriate background corrections. For a given data set the effect of using different types of windowing functions is studied. Also, different window sizes and amounts of overlap between windows are evaluated and compared to theoretical predictions. Results indicate that measurement error is minimized when applying overlapping Hanning windows to the raw data. Finally it is shown that radial averaging is a useful method of reducing the two-dimensional noise power spectrum to one dimension.
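The measurement recipe the abstract settles on (background correction, overlapping Hanning windows, ensemble averaging) can be sketched as follows; the ROI size, step, and normalization convention here are illustrative choices, not the authors' exact parameters.

```python
import numpy as np

def nps_2d(flat_images, pixel_pitch, roi=64, step=32):
    """Two-dimensional noise power spectrum from flat-field images.

    Fixed-pattern structure is removed by subtracting the ensemble mean,
    then overlapping Hanning-windowed ROIs are Fourier transformed and
    their squared magnitudes averaged. Units: signal^2 * mm^2.
    """
    stack = np.asarray(flat_images, dtype=float)
    noise = stack - stack.mean(axis=0)              # background correction
    win = np.outer(np.hanning(roi), np.hanning(roi))
    norm = (win ** 2).sum()                         # window power normalisation
    acc, count = np.zeros((roi, roi)), 0
    for img in noise:
        for y in range(0, img.shape[0] - roi + 1, step):
            for x in range(0, img.shape[1] - roi + 1, step):
                tile = img[y:y + roi, x:x + roi] * win
                acc += np.abs(np.fft.fft2(tile)) ** 2
                count += 1
    return acc * pixel_pitch ** 2 / (norm * count)

rng = np.random.default_rng(0)
flats = rng.normal(1000.0, 10.0, size=(4, 128, 128))  # synthetic white-noise flats
nps = nps_2d(flats, pixel_pitch=0.2)
```

For uncorrelated noise the spectrum is flat, so the mean over frequency bins approximates the residual pixel variance times the pixel area; radial averaging of this 2-D result would give the 1-D NPS mentioned in the abstract.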
Battaglia, Maurizio; Peter, F.; Murray, Jessica R.
2013-01-01
This manual provides the physical and mathematical concepts for selected models used to interpret deformation measurements near active faults and volcanic centers. The emphasis is on analytical models of deformation that can be compared with data from Global Positioning System (GPS) receivers, interferometric synthetic aperture radar (InSAR), leveling surveys, tiltmeters and strainmeters. Source models include pressurized spherical, ellipsoidal, and horizontal penny-shaped geometries in an elastic, homogeneous, flat half-space. Vertical dikes and faults are described following the mathematical notation for rectangular dislocations in an elastic, homogeneous, flat half-space. All the analytical expressions were verified against numerical models developed by use of COMSOL Multiphysics, a finite element analysis software package (http://www.comsol.com). In this way, typographical errors were identified and corrected. Matlab scripts are also provided to facilitate the application of these models.
Fabrikant, I.; Karapetian, E.; Kalinin, S. V.
2017-12-09
Here, we consider the problem of an arbitrarily shaped rigid punch pressed against the boundary of a transversely isotropic half-space and interacting with an arbitrary flat crack or inclusion located in the plane parallel to the boundary. The set of governing integral equations is derived for the most general conditions, namely the presence of both normal and tangential stresses under the punch, as well as general loading of the crack faces. In order to verify the correctness of the derivations, two different methods were used to obtain the governing integral equations: a generalized method of images and utilization of the reciprocal theorem. Both methods gave the same results. The axisymmetric coaxial case of interaction between a rigid inclusion and a flat circular punch, both centered along the z-axis, is considered as an illustrative example. Most of the final results are presented in terms of elementary functions.
Hypercalibration: A Pan-STARRS1-Based Recalibration of the Sloan Digital Sky Survey Photometry
Finkbeiner, Douglas P.; Schlafly, Edward F.; Schlegel, David J.; ...
2016-05-05
In this paper, we present a recalibration of the Sloan Digital Sky Survey (SDSS) photometry with new flat fields and zero points derived from Pan-STARRS1. Using point-spread function (PSF) photometry of 60 million stars with 16 < r < 20, we derive a model of amplifier gain and flat-field corrections with per-run rms residuals of 3 millimagnitudes (mmag) in griz bands and 15 mmag in u band. The new photometric zero points are adjusted to leave the median in the Galactic north unchanged for compatibility with previous SDSS work. We also identify transient non-photometric periods in SDSS ("contrails") based on photometric deviations co-temporal in SDSS bands. Finally, the recalibrated stellar PSF photometry of SDSS and PS1 has an rms difference of {9, 7, 7, 8} mmag in griz, respectively, when averaged over 15' regions.
Runner's knowledge of their foot type: do they really know?
Hohmann, Erik; Reaburn, Peter; Imhoff, Andreas
2012-09-01
The use of correct individually selected running shoes may reduce the incidence of running injuries. However, the runner needs to be aware of their foot anatomy to ensure the "correct" footwear is chosen. The purpose of this study was to compare the individual runner's knowledge of their arch type to the arch index derived from a static footprint. We examined 92 recreational runners with a mean age of 35.4±11.4 (12-63) years. A questionnaire was used to investigate the knowledge of the runners about arch height and overpronation. A clinical examination was undertaken using defined criteria and the arch index was analysed using weight-bearing footprints. Forty-five runners (49%) identified their foot arch correctly. Eighteen of the 41 flat-arched runners (44%) identified their arch correctly. Twenty-four of the 48 normal-arched athletes (50%) identified their arch correctly. Three subjects with a high arch identified their arch correctly. Thirty-eight runners assessed themselves as overpronators; only four (11%) of these athletes were positively identified. Of the 34 athletes who did not categorize themselves as overpronators, four runners (12%) had clinical overpronation. The findings of this research suggest that runners possess poor knowledge of both their foot arch and dynamic pronation. Copyright © 2012 Elsevier Ltd. All rights reserved.
Improved nine-node shell element MITC9i with reduced distortion sensitivity
NASA Astrophysics Data System (ADS)
Wisniewski, K.; Turska, E.
2017-11-01
The 9-node quadrilateral shell element MITC9i is developed for the Reissner-Mindlin shell kinematics, the extended potential energy and Green strain. The following features of its formulation ensure an improved behavior: 1. The MITC technique is used to avoid locking, and we propose improved transformations for bending and transverse shear strains, which render that all patch tests are passed for the regular mesh, i.e. with straight element sides and middle positions of midside nodes and a central node. 2. To reduce shape distortion effects, the so-called corrected shape functions of Celia and Gray (Int J Numer Meth Eng 20:1447-1459, 1984) are extended to shells and used instead of the standard ones. In effect, all patch tests are passed additionally for shifts of the midside nodes along straight element sides and for arbitrary shifts of the central node. 3. Several extensions of the corrected shape functions are proposed to enable computations of non-flat shells. In particular, a criterion is put forward to determine the shift parameters associated with the central node for non-flat elements. Additionally, the method is presented to construct a parabolic side for a shifted midside node, which improves accuracy for symmetric curved edges. Drilling rotations are included by using the drilling Rotation Constraint equation, in a way consistent with the additive/multiplicative rotation update scheme for large rotations. We show that the corrected shape functions reduce the sensitivity of the solution to the regularization parameter γ of the penalty method for this constraint. The MITC9i shell element is subjected to a range of linear and non-linear tests to show passing the patch tests, the absence of locking, very good accuracy and insensitivity to node shifts. It favorably compares to several other tested 9-node elements.
van Gijn, Jan; Gijselhart, Joost P
2011-01-01
Contrary to what his eponymous fame suggests, Sir Charles Bell (1774-1842) was an anatomist, draughtsman and surgeon rather than purely a physiologist. He was born and educated in Edinburgh but spent most of his working life in London (1804 to 1836). It was there he started a School of Anatomy, alongside a fledgling surgical practice, just as his elder brother John had done in Edinburgh. In 1814 he joined the surgical staff at the Middlesex Hospital. In 1810 he surmised from occasional animal experiments that the anterior and posterior spinal roots differed in function. Yet it was left to the Frenchman Magendie to identify these functions as motor and sensory: a discovery that drew Bell into an ungentlemanly feud. Bell also erred slightly on the functions of the trigeminal and facial nerves, but his description of the features of idiopathic facial palsy is unrivalled.
Scatter measurement and correction method for cone-beam CT based on single grating scan
NASA Astrophysics Data System (ADS)
Huang, Kuidong; Shi, Wenlong; Wang, Xinyu; Dong, Yin; Chang, Taoqi; Zhang, Hua; Zhang, Dinghua
2017-06-01
In cone-beam computed tomography (CBCT) systems based on flat-panel detector imaging, the presence of scatter significantly reduces the quality of slices. Based on the concept of collimation, this paper presents a scatter measurement and correction method based on a single grating scan. First, according to the characteristics of CBCT imaging, the scan method using a single grating and the design requirements for the grating are worked out. Second, by analyzing the composition of object projection images and object-and-grating projection images, a processing method for the scatter image at a single projection angle is proposed. In addition, to avoid additional scans, this paper proposes an angle interpolation method for scatter images to reduce scan cost. Finally, the experimental results show that the scatter images obtained by this method are accurate and reliable, and the effect of scatter correction is obvious. When the additional object-and-grating projection images are collected and interpolated at intervals of 30 deg, the scatter correction error of slices can still be controlled within 3%.
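The angle interpolation step can be sketched independently of the grating hardware. Here the sparse per-angle scatter images are assumed already extracted from the object-and-grating projections; the sketch only shows their pixelwise linear interpolation to intermediate angles and the subtraction from a projection.

```python
import numpy as np

def interpolate_scatter(sparse_angles, sparse_scatter, all_angles):
    """Estimate per-angle scatter images from sparse measurements.

    Scatter varies slowly with projection angle, so images measured at
    intervals (e.g. every 30 deg, as in the abstract) are linearly
    interpolated pixel-by-pixel to every projection angle.
    """
    sparse_scatter = np.asarray(sparse_scatter, dtype=float)
    flat = sparse_scatter.reshape(len(sparse_angles), -1)
    out = np.empty((len(all_angles), flat.shape[1]))
    for j in range(flat.shape[1]):
        out[:, j] = np.interp(all_angles, sparse_angles, flat[:, j])
    return out.reshape(len(all_angles), *sparse_scatter.shape[1:])

# scatter measured at 0, 30, 60 deg; estimate it at 15 and 45 deg
angles = [0.0, 30.0, 60.0]
scatter = [np.full((2, 2), v) for v in (10.0, 20.0, 40.0)]
est = interpolate_scatter(angles, scatter, [15.0, 45.0])
corrected = np.full((2, 2), 100.0) - est[0]   # scatter-subtracted projection
```

The corrected projections would then feed the usual reconstruction pipeline; the 3% error figure in the abstract refers to slices reconstructed after this subtraction.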
NASA Astrophysics Data System (ADS)
Komatsu, Nobuyoshi
2017-11-01
A power-law corrected entropy based on a quantum entanglement is considered to be a viable black-hole entropy. In this study, as an alternative to Bekenstein-Hawking entropy, a power-law corrected entropy is applied to Padmanabhan's holographic equipartition law to thermodynamically examine an extra driving term in the cosmological equations for a flat Friedmann-Robertson-Walker universe at late times. Deviations from the Bekenstein-Hawking entropy generate an extra driving term (proportional to the α th power of the Hubble parameter, where α is a dimensionless constant for the power-law correction) in the acceleration equation, which can be derived from the holographic equipartition law. Interestingly, the value of the extra driving term in the present model is constrained by the second law of thermodynamics. From the thermodynamic constraint, the order of the driving term is found to be consistent with the order of the cosmological constant measured by observations. In addition, the driving term tends to be constantlike when α is small, i.e., when the deviation from the Bekenstein-Hawking entropy is small.
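Schematically, the extra driving term described above enters the acceleration equation in the form (a sketch of the structure only, not the paper's exact derivation from the equipartition law):

```latex
\frac{\ddot a}{a} = -\frac{4\pi G}{3}\,\left(\rho + 3p\right) + f(H),
\qquad f(H) \propto H^{\alpha},
```

so for small $\alpha$ the driving term $f(H)$ varies only weakly with $H$ and behaves like an effective cosmological constant, consistent with the constant-like behavior noted in the abstract.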
NASA Astrophysics Data System (ADS)
Rezzolla, L.; Ahmedov, B. J.; Miller, J. C.
2001-04-01
We present analytic solutions of Maxwell equations in the internal and external background space-time of a slowly rotating magnetized neutron star. The star is considered isolated and in vacuum, with a dipolar magnetic field not aligned with the axis of rotation. With respect to a flat space-time solution, general relativity introduces corrections related both to the monopolar and the dipolar parts of the gravitational field. In particular, we show that in the case of infinite electrical conductivity general relativistic corrections resulting from the dragging of reference frames are present, but only in the expression for the electric field. In the case of finite electrical conductivity, however, corrections resulting from both the space-time curvature and the dragging of reference frames are shown to be present in the induction equation. These corrections could be relevant for the evolution of the magnetic fields of pulsars and magnetars. The solutions found, while obtained through some simplifying assumption, reflect a rather general physical configuration and could therefore be used in a variety of astrophysical situations.
Handling Golgi-impregnated tissue for light microscopy.
Berbel, P J; Fairén, A
1983-08-08
The use of cyanoacrylate glue to fix pieces of Golgi-stained nervous tissue on a paraffin blank is proposed for obtaining thick sections of unembedded tissue with a sliding microtome. This procedure makes correct orientation of the tissue easy during sectioning and makes it possible to obtain tissue sections quickly. The sections are flat-mounted using epoxy resin, resulting in permanent preparations with excellent optical properties and enabling further thin-sectioning for light and electron microscopic studies.
Proof of the Feasibility of Coherent and Incoherent Schemes for Pumping a Gamma-Ray Laser
1989-07-01
compounds held in plastic vials or cylindrical planchettes. Foils and planchettes were exposed with their faces normal to the machine centerline. The...irradiation; foils and planchettes were counted with a solid NaI(Tl) detector system and vials were again studied with the well detector. Samples...P to flat planchettes, and F to metallic foils. The self-absorption corrections represent the fraction of fluorescent photons which reach the
Tune variations in the Large Hadron Collider
NASA Astrophysics Data System (ADS)
Aquilina, N.; Giovannozzi, M.; Lamont, M.; Sammut, N.; Steinhagen, R.; Todesco, E.; Wenninger, J.
2015-04-01
The horizontal and vertical betatron tunes of the Large Hadron Collider (LHC) depend mainly on the strength of the quadrupole magnets, but are also affected by the quadrupole component in the main dipoles. In the case of systematic misalignments, the sextupole component of the main dipoles and the sextupole corrector magnets also affects the tunes through the feed-down effect. During the first years of operation of the LHC, the tunes were routinely measured and corrected through either a feedback or a feed-forward system. In this paper, the evolution of the tunes during injection, ramp, and flat top is reconstructed from the beam measurements and from the settings of the tune feedback loop and the feed-forward corrections. This yields the precision achieved by the magnetic model of the machine with respect to the quadrupole and sextupole components. Measurements at the injection plateau show an unexpectedly large decay whose origin is not understood. These data are discussed together with the time constants and the dependence on previous cycles. We present results of dedicated experiments showing that this effect does not originate from the decay of the main dipole component. During the ramp, the tunes drift by about 0.022. It is shown that this drift is related to the precision with which the quadrupole field in the machine is tracked, and the effect is reduced to about 0.01 tune units during flat top.
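The feed-down mechanism mentioned above can be illustrated numerically: a beam passing off-center through a sextupole sees an effective quadrupole gradient, which in turn shifts the tune. The sketch below uses illustrative conventions and numbers (B_y = b3·x², ΔQ = β·k1·L/4π), not the actual LHC magnetic model:

```python
import math

def feed_down_quadrupole(b3, dx):
    """Effective quadrupole gradient seen by a beam displaced by dx inside a
    sextupole with B_y = b3 * x**2: the local gradient dB_y/dx is 2*b3*dx."""
    return 2.0 * b3 * dx

def tune_shift(k1, beta, length):
    """First-order tune shift from a small gradient error k1 acting over
    `length` at beta function `beta`: dQ = beta * k1 * L / (4*pi)."""
    return beta * k1 * length / (4.0 * math.pi)

# Illustrative numbers (assumptions, not measured LHC values):
k1 = feed_down_quadrupole(b3=0.05, dx=0.5e-3)  # 0.5 mm systematic misalignment
dQ = tune_shift(k1, beta=100.0, length=14.3)   # one dipole length of such error
print(k1, dQ)
```

Summed over all dipoles carrying a systematic offset, per-magnet contributions of this kind are what the feed-forward corrections compensate.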
Fabrication of φ160 mm convex hyperbolic mirror for remote sensing instrument
NASA Astrophysics Data System (ADS)
Kuo, Ching-Hsiang; Yu, Zong-Ru; Ho, Cheng-Fang; Hsu, Wei-Yao; Chen, Fong-Zhi
2012-10-01
In this study, efficient polishing processes with inspection procedures for a large convex hyperbolic mirror of a Cassegrain optical system are presented. The polishing process combines the techniques of conventional lapping and CNC polishing. We apply the conventional spherical lapping process to quickly remove the sub-surface damage (SSD) layer caused by the grinding process and, simultaneously, to obtain the accurate radius of the best-fit sphere (BFS) of the aspheric surface with fine surface texture. Thus the material removed during the aspherization process can be minimized, and the polishing time for SSD removal can also be reduced substantially. The inspection procedure was carried out using a phase-shifting interferometer with a CGH and a stitching technique. To acquire the real surface form error of each sub-aperture, the wavefront errors of the reference flat and the CGH flat due to the gravity effect of the vertical setup are calibrated in advance. Subsequently, we stitch 10 calibrated sub-aperture surface form errors to establish the whole irregularity of the mirror over its 160 mm diameter for correction polishing. The final surface form error of the 160 mm convex hyperbolic mirror is 0.15 μm PV and 17.9 nm RMS.
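Before sub-aperture maps can be stitched, the per-aperture piston, tip, and tilt (which the stitching solves for) must be separated from true surface error. A minimal sketch of that step, fitting and removing a best-fit plane by least squares (pure Python; real stitching additionally solves the inter-aperture consistency problem):

```python
def solve3(A, b):
    """Gauss-Jordan elimination for a 3x3 linear system A x = b."""
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [M[r][k] - f * M[col][k] for k in range(4)]
    return [M[i][3] / M[i][i] for i in range(3)]

def remove_plane(points):
    """Least-squares fit z = a + b*x + c*y to (x, y, z) samples and return
    the residuals, i.e. the map with piston/tip/tilt removed, as done per
    sub-aperture before stitching."""
    n = len(points)
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points)
    sxx = sum(p[0] * p[0] for p in points); syy = sum(p[1] * p[1] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    sz = sum(p[2] for p in points)
    sxz = sum(p[0] * p[2] for p in points); syz = sum(p[1] * p[2] for p in points)
    # Normal equations A^T A m = A^T z for m = (a, b, c).
    a, b, c = solve3([[n, sx, sy], [sx, sxx, sxy], [sy, sxy, syy]], [sz, sxz, syz])
    return [(x, y, z - (a + b * x + c * y)) for x, y, z in points]
```

Points sampled from a pure plane yield zero residuals; whatever remains after plane removal is surface form error attributed to the mirror itself.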
DOE Office of Scientific and Technical Information (OSTI.GOV)
U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office
This Corrective Action Investigation Plan contains the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office's approach to collect the data necessary to evaluate corrective action alternatives appropriate for the closure of Corrective Action Unit (CAU) 528, Polychlorinated Biphenyls Contamination (PCBs), Nevada Test Site (NTS), Nevada, under the Federal Facility Agreement and Consent Order. Located in the southwestern portion of Area 25 on the NTS in Jackass Flats (adjacent to Test Cell C [TCC]), CAU 528 consists of Corrective Action Site 25-27-03, Polychlorinated Biphenyls Surface Contamination. Test Cell C was built to support Nuclear Rocket Development Station (operational between 1959 and 1973) activities, including conducting ground tests and static firings of nuclear engine reactors. Although CAU 528 was not considered a direct potential source of PCB and petroleum contamination, two potential sources of contamination from an unknown source have nevertheless been identified, in concentrations that could potentially pose an unacceptable risk to human health and/or the environment. This CAU's close proximity to TCC prompted Shaw to collect surface soil samples, which have indicated the presence of PCBs extending throughout the area to the north, east, and south, and even to the edge of the western boundary. Based on this information, more extensive field investigation activities are being planned, the results of which are to be used to support a defensible evaluation of corrective action alternatives in the corrective action decision document.
Balance and gait in children with dyslexia.
Moe-Nilssen, Rolf; Helbostad, Jorunn L; Talcott, Joel B; Toennessen, Finn Egil
2003-05-01
Tests of postural stability have provided some evidence of a link between deficits in gross motor skills and developmental dyslexia. The ordinal-level scales used previously, however, have limited measurement sensitivity, and no studies have investigated motor performance during walking in participants with dyslexia. The purpose of this study was to investigate whether continuous-scaled measures of standing balance and gait could discriminate between groups of impaired and normal readers when investigators were blind to group membership during testing. Children with dyslexia (n=22) and controls (n=18), aged 10-12 years, performed walking tests at four different speeds (slow, preferred, fast, very fast) on an even and an uneven surface, and tests of unperturbed and perturbed body sway during standing. Body movements were registered by a triaxial accelerometer over the lower trunk, and measures of reaction time, body sway, walking speed, step length, and cadence were calculated. Results were controlled for gender differences. Tests of standing balance with eyes closed did not discriminate between groups. All unperturbed standing tests with eyes open showed significant group differences (P<0.05) and correctly classified 70-77.5% of the subjects into their respective groups. Mean walking speed during very fast walking on both the flat and the uneven surface was ≥0.2 m/s (P≤0.01) faster for controls than for the group with dyslexia. This test classified 77.5% and 85% of the subjects correctly on the flat and uneven surfaces, respectively. Cadence at preferred or very fast speed did not differ statistically between groups, but revealed significant group differences when all subjects were compared at a normalised walking speed (P≤0.04). Very fast walking speed, as well as cadence at a normalised speed, discriminated better between groups when subjects were walking on an uneven surface than on a flat floor.
Continuous-scaled walking tests performed in field settings may be suitable for motor skill assessment as a component of a screening tool for developmental dyslexia.
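As an illustration of how cadence can be derived from a trunk-worn accelerometer trace (a simplified sketch; the study's actual signal processing is not described in this abstract), steps can be counted as debounced threshold crossings of the vertical acceleration:

```python
import math

def cadence_steps_per_min(signal, fs, threshold=1.0, refractory=0.25):
    """Estimate cadence (steps/min) from a vertical trunk-acceleration trace
    by counting upward threshold crossings, ignoring crossings closer
    together than `refractory` seconds (simple debouncing)."""
    steps = 0
    last_t = -refractory
    for i in range(1, len(signal)):
        t = i / fs
        if signal[i - 1] < threshold <= signal[i] and t - last_t >= refractory:
            steps += 1
            last_t = t
    duration_min = len(signal) / fs / 60.0
    return steps / duration_min

# Synthetic trace: 2 steps per second for 10 s, sampled at 100 Hz
fs = 100
trace = [1.5 * math.sin(2 * math.pi * 2.0 * i / fs) for i in range(fs * 10)]
print(cadence_steps_per_min(trace, fs))
```

For the synthetic 2 Hz trace the estimator returns about 120 steps/min; cadence normalised to a common walking speed, as in the study, would then be obtained by comparing such estimates across speed conditions.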
NASA Astrophysics Data System (ADS)
Jönsson, H.; Ryde, N.; Nissen, P. E.; Collet, R.; Eriksson, K.; Asplund, M.; Gustafsson, B.
2011-06-01
Context. It is still debated whether or not the Galactic chemical evolution of sulphur in the halo follows the flat trend with [Fe/H] that is ascribed to the result of explosive nucleosynthesis in type II SNe. It has been suggested that the disagreement between different investigations of sulphur abundances in halo stars might be owing to problems with the diagnostics used, that a new production source of sulphur, like hypernovae, might be needed in the early Universe, or that the deposition of supernova ejecta into the interstellar medium is time-delayed. Aims: The aim of this study is to try to clarify this situation by measuring the sulphur abundance in a sample of halo giants using two diagnostics: the S i triplet around 1045 nm and the [S i] line at 1082 nm. The latter of the two is not believed to be sensitive to non-LTE effects. We can thereby minimize the uncertainties in the diagnostics used and estimate the usefulness of the triplet for the sulphur determination in halo K giants. We will also be able to compare our sulphur abundance differences from the two diagnostics with the expected non-LTE effects in the 1045 nm triplet previously calculated by others. Methods: High-resolution near-infrared spectra of ten K giants were recorded using the CRIRES spectrometer mounted on the VLT. Two standard settings were used, one covering the S i triplet and one covering the [S i] line. The sulphur abundances were individually determined with equivalent widths and synthetic spectra for the two diagnostics, using tailored 1D model atmospheres and relying on non-LTE corrections from the literature. Effects of convective inhomogeneities in the stellar atmospheres are investigated. Results: The sulphur abundances derived from both the [S i] line and the non-LTE corrected 1045 nm triplet favor a flat trend for the evolution of sulphur. In contrast to some previous studies, we saw no "high" values of [S/Fe] in our sample.
Conclusions: We corroborate the flat trend in the [S/Fe] vs. [Fe/H] plot for halo stars found in some previous studies but do not find a scatter or a rise in [S/Fe] as obtained in other works. We find the sulphur abundances deduced from the non-LTE corrected triplet to be somewhat lower than the abundances from the [S i] line, possibly indicating too large non-LTE corrections. Considering 3D modeling, however, they might instead be too small. Moreover, we show that the [S i] line can be used as a sulphur diagnostic down to [Fe/H] ~ -2.3 in giants. Based on observations collected at the European Southern Observatory, Chile (ESO program 080.D-0675(A)).
Exploiting similarity in turbulent shear flows for turbulence modeling
NASA Technical Reports Server (NTRS)
Robinson, David F.; Harris, Julius E.; Hassan, H. A.
1992-01-01
It is well known that current k-epsilon models cannot predict the flow over a flat plate and its wake. In an effort to address this issue and other issues associated with turbulence closure, a new approach for turbulence modeling is proposed which exploits similarities in the flow field. Thus, if we consider the flow over a flat plate and its wake, then in addition to taking advantage of the log-law region, we can exploit the fact that the flow becomes self-similar in the far wake. This latter behavior makes it possible to cast the governing equations as a set of total differential equations. Solutions of this set and comparison with measured shear stress and velocity profiles yield the desired set of model constants. Such a set is, in general, different from other sets of model constants. The rationale for this approach is that if we can correctly model the flow over a flat plate and its far wake, then we have a better chance of predicting the behavior in between. It is to be noted that the approach does not appeal, in any way, to the decay of homogeneous turbulence. This is because the asymptotic behavior of the flow under consideration is not representative of the decay of homogeneous turbulence.
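The two anchor points of the approach, the log-law region and the self-similar far wake, can be written down directly. The sketch below uses standard textbook forms with typical constants, not the model constants derived in the paper:

```python
import math

KAPPA, B = 0.41, 5.0  # von Karman constant and log-law intercept (typical values)

def u_plus_loglaw(y_plus):
    """Mean velocity in wall units in the log-law region: u+ = ln(y+)/kappa + B."""
    return math.log(y_plus) / KAPPA + B

def wake_deficit(eta):
    """Self-similar far-wake velocity-deficit shape, here the classical Gaussian
    form f(eta) = exp(-eta^2/2); the similarity variable eta = y/delta(x)
    collapses the deficit profiles at all downstream stations."""
    return math.exp(-eta * eta / 2.0)

print(u_plus_loglaw(100.0))  # ~16.2
print(wake_deficit(0.0), wake_deficit(2.0))
```

In similarity form the deficit profile is identical at every downstream station, which is what reduces the governing equations to total differential equations and lets measured profiles pin down the model constants.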
NASA Technical Reports Server (NTRS)
Markey, Melvin F.
1959-01-01
A theory is derived for determining the loads and motions of a deeply immersed prismatic body. The method makes use of a two-dimensional water-mass variation and an aspect-ratio correction for three-dimensional flow. The equations of motion are generalized by using a mean value of the aspect-ratio correction and by assuming a variation of the two-dimensional water mass for the deeply immersed body. These equations lead to impact coefficients that depend on an approach parameter which, in turn, depends upon the initial trim and flight-path angles. Comparison of experiment with theory is shown at maximum load and maximum penetration for the flat-bottom (0° dead-rise angle) model with beam-loading coefficients from 36.5 to 133.7 over a wide range of initial conditions. A dead-rise angle correction is applied, and maximum-load data are compared with theory for the case of a model with a 30° dead-rise angle and beam-loading coefficients from 208 to 530.
Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; ...
2016-11-28
Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision.
Range Compressed Holographic Aperture Ladar
2017-06-01
prescribed phase and the phase correction estimate given by the PGA estimator, respectively. Finally, 50 trials were run over which a new random draw of ... target mounted to the rotation stage and tilted vertically away from the sensor by 40°. The target consists of 36 aluminum blades (360 mm × 25.4 mm × ... 1.57 mm), stacked and rotated by 5° each. A flat surface finish was achieved by lightly sandblasting the blades before assembly. By design, this is a
Earth albedo neutrons from 10 to 100 MeV.
NASA Technical Reports Server (NTRS)
Preszler, A. M.; Simnett, G. M.; White, R. S.
1972-01-01
We report the measurement of the energy and angular distributions of earth albedo neutrons from 10 to 100 MeV at 40 deg N geomagnetic latitude from a balloon at 120,000 ft, below 4.65 g/sq cm. The albedo-neutron omnidirectional energy distribution is flat to 50 MeV, then decreases with energy. The absolute neutron energy distribution is of the correct strength and shape for the albedo neutrons to be the source of the protons trapped in earth's inner radiation belt.
Loop-corrected Virasoro symmetry of 4D quantum gravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, T.; Kapec, D.; Raclariu, A.; ...
2017-08-16
Recently a boundary energy-momentum tensor T_zz has been constructed from the soft graviton operator for any 4D quantum theory of gravity in asymptotically flat space. Up to an "anomaly" which is one-loop exact, T_zz generates a Virasoro action on the 2D celestial sphere at null infinity. Here we show by explicit construction that the effects of the IR divergent part of the anomaly can be eliminated by a one-loop renormalization that shifts T_zz.
Toxicity of phosphor esters: Willy Lange (1900-1976) and Gerda von Krueger (1907-after 1970).
Petroianu, G A
2010-10-01
In 1851 Williamson serendipitously discovered a new and efficient way to produce ethers using ethyl iodide and potassium salts. Based on this new synthetic approach, the Frenchman Philippe de Clermont and the Muscovite Wladimir Moschnin, both students of Adolphe Wurtz in his Paris School of Chemistry, achieved the synthesis of the first ester of pyrophosphoric acid (TEPP). De Clermont "tasted" the new compound and, although TEPP is a potent cholinesterase inhibitor, he failed to recognize its toxicity. Almost a century later, in 1932, Willy Lange (1900-1976) and his graduate student Gerda v. Krueger (1907-after 1970) described the toxicity of organophosphates. While the classic paper of the two, "Über Ester der Monofluorphosphorsäure", is cited by almost everybody working in the field, little is known about Lange and almost nothing about v. Krueger. This brief communication attempts to shed some light on the lives of both.
Alexis Carrel: genius, innovator and ideologist.
Dutkowski, P; de Rougemont, O; Clavien, P-A
2008-10-01
Alexis Carrel was a Frenchman from Lyon who gained fame at the Rockefeller Institute in New York at the beginning of the 20th century. He was the first to demonstrate that arteriovenous anastomoses were possible, and in 1912 he was awarded the Nobel Prize for his contributions to vascular surgery and transplantation. He was a versatile scientist who made numerous discoveries, from the design of an antiseptic solution to treat injuries during the First World War, to tissue culture and engineering, and organ preservation, making him the father of solid organ transplantation. Together with the famous aviator and engineer Charles Lindbergh, he was the first scientist capable of keeping an entire organ alive outside of the body using a perfusion machine. Due to his many dubious ideas and his association with fascism in the 1930s and during the Second World War, many of his scientific achievements have been forgotten today or are taken for granted.
[Means and methods of personal hygiene in the experiment with 520-day isolation].
Shumilina, G A; Shumilina, I V; Solov'eva, S O
2013-01-01
Six volunteers (3 Russians, a Frenchman, an Italian, and a Chinese) participated in an assessment of the contribution of sanitation and housekeeping provisions to their wellbeing during 520-day isolation and confinement. The subject of the study was the quality and sufficiency of housekeeping agents and procedures, as well as more than 60 personal hygiene items. The sanitation and housekeeping monitoring involved clinical, hygienic, and microbiological methods, and also consideration of crew comments on the items at their disposal and the recommended procedures. Based on the analysis of the functional condition of the integument and oral cavity and entries in the questionnaires, i.e., objective data and subjective feelings, all test subjects remained in invariably good condition. Owing to the application of the selected hygienic means and methods, the microbial status of the crew was stable throughout the 520-day isolation.
NASA Astrophysics Data System (ADS)
Wells, Jered R.; Segars, W. Paul; Kigongo, Christopher J. N.; Dobbins, James T., III
2011-03-01
This paper describes a recently developed post-acquisition motion correction strategy for application to lower-cost computed tomography (LCCT) for under-resourced regions of the world. Increased awareness regarding global health and its challenges has encouraged the development of more affordable healthcare options for underserved people worldwide. In regions such as sub-Saharan Africa, intermediate level medical facilities may serve millions with inadequate or antiquated equipment due to financial limitations. In response, the authors have proposed a LCCT design which utilizes a standard chest x-ray examination room with a digital flat panel detector (FPD). The patient rotates on a motorized stage between the fixed cone-beam source and FPD, and images are reconstructed using a Feldkamp algorithm for cone-beam scanning. One of the most important proofs-of-concept in determining the feasibility of this system is the successful correction of undesirable motion. A 3D motion correction algorithm was developed in order to correct for potential patient motion, stage instabilities and detector misalignments which can all lead to motion artifacts in reconstructed images. Motion will be monitored by the radiographic position of fiducial markers to correct for rigid body motion in three dimensions. Based on simulation studies, projection images corrupted by motion were re-registered with average errors of 0.080 mm, 0.32 mm and 0.050 mm in the horizontal, vertical and depth dimensions, respectively. The overall absence of motion artifacts in motion-corrected reconstructions indicates that reasonable amounts of motion may be corrected using this novel technique without significant loss of image quality.
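The fiducial-based re-registration step can be sketched in 2D: given matched marker positions before and after motion, the least-squares rigid transform has a closed form. This is a simplification of the paper's 3D correction; the function name and the 2D restriction are illustrative:

```python
import math

def rigid_register_2d(src, dst):
    """Least-squares rigid (rotation + translation) alignment of matched 2D
    fiducial-marker positions: returns (theta, tx, ty) such that rotating a
    source point by theta and translating by (tx, ty) best matches dst."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    # Closed-form 2D rotation from centered cross and dot correlations.
    num = den = 0.0
    for (sx, sy), (dx_, dy_) in zip(src, dst):
        ax, ay = sx - csx, sy - csy
        bx, by = dx_ - cdx, dy_ - cdy
        num += ax * by - ay * bx  # cross terms
        den += ax * bx + ay * by  # dot terms
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, tx, ty
```

Applying the inverse of the recovered transform to the projection coordinates undoes the rigid component of patient, stage, or detector motion before reconstruction.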
NASA Astrophysics Data System (ADS)
Goon, Garrett
2017-01-01
We study the effects of heavy fields on 4D spacetimes with flat, de Sitter and anti-de Sitter asymptotics. At low energies, matter generates specific, calculable higher derivative corrections to the GR action which perturbatively alter the Schwarzschild-(A)dS family of solutions. The effects of massive scalars, Dirac spinors and gauge fields are each considered. The six-derivative operators they produce, such as ~R^3 terms, generate the leading corrections. The induced changes to horizon radii, Hawking temperatures and entropies are found. Modifications to the energy of large AdS black holes are derived by imposing the first law. An explicit demonstration of the replica trick is provided, as it is used to derive black hole and cosmological horizon entropies. Considering entropy bounds, it is found that scalars and fermions increase the entropy one can store inside a region bounded by a sphere of fixed size, but that vectors, oddly, lead to a decrease. We also demonstrate, however, that many of the corrections fall below the resolving power of the effective field theory and are therefore untrustworthy. Defining properties of black holes, such as the horizon area and Hawking temperature, prove to be remarkably robust against higher derivative gravitational corrections.
Quantum corrections to the stress-energy tensor in thermodynamic equilibrium with acceleration
NASA Astrophysics Data System (ADS)
Becattini, F.; Grossi, E.
2015-08-01
We show that the stress-energy tensor has additional terms with respect to the ideal form in states of global thermodynamic equilibrium in flat spacetime with nonvanishing acceleration and vorticity. These corrections are of quantum origin and their leading terms are second order in the gradients of the thermodynamic fields. Their relevant coefficients can be expressed in terms of correlators of the stress-energy tensor operator and the generators of the Lorentz group. With respect to previous assessments, we find that there are more second-order coefficients and that all thermodynamic functions including energy density receive acceleration and vorticity dependent corrections. Notably, also the relation between ρ and p , that is, the equation of state, is affected by acceleration and vorticity. We have calculated the corrections for a free real scalar field—both massive and massless—and we have found that they increase, particularly for a massive field, at very high acceleration and vorticity and very low temperature. Finally, these nonideal terms depend on the explicit form of the stress-energy operator, implying that different stress-energy tensors of the scalar field—canonical or improved—are thermodynamically inequivalent.
Su, Shonglun; Mo, Zhongjun; Guo, Junchao; Fan, Yubo
2017-01-01
Flat foot is one of the common deformities in the young population, seriously affecting weight support and daily exercise. However, quantitative data to guide the material selection and shape design of personalized orthopedic insoles are lacking. This study evaluated the biomechanical effects of the material hardness and support height of a personalized orthopedic insole on foot tissues, by in vivo experiment and finite element modeling. The correction of arch height increased with material hardness and support height. The peak plantar pressure increased with material hardness, and the values obtained when wearing insoles of 40° were apparently higher than in the barefoot condition. A harder insole material results in higher stress in the joints and ligaments than a softer material. In the calcaneocuboid joint, the stress increased with the arch height of the insole. The material hardness did not apparently affect the stress in the ankle joint, but the support height of the insole did. In general, the insole material and support design positively affect the correction provided by an orthopedic insole, but can negatively result in unreasonable stress in the joints and ligaments. The design should therefore integrate improving correction with reducing stress in foot tissues.
Peikert, Tobias; Duan, Fenghai; Rajagopalan, Srinivasan; Karwoski, Ronald A; Clay, Ryan; Robb, Richard A; Qin, Ziling; Sicks, JoRean; Bartholmai, Brian J; Maldonado, Fabien
2018-01-01
Optimization of the clinical management of screen-detected lung nodules is needed to avoid unnecessary diagnostic interventions. Herein we demonstrate the potential value of a novel radiomics-based approach for the classification of screen-detected indeterminate nodules. Independent quantitative variables assessing various radiologic nodule features, such as sphericity, flatness, elongation, spiculation, lobulation, and curvature, were developed from the NLST dataset using 726 indeterminate nodules (all ≥7 mm; benign, n=318; malignant, n=408). Multivariate analysis was performed using the least absolute shrinkage and selection operator (LASSO) method for variable selection and regularization in order to enhance the prediction accuracy and interpretability of the multivariate model. The bootstrapping method was then applied for internal validation, and the optimism-corrected AUC was reported for the final model. Eight of the originally considered 57 quantitative radiologic features were selected by LASSO multivariate modeling. These 8 features capture location (vertical location: Offset carina centroid z), size (volume estimate: Minimum enclosing brick), shape (flatness), density (texture analysis: Score Indicative of Lesion/Lung Aggression/Abnormality (SILA) texture), and surface characteristics (surface complexity: Maximum shape index and Average shape index; surface curvature estimates: Average positive mean curvature and Minimum mean curvature), all with P<0.01. The optimism-corrected AUC for these 8 features is 0.939. Our novel radiomic LDCT-based approach for indeterminate screen-detected nodule characterization appears extremely promising; however, independent external validation is needed.
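The optimism-corrected AUC reported above follows the standard bootstrap recipe: refit the model on each bootstrap sample, and subtract from the apparent AUC the average gap between each bootstrap model's AUC on its own sample and on the original data. A minimal pure-Python sketch with a deliberately trivial one-parameter "model" (the real analysis refits the LASSO model in each bootstrap):

```python
import random

def auc(scores, labels):
    """Mann-Whitney AUC: probability a random positive outscores a random negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        return 0.5  # degenerate bootstrap sample: no discrimination measurable
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def optimism_corrected_auc(x, y, n_boot=200, seed=0):
    """Harrell-style bootstrap optimism correction for a toy one-feature model
    whose only fitted parameter is the orientation (sign) of the feature:
    corrected = apparent - mean(AUC_boot_model_on_boot - AUC_boot_model_on_original)."""
    rng = random.Random(seed)
    def fit(xs, ys):                       # "training": orient the feature
        return 1.0 if auc(xs, ys) >= 0.5 else -1.0
    def score(sign, xs):
        return [sign * v for v in xs]
    apparent = auc(score(fit(x, y), x), y)
    optimism = 0.0
    for _ in range(n_boot):
        idx = [rng.randrange(len(x)) for _ in x]
        xb, yb = [x[i] for i in idx], [y[i] for i in idx]
        sb = fit(xb, yb)
        optimism += auc(score(sb, xb), yb) - auc(score(sb, x), y)
    return apparent - optimism / n_boot
```

The correction penalizes the apparent AUC by exactly the amount the refitted models overfit their own bootstrap samples, which is why it is a reasonable internal-validation estimate when no external cohort is available.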
Salt attack in parking garage in block of flats
NASA Astrophysics Data System (ADS)
Beran, Pavel; Frankeová, Dita; Pavlík, Zbyšek
2017-07-01
In recent years, many new blocks of flats with parking garages inside the buildings have been constructed. This tendency brings unquestionable benefits for residents and for city planning, but it requires new design and structural approaches and advanced material and construction solutions. The paper presents an analysis of plaster damage on a partition wall in the parking garage of one of these buildings. The damage to the studied plaster is caused by salts that are transported into the garage during winter, together with snow, on car undercarriages. The snow melts, and water with dissolved salts is transported by capillary suction from the concrete floor into the rendered partition wall. Depending on the interior temperature, the absorbed water with dissolved chlorides evaporates, and salt crystals form from the oversaturated pore solution, damaging the surface plaster layers. This damage would not have occurred if the partition wall had been correctly isolated from the floor finish layer in the parking garage.
Borges, Cláudia Dos Santos; Fernandes, Luciane Fernanda Rodrigues Martinho; Bertoncello, Dernival
2013-05-01
Objective: evaluate the probable relationship among plantar arch, lumbar curvature, and low back pain. Methods: fifteen healthy women were assessed, taking into account personal data and anthropometric measurements, photopodoscopic evaluation of the plantar arch, and biophotogrammetric postural analysis (both using the SAPO software), as well as evaluation of lumbar pain using a Visual Analog Scale (VAS). The average age of the participants was 30.45 (±6.25) years. Results: of the feet evaluated, six individuals had flat feet, five had high arches, and four had normal feet. All reported algic syndrome in the lumbar spine, with the highest VAS values for the volunteers with high arches. Correlation was observed between the plantar arch and the angle of the lumbar spine (r = -0.71, p = 0.004). Conclusion: high arch was correlated with more intense algic syndrome, while there was a moderate positive correlation between flat foot and increased lumbar curvature, and between high arch and lumbar correction. Level of Evidence IV, Case Series.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fröb, Markus B.; Verdaguer, Enric, E-mail: mfroeb@itp.uni-leipzig.de, E-mail: enric.verdaguer@ub.edu
We derive the leading quantum corrections to the gravitational potentials in a de Sitter background, due to the vacuum polarization from loops of conformal fields. Our results are valid for arbitrary conformal theories, even strongly interacting ones, and are expressed using the coefficients b and b' appearing in the trace anomaly. Apart from the de Sitter generalization of the known flat-space results, we find two additional contributions: one which depends on the finite coefficients of terms quadratic in the curvature appearing in the renormalized effective action, and one which grows logarithmically with physical distance. While the first contribution corresponds to a rescaling of the effective mass, the second contribution leads to a faster fall-off of the Newton potential at large distances, and is potentially measurable.
Operation and Performance of the Mars Exploration Rover Imaging System on the Martian Surface
NASA Technical Reports Server (NTRS)
Maki, Justin N.; Litwin, Todd; Herkenhoff, Ken
2005-01-01
This slide presentation details the Mars Exploration Rover (MER) imaging system. Over 144,000 images have been gathered from all Mars missions, with 83.5% of them gathered by MER. Each rover has 9 cameras (Navcams, front and rear Hazcams, Pancams, and the Microscopic Imager, plus the Descent Camera, spanning both engineering and science cameras), all producing 1024 x 1024 (1-megapixel) images in the same format. All onboard image-processing code is implemented in flight software and includes extensive processing capabilities such as autoexposure, flat-field correction, image orientation, thumbnail generation, subframing, and image compression. Ground image processing is done at the Jet Propulsion Laboratory's Multimission Image Processing Laboratory using Video Image Communication and Retrieval (VICAR); stereo processing of left/right pairs provides raw images, radiometric correction, solar energy maps, triangulation (Cartesian 3-space), and slope maps.
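Flat-field correction, one of the onboard processing steps listed above, divides out pixel-to-pixel gain variation measured from a uniformly illuminated exposure. A minimal sketch of the general idea (the function name and toy vignetting pattern are illustrative, not MER flight code):

```python
import numpy as np

def flat_field_correct(raw, flat, dark=None, eps=1e-6):
    """Classic flat-field correction: divide out pixel-to-pixel gain.

    raw  : raw detector frame
    flat : flat-field frame (uniform-illumination exposure)
    dark : optional dark frame (fixed-pattern offset), subtracted first
    """
    raw = raw.astype(float)
    flat = flat.astype(float)
    if dark is not None:
        raw = raw - dark
        flat = flat - dark
    gain = flat / flat.mean()            # normalized pixel gain map
    return raw / np.maximum(gain, eps)   # divide out the gain pattern

# toy example: a uniform scene seen through a vignetting pattern is flattened
yy, xx = np.mgrid[0:8, 0:8]
vignette = 1.0 - 0.05 * ((xx - 3.5)**2 + (yy - 3.5)**2) / 24.5
scene = np.full((8, 8), 100.0)
corrected = flat_field_correct(scene * vignette, 1000.0 * vignette)
```

After correction the frame is uniform again (up to a global scale set by the flat's normalization).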
Turbulent shear stresses in compressible boundary layers
NASA Technical Reports Server (NTRS)
Laderman, A. J.; Demetriades, A.
1979-01-01
Hot-wire anemometer measurements of turbulent shear stresses in a Mach 3 compressible boundary layer were performed to investigate the effects of heat transfer on turbulence. Measurements were obtained with an X-probe in a flat-plate, zero-pressure-gradient, two-dimensional boundary layer in a wind tunnel, with wall-to-freestream temperature ratios of 0.94 and 0.71. The measured shear-stress distributions are in good agreement with previous results, supporting the contention that the shear-stress distribution is essentially independent of Mach number and heat transfer for Mach numbers from incompressible to hypersonic and wall-to-freestream temperature ratios of 0.4 to 1.0. It is also found that corrections for the frequency-response limitations of the electronic equipment are necessary to determine the correct shear-stress distribution, particularly near the wall.
NASA Astrophysics Data System (ADS)
García-Ramos, Diego A.; Albano, Paolo G.; Harzhauser, Mathias; Piller, Werner E.; Zuschin, Martin
2016-04-01
Live-dead (LD) studies aim to help understand how faithfully fossil assemblages can be used to quantitatively infer the structure of the original living communities that generated them. To this purpose, LD comparisons have been conducted in different terrestrial and aquatic environments to assess how environment-specific differences in quality and intensity of taphonomic factors affect LD fidelity. In sub-tropical and tropical settings, most LD studies have focused on hard substrates or seagrass bottoms. Here we present results on molluscan assemblages from soft carbonate sediments in tidal flats of the Persian (Arabian) Gulf (Indo-West Pacific biogeographic province). We analyzed a total of 7193 mollusks collected from six sites comprising time-averaged death assemblages (DAs) and snapshot living assemblages (LAs). All analyses were performed at site and at habitat scales after correcting for sample-size differences. We found a good match in proportional abundance and a notable mismatch in species composition. In fact, species richness in DAs is 6 times larger than in LAs at site scale, and 4 times at habitat scale. Additionally, we found a good fidelity of evenness, and rank abundance of feeding guilds. Other studies have shown that molluscan DAs from subtidal carbonate environments can display lower time-averaging than those from siliciclastic environments due to high rates of shell loss to bioerosion and dissolution. For our case study of tidal flat carbonate settings, we interpret that despite temporal autocorrelation (good fidelity of proportional abundance), substantial differences in species richness and composition can be explained by early cementation, lateral mixing, intense bioturbation and moderate sedimentation rates. Our results suggest that tidal flat carbonate environments can potentially lead to a wider window of time-averaging in comparison with subtidal carbonate settings.
Drag measurements of an axisymmetric nacelle mounted on a flat plate at supersonic speeds
NASA Technical Reports Server (NTRS)
Flamm, Jeffrey D.; Wilcox, Floyd J., Jr.
1995-01-01
An experimental investigation was conducted to determine the effect of diverter wedge half-angle and nacelle lip height on the drag characteristics of an assembly consisting of a nacelle fore cowl from a typical high-speed civil transport (HSCT) and a diverter mounted on a flat plate. Data were obtained for diverter wedge half-angles of 4.0 deg, 6.0 deg, and 8.0 deg and ratios of the nacelle lip height above the flat plate to the boundary-layer thickness (h(sub n)/delta) of approximately 0.87 to 2.45. Limited drag data were also obtained on a complete nacelle/diverter configuration that included fore and aft cowls. Although the nacelle/diverter drag data were not corrected for base pressures or internal flow drag, the data are useful for comparing the relative drag of the configurations tested. The tests were conducted in the Langley Unitary Plan Wind Tunnel at Mach numbers of 1.50, 1.80, 2.10, and 2.40 and Reynolds numbers ranging from 2.00 x 10(exp 6) to 5.00 x 10(exp 6) per foot. The results of this investigation showed that the nacelle/diverter drag increased essentially linearly with increasing h(sub n)/delta, except near h(sub n)/delta = 1.0, where the data showed nonlinear behavior. This nonlinear behavior was probably caused by the interaction of the shock waves from the nacelle/diverter configuration with the flat-plate boundary layer. At the lowest h(sub n)/delta tested, the diverter wedge half-angle had virtually no effect on the nacelle/diverter drag. However, as h(sub n)/delta increased, the nacelle/diverter drag increased as the diverter wedge half-angle increased.
2018-01-01
Although the signal space separation (SSS) method can successfully suppress interference/artifacts overlapped onto magnetoencephalography (MEG) signals, the method is considered inapplicable to data from nonhelmet-type sensor arrays, such as the flat sensor arrays typically used in magnetocardiographic (MCG) applications. This paper shows that the SSS method is still effective for data measured from a (nonhelmet-type) array of sensors arranged on a flat plane. By using computer simulations, it is shown that the optimum location of the origin can be determined by assessing the dependence of signal and noise gains of the SSS extractor on the origin location. The optimum values of the parameters LC and LD, which, respectively, indicate the truncation values of the multipole-order ℓ of the internal and external subspaces, are also determined by evaluating dependences of the signal, noise, and interference gains (i.e., the shield factor) on these parameters. The shield factor exceeds 10^4 for interferences originating from fairly distant sources. However, the shield factor drops to approximately 100 when calibration errors of 0.1% exist and to 30 when calibration errors of 1% exist. The shielding capability can be significantly improved using vector sensors, which measure the x, y, and z components of the magnetic field. With 1% calibration errors, a vector sensor array still maintains a shield factor of approximately 500. It is found that the SSS application to data from flat sensor arrays causes a distortion in the signal magnetic field, but it is shown that the distortion can be corrected by using an SSS-modified sensor lead field in the voxel space analysis. PMID:29854364
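The separation idea behind SSS — expand the measured field in basis functions tied to sources inside versus outside the sensor array, fit both jointly, then keep only the internal part — can be illustrated with a deliberately simplified 1-D toy model. Here Lorentzian "source fields" on candidate grids stand in for the actual multipole basis; all positions and amplitudes are illustrative, not the method of the paper:

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 41)          # flat sensor array along a line

def lorentz(x, xs, ys):
    """Toy field of a point source at (xs, ys) seen by sensors at (x, 0)."""
    return 1.0 / ((x - xs)**2 + ys**2)

# candidate source grids: "internal" below the array, "external" above it
internal_basis = np.stack(
    [lorentz(x, xs, -0.3) for xs in np.linspace(-0.8, 0.8, 9)], axis=1)
external_basis = np.stack(
    [lorentz(x, xs, 3.0) for xs in np.linspace(-2.0, 2.0, 5)], axis=1)
basis = np.hstack([internal_basis, external_basis])

signal = lorentz(x, 0.2, -0.3)               # source of interest (internal)
interference = 50.0 * lorentz(x, 1.0, 3.0)   # strong distant disturbance
data = signal + interference

# joint least-squares fit, then reconstruct the internal subspace only
coef, *_ = np.linalg.lstsq(basis, data, rcond=None)
cleaned = internal_basis @ coef[:internal_basis.shape[1]]
```

The external contribution is absorbed by the external basis, so `cleaned` recovers the internal signal even though the interference dominates the raw data.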
Wada, Junichiro; Hideshima, Masayuki; Inukai, Shusuke; Matsuura, Hiroshi; Wakabayashi, Noriyuki
2014-01-01
To investigate the effects of the width and cross-sectional shape of the major connectors of maxillary dentures located in the middle area of the palate on the accuracy of phonetic output of consonants using an originally developed speech recognition system. Nine adults (4 males and 5 females, aged 24-26 years) with sound dentition were recruited. The following six sounds were considered: [∫i], [t∫i], [ɾi], [ni], [çi], and [ki]. The experimental connectors were fabricated to simulate bars (narrow, 8-mm width) and plates (wide, 20-mm width). Two types of cross-sectional shapes in the sagittal plane were specified: flat and plump edge. The appearance ratio of phonetic segment labels was calculated with the speech recognition system to indicate the accuracy of phonetic output. Statistical analysis was conducted using one-way ANOVA and Tukey's test. The mean appearance ratio of correct labels (MARC) significantly decreased for [ni] with the plump edge (narrow connector) and for [ki] with both the flat and plump edge (wide connectors). For [çi], the MARCs tended to be lower with flat plates. There were no significant differences for the other consonants. The width and cross-sectional shape of the connectors had limited effects on the articulation of consonants at the palate. © 2015 S. Karger AG, Basel.
Nephrogenic adenoma of the urinary tract: A 6-year single center experience.
Turcan, Didem; Acikalin, Mustafa Fuat; Yilmaz, Evrim; Canaz, Funda; Arik, Deniz
2017-07-01
Nephrogenic adenoma is an uncommon benign lesion that occurs at several sites in the urinary tract, from the renal pelvis to the urethra, with the highest frequency in the urinary bladder. Nephrogenic adenoma displays a broad spectrum of architectural and cytological features. Hence, recognition of its characteristic histopathological features is needed to distinguish this lesion from its mimickers. A retrospective series of 21 cases of nephrogenic adenoma in 18 patients, diagnosed in our department between 2010 and 2016, was analyzed. All histological slides were reviewed by two pathologists and the diagnosis of each case was confirmed. Immunohistochemistry was performed for PAX-8 in all cases. CK7, PAX-2, PSA, p53, p63, GATA-3 and α-methylacyl-CoA racemase (AMACR) were applied in problematic cases. The most common location of the lesion was the urinary bladder (14 patients), followed by the renal pelvis (2 patients), ureter (1 patient) and urethra (1 patient). A history of urothelial carcinoma and repeated TUR procedures was observed in 12 patients. There were 2 pediatric patients aged 3 years; both had undergone previous urosurgery, because of megaureter in one and bladder exstrophy in the other. Other clinical antecedents included bladder diverticulum (1 patient), cystitis (1 patient) and nephrolithiasis (1 patient). Recurrence of the lesion was seen in two patients (once in one case and twice in the other). The median time to recurrence in these patients was 11 months (range, 2-20 months). Histologically, the lesions exhibited various morphological findings, with mixed (15 cases, 71.4%), pure tubular (3 cases, 14.3%), pure papillary (2 cases, 9.5%) and pure flat (1 case, 4.8%) growth patterns. Of the 15 cases with mixed patterns, 8 were tubulocystic and flat, 3 were tubular and flat, 2 were tubular, papillary and flat, 1 was tubulocystic, papillary and flat, and 1 was tubular and papillary. 
The flat pattern was observed in 15 cases (71.4%): in association with other patterns in 14 cases (mixed morphology) and purely in 1 case. Our findings suggest that the flat pattern is a frequent finding in nephrogenic adenomas. Notably, one case in this series showed superficial extension into the bladder muscularis propria. Histologically, nephrogenic adenoma may simulate a variety of malignancies. Awareness of the characteristic morphologic features of nephrogenic adenoma is needed to diagnose this lesion correctly. Copyright © 2017 Elsevier GmbH. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Tao; Wang, Anzhong; Wu, Qiang
We first derive the primordial power spectra, spectral indices and runnings of both scalar and tensor perturbations of a flat inflationary universe to second-order approximations of the slow-roll parameters, in the framework of loop quantum cosmology with the inverse-volume quantum corrections. This represents an extension of our previous work, in which the parameter σ was assumed to be an integer, where σ characterizes the quantum corrections and in general can take any value in the range σ ∈ (0, 6]. Restricting to the first-order approximations of the slow-roll parameters, we find corrections to the results obtained previously in the literature, and point out the causes of those errors. To the best of our knowledge, these represent the most accurate calculations of scalar and tensor perturbations given so far in the literature. Then, fitting the perturbations to the recently released Planck (2015) data, we obtain the most severe constraints for various values of σ. Using these constraints as our reference point, we discuss whether these quantum gravitational corrections can lead to measurable signatures in future cosmological observations. We show that, depending on the value of σ, the scale-dependent contributions to the relativistic inflationary spectra due to the inverse-volume corrections could be well within the range of detectability of the forthcoming generations of experiments, such as the Stage IV experiments.
NASA Technical Reports Server (NTRS)
2004-01-01
This is the left-eye version of the 3-D cylindrical-perspective mosaic showing the view south of the Martian crater dubbed 'Bonneville.' The image was taken by the navigation camera on the Mars Exploration Rover Spirit. The rover will travel toward the Columbia Hills, seen here at the upper left. The rock dubbed 'Mazatzal' and the hole the rover drilled into it can be seen at the lower left. The rover's position is referred to as 'Site 22, Position 32.' This image was geometrically corrected to make the horizon appear flat.
On the effect of boundary layer growth on the stability of compressible flows
NASA Technical Reports Server (NTRS)
El-Hady, N. M.
1981-01-01
The method of multiple scales is used to develop a formally correct approach, based on nonparallel linear stability theory, for examining the two- and three-dimensional stability of compressible boundary-layer flows. The method is applied to a supersonic flat-plate boundary layer at Mach 4.5. The theoretical growth rates are in good agreement with experimental results. The method is also applied to the infinite-span swept-wing transonic boundary layer with suction to evaluate the effect of the nonparallel flow on the development of crossflow disturbances.
Alignment algorithms and per-particle CTF correction for single particle cryo-electron tomography.
Galaz-Montoya, Jesús G; Hecksel, Corey W; Baldwin, Philip R; Wang, Eryu; Weaver, Scott C; Schmid, Michael F; Ludtke, Steven J; Chiu, Wah
2016-06-01
Single particle cryo-electron tomography (cryoSPT) extracts features from cryo-electron tomograms, followed by 3D classification, alignment and averaging to generate improved 3D density maps of such features. Robust methods to correct for the contrast transfer function (CTF) of the electron microscope are necessary for cryoSPT to reach its resolution potential. Many factors can make CTF correction for cryoSPT challenging, such as lack of eucentricity of the specimen stage, inherent low dose per image, specimen charging, beam-induced specimen motions, and defocus gradients resulting both from specimen tilting and from unpredictable ice thickness variations. Current CTF correction methods for cryoET make at least one of the following assumptions: that the defocus at the center of the image is the same across the images of a tiltseries, that the particles all lie at the same Z-height in the embedding ice, and/or that the specimen, the cryo-electron microscopy (cryoEM) grid and/or the carbon support are flat. These experimental conditions are not always met. We have developed a CTF correction algorithm for cryoSPT without making any of the aforementioned assumptions. We also introduce speed and accuracy improvements and a higher degree of automation to the subtomogram averaging algorithms available in EMAN2. Using motion-corrected images of isolated virus particles as a benchmark specimen, recorded with a DE20 direct detection camera, we show that our CTF correction and subtomogram alignment routines can yield subtomogram averages close to 4/5 Nyquist frequency of the detector under our experimental conditions. Copyright © 2016 Elsevier Inc. All rights reserved.
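The per-particle defocus idea described above — the nominal defocus at the tilt axis, adjusted by the tilt-induced defocus gradient and by each particle's depth in the ice — can be sketched as follows. This is a generic 1-D CTF under one common sign convention, not the EMAN2 implementation; all parameter values are illustrative:

```python
import numpy as np

def ctf_1d(s, defocus_um, volts=300e3, cs_mm=2.7, amp_contrast=0.1):
    """1-D contrast transfer function (one common convention; sign and
    phase conventions vary between packages).

    s : spatial frequency in 1/Angstrom
    """
    # relativistically corrected electron wavelength (Angstrom)
    lam = 12.2639 / np.sqrt(volts + 0.97845e-6 * volts**2)
    dz = defocus_um * 1e4            # um -> Angstrom (underfocus positive)
    cs = cs_mm * 1e7                 # mm -> Angstrom
    gamma = np.pi * lam * dz * s**2 - 0.5 * np.pi * cs * lam**3 * s**4
    return -np.sin(gamma + np.arcsin(amp_contrast))

def particle_defocus(nominal_um, x_nm, z_nm, tilt_deg):
    """Per-particle defocus: nominal defocus at the tilt axis, plus the
    gradient from tilting and the particle's Z-height in the ice."""
    dz_tilt = x_nm * np.tan(np.radians(tilt_deg))   # offset from tilt axis
    return nominal_um + (dz_tilt + z_nm) * 1e-3     # nm -> um
```

Each subtomogram's tilt images would then be corrected with `ctf_1d` evaluated at its own `particle_defocus`, rather than a single defocus per tilt image.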
Pixel-based CTE Correction of ACS/WFC: Modifications To The ACS Calibration Pipeline (CALACS)
NASA Astrophysics Data System (ADS)
Smith, Linda J.; Anderson, J.; Armstrong, A.; Avila, R.; Bedin, L.; Chiaberge, M.; Davis, M.; Ferguson, B.; Fruchter, A.; Golimowski, D.; Grogin, N.; Hack, W.; Lim, P. L.; Lucas, R.; Maybhate, A.; McMaster, M.; Ogaz, S.; Suchkov, A.; Ubeda, L.
2012-01-01
The Advanced Camera for Surveys (ACS) was installed on the Hubble Space Telescope (HST) nearly ten years ago. Over the last decade, continuous exposure to the harsh radiation environment has degraded the charge transfer efficiency (CTE) of the CCDs. The worsening CTE impacts the science that can be obtained by altering the photometric, astrometric and morphological characteristics of sources, particularly those farthest from the readout amplifiers. To ameliorate these effects, Anderson & Bedin (2010, PASP, 122, 1035) developed a pixel-based empirical approach to correcting ACS data by characterizing the CTE profiles of trails behind warm pixels in dark exposures. The success of this technique means that it is now possible to correct full-frame ACS/WFC images for CTE degradation in the standard data calibration and reduction pipeline CALACS. Over the past year, the ACS team at STScI has developed, refined and tested the new software. The details of this work are described in separate posters. The new code is more effective at low flux levels (< 50 electrons) than the original Anderson & Bedin code, and employs a more accurate time and temperature dependence for CTE. The new CALACS includes the automatic removal of low-level bias stripes (produced by the post-repair ACS electronics) and pixel-based CTE correction. In addition to the standard cosmic ray corrected, flat-fielded and drizzled data products (crj, flt and drz files) there are three new equivalent files (crc, flc and drc) which contain the CTE-corrected data products. The user community will be able to choose whether to use the standard or CTE-corrected products.
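The pixel-based approach rests on a forward model of charge trapping during readout, which the correction then inverts. A toy version of such a forward model (illustrative constants, not the Anderson & Bedin trail profiles):

```python
import numpy as np

def cte_trail(column, trap_fraction=0.02, release=0.5):
    """Toy charge-transfer-inefficiency model for one CCD column.

    As charge is clocked out, each pixel loses `trap_fraction` of its
    charge to traps; a fraction `release` of the trap content escapes
    into each following pixel, producing the characteristic trail.
    """
    out = np.zeros_like(column, dtype=float)
    trapped = 0.0
    for i, q in enumerate(column):
        captured = trap_fraction * q
        released = release * trapped      # trap content leaking back out
        out[i] = q - captured + released
        trapped = trapped - released + captured
    return out

# a single warm pixel develops a trail toward later rows
col = np.zeros(10)
col[2] = 1000.0
trailed = cte_trail(col)
```

A pipeline correction like the one described would iteratively solve for the input column whose forward-modeled readout matches the observed (trailed) data.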
Photometric Characterization of the Dark Energy Camera
Bernstein, G. M.; Abbott, T. M. C.; Armstrong, R.; ...
2018-04-02
We characterize the variation in photometric response of the Dark Energy Camera (DECam) across its 520 Mpix science array during 4 years of operation. These variations are measured using high signal-to-noise aperture photometry of >10^7 stellar images in thousands of exposures of a few selected fields, with the telescope dithered to move the sources around the array. A calibration procedure based on these results brings the rms variation in aperture magnitudes of bright stars on cloudless nights down to 2–3 mmag, with <1 mmag of correlated photometric errors for stars separated by ≥20''. On cloudless nights, any departures of the exposure zeropoints from a secant airmass law exceeding 1 mmag are plausibly attributable to spatial/temporal variations in aperture corrections. These variations can be inferred and corrected by measuring the fraction of stellar light in an annulus between 6'' and 8'' diameter. Key elements of this calibration include: correction of amplifier nonlinearities; distinguishing pixel-area variations and stray light from quantum-efficiency variations in the flat fields; field-dependent color corrections; and the use of an aperture-correction proxy. The DECam response pattern across the 2° field drifts over months by up to ±9 mmag, in a nearly wavelength-independent low-order pattern. Here, we find no fundamental barriers to pushing global photometric calibrations toward mmag accuracy.
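The secant airmass law referred to above says an exposure's zeropoint varies linearly with airmass X, m_cal = m_inst + zp - k·X, with k the extinction coefficient. A minimal sketch of fitting zp and k from standard-star observations (synthetic data; all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
true_zp, true_k = 25.0, 0.12         # zeropoint, extinction (mag/airmass)

# synthetic observations of standard stars over a range of airmasses,
# with a few-mmag measurement scatter
airmass = rng.uniform(1.0, 2.0, 200)
m_true = rng.uniform(14.0, 18.0, 200)
m_inst = m_true - true_zp + true_k * airmass + rng.normal(0, 0.002, 200)

# linear model: m_true - m_inst = zp - k * X, solved by least squares
A = np.column_stack([np.ones_like(airmass), -airmass])
zp_fit, k_fit = np.linalg.lstsq(A, m_true - m_inst, rcond=None)[0]
```

Exposures whose fitted zeropoints deviate from this law by more than ~1 mmag would, per the abstract, point to aperture-correction variations rather than transparency changes.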
Pilanci, Ozgur; Basaran, Karaca; Aydin, Hasan Utkan; Cortuk, Oguz; Kuvat, Samet Vasfi
2015-03-01
Correction of gynecomastia in males is a frequently performed aesthetic procedure. Various surgical options involving the removal of excess skin, fat, or glandular tissue have been described. However, poor aesthetic outcomes, including a flat or depressed pectoral area, limit the success of these techniques. The authors sought to determine patient satisfaction with the results of upper chest augmentation by direct intrapectoral fat injection in conjunction with surgical correction of gynecomastia. In this prospective study, 26 patients underwent liposuction and glandular excision, glandular excision alone, or Benelli-type skin excision. All patients received intramuscular fat injections in predetermined zones of the pectoralis major (PM). The mean volume of fat injected was 160 mL (range, 80-220 mL per breast) bilaterally. Patients were monitored for an average of 16 months (range, 8-24 months). Hematoma formation and consequent infraareolar depression was noted in 1 patient and was corrected by secondary lipografting. Mean patient satisfaction was rated as 8.4 on a scale of 1 (unsatisfactory) to 10 (highly satisfactory). Autologous intrapectoral fat injection performed simultaneously with gynecomastia correction can produce a masculine appearance. The long-term viability of fat cells injected into the PM needs to be determined. Level of Evidence: 4 (Therapeutic). © 2015 The American Society for Aesthetic Plastic Surgery, Inc. Reprints and permission: journals.permissions@oup.com.
Astrometric Calibration and Performance of the Dark Energy Camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernstein, G. M.; Armstrong, R.; Plazas, A. A.
2017-05-30
We characterize the variation in photometric response of the Dark Energy Camera (DECam) across its 520 Mpix science array during 4 years of operation. These variations are measured using high signal-to-noise aperture photometry of >10^7 stellar images in thousands of exposures of a few selected fields, with the telescope dithered to move the sources around the array. A calibration procedure based on these results brings the RMS variation in aperture magnitudes of bright stars on cloudless nights down to 2–3 mmag, with <1 mmag of correlated photometric errors for stars separated by ≥20". On cloudless nights, any departures of the exposure zeropoints from a secant airmass law exceeding 1 mmag are plausibly attributable to spatial/temporal variations in aperture corrections. These variations can be inferred and corrected by measuring the fraction of stellar light in an annulus between 6" and 8" diameter. Key elements of this calibration include: correction of amplifier nonlinearities; distinguishing pixel-area variations and stray light from quantum-efficiency variations in the flat fields; field-dependent color corrections; and the use of an aperture-correction proxy. The DECam response pattern across the 2-degree field drifts over months by up to ±7 mmag, in a nearly wavelength-independent low-order pattern. We find no fundamental barriers to pushing global photometric calibrations toward mmag accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sawicki, R.H.; Sweatt, W.
1987-03-03
An apparatus is described for correcting for astigmatism in a light beam reflected off of a light reflecting surface, comprising: (a) a first means defining a flat, rectangular light reflecting surface which is resiliently bendable, to a limited extent, into different concave and/or convex cylindrical curvatures about a particular axis. The first means is configured so that the light reflecting surface can be adjustably bent into the selected cylindrical curvature by applying a particular bending moment to the first means with respect to the surface, depending upon the curvature desired. The first means includes an integrally formed body member having a main plate-like segment including a front face defining the light reflecting surface and a pair of spaced-apart flange segments extending rearwardly of the main segment; and (b) second means acting on the first means for adjustably bending the light reflecting surface into a particular selected one of the different cylindrical curvatures, depending upon the astigmatism to be corrected for.
The performance of the MROI fast tip-tilt correction system
NASA Astrophysics Data System (ADS)
Young, John; Buscher, David; Fisher, Martin; Haniff, Christopher; Rea, Alexander; Seneta, Eugene; Sun, Xiaowei; Wilson, Donald; Farris, Allen; Olivares, Andres
2014-07-01
The fast tip-tilt (FTT) correction system for the Magdalena Ridge Observatory Interferometer (MROI) is being developed by the University of Cambridge. The design incorporates an EMCCD camera protected by a thermal enclosure, optical mounts with passive thermal compensation, and control software running under Xenomai real-time Linux. The complete FTT system is now undergoing laboratory testing prior to being installed on the first MROI unit telescope in the fall of 2014. We are following a twin-track approach to testing the closed-loop performance: tracking tip-tilt perturbations introduced by an actuated flat mirror in the laboratory, and undertaking end-to-end simulations that incorporate realistic higher-order atmospheric perturbations. We report test results that demonstrate (a) the high stability of the entire opto-mechanical system, realized with a completely passive design; and (b) the fast tip-tilt correction performance and limiting sensitivity. Our preliminary results in both areas are close to those needed to realize the ambitious stability and sensitivity goals of the MROI, which aims to match the performance of current natural guide star adaptive optics systems.
Inflation, symmetry, and B-modes
Hertzberg, Mark P.
2015-04-20
Here, we examine the role of using symmetry and effective field theory in inflationary model building. We describe the standard formulation of starting with an approximate shift symmetry for a scalar field, and then introducing corrections systematically in order to maintain control over the inflationary potential. We find that this leads to models in good agreement with recent data. On the other hand, there are attempts in the literature to deviate from this paradigm by invoking other symmetries and corrections. In particular: in a suite of recent papers, several authors have made the claim that standard Einstein gravity with a cosmological constant and a massless scalar carries conformal symmetry. They claim this conformal symmetry is hidden when the action is written in the Einstein frame, and so has not been fully appreciated in the literature. They further claim that such a theory carries another hidden symmetry; a global SO(1,1) symmetry. By deforming around the global SO(1,1) symmetry, they are able to produce a range of inflationary models with asymptotically flat potentials, whose flatness is claimed to be protected by these symmetries. These models tend to give rise to B-modes with small amplitude. Here we explain that standard Einstein gravity does not in fact possess conformal symmetry. Instead these authors are merely introducing a redundancy into the description, not an actual conformal symmetry. Furthermore, we explain that the only real (global) symmetry in these models is not at all hidden, but is completely manifest when expressed in the Einstein frame; it is in fact the shift symmetry of a scalar field. When analyzed systematically as an effective field theory, deformations do not generally produce asymptotically flat potentials and small B-modes as suggested in these recent papers. Instead, deforming around the shift symmetry systematically tends to produce models of inflation with B-modes of appreciable amplitude. 
Such simple models typically also produce the observed red spectral index, Gaussian fluctuations, etc. In short: simple models of inflation, organized by expanding around a shift symmetry, are in excellent agreement with recent data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rana, R; Bednarek, D; Rudin, S
2015-06-15
Purpose: Anti-scatter grid-line artifacts are more prominent for high-resolution x-ray detectors since the fraction of a pixel blocked by the grid septa is large. Direct logarithmic subtraction of the artifact pattern is limited by residual scattered radiation, and we investigate an iterative method for scatter correction. Methods: A stationary Smit-Röntgen anti-scatter grid was used with a high-resolution Dexela 1207 CMOS x-ray detector (75 µm pixel size) to image an artery block (Nuclear Associates, Model 76-705) placed within a uniform head-equivalent phantom as the scattering source. The image of the phantom was divided by a flat-field image obtained without scatter but with the grid to eliminate grid-line artifacts. Constant scatter values were subtracted from the phantom image before dividing by the averaged flat-field-with-grid image. The standard deviation of pixel values for a fixed region of the resultant images with different subtracted scatter values provided a measure of the remaining grid-line artifacts. Results: A plot of the standard deviation of image pixel values versus the subtracted scatter value shows that the image structure noise reaches a minimum before rising again as the scatter value is increased. This minimum corresponds to a minimization of the grid-line artifacts, as demonstrated in line profile plots obtained through each of the images perpendicular to the grid lines. Artifact-free images of the artery block were obtained with the optimal scatter value found by this iterative approach. Conclusion: Residual scatter subtraction can provide improved grid-line artifact elimination when using the flat-field-with-grid "subtraction" technique. The standard deviation of image pixel values can be used to determine the optimal scatter value to subtract in order to minimize grid-line artifacts with high-resolution x-ray imaging detectors.
This study was supported by NIH Grant R01EB002873 and an equipment grant from Toshiba Medical Systems Corp.
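The one-parameter search the abstract describes (subtract a trial constant scatter value, divide by the flat-field-with-grid image, and score the result by the standard deviation in a fixed region) can be sketched as follows. This is a minimal illustration, not the authors' code; the image size, ROI bounds, and candidate values are arbitrary assumptions.

```python
import numpy as np

def find_scatter_value(phantom, flat_with_grid, candidates):
    """Search for the constant scatter value whose subtraction, followed by
    division with the flat-field-with-grid image, minimizes the residual
    grid-line structure (measured as std dev in a fixed region)."""
    best_s, best_std = None, np.inf
    for s in candidates:
        corrected = (phantom - s) / flat_with_grid
        roi_std = corrected[64:192, 64:192].std()  # fixed region of interest
        if roi_std < best_std:
            best_s, best_std = s, roi_std
    return best_s, best_std
```

With a synthetic grid pattern and a known constant scatter, the search recovers the true scatter value, since the grid modulation divides out cleanly only when the correct constant has been subtracted first.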
NASA Technical Reports Server (NTRS)
Ustin, S. L.; Rock, B. N.
1985-01-01
Spectral characteristics of semi-arid plant communities were examined using 128-channel airborne imaging spectrometer (AIS) data acquired on October 30, 1984. Both field and AIS spectra of vegetation were relatively featureless and differed from substrate spectra primarily in albedo. Unvegetated sand dunes were examined to assess spectral variation resulting from topographic irregularity. Although shrub cover as low as 10% could be detected on relatively flat surfaces, such differences were obscured in more heterogeneous terrain. Sagebrush-covered fans which had been scarred by fire were studied to determine the effect of changes in plant density on reflectance. Despite noise in the atmospherically corrected spectra, these spectra provide better resolution of differences in plant density than spectra which are solar-corrected only. A high negative correlation was found between reflectance and plant cover in areas which had uniform substrates and vegetation types. A lower correlation was found where vegetation and substrates were more diverse.
NASA Technical Reports Server (NTRS)
1994-01-01
Omniview, a motionless, noiseless, exceptionally versatile camera, was developed for NASA as a receiving device for guiding space robots. The system can see in one direction and provide as many as four views simultaneously. Developed by Omniview, Inc. (formerly TRI) under a NASA Small Business Innovation Research (SBIR) grant, the system's image transformation electronics produce a real-time image from anywhere within a hemispherical field. Lens distortion is removed, and a corrected "flat" view appears on a monitor. Key elements are a high-resolution charge coupled device (CCD), image correction circuitry and a microcomputer for image processing. The system can be adapted to existing installations. Applications include security and surveillance, teleconferencing, imaging, virtual reality, broadcast video and military operations. Omniview technology is now called IPIX. The company was founded in 1986 as TeleRobotics International, became Omniview in 1995, and changed its name to Interactive Pictures Corporation in 1997.
Finite temperature corrections to tachyon mass in intersecting D-branes
NASA Astrophysics Data System (ADS)
Sethi, Varun; Chowdhury, Sudipto Paul; Sarkar, Swarnendu
2017-04-01
We continue the analysis of finite temperature corrections to the tachyon mass in intersecting branes that was initiated in [1]. In this paper we extend the computation to the case of intersecting D3 branes by considering a setup of two intersecting branes in a flat-space background. A holographic model dual to a BCS superconductor, consisting of intersecting D8 branes in a D4 brane background, was proposed in [2]. The background considered here is a simplified configuration of this dual model. We compute the one-loop tachyon amplitude in the Yang-Mills approximation and show that the result is finite. Analyzing the amplitudes further, we numerically compute the transition temperature at which the tachyon becomes massless. The analytic expressions for the one-loop amplitudes obtained here reduce to those for intersecting D1 branes obtained in [1], as well as those for intersecting D2 branes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa
Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision.
A superstring field theory for supergravity
NASA Astrophysics Data System (ADS)
Reid-Edwards, R. A.; Riccombeni, D. A.
2017-09-01
A covariant closed superstring field theory, equivalent to classical ten-dimensional Type II supergravity, is presented. The defining conformal field theory is the ambitwistor string worldsheet theory of Mason and Skinner. This theory is known to reproduce the scattering amplitudes of Cachazo, He and Yuan, in which the scattering equations play an important role, and the string field theory naturally incorporates these results. We investigate the operator formalism description of the ambitwistor string and propose an action for the string field theory of the bosonic and supersymmetric theories. The correct linearised gauge symmetries and spacetime actions are explicitly reproduced, and evidence is given that the action is correct to all orders. The focus is on the Neveu-Schwarz sector and the explicit description of tree-level perturbation theory about flat spacetime. Application of the string field theory to general supergravity backgrounds and the inclusion of the Ramond sector are briefly discussed.
Large-scale 3D galaxy correlation function and non-Gaussianity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raccanelli, Alvise; Doré, Olivier; Bertacca, Daniele
We investigate the properties of the 2-point galaxy correlation function at very large scales, including all geometric and local relativistic effects: wide-angle effects, redshift-space distortions, Doppler terms and Sachs-Wolfe-type terms in the gravitational potentials. The general three-dimensional correlation function has a nonzero dipole and octupole, in addition to the even multipoles of the flat-sky limit. We study how corrections due to primordial non-Gaussianity and General Relativity affect the multipolar expansion, and we show that they are of similar magnitude (when f_NL is small), so that a relativistic approach is needed. Furthermore, we look at how large-scale corrections depend on the model for the growth rate in the context of modified gravity, and we discuss how a modified growth can affect the non-Gaussian signal in the multipoles.
Calibration of AIS Data Using Ground-based Spectral Reflectance Measurements
NASA Technical Reports Server (NTRS)
Conel, J. E.
1985-01-01
Present methods of correcting airborne imaging spectrometer (AIS) data for instrumental and atmospheric effects include the flat- or curved-field correction and a deviation-from-the-average adjustment performed on a line-by-line basis throughout the image. Both methods eliminate the atmospheric absorptions, but remove the possibility of studying the atmosphere for its own sake, or of using the atmospheric information present as a possible basis for theoretical modeling. The method discussed here relies on use of ground-based measurements of the surface spectral reflectance in comparison with scanner data to fix in a least-squares sense parameters in a simplified model of the atmosphere on a wavelength-by-wavelength basis. The model parameters (for optically thin conditions) are interpretable in terms of optical depth and scattering phase function, and thus, in principle, provide an approximate description of the atmosphere as a homogeneous body intervening between the sensor and the ground.
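The calibration idea above (fix parameters of a simplified atmospheric model, in a least-squares sense, by comparing ground-measured surface reflectance with scanner data on a wavelength-by-wavelength basis) can be sketched with a simple linear radiance model. This is a hypothetical illustration, not the AIS model itself: the two fitted coefficients stand in for the multiplicative (transmittance/solar) term and the additive (path radiance) term; the optically-thin parameterization in the report is more detailed.

```python
import numpy as np

def fit_atmosphere(ground_reflectance, scanner_radiance):
    """Per-band least-squares fit of scanner radiance L = a*rho + b against
    ground-measured reflectance rho. Here 'a' absorbs the two-way
    transmittance and solar terms, and 'b' the additive path radiance.
    Inputs have shape (n_targets, n_bands); names are illustrative."""
    n_targets, n_bands = ground_reflectance.shape
    a, b = np.empty(n_bands), np.empty(n_bands)
    for k in range(n_bands):
        # design matrix: [rho, 1] for each calibration target in this band
        A = np.column_stack([ground_reflectance[:, k], np.ones(n_targets)])
        (a[k], b[k]), *_ = np.linalg.lstsq(A, scanner_radiance[:, k], rcond=None)
    return a, b
```

Given radiances over several ground targets of known reflectance, the per-band coefficients are then interpretable in terms of optical depth and path-scattered radiance, as the abstract describes.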
Imaging characteristics of the Extreme Ultraviolet Explorer microchannel plate detectors
NASA Technical Reports Server (NTRS)
Vallerga, J. V.; Kaplan, G. C.; Siegmund, O. H. W.; Lampton, M.; Malina, R. F.
1989-01-01
The Extreme Ultraviolet Explorer (EUVE) satellite will conduct an all-sky survey over the wavelength range from 70 A to 760 A using four grazing-incidence telescopes and seven microchannel-plate (MCP) detectors. The imaging photon-counting MCP detectors have active areas of 19.6 cm2. Photon arrival position is determined using a wedge-and-strip anode and associated pulse-encoding electronics. The imaging characteristics of the EUVE flight detectors are presented including image distortion, flat-field response, and spatial differential nonlinearity. Also included is a detailed discussion of image distortions due to the detector mechanical assembly, the wedge-and-strip anode, and the electronics. Model predictions of these distortions are compared to preflight calibration images which show distortions less than 1.3 percent rms of the detector diameter of 50 mm before correction. The plans for correcting these residual detector image distortions to less than 0.1 percent rms are also presented.
Cosmological perturbations in teleparallel Loop Quantum Cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haro, Jaime, E-mail: jaime.haro@upc.edu
2013-11-01
Cosmological perturbations in Loop Quantum Cosmology (LQC) are usually studied incorporating either holonomy corrections, where the Ashtekar connection is replaced by a suitable sine function in order to have a well-defined quantum analogue, or inverse-volume corrections coming from the eigenvalues of the inverse-volume operator. In this paper we develop an alternative approach to calculate cosmological perturbations in LQC, based on the fact that holonomy-corrected LQC in the flat Friedmann-Lemaître-Robertson-Walker (FLRW) geometry can also be obtained as a particular case of teleparallel F(T) gravity (teleparallel LQC). The main idea of our approach is to mix the simple bounce provided by holonomy corrections in LQC with the non-singular perturbation equations given by F(T) gravity, in order to obtain a matter bounce scenario as a viable alternative to slow-roll inflation. In our study, we have obtained a scale-invariant power spectrum of cosmological perturbations. However, the ratio of tensor to scalar perturbations is of order 1, which does not agree with current observations. For this reason, we suggest a model where a transition from matter domination to a quasi de Sitter phase is produced in order to enhance the scalar power spectrum.
Su, Shonglun; Mo, Zhongjun; Guo, Junchao
2017-01-01
Flat foot is one of the most common deformities in the young population, seriously affecting weight support and daily exercise. However, quantitative data to guide the material selection and shape design of personalized orthopedic insoles are lacking. This study evaluated the biomechanical effects of the material hardness and support height of a personalized orthopedic insole on foot tissues, by in vivo experiment and finite element modeling. The correction of arch height increased with material hardness and support height. The peak plantar pressure increased with material hardness, and the values when wearing insoles with a hardness of 40° were apparently higher than in the barefoot condition. Harder insole material resulted in higher joint and ligament stress than softer material. In the calcaneocuboid joint, the stress increased with the arch height of the insoles. The material hardness did not apparently affect the stress in the ankle joints, but the support height of the insole did. In general, insole material and support design positively affect the correction achieved by an orthopedic insole, but can also produce unreasonable stress in the joints and ligaments. The design should therefore balance improved correction against reduced stress in foot tissues. PMID:29065655
Development of hybrid fluid jet/float polishing process
NASA Astrophysics Data System (ADS)
Beaucamp, Anthony T. H.; Namba, Yoshiharu; Freeman, Richard R.
2013-09-01
On one hand, the "float polishing" process consists of a tin lap having many concentric grooves, cut from a flat by single-point diamond turning. This lap is rotated above a hydrostatic bearing spindle of high rigidity, damping and rotational accuracy. The optical surface thus floats above a thin layer of abrasive particles. But whilst surface texture can be smoothed to ~0.1 nm rms (as measured by atomic force microscopy), this process can only be used on flat surfaces. On the other hand, the CNC "fluid jet polishing" process consists of pumping a mixture of water and abrasive particles to a converging nozzle, thus generating a polishing spot that can be moved along a tool path with tight track spacing. But whilst tool path feed can be moderated to ultra-precisely correct form error on freeform optical surfaces, surface finish improvement is generally limited to ~1.5 nm rms (with fine abrasives). This paper reports on the development of a novel finishing method that combines the advantages of "fluid jet polishing" (i.e. freeform corrective capability) with those of "float polishing" (i.e. super-smooth surface finish of 0.1 nm rms or less). To arrive at this new "hybrid" method, computational fluid dynamic modeling of both processes in COMSOL is being used to characterize abrasion conditions and adapt the process parameters of experimental fluid jet polishing equipment, including: (1) geometrical shape of the nozzle, (2) position relative to the surface, and (3) control of inlet pressure. This new process is aimed at finishing of next-generation X-ray / gamma-ray focusing optics.
Prell, D; Kalender, W A; Kyriakou, Y
2010-12-01
The purpose of this study was to develop, implement and evaluate a dedicated metal artefact reduction (MAR) method for flat-detector CT (FDCT). The algorithm uses the multidimensional raw data space to calculate surrogate attenuation values for the original metal traces in the raw data domain. The metal traces are detected automatically by a three-dimensional, threshold-based segmentation algorithm in an initial reconstructed image volume, based on twofold histogram information for calculating appropriate metal thresholds. These thresholds are combined with constrained morphological operations in the projection domain. A subsequent reconstruction of the modified raw data yields an artefact-reduced image volume that is further processed by a combining procedure that reinserts the missing metal information. For image quality assessment, measurements on semi-anthropomorphic phantoms containing metallic inserts were evaluated in terms of CT value accuracy, image noise and spatial resolution before and after correction. Measurements of the same phantoms without prostheses were used as ground truth for comparison. Cadaver measurements were performed on complex and realistic cases and to determine the influences of our correction method on the tissue surrounding the prostheses. The results showed a significant reduction of metal-induced streak artefacts (CT value differences were reduced to below 22 HU and image noise reduction of up to 200%). The cadaver measurements showed excellent results for imaging areas close to the implant and exceptional artefact suppression in these areas. Furthermore, measurements in the knee and spine regions confirmed the superiority of our method to standard one-dimensional, linear interpolation.
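As a baseline for the surrogate-value step described above (replacing detected metal traces in the raw data with interpolated attenuation values), the standard one-dimensional linear interpolation that the authors compare their multidimensional method against can be sketched as follows. Names and shapes are illustrative; the paper's own method operates in the multidimensional raw-data space and is reported as superior to this simple approach.

```python
import numpy as np

def inpaint_metal_trace(sinogram_row, metal_mask):
    """Replace detector samples flagged as metal with linear interpolation
    from the neighbouring non-metal samples: the simple 1-D surrogate
    scheme used as a comparison baseline."""
    x = np.arange(sinogram_row.size)
    good = ~metal_mask
    out = sinogram_row.copy()
    # interpolate across the masked (metal) samples from the good ones
    out[metal_mask] = np.interp(x[metal_mask], x[good], sinogram_row[good])
    return out
```

Applied row by row over the raw data, this removes the high-attenuation metal trace before reconstruction; the reconstructed metal information is then reinserted afterwards, as the abstract describes.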
Deep resistivity structure of Yucca Flat, Nevada Test Site, Nevada
Asch, Theodore H.; Rodriguez, Brian D.; Sampson, Jay A.; Wallin, Erin L.; Williams, Jackie M.
2006-01-01
The Department of Energy (DOE) and the National Nuclear Security Administration (NNSA) at their Nevada Site Office are addressing groundwater contamination resulting from historical underground nuclear testing through the Environmental Management program and, in particular, the Underground Test Area project. One issue of concern is the nature of the somewhat poorly constrained pre-Tertiary geology and its effects on groundwater flow in the area adjacent to a nuclear test. Groundwater modelers would like to know more about the hydrostratigraphy and geologic structure to support a hydrostratigraphic framework model that is under development for the Yucca Flat Corrective Action Unit (CAU). During 2003, the U.S. Geological Survey, supported by the DOE and NNSA-NSO, collected and processed data from 51 magnetotelluric (MT) and audio-magnetotelluric (AMT) stations at the Nevada Test Site in and near Yucca Flat to assist in characterizing the pre-Tertiary geology in that area. The primary purpose was to refine the character, thickness, and lateral extent of pre-Tertiary confining units. In particular, a major goal has been to define the upper clastic confining unit (late Devonian to Mississippian-age siliciclastic rocks assigned to the Eleana Formation and Chainman Shale) in the Yucca Flat area. The MT and AMT data have been released in separate USGS Open-File Reports. The Nevada Test Site magnetotelluric data interpretation presented in this report includes the results of detailed two-dimensional (2-D) resistivity modeling for each profile (including alternative interpretations) and gross inferences on the three-dimensional (3-D) character of the geology beneath each station. The character, thickness, and lateral extent of the Chainman Shale and Eleana Formation that comprise the Upper Clastic Confining Unit are generally well determined in the upper 5 km. Inferences can be made regarding the presence of the Lower Clastic Confining Unit at depths below 5 km.
Large fault structures such as the CP Thrust fault, the Carpetbag fault, and the Yucca fault that cross Yucca Flat are also discernible, as are other smaller faults. The subsurface electrical resistivity distribution and inferred geologic structures determined by this investigation should help constrain the hydrostratigraphic framework model that is under development.
SU-C-304-05: Use of Local Noise Power Spectrum and Wavelets in Comprehensive EPID Quality Assurance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, S; Gopal, A; Yan, G
2015-06-15
Purpose: As EPIDs are increasingly used for IMRT QA and real-time treatment verification, comprehensive quality assurance (QA) of EPIDs becomes critical. Current QA with phantoms such as the Las Vegas and PIPSpro™ can fail in the early detection of EPID artifacts. Beyond image quality assessment, we propose a quantitative methodology using the local noise power spectrum (NPS) to characterize image noise and the wavelet transform to identify bad pixels and inter-subpanel flat-fielding artifacts. Methods: A total of 93 image sets, including bar-pattern images and open-exposure images, were collected from four iViewGT a-Si EPID systems over three years. Quantitative metrics such as modulation transfer function (MTF), NPS and detective quantum efficiency (DQE) were computed for each image set. The local 2D NPS was calculated for each subpanel. A 1D NPS was obtained by radially averaging the 2D NPS and fitted to a power-law function. The r-square and slope of the linear regression analysis were used for panel performance assessment. Haar wavelet transformation was employed to identify pixel defects and non-uniform gain correction across subpanels. Results: Overall image quality was assessed with DQE based on empirically derived area-under-curve (AUC) thresholds. Using linear regression analysis of the 1D NPS, panels with acceptable flat fielding were indicated by r-square between 0.8 and 1 and slopes of −0.4 to −0.7. However, for panels requiring flat-fielding recalibration, r-square values less than 0.8 and slopes from +0.2 to −0.4 were observed. The wavelet transform successfully identified pixel defects and inter-subpanel flat-fielding artifacts. Standard QA with the Las Vegas and PIPSpro phantoms failed to detect these artifacts. Conclusion: The proposed QA methodology is promising for the early detection of imaging and dosimetric artifacts of EPIDs.
Local NPS can accurately characterize the noise level within each subpanel, while the wavelet transform can detect bad pixels and inter-subpanel flat-fielding artifacts.
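The 1D-NPS regression described in this abstract (radially average the 2D NPS of a subpanel, fit a power law in log-log space, and threshold on the slope and r-square) can be sketched as follows. Tile sizes, normalization, and frequency ranges are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def nps_power_law(subpanel):
    """Radially averaged noise power spectrum of a flat-field subpanel,
    fitted to a power law NPS(f) ~ f^slope in log-log space. Returns
    (slope, r_squared); thresholds on these flag flat-fielding problems."""
    tile = subpanel - subpanel.mean()
    nps2d = np.abs(np.fft.fftshift(np.fft.fft2(tile))) ** 2 / tile.size
    ny, nx = nps2d.shape
    yy, xx = np.indices(nps2d.shape)
    r = np.hypot(yy - ny // 2, xx - nx // 2).astype(int)
    counts = np.bincount(r.ravel())
    radial = np.bincount(r.ravel(), nps2d.ravel()) / np.maximum(counts, 1)
    f = np.arange(1, min(ny, nx) // 2)  # skip DC, stay below Nyquist
    logf, logp = np.log(f), np.log(radial[f])
    slope, intercept = np.polyfit(logf, logp, 1)
    fit = slope * logf + intercept
    r2 = 1 - ((logp - fit) ** 2).sum() / ((logp - logp.mean()) ** 2).sum()
    return slope, r2
```

For pure white noise the radial NPS is flat (slope near zero and low r-square, i.e. no power-law trend); correlated noise or flat-fielding residue pushes the slope and r-square into the regimes the abstract reports.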
NASA Astrophysics Data System (ADS)
Ekberg, Peter; Stiblert, Lars; Mattsson, Lars
2014-05-01
High-quality photomasks are a prerequisite for the production of flat-panel TVs, tablets and other kinds of high-resolution displays. During the past years, resolution demands have accelerated; today the high-definition standard HD, 1920 × 1080 pixels, is well established, and the next-generation so-called ultra-high-definition (UHD or 4K) display is already entering the market. Highly advanced mask writers are used to produce the photomasks needed for the production of such displays. The dimensional tolerance in X and Y on absolute pattern placement on these photomasks, with sizes of square meters, has been in the range of 200-300 nm (3σ), but is now on the way to be <150 nm (3σ). To verify these photomasks, 2D ultra-precision coordinate measurement machines are used with even tighter tolerance requirements. The metrology tool MMS15000 is today the world-standard tool used for the verification of large-area photomasks. This paper will present a method called Z-correction that has been developed for the purpose of improving the absolute X, Y placement accuracy of features on the photomask in the writing process. However, Z-correction is also a prerequisite for achieving X and Y uncertainty levels <90 nm (3σ) in the self-calibration process of the MMS15000 stage area of 1.4 × 1.5 m². For uncertainty specifications below 200 nm (3σ) over such a large area, the calibration object used, here an 8-16 mm thick quartz plate approximately one square meter in size, cannot be treated as a rigid body. The reason for this is that the absolute shape of the plate will be affected by gravity and will therefore not be the same at different places on the measurement machine stage when it is used in the self-calibration process. This mechanical deformation will stretch or compress the top surface (i.e. the image side) of the plate where the pattern resides, and therefore spatially deform the mask pattern in the X- and Y-directions.
Errors due to this deformation can easily be several hundred nanometers. When Z-correction is used in the writer, it is also possible to relax the flatness demand of the photomask backside, leading to reduced manufacturing costs of the plates.
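The surface stretching described above can be quantified under a standard thin-plate bending assumption: a surface at distance t/2 from the neutral plane of a plate of thickness t is displaced in-plane by u ≈ -(t/2)∇w, where w is the gravity-induced deflection. The sketch below is a minimal illustration of that relation, not the actual Z-correction implemented in the writer, which is more involved.

```python
import numpy as np

def pattern_shift(height_map, thickness, pixel_pitch):
    """In-plane displacement of the patterned top surface caused by plate
    bending, under the thin-plate assumption that the pattern sits t/2
    above the neutral plane: u = -(t/2) * grad(w). All lengths in metres;
    height_map samples the deflection w on a regular grid."""
    dz_dy, dz_dx = np.gradient(height_map, pixel_pitch)
    return -0.5 * thickness * dz_dx, -0.5 * thickness * dz_dy
```

For a plate about 10 mm thick with a few micrometres of gravity sag over tens of millimetres, this relation already yields in-plane pattern shifts of hundreds of nanometres, consistent with the error magnitude quoted above.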
Processing Sentinel-2 data with ATCOR
NASA Astrophysics Data System (ADS)
Pflug, Bringfried; Makarau, Aliaksei; Richter, Rudolf
2016-04-01
Atmospheric correction of satellite images is necessary for many applications of remote sensing. Among them are applications for agriculture, forestry, land cover and land cover change, urban mapping, emergency and inland water. ATCOR is a widely used atmospheric correction tool which can process data of many optical satellite sensors, for instance Landsat, Sentinel-2, SPOT and RapidEye. ATCOR includes a terrain and adjacency correction of satellite images and several special algorithms like haze detection, haze correction, cirrus correction, de-shadowing and empirical methods for BRDF correction. The atmospheric correction tool ATCOR starts with an estimation of the vertical column Aerosol Optical Thickness (AOT550) at 550 nm. The mean uncertainty of the ATCOR-AOT550-estimation was estimated using Landsat and RapidEye data by direct comparison with sunphotometer data as a reference. For Landsat and RapidEye the uncertainty is ΔAOT550nm ≈ 0.03±0.02 for cloudless conditions with a cloud+haze fraction below 1%. Inclusion of cloudy and hazy satellite images into the analysis results in mean ΔAOT550nm ≈ 0.04±0.03 for both RapidEye and Landsat imagery. About 1/3 of the samples perform with the AOT uncertainty better than 0.02 and about 2/3 perform with AOT uncertainty better than 0.05. An accuracy of the retrieved surface reflectance of ±2% (for reflectance <10%) and ±4% reflectance units (for reflectance > 40%) can be achieved for flat terrain, and avoiding the specular and backscattering regions. ATCOR also supports the processing of Sentinel-2 data. First results of processing S2 data and a comparison with AERONET AOT values will be presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patrick Matthews
2012-10-01
CAU 104 comprises the following corrective action sites (CASs):
• 07-23-03, Atmospheric Test Site T-7C
• 07-23-04, Atmospheric Test Site T7-1
• 07-23-05, Atmospheric Test Site
• 07-23-06, Atmospheric Test Site T7-5a
• 07-23-07, Atmospheric Test Site - Dog (T-S)
• 07-23-08, Atmospheric Test Site - Baker (T-S)
• 07-23-09, Atmospheric Test Site - Charlie (T-S)
• 07-23-10, Atmospheric Test Site - Dixie
• 07-23-11, Atmospheric Test Site - Dixie
• 07-23-12, Atmospheric Test Site - Charlie (Bus)
• 07-23-13, Atmospheric Test Site - Baker (Buster)
• 07-23-14, Atmospheric Test Site - Ruth
• 07-23-15, Atmospheric Test Site T7-4
• 07-23-16, Atmospheric Test Site B7-b
• 07-23-17, Atmospheric Test Site - Climax
These 15 CASs include releases from 30 atmospheric tests conducted in the approximately 1 square mile of CAU 104. Because releases associated with the CASs included in this CAU overlap and are not separate and distinguishable, these CASs are addressed jointly at the CAU level. The purpose of this CADD/CAP is to evaluate potential corrective action alternatives (CAAs), provide the rationale for the selection of recommended CAAs, and provide the plan for implementation of the recommended CAA for CAU 104. Corrective action investigation (CAI) activities were performed from October 4, 2011, through May 3, 2012, as set forth in the CAU 104 Corrective Action Investigation Plan.
NASA Astrophysics Data System (ADS)
Dang, H.; Stayman, J. W.; Sisniega, A.; Xu, J.; Zbijewski, W.; Yorkston, J.; Aygun, N.; Koliatsos, V.; Siewerdsen, J. H.
2015-03-01
Traumatic brain injury (TBI) is a major cause of death and disability. The current front-line imaging modality for TBI detection is CT, which reliably detects intracranial hemorrhage (fresh blood contrast 30-50 HU, size down to 1 mm) in non-contrast-enhanced exams. Compared to CT, flat-panel detector (FPD) cone-beam CT (CBCT) systems offer lower cost, greater portability, and smaller footprint suitable for point-of-care deployment. We are developing FPD-CBCT to facilitate TBI detection at the point-of-care such as in emergent, ambulance, sports, and military applications. However, current FPD-CBCT systems generally face challenges in low-contrast, soft-tissue imaging. Model-based reconstruction can improve image quality in soft-tissue imaging compared to conventional filtered back-projection (FBP) by leveraging high-fidelity forward model and sophisticated regularization. In FPD-CBCT TBI imaging, measurement noise characteristics undergo substantial change following artifact correction, resulting in non-negligible noise amplification. In this work, we extend the penalized weighted least-squares (PWLS) image reconstruction to include the two dominant artifact corrections (scatter and beam hardening) in FPD-CBCT TBI imaging by correctly modeling the variance change following each correction. Experiments were performed on a CBCT test-bench using an anthropomorphic phantom emulating intra-parenchymal hemorrhage in acute TBI, and the proposed method demonstrated an improvement in blood-brain contrast-to-noise ratio (CNR = 14.2) compared to FBP (CNR = 9.6) and PWLS using conventional weights (CNR = 11.6) at fixed spatial resolution (1 mm edge-spread width at the target contrast). The results support the hypothesis that FPD-CBCT can fulfill the image quality requirements for reliable TBI detection, using high-fidelity artifact correction and statistical reconstruction with accurate post-artifact-correction noise models.
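The noise-model update underlying the PWLS weighting described above can be sketched for the scatter-correction step alone. Assuming Poisson-like raw data (variance ≈ mean I) and a subtracted scatter estimate s, error propagation through the log line integral l = log(I0/(I - s)) gives var(l) ≈ I/(I - s)², hence weights (I - s)²/I instead of the conventional w = I. This is a generic illustration of the variance-change idea, not the authors' full model, which also accounts for beam hardening.

```python
import numpy as np

def pwls_weights(raw_counts, scatter_estimate):
    """PWLS weights after scatter subtraction. With Poisson-like raw data
    (var ~ I) and corrected signal I - s, the log line integral has
    var ~ I / (I - s)^2, so w = 1/var = (I - s)^2 / I. Conventional
    weighting ignores the correction and uses w = I."""
    corrected = np.clip(raw_counts - scatter_estimate, 1e-6, None)  # avoid log-domain blowup
    return corrected**2 / raw_counts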
Stabilizing all geometric moduli in heterotic Calabi-Yau vacua
Anderson, Lara B.; Gray, James; Lukas, Andre; ...
2011-05-27
We propose a scenario to stabilize all geometric moduli - that is, the complex structure, Kähler moduli and the dilaton - in smooth heterotic Calabi-Yau compactifications without Neveu-Schwarz three-form flux. This is accomplished using the gauge bundle required in any heterotic compactification, whose perturbative effects on the moduli are combined with non-perturbative corrections. We argue that, for appropriate gauge bundles, all complex structure and a large number of other moduli can be perturbatively stabilized - in the most restrictive case, leaving only one combination of Kähler moduli and the dilaton as a flat direction. At this stage, the remaining moduli space consists of Minkowski vacua. That is, the perturbative superpotential vanishes in the vacuum without the necessity to fine-tune flux. Finally, we incorporate non-perturbative effects such as gaugino condensation and/or instantons. These are strongly constrained by the anomalous U(1) symmetries which arise from the required bundle constructions. We present a specific example, with a consistent choice of non-perturbative effects, where all remaining flat directions are stabilized in an AdS vacuum.
Borges, Cláudia dos Santos; Fernandes, Luciane Fernanda Rodrigues Martinho; Bertoncello, Dernival
2013-01-01
OBJECTIVE: To evaluate the probable relationship among plantar arch, lumbar curvature, and low back pain. METHODS: Fifteen healthy women were assessed, taking into account personal data and anthropometric measurements, photopodoscopic evaluation of the plantar arch, and biophotogrammetric postural analysis (both using the SAPO software), as well as evaluation of lumbar pain using a Visual Analog Scale (VAS). The average age of the participants was 30.45 (±6.25) years. RESULTS: Of the feet evaluated, six individuals had flat feet, five had high arch, and four had normal feet. All reported algic syndrome in the lumbar spine, with the highest VAS values for the volunteers with high arch. Correlation was observed between the plantar arch and the angle of the lumbar spine (r = -0.71, p = 0.004). CONCLUSION: High arch was correlated with more intense algic syndrome, while there was a moderate positive correlation between flat foot and increased lumbar curvature, and between high arch and lumbar correction. Level of Evidence IV. Case Series. PMID:24453656
Interferometric surface mapping with variable sensitivity.
Jaerisch, W; Makosch, G
1978-03-01
In the photolithographic process presently employed for the production of integrated circuits, sets of correlated masks are used for exposing the photoresist on silicon wafers. The various sets of masks, which are printed in different printing tools, must be aligned correctly with respect to the structures produced on the wafer in previous process steps. Even when perfect alignment is assumed, displacements and distortions of the printed wafer patterns occur. They are caused by imperfections of the printing tools and/or wafer deformations resulting from high-temperature processes. Since the electrical properties of the final integrated circuits, and therefore the manufacturing yield, depend to a great extent on the precision with which such patterns are superimposed, simple and fast overlay and flatness measurements are very important in IC manufacturing. A simple optical interference method for flatness measurements is described which can be used under manufacturing conditions. This method permits testing of surface height variations at nearly grazing light incidence in the absence of a physical reference plane. It can be applied to polished as well as rough surfaces.
Beam dynamics and electromagnetic studies of a 3 MeV, 325 MHz radio frequency quadrupole accelerator
NASA Astrophysics Data System (ADS)
Gaur, Rahul; Kumar, Vinit
2018-05-01
We present the beam dynamics and electromagnetic studies of a 3 MeV, 325 MHz H- radio frequency quadrupole (RFQ) accelerator for the proposed Indian Spallation Neutron Source project. We have followed a design approach in which the emittance growth and the losses are minimized by keeping the tune depression ratio larger than 0.5. The transverse cross-section of the RFQ is designed at a frequency lower than the operating frequency, so that the tuners have their nominal position inside the RFQ cavity. This has resulted in an improvement of the tuning range and of the efficiency of the tuners in correcting field errors in the RFQ. The vane-tip modulations have been modelled in the CST-MWS code, and their effect on the field flatness and the resonant frequency has been studied. The deterioration in the field flatness due to vane-tip modulations is reduced to an acceptable level with the help of the tuners. Details of the error study and the higher-order-mode study, along with a mode stabilization technique, are also described in the paper.
Gating geometry studies of thin-walled 17-4PH investment castings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maguire, M.C.; Zanner, F.J.
1992-11-01
The ability to design gating systems that reliably feed and support investment castings is often the result of "cut-and-try" methodology. Factors such as hot tearing, porosity, cold shuts, misruns, and shrink are defects often corrected by several empirical gating design iterations. Sandia National Laboratories is developing rules that aid in removing the uncertainty involved in the design of gating systems for investment castings. In this work, gating geometries used for filling of thin-walled investment cast 17-4PH stainless steel flat plates were investigated. A full factorial experiment evaluating the influence of metal pour temperature, mold preheat temperature, and mold channel thickness was conducted for orientations that filled a horizontal flat plate from the edge. A single wedge gate geometry was used for the edge-gated configuration. Thermocouples placed along the top of the mold recorded metal front temperatures, and a real-time x-ray imaging system tracked the fluid flow behavior during filling of the casting. Data from these experiments were used to determine the terminal fill volumes and terminal fill times for each gate design.
Beam uniformity of flat top lasers
NASA Astrophysics Data System (ADS)
Chang, Chao; Cramer, Larry; Danielson, Don; Norby, James
2015-03-01
The beams output by many standard commercial lasers are multi-mode, with each mode having a different shape and width. They show an overall non-homogeneous energy distribution across the spot. There may be satellite structures, halos, and other deviations from beam uniformity. However, many scientific, industrial, and medical applications require a flat-top spatial energy distribution, high uniformity in the plateau region, and the complete absence of hot spots. Reliable standard methods for the evaluation of beam quality are therefore of great importance, both for correct characterization of the laser for its intended application and for tight quality control in laser manufacturing. The International Organization for Standardization (ISO) has published standard procedures and definitions for this purpose, but these have not been widely adopted by commercial laser manufacturers because they can be unreliable: an unrepresentative single-pixel value can seriously distort the result. We propose a metric of beam uniformity, a way of visualizing beam profiles, procedures to automatically detect hot spots and beam structures, and application examples from our high-energy laser production.
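As an illustration of the kind of metric and hot-spot screening this abstract describes, the sketch below computes a plateau uniformity index and flags outliers against the plateau median rather than the raw single-pixel maximum (the very failure mode criticized above). The plateau definition, threshold fractions, and factor are illustrative assumptions, not the ISO or the authors' definitions.

```python
import numpy as np

def plateau_uniformity(profile, plateau_frac=0.5):
    """Peak-to-valley variation over the plateau, taken here (as a
    simplifying assumption) to be all pixels >= plateau_frac * max."""
    plateau = profile[profile >= plateau_frac * profile.max()]
    return (plateau.max() - plateau.min()) / (plateau.max() + plateau.min())

def hot_spots(profile, plateau_frac=0.5, factor=1.2):
    """Flag plateau pixels exceeding `factor` times the plateau median,
    so a single unrepresentative pixel cannot silently set the reference."""
    mask = profile >= plateau_frac * profile.max()
    ref = np.median(profile[mask])
    return np.flatnonzero(mask & (profile > factor * ref))

# Synthetic flat-top profile with a single injected hot pixel:
x = np.linspace(-1, 1, 101)
beam = np.where(np.abs(x) < 0.6, 1.0, 0.0)
beam[50] = 1.5                      # hot spot at the center pixel
```

For this profile `plateau_uniformity(beam)` returns 0.2, driven entirely by the hot pixel, while `hot_spots(beam)` isolates index 50 as the culprit.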
[History and Technique of Epidural Anaesthesia].
Waurick, Katrin; Waurick, René
2015-07-01
In 1901, the first epidural anaesthesia via a caudal approach was independently described by two Frenchmen, Jean-Athanase Sicard and Fernand Cathelin. The Spanish military surgeon Fidel Pagés Miravé completed the lumbar approach successfully in 1921. The two possibilities for identification of the epidural space, the "loss of resistance" technique and the technique of the "hanging drop", were developed at the same time by Achille Mario Dogliotti, an Italian, and Alberto Gutierrez, an Argentinean physician. In 1956, John J. Bonica published the paramedian approach to the epidural space. As early as 1931, Eugene Aburel, a Romanian obstetrician, injected local anaesthetics via a silk catheter to perform lumbar obstetric epidural analgesia. In 1949, the first successful continuous lumbar epidural anaesthesia was reported by Manuel Martinez Curbelo, a Cuban. Epidural anaesthesia can be performed in the sitting or lateral position in all segments of the spinal column via the median or paramedian approach. Different off-axis angles pose the challenge in learning the technique. © Georg Thieme Verlag Stuttgart · New York.
NASA Technical Reports Server (NTRS)
Werdell, P. Jeremy; Franz, Bryan A.; Bailey, Sean W.
2010-01-01
The NASA Moderate Resolution Imaging Spectroradiometer onboard the Aqua platform (MODIS-Aqua) provides a viable data stream for operational water quality monitoring of Chesapeake Bay. Marine geophysical products from MODIS-Aqua depend on the efficacy of the atmospheric correction process, which can be problematic in coastal environments. The operational atmospheric correction algorithm for MODIS-Aqua requires an assumption of negligible near-infrared water-leaving radiance, nL(sub w)(NIR). This assumption progressively degrades with increasing turbidity and, as such, methods exist to account for non-negligible nL(sub w)(NIR) within the atmospheric correction process or to use alternate radiometric bands where the assumption is satisfied, such as those positioned within the shortwave infrared (SWIR) region of the spectrum. We evaluated a decade-long time-series of nL(sub w)(lambda) from MODIS-Aqua in Chesapeake Bay derived using NIR and SWIR bands for atmospheric correction. Low signal-to-noise ratios (SNR) for the SWIR bands of MODIS-Aqua added noise errors to the derived radiances, which produced broad, flat frequency distributions of nL(sub w)(lambda) relative to those produced using the NIR bands. The SWIR approach produced an increased number of negative nL(sub w)(lambda) and decreased sample size relative to the NIR approach. Revised vicarious calibration and regional tuning of the scheme to switch between the NIR and SWIR approaches may improve retrievals in Chesapeake Bay; however, poor SNR values for the MODIS-Aqua SWIR bands remain the primary deficiency of the SWIR-based atmospheric correction approach.
Complete super-sample lensing covariance in the response approach
NASA Astrophysics Data System (ADS)
Barreira, Alexandre; Krause, Elisabeth; Schmidt, Fabian
2018-06-01
We derive the complete super-sample covariance (SSC) of the matter and weak lensing convergence power spectra using the power spectrum response formalism to accurately describe the coupling of super- to sub-survey modes. The SSC term is completely characterized by the survey window function, the nonlinear matter power spectrum and the full first-order nonlinear power spectrum response function, which describes the response to super-survey density and tidal field perturbations. Generalized separate universe simulations can efficiently measure these responses in the nonlinear regime of structure formation, which is necessary for lensing applications. We derive the lensing SSC formulae for two cases: one under the Limber and flat-sky approximations, and a more general one that goes beyond the Limber approximation in the super-survey mode and is valid for curved sky applications. Quantitatively, we find that for sky fractions f_sky ≈ 0.3 and a single source redshift at z_S = 1, the use of the flat-sky and Limber approximation underestimates the total SSC contribution by ≈ 10%. The contribution from super-survey tidal fields to the lensing SSC, which has not been included in cosmological analyses so far, is shown to represent about 5% of the total lensing covariance on multipoles ℓ_1, ℓ_2 ≳ 300. The SSC is the dominant off-diagonal contribution to the total lensing covariance, making it appropriate to include these tidal terms and beyond flat-sky/Limber corrections in cosmic shear analyses.
Kriging: Understanding allays intimidation
Olea, R.A.
1996-01-01
In 1938 Daniel Gerhardus "Danie" Krige obtained an undergraduate degree in mining engineering and started a brilliant career centered on analyzing the gold and uranium mines in the Witwatersrand conglomerates of South Africa. He became interested in the disharmony between the poor reliability of reserve estimation reports and the magnitude of the economic decisions that were based on these studies. Back at the University of Witwatersrand, he wrote a master's thesis that began a revolution in mining evaluation methods. Krige was not alone in his research. Another mining engineer, Georges Matheron, a Frenchman, thought space data analysis belonged in a separate discipline, just as geophysics is a separate branch from physics. He named the new field geostatistics. Kriging is the name given in geostatistics to a collection of generalized linear regression techniques for the estimation of spatial phenomena. Pierre Carlier, another Frenchman, coined the term krigeage in the late 1950s to honor Krige's seminal work. Matheron anglicized the term to kriging when he published a paper for English-speaking readers. France dominated the development and application of geostatistics for several years. However, geostatistics in general, and kriging in particular, are employed by few and are regarded with apprehension by many. One of the possible applications of kriging is in computer mapping. Computer contouring methods can be grouped into two families: triangulation and gridding. The former is a direct procedure in which the contour lines are computed straight from the data by partitioning the sampling area into triangles with one observation per vertex. Kriging belongs in the gridding family. A grid is a regular arrangement of locations or nodes. In the gridding method the isolines are determined from interpolated values at the nodes. The difference between kriging and other weighting methods is in the calculation of the weights. 
Even for the simplest form of kriging, the calculations are more demanding. The kriging system of equations differs from classical regression in that the observations are allowed to be correlated and that neither the estimate nor the observations are necessarily points - they may have a volume, shape, and orientation. The mean square error is the average of the squares of the differences between the true and the estimated values. Simple kriging, the most basic form of kriging in that the system of equations has the fewest terms, requires the phenomena to have a constant and known mean. The next step up, ordinary kriging, does not require knowledge of the population mean. The external drift method, universal kriging, and intrinsic kriging go even further by allowing fluctuations in the mean. In practice, estimation by kriging is not as difficult to handle as it may look at first glance. In these days of high technology, all the details in the procedure are coded into computer programs. When properly used, kriging has several appealing attributes, the most important being that it does the work more accurately. By design, kriging provides the weights that result in the minimum mean square error. And yes, there have been people who have tested its superiority with real data. Practice has consistently confirmed theory. Kriging is also robust. Within reasonable limits, kriging tends to persist in yielding correct estimates even when the user selects the wrong model, misspecifies parameters, or both. This property should be an incentive for the novice to try the method. Gross misuse of kriging, though, can lead to poor results, worse even than those produced by alternative methods. Kriging has evolved and continues to expand to accommodate the estimation of increasingly demanding realities. Conclusions Theory and practice show that computer contour maps generated using kriging have the least mean square estimation error. 
In addition, the method provides information to assess the reliability of the maps.
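The simple-kriging estimator described above can be sketched in a few lines: the weights come from solving the data covariance system, and the same weights give the minimum mean square (kriging) error. The exponential covariance model and its parameters below are illustrative assumptions, not tied to any particular dataset.

```python
import numpy as np

def exp_cov(h, sill=1.0, rng=10.0):
    """Exponential covariance model C(h) = sill * exp(-h / range)."""
    return sill * np.exp(-np.asarray(h) / rng)

def simple_kriging(xy, z, x0, mean, cov=exp_cov):
    """Simple kriging estimate at x0 from observations (xy, z),
    assuming a constant and known mean, as the text requires."""
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    K = cov(d)                                  # data-to-data covariances
    k = cov(np.linalg.norm(xy - x0, axis=-1))   # data-to-target covariances
    w = np.linalg.solve(K, k)                   # kriging weights
    est = mean + w @ (z - mean)
    var = cov(0.0) - w @ k                      # minimum mean square error
    return est, var

xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
z = np.array([1.0, 2.0, 3.0])
est, var = simple_kriging(xy, z, np.array([0.5, 0.5]), mean=2.0)
```

A useful sanity check is exactness: asking for the estimate at an observation location returns that observation with zero kriging variance, one of the appealing attributes mentioned above.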
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamio, Y; Bouchard, H
2014-06-15
Purpose: Discrepancies in the verification of the absorbed dose to water from an IMRT plan using a radiation dosimeter can be caused either by (1) detector-specific nonstandard field correction factors, as described by the formalism of Alfonso et al., or (2) inaccurate delivery of the DQA plan. The aim of this work is to develop a simple, fast method to determine an upper limit on the contribution of composite field correction factors to these discrepancies. Methods: Indices that characterize the non-flatness of the symmetrised collapsed delivery (VSC) of IMRT fields over detector-specific regions of interest were shown to be correlated with IMRT field correction factors. The indices introduced are the uniformity index (UI) and the mean fluctuation index (MF). Each of these correlation plots has 10,000 fields generated with a stochastic model. A total of eight radiation detectors were investigated in the radial orientation. An upper bound on the correction factors was evaluated by fitting values of high correction factors for a given index value. Results: These fitted curves can be used to compare the performance of radiation dosimeters in composite IMRT fields. Highly water-equivalent dosimeters like the scintillating detector (Exradin W1) and a generic alanine detector were found to have corrections under 1% over a broad range of field modulations (0-0.12 for MF and 0-0.5 for UI). Other detectors were shown to have corrections of a few percent over this range. Finally, full Monte Carlo simulations of 18 clinical and nonclinical IMRT fields showed good agreement with the fitted curve for the A12 ionization chamber. Conclusion: This work proposes a rapid method to evaluate an upper bound on the contribution of correction factors to discrepancies found in the verification of DQA plans.
Electromagnetic fields with vanishing quantum corrections
NASA Astrophysics Data System (ADS)
Ortaggio, Marcello; Pravda, Vojtěch
2018-04-01
We show that a large class of null electromagnetic fields are immune to any modifications of Maxwell's equations in the form of arbitrary powers and derivatives of the field strength. These are thus exact solutions to virtually any generalized classical electrodynamics containing both non-linear terms and higher derivatives, including, e.g., non-linear electrodynamics as well as QED- and string-motivated effective theories. This result holds not only in a flat or (anti-)de Sitter background, but also in a larger subset of Kundt spacetimes, which allow for the presence of aligned gravitational waves and pure radiation.
2008-12-01
The effective two-way tactical data rate is 3,060 bits per second. Note that there is no parity check or forward error correction (FEC) coding used in...of 1800 bits per second. With the use of FEC coding , the channel data rate is 2250 bits per second; however, the information data rate is still the...Link-11. If the parity bits are included, the channel data rate is 28,800 bps. If FEC coding is considered, the channel data rate is 59,520 bps
An abbreviated Reynolds stress turbulence model for airfoil flows
NASA Technical Reports Server (NTRS)
Gaffney, R. L., Jr.; Hassan, H. A.; Salas, M. D.
1990-01-01
An abbreviated Reynolds stress turbulence model is presented for solving turbulent flow over airfoils. The model consists of two partial differential equations, one for the Reynolds shear stress and the other for the turbulent kinetic energy. The normal stresses and the dissipation rate of turbulent kinetic energy are computed from algebraic relationships having the correct asymptotic near wall behavior. This allows the model to be integrated all the way to the wall without the use of wall functions. Results for a flat plate at zero angle of attack, a NACA 0012 airfoil and a RAE 2822 airfoil are presented.
NASA Technical Reports Server (NTRS)
Baumeister, K. J.; Papell, S. S.
1973-01-01
General formulas are derived for determining gage averaging errors of strip-type heat flux meters used in the measurement of one-dimensional heat flux distributions. In addition, a correction procedure is presented which allows a better estimate for the true value of the local heat flux. As an example of the technique, the formulas are applied to the cases of heat transfer to air slot jets impinging on flat and concave surfaces. It is shown that for many practical problems, the use of very small heat flux gages is often unnecessary.
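The idea of a gage-averaging correction can be illustrated with a toy example: a strip gage reports the average of the true flux q(x) over its width w, which by Taylor expansion differs from the local value by roughly q''(x0)·w²/24; estimating q'' from neighbouring gage readings and subtracting that term sharpens the estimate. This is a generic first-order sketch, not the paper's actual formulas, and the bell-shaped flux profile is invented for illustration.

```python
import numpy as np

def gage_reading(q, x0, width, n=2001):
    """A strip gage reports the average of the true flux q(x) over its width."""
    x = np.linspace(x0 - width / 2, x0 + width / 2, n)
    return q(x).mean()

# Bell-shaped flux distribution, e.g. under an impinging slot jet (assumed).
q = lambda x: np.exp(-x**2)
x0, w = 0.0, 0.5

measured = gage_reading(q, x0, w)       # averaged (biased) reading
# Taylor expansion: reading ~ q(x0) + q''(x0) * w**2 / 24.  Estimate q''
# by finite differences of neighbouring gage readings (the true q is
# unknown in practice) and subtract the averaging term:
qxx = (gage_reading(q, x0 - w, w) - 2 * measured + gage_reading(q, x0 + w, w)) / w**2
corrected = measured - qxx * w**2 / 24
```

Here the raw reading underestimates the true peak flux q(0) = 1, and the corrected value recovers most of the deficit, consistent with the claim that very small gages are often unnecessary when a correction is applied.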
Growth behavior of surface cracks in the circumferential plane of solid and hollow cylinders
NASA Technical Reports Server (NTRS)
Forman, R. G.; Shivakumar, V.
1986-01-01
Experiments were conducted to study the growth behavior of surface fatigue cracks in the circumferential plane of solid and hollow cylinders. In the solid cylinders, the fatigue cracks were found to have a circular arc crack front with specific upper and lower limits to the arc radius. In the hollow cylinders, the fatigue cracks were found to agree accurately with the shape of a transformed semiellipse. A modification to the usual nondimensionalization expression used for surface flaws in flat plates was found to give correct trends for the hollow cylinder problem.
Derivative expansion of one-loop effective energy of stiff membranes with tension
NASA Astrophysics Data System (ADS)
Borelli, M. E. S.; Kleinert, H.; Schakel, Adriaan M. J.
1999-03-01
With the help of a derivative expansion, the one-loop corrections to the energy functional of a nearly flat, stiff membrane with tension due to thermal fluctuations are calculated in the Monge parametrization. Contrary to previous studies, an arbitrary tilt of the surface is allowed, to exhibit the nontrivial relations between the different, highly nonlinear terms accompanying the ultraviolet divergences. These terms are shown to have precisely the same form as those in the original energy functional, as required for renormalizability. Infrared divergences also arise; these, however, are shown to cancel in a nontrivial way.
Methods in Astronomical Image Processing
NASA Astrophysics Data System (ADS)
Jörsäter, S.
A Brief Introductory Note; History of Astronomical Imaging; Astronomical Image Data; Images in Various Formats; Digitized Image Data; Digital Image Data; Philosophy of Astronomical Image Processing; Properties of Digital Astronomical Images; Human Image Processing; Astronomical vs. Computer Science Image Processing; Basic Tools of Astronomical Image Processing; Display Applications; Calibration of Intensity Scales; Calibration of Length Scales; Image Re-shaping; Feature Enhancement; Noise Suppression; Noise and Error Analysis; Image Processing Packages: Design of AIPS and MIDAS; AIPS; MIDAS; Reduction of CCD Data; Bias Subtraction; Clipping; Preflash Subtraction; Dark Subtraction; Flat Fielding; Sky Subtraction; Extinction Correction; Deconvolution Methods; Rebinning/Combining; Summary and Prospects for the Future
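The CCD reduction steps listed in the outline (bias subtraction, dark subtraction, flat fielding, sky subtraction) follow a standard chain that can be sketched on synthetic frames. All numbers below are made up for illustration, and steps such as clipping, preflash subtraction and extinction correction are omitted for brevity.

```python
import numpy as np

# Synthetic 64 x 64 frames standing in for real calibration data (assumed).
bias = np.full((64, 64), 300.0)                  # readout offset (ADU)
dark = np.full((64, 64), 5.0)                    # dark current for this exposure
flat_resp = 1.0 + 0.1 * np.linspace(-1, 1, 64)   # column-wise sensitivity ramp
sky = 20.0
truth = np.zeros((64, 64))
truth[32, 32] = 1000.0                           # a single star

raw = bias + dark + (truth + sky) * flat_resp[None, :]

# The classical reduction chain from the outline above:
debiased = raw - bias                            # bias subtraction
dark_sub = debiased - dark                       # dark subtraction
flat = np.broadcast_to(flat_resp[None, :], (64, 64))
flat_fielded = dark_sub / (flat / flat.mean())   # flat fielding (normalized flat)
reduced = flat_fielded - np.median(flat_fielded) # sky subtraction
```

On this idealized data the chain recovers the star's true counts exactly; real reductions combine many calibration frames and add the omitted steps.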
Histogram-driven cupping correction (HDCC) in CT
NASA Astrophysics Data System (ADS)
Kyriakou, Y.; Meyer, M.; Lapp, R.; Kalender, W. A.
2010-04-01
Typical cupping correction methods are pre-processing methods which require either pre-calibration measurements or simulations of standard objects to approximate and correct for beam hardening and scatter. Some of them require knowledge of the spectra, detector characteristics, etc. The aim of this work was to develop a practical histogram-driven cupping correction (HDCC) method to post-process the reconstructed images. We use a polynomial representation of the raw data generated by forward projection of the reconstructed images; forward and backprojection are performed on graphics processing units (GPU). The coefficients of the polynomial are optimized using a simplex minimization of the joint entropy of the CT image and its gradient. The algorithm was evaluated using simulations and measurements of homogeneous and inhomogeneous phantoms. For the measurements, a C-arm flat-detector CT (FD-CT) system with a 30 × 40 cm² detector, a kilovoltage on-board imager (radiation therapy simulator), and a micro-CT system were used. The algorithm reduced cupping artifacts both in simulations and measurements using a fourth-order polynomial and was in good agreement with the reference. The minimization algorithm required fewer than 70 iterations to adjust the coefficients, performing only a linear combination of basis images and thus avoiding time-consuming operations. HDCC reduced cupping artifacts without the need for pre-calibration or other scan information, enabling a retrospective improvement of CT image homogeneity. The method can also be combined with other cupping correction algorithms or used in a calibration manner.
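A minimal sketch of the HDCC objective: the joint entropy of the image and its gradient magnitude, driven by a Nelder-Mead simplex over polynomial coefficients. For brevity the polynomial here remaps grey values directly on a synthetic cupped disk, whereas the paper applies it to forward-projected raw data with GPU-based projectors; this is a deliberately simplified stand-in for the method, not a reimplementation.

```python
import numpy as np
from scipy.optimize import minimize

def joint_entropy(img, bins=32):
    """Shannon entropy of the joint histogram of grey values and
    gradient magnitude (the quantity HDCC minimizes)."""
    gy, gx = np.gradient(img)
    grad = np.hypot(gx, gy)
    hist, _, _ = np.histogram2d(img.ravel(), grad.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Synthetic slice: a uniform disk with a radial "cupping" depression.
y, x = np.mgrid[-32:32, -32:32]
r = np.hypot(x, y)
disk = (r < 24).astype(float)
cupped = disk * (1.0 - 0.3 * np.exp(-(r / 16) ** 2))

def corrected(coeffs, img=cupped):
    # Third-order polynomial remap of grey values, standing in for the
    # polynomial applied to forward-projected raw data in the paper.
    c1, c2, c3 = coeffs
    return img + c1 * img + c2 * img**2 + c3 * img**3

# Simplex (Nelder-Mead) search over the polynomial coefficients:
res = minimize(lambda c: joint_entropy(corrected(c)), x0=[0.0, 0.0, 0.0],
               method="Nelder-Mead")
```

The simplex driver only ever needs function evaluations, which is why the paper's formulation as a linear combination of precomputed basis images keeps each iteration cheap.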
Ankle and hip postural strategies defined by joint torques
NASA Technical Reports Server (NTRS)
Runge, C. F.; Shupert, C. L.; Horak, F. B.; Zajac, F. E.; Peterson, B. W. (Principal Investigator)
1999-01-01
Previous studies have identified two discrete strategies for the control of posture in the sagittal plane based on EMG activations, body kinematics, and ground reaction forces. The ankle strategy was characterized by body sway resembling a single-segment-inverted pendulum and was elicited on flat support surfaces. In contrast, the hip strategy was characterized by body sway resembling a double-segment inverted pendulum divided at the hip and was elicited on short or compliant support surfaces. However, biomechanical optimization models have suggested that hip strategy should be observed in response to fast translations on a flat surface also, provided the feet are constrained to remain in contact with the floor and the knee is constrained to remain straight. The purpose of this study was to examine the experimental evidence for hip strategy in postural responses to backward translations of a flat support surface and to determine whether analyses of joint torques would provide evidence for two separate postural strategies. Normal subjects standing on a flat support surface were translated backward with a range of velocities from fast (55 cm/s) to slow (5 cm/s). EMG activations and joint kinematics showed pattern changes consistent with previous experimental descriptions of mixed hip and ankle strategy with increasing platform velocity. Joint torque analyses revealed the addition of a hip flexor torque to the ankle plantarflexor torque during fast translations. This finding indicates the addition of hip strategy to ankle strategy to produce a continuum of postural responses. Hip torque without accompanying ankle torque (pure hip strategy) was not observed. Although postural control strategies have previously been defined by how the body moves, we conclude that joint torques, which indicate how body movements are produced, are useful in defining postural control strategies. 
These results also illustrate how the biomechanics of the body can transform discrete control patterns into a continuum of postural corrections.
[Design method of convex master gratings for replicating flat-field concave gratings].
Zhou, Qian; Li, Li-Feng
2009-08-01
Flat-field concave diffraction grating is the key device of a portable grating spectrometer, with the advantage of integrating dispersion, focusing and flat-field in a single device. It directly determines the quality of a spectrometer. The two performances that most determine the quality of the spectrometer are spectral image quality and diffraction efficiency. The diffraction efficiency of a grating depends mainly on its groove shape. But it has long been a problem to get a uniform predetermined groove shape across the whole concave grating area, because the incident angle of the ion beam is restricted by the curvature of the concave substrate, and this severely limits the diffraction efficiency and restricts the application of concave gratings. The authors present a two-step method for designing convex gratings, which are made holographically with two exposure point sources placed behind a plano-convex transparent glass substrate, to solve this problem. The convex gratings are intended to be used as the master gratings for making aberration-corrected flat-field concave gratings. To achieve high spectral image quality for the replicated concave gratings, the refraction effect at the planar back surface and the extra optical path lengths through the substrate thickness experienced by the two divergent recording beams are considered during optimization. This two-step method combines the optical-path-length function method and the ZEMAX software to complete the optimization with a high success rate and high efficiency. In the first step, the optical-path-length function method is used without considering the refraction effect to get an approximate optimization result. In the second step, the approximate result of the first step is used as the initial value for ZEMAX to complete the optimization including the refraction effect. An example design problem was considered.
The simulation results of ZEMAX proved that the spectral image quality of a replicated concave grating is comparable with that of a directly recorded concave grating.
Effectiveness of Interaural Delays Alone as Cues During Dynamic Sound Localization
NASA Technical Reports Server (NTRS)
Wenzel, Elizabeth M.; Null, Cynthia H. (Technical Monitor)
1996-01-01
The contribution of interaural time differences (ITDs) to the localization of virtual sound sources with and without head motion was examined. Listeners estimated the apparent azimuth, elevation and distance of virtual sources presented over headphones. Stimuli (3 sec., white noise) were synthesized from minimum-phase representations of nonindividualized head-related transfer functions (HRTFs); binaural magnitude spectra were derived from the minimum phase estimates and ITDs were represented as a pure delay. During dynamic conditions, listeners were encouraged to move their heads; head position was tracked and stimuli were synthesized in real time using a Convolvotron to simulate a stationary external sound source. Two synthesis conditions were tested: (1) both interaural level differences (ILDs) and ITDs correctly correlated with source location and head motion, (2) ITDs correct, no ILDs (flat magnitude spectrum). Head movements reduced azimuth confusions primarily when interaural cues were correctly correlated, although a smaller effect was also seen for ITDs alone. Externalization was generally poor for ITD-only conditions and was enhanced by head motion only for normal HRTFs. Overall the data suggest that, while ITDs alone can provide a significant cue for azimuth, the errors most commonly associated with virtual sources are reduced by location-dependent magnitude cues.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guerer, A.; Ilkisik, O.M.
1997-01-01
Topographic irregularities cause some distortions of magnetotelluric (MT) fields. In the vicinity of a topographic feature, the TM-mode distortion increases with the height and inclination of the slope. It is well-known that TM-mode topographic effects are much greater than TE-mode distortions. The authors have made a study of MT anomalies in TM-mode due to two-dimensional topography. In order to reduce these effects, the distortion tensor stripping technique was used. After corrections, the resulting data can be interpreted as if they were obtained over a flat surface and depend only on the subsurface structure. However, this technique sometimes causes some geometrical distortions of the real subsurface structure. One of the aims is to overcome this failure. The authors have modified the correction coefficients by considering the actual one-dimensional geology. Model studies showed that this approach is especially useful in removing the terrain effects on complex 2D subsurface structures. The other purpose of this study is to emphasize the importance of a proper terrain correction for data from sites having mountainous topography over complex geology, e.g., strike-slip faults, suture zones and rift valleys. Some examples of MT data sets collected from the North Anatolian Fault Zone and from the thrust regions of the Western Taurides will be presented.
ICESAT GLAS Altimetry Measurements: Received Signal Dynamic Range and Saturation Correction
NASA Technical Reports Server (NTRS)
Sun, Xiaoli; Abshire, James B.; Borsa, Adrian A.; Fricker, Helen Amanda; Yi, Donghui; Dimarzio, John P.; Paolo, Fernando S.; Brunt, Kelly M.; Harding, David J.; Neumann, Gregory A.
2017-01-01
NASA's Ice, Cloud, and land Elevation Satellite (ICESat), which operated between 2003 and 2009, made the first satellite-based global lidar measurements of the earth's ice sheet elevations, sea-ice thickness, and vegetation canopy structure. The primary instrument on ICESat was the Geoscience Laser Altimeter System (GLAS), which measured the distance from the spacecraft to the earth's surface via the roundtrip travel time of individual laser pulses. GLAS utilized pulsed lasers and a direct detection receiver consisting of a silicon avalanche photodiode and a waveform digitizer. Early in the mission, the peak power of the received signal from snow and ice surfaces was found to span a wider dynamic range than anticipated, often exceeding the linear dynamic range of the GLAS 1064-nm detector assembly. The resulting saturation of the receiver distorted the recorded signal and resulted in range biases as large as approximately 50 cm for ice- and snow-covered surfaces. We developed a correction for this saturation range bias based on laboratory tests using a spare flight detector, and refined the correction by comparing GLAS elevation estimates with those derived from Global Positioning System surveys over the calibration site at the salar de Uyuni, Bolivia. Applying the saturation correction largely eliminated the range bias due to receiver saturation for affected ICESat measurements over Uyuni and significantly reduced the discrepancies at orbit crossovers located on flat regions of the Antarctic ice sheet.
NASA Astrophysics Data System (ADS)
Huang, Huabing; Liu, Caixia; Wang, Xiaoyi; Biging, Gregory S.; Chen, Yanlei; Yang, Jun; Gong, Peng
2017-07-01
Vegetation height is an important parameter for biomass assessment and vegetation classification. However, vegetation height data over large areas are difficult to obtain. The existing vegetation height data derived from the Ice, Cloud and land Elevation Satellite (ICESat) data only include laser footprints in relatively flat forest regions (<5°). Thus, a large portion of ICESat data over sloping areas has not been used. In this study, we used a new slope correction method to improve the accuracy of estimates of vegetation heights for regions where slopes fall between 5° and 15°. The new method enabled us to use more than 20% additional laser data compared with the existing vegetation height data, which use only ICESat data in relatively flat areas (slope < 5°) in China. With the vegetation height data extracted from ICESat footprints and ancillary data including Moderate Resolution Imaging Spectroradiometer (MODIS) derived data (canopy cover, reflectances, and leaf area index), climate data, and topographic data, we developed a wall-to-wall vegetation height map of China using the Random Forest algorithm. We used the data from 416 field measurements to validate the new vegetation height product. The coefficient of determination (R²) and RMSE of the new vegetation height product were 0.89 and 4.73 m, respectively. The accuracy of the product is significantly better than that of the two existing global forest height products produced by Lefsky (2010) and Simard et al. (2011), when compared with the data from 227 field measurements in our study area. The new vegetation height data demonstrated clear distinctions among forest, shrub and grassland, which is promising for improving the classification of vegetation and above-ground forest biomass assessment in China.
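One standard form of slope correction for large-footprint lidar (an assumption for illustration; the paper's new method may differ in detail) removes the slope-induced broadening of the return waveform, which grows with the footprint diameter times the tangent of the terrain slope:

```python
import math

def slope_corrected_height(waveform_extent_m, footprint_diam_m, slope_deg):
    """Vegetation height estimate after removing slope-induced waveform
    broadening (footprint diameter times the tangent of the slope)."""
    broadening = footprint_diam_m * math.tan(math.radians(slope_deg))
    return max(waveform_extent_m - broadening, 0.0)

# A ~70 m GLAS footprint on a 10 degree slope broadens the waveform extent
# by about 12 m, which would otherwise be counted as vegetation height.
h = slope_corrected_height(30.0, 70.0, 10.0)
```

This is why uncorrected footprints on slopes above ~5° were excluded from earlier products: without the correction, terrain relief within the footprint inflates the apparent canopy height.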
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silvani, M. I.; Almeida, G. L.; Lopes, R. T.
Radiographic images acquired with point-like gamma-ray sources exhibit desirably low penumbra effects, especially when the source is positioned far from the object-detector set. Such an arrangement is frequently not affordable due to the limited flux provided by a distant source. A closer source, however, has two main drawbacks: degradation of the spatial resolution, as actual sources are only approximately point-like, and non-homogeneity of the beam hitting the detector, which creates a false attenuation map of the object being inspected. This non-homogeneity is caused by the beam divergence itself and by the different thicknesses traversed by the beam, even if the object were a homogeneous flat plate. In this work, radiographic images of objects with different geometries, such as flat plates and pipes, have undergone a correction for beam divergence and attenuation, addressing the experimental verification of the capability and soundness of an algorithm formerly developed to generate and process synthetic images. The impact of other parameters, including source-detector gap, attenuation coefficient, ratio of defective-to-main hull thickness, and counting statistics, has been assessed for specifically tailored test objects, aiming at evaluating the ability of the proposed method to deal with different boundary conditions. All experiments have been carried out with an X-ray-sensitive imaging plate and reactor-produced {sup 198}Au and {sup 165}Dy sources. The results have been compared with another technique, showing a better capability to correct the attenuation map of inspected objects, unveiling their inner structure otherwise concealed by the poor contrast caused by beam divergence and attenuation, in particular for regions far from the vertical of the source.
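A minimal sketch of the divergence part of such a correction, under the assumption that a reference exposure of the bare beam is available (this is not the authors' algorithm): normalising the object radiograph pixel-wise by the bare-beam image removes the false attenuation map imposed by the non-homogeneous beam, leaving only the object's own transmission.

```python
def divergence_correct(image, flat):
    """Pixel-wise transmission map: object radiograph divided by a
    reference exposure of the bare (divergent) beam."""
    return [[px / fl for px, fl in zip(row_i, row_f)]
            for row_i, row_f in zip(image, flat)]

beam = [[100.0, 80.0], [80.0, 60.0]]   # bare divergent beam, no object
obj  = [[50.0, 40.0], [40.0, 30.0]]    # same beam through a uniform plate
trans = divergence_correct(obj, beam)  # uniform transmission of 0.5 everywhere
```

The uniform result illustrates the point made in the abstract: without this normalisation, the fall-off of the divergent beam would masquerade as spatially varying attenuation in the plate.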
Alonso, Rodrigo; Jenkins, Elizabeth E.; Manohar, Aneesh V.
2016-08-17
The S-matrix of a quantum field theory is unchanged by field redefinitions, and so it only depends on geometric quantities such as the curvature of field space. Whether the Higgs multiplet transforms linearly or non-linearly under electroweak symmetry is a subtle question, since one can make a coordinate change to convert a field that transforms linearly into one that transforms non-linearly. Renormalizability of the Standard Model (SM) does not depend on the choice of scalar fields or whether the scalar fields transform linearly or non-linearly under the gauge group, but only on the geometric requirement that the scalar field manifold M is flat. Standard Model Effective Field Theory (SMEFT) and Higgs Effective Field Theory (HEFT) have curved M, since they parametrize deviations from the flat SM case. We show that the HEFT Lagrangian can be written in SMEFT form if and only if M has an SU(2)_L × U(1)_Y invariant fixed point. Experimental observables in HEFT depend on local geometric invariants of M such as sectional curvatures, which are of order 1/Λ², where Λ is the EFT scale. We give explicit expressions for these quantities in terms of the structure constants for a general G → H symmetry breaking pattern. The one-loop radiative correction in HEFT is determined using a covariant expansion which preserves manifest invariance of M under coordinate redefinitions. The formula for the radiative correction is simple when written in terms of the curvature of M and the gauge curvature field strengths. We also extend the CCWZ formalism to non-compact groups, and generalize the HEFT curvature computation to the case of multiple singlet scalar fields.
Long-Face Dentofacial Deformities: Occlusion and Facial Esthetic Surgical Outcomes.
Posnick, Jeffrey C; Liu, Samuel; Tremont, Timothy J
2018-06-01
The purpose of this study was to document malocclusion and facial dysmorphology in a series of patients with long face (LF) and chronic obstructive nasal breathing before treatment and the outcomes after bimaxillary orthognathic, osseous genioplasty, and intranasal surgery. A retrospective cohort study of patients with LF undergoing bimaxillary, chin, and intranasal (septoplasty and inferior turbinate reduction) surgery was implemented. Predictor variables were grouped into demographic, anatomic, operative, and longitudinal follow-up categories. Primary outcome variables were the initial postoperative occlusion achieved (T2; 5 weeks after surgery) and the occlusion maintained long-term (>2 years after surgery). Six key occlusion parameters were assessed: overjet, overbite, coincidence of dental midlines, canine Angle classification, and molar vertical and transverse positions. The second outcome variable was the facial esthetic results. Photographs in 6 views were analyzed to document 7 facial contour characteristics. Seventy-eight patients met the inclusion criteria. Average age at surgery was 24 years (range, 13 to 54 yr). The study included 53 female patients (68%). Findings confirmed that occlusion after initial surgical healing (T2) met the objectives for all parameters in 97% of patients (76 of 78). Most (68 of 78; 87%) maintained a favorable anterior and posterior occlusion for each parameter studied long-term (mean, 5 years 5 months). Facial contour deformities at presentation included prominent nose (63%), flat cheekbones (96%), flat midface (96%), weak chin (91%), obtuse neck-to-chin angle (56%), wide lip separation (95%), and excess maxillary dental show (99%). Correction of all pretreatment facial contour deformities was confirmed in 92% of patients after surgery. Long face patients with higher preoperative body mass index levels were more likely to have residual facial dysmorphology after surgery (P = .0009).
Using orthognathic surgery techniques, patients with LF dentofacial deformity achieved the planned occlusion and most maintained the corrected occlusion long-term. In unoperated patients with LF, a "facial esthetic type" was identified. Orthognathic surgery proved effective in correcting associated facial dysmorphology in most patients. Copyright © 2017 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Prior-based artifact correction (PBAC) in computed tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heußer, Thorsten, E-mail: thorsten.heusser@dkfz-heidelberg.de; Brehm, Marcus; Ritschl, Ludwig
2014-02-15
Purpose: Image quality in computed tomography (CT) often suffers from artifacts which may reduce the diagnostic value of the image. In many cases, these artifacts result from missing or corrupt regions in the projection data, e.g., in the case of metal, truncation, and limited angle artifacts. The authors propose a generalized correction method for different kinds of artifacts resulting from missing or corrupt data by making use of available prior knowledge to perform data completion. Methods: The proposed prior-based artifact correction (PBAC) method requires prior knowledge in the form of a planning CT of the same patient or a CT scan of a different patient showing the same body region. In both cases, the prior image is registered to the patient image using a deformable transformation. The registered prior is forward projected and data completion of the patient projections is performed using smooth sinogram inpainting. The obtained projection data are used to reconstruct the corrected image. Results: The authors investigate metal and truncation artifacts in patient data sets acquired with a clinical CT and limited angle artifacts in an anthropomorphic head phantom data set acquired with a gantry-based flat detector CT device. In all cases, the corrected images obtained by PBAC are nearly artifact-free. Compared to conventional correction methods, PBAC achieves better artifact suppression while preserving the patient-specific anatomy at the same time. Further, the authors show that prominent anatomical details in the prior image seem to have only minor impact on the correction result. Conclusions: The results show that PBAC has the potential to effectively correct for metal, truncation, and limited angle artifacts if adequate prior data are available. Since the proposed method makes use of a generalized algorithm, PBAC may also be applicable to other artifacts resulting from missing or corrupt data.
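The data-completion step can be illustrated with a toy stand-in. PBAC itself fills corrupt samples with forward-projected prior data, but the mechanics of sinogram inpainting can be shown with simple linear interpolation across a marked-corrupt region of one sinogram row (hypothetical data, not the published algorithm):

```python
def inpaint_row(row, corrupt):
    """Fill corrupt detector samples in one sinogram row by linear
    interpolation between the nearest valid neighbours (ends are held
    constant). PBAC would instead substitute forward-projected prior data."""
    out = list(row)
    valid = [i for i in range(len(row)) if i not in corrupt]
    for i in sorted(corrupt):
        left = max((v for v in valid if v < i), default=None)
        right = min((v for v in valid if v > i), default=None)
        if left is None:
            out[i] = row[right]
        elif right is None:
            out[i] = row[left]
        else:
            t = (i - left) / (right - left)
            out[i] = row[left] + t * (row[right] - row[left])
    return out

# Samples 2 and 3 are shadowed by metal and are bridged smoothly.
filled = inpaint_row([1.0, 2.0, 0.0, 0.0, 5.0], {2, 3})
```

After completion, the repaired projections are fed to the ordinary reconstruction, which is what makes the approach generic across metal, truncation, and limited-angle defects.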
Bolte, John F B
2016-09-01
Personal exposure measurements of radio frequency electromagnetic fields are important for epidemiological studies and developing prediction models. Minimizing biases and uncertainties and handling spatial and temporal variability are important aspects of these measurements. This paper reviews the lessons learnt from testing the different types of exposimeters and from personal exposure measurement surveys performed between 2005 and 2015. Applying them will improve the comparability and ranking of exposure levels for different microenvironments, activities or (groups of) people, such that epidemiological studies are better capable of finding potential weak correlations with health effects. Over 20 papers have been published on how to prevent biases and minimize uncertainties due to: mechanical errors; design of hardware and software filters; anisotropy; and influence of the body. A number of biases can be corrected for by determining multiplicative correction factors. In addition, a good protocol on how to wear the exposimeter, a sufficiently small sampling interval, and a sufficiently long measurement duration will minimize biases. Corrections are possible for: non-detects through the detection limit, erroneous manufacturer calibration, and temporal drift. Corrections not deemed necessary, because no significant biases have been observed, are: linearity in response and resolution. Corrections difficult to perform after measurements are for: modulation/duty cycle sensitivity; out-of-band response (cross talk); temperature and humidity sensitivity. Corrections not possible to perform after measurements are for: multiple signals detection in one band; flatness of response within a frequency band; anisotropy to waves of different elevation angle.
An analysis of 20 microenvironmental surveys showed that early studies using exposimeters with logarithmic detectors overestimated exposure to signals with bursts, such as uplink signals from mobile phones and WiFi appliances. Further, the possible corrections for biases have not been fully applied. The main findings are that if the biases are not corrected for, the actual exposure will on average be underestimated. Copyright © 2016 Elsevier Ltd. All rights reserved.
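Two of the correctable biases named above, miscalibration and non-detects, can be sketched in a few lines. The LOD/√2 substitution used here is one common rule for non-detects; the survey literature also uses more refined regression-on-order-statistics methods, and the numeric values are illustrative:

```python
import math

def correct_sample(field_v_per_m, detection_limit, factor):
    """Apply a device-specific multiplicative correction factor to one
    exposimeter sample (V/m). Non-detects (None or below the detection
    limit) are substituted by detection_limit / sqrt(2), a common rule."""
    if field_v_per_m is None or field_v_per_m < detection_limit:
        field_v_per_m = detection_limit / math.sqrt(2)
    return factor * field_v_per_m

# A reading below the (hypothetical) 0.05 V/m detection limit is first
# substituted, then rescaled by the calibration correction factor.
corrected = correct_sample(0.02, 0.05, 1.2)
```

Applying such factors consistently across devices is what makes exposure levels from different surveys comparable, the central concern of the review.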
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mackenzie, Alistair, E-mail: alistairmackenzie@nhs.net; Dance, David R.; Young, Kenneth C.
Purpose: The aim of this work is to create a model to predict the noise power spectra (NPS) for a range of mammographic radiographic factors. The noise model was necessary to degrade images acquired on one system to match the image quality of different systems for a range of beam qualities. Methods: Five detectors and x-ray systems [Hologic Selenia (ASEh), Carestream computed radiography CR900 (CRc), GE Essential (CSI), Carestream NIP (NIPc), and Siemens Inspiration (ASEs)] were characterized for this study. The signal transfer property was measured as the pixel value against absorbed energy per unit area (E) at a reference beam quality of 28 kV, Mo/Mo or 29 kV, W/Rh with 45 mm polymethyl methacrylate (PMMA) at the tube head. The contributions of the three noise sources (electronic, quantum, and structure) to the NPS were calculated by fitting a quadratic at each spatial frequency of the NPS against E. A quantum noise correction factor that was dependent on beam quality was quantified using a set of images acquired over a range of radiographic factors with different thicknesses of PMMA. The noise model was tested for images acquired at 26 kV, Mo/Mo with 20 mm PMMA and 34 kV, Mo/Rh with 70 mm PMMA for three detectors (ASEh, CRc, and CSI) over a range of exposures. The NPS were modeled with and without the noise correction factor and compared with the measured NPS. A previous method for adapting an image to appear as if acquired on a different system was modified to allow the reference beam quality to differ from the beam quality of the image. The method was validated by adapting the ASEh flat-field images with two thicknesses of PMMA (20 and 70 mm) to appear with the imaging characteristics of the CSI and CRc systems. Results: The quantum noise correction factor rises with higher beam qualities, except for CR systems at high spatial frequencies, where a flat response was found against mean photon energy.
This is due to the dominance of secondary quantum noise in CR. The use of the quantum noise correction factor reduced the difference between the modeled and measured NPS to generally within 4%. The use of the quantum noise correction improved the conversion of ASEh images to CRc images but made no difference for the conversion to CSI images. Conclusions: A practical method for estimating the NPS at any dose and over a range of beam qualities for mammography has been demonstrated. The noise model was incorporated into a methodology for converting an image to appear as if acquired on a different detector. The method can now be extended to work for a wide range of beam qualities and can be applied to the conversion of mammograms.
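The quadratic decomposition at the heart of the noise model can be illustrated with invented numbers: at a fixed spatial frequency, NPS(E) = a + b·E + c·E², where the constant, linear and quadratic terms represent the electronic, quantum and structure noise contributions. With measurements at three exposures the quadratic can be solved exactly (in practice a least-squares fit over many exposures would be used):

```python
def fit_quadratic(E, N):
    """Exact quadratic a + b*E + c*E**2 through three (E, NPS) points,
    via Newton divided differences. Returns (a, b, c), interpreted here
    as the electronic, quantum and structure noise terms."""
    (e1, e2, e3), (n1, n2, n3) = E, N
    d1 = (n2 - n1) / (e2 - e1)                 # first divided difference
    d2 = (n3 - n1) / (e3 - e1)
    c = (d2 - d1) / (e3 - e2)                  # second divided difference
    b = d1 - c * (e1 + e2)
    a = n1 - b * e1 - c * e1 * e1
    return a, b, c

# Invented NPS values at three absorbed-energy levels (arbitrary units).
a, b, c = fit_quadratic([1.0, 2.0, 4.0], [6.0, 12.0, 30.0])
```

Repeating this fit at every spatial frequency yields the three component spectra, which is exactly what lets the model predict the NPS at doses and beam qualities that were never measured directly.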
Flat-Sky Pseudo-Cls Analysis for Weak Gravitational Lensing
NASA Astrophysics Data System (ADS)
Asgari, Marika; Taylor, Andy; Joachimi, Benjamin; Kitching, Thomas D.
2018-05-01
We investigate the use of estimators of weak lensing power spectra based on a flat-sky implementation of the 'Pseudo-Cl' (PCl) technique, where the masked shear field is transformed without regard for masked regions of sky. This masking mixes power, and 'E'-convergence and 'B'-modes. To study the accuracy of forward-modelling and full-sky power spectrum recovery we consider both large-area survey geometries, and small-scale masking due to stars and a checkerboard model for field-of-view gaps. The power spectrum for the large-area survey geometry is sparsely sampled and highly oscillatory, which makes modelling problematic. Instead, we derive an overall calibration for large-area mask bias using simulated fields. The effects of small-area star masks can be accurately corrected for, while the checkerboard mask has oscillatory and spiky behaviour which leads to percent-level biases. Apodisation of the masked fields leads to increased biases and a loss of information. We find that we can construct an unbiased forward-model of the raw PCls, and recover the full-sky convergence power to within a few percent accuracy for both Gaussian and lognormal-distributed shear fields. Propagating this through to cosmological parameters using a Fisher-Matrix formalism, we find we can make unbiased estimates of parameters for surveys up to 1,200 deg² with 30 galaxies per arcmin², beyond which the percent-level biases become larger than the statistical accuracy. This implies a flat-sky PCl analysis is accurate for current surveys but a Euclid-like survey will require higher accuracy.
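The mode mixing that the pseudo-Cl formalism must model can be seen in a one-dimensional toy (a sketch, far simpler than a spin-2 shear field on the sky): transforming a masked signal "without regard for masked regions" leaks power from the true mode into its neighbours.

```python
import math, cmath

def power_spectrum(x):
    """Naive DFT power spectrum |X_k|^2 / N of a real sequence."""
    n = len(x)
    return [abs(sum(x[j] * cmath.exp(-2j * math.pi * k * j / n)
                    for j in range(n))) ** 2 / n
            for k in range(n)]

n = 16
signal = [math.cos(2 * math.pi * 3 * j / n) for j in range(n)]  # pure mode k=3
mask = [0.0] * 4 + [1.0] * (n - 4)                              # a quarter of the "sky" masked
full = power_spectrum(signal)                                   # concentrated at k=3 (and alias k=13)
pcl = power_spectrum([s * m for s, m in zip(signal, mask)])     # power leaked into neighbouring k
```

Forward-modelling in the paper runs in the opposite direction: given the mask, one predicts this mixed ("pseudo") spectrum from a theory spectrum, rather than trying to deconvolve the data.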
Eikonal instability of Gauss-Bonnet-(anti-)-de Sitter black holes
NASA Astrophysics Data System (ADS)
Konoplya, R. A.; Zhidenko, A.
2017-05-01
Here we have shown that asymptotically anti-de Sitter (AdS) black holes in the Einstein-Gauss-Bonnet (GB) theory are unstable under linear perturbations of space-time in some region of parameters. This (eikonal) instability develops at high multipole numbers. We found the exact parametric regions of the eikonal instability and extended this consideration to asymptotically flat and de Sitter cases. The approach to the threshold of instability is driven by purely imaginary quasinormal modes, similar to those found recently by Grozdanov, Kaplis, and Starinets [J. High Energy Phys. 07 (2016) 151, 10.1007/JHEP07(2016)151] for the higher-curvature-corrected black hole with the planar horizon. The instability found here may indicate limits of holographic applicability of the GB-AdS backgrounds. Recently, through the analysis of critical behavior in AdS space-time in the presence of the Gauss-Bonnet term, it was shown [Deppe et al., Phys. Rev. Lett. 114, 071102 (2015), 10.1103/PhysRevLett.114.071102] that, if the total energy content of the AdS space-time is small, then no black holes can be formed with mass less than some critical value. A similar mass gap was also found when considering collapse of mass shells in asymptotically flat Gauss-Bonnet theories [Frolov, Phys. Rev. Lett. 115, 051102 (2015), 10.1103/PhysRevLett.115.051102]. The instability found here for all sufficiently small Einstein-Gauss-Bonnet-AdS, dS, and asymptotically flat black holes may explain the existing mass gaps in their formation.
NASA Technical Reports Server (NTRS)
Carlson, Harry W.; Mann, Michael J.
1992-01-01
A survey of research on drag-due-to-lift minimization at supersonic speeds, including a study of the effectiveness of current design and analysis methods, was conducted. The results show that a linearized theory analysis with estimated attainable thrust and vortex force effects can predict with reasonable accuracy the lifting efficiency of flat wings. Significantly better wing performance can be achieved through the use of twist and camber. Although linearized theory methods tend to overestimate the amount of twist and camber required for a given application and provide an overly optimistic performance prediction, these deficiencies can be overcome by implementation of recently developed empirical corrections. Numerous examples of the correlation of experiment and theory are presented to demonstrate the applicability and limitations of linearized theory methods with and without empirical corrections. The use of an Euler code for the estimation of aerodynamic characteristics of a twisted and cambered wing and its application to design by iteration are discussed.
The 2-d CCD Data Reduction Cookbook
NASA Astrophysics Data System (ADS)
Davenhall, A. C.; Privett, G. J.; Taylor, M. B.
This cookbook presents simple recipes and scripts for reducing direct images acquired with optical CCD detectors. Using these recipes and scripts you can correct unprocessed images obtained from CCDs for various instrumental effects to retrieve an accurate picture of the field of sky observed. The recipes and scripts use standard software available at all Starlink sites. The topics covered include: creating and applying bias and flat-field corrections, registering frames, and creating a stack or mosaic of registered frames. Related auxiliary tasks, such as converting between different data formats, displaying images, and calculating image statistics, are also presented. In addition to the recipes and scripts, sufficient background material is presented to explain the procedures and techniques used. The treatment is deliberately practical rather than theoretical, in keeping with the aim of providing advice on the actual reduction of observations. Additional material outlines some of the differences between using conventional optical CCDs and the similar arrays used to observe at infrared wavelengths.
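The heart of the bias and flat-field recipe can be written in a few lines (a sketch that ignores dark-current and cosmetic-defect steps; plain Python lists stand in for image arrays): subtract the bias frame, then divide by the flat field normalised to unit mean.

```python
def calibrate(raw, bias, flat):
    """Basic CCD calibration: (raw - bias) / (flat / mean(flat)).
    Dividing by the normalised flat removes pixel-to-pixel sensitivity
    variations while preserving the overall flux scale."""
    nrow, ncol = len(raw), len(raw[0])
    flat_mean = sum(sum(r) for r in flat) / (nrow * ncol)
    return [[(raw[i][j] - bias[i][j]) / (flat[i][j] / flat_mean)
             for j in range(ncol)] for i in range(nrow)]

raw  = [[110.0, 210.0], [110.0, 210.0]]
bias = [[10.0, 10.0], [10.0, 10.0]]
flat = [[1.0, 2.0], [1.0, 2.0]]      # right-hand column twice as sensitive
cal = calibrate(raw, bias, flat)     # both columns come out equal
```

In the cookbook itself these steps are performed with Starlink tools on whole frames; the arithmetic per pixel is exactly this.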
One-loop gravitational wave spectrum in de Sitter spacetime
NASA Astrophysics Data System (ADS)
Fröb, Markus B.; Roura, Albert; Verdaguer, Enric
2012-08-01
The two-point function for tensor metric perturbations around de Sitter spacetime including one-loop corrections from massless conformally coupled scalar fields is calculated exactly. We work in the Poincaré patch (with spatially flat sections) and employ dimensional regularization for the renormalization process. Unlike previous studies, we obtain the result for arbitrary time separations rather than just equal times. Moreover, in contrast to existing results for tensor perturbations, ours is manifestly invariant with respect to the subgroup of de Sitter isometries corresponding to a simultaneous time translation and rescaling of the spatial coordinates. Selecting the right initial state for the interacting theory via an appropriate iε prescription is crucial for this property. Finally, we show that although the two-point function is a well-defined spacetime distribution, the equal-time limit of its spatial Fourier transform is divergent. Therefore, in contrast to the well-defined distribution for arbitrary time separations, the power spectrum is strictly speaking ill-defined when loop corrections are included.
Tensor perturbations during inflation in a spatially closed Universe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonga, Béatrice; Gupt, Brajesh; Yokomizo, Nelson, E-mail: bpb165@psu.edu, E-mail: bgupt@gravity.psu.edu, E-mail: yokomizo@gravity.psu.edu
2017-05-01
In a recent paper [1], we studied the evolution of the background geometry and scalar perturbations in an inflationary, spatially closed Friedmann-Lemaître-Robertson-Walker (FLRW) model having constant positive spatial curvature and spatial topology S{sup 3}. Due to the spatial curvature, the early phase of slow-roll inflation is modified, leading to suppression of power in the scalar power spectrum at large angular scales. In this paper, we extend the analysis to include tensor perturbations. We find that, similarly to the scalar perturbations, the tensor power spectrum also shows suppression for long wavelength modes. The correction to the tensor spectrum is limited to the very long wavelength modes; therefore, the resulting observable CMB B-mode polarization spectrum remains practically the same as in the standard scenario with flat spatial sections. However, since both the tensor and scalar power spectra are modified, there are scale-dependent corrections to the tensor-to-scalar ratio that lead to a violation of the standard slow-roll consistency relation.
NASA Technical Reports Server (NTRS)
Macwilkinson, D. G.; Blackerby, W. T.; Paterson, J. H.
1974-01-01
The degree of cruise drag correlation on the C-141A aircraft is determined between predictions based on wind tunnel test data, and flight test results. An analysis of wind tunnel tests on a 0.0275 scale model at Reynolds numbers up to 3.05 million per mean aerodynamic chord (MAC) is reported. Model support interference corrections are evaluated through a series of tests, and fully corrected model data are analyzed to provide details on model component interference factors. It is shown that predicted minimum profile drag for the complete configuration agrees within 0.75% of flight test data, using a wind tunnel extrapolation method based on flat plate skin friction and component shape factors. An alternative method of extrapolation, based on computed profile drag from a subsonic viscous theory, results in a prediction four percent lower than flight test data.
Multiconjugate adaptive optics applied to an anatomically accurate human eye model.
Bedggood, P A; Ashman, R; Smith, G; Metha, A B
2006-09-04
Aberrations of both astronomical telescopes and the human eye can be successfully corrected with conventional adaptive optics. This produces diffraction-limited imagery over a limited field of view called the isoplanatic patch. A new technique, known as multiconjugate adaptive optics, has been developed recently in astronomy to increase the size of this patch. The key is to model atmospheric turbulence as several flat, discrete layers. A human eye, however, has several curved, aspheric surfaces and a gradient index lens, complicating the task of correcting aberrations over a wide field of view. Here we utilize a computer model to determine the degree to which this technology may be applied to generate high resolution, wide-field retinal images, and discuss the considerations necessary for optimal use with the eye. The Liou and Brennan schematic eye simulates the aspheric surfaces and gradient index lens of real human eyes. We show that the size of the isoplanatic patch of the human eye is significantly increased through multiconjugate adaptive optics.
Pliocene shorelines and the deformation of passive margins.
NASA Astrophysics Data System (ADS)
Rovere, Alessio; Raymo, Maureen; Austermann, Jacqueline; Mitrovica, Jerry; Janßen, Alexander
2016-04-01
Characteristic geomorphology described from three Pliocene scarps in Rovere et al. [2014] was used to guide a global search for additional Pliocene-age scarps that could be used to document former Pliocene shoreline locations. Each of the Rovere et al. [2014] paleo-shorelines was measured at the scarp toe abutting a flat coastal plain. In this study, nine additional such scarp-toe paleo-shorelines were identified. Each of these scarps has been independently dated to the Plio-Pleistocene; however, they were never unified by a single formation mechanism. Even when corrected for post-depositional glacial isostatic adjustment effects, post-Pliocene deformation of the inferred shorelines precludes a direct assessment of maximum Pliocene sea level height at the scarp toes. However, careful interpretation of the processes at the inferred paleo-shoreline suggests specific amplitudes of dynamic topography at each location, which could lead to a corrected maximum sea level height and provide a target dataset with which to compare dynamic topography model output.
Riemann correlator in de Sitter including loop corrections from conformal fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fröb, Markus B.; Verdaguer, Enric; Roura, Albert, E-mail: mfroeb@ffn.ub.edu, E-mail: albert.roura@uni-ulm.de, E-mail: enric.verdaguer@ub.edu
2014-07-01
The Riemann correlator with appropriately raised indices characterizes in a gauge-invariant way the quantum metric fluctuations around de Sitter spacetime including loop corrections from matter fields. Specializing to conformal fields and employing a method that selects the de Sitter-invariant vacuum in the Poincaré patch, we obtain the exact result for the Riemann correlator through order H{sup 4}/m{sub p}{sup 4}. The result is expressed in a manifestly de Sitter-invariant form in terms of maximally symmetric bitensors. Its behavior for both short and long distances (sub- and superhorizon scales) is analyzed in detail. Furthermore, by carefully taking the flat-space limit, the explicit result for the Riemann correlator for metric fluctuations around Minkowski spacetime is also obtained. Although the main focus is on free scalar fields (our calculation corresponds then to one-loop order in the matter fields), the result for general conformal field theories is also derived.
Interfacial Structure and Chemistry of GaN on Ge(111)
NASA Astrophysics Data System (ADS)
Zhang, Siyuan; Zhang, Yucheng; Cui, Ying; Freysoldt, Christoph; Neugebauer, Jörg; Lieten, Ruben R.; Barnard, Jonathan S.; Humphreys, Colin J.
2013-12-01
The interface of GaN grown on Ge(111) by plasma-assisted molecular beam epitaxy is resolved by aberration-corrected scanning transmission electron microscopy. A novel interfacial structure with a 5:4 closely spaced atomic bilayer is observed, which explains why the interface is flat, crystalline, and free of GeNx. Density functional theory based total energy calculations show that the interface bilayer contains Ge and Ga atoms, with no N atoms. The 5:4 bilayer at the interface has a lower energy than a direct stacking of GaN on Ge(111) and enables the 5:4 lattice-matching growth of GaN.
NASA Astrophysics Data System (ADS)
Almeida, A. P.; Braz, D.; Nogueira, L. P.; Colaço, M. V.; Soares, J.; Cardoso, S. C.; Garcia, E. S.; Azambuja, P.; Gonzalez, M. S.; Mohammadi, S.; Tromba, G.; Barroso, R. C.
2014-02-01
We have used phase-contrast X-ray microtomography (PPC-μCT) to study the head of the blood-feeding bug Rhodnius prolixus, which is one of the most important insect vectors of Trypanosoma cruzi, the etiologic agent of Chagas disease in Latin America. Images reconstructed from phase-retrieved projections processed by ANKAphase are compared to those obtained through direct tomographic reconstruction of the flat-field-corrected transmission radiographs. It should be noted that the relative locations of the important morphological internal structures are observable with a precision that is difficult to obtain without the phase retrieval approach.
Variability of wetland reflectance and its effect on automatic categorization of satellite imagery
NASA Technical Reports Server (NTRS)
Klemas, V. (Principal Investigator); Bartlett, D.
1977-01-01
The author has identified the following significant results. Land cover categorization of data from the same overpass in four test wetland areas was carried out using a four-category classification system. The tests indicate that training data based on in situ reflectance measurements and atmospheric correction of LANDSAT data can produce categorization accuracy comparable to that achieved using conventional training. The four wetland cover categories (salt marsh cordgrass, salt hay, unvegetated tidal flat, and water) produced overall classification accuracies of 85% by conventional and relative radiance training and 81% by use of in situ measurements. Overall mapping accuracies were 76% and 72%, respectively.
Stationary nonimaging concentrator as a second stage element in tracking systems
NASA Astrophysics Data System (ADS)
Kritchman, E. M.; Snail, K. A.; Ogallagher, J.; Winston, R.
1983-01-01
An increase in the concentration in line-focus solar concentrators is shown to be available using an evacuated compound parabolic concentrator (CPC) tube as a second stage element. The absorber is integrated into an evacuated tube with a transparent upper section and a reflective lower section, with a selective coating on the absorber surface. The overall concentration is calculated in consideration of a parabolic mirror in a trough configuration, a flat Fresnel lens over the top, or a color- and coma-corrected Fresnel lens. The resulting apparatus is noted to also suppress thermal losses due to conduction, convection, and IR radiation.
Julian, B.R.; Evans, J.R.; Pritchard, M.J.; Foulger, G.R.
2000-01-01
Some computer programs based on the Aki-Christofferson-Husebye (ACH) method of teleseismic tomography contain an error caused by identifying local grid directions with azimuths on the spherical Earth. This error, which is most severe in high latitudes, introduces systematic errors into computed ray paths and distorts inferred Earth models. It is best dealt with by explicity correcting for the difference between true and grid directions. Methods for computing these directions are presented in this article and are likely to be useful in many other kinds of regional geophysical studies that use Cartesian coordinates and flat-earth approximations.
Exact relativistic Toda chain eigenfunctions from Separation of Variables and gauge theory
NASA Astrophysics Data System (ADS)
Sciarappa, Antonio
2017-10-01
We provide a proposal, motivated by Separation of Variables and gauge theory arguments, for constructing exact solutions to the quantum Baxter equation associated to the N-particle relativistic Toda chain and test our proposal against numerical results. Quantum-mechanical non-perturbative corrections, essential in order to obtain a sensible solution, are taken into account in our gauge theory approach by considering codimension-two defects on curved backgrounds (squashed S^5 and degenerate limits) rather than flat space; this setting also naturally incorporates exact quantization conditions and the energy spectrum of the relativistic Toda chain, as well as its modular dual structure.
The Third EGRET Catalog of High-Energy Gamma-Ray Sources
NASA Technical Reports Server (NTRS)
Hartman, R. C.; Bertsch, D. L.; Bloom, S. D.; Chen, A. W.; Deines-Jones, P.; Esposito, J. A.; Fichtel, C. E.; Friedlander, D. P.; Hunter, S. D.; McDonald, L. M.;
1998-01-01
The third catalog of high-energy gamma-ray sources detected by the EGRET telescope on the Compton Gamma Ray Observatory includes data from 1991 April 22 to 1995 October 3 (Cycles 1, 2, 3, and 4 of the mission). In addition to including more data than the second EGRET catalog and its supplement, this catalog uses completely reprocessed data (to correct a number of mostly minimal errors and problems). The 271 sources (E greater than 100 MeV) in the catalog include the single 1991 solar flare bright enough to be detected as a source, the Large Magellanic Cloud, five pulsars, one probable radio galaxy detection (Cen A), and 66 high-confidence identifications of blazars (BL Lac objects, flat-spectrum radio quasars, or unidentified flat-spectrum radio sources). In addition, 27 lower-confidence potential blazar identifications are noted. Finally, the catalog contains 170 sources not yet identified firmly with known objects, although potential identifications have been suggested for a number of those. A figure is presented that gives approximate upper limits for gamma-ray sources at any point in the sky, as well as information about sources listed in the second catalog and its supplement which do not appear in this catalog.
Design of tracking and detecting lens system by diffractive optical method
NASA Astrophysics Data System (ADS)
Yang, Jiang; Qi, Bo; Ren, Ge; Zhou, Jianwei
2016-10-01
Many target-tracking applications require an optical system to acquire the target for tracking and identification. This paper describes a new detecting optical system that provides automatic detection, tracking, and measurement of flying objects in the visible band. The main feature of the detecting lens system is the combination of diffractive optics with traditional lens design, using a technique invented by Schupmann. Diffractive lenses have great potential for large-aperture, lightweight designs. First, the optical system scheme is described. Then the Schupmann achromatic principle, which pairs a diffractive lens with corrective optics, is introduced. According to the technical features and requirements of the optical imaging system for detecting and tracking, we designed a lens system in which the chromatic aberration of a flat-surface Fresnel lens is cancelled by a second flat-surface Fresnel lens. The system has an effective focal length of 1980 mm, an F-number of F/9.9, a field of view of 2ω = 14.2', a spatial resolution of 46 lp/mm, and a working wavelength range of 0.6-0.85 μm. The system is compact and easy to fabricate and assemble; diffuse-spot-size, MTF, and other analyses indicate good performance.
Advancements in ion beam figuring of very thin glass plates (Conference Presentation)
NASA Astrophysics Data System (ADS)
Civitani, M.; Ghigo, M.; Hołyszko, J.; Vecchi, G.; Basso, S.; Cotroneo, V.; DeRoo, C. T.; Schwartz, E. D.; Reid, P. B.
2017-09-01
The high-quality surface characteristics, both in terms of figure error and micro-roughness, required of the mirrors of a high-angular-resolution x-ray telescope are challenging to achieve, but in principle well suited to a deterministic, non-contact process like ion beam figuring. This process has recently been proven compatible even with very thin (around 0.4 mm) sheets of glass (such as D263 and Eagle). In the last decade, these types of glass have been investigated as substrates for hot slumping, with residual figure errors of hundreds of nanometres. In this view, mirror segment fabrication could be envisaged as a simple two-phase process: a first replica step based on hot slumping (direct or indirect), followed by ion beam figuring as a post-fabrication correction method. The first ion beam figuring trials, performed on flat samples, showed that the micro-roughness is not degraded, but a deeper analysis is necessary to characterize and eventually control or compensate the glass shape variations. In this paper, we present advancements in the process definition, both on flat and slumped glass samples.
Effect of foot shape on the three-dimensional position of foot bones.
Ledoux, William R; Rohr, Eric S; Ching, Randal P; Sangeorzan, Bruce J
2006-12-01
To eliminate some of the ambiguity in describing foot shape, we developed three-dimensional (3D), objective measures of foot type based on computerized tomography (CT) scans. Feet were classified via clinical examination as pes cavus (high arch), neutrally aligned (normal arch), asymptomatic pes planus (flat arch with no pain), or symptomatic pes planus (flat arch with pain). We enrolled 10 subjects of each foot type; if both feet were of the same foot type, then each foot was scanned (n=65 total). Partial weightbearing (20% body weight) CT scans were performed. We generated embedded coordinate systems for each foot bone by assuming uniform density and calculating the inertial matrix. Cardan angles were used to describe five bone-to-bone relationships, resulting in 15 angular measurements. Significant differences were found among foot types for 12 of the angles. The angles were also used to develop a classification tree analysis, which determined the correct foot type for 64 of the 65 feet. Our measure provides insight into how foot bone architecture differs between foot types. The classification tree analysis demonstrated that objective measures can be used to discriminate between feet with high, normal, and low arches. Copyright (c) 2006 Orthopaedic Research Society.
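The embedded-coordinate-system step described above (uniform density, inertial matrix) can be sketched as an eigendecomposition of the inertia matrix of a bone's point cloud. The function name and the unit-point-mass simplification are illustrative, not the authors' implementation:

```python
import numpy as np

def embedded_axes(points):
    """Embedded coordinate system for a bone represented as a point cloud
    of assumed uniform density: the principal axes are the eigenvectors of
    the inertia matrix about the centroid."""
    pts = np.asarray(points, dtype=float)
    c = pts.mean(axis=0)          # centroid = coordinate-system origin
    r = pts - c
    # Inertia matrix for unit point masses: I = sum(|r|^2) E - r^T r
    I = np.einsum('ij,ij->', r, r) * np.eye(3) - r.T @ r
    w, v = np.linalg.eigh(I)      # eigenvalues in ascending order
    return c, v                   # columns of v are the principal axes
```

With one such frame per bone, the bone-to-bone relationships reported in the study can then be expressed as Cardan angle decompositions of the relative rotation between two frames.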
Open/closed string duality and relativistic fluids
NASA Astrophysics Data System (ADS)
Niarchos, Vasilis
2016-07-01
We propose an open/closed string duality in general backgrounds extending previous ideas about open string completeness by Ashoke Sen. Our proposal sets up a general version of holography that works in gravity as a tomographic principle. We argue, in particular, that previous expectations of a supergravity/Dirac-Born-Infeld (DBI) correspondence are naturally embedded in this conjecture and can be tested in a well-defined manner. As an example, we consider the correspondence between open string field theories on extremal D-brane setups in flat space in the large-N , large 't Hooft limit, and asymptotically flat solutions in ten-dimensional type II supergravity. We focus on a convenient long-wavelength regime, where specific effects of higher-spin open string modes can be traced explicitly in the dual supergravity computation. For instance, in this regime we show how the full Abelian DBI action arises from supergravity as a straightforward reformulation of relativistic hydrodynamics. In the example of a (2 +1 )-dimensional open string theory this reformulation involves an Abelian Hodge duality. We also point out how different deformations of the DBI action, related to higher-derivative corrections and non-Abelian effects, can arise in this context as deformations in corresponding relativistic hydrodynamics.
A brief test of the Hewlett-Packard MEMS seismic accelerometer
Homeijer, Brian D.; Milligan, Donald J.; Hutt, Charles R.
2014-01-01
Testing was performed on a prototype of the Hewlett-Packard (HP) Micro-Electro-Mechanical Systems (MEMS) seismic accelerometer at the U.S. Geological Survey's Albuquerque Seismological Laboratory. This prototype was built using discrete electronic components. The self-noise level was measured during low seismic background conditions and found to be 9.8 ng/√Hz at periods below 0.2 s (frequencies above 5 Hz). The six-second microseism noise was also discernible. The HP MEMS accelerometer was compared to a Geotech Model GS-13 reference seismometer during seismic noise and signal levels well above the self-noise of the accelerometer. Matching power spectral densities (corrected for accelerometer and seismometer responses to represent true ground motion) indicated that the HP MEMS accelerometer has a flat (constant) response to acceleration from 0.0125 Hz to at least 62.5 Hz. Tilt calibrations of the HP MEMS accelerometer verified that the flat response to acceleration extends to 0 Hz. Future development of the HP MEMS accelerometer includes replacing the discrete electronic boards with a low-power application-specific integrated circuit (ASIC) and increasing the dynamic range of the sensor to detect strong motion signals above one gravitational acceleration, while maintaining the self-noise observed during these tests.
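The response-corrected power-spectral-density comparison described above can be sketched, under assumptions, as a windowed periodogram divided by the squared magnitude of an instrument response. The helper `ground_motion_psd` and its response-callable interface are hypothetical, not HP's or the USGS's code:

```python
import numpy as np

def ground_motion_psd(record, fs, response):
    """One-sided periodogram of a recorded trace, corrected by an
    instrument response evaluated on the same frequency grid (assumed to
    be supplied as complex counts per m/s^2), to recover a true
    ground-motion PSD for comparison between sensors."""
    n = len(record)
    w = np.hanning(n)
    x = record - record.mean()
    spec = np.fft.rfft(x * w)
    u = (w ** 2).sum() * fs               # Hann window power normalization
    psd_counts = (np.abs(spec) ** 2) / u
    psd_counts[1:-1] *= 2.0               # fold negative frequencies
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    h = response(f)                       # complex instrument response
    return f, psd_counts / np.maximum(np.abs(h) ** 2, 1e-30)
```

Two sensors with different responses, processed this way against their own response curves, should yield matching ground-motion PSDs over their common flat band, which is the comparison reported above.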
Calculating forces on thin flat plates with incomplete vorticity-field data
NASA Astrophysics Data System (ADS)
Limacher, Eric; Morton, Chris; Wood, David
2016-11-01
Optical experimental techniques such as particle image velocimetry (PIV) permit detailed quantification of velocities in the wakes of bluff bodies. Patterns in the wake development are significant to force generation, but it is not trivial to quantitatively relate changes in the wake to changes in measured forces. Key difficulties in this regard include: (i) accurate quantification of velocities close to the body, and (ii) the effect of missing velocity or vorticity data in regions where optical access is obscured. In the present work, we consider force formulations based on the vorticity field, wherein mathematical manipulation eliminates the need for accurate near-body velocity information. Attention is restricted to nominally two dimensional problems, namely (i) a linearly accelerating flat plate, investigated using PIV in a water tunnel, and (ii) a pitching plate in a freestream flow, as investigated numerically by Wang & Eldredge (2013). The effect of missing vorticity data on the pressure side of the plate has a significant impact on the calculation of force for the pitching plate test case. Fortunately, if the vorticity on the pressure side remains confined to a thin boundary layer, simple corrections can be applied to recover a force estimate.
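A vorticity-based force formulation of the kind referred to above rests on the impulse relation F = -ρ dP/dt, with P the first moment of vorticity. Below is a minimal 2-D sketch on a uniform grid; the cited formulations include additional boundary and body-motion terms that this sketch omits:

```python
import numpy as np

def vortical_impulse_2d(x, y, omega):
    """First moment of vorticity (2-D vortical impulse per unit density):
    P = integral of (x cross omega) dA, evaluated on a uniform grid by
    the midpoint rule. omega[i, j] is the vorticity at (x[i], y[j])."""
    dx = x[1] - x[0]
    dy = y[1] - y[0]
    X, Y = np.meshgrid(x, y, indexing='ij')
    # In 2-D, x cross (omega z_hat) = (y*omega, -x*omega)
    Px = np.sum(Y * omega) * dx * dy
    Py = -np.sum(X * omega) * dx * dy
    return np.array([Px, Py])

def force_from_impulse(P_prev, P_next, dt, rho=1000.0):
    """Force from the rate of change of impulse between two PIV snapshots
    (rho defaults to water, in kg/m^3, as an assumption)."""
    return -rho * (P_next - P_prev) / dt
```

The sensitivity to missing pressure-side vorticity discussed above enters directly through the moment integral: vorticity hidden from the camera simply drops out of P, which is why a thin-boundary-layer correction can be applied when that confinement assumption holds.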
Supernatural inflation: inflation from supersymmetry with no (very) small parameters
NASA Astrophysics Data System (ADS)
Randall, Lisa; Soljačić, Marin; Guth, Alan H.
1996-02-01
Most models of inflation have small parameters, either to guarantee sufficient inflation or the correct magnitude of the density perturbations. In this paper we show that, in supersymmetric theories with weak-scale supersymmetry breaking, one can construct viable inflationary models in which the requisite parameters appear naturally in the form of ratios of mass scales that are already present in the theory. Successful inflationary models can be constructed from the flat-direction fields of a renormalizable supersymmetric potential, and such models can be realized even in the context of a simple GUT extension of the MSSM. We evade naive "naturalness" arguments by allowing for more than one field to be relevant to inflation, as in "hybrid inflation" models, and we argue that this is the most natural possibility if the inflaton fields are to be associated with flat-direction fields of a supersymmetric theory. Such models predict a very low Hubble constant during inflation, of order 10^3-10^4 GeV, a scalar density perturbation index n that is very close to or greater than unity, and negligible tensor perturbations. In addition, these models lead to a large spike in the density perturbation spectrum at short wavelengths.
Nonlocal quantum effective actions in Weyl-Flat spacetimes
NASA Astrophysics Data System (ADS)
Bautista, Teresa; Benevides, André; Dabholkar, Atish
2018-06-01
Virtual massless particles in quantum loops lead to nonlocal effects which can have interesting consequences, for example, for primordial magnetogenesis in cosmology or for computing finite-N corrections in holography. We describe how the quantum effective actions summarizing these effects can be computed efficiently for Weyl-flat metrics by integrating the Weyl anomaly or, equivalently, the local renormalization group equation. This method relies only on the local Schwinger-DeWitt expansion of the heat kernel and allows for a re-summation of the anomalous leading large logarithms of the scale factor, log a(x), in situations where the Weyl factor changes by several e-foldings. As an illustration, we obtain the quantum effective action for the Yang-Mills field coupled to massless matter, and for the self-interacting massless scalar field. Our action reduces to the nonlocal action obtained using the Barvinsky-Vilkovisky covariant perturbation theory in the regime R^2 ≪ ∇^2 R for a typical curvature scale R, but has a greater range of validity, effectively re-summing the covariant perturbation theory to all orders in curvatures. In particular, it is applicable also in the opposite regime R^2 ≫ ∇^2 R, which is often of interest in cosmology.
On the flux of fluctuation energy in a collisional grain flow at a flat, frictional wall
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jenkins, J.T.; Louge, M.Y.
We consider a flow of colliding spheres that interacts with a flat, frictional wall and calculate the flux of fluctuation energy in two limits. In the first limit, all spheres slide upon contact with the wall. Here, we refine the calculations of Jenkins [J. Appl. Mech. 59, 120 (1992)] and show that a correlation between two orthogonal components of the fluctuation velocity of the point of contact of the grains with the wall provides a substantial correction to the flux originally predicted. In the other limit, the granular material is agitated but the mean velocity of the contact points with respect to the wall is zero, and Jenkins' earlier calculation is improved by distinguishing between those contacts that slide in a collision and those that stick. The new expressions for the flux agree well with the computer simulations of Louge [Phys. Fluids 6, 2253 (1994)]. Finally, we extend the expression for zero mean sliding to incorporate small sliding and obtain an approximate expression for the flux between the two limits. © 1997 American Institute of Physics.
Bending of Light in Modified Gravity at Large Distances
NASA Technical Reports Server (NTRS)
Sultana, Joseph; Kazanas, Demosthenes
2012-01-01
We discuss the bending of light in a recent model for gravity at large distances containing a Rindler-type acceleration proposed by Grumiller. We consider the static, spherically symmetric metric with cosmological constant and Rindler-like term 2ar presented in this model, and we use the procedure of Rindler and Ishak to obtain the bending angle of light in this metric. Earlier work on light bending in this model by Carloni, Grumiller, and Preis, using the method normally employed for asymptotically flat space-times, led to a conflicting result (caused by the Rindler-like term in the metric): a bending angle that increases with the distance of closest approach r_0 of the light ray from the centrally concentrated, spherically symmetric matter distribution. However, using the alternative approach for light bending in non-asymptotically flat space-times, we show that the linear Rindler-like term produces a small correction to the general relativistic result that is inversely proportional to r_0. This will in turn affect the bounds on the Rindler acceleration obtained earlier from light bending and casts doubt on the nature of the linear term 2ar in the metric.
Geldon, A.L.
1993-01-01
Boreholes UE-25c #1, UE-25c #2, and UE-25c #3 (collectively called the C-holes) each were drilled to a depth of 914.4 meters at Yucca Mountain, on the Nevada Test Site, in 1983 and 1984 for the purpose of conducting aquifer and tracer tests. Each of the boreholes penetrated the Paintbrush Tuff and the tuffs and lavas of Calico Hills and bottomed in the Crater Flat Tuff. The geologic units penetrated consist of devitrified to vitrophyric, nonwelded to densely welded, ash-flow tuff, tuff breccia, ash-fall tuff, and bedded tuff. Below the water table, which is at an average depth of 401.6 meters below land surface, the rocks are argillic and zeolitic. The geologic units at the C-hole complex strike N. 2° W. and dip 15° to 21° NE. They are cut by several faults, including the Paintbrush Canyon Fault, a prominent normal fault oriented S. 9° W., dipping 52.2° NW. The rocks at the C-hole complex are fractured extensively, with most fractures oriented approximately perpendicular to the direction of regional least horizontal principal stress. In the Crater Flat Tuff and the tuffs and lavas of Calico Hills, fractures strike predominantly between S. 20° E. and S. 20° W. and secondarily between S. 20° E. and S. 60° E. In the Topopah Spring Member of the Paintbrush Tuff, however, southeasterly striking fractures predominate. Most fractures are steeply dipping, although shallowly dipping fractures occur in nonwelded and reworked tuff intervals of the Crater Flat Tuff. Mineral-filled fractures are common in the tuff breccia zone of the Tram Member of the Crater Flat Tuff and also in the welded tuff zone of the Bullfrog Member of the Crater Flat Tuff. The fracture density of geologic units in the C-holes was estimated to range from 1.3 to 7.6 fractures per cubic meter. Most of these estimates appear to be of the correct order of magnitude when compared to transect measurements, although comparisons with core data from other boreholes suggest some estimates may be 1 to 3 orders of magnitude too low.
Geophysical data and laboratory analyses were used to determine matrix hydrologic properties of the tuffs and lavas of Calico Hills and the Crater Flat Tuff in the C-holes. The porosity ranged from 12 to 43 percent and, on average, was larger in nonwelded to partially welded ash-flow tuff, ash-fall tuff, and reworked tuff than in moderately to densely welded ash-flow tuff. The pore-scale horizontal permeability of nine samples ranged from 5.7×10^-3 to 2.9 millidarcies, and the pore-scale vertical permeability of these samples ranged from 3.7×10^-3 to 1.5 millidarcies. Ratios of pore-scale horizontal to vertical permeability generally ranged from 0.7 to 2. Although the number of samples was small, the values of pore-scale permeability determined were consistent with samples from other boreholes at Yucca Mountain. The specific storage of nonwelded to partially welded ash-flow tuff, ash-fall tuff, and reworked tuff was estimated from porosity and elasticity to be 2×10^-6 per meter, twice the specific storage of moderately to densely welded ash-flow tuff and tuff breccia. The storativity of geologic units, based on their average thickness (corrected for bedding dip) and specific storage, was estimated to range from 1×10^-5 to 2×10^-4. Ground-water flow in the Tertiary rocks of the Yucca Mountain area is not confined by strata but appears to result from the random intersection of water-bearing fractures and faults. Even at the C-hole complex, an area of only 1,027 square meters, water-producing zones during pumping tests vary from borehole to borehole. In borehole UE-25c #1, water is produced mainly from the lower, nonwelded to welded zone of the Bullfrog Member of the Crater Flat Tuff and secondarily from the tuff-breccia zone of the Tram Member of the Crater Flat Tuff. In borehole UE-25c #3, water is produced in nearly equal proportions from these two intervals and the central, moderately to densely welded zone of the Bullfrog Member.
In borehole UE-25c #2, almost all production comes from the moderately to densely welded zone of the Bullfrog Member.
Near-station terrain corrections for gravity data by a surface-integral technique
Gettings, M.E.
1982-01-01
A new method of computing gravity terrain corrections by use of a digitizer and digital computer can result in substantial savings in the time and manual labor required to perform such corrections by conventional manual ring-chart techniques. The method is typically applied to estimate terrain effects for topography near the station, for example within 3 km of the station, although it has been used successfully to a radius of 15 km to estimate corrections in areas where topographic mapping is poor. Points (about 20) that define topographic maxima, minima, and changes in slope gradient are picked on the topographic map within the desired correction radius about the station. Particular attention must be paid to the area immediately surrounding the station to ensure a good topographic representation. The horizontal and vertical coordinates of these points are entered into the computer, usually by means of a digitizer. The computer then fits a multiquadric surface to the input points to form an analytic representation of the surface. By means of the divergence theorem, the gravity effect of an interior closed solid can be expressed as a surface integral, and the terrain correction is calculated by numerical evaluation of the integral over the surfaces of a cylinder: the vertical sides at the correction radius about the station, the flat bottom surface at the topographic minimum, and the upper surface given by the multiquadric equation. The method has been tested with favorable results against models for which an exact result is available and against manually computed corrections at field-station locations in areas of rugged topography. By increasing the number of points defining the topographic surface, any desired degree of accuracy can be obtained. The method is more objective than manual ring-chart techniques because no average compartment elevations need be estimated.
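The multiquadric surface fit at the heart of the method can be sketched as a radial-basis linear solve through the digitized topographic points. The shape parameter `eps` and the function names are assumptions for illustration, not values from the paper:

```python
import numpy as np

def fit_multiquadric(xy, z, eps=1.0):
    """Fit Hardy's multiquadric surface
        z(x) = sum_j c_j * sqrt(|x - x_j|^2 + eps^2)
    through scattered topographic points, giving an analytic surface that
    can then be integrated. Returns the coefficients c_j."""
    xy = np.asarray(xy, float)
    d2 = ((xy[:, None, :] - xy[None, :, :]) ** 2).sum(-1)
    A = np.sqrt(d2 + eps ** 2)          # interpolation matrix
    return np.linalg.solve(A, np.asarray(z, float))

def eval_multiquadric(coef, xy_nodes, xy_query, eps=1.0):
    """Evaluate the fitted surface at arbitrary query points."""
    xy_nodes = np.asarray(xy_nodes, float)
    q = np.asarray(xy_query, float)
    d2 = ((q[:, None, :] - xy_nodes[None, :, :]) ** 2).sum(-1)
    return np.sqrt(d2 + eps ** 2) @ coef
```

Because the fitted surface interpolates the input points exactly, adding more points defining the topography tightens the representation, which is the accuracy-control knob the abstract mentions.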
Light Noble Gas Abundances in the Solar Wind Trapped by Chondritic Metal
NASA Astrophysics Data System (ADS)
Murer, Ch.; Bauer, H.; Wieler, R.
1995-09-01
The heavy solar noble gases Ar-Xe are retained elementally unfractionated relative to the incoming solar corpuscular radiation in lunar soils, as is shown by the flat profiles of Ar/Kr and Kr/Xe throughout closed-system stepped etch extractions [1, 2]. In contrast, He/Ar and Ne/Ar reach present-day solar wind (SW) values only towards the end of the runs, indicating that the well-known fractionating losses of solar He and Ne from lunar samples affect the shallowly sited SW component but not the more deeply implanted SEP (solar energetic particles). Rather flat He/Ar and Ne/Ar profiles were previously observed in stepped etchings of metallic Fe-Ni from solar-gas-rich meteorites [3-5], suggesting that Fe-Ni retains unfractionated He, Ne, and Ar from SW and SEP. Most runs showed some variation in elemental ratios, possibly due to (i) experiment-induced fractionation, (ii) the different penetration depths of the various gases [4], or (iii) variable elemental abundances in SW and SEP. The results of a repeat run on a Fe-Ni separate from the H chondrite Fayetteville are shown in Fig. 1. The ^20Ne/^36Ar ratio is essentially flat and most values fall in the range of 48.5 +/- 7 of the modern SW [6]. The low values in the last three steps are presumably due to fractionated solar noble gases released from silicate impurities by copper chloride in these final extractions of about 10 days each, since the lowest value is close to that in bulk samples. We thus cannot confirm a real variation of Ne/Ar with grain depth. The He/Ar pattern is similar to Ne/Ar except that the values of individual steps scatter considerably more. Flat profiles as in Fig. 1 strongly suggest that the average ratios deduced from meteoritic Fe-Ni (in some cases slightly corrected, e.g. for contributions from silicates) yield good estimates of the relative light noble gas abundances in SW and SEP trapped by chondritic regoliths. Table 1 shows best values deduced from three chondrites (two runs each).
These values differ by less than about 15% from those reported for present day SW and for solar gases in Acfer111 metal [4]. Remarkable is the good agreement of Ne/Ar deduced from meteorites with the SWC ratio, since the derivation of the latter value involved an about 40% correction for solar ^36Ar released from lunar soil and retrapped into the aluminium foils. References: [1] Wieler R. et al. (1993) LPS XXIV, 1519. [2] Wieler R. and Baur H. (1995) Astrophys. J., in press. [3] Murer Ch. et al. (1994) Meteoritics, 29, 506. [4] Pedroni A. and Begemann F. (1994) Meteoritics, 29, 632. [5] Murer Ch. (1995) Ph.D. thesis, ETH Zurich, #10964. [6] Cerutti H. (1974) Ph.D. thesis, Univ. Bern.
Pérez-Olea, J
1991-09-01
In 1566 Alonso de Villadiego was nominated by the Chilean Cabildo as "Adviser and Examiner in Surgery". By means of this edict, the Spanish Crown paralleled its classical health organization, inspired by rules dating from the 13th century. The Hospital del Socorro was the focal point of these activities. It prospered under the administration of the "San Juan de Dios" monks (1617), who rebaptized the hospital with their name. During the administration of the Universidad de San Felipe (1738-1839), the Protomedicato followed the standards imposed by the Chair of Prima Medicina. Domingo Nevin, a Frenchman, and José Antonio Ríos, a Chilean, were the first and last doctors in charge of this task. Ríos conducted the antivariolic campaign, supervised the "Midwifery Law", and controlled medical and paramedical practice. Afterwards, the institution plunged into a profound crisis, to reflourish in 1833 when it was incorporated into the structure of the School of Medicine. Blest, Cox, Bustillos, and Moran were the architects of its splendour. With the foundation of the Universidad de Chile in 1842, its Faculty of Medicine took over the Protomedicato's functions. The institution came to an end in 1892.
[On two antique medical texts].
Rosa, Maria Carlota
2005-01-01
The two texts presented here--Regimento proueytoso contra ha pestenença [literally, "useful regime against pestilence"] and Modus curandi cum balsamo ["curing method using balm"]--represent the extent of Portugal's known medical library until circa 1530, produced in gothic letters by foreign printers: Germany's Valentim Fernandes, perhaps the era's most important printer, who worked in Lisbon between 1495 and 1518, and Germão Galharde, a Frenchman who practiced his trade in Lisbon and Coimbra between 1519 and 1560. Modus curandi, which came to light in 1974 thanks to bibliophile José de Pina Martins, is anonymous. Johannes Jacobi is believed to be the author of Regimento proueytoso, which was translated into Latin (Regimen contra pestilentiam), French, and English. Both texts are presented here in facsimile and in modern Portuguese, while the first has also been reproduced in archaic Portuguese using modern typographical characters. This philological venture into sixteenth-century medicine is supplemented by a scholarly glossary which serves as a valuable tool in interpreting not only Regimento proueytoso but also other texts from the era. Two articles place these documents in historical perspective.
A cosmology-independent calibration of type Ia supernovae data
NASA Astrophysics Data System (ADS)
Hauret, C.; Magain, P.; Biernaux, J.
2018-06-01
Recently, the common methodology used to transform type Ia supernovae (SNe Ia) into genuine standard candles has come under criticism. Indeed, it assumes a particular cosmological model (namely flat ΛCDM) to calibrate the standardisation correction parameters, i.e. the dependence of the supernova peak absolute magnitude on its colour, post-maximum decline rate, and host galaxy mass. As a result, this assumption could make the data compliant with the assumed cosmology and thus nullify all previous work on model comparison. In this work, we test the viability of these hypotheses by developing a cosmology-independent approach to standardise SNe Ia data from the recent JLA compilation. Our resulting corrections turn out to be very close to the ΛCDM-based corrections. Therefore, even if a ΛCDM-based calibration is questionable from a theoretical point of view, the potential compliance of SNe Ia data does not occur in practice for the JLA compilation. Previous model-comparison works based on these data do not have to be called into question. However, as this cosmology-independent standardisation method has the same degree of complexity as the model-dependent one, it is worth using in future works, especially if smaller samples are considered, such as the superluminous type Ic supernovae.
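The standardisation corrections discussed above follow the usual Tripp-style relation between peak magnitude, decline rate, and colour. A minimal sketch, with illustrative (not the paper's fitted) parameter values, and omitting the host-galaxy-mass step that the paper also considers:

```python
import numpy as np

def tripp_distance_modulus(m_B, x1, c, alpha=0.14, beta=3.1, M_B=-19.1):
    """Tripp standardisation of SNe Ia:
        mu = m_B - M_B + alpha * x1 - beta * c
    where m_B is the peak apparent magnitude, x1 the stretch
    (decline-rate) parameter, and c the colour. alpha, beta, M_B are the
    correction parameters whose calibration the paper makes
    cosmology-independent; the defaults here are only illustrative."""
    return np.asarray(m_B) - M_B + alpha * np.asarray(x1) - beta * np.asarray(c)
```

The paper's point is precisely that alpha and beta can be fitted without assuming a flat ΛCDM distance-redshift relation, and that the resulting values turn out to be close to the ΛCDM-calibrated ones.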
Intensity inhomogeneity correction of SD-OCT data using macular flat space.
Lang, Andrew; Carass, Aaron; Jedynak, Bruno M; Solomon, Sharon D; Calabresi, Peter A; Prince, Jerry L
2018-01-01
Images of the retina acquired using optical coherence tomography (OCT) often suffer from intensity inhomogeneity problems that degrade both the quality of the images and the performance of automated algorithms utilized to measure structural changes. This intensity variation has many causes, including off-axis acquisition, signal attenuation, multi-frame averaging, and vignetting, making it difficult to correct the data in a fundamental way. This paper presents a method for inhomogeneity correction by acting to reduce the variability of intensities within each layer. In particular, the N3 algorithm, which is popular in neuroimage analysis, is adapted to work for OCT data. N3 works by sharpening the intensity histogram, which reduces the variation of intensities within different classes. To apply it here, the data are first converted to a standardized space called macular flat space (MFS). MFS allows the intensities within each layer to be more easily normalized by removing the natural curvature of the retina. N3 is then run on the MFS data using a modified smoothing model, which improves the efficiency of the original algorithm. We show that our method more accurately corrects gain fields on synthetic OCT data when compared to running N3 on non-flattened data. It also reduces the overall variability of the intensities within each layer, without sacrificing contrast between layers, and improves the performance of registration between OCT images. Copyright © 2017 Elsevier B.V. All rights reserved.
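The conversion to macular flat space can be illustrated by shifting each A-scan so that a delineated retinal boundary becomes a horizontal line. This integer-shift sketch is a deliberate simplification of the published method (which handles full 3-D volumes and more than one boundary):

```python
import numpy as np

def flatten_to_surface(bscan, surface_rows):
    """Shift each A-scan (column) of a B-scan so that a delineated
    boundary lands on a single flat row, removing the retina's natural
    curvature before layer-wise intensity normalisation. surface_rows[j]
    is the detected boundary row in column j. Integer shifts only."""
    img = np.asarray(bscan, float)
    target = int(round(np.median(surface_rows)))
    flat = np.zeros_like(img)
    for j, s in enumerate(surface_rows):
        flat[:, j] = np.roll(img[:, j], target - int(s))
    return flat
```

Once the layers run horizontally, each layer occupies a fixed band of rows, so an N3-style histogram-sharpening gain estimate can act on intensities that belong to the same tissue class.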
High-fidelity artifact correction for cone-beam CT imaging of the brain
NASA Astrophysics Data System (ADS)
Sisniega, A.; Zbijewski, W.; Xu, J.; Dang, H.; Stayman, J. W.; Yorkston, J.; Aygun, N.; Koliatsos, V.; Siewerdsen, J. H.
2015-02-01
CT is the frontline imaging modality for diagnosis of acute traumatic brain injury (TBI), involving the detection of fresh blood in the brain (contrast of 30-50 HU, detail size down to 1 mm) in a non-contrast-enhanced exam. A dedicated point-of-care imaging system based on cone-beam CT (CBCT) could benefit early detection of TBI and improve direction to appropriate therapy. However, flat-panel detector (FPD) CBCT is challenged by artifacts that degrade contrast resolution and limit application in soft-tissue imaging. We present and evaluate a fairly comprehensive framework for artifact correction to enable soft-tissue brain imaging with FPD CBCT. The framework includes a fast Monte Carlo (MC)-based scatter estimation method complemented by corrections for detector lag, veiling glare, and beam hardening. The fast MC scatter estimation combines GPU acceleration, variance reduction, and simulation with a low number of photon histories and reduced number of projection angles (sparse MC) augmented by kernel de-noising to yield a runtime of ~4 min per scan. Scatter correction is combined with two-pass beam hardening correction. Detector lag correction is based on temporal deconvolution of the measured lag response function. The effects of detector veiling glare are reduced by deconvolution of the glare response function representing the long range tails of the detector point-spread function. The performance of the correction framework is quantified in experiments using a realistic head phantom on a testbench for FPD CBCT. Uncorrected reconstructions were non-diagnostic for soft-tissue imaging tasks in the brain. After processing with the artifact correction framework, image uniformity was substantially improved, and artifacts were reduced to a level that enabled visualization of ~3 mm simulated bleeds throughout the brain. 
Non-uniformity (cupping) was reduced by a factor of 5, and contrast of simulated bleeds was improved from ~7 to 49.7 HU, in good agreement with the nominal blood contrast of 50 HU. Although noise was amplified by the corrections, the contrast-to-noise ratio (CNR) of simulated bleeds was improved by nearly a factor of 3.5 (CNR = 0.54 without corrections and 1.91 after correction). The resulting image quality motivates further development and translation of the FPD-CBCT system for imaging of acute TBI.
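The contrast and CNR figures quoted above follow from standard region-of-interest statistics; a minimal sketch (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def contrast_and_cnr(image, roi_mask, bg_mask):
    """Contrast (HU) and contrast-to-noise ratio of a lesion ROI vs. background."""
    roi = image[roi_mask]
    bg = image[bg_mask]
    contrast = roi.mean() - bg.mean()   # HU difference between bleed and brain
    cnr = abs(contrast) / bg.std()      # noise estimated from the background ROI
    return contrast, cnr
```

With these definitions, the reported change from CNR = 0.54 to 1.91 corresponds to the stated factor of ~3.5 improvement.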
Cosmological backreaction within the Szekeres model and emergence of spatial curvature
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bolejko, Krzysztof, E-mail: krzysztof.bolejko@sydney.edu.au
This paper discusses the phenomenon of backreaction within the Szekeres model. Cosmological backreaction describes how the mean global evolution of the Universe deviates from the Friedmannian evolution. The analysis is based on models of a single cosmological environment and the global ensemble of the Szekeres models (of the Swiss-Cheese type and Styrofoam type). The obtained results show that non-linear growth of cosmic structures is associated with the growth of the spatial curvature Ω_R (in the FLRW limit Ω_R → Ω_k). If averaged over global scales, the result depends on the assumed global model of the Universe. Within the Swiss-Cheese model, which does have a fixed background, the volume average follows the evolution of the background, and the global spatial curvature averages out to zero (the background model is the ΛCDM model, which is spatially flat). In the Styrofoam-type model, which does not have a fixed background, the mean evolution deviates from the spatially flat ΛCDM model, and the mean spatial curvature evolves from Ω_R = 0 at the CMB to Ω_R ∼ 0.1 at z = 0. If the Styrofoam-type model correctly captures evolutionary features of the real Universe, then one should expect that in our Universe the spatial curvature should build up (local growth of cosmic structures) and its mean global average should deviate from zero (backreaction). As a result, this paper predicts that the low-redshift Universe should not be spatially flat (i.e. Ω_k ≠ 0, even if in the early Universe Ω_k = 0), and therefore when analysing low-z cosmological data one should keep Ω_k as a free parameter, independent of the CMB constraints.
NASA Astrophysics Data System (ADS)
Dai, Zhenzhen; Joshi, Bishnu P.; Gao, Zhenghong; Lee, Jeonghoon; Ghimire, Navin; Prabhu, Anoop; Wamsteker, Erik J.; Kwon, Richard S.; Elta, Grace H.; Appelman, Henry D.; Owens, Scott R.; Kuick, Rork; Turgeon, Kim K.; Wang, Thomas D.
2017-02-01
Early detection of precursor lesions for colorectal cancer can greatly improve survival. Pre-neoplasia can appear flat with conventional white light endoscopy. Sessile serrated adenomas (SSA) are precursor lesions found primarily in the proximal colon that frequently appear flat and indistinct. We performed a clinical study of n=37 patients using a multimodal endoscope with a FITC-labeled peptide specific for SSA. Lesions were imaged with white light, reflectance, and fluorescence. White light images were acquired before the peptide was applied and were used to help localize regions of abnormal tissue. Co-registered fluorescence and reflectance images were combined into ratio images to correct for distance. We calculated the target/background (T/B) ratio to quantify the images and found 2.3-fold greater fluorescence intensity for SSA compared with normal tissue. The T/B ratio for SSA was significantly greater than that for normal colonic mucosa, with 89.47% sensitivity and 91.67% specificity at a threshold of 1.22; the corresponding ROC curve for SSA versus normal mucosa had an area under the curve (AUC) of 0.93. SSA could also be distinguished from adenoma with 78.95% sensitivity and 90.48% specificity at a threshold of 1.66, with an AUC of 0.88. Our results show that a multimodal endoscope with a fluorescently labeled peptide can quantify images and performs especially well for the detection of SSA, a premalignant flat lesion conferring a high risk of progression to colon cancer.
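The ratio-image and threshold-classification steps described above can be sketched as follows (a minimal illustration; function names and the epsilon guard are assumptions, not the study's implementation):

```python
import numpy as np

def target_background_ratio(fluor, reflect, lesion_mask, eps=1e-6):
    """Distance-corrected ratio image and target/background (T/B) ratio.

    Dividing co-registered fluorescence by reflectance cancels the shared
    distance/illumination dependence, as described in the abstract.
    """
    ratio = fluor / (reflect + eps)
    tb = ratio[lesion_mask].mean() / ratio[~lesion_mask].mean()
    return ratio, tb

def sens_spec(tb_values, labels, threshold):
    """Sensitivity and specificity of classifying T/B values at a threshold."""
    pred = np.asarray(tb_values) >= threshold
    labels = np.asarray(labels, dtype=bool)
    sens = (pred & labels).sum() / labels.sum()
    spec = (~pred & ~labels).sum() / (~labels).sum()
    return sens, spec
```

Sweeping the threshold and plotting sensitivity against (1 - specificity) yields the ROC curves whose AUC values are reported above.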
Cosmological backreaction within the Szekeres model and emergence of spatial curvature
NASA Astrophysics Data System (ADS)
Bolejko, Krzysztof
2017-06-01
This paper discusses the phenomenon of backreaction within the Szekeres model. Cosmological backreaction describes how the mean global evolution of the Universe deviates from the Friedmannian evolution. The analysis is based on models of a single cosmological environment and the global ensemble of the Szekeres models (of the Swiss-Cheese type and Styrofoam type). The obtained results show that non-linear growth of cosmic structures is associated with the growth of the spatial curvature Ω_R (in the FLRW limit Ω_R → Ω_k). If averaged over global scales, the result depends on the assumed global model of the Universe. Within the Swiss-Cheese model, which does have a fixed background, the volume average follows the evolution of the background, and the global spatial curvature averages out to zero (the background model is the ΛCDM model, which is spatially flat). In the Styrofoam-type model, which does not have a fixed background, the mean evolution deviates from the spatially flat ΛCDM model, and the mean spatial curvature evolves from Ω_R = 0 at the CMB to Ω_R ~ 0.1 at z = 0. If the Styrofoam-type model correctly captures evolutionary features of the real Universe, then one should expect that in our Universe the spatial curvature should build up (local growth of cosmic structures) and its mean global average should deviate from zero (backreaction). As a result, this paper predicts that the low-redshift Universe should not be spatially flat (i.e. Ω_k ≠ 0, even if in the early Universe Ω_k = 0), and therefore when analysing low-z cosmological data one should keep Ω_k as a free parameter, independent of the CMB constraints.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Badwan, F.M.; Herring, K.S.
1993-08-01
Many of the buildings at the Rocky Flats Plant were designed and built before modern standards were developed, including standards for protection against extreme natural phenomena such as tornadoes, earthquakes, and floods. The purpose of the SEP is to establish an integrated approach to assessing the design adequacy of specific high- and moderate-hazard Rocky Flats facilities from a safety perspective and to establish a basis for defining any needed facility improvements. The SEP is to be carried out in three phases. In Phase 1, topics to be evaluated and an evaluation plan for each topic were developed. Any differences between Current Design Requirements (CDR) or acceptance criteria and the design of existing facilities will be identified during Phase 2 and assessed using an integrated systematic approach during Phase 3. The integrated assessment performed during Phase 3 provides a process for evaluating the differences between existing facility design and CDRs so that decisions on corrective actions can be made on the basis of relative risk reduction and cost effectiveness. These efforts will ensure that a balanced and integrated level of safety is achieved for long-term operation of these buildings. Through appropriate selection of topics and identification of the structures, systems, and components to be evaluated, the SEP will address outstanding design issues related to the prevention and mitigation of design basis accidents, including those arising from natural phenomena. The objective of the SEP is not to bring these buildings into strict compliance with current requirements, but rather to ensure that an adequate level of safety is achieved in an economical fashion.
NASA Astrophysics Data System (ADS)
Álvarez, Orlando; Gimenez, Mario; Folguera, Andres; Spagnotto, Silvana; Bustos, Emilce; Baez, Walter; Braitenberg, Carla
2015-11-01
Satellite-only gravity measurements and those integrated with terrestrial observations provide global gravity field models of unprecedented precision and spatial resolution, allowing analysis of the lithospheric structure. We used the model EGM2008 (Earth Gravitational Model) to calculate the gravity anomaly and the vertical gravity gradient in the South Central Andes region, correcting both quantities for the topographic effect. Both quantities show a spatial relationship between the projected subduction of the Copiapó aseismic ridge (located at about 27°-30° S), its potential deformational effects in the overriding plate, and the Ojos del Salado-San Buenaventura volcanic lineament. This volcanic lineament constitutes a projection of the volcanic arc toward the retroarc zone, whose origin and development were not clearly understood. The analysis of the gravity anomalies at the extrapolated zone of the Copiapó ridge beneath the continent shows a change from the general NNE trend of the Andean structures to an ENE direction coincident with the area of the Ojos del Salado-San Buenaventura volcanic lineament. This anomalous pattern over the upper plate is interpreted to be linked with the subduction of the Copiapó ridge. We explore the relation between deformational effects and volcanism at the northern Chilean-Pampean flat slab and the collision of the Copiapó ridge, on the basis of the Moho geometry and elastic thicknesses calculated from the new satellite GOCE data. Neotectonic deformation associated in previous works with volcanic eruptions along the Ojos del Salado-San Buenaventura volcanic lineament is here attributed to crustal doming imprinted by the subduction of the Copiapó ridge, as evidenced by crustal thickening at the sites of ridge inception along the trench.
Finally, we propose that the Copiapó ridge could have controlled the northern edge of the Chilean-Pampean flat slab, owing to its higher buoyancy, similarly to the control that the Juan Fernandez ridge exerts on the geometry of the flat slab farther south.
Magnetotelluric Data, Mid Valley, Nevada Test Site, Nevada
Williams, Jackie M.; Wallin, Erin L.; Rodriguez, Brian D.; Lindsey, Charles R.; Sampson, Jay A.
2007-01-01
The United States Department of Energy (DOE) and the National Nuclear Security Administration (NNSA) at their Nevada Site Office (NSO) are addressing ground-water contamination resulting from historical underground nuclear testing through the Environmental Management (EM) program and, in particular, the Underground Test Area (UGTA) project. One issue of concern is the nature of the somewhat poorly constrained pre-Tertiary geology and its effects on ground-water flow. Ground-water modelers would like to know more about the hydrostratigraphy and geologic structure to support a hydrostratigraphic framework model that is under development for the Rainier Mesa/Shoshone Mountain Corrective Action Unit (CAU). During 2003, the U.S. Geological Survey (USGS), in cooperation with the DOE and NNSA-NSO, collected and processed data at the Nevada Test Site in and near Yucca Flat (YF) to help define the character, thickness, and lateral extent of the pre-Tertiary confining units. We collected 51 magnetotelluric (MT) and audio-magnetotelluric (AMT) stations for that research. In early 2005 we extended that research with 26 additional MT data stations, located on and near Rainier Mesa and Shoshone Mountain (RM-SM). The new stations extended the area of the hydrogeologic study previously conducted in Yucca Flat. This work was done to help refine what is known about the character, thickness, and lateral extent of pre-Tertiary confining units. In particular, a major goal was to define the upper clastic confining unit (UCCU). The UCCU comprises late Devonian to Mississippian siliciclastic rocks assigned to the Eleana Formation and Chainman Shale. The UCCU underlies the Yucca Flat area and extends westward towards Shoshone Mountain, southward to Buckboard Mesa, and northward to Rainier Mesa. Late in 2005 we collected another 14 MT stations in Mid Valley and in northern Yucca Flat basin.
That work was done to better determine the extent and thickness of the UCCU near the southeastern RM-SM CAU boundary with the southwestern YF CAU, and also in the northern YF CAU. The purpose of this report is to release the MT data at those 14 stations. No interpretation of the data is included here.
Gravitational Radiation with a Positive Cosmological Constant
NASA Astrophysics Data System (ADS)
Bonga, Beatrice
Gravitational radiation is well understood in spacetimes that are asymptotically flat. However, our Universe is currently expanding at an accelerated rate, which is best described by including a positive cosmological constant, Λ, in Einstein's equations. Consequently, no matter how far one recedes from sources generating gravitational waves, spacetime curvature never dies off, and spacetime is not asymptotically flat. This dissertation provides first steps toward incorporating Λ in the study of gravitational radiation by analyzing linearized gravitational waves on a de Sitter background. Since the asymptotic structure of de Sitter spacetime is very different from that of Minkowski spacetime, many conceptual and technical difficulties arise. The limit Λ → 0 can be discontinuous: although energy carried by gravitational waves is always positive in Minkowski spacetime, it can be arbitrarily negative in de Sitter spacetime. Additionally, many of the standard techniques, including 1/r expansions, are no longer applicable. We generalize Einstein's celebrated quadrupole formula describing the power radiated on a flat background to de Sitter spacetime. Even a tiny Λ brings in qualitatively new features, such as contributions from pressure quadrupole moments. Nonetheless, corrections induced by Λ are O(√Λ t_c), with t_c the characteristic time scale of the source, and are negligible for current gravitational wave observatories. We demonstrate this explicitly for a binary system in a circular orbit. Radiative modes are encoded in the transverse-traceless part of the spatial components of a gravitational perturbation. When Λ = 0, one typically extracts these modes in the wave zone by projecting the gravitational perturbation onto the two-sphere orthogonal to the radial direction. We show that this method for waves emitted by spatially compact sources on Minkowski spacetime generically does not yield the transverse-traceless modes; not even infinitely far away.
However, the difference between the transverse-traceless and projected modes is non-dynamical and disappears from all physical observables. When one is interested in 'Coulombic' information not captured by the radiative modes, the projection method does not suffice. This is, for example, important for angular momentum carried by gravitational waves. This result relies on Bondi-type expansions for asymptotically flat spacetimes. Therefore, the projection method is not applicable to de Sitter spacetimes.
Quasiparticles and charge transfer at the two surfaces of the honeycomb iridate Na2IrO3
NASA Astrophysics Data System (ADS)
Moreschini, L.; Lo Vecchio, I.; Breznay, N. P.; Moser, S.; Ulstrup, S.; Koch, R.; Wirjo, J.; Jozwiak, C.; Kim, K. S.; Rotenberg, E.; Bostwick, A.; Analytis, J. G.; Lanzara, A.
2017-10-01
Direct experimental investigations of the low-energy electronic structure of the Na2IrO3 iridate insulator are sparse and draw two conflicting pictures. One relies on flat bands and a clear gap, the other involves dispersive states approaching the Fermi level, pointing to surface metallicity. Here, by a combination of angle-resolved photoemission, photoemission electron microscopy, and x-ray absorption, we show that the correct picture is more complex and involves an anomalous band, arising from charge transfer from Na atoms to Ir-derived states. Bulk quasiparticles do exist, but in one of the two possible surface terminations the charge transfer is smaller and they remain elusive.
A Numerical Investigation of the Burnett Equations Based on the Second Law
NASA Technical Reports Server (NTRS)
Comeaux, Keith A.; Chapman, Dean R.; MacCormack, Robert W.; Edwards, Thomas A. (Technical Monitor)
1995-01-01
The Burnett equations have been shown to potentially violate the second law of thermodynamics. The objective of this investigation is to correlate the numerical problems experienced by the Burnett equations with the negative production of entropy. The equations have had a long history of numerical instability to small-wavelength disturbances. Recently, Zhong corrected the instability problem and made solutions attainable for one-dimensional shock waves and hypersonic blunt bodies. Difficulties still exist, however, when attempting to solve hypersonic flat plate boundary layers and blunt body wake flows. Numerical experiments will include one-dimensional shock waves, quasi-one-dimensional nozzles, and expanding Prandtl-Meyer flows, and will specifically examine the entropy production for these cases.
Quantum detection of wormholes.
Sabín, Carlos
2017-04-06
We show how to use quantum metrology to detect a wormhole. A coherent state of the electromagnetic field experiences a phase shift with a slight dependence on the throat radius of a possible distant wormhole. We show that this tiny correction is, in principle, detectable by homodyne measurements after long propagation lengths for a wide range of throat radii and distances to the wormhole, even if the detection takes place very far away from the throat, where the spacetime is very close to a flat geometry. We use realistic parameters from state-of-the-art long-baseline laser interferometry, both Earth-based and space-borne. The scheme is, in principle, robust to optical losses and initial mixedness.
Hair follicle nevus in a 2-year old.
Motegi, Sei-ichiro; Amano, Hiroo; Tamura, Atsushi; Ishikawa, Osamu
2008-01-01
We report a 2-year-old boy with a soft, elastic, flat, elevated, skin-colored nodule on his nasal ala. Histologic examination revealed numerous small hair follicles in several stages of maturation in the dermis. Serial sections did not show any cartilage or a central epithelium-lined cystic structure. Based on the clinicopathologic findings, we diagnosed this lesion as a hair follicle nevus. Hair follicle nevus is quite rare. Histologically, the absence of cartilage and of a central epithelium-lined cystic structure is essential for distinguishing it from an accessory auricle and a trichofolliculoma, respectively. Awareness of the clinical and pathologic characteristics of hair follicle nevus is an aid to correct diagnosis.
Effect of ice contamination on liquid-nitrogen drops in film boiling
NASA Technical Reports Server (NTRS)
Schoessow, G. J.; Chmielewski, C. E.; Baumeister, K. J.
1977-01-01
Previously reported vaporization time data of liquid nitrogen drops in film boiling on a flat plate are about 30 percent shorter than predicted from standard laminar film boiling theory. This theory, however, had been found to successfully correlate the data for conventional fluids such as water, ethanol, benzene, or carbon tetrachloride. This paper presents experimental evidence that some of the discrepancy for cryogenic fluids results from ice contamination due to condensation. The data indicate a fairly linear decrease in droplet evaporation time with the diameter of the ice crystal residue. After correcting the raw data for ice contamination along with convection, a comparison of theory with experiment shows good agreement.
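Because the abstract reports a fairly linear decrease of evaporation time with ice-residue diameter, the contamination-free time can be estimated by a least-squares line evaluated at zero diameter. This is an illustrative sketch only, not the authors' exact correction procedure:

```python
import numpy as np

def ice_free_evaporation_time(ice_diam, evap_time):
    """Extrapolate measured droplet evaporation times to zero ice-residue diameter.

    Fits a straight line (evap_time vs. ice_diam) by least squares and returns
    the intercept, i.e. the estimated evaporation time with no ice contamination.
    """
    slope, intercept = np.polyfit(ice_diam, evap_time, 1)
    return intercept
```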
Effect of ice contamination of liquid-nitrogen drops in film boiling
NASA Technical Reports Server (NTRS)
Schoessow, G. J.; Chmielewski, C. E.; Baumeister, K. J.
1977-01-01
Previously reported vaporization time data of liquid nitrogen drops in film boiling on a flat plate are about 30 percent shorter than predicted from standard laminar film boiling theory. This theory, however, had been found to successfully correlate the data for conventional fluids such as water, ethanol, benzene, or carbon tetrachloride. Experimental evidence that some of the discrepancy for cryogenic fluids results from ice contamination due to condensation is presented. The data indicate a fairly linear decrease in droplet evaporation time with the diameter of the ice crystal residue. After correcting the raw data for ice contamination along with convection, a comparison of theory with experiment shows good agreement.
Application of the algebraic RNG model for transition simulation. [renormalization group theory
NASA Technical Reports Server (NTRS)
Lund, Thomas S.
1990-01-01
The algebraic form of the RNG model of Yakhot and Orszag (1986) is investigated as a transition model for the Reynolds-averaged boundary layer equations. It is found that the cubic equation for the eddy viscosity contains both a jump discontinuity and one spurious root. An as-yet-unpublished transformation to a quartic equation is shown to remove the numerical difficulties associated with the discontinuity, but only at the expense of merging the physical and spurious roots of the cubic. Jumps between the branches of the resulting multiple-valued solution are found to lead to oscillations in flat plate transition calculations. Aside from the oscillations, the transition behavior is qualitatively correct.
AVIRIS data quality for coniferous canopy chemistry
NASA Technical Reports Server (NTRS)
Swanberg, Nancy A.
1988-01-01
An assessment of AVIRIS data quality for studying coniferous canopy chemistry was made. Seven flightlines of AVIRIS data were acquired over a transect of coniferous forest sites in central Oregon. Both geometric and radiometric properties of the data were examined, including pixel size, swath width, spectral position, and signal-to-noise ratio. A flat-field correction was applied to AVIRIS data from a coniferous forest site. Future work with this data set will exclude data from spectrometers C and D due to low signal-to-noise ratios. Data from spectrometers A and B will be used to examine the relationship between the canopy chemical composition of the forest sites and AVIRIS spectral response.
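A flat-field correction of the kind mentioned above divides each pixel spectrum by the mean spectrum of a spectrally "flat" reference area, suppressing shared atmospheric and instrument features. A minimal sketch under that assumption (function name and cube layout are illustrative):

```python
import numpy as np

def flat_field_correct(cube, flat_rows, flat_cols, eps=1e-9):
    """Flat-field correction of an imaging-spectrometer cube (rows, cols, bands).

    The mean spectrum of the reference pixels is divided out of every pixel,
    so surface absorption features stand out relative to the flat field.
    """
    reference = cube[flat_rows, flat_cols, :].mean(axis=0)  # mean reference spectrum
    return cube / (reference + eps)
```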
Physics of heat pipe rewetting
NASA Technical Reports Server (NTRS)
Chan, S. H.
1992-01-01
Although several studies have been made to determine the rewetting characteristics of liquid films on heated rods, tubes, and flat plates, no solutions are yet available to describe the rewetting process of a hot plate subjected to a uniform heating. A model is presented to analyze the rewetting process of such plates with and without grooves. Approximate analytical solutions are presented for the prediction of the rewetting velocity and the transient temperature profiles of the plates. It is shown that the present rewetting velocity solution reduces correctly to the existing solution for the rewetting of an initially hot isothermal plate without heating from beneath the plate. Numerical solutions have also been obtained to validate the analytical solutions.
Clinical introduction of image lag correction for a cone beam CT system.
Stankovic, Uros; Ploeger, Lennert S; Sonke, Jan-Jakob; van Herk, Marcel
2016-03-01
Image lag in the flat-panel detector used for Linac-integrated cone beam computed tomography (CBCT) has a degrading effect on CBCT image quality. The most prominent visible artifact is the presence of bright semicircular structures in the transverse view of the scans, also known as the radar artifact. Several correction strategies have been proposed, but until now the clinical introduction of such corrections has remained unreported. In November 2013, the authors clinically implemented a previously proposed image lag correction on all of their machines at their main site in Amsterdam. The purpose of this study was to retrospectively evaluate the effect of the correction on the quality of CBCT images and to determine the required calibration frequency. Image lag was measured in five clinical CBCT systems (Elekta Synergy 4.6) using an in-house developed beam-interrupting device that stops the x-ray beam midway through the data acquisition of an unattenuated beam for calibration. A triple-exponential falling-edge response was fitted to the measured data and used to correct image lag in the projection images with an infinite impulse response filter. This filter, including an extrapolation for saturated pixels, was incorporated in the authors' in-house developed clinical CBCT reconstruction software. To investigate the short-term stability of the lag and associated parameters, a series of five image lag measurements over a period of three months was performed. For quantitative analysis, the authors retrospectively selected ten patients treated in the pelvic region. The apparent contrast was quantified in polar coordinates for scans reconstructed using the parameters obtained from different dates, with and without saturation handling. Visually, the radar artifact was minimal in scans reconstructed using the image lag correction, especially when saturation handling was used.
In patient imaging, there was a significant reduction of the apparent contrast from 43 ± 16.7 HU to 15.5 ± 11.9 HU without saturation handling, and to 9.6 ± 12.1 HU with saturation handling, depending on the date of the calibration. The image lag correction parameters were stable over a period of 3 months, indicating that infrequent calibration suffices. The computational load increased by approximately 10%, without endangering fast in-line reconstruction. The lag correction was successfully implemented clinically and removed most image lag artifacts, improving image quality.
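A multi-exponential falling-edge model like the one fitted above implies a simple recursive deconvolution per pixel. A minimal sketch (two exponential terms shown for brevity; coefficients and time constants are illustrative, not the calibrated values):

```python
import numpy as np

def lag_kernel(n_frames, b, tau):
    """Discrete lag impulse response: unit prompt response plus a
    multi-exponential tail (coefficients b, time constants tau, in frames)."""
    h = np.zeros(n_frames)
    h[0] = 1.0
    j = np.arange(1, n_frames)
    for bk, tk in zip(b, tau):
        h[1:] += bk * np.exp(-j / tk)
    return h

def correct_lag(measured, h):
    """Deconvolve the lag response from one pixel's frame-by-frame signal.

    Since h[0] == 1, the convolution y = x * h is inverted exactly by the
    recursion x[n] = y[n] - sum_{j>=1} h[j] * x[n-j].
    """
    x = np.zeros_like(measured, dtype=float)
    for n in range(len(measured)):
        tail = sum(h[j] * x[n - j] for j in range(1, n + 1))
        x[n] = measured[n] - tail
    return x
```

In practice this recursion runs over every detector pixel's projection-frame series; handling of saturated pixels, as in the study, requires an additional extrapolation step not shown here.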
The effect of the observer vantage point on perceived distortions in linear perspective images.
Todorović, Dejan
2009-01-01
Some features of linear perspective images may look distorted. Such distortions appear in two drawings by Jan Vredeman de Vries involving perceived elliptical, instead of circular, pillars and tilted, instead of upright, columns. Distortions may be due to factors intrinsic to the images, such as violations of the so-called Perkins's laws, or factors extrinsic to them, such as observing the images from positions different from their center of projection. When the correct projection centers for the two drawings were reconstructed, it was found that they were very close to the images and, therefore, practically unattainable in normal observation. In two experiments, enlarged versions of images were used as stimuli, making the positions of the projection centers attainable for observers. When observed from the correct positions, the perceived distortions disappeared or were greatly diminished. Distortions perceived from other positions were smaller than would be predicted by geometrical analyses, possibly due to flatness cues in the images. The results are relevant for the practical purposes of creating faithful impressions of 3-D spaces using 2-D images.
The spectral energy distribution of Zeta Puppis and HD 50896
NASA Technical Reports Server (NTRS)
Holm, A. V.; Cassinelli, J. P.
1977-01-01
The ultraviolet spectral energy distribution of the O5f star Zeta Pup and the WN5 star HD 50896 are derived from OAO-2 observations with the calibration of Bless, Code, and Fairchild (1976). An estimate of the interstellar reddening (0.12 magnitude) of the Wolf-Rayet star is determined from the size of the characteristic interstellar extinction bump at 4.6 inverse microns. After correction for extinction, both stars show a flat energy distribution in the ultraviolet. The distribution of HD 50896 from 1100 A to 2 microns is in good agreement with results of extended model atmospheres, but some uncertainty remains because of the interstellar-extinction correction. The absolute energy distribution of Zeta Pup is fitted by a 42,000-K plane-parallel model if the model's flux is adjusted for the effects of electron scattering in the stellar wind and for UV line blanketing that was determined empirically from high-resolution Copernicus satellite observations. To achieve this fit, it is necessary to push both the spectroscopically determined temperature and the ultraviolet calibration to the limits of their probable errors.
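The extinction correction mentioned above amounts to scaling the observed flux by the extinction, in magnitudes, at each wavelength. A minimal sketch (the per-wavelength extinction value must come from an assumed extinction curve together with the derived reddening of 0.12 magnitude; it is not computed here):

```python
def deredden(flux_obs, a_lambda):
    """Correct an observed flux for interstellar extinction.

    a_lambda is the extinction in magnitudes at the observing wavelength,
    e.g. derived from E(B-V) and an assumed extinction curve.
    """
    return flux_obs * 10.0 ** (0.4 * a_lambda)
```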
Extended DBI massive gravity with generalized fiducial metric
NASA Astrophysics Data System (ADS)
Chullaphan, Tossaporn; Tannukij, Lunchakorn; Wongjun, Pitayuth
2015-06-01
We consider an extended model of DBI massive gravity, generalizing the fiducial metric to the induced metric on a brane corresponding to a domain wall moving in five-dimensional Schwarzschild-Anti-de Sitter spacetime. The model admits all FLRW solutions, including flat, closed, and open geometries, while the original one does not. The background solutions can be divided into two branches, namely a self-accelerating branch and a normal branch. For the self-accelerating branch, the graviton mass plays the role of a cosmological constant driving the late-time acceleration of the universe. It is found that the number of degrees of freedom in the gravitational sector is not correct, as in the original DBI massive gravity: there are only two propagating degrees of freedom, from the tensor modes. For the normal branch, we restrict our attention to a particular class of solutions that provides an accelerated expansion of the universe. Here the number of degrees of freedom is correct; however, at least one of them is a ghost degree of freedom, which is always present at small scales, implying that the theory is not stable.
Assessment of AVIRIS data from vegetated sites in the Owens Valley, California
NASA Technical Reports Server (NTRS)
Rock, B. N.; Elvidge, Christopher D.; Defeo, N. J.
1988-01-01
Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data were acquired over the Bishop, CA area, located at the northern end of the Owens Valley, on July 30, 1987. Radiometrically corrected AVIRIS data were flat-field corrected, and spectral curves were produced and analyzed for pixels taken from both native and cultivated vegetation sites using the JPL SPAM software and PC-based spreadsheet programs. Analyses focused on the chlorophyll well and red edge portions of the spectral curves. Results include the following: AVIRIS spectral data are acquired at sufficient spectral resolution to allow detection of blue shifts of both the chlorophyll well and the red edge in moisture-stressed vegetation compared with non-stressed vegetation; a normalization of selected parameters (chlorophyll well and near-infrared shoulder) may be used to emphasize the shift in red edge position; and the presence of the red edge in AVIRIS spectral curves may be useful in detecting small amounts (20 to 30 percent cover) of semi-arid and arid vegetation ground cover. A discussion of possible causes of AVIRIS red edge shifts in response to stress is presented.
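The red edge position tracked above is commonly taken as the wavelength of maximum reflectance slope between roughly 680 and 750 nm; a shift of this position toward shorter wavelengths (a "blue shift") relative to unstressed vegetation is the stress indicator discussed in the abstract. A minimal sketch (window bounds and function name are illustrative):

```python
import numpy as np

def red_edge_position(wavelengths, reflectance, lo=680.0, hi=750.0):
    """Wavelength (nm) of maximum reflectance slope in the red-edge window."""
    w = np.asarray(wavelengths, dtype=float)
    r = np.asarray(reflectance, dtype=float)
    sel = (w >= lo) & (w <= hi)
    slope = np.diff(r[sel]) / np.diff(w[sel])          # finite-difference slope
    midpoints = 0.5 * (w[sel][1:] + w[sel][:-1])       # band midpoints
    return midpoints[np.argmax(slope)]
```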
Chiral anomaly and anomalous finite-size conductivity in graphene
NASA Astrophysics Data System (ADS)
Shen, Shun-Qing; Li, Chang-An; Niu, Qian
2017-09-01
Graphene is a monolayer of carbon atoms packed into a hexagonal lattice, hosting two spin-degenerate pairs of massless two-dimensional Dirac fermions of different chirality. It is known that the existence of a non-zero electric polarization in reduced momentum space, associated with a hidden chiral symmetry, leads to the zero-energy flat band of a zigzag nanoribbon and to some anomalous transport properties. Here it is proposed that the Adler-Bell-Jackiw chiral anomaly, i.e., the non-conservation of chiral charges of Dirac fermions at different valleys, can be realized in a confined ribbon of finite width, even in the absence of a magnetic field. In the laterally diffusive regime, the finite-size correction to conductivity is always positive and is inversely proportional to the square of the lateral dimension W, in contrast to the finite-size correction from the boundary modes, which is inversely proportional to W. This anomalous finite-size conductivity reveals the signature of the chiral anomaly in graphene, and it is measurable experimentally. This finding provides an alternative platform to explore purely quantum mechanical effects in graphene.
New design studies for TRIUMF's ARIEL High Resolution Separator
NASA Astrophysics Data System (ADS)
Maloney, J. A.; Baartman, R.; Marchetto, M.
2016-06-01
As part of its new Advanced Rare IsotopE Laboratory (ARIEL), TRIUMF is designing a novel High Resolution Separator (HRS) (Maloney et al., 2015) to separate rare isotopes. The HRS has a 180° bend, separated into two 90° magnetic dipoles of 1.2 m bend radius, with an electrostatic multipole corrector between them. Second-order correction comes mainly from the dipole edge curvatures, but is intended to be fine-tuned with a sextupole component and a small octupole component in the multipole. This combination is designed to achieve 1:20,000 resolution for a 3 μm (horizontal) and 6 μm (vertical) emittance. A design for the HRS dipole magnets achieves both radial and integral flatness goals of <10⁻⁵. A review of the optical design for the HRS is presented, including the study of limiting factors affecting separation, matching and aberration correction. Field simulations from the OPERA-3D (OPERA) [2] models of the dipole magnets are used in COSY Infinity (COSY) (Berz and Makino, 2005) [3] to find and optimize the transfer maps to 3rd order and study residual nonlinearities to 8th order.
Perturbative instability of inflationary cosmology from quantum potentials
NASA Astrophysics Data System (ADS)
Tawfik, A.; Diab, A.; Abou El Dahab, E.
2017-09-01
It was argued that the Raychaudhuri equation with a quantum correction term seems to avoid the Big Bang singularity and to characterize an everlasting Universe (Ali and Das in Phys Lett B 741:276, 2015). Critical comments on both conclusions and on the correctness of the key expressions of this work have been discussed in the literature (Lashin in Mod Phys Lett 31:1650044, 2016). In the present work, we analyze the perturbative (in)stability conditions in the inflationary era of the early Universe. We conclude that both the unstable and the stable modes are incompatible with the corresponding ones obtained in the standard FLRW Universe. We show that unstable modes exist for small (an)isotropic perturbations and for different equations of state. Inequalities for both unstable and stable solutions relative to the standard FLRW space were derived. They reveal that in the flat FLRW Universe both perturbative instability and stability are possible. While stable (negative) modes are obtained for the radiation- and matter-dominated eras, only unstable modes exist in the case of a finite cosmological constant and when the vacuum energy dominates the cosmic background geometry.
Shraiki, Mario; Arba-Mosquera, Samuel
2011-06-01
To evaluate ablation algorithms and temperature changes in laser refractive surgery. The model (virtual laser system [VLS]) simulates different physical effects of an entire surgical process, simulating the shot-by-shot ablation process based on a modeled beam profile. The model is comprehensive and directly considers applied correction; corneal geometry, including astigmatism; laser beam characteristics; and ablative spot properties. Pulse lists collected from actual treatments were used to simulate the temperature increase during the ablation process. Ablation efficiency reduction in the periphery resulted in a lower peripheral temperature increase. Steep corneas had lesser temperature increases than flat ones. The maximum rise in temperature depends on the spatial density of the ablation pulses. For the same number of ablative pulses, myopic corrections showed the highest temperature increase, followed by myopic astigmatism, mixed astigmatism, phototherapeutic keratectomy (PTK), hyperopic astigmatism, and hyperopic treatments. The proposed model can be used, at relatively low cost, for calibration, verification, and validation of the laser systems used for ablation processes and would directly improve the quality of the results.
The Snapshot A Star SurveY (SASSY)
NASA Astrophysics Data System (ADS)
Garani, Jasmine I.; Nielsen, Eric; Marchis, Franck; Liu, Michael C.; Macintosh, Bruce; Rajan, Abhijith; De Rosa, Robert J.; Jinfei Wang, Jason; Esposito, Thomas M.; Best, William M. J.; Bowler, Brendan; Dupuy, Trent; Ruffio, Jean-Baptiste
2018-01-01
The Snapshot A Star Survey (SASSY) is an adaptive optics survey conducted using NIRC2 on the Keck II telescope to search for young, self-luminous planets and brown dwarfs (M > 5 MJup) around high-mass stars (M > 1.5 M⊙). We present the results of a custom data reduction pipeline developed for the coronagraphic observations of our 200 target stars. Our data analysis method includes basic near-infrared data processing (flat-field correction, bad pixel removal, distortion correction) as well as PSF subtraction through a Reference Differential Imaging algorithm based on a library of PSFs derived from the observations using the pyKLIP routine. We present pipeline results for a few stars from the survey with analysis of candidate companions. SASSY is sensitive to companions 600,000 times fainter than the host star within the inner few arcseconds, allowing us to detect companions with masses ~8 MJup at an age of 110 Myr. This work was supported by the Leadership Alliance's Summer Research Early Identification Program at Stanford University, the NSF REU program at the SETI Institute, and NASA grant NNX14AJ80G.
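The two most basic preprocessing steps the abstract names, flat-field correction and bad-pixel removal, can be sketched as below; the 3×3 median replacement and the function names are assumptions, and the survey's distortion correction and pyKLIP PSF subtraction are not reproduced here.

```python
import numpy as np

def reduce_frame(raw, flat, bad_mask):
    """Minimal NIR frame reduction sketch: flat-field division followed
    by replacement of flagged pixels with the local median of their
    good 3x3 neighbors (a common, simple interpolation choice)."""
    corrected = raw / np.where(flat > 0, flat, 1.0)   # flat-field division
    out = corrected.copy()
    for r, c in zip(*np.nonzero(bad_mask)):
        win = corrected[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
        msk = bad_mask[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
        good = win[~msk]                              # exclude flagged neighbors
        out[r, c] = np.median(good) if good.size else np.nan
    return out
```

In a real pipeline the reduced frames would then feed the distortion solution and the RDI/KLIP subtraction stage.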
NASA Astrophysics Data System (ADS)
Dang, H.; Stayman, J. W.; Sisniega, A.; Xu, J.; Zbijewski, W.; Wang, X.; Foos, D. H.; Aygun, N.; Koliatsos, V. E.; Siewerdsen, J. H.
2015-08-01
Non-contrast CT reliably detects fresh blood in the brain and is the current front-line imaging modality for intracranial hemorrhage such as that occurring in acute traumatic brain injury (contrast ~40-80 HU, size > 1 mm). We are developing flat-panel detector (FPD) cone-beam CT (CBCT) to facilitate such diagnosis in a low-cost, mobile platform suitable for point-of-care deployment. Such a system may offer benefits in the ICU, urgent care/concussion clinic, ambulance, and sports and military theatres. However, current FPD-CBCT systems face significant challenges that confound low-contrast, soft-tissue imaging. Artifact correction can overcome major sources of bias in FPD-CBCT but imparts noise amplification in filtered backprojection (FBP). Model-based reconstruction improves soft-tissue image quality compared to FBP by leveraging a high-fidelity forward model and image regularization. In this work, we develop a novel penalized weighted least-squares (PWLS) image reconstruction method with a noise model that includes accurate modeling of the noise characteristics associated with the two dominant artifact corrections (scatter and beam-hardening) in CBCT and utilizes modified weights to compensate for noise amplification imparted by each correction. Experiments included real data acquired on a FPD-CBCT test-bench and an anthropomorphic head phantom emulating intra-parenchymal hemorrhage. The proposed PWLS method demonstrated superior noise-resolution tradeoffs in comparison to FBP and PWLS with conventional weights (viz. at matched 0.50 mm spatial resolution, CNR = 11.9 compared to CNR = 5.6 and CNR = 9.9, respectively) and substantially reduced image noise especially in challenging regions such as skull base. The results support the hypothesis that with high-fidelity artifact correction and statistical reconstruction using an accurate post-artifact-correction noise model, FPD-CBCT can achieve image quality allowing reliable detection of intracranial hemorrhage.
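The PWLS principle can be illustrated in a deliberately tiny one-dimensional form (identity forward model, quadratic first-difference penalty, closed-form solve); this is a sketch of the objective only, not the paper's CBCT forward model, scatter/beam-hardening noise modeling, or reconstruction code, and all names are assumptions.

```python
import numpy as np

def pwls_denoise(y, weights, beta):
    """Penalized weighted least squares:
        minimize (y - x)' W (y - x) + beta * ||D x||^2
    with W = diag(weights) and D the first-difference operator.
    For this identity-forward-model case the minimizer solves the
    linear system (W + beta * D'D) x = W y."""
    n = y.size
    W = np.diag(weights)
    D = np.diff(np.eye(n), axis=0)        # (n-1) x n first-difference matrix
    return np.linalg.solve(W + beta * D.T @ D, W @ y)
```

In the spirit of the modified weights described above, a measurement whose variance was inflated by an artifact correction would simply receive a proportionally smaller entry in `weights`, so the penalty dominates where the corrected data are least trustworthy.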
NASA Astrophysics Data System (ADS)
Cohen, B. E.; Cassata, W.; Mark, D. F.; Tomkinson, T.; Lee, M. R.; Smith, C. L.
2015-12-01
All meteorites contain variable amounts of cosmogenic 38Ar and 36Ar produced during extraterrestrial exposure, and in order to calculate reliable 40Ar/39Ar ages this cosmogenic Ar must be removed from the total Ar budget. The amount of cosmogenic Ar has usually been calculated from the step-wise 38Ar/36Ar, minimum 36Ar/37Ar, or average 38Arcosmogenic/37Ar from the irradiated meteorite fragment. However, if Cl is present in the meteorite, then these values will be disturbed by Ar produced during laboratory neutron irradiation of Cl. Chlorine is likely to be a particular issue for the Nakhlite group of Martian meteorites, which can contain over 1000 ppm Cl [1]. An alternative method for the cosmogenic Ar correction uses the meteorite's exposure age as calculated from an un-irradiated fragment and step-wise production rates based on the measured Ca/K [2]. This calculation is independent of the Cl concentration. We applied this correction method to seven Nakhlites, analyzed in duplicate or triplicate. Selected samples were analyzed at both Lawrence Livermore National Laboratory and SUERC to ensure inter-laboratory reproducibility. We find that the cosmogenic argon correction of [2] has a significant influence on the ages calculated for individual steps, particularly for those at lower temperatures (i.e., differences of several tens of millions of years for some steps). The lower-temperature steps are more influenced by the alternate cosmogenic correction method of [2], as these analyses yielded higher concentrations of Cl-derived 38Ar. As a result, the Nakhlite data corrected using [2] yield step-heating spectra that are flat or nearly so across >70% of the release spectra (in contrast to the downward-stepping spectra often reported for Nakhlite samples), allowing for the calculation of precise emplacement ages for these meteorites. [1] Cartwright J. A. et al. (2013) GCA, 105, 255-293. [2] Cassata W. S., and Borg L. E. (2015) 46th LPSC, Abstract #2742.
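The Cl sensitivity of the conventional correction can be made concrete with the classical two-end-member deconvolution of measured argon; the end-member 38Ar/36Ar ratios below are nominal illustrative values, not those used in the study.

```python
def cosmogenic_36ar(ar38, ar36, r_cos=1.54, r_trapped=0.188):
    """Partition measured 36Ar into cosmogenic and trapped components
    from the measured 38Ar/36Ar, assuming two end-member ratios
    (nominal values; the exact ratios adopted in Ar-Ar work vary):
        ar38 = r_cos * c36 + r_trapped * (ar36 - c36)
    =>  c36  = (ar38 - r_trapped * ar36) / (r_cos - r_trapped).
    Any Cl-derived 38Ar created during reactor irradiation inflates
    ar38 and biases this estimate upward, which is the problem the
    abstract describes for Cl-rich Nakhlites."""
    return (ar38 - r_trapped * ar36) / (r_cos - r_trapped)
```

The alternative route of [2], predicting step-wise cosmogenic Ar from the un-irradiated-fragment exposure age and Ca/K-based production rates, never touches the measured 38Ar and is therefore insensitive to this bias.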
The Third EGRET Catalog of High-Energy Gamma-Ray Sources
NASA Technical Reports Server (NTRS)
Hartman, R. C.; Bertsch, D. L.; Bloom, S. D.; Chen, A. W.; Deines-Jones, P.; Esposito, J. A.; Fichtel, C. E.; Friedlander, D. P.; Hunter, S. D.; McDonald, L. M.;
1998-01-01
The third catalog of high-energy gamma-ray sources detected by the EGRET telescope on the Compton Gamma Ray Observatory includes data from 1991 April 22 to 1995 October 3 (Cycles 1, 2, 3, and 4 of the mission). In addition to including more data than the second EGRET catalog (Thompson et al. 1995) and its supplement (Thompson et al. 1996), this catalog uses completely reprocessed data (to correct a number of mostly minimal errors and problems). The 271 sources (E greater than 100 MeV) in the catalog include the single 1991 solar flare bright enough to be detected as a source, the Large Magellanic Cloud, five pulsars, one probable radio galaxy detection (Cen A), and 66 high-confidence identifications of blazars (BL Lac objects, flat-spectrum radio quasars, or unidentified flat-spectrum radio sources). In addition, 27 lower-confidence potential blazar identifications are noted. Finally, the catalog contains 170 sources not yet identified firmly with known objects, although potential identifications have been suggested for a number of those. A figure is presented that gives approximate upper limits for gamma-ray sources at any point in the sky, as well as information about sources listed in the second catalog and its supplement which do not appear in this catalog.
Characterization and correction of charge-induced pixel shifts in DECam
Gruen, D.; Bernstein, G. M.; Jarvis, M.; ...
2015-05-28
Interaction of charges in CCDs with the already accumulated charge distribution causes both a flux dependence of the point-spread function (an increase of observed size with flux, also known as the brighter/fatter effect) and pixel-to-pixel correlations of the Poissonian noise in flat fields. We describe these effects in the Dark Energy Camera (DECam) with charge-dependent shifts of effective pixel borders, i.e. the Antilogus et al. (2014) model, which we fit to measurements of flat-field Poissonian noise correlations. The latter fall off approximately as a power law r^(-2.5) with pixel separation r, are isotropic except for an asymmetry in the direct neighbors along rows and columns, are stable in time, and are weakly dependent on wavelength. They show variations from chip to chip at the 20% level that correlate with the silicon resistivity. The charge shifts predicted by the model cause biased shape measurements, primarily due to their effect on bright stars, at levels exceeding weak lensing science requirements. We measure the flux dependence of star images and show that the effect can be mitigated by applying the reverse charge shifts at the pixel level during image processing. Differences in stellar size, however, remain significant due to residuals at larger distance from the centroid.
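The flat-field correlation measurement such a model is fit to can be sketched as follows, assuming a pair of equally illuminated flats; the function name and lag range are illustrative, and only non-negative lags are computed, so resolving the row/column direct-neighbor asymmetry noted above would require signed lags.

```python
import numpy as np

def noise_correlations(flat_a, flat_b, max_lag=4):
    """Pixel-to-pixel noise correlations from a flat-field pair.
    Differencing two equal flats removes the fixed illumination and
    gain pattern, leaving (mostly) Poissonian noise whose lagged
    correlations are returned, normalized by the zero-lag variance."""
    diff = flat_a - flat_b
    diff = diff - diff.mean()
    var = (diff * diff).mean()
    corr = {}
    for dy in range(max_lag + 1):
        for dx in range(max_lag + 1):
            if dx == 0 and dy == 0:
                continue  # zero lag is the variance itself
            a = diff[dy:, dx:]
            b = diff[:diff.shape[0] - dy, :diff.shape[1] - dx]
            corr[(dy, dx)] = (a * b).mean() / var
    return corr
```

Fitting the returned values against separation r = sqrt(dy² + dx²) on log-log axes is then the natural way to check the approximate r^(-2.5) fall-off quoted in the abstract.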
Driving spin transition at interface: Role of adsorption configurations
NASA Astrophysics Data System (ADS)
Zhang, Yachao
2018-01-01
A clear insight into the electrical manipulation of molecular spins at interface is crucial to the design of molecule-based spintronic devices. Here we report on the electrically driven spin transition in manganocene physisorbed on a metallic surface in two different adsorption configurations predicted by ab initio techniques, including a Hubbard-U correction at the manganese site and accounting for the long-range van der Waals interactions. We show that the application of an electric field at the interface induces a high-spin to low-spin transition in the flat-lying manganocene, while it could hardly alter the high-spin ground state of the standing-up molecule. This phenomenon cannot be explained by either the molecule-metal charge transfer or the local electron correlation effects. We demonstrate a linear dependence of the intra-molecular spin-state splitting on the energy difference between crystal-field splitting and on-site Coulomb repulsion. After considering the molecule-surface binding energy shifts upon spin transition, we reproduce the obtained spin-state energetics. We find that the configuration-dependent responses of the spin-transition originate from the binding energy shifts instead of the variation of the local ligand field. Through these analyses, we obtain an intuitive understanding of the effects of molecule-surface contact on spin-crossover under electrical bias.
Spectrographs and Large Telescopes: A Study of Instrumentation
NASA Astrophysics Data System (ADS)
Fica, Haley Diane; Crane, Jeffrey D.; Uomoto, Alan K.; Hare, Tyson
2017-01-01
It is a truth universally acknowledged, that a telescope in possession of a large aperture, must be in want of a high resolution spectrograph. Subsystems of these instruments require testing and upgrading to ensure that they can continue to be scientifically productive and usher in a new era of astronomical research. The Planet Finder Spectrograph (PFS) and Magellan Inamori Kyocera Echelle (MIKE), both on the Magellan II Clay telescope at Las Campanas Observatory, and the Giant Magellan Telescope (GMT) Consortium Large Earth Finder (G-CLEF) are examples of such instruments. Bluer flat-field lamps were designed for PFS and MIKE to replace lamps no longer available in order to ensure continued, efficient functionality. These newly designed lamps will result in better flat fielding and calibration of data, and thus in reduced instrument noise. When it is built and installed in 2022, G-CLEF will be fed by a tertiary mirror on the GMT. Stepper motors attached to the back of this mirror will be used to correct misalignments in the optical relay system. These motors were characterized to ensure that they function as expected to an accuracy of a few microns. These projects incorporate several key aspects of astronomical instrumentation: designing, building, and testing.
Yang, Shan; Wang, Yu-ting
2011-03-01
Based on the theories and methods of ecological footprint, the concept of marine ecological footprint was proposed. According to the characteristics of the marine environment in Jiangsu Province, five sub-models of marine ecological footprints, including fishery, transportation, marine engineering construction, marine energy, and tidal flat, were constructed. The equilibrium factors of the five marine types were determined by using an improved entropy method, and the marine footprints and capacities in Jiangsu Province from 2000 to 2008 were calculated and analyzed. In 2000-2008, the marine ecological footprint per capita in Jiangsu Province increased nearly seven times, from 36.90 hm2 to 252.94 hm2, and the ecological capacity per capita grew steadily, from 105.01 hm2 to 185.49 hm2. In 2000, the marine environment in the Province was in a state of ecological surplus, and the marine economy was in a weak sustainable development state. Since 2004, the marine ecological environment deteriorated sharply, with an ecological deficit up to 109660.5 hm2, and the sustainability of the marine economy declined. The high ecological footprint of fishery was the main reason for the ecological deficit. Tidal flat was the important reserve resource for the sustainable development of the marine economy in Jiangsu Province.
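The entropy-method weighting named above can be sketched with its standard formulation (indicator matrix normalized by column, Shannon entropy per indicator, weights from the entropy redundancy); this is a generic textbook version offered as an assumption about the "improved entropy method" the abstract names, not the authors' exact variant.

```python
import numpy as np

def entropy_weights(X):
    """Entropy-method weights for the columns (indicators) of X,
    rows being observations (e.g. years). An indicator whose values
    barely vary carries entropy ~1 and thus weight ~0; more dispersed
    indicators receive larger weights."""
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    P = X / X.sum(axis=0)                      # column-wise proportions
    plogp = np.where(P > 0, P * np.log(P), 0.0)
    e = -plogp.sum(axis=0) / np.log(n)         # normalized Shannon entropy
    d = 1.0 - e                                # entropy redundancy
    return d / d.sum()                         # weights summing to 1
```

In a footprint model of this kind, such weights (or equilibrium factors derived from them) multiply the per-type areas before summation into the aggregate footprint.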
High-energy X-ray diffraction using the Pixium 4700 flat-panel detector.
Daniels, J E; Drakopoulos, M
2009-07-01
The Pixium 4700 detector represents a significant step forward in detector technology for high-energy X-ray diffraction. The detector design is based on digital flat-panel technology, combining an amorphous Si panel with a CsI scintillator. The detector has a useful pixel array of 1910 × 2480 pixels with a pixel size of 154 μm × 154 μm, and thus it covers an effective area of 294 mm × 379 mm. Designed for medical imaging, the detector has good efficiency at high X-ray energies. Furthermore, it is capable of acquiring sequences of images at 7.5 frames per second in full image mode, and up to 60 frames per second in binned region-of-interest modes. Here, the basic properties of this detector applied to high-energy X-ray diffraction are presented. Quantitative comparisons with a widespread high-energy detector, the MAR345 image plate scanner, are shown. Other properties of the Pixium 4700 detector, including a narrow point-spread function and distortion-free images, allow for the acquisition of high-quality diffraction data at high X-ray energies. In addition, high frame rates and shutterless operation open new experimental possibilities. Also provided are the necessary data for the correction of images collected using the Pixium 4700 for diffraction purposes.
NASA Astrophysics Data System (ADS)
Nagy, Peter B.; Qu, Jianmin; Jacobs, Laurence J.
2014-02-01
A harmonic acoustic tone burst propagating through an elastic solid with quadratic nonlinearity produces not only a parallel burst of second harmonic but also an often neglected quasi-static pulse associated with the acoustic radiation-induced eigenstrain. Although initial analytical and experimental studies by Yost and Cantrell suggested that the pulse might have a right-angled triangular shape with the peak displacement at the leading edge being proportional to the length of the tone burst, more recent theoretical, analytical, numerical, and experimental studies proved that the pulse has a flat-top shape and the peak displacement is proportional to the propagation length. In this paper, analytical and numerical simulation results are presented to illustrate two types of finite-size effects. First, the finite axial dimension of the specimen cannot be simply accounted for by a linear reflection coefficient that neglects the nonlinear interaction between the combined incident and reflected fields. Second, the quasistatic pulse generated by a transducer of finite aperture suffers more severe divergence than both the fundamental and second harmonic pulses generated by the same transducer. These finite-size effects can make the top of the quasi-static pulse sloped rather than flat and therefore must be taken into consideration in the interpretation of experimental data.
Flat-field anastigmatic mirror objective for high-magnification extreme ultraviolet microscopy
NASA Astrophysics Data System (ADS)
Toyoda, Mitsunori
2015-08-01
To apply high-definition microscopy to the extreme ultraviolet (EUV) region in practice, i.e. to enable in situ observation of living tissue and at-wavelength inspection of lithography masks, we constructed a novel reflective objective made of three multilayer mirrors. This objective is configured as a two-stage imaging system consisting of a Schwarzschild two-mirror system as the primary objective and an additional magnifier with a single curved mirror. This two-stage configuration can provide a high magnification of 1500, which is suitable for real-time observation with an EUV charge-coupled device (CCD) camera. Moreover, since off-axis aberrations can be corrected by the magnifier, which acts as field-flattener optics, we are able to configure the objective as a flat-field anastigmatic system with diffraction-limited spatial resolution over a large field of view. This paper describes in detail the optical design of the present objective. After deriving the closed-form equations representing the third-order aberrations of the objective, we apply these equations to practical design examples with a numerical aperture of 0.25 and an operating wavelength of 13.5 nm. We also confirm the imaging performance of this novel design using numerical ray tracing.
SU-E-T-756: Tissue Inhomogeneity Corrections in Intra-Operative Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sethi, A; Chinsky, B; Gros, S
Purpose: Investigate the impact of tissue inhomogeneities on dose distributions produced by low-energy X-rays in intra-operative radiotherapy (IORT). Methods: A 50-kV INTRABEAM X-ray device with superficial (Flat and Surface) applicators was commissioned at our institution. For each applicator, percent depth-dose (PDD), dose profiles (DP) and output factors (OF) were obtained. Calibrated GaFchromic (EBT3) films were used to measure dose distributions in a solid water phantom at various depths (2, 5, 10, and 15 mm). All recommended precautions for film handling, film exposure and scanning were observed. The effects of tissue inhomogeneities on dose distributions were examined by placing air cavities and bone and tissue-equivalent materials of different density (ρ), atomic number (Z), and thickness (t = 0-4 mm) between applicator and film detector. All inhomogeneities were modeled as a cylindrical cavity (diameter 25 mm). Treatment times were calculated to deliver a 1 Gy dose at 5 mm depth. Film results were verified by repeat measurements with a thin-window parallel-plate ion chamber (PTW 34013A) in a water tank. Results: For a Flat-4cm applicator, the measured dose rate at 5 mm depth in solid water was 0.35 Gy/min. Introduction of a cylindrical air cavity resulted in an increased dose past the inhomogeneity. Compared to tissue-equivalent medium, the dose enhancement due to 1 mm, 2 mm, 3 mm and 4 mm air cavities was 10%, 16%, 24%, and 35%, respectively. X-ray attenuation by 2 mm of cortical bone resulted in a substantial (58%) dose decrease. Conclusion: IORT dose calculations assume a homogeneous tissue-equivalent medium. However, soft X-rays are easily affected by non-tissue-equivalent materials. The results of this study may be used to estimate and correct the IORT dose delivered in the presence of tissue inhomogeneities.
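The sign and trend of these findings (dose enhancement behind air, reduction behind bone, larger effect for thicker cavities) can be rationalized with a first-order Beer-Lambert estimate; the attenuation coefficients below are hypothetical placeholders for a 50-kV beam, and scatter and geometry are ignored, so this sketch reproduces tendencies, not the measured magnitudes.

```python
import numpy as np

def dose_change_behind_slab(mu_slab, mu_tissue, t_cm):
    """Fractional dose change behind a slab of thickness t_cm that
    replaces tissue in a narrow low-energy beam:
        D_slab / D_tissue - 1 = exp(-(mu_slab - mu_tissue) * t) - 1.
    mu values (per cm) are user-supplied assumptions."""
    return np.exp(-(mu_slab - mu_tissue) * t_cm) - 1.0
```

With mu_air ≈ 0 the expression is positive (enhancement, growing with thickness), while mu_bone > mu_tissue gives a negative change, matching the direction of the film measurements.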
On the substance of a sophisticated epistemology
NASA Astrophysics Data System (ADS)
Elby, Andrew; Hammer, David
2001-09-01
Among researchers who study students' epistemologies, a consensus has emerged about what constitutes a sophisticated stance toward scientific knowledge. According to this community consensus, students should understand scientific knowledge as tentative and evolving, rather than certain and unchanging; subjectively tied to scientists' perspectives, rather than objectively inherent in nature; and individually or socially constructed, rather than discovered. Surveys, interview protocols, and other methods used to probe students' beliefs about scientific knowledge broadly reflect this outlook. This article questions the community consensus about epistemological sophistication. We do not suggest that scientific knowledge is objective and fixed; if forced to choose whether knowledge is certain or tentative, with no opportunity to elaborate, we would choose tentative. Instead, our critique consists of two lines of argument. First, the literature fails to distinguish between the correctness and productivity of an epistemological belief. For instance, elementary school students who believe that science is about discovering objective truths to questions, such as whether the earth is round or flat, or whether an asteroid led to the extinction of the dinosaurs, may be more likely to succeed in science than students who believe science is about telling stories that vary with one's perspective. Naïve realism, although incorrect (according to a broad consensus of philosophers and social scientists), may nonetheless be productive for helping those students learn. Second, according to the consensus view as reflected in commonly used surveys, epistemological sophistication consists of believing certain blanket generalizations about the nature of knowledge and learning, generalizations that do not attend to context. These generalizations are neither correct nor productive. 
For example, it would be unsophisticated for students to view as tentative the idea that the earth is round rather than flat. By contrast, they should take a more tentative stance toward theories of mass extinction. Nonetheless, many surveys and interview protocols tally students as sophisticated not for attending to these contextual nuances, but for subscribing broadly to the view that knowledge is tentative.
Assessment of a New High-Performance Small-Animal X-Ray Tomograph
NASA Astrophysics Data System (ADS)
Vaquero, J. J.; Redondo, S.; Lage, E.; Abella, M.; Sisniega, A.; Tapias, G.; Montenegro, M. L. Soto; Desco, M.
2008-06-01
We have developed a new X-ray cone-beam tomograph for in vivo small-animal imaging using a flat panel detector (CMOS technology with a microcolumnar CsI scintillator plate) and a microfocus X-ray source. The geometrical configuration was designed to achieve a spatial resolution of about 12 lpmm with a field of view appropriate for laboratory rodents. In order to achieve high performance with regard to per-animal screening time and cost, the acquisition software takes advantage of the highest frame rate of the detector and performs on-the-fly corrections on the detector raw data. These corrections include geometrical misalignments, sensor non-uniformities, and defective elements. The resulting image is then converted to attenuation values. We measured detector modulation transfer function (MTF), detector stability, system resolution, quality of the reconstructed tomographic images and radiated dose. The system resolution was measured following the standard test method ASTM E 1695 -95. For image quality evaluation, we assessed signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) as a function of the radiated dose. Dose studies for different imaging protocols were performed by introducing TLD dosimeters in representative organs of euthanized laboratory rats. Noise figure, measured as standard deviation, was 50 HU for a dose of 10 cGy. Effective dose with standard research protocols is below 200 mGy, confirming that the system is appropriate for in vivo imaging. Maximum spatial resolution achieved was better than 50 micron. Our experimental results obtained with image quality phantoms as well as with in-vivo studies show that the proposed configuration based on a CMOS flat panel detector and a small micro-focus X-ray tube leads to a compact design that provides good image quality and low radiated dose, and it could be used as an add-on for existing PET or SPECT scanners.
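The on-the-fly correction chain the abstract sketches (offset and sensor non-uniformity correction, then conversion of detector counts to attenuation values) can be illustrated with a simple single-reference gain model; defective-element interpolation and geometric-misalignment correction are omitted, and all names are assumptions.

```python
import numpy as np

def to_attenuation(raw, dark, gain_ref, eps=1e-6):
    """Convert a raw detector frame to line-integral attenuation values:
    subtract the dark (offset) frame, normalize by an unattenuated
    reference exposure to remove per-pixel gain non-uniformity, then
    apply the Beer-Lambert log transform."""
    i = (raw - dark).clip(min=eps)        # offset-corrected intensity
    i0 = (gain_ref - dark).clip(min=eps)  # unattenuated reference intensity
    return -np.log(i / i0)                # attenuation line integrals
```

Frames prepared this way are what a reconstruction (FBP or iterative) expects as input.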
Kanellopoulos, Anastasios John; Asimellis, George
2015-07-01
To evaluate the safety, efficacy, and refractive and keratometric stability of myopic femtosecond laser in situ keratomileusis (LASIK) with concurrent prophylactic high-fluence corneal collagen crosslinking (CXL) compared with the outcomes of standard femtosecond LASIK. Private clinical practice, Athens, Greece. Consecutive randomized prospective comparative study. Eyes that had myopic LASIK or myopic LASIK with concurrent high-fluence CXL were evaluated preoperatively and up to 2 years postoperatively for manifest refraction spherical equivalent (MRSE), refractive astigmatism, visual acuity, corneal keratometry (K), and endothelial cell count. One hundred forty consecutive eyes had myopic LASIK; 65 of the eyes were treated additionally with CXL. In the LASIK-CXL eyes, the mean postoperative MRSE was -0.18 diopter (D) ± 17.0 (SD) from -6.67 ± 2.14 D preoperatively. The postoperative flat K was 37.67 D from 43.92 D, and the steep K was 38.38 D from 45.15 D. The correlation coefficient of SE correction predictability was 0.975. In the LASIK-only eyes, the mean postoperative MRSE was -0.32 ± 0.24 D from -5.49 ± 1.99 D preoperatively. The flat K was 38.04 D from 43.15 D, and the steep K was 38.69 D from 44.03 D. The correlation coefficient of SE correction predictability was 0.968. The differences between the 2 groups at the 20/20 and 20/25 levels were statistically significant (P = .045 and P = .039, respectively). Two-year results indicate that the application of prophylactic CXL concurrently with high-myopic LASIK appears to improve refractive and keratometric stability, presumably by affecting corneal biomechanical properties. Dr. Kanellopoulos is a consultant to Alcon Surgical, Inc., Wavelight Laser Technologie AG, Allergan, Inc., Avedro, Inc., and i-Optics Corp. Dr. Asimellis has no financial or proprietary interest in any material or method mentioned. Copyright © 2015 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Crush testing, characterizing, and modeling the crashworthiness of composite laminates
NASA Astrophysics Data System (ADS)
Garner, David Michael, Jr.
Research in the field of crashworthiness of composite materials is presented. A new crush test method was produced to characterize the crush behavior of composite laminates. In addition, a model of the crush behavior and a method for rank ordering the energy absorption capability of various laminates were developed. The new crush test method was used for evaluating the crush behavior of flat carbon/epoxy composite specimens at quasi-static and dynamic rates. The University of Utah crush test fixture was designed to support the flat specimen against catastrophic buckling. A gap, where the specimen is unsupported, allowed unhindered crushing of the specimen. In addition, the specimen's failure modes could be clearly observed during crush testing. Extensive crush testing was conducted wherein the crush force and displacement data were collected to calculate the energy absorption, and high-speed video was captured during dynamic testing. Crush tests were also performed over a range of fixture gap heights. The basic failure modes were buckling, crack growth, and fracture. Gap height variations resulted in poorly, properly, and overly constrained specimens. In addition, guidelines for designing a composite laminate for crashworthiness were developed. Modeling of the crush behavior considered the delamination and fracture of a single ply or group of like plies during crushing. Delamination crack extension was modeled using the mode I energy release rate, G_Ic, where an elastica approach was used to obtain the strain energy. Variations in G_Ic were briefly explored with double cantilever beam tests wherein crack extension occurred along a multidirectional ply interface. The model correctly predicted the failure modes for most of the test cases, and offered insight into how the input parameters affect the model. The ranking method related coefficients of the laminate and sublaminate stiffness matrices, the ply locations within the laminate, and the laminate thickness.
The ranking method correctly ordered the laminates tested in this study with respect to their energy absorption.
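The energy absorption used in such rankings is obtained by integrating the crush force over displacement; dividing by the crushed mass gives the specific energy absorption (SEA) commonly used to compare laminates. A minimal sketch with purely illustrative force-displacement values and specimen mass (not data from this study):

```python
# Sketch: absorbed energy from crush force-displacement data by trapezoidal
# integration, and the specific energy absorption (SEA) used for ranking.
# All numbers below are hypothetical, for illustration only.

def absorbed_energy(displacement_m, force_N):
    """Trapezoidal integral of crush force over displacement (joules)."""
    E = 0.0
    for i in range(1, len(displacement_m)):
        dx = displacement_m[i] - displacement_m[i - 1]
        E += 0.5 * (force_N[i] + force_N[i - 1]) * dx
    return E

def specific_energy_absorption(E_J, crushed_mass_kg):
    """SEA (J/kg): absorbed energy per unit crushed mass."""
    return E_J / crushed_mass_kg

# Hypothetical flat-coupon crush record: 20 mm of crushing at ~5 kN mean load.
disp = [0.000, 0.005, 0.010, 0.015, 0.020]       # m
force = [0.0, 5200.0, 4900.0, 5100.0, 5000.0]    # N

E = absorbed_energy(disp, force)
sea = specific_energy_absorption(E, 3.1e-3)      # assumed 3.1 g crushed mass
```

With these made-up numbers the coupon absorbs roughly 90 J, giving an SEA on the order of tens of kJ/kg, the usual range for carbon/epoxy crush specimens.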
Rosser, K E
1998-11-01
This paper evaluates the characteristics of ionization chambers for the measurement of absorbed dose to water for medium-energy x-rays. The values of the chamber correction factor, k(ch), used in the IPEMB code of practice for the UK secondary standard (NE2561/NE2611) ionization chamber are derived and their constituent factors examined. The comparison of the chambers' responses in air revealed that of the chambers tested only the NE2561, NE2571 and NE2505 exhibit a flat (within 5%) energy response in air. Under no circumstances should the NACP, Sanders electron chamber, or any chamber that has a wall made of high atomic number material, be used for medium-energy x-ray dosimetry. The measurements in water reveal that a chamber that has a substantial housing, such as the PTW Grenz chamber, should not be used to measure absorbed dose to water in this energy range. The value of k(ch) for an NE2561 chamber was determined by measuring the absorbed dose to water and comparing it with that for an NE2571 chamber, for which k(ch) data have been published. The chamber correction factor varies from 1.023 +/- 0.03 to 1.018 +/- 0.001 for x-ray beams with HVL between 0.15 and 4 mm Cu. The values agree with that for an NE2571 chamber within the experimental uncertainty. The corrections due to the stem, waterproof sleeve and replacement of the phantom material by the chamber for an NE2561 chamber are described.
Corrected Position Estimation in PET Detector Modules With Multi-Anode PMTs Using Neural Networks
NASA Astrophysics Data System (ADS)
Aliaga, R. J.; Martinez, J. D.; Gadea, R.; Sebastia, A.; Benlloch, J. M.; Sanchez, F.; Pavon, N.; Lerche, Ch.
2006-06-01
This paper studies the use of Neural Networks (NNs) for estimating the position of impinging photons in gamma ray detector modules for PET cameras based on continuous scintillators and Multi-Anode Photomultiplier Tubes (MA-PMTs). The detector under study is composed of a 49×49×10 mm³ continuous slab of LSO coupled to a flat panel H8500 MA-PMT. Four digitized signals from a charge division circuit, which collects currents from the 8×8 anode matrix of the photomultiplier, are used as inputs to the NN, thus drastically reducing the number of electronic channels required. We have simulated the computation of the position for 511 keV gamma photons impacting perpendicular to the detector surface, and have performed a thorough analysis of the NN architecture and training procedures in order to achieve the best results in terms of spatial resolution and bias correction. Results obtained using the GEANT4 simulation toolkit show a resolution of 1.3 mm/1.9 mm FWHM at the center/edge of the detector and less than 1 mm of systematic error in the position near the edges of the scintillator. The results confirm that NNs can partially model and correct the non-uniform detector response using only the position-weighted signals from a simple 2D DPC circuit. Linearity degradation for oblique incidence is also investigated. Finally, the NN can be implemented in hardware for parallel real-time corrected Line-of-Response (LOR) estimation. Results on resource occupancy and throughput in an FPGA are presented.
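For context on what the four charge-division outputs encode: the zeroth-order position estimate such a network learns to de-bias is the classic Anger-logic centroid of the four corner charges. The corner labeling and detector half-size below are assumptions for the sketch, not the paper's actual circuit:

```python
# Sketch: first-order (Anger-logic) position estimate from the four corner
# signals of a 2D charge-division circuit. The NN in the paper is trained to
# correct the nonlinearity and edge bias of this kind of estimate; here we
# show only the linear centroid it starts from. Labeling is hypothetical.

def anger_position(a, b, c, d, half_size_mm=24.5):
    """a, b, c, d: corner charges (assumed A=top-left, B=top-right,
    C=bottom-left, D=bottom-right). Returns (x, y) in mm."""
    s = a + b + c + d
    x = ((b + d) - (a + c)) / s * half_size_mm   # right minus left
    y = ((a + b) - (c + d)) / s * half_size_mm   # top minus bottom
    return x, y

xc, yc = anger_position(100.0, 100.0, 100.0, 100.0)   # centered event
xe, ye = anger_position(0.0, 200.0, 0.0, 200.0)       # event at right edge
```

Near the crystal edges this centroid compresses toward the center, which is exactly the systematic error the NN reduces to below 1 mm.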
Research on the magnetorheological finishing (MRF) technology with dual polishing heads
NASA Astrophysics Data System (ADS)
Huang, Wen; Zhang, Yunfei; He, Jianguo; Zheng, Yongcheng; Luo, Qing; Hou, Jing; Yuan, Zhigang
2014-08-01
Magnetorheological finishing (MRF) is a key polishing technique capable of rapidly converging to the required surface figure. To overcome the limitations of conventional single-polishing-head MRF, a dual-polishing-head MRF technology was studied and an 8-axis dual-polishing-head MRF machine was developed. The machine is able to manufacture large-aperture optics with high figure accuracy. The large polishing head is suited to polishing large-aperture optics, controlling long-spatial-wavelength structures, and correcting low-to-mid spatial frequency errors with high removal rates, while the small polishing head has advantages in manufacturing small-aperture optics, controlling short-spatial-wavelength structures, correcting mid-to-high spatial frequency errors, and removing material at the nanoscale. The material removal characteristics and figure correction ability of both the large and small polishing heads were studied, and each head achieved a stable, valid polishing removal function and an ultra-precision flat sample. After a single polishing iteration using the small polishing head, the figure error over the central 45 mm of a 50 mm diameter plano optic improved from 0.21λ to 0.08λ PV (0.053λ to 0.015λ RMS). After three polishing iterations using the large polishing head, the figure error over 410 mm × 410 mm of a 430 mm × 430 mm large plano optic improved from 0.40λ to 0.10λ PV (0.068λ to 0.013λ RMS). These results show that the dual-polishing-head MRF machine has not only good material removal stability but also excellent figure correction capability.
Geometric correction methods for Timepix based large area detectors
NASA Astrophysics Data System (ADS)
Zemlicka, J.; Dudak, J.; Karch, J.; Krejci, F.
2017-01-01
X-ray micro radiography with hybrid pixel detectors provides a versatile tool for object inspection in various fields of science. It has proven especially suitable for samples with low intrinsic attenuation contrast (e.g. soft tissue in biology, plastics in material sciences, thin paint layers in cultural heritage, etc.). The limited size of a single Medipix-type detector (1.96 cm²) was recently overcome by the construction of the large-area WidePIX detectors, assembled from Timepix chips equipped with edgeless silicon sensors. The largest device built so far consists of 100 chips and provides a fully sensitive area of 14.3 × 14.3 cm² without any physical gaps between sensors; its resolution is 2560 × 2560 pixels (6.5 Mpix). The unique modular detector layout requires special processing of the acquired data to avoid image distortions: several geometric compensations must be applied after the standard correction methods typical for this type of pixel detector (i.e. flat-field and beam-hardening corrections). The proposed geometric compensations cover both design features and assembly misalignment of the individual chip rows of Timepix-based large-area detectors. The former deal with the larger border pixels of the individual edgeless sensors and their behaviour, while the latter address shifts, tilts, and steps between detector rows. The real position of every pixel is defined in a Cartesian coordinate system and, together with a non-binary reliability mask, is used for the final image interpolation. Results of the geometric corrections for test wire phantoms and paleobotanical material are presented in this article.
On the impact of topography and building mask on time varying gravity due to local hydrology
NASA Astrophysics Data System (ADS)
Deville, S.; Jacob, T.; Chéry, J.; Champollion, C.
2013-01-01
We use 3 yr of surface absolute gravity measurements at three sites on the Larzac plateau (France) to quantify the changes induced by topography and buildings on gravity time-series, with respect to an idealized infinite-slab approximation. Local topography and the buildings housing ground-based gravity measurements affect the distribution of water storage changes, and therefore the associated gravity signal. We first calculate the effects of the surrounding topography and building dimensions on the gravitational attraction of a uniform layer of water, and show that a gravimetric interpretation of water storage change using an infinite slab, the so-called Bouguer approximation, is generally not suitable. We propose to split the time-varying gravity signal into two parts: (1) a surface component including topographic and building effects, and (2) a deep component associated with underground water transfer. A reservoir modelling scheme is presented to remove the local site effects and to invert for the effective hydrological properties of the unsaturated zone. We show that the effective time constants associated with water transfer vary greatly from site to site, and propose that our modelling scheme can be used to correct for local site effects on gravity at any site that departs from a flat topography. Depending on the site, the corrected signal can exceed measured values by 5-15 μGal, corresponding to 120-380 mm of water using the Bouguer slab formula. Our approach requires only the knowledge of daily precipitation corrected for evapotranspiration; it can therefore be a useful tool for correcting any kind of gravimetric time-series data.
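The Bouguer slab conversion quoted at the end follows from the infinite-slab attraction Δg = 2πGρh. A quick check of the 5-15 μGal ↔ 120-380 mm correspondence:

```python
# Sketch: converting a gravity change to an equivalent water-layer thickness
# with the Bouguer infinite-slab formula dg = 2*pi*G*rho*h.
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
RHO_WATER = 1000.0     # kg m^-3

def water_thickness_mm(dg_microgal):
    """Water-layer thickness (mm) equivalent to dg in microGal
    (1 microGal = 1e-8 m/s^2) under the infinite-slab approximation."""
    dg = dg_microgal * 1e-8
    return dg / (2.0 * math.pi * G * RHO_WATER) * 1000.0

low = water_thickness_mm(5.0)    # lower end of the quoted range
high = water_thickness_mm(15.0)  # upper end
```

This gives roughly 120 mm and 360 mm, consistent (to within the rounding of the constants used) with the 120-380 mm quoted in the abstract.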
Entanglement entropy from tensor network states for stabilizer codes
NASA Astrophysics Data System (ADS)
He, Huan; Zheng, Yunqin; Bernevig, B. Andrei; Regnault, Nicolas
2018-03-01
In this paper, we present the construction of tensor network states (TNS) for some of the degenerate ground states of three-dimensional (3D) stabilizer codes. We then use the TNS formalism to obtain the entanglement spectrum and entropy of these ground states for some special cuts. In particular, we work out examples of the 3D toric code, the X-cube model, and the Haah code. The latter two models belong to the category of "fracton" models proposed recently, while the first one belongs to the conventional topological phases. We mention the cases for which the entanglement entropy and spectrum can be calculated exactly: For these, the constructed TNS is a singular value decomposition (SVD) of the ground states with respect to particular entanglement cuts. Apart from the area law, the entanglement entropies also have constant and linear corrections for the fracton models, while the entanglement entropies for the toric code models only have constant corrections. For the cuts we consider, the entanglement spectra of these three models are completely flat. We also conjecture that the negative linear correction to the area law is a signature of extensive ground-state degeneracy. Moreover, the transfer matrices of these TNSs can be constructed. We show that the transfer matrices are projectors whose eigenvalues are either 1 or 0. The number of nonzero eigenvalues is tightly related to the ground-state degeneracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kantowski, Ronald; Chen Bin; Dai Xinyu, E-mail: kantowski@nhn.ou.ed, E-mail: Bin.Chen-1@ou.ed, E-mail: dai@nhn.ou.ed
We compute the deflection angle to order (m/r₀)² and m/r₀ × Λr₀² for a light ray traveling in a flat ΛCDM cosmology that encounters a completely condensed mass region. We use a Swiss cheese model for the inhomogeneities and find that the most significant correction to the Einstein angle occurs not because of the nonlinear terms but because the condensed mass is embedded in a background cosmology. The Swiss cheese model predicts a decrease in the deflection angle of ~2% for weakly lensed galaxies behind the rich cluster A1689, and the reduction can be as large as ~5% for similar rich clusters at z ~ 1. Weak-lensing deflection angles caused by galaxies can likewise be reduced by as much as ~4%. We show that the lowest order correction in which Λ appears is proportional to m/r₀ × √(Λr₀²) and could cause as much as a ~0.02% increase in the deflection angle for light that passes through a rich cluster. The lowest order nonlinear correction in the mass is proportional to m/r₀ × √(m/r₀) and can increase the deflection angle by ~0.005% for weak lensing by galaxies.
NASA Astrophysics Data System (ADS)
Torres, Jhon James Granada; Soto, Ana María Cárdenas; González, Neil Guerrero
2016-10-01
In the context of gridless optical multicarrier systems, we propose a method for intercarrier interference (ICI) mitigation that enables bit error correction in scenarios with non-flat spectra across the subcarriers of the multicarrier system and sub-Nyquist carrier spacing. The proposed hybrid ICI mitigation technique exploits the advantages of signal equalization at both levels: the physical level, for any digital or analog pulse shaping, and the bit-data level, with its ability to incorporate advanced correcting codes. The concatenation of these two complementary techniques consists of a nondata-aided equalizer applied to each optical subcarrier and a hard-decision forward error correction applied to the sequence of bits distributed along the optical subcarriers, regardless of the prior subchannel quality assessment performed in orthogonal frequency-division multiplexing for the bit-loading technique. The impact of the ICI is systematically evaluated in terms of bit error rate as a function of the carrier frequency spacing and the roll-off factor of the digital pulse-shaping filter for a simulated 3×32-Gbaud single-polarization quadrature phase shift keying Nyquist wavelength-division multiplexing system. After ICI mitigation, back-to-back error-free decoding was obtained for sub-Nyquist carrier spacings of 28.5 and 30 GHz with roll-off values of 0.1 and 0.4, respectively.
Monte Carlo Sampling in Fractal Landscapes
NASA Astrophysics Data System (ADS)
Leitão, Jorge C.; Lopes, J. M. Viana Parente; Altmann, Eduardo G.
2013-05-01
We design a random walk to explore fractal landscapes such as those describing chaotic transients in dynamical systems. We show that the random walk moves efficiently only when its step length depends on the height of the landscape via the largest Lyapunov exponent of the chaotic system. We propose a generalization of the Wang-Landau algorithm which constructs not only the density of states (transient time distribution) but also the correct step length. As a result, we obtain a flat-histogram Monte Carlo method which samples fractal landscapes in polynomial time, a dramatic improvement over the exponential scaling of traditional uniform-sampling methods. Our results are not limited by the dimensionality of the landscape and are confirmed numerically in chaotic systems with up to 30 dimensions.
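The flat-histogram idea underlying the proposed generalization is that of Wang-Landau sampling: the walker is biased by 1/g(E), the running density-of-states estimate, which is refined until the visit histogram is flat. A minimal sketch on a toy system where the exact answer is known (this is plain Wang-Landau, not the authors' transient-time extension):

```python
# Sketch: minimal Wang-Landau flat-histogram sampling on a toy system of
# N independent spins whose "energy" E is the number of up spins, so the
# exact density of states is the binomial coefficient C(N, E).
import math
import random

def wang_landau(N=8, flatness=0.8, f_final=1e-4, seed=1):
    random.seed(seed)
    spins = [0] * N
    E = 0                                # current energy = number of up spins
    ln_g = [0.0] * (N + 1)               # running estimate of ln g(E)
    hist = [0] * (N + 1)                 # visit histogram
    ln_f = 1.0                           # modification factor, halved when flat
    while ln_f > f_final:
        for _ in range(10000):
            i = random.randrange(N)
            E_new = E + (1 - 2 * spins[i])           # a flip changes E by +-1
            # Accept with min(1, g(E)/g(E_new)) so visits go as 1/g(E)
            if math.log(random.random()) < ln_g[E] - ln_g[E_new]:
                spins[i] ^= 1
                E = E_new
            ln_g[E] += ln_f                          # update even on rejection
            hist[E] += 1
        if min(hist) > flatness * (sum(hist) / len(hist)):
            hist = [0] * (N + 1)                     # histogram is flat:
            ln_f /= 2.0                              # refine and restart it
    return ln_g

ln_g = wang_landau()
# Normalize so ln g(E=0) = 0; should approximate ln C(8, E)
rel = [ln_g[E] - ln_g[0] for E in range(9)]
```

Because ln g is only determined up to a constant, the result is compared after pinning ln g(0) = 0; for N = 8 the peak value should come out near ln C(8,4) = ln 70 ≈ 4.25.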
U-values of flat and domed skylights
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klems, Joseph H.
1999-10-01
Data from nighttime measurements of the net heat flow through several types of skylights are presented. A well-known thermal test facility was reconfigured to measure the net heat flow through the bottom of a skylight/light well combination. Use of these data to determine the U-factor of the skylight is considerably more complicated than the analogous problem for a vertical fenestration contained in a test mask. Correction of the data for heat flow through the skylight well surfaces, and evidence for the nature of the heat transfer between the skylight and the bottom of the well, are discussed. The resulting measured U-values are presented and compared with calculations using the WINDOW4 and THERM programs.
ΛCDM Cosmology for Astronomers
NASA Astrophysics Data System (ADS)
Condon, J. J.; Matthews, A. M.
2018-07-01
The homogeneous, isotropic, and flat ΛCDM universe favored by observations of the cosmic microwave background can be described using only Euclidean geometry, locally correct Newtonian mechanics, and the basic postulates of special and general relativity. We present simple derivations of the most useful equations connecting astronomical observables (redshift, flux density, angular diameter, brightness, local space density, ...) with the corresponding intrinsic properties of distant sources (lookback time, distance, spectral luminosity, linear size, specific intensity, source counts, ...). We also present an analytic equation for lookback time that is accurate within 0.1% for all redshifts z. The exact equation for comoving distance is an elliptic integral that must be evaluated numerically, but we found a simple approximation with errors <0.2% for all redshifts up to z ≈ 50.
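The comoving-distance integral mentioned above is straightforward to evaluate numerically. A sketch for a flat ΛCDM universe, with illustrative parameter values (H₀ = 70 km/s/Mpc, Ωₘ = 0.3) rather than the paper's:

```python
# Sketch: numerical evaluation of the flat-LambdaCDM comoving distance
#   D_C = (c/H0) * integral_0^z dz' / E(z'),  E(z) = sqrt(Om*(1+z)^3 + OL),
# the elliptic integral the abstract notes must be computed numerically.
import math

C_KM_S = 299792.458        # speed of light, km/s

def comoving_distance_mpc(z, H0=70.0, Om=0.3, steps=10000):
    """Comoving distance (Mpc) by midpoint-rule integration of 1/E(z)."""
    OL = 1.0 - Om                          # flatness fixes Omega_Lambda
    dz = z / steps
    total = 0.0
    for i in range(steps):
        zi = (i + 0.5) * dz
        total += dz / math.sqrt(Om * (1.0 + zi) ** 3 + OL)
    return C_KM_S / H0 * total

d1 = comoving_distance_mpc(1.0)            # ~3.3 Gpc for these parameters
```

At low redshift this reduces to the Hubble law D ≈ cz/H₀, a useful sanity check on the integration.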
Nonrelativistic fluids on scale covariant Newton-Cartan backgrounds
NASA Astrophysics Data System (ADS)
Mitra, Arpita
2017-12-01
The nonrelativistic covariant framework for fields is extended to investigate fields and fluids on scale covariant curved backgrounds. The scale covariant Newton-Cartan background is constructed using the localization of space-time symmetries of nonrelativistic fields in flat space. Following this, we provide a Weyl covariant formalism which can be used to study scale invariant fluids. By considering ideal fluids as an example, we describe its thermodynamic and hydrodynamic properties and explicitly demonstrate that it satisfies the local second law of thermodynamics. As a further application, we consider the low energy description of Hall fluids. Specifically, we find that the gauge fields for scale transformations lead to corrections of the Wen-Zee and Berry phase terms contained in the effective action.
BFV approach to geometric quantization
NASA Astrophysics Data System (ADS)
Fradkin, E. S.; Linetsky, V. Ya.
1994-12-01
A gauge-invariant approach to geometric quantization is developed. It yields a complete quantum description for dynamical systems with non-trivial geometry and topology of the phase space. The method is a global version of the gauge-invariant approach to quantization of second-class constraints developed by Batalin, Fradkin and Fradkina (BFF). Physical quantum states and quantum observables are respectively described by covariantly constant sections of the Fock bundle and the bundle of Hermitian operators over the phase space, with a flat connection defined by the nilpotent BFV-BRST operator. Perturbative calculation of the first non-trivial quantum correction to the Poisson brackets leads to the Chevalley cocycle known in deformation quantization. Consistency conditions lead to a topological quantization condition with metaplectic anomaly.
Exact Rayleigh scattering calculations for use with the Nimbus-7 Coastal Zone Color Scanner
NASA Technical Reports Server (NTRS)
Gordon, Howard R.; Brown, James W.; Evans, Robert H.
1988-01-01
The radiance reflected from a plane-parallel atmosphere and flat sea surface in the absence of aerosols has been determined with an exact multiple scattering code to improve the analysis of Nimbus-7 CZCS imagery. It is shown that the single scattering approximation normally used to compute this radiance can result in errors of up to 5 percent for small and moderate solar zenith angles. A scheme to include the effect of variations in the surface pressure in the exact computation of the Rayleigh radiance is discussed. The results of an application of these computations to CZCS imagery suggest that accurate atmospheric corrections can be obtained for solar zenith angles at least as large as 65 deg.
Performance of Harshaw TLD-100H two-element Dosemeter.
Luo, L Z; Rotunda, J E
2006-01-01
One of the advantages of LiF based thermoluminescent (TL) materials is its tissue-equivalent property. The Harshaw TLD-100H (LiF:Mg,Cu,P) material has demonstrated that it has a near-flat photon energy response and high sensitivity. With the optimized dosemeter filters built into the holder, the Harshaw TLD-100H two-element dosemeter can be used as a whole body personnel dosemeter for gamma, X ray and beta monitoring without the use of an algorithm or correction factor. This paper presents the dose performance of the Harshaw TLD-100H two-element dosemeter against the ANSI N13.11-2001 standard and the results of tests that are required in IEC 1066 International Standard.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grierson, B. A.; Wang, W. X.; Ethier, S.
Intrinsic toroidal rotation of the deuterium main ions in the core of the DIII-D tokamak is observed to transition from flat to hollow, forming an off-axis peak, above a threshold level of direct electron heating. Nonlinear gyrokinetic simulations show that the residual stress associated with electrostatic ion temperature gradient turbulence possesses the correct radial location and stress structure to cause the observed hollow rotation profile. Residual stress momentum flux in the gyrokinetic simulations is balanced by turbulent momentum diffusion, with negligible contributions from turbulent pinch. Finally, the prediction of the velocity profile by integrating the momentum balance equation produces a rotation profile that qualitatively and quantitatively agrees with the measured main-ion profile, demonstrating that fluctuation-induced residual stress can drive the observed intrinsic velocity profile.
Laser multipass system with interior cell configuration.
Borysow, Jacek; Kostinski, Alexander; Fink, Manfred
2011-10-20
We ask whether it is possible to restore a multipass system alignment after a gas cell is inserted in the central region. Indeed, it is possible, and we report on a remarkably simple rearrangement of a laser multipass system, composed of two spherical mirrors and a gas cell with flat windows in the middle. For example, for a window of thickness d and refractive index of n, adjusting the mirror separation by ≈2d(1-1/n) is sufficient to preserve the laser beam alignment and tracing. This expression is in agreement with ray-tracing computations and our laboratory experiment. Insofar as our solution corrects for spherical aberrations, it may also find applications in microscopy. © 2011 Optical Society of America
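The compensation quoted in the abstract is simple enough to evaluate directly. A sketch with illustrative fused-silica numbers (the thickness and index below are assumptions, not the experiment's):

```python
# Sketch: mirror-separation adjustment ~ 2*d*(1 - 1/n) from the abstract's
# expression, restoring multipass alignment after inserting flat windows of
# thickness d and refractive index n. Example values are hypothetical.

def separation_shift_mm(d_mm, n):
    """Mirror-separation adjustment (mm) per the abstract's expression."""
    return 2.0 * d_mm * (1.0 - 1.0 / n)

shift = separation_shift_mm(3.0, 1.45)   # 3 mm fused-silica-like window
```

For a 3 mm window at n = 1.45 the required adjustment is just under 2 mm, small enough to absorb with an ordinary translation stage.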
On muon energy spectrum in muon groups underground
NASA Technical Reports Server (NTRS)
Bakatanov, V. N.; Chudakov, A. E.; Novoseltsev, Y. F.; Novoseltseva, M. V.; Stenkin, Y. V.
1985-01-01
A method is described which was used to measure muon energy spectrum characteristics in muon groups underground by recording mu-e decays. The Baksan telescope's experimental data on mu-e decay intensity in muon groups of various multiplicities are analyzed. The experimental data indicate a very flat spectrum, which does not, however, represent the total spectrum in muon groups: the muon energy spectrum evidently depends strongly on the distance from the group axis, and the core attraction effect introduces a significant distortion, making the measured spectrum flatter. After taking this into account and correcting for this effect, the integral total spectrum index in groups has a very small dependence on muon multiplicity and agrees well with the expected value: β = β_expected = 1.75.
Spectral Atlas of X-ray Lines Emitted During Solar Flares Based on CHIANTI
NASA Technical Reports Server (NTRS)
Landi, E.; Phillips, K. J. H.
2005-01-01
A spectral atlas of X-ray lines in the wavelength range 7.47-18.97 Angstroms is presented, based on high-resolution spectra obtained during two M-class solar flares (on 1980 August 25 and 1985 July 2) with the Flat Crystal Spectrometer on board the Solar Maximum Mission. The physical properties of the flaring plasmas are derived as a function of time using strong, isolated lines. From these properties, predicted spectra were obtained using the CHIANTI database and then compared with the wavelengths and fluxes of lines in the observed spectra to establish line identifications. Identifications for nearly all the observed lines in the resulting atlas are given, with some significant corrections to previous analyses of these flare spectra.
Astrometrica: Astrometric data reduction of CCD images
NASA Astrophysics Data System (ADS)
Raab, Herbert
2012-03-01
Astrometrica is an interactive software tool for scientific-grade astrometric data reduction of CCD images. The current version of the software is for the 32-bit Windows operating system family. Astrometrica reads FITS (8-, 16- and 32-bit integer) and SBIG image files; the size of the images is limited only by available memory. It also offers automatic image calibration (dark frame and flat field correction), automatic reference star identification, automatic moving object detection and identification, and access to new-generation star catalogs (PPMXL, UCAC 3 and CMC-14), in addition to online help and other features. Astrometrica is shareware, available for free use for a limited period of time (100 days); special arrangements can be made for educational projects.
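The dark-frame and flat-field calibration that tools like Astrometrica automate follows the standard recipe: subtract the dark, then divide by the flat normalized to unit mean. A minimal sketch on tiny hand-made "images" (values purely illustrative):

```python
# Sketch: standard CCD calibration, corrected = (raw - dark) / normalized_flat.
# Tiny 2x2 frames as plain nested lists; all pixel values are illustrative.

def calibrate(raw, dark, flat):
    """Dark-subtract, then divide by the dark-subtracted flat scaled to mean 1."""
    flat_d = [[f - d for f, d in zip(fr, dr)] for fr, dr in zip(flat, dark)]
    mean = sum(sum(row) for row in flat_d) / sum(len(row) for row in flat_d)
    return [[(r - d) / (f / mean) for r, d, f in zip(rr, dr, fr)]
            for rr, dr, fr in zip(raw, dark, flat_d)]

raw  = [[110.0, 95.0], [105.0, 100.0]]
dark = [[10.0, 10.0], [10.0, 10.0]]      # uniform dark current
flat = [[110.0, 90.0], [100.0, 100.0]]   # vignetting pattern plus dark

out = calibrate(raw, dark, flat)
```

Pixels that the flat shows as over-sensitive are scaled down and under-sensitive ones scaled up, so a uniformly illuminated scene comes out uniform.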
Thermal and electrical contact conductance studies
NASA Technical Reports Server (NTRS)
Vansciver, S. W.; Nilles, M.
1985-01-01
Prediction of electrical and thermal contact resistance for pressed, nominally flat contacts is complicated by the large number of variables which influence contact formation. This is reflected in experimental results as a wide variation in contact resistances, spanning up to six orders of magnitude. A series of experiments was performed to observe the effects of oxidation and surface roughness on contact resistance. Electrical contact resistance and thermal contact conductance from 4 to 290 K on OFHC Cu contacts are reported. Electrical contact resistance was measured with a 4-wire DC technique. Thermal contact conductance was determined by steady-state longitudinal heat flow. Corrections for the bulk contribution to the overall measured resistance were made, with the remaining resistance due solely to the presence of the contact.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Mud flats. 230.42 Section 230.42 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) OCEAN DUMPING SECTION 404(b)(1... Aquatic Sites § 230.42 Mud flats. (a) Mud flats are broad flat areas along the sea coast and in coastal...
Environmental magnetic methods for detecting and mapping contaminated sediments in lakes
NASA Astrophysics Data System (ADS)
Boyce, J. I.
2009-05-01
The remediation of contaminated sediments is an urgent environmental priority in the Great Lakes and requires detailed mapping of impacted sediment layer thickness, areal distribution and pollutant levels. Magnetic property measurements of sediment cores from two heavily polluted basins in Lake Ontario (Hamilton Harbour, Frenchman's Bay) show that concentrations of hydrocarbons (PAH) and a number of heavy metals (Pb, As, Ni, Cu, Cr, Zn, Cd, Fe) are strongly correlated with magnetic susceptibility. The magnetic susceptibility contrast between the contaminated sediment and underlying 'pre-colonial' sediments is sufficient to generate a total field anomaly (ca. 2-20 nT) that can be measured with a magnetometer towed above the lake bed. Systematic magnetic surveying (550 line km) of Hamilton Harbour using a towed marine magnetometer clearly identifies a number of well-defined magnetic anomalies that coincide with known accumulations of contaminated lake sediment. When calibrated against in-situ magnetic property measurements, the modeled apparent susceptibility from magnetic survey results can be used to classify the relative contaminant impact levels. The results demonstrate the potential of magnetic property measurements for rapid reconnaissance mapping of large areas of bottom contamination prior to detailed coring and sediment remediation.
Fraile, Pedro
2010-01-01
With the economic and social changes in Europe at the end of the sixteenth century and the formation and consolidation of an urban network throughout the continent, questions such as poverty, sanitation, and hygiene began to pose acute problems in the cities of the age. A new school of thought, known in Spain as Ciencia de Policía and in the Mediterranean area as Policy Science, proposed solutions for these problems and tested them through practical interventions inside the urban setting. In this article the author compares the work of two thinkers: Cristóbal Pérez de Herrera, a Spaniard, and Nicolas Delamare, a Frenchman. Writing in the late sixteenth and early seventeenth centuries, Pérez de Herrera examined the organization of Madrid, the newly founded (though still not firmly established) capital of Spain. Delamare based his study on the Paris of the early eighteenth century. The author stresses the coincidences in some of the ideas of both thinkers and shows how their writings begin to embody a new idea of the city, many aspects of which have survived until the present day.
Responses of estuarine circulation and salinity to the loss of intertidal flats – A modeling study
Yang, Zhaoqing; Wang, Taiping
2015-08-25
Intertidal flats in estuaries are coastal wetlands that provide critical marine habitats supporting a wide range of marine species. Over the last century many estuarine systems have experienced significant loss of intertidal flats due to anthropogenic impacts. This paper presents a modeling study conducted to investigate the response of estuarine hydrodynamics to the loss of intertidal flats caused by anthropogenic actions in Whidbey Basin of Puget Sound on the northwest coast of North America. Changes in salinity intrusion limits, salinity stratification, and circulation in the intertidal flats and estuaries were evaluated by comparing model results under the existing baseline condition and a no-flat condition. Model results showed that loss of intertidal flats results in an increase in salinity intrusion, stronger mixing, and a phase shift in the salinity and velocity fields in the bay-front areas, and that it enhances two-layer circulation, especially the bottom water intrusion. Loss of intertidal flats increases the mean salinity but reduces the salinity range in the subtidal flats over a tidal cycle because of increased mixing. Salinity intrusion limits extend upstream in all three major rivers discharging into Whidbey Basin when no intertidal flats are present. Changes in salinity intrusion and estuarine circulation patterns due to loss of intertidal flats affect nearshore habitat and water quality in estuaries and potentially increase the risk of coastal hazards such as storm surge and coastal flooding. Furthermore, the model results suggest the importance of including intertidal flats and the wetting-and-drying process in hydrodynamic simulations when intertidal flats are present in the model domain.
Clinical introduction of image lag correction for a cone beam CT system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stankovic, Uros; Ploeger, Lennert S.; Sonke, Jan-Jakob, E-mail: j.sonke@nki.nl
Purpose: Image lag in the flat-panel detector used for Linac-integrated cone beam computed tomography (CBCT) has a degrading effect on CBCT image quality. The most prominent visible artifact is a bright semicircular structure in the transverse view of the scans, also known as the radar artifact. Several correction strategies have been proposed, but until now the clinical introduction of such corrections has remained unreported. In November 2013, the authors clinically implemented a previously proposed image lag correction on all of their machines at their main site in Amsterdam. The purpose of this study was to retrospectively evaluate the effect of the correction on the quality of CBCT images and to evaluate the required calibration frequency. Methods: Image lag was measured in five clinical CBCT systems (Elekta Synergy 4.6) using an in-house developed beam-interrupting device that stops the x-ray beam midway through the data acquisition of an unattenuated beam for calibration. A triple exponential falling edge response was fitted to the measured data and used to correct image lag from projection images with an infinite impulse response filter. This filter, including an extrapolation for saturated pixels, was incorporated in the authors' in-house developed clinical CBCT reconstruction software. To investigate the short-term stability of the lag and associated parameters, a series of five image lag measurements over a period of three months was performed. For quantitative analysis, the authors retrospectively selected ten patients treated in the pelvic region. The apparent contrast was quantified in polar coordinates for scans reconstructed using the parameters obtained from different dates, with and without saturation handling. Results: Visually, the radar artifact was minimal in scans reconstructed using image lag correction, especially when saturation handling was used.
In patient imaging, there was a significant reduction of the apparent contrast from 43 ± 16.7 to 15.5 ± 11.9 HU without the saturation handling and to 9.6 ± 12.1 HU with the saturation handling, depending on the date of the calibration. The image lag correction parameters were stable over a period of 3 months. The computational load was increased by approximately 10%, not endangering the fast in-line reconstruction. Conclusions: The lag correction was successfully implemented clinically and removed most image lag artifacts, thus improving the image quality. Image lag correction parameters were stable for 3 months, indicating a low required calibration frequency.
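The correction scheme described above (fit a triple-exponential falling-edge response, then deconvolve it from the projection stream) can be sketched as follows. The amplitudes and decay constants below are illustrative placeholders, not the values calibrated in the study:

```python
import numpy as np

# Hypothetical triple-exponential lag parameters; clinically these come
# from fitting the falling-edge response measured with the beam interrupter.
AMPS = np.array([0.020, 0.005, 0.001])   # lag-term amplitudes
TAUS = np.array([1.5, 10.0, 80.0])       # decay constants, in frames

def lag_kernel(n_frames):
    """Discrete impulse response: prompt signal plus three decaying lag terms."""
    k = np.arange(n_frames)
    h = (AMPS[:, None] * np.exp(-k[None, :] / TAUS[:, None])).sum(axis=0)
    h[0] += 1.0 - AMPS.sum()             # prompt fraction lands in frame 0
    return h

def correct_lag(measured, h):
    """Recursively deconvolve the lag kernel from a sequence of frame values."""
    out = np.zeros(len(measured))
    for n in range(len(measured)):
        trailing = sum(out[m] * h[n - m] for m in range(n))
        out[n] = (measured[n] - trailing) / h[0]
    return out
```

Convolving a simulated beam-off step with the kernel and then applying `correct_lag` recovers the original sequence, mimicking how the fitted response removes trailing signal (the source of the radar artifact) from projections.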
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fiandra, Christian; Fusella, Marco; Filippi, Andrea Riccardo
2013-08-15
Purpose: Patient-specific quality assurance in volumetric modulated arc therapy (VMAT) brain stereotactic radiosurgery raises specific issues on dosimetric procedures, mainly represented by the small radiation fields associated with the lack of lateral electronic equilibrium, the need for small detectors, and the high dose delivered (up to 30 Gy). Gafchromic{sup TM} EBT2 and EBT3 films may be considered the dosimeter of choice, and the authors here provide some additional data about uniformity correction for this new generation of radiochromic films. Methods: A new analysis method using the blue channel for marker-dye correction was proposed for uniformity correction for both EBT2 and EBT3 films. Symmetry, flatness, and field width of a reference field were analyzed to provide a high-spatial-resolution evaluation of film uniformity for EBT3. Absolute doses were compared with thermoluminescent dosimeters (TLD) as baseline. VMAT plans with multiple noncoplanar arcs were generated with a treatment planning system on a selected pool of eleven patients with cranial lesions and then recalculated on a water-equivalent plastic phantom by a Monte Carlo algorithm for patient-specific QA. 2D quantitative dose comparison parameters were calculated for the computed and measured dose distributions and tested for statistically significant differences. Results: Sensitometric curves showed different behavior above doses of 5 Gy for EBT2 and EBT3 films; with the use of the in-house marker-dye correction method, the authors obtained values of 2.5% for flatness, 1.5% for symmetry, and a field width of 4.8 cm for a 5 × 5 cm{sup 2} reference field. Compared with TLD and selecting a 5% dose tolerance, the percentage of points with ICRU index below 1 was 100% for EBT2 and 83% for EBT3.
Patient analysis revealed statistically significant differences (p < 0.05) between EBT2 and EBT3 in the percentage of points with gamma values <1 (p = 0.009 and p = 0.016); the percent difference as well as the mean difference between calculated and measured isodoses (20% and 80%) were found not to be significant (p = 0.074, p = 0.185, and p = 0.57). Conclusions: Excellent performance in terms of dose homogeneity was obtained using a new blue-channel method for marker-dye correction on both EBT2 and EBT3 Gafchromic{sup TM} films. In comparison with TLD, the passing rates for the EBT2 film were higher than for EBT3; good agreement with data estimated by the Monte Carlo algorithm was found for both films, with some statistically significant differences again in favor of EBT2. These results suggest that the use of Gafchromic{sup TM} EBT2 and EBT3 films is appropriate for dose verification measurements in VMAT stereotactic radiosurgery; taking into account the uncertainty associated with Gafchromic film dosimetry, the use of adequate action levels is strongly advised, in particular for EBT3.
Flat space (higher spin) gravity with chemical potentials
NASA Astrophysics Data System (ADS)
Gary, Michael; Grumiller, Daniel; Riegler, Max; Rosseel, Jan
2015-01-01
We introduce flat space spin-3 gravity in the presence of chemical potentials and discuss some applications to flat space cosmology solutions, their entropy, free energy, and flat space orbifold singularity resolution. Our results include flat space Einstein gravity with chemical potentials as a special case. We discover novel types of phase transitions between flat space cosmologies with spin-3 hair and show that the branch that continuously connects to spin-2 gravity becomes thermodynamically unstable for sufficiently large temperature or spin-3 chemical potential.
Kashiwagi, Toru; Yutani, Kenji; Fukuchi, Minoru; Naruse, Hitoshi; Iwasaki, Tadaaki; Yokozuka, Koichi; Inoue, Shinichi; Kondo, Shoji
2002-06-01
Improvements in image quality and quantitation measurement, and the addition of detailed anatomical structures, are important topics for single-photon emission tomography (SPECT). The goal of this study was to develop a practical system enabling both nonuniform attenuation correction and image fusion of SPECT images by means of high-performance X-ray computed tomography (CT). A SPECT system and a helical X-ray CT system were placed next to each other and linked with Ethernet. To avoid positional differences between the SPECT and X-ray CT studies, identical flat patient tables were used for both scans; body distortion was minimized with laser beams from the upper and lateral directions to detect the position of the skin surface. For the raw projection data of SPECT, a scatter correction was performed with the triple energy window method. Image fusion of the X-ray CT and SPECT images was performed automatically by auto-registration of fiducial markers attached to the skin surface. After registration of the X-ray CT and SPECT images, an X-ray CT-derived attenuation map was created with the calibration curve for 99mTc. The SPECT images were then reconstructed with scatter and attenuation correction by means of a maximum likelihood expectation maximization algorithm. This system was evaluated in torso and cylindrical phantoms and in 4 patients referred for myocardial SPECT imaging with Tc-99m tetrofosmin. In the torso phantom study, the SPECT and X-ray CT images overlapped exactly on the computer display. After scatter and attenuation correction, the artifactual activity reduction in the inferior wall of the myocardium improved. Conversely, the increased activity around the torso surface and the lungs was reduced. In the abdomen, the liver activity, which was originally uniform, had recovered after scatter and attenuation correction processing. The clinical study also showed good overlapping of cardiac and skin surface outlines on the fused SPECT and X-ray CT images.
The effectiveness of the scatter and attenuation correction process was similar to that observed in the phantom study. Because the total time required for computer processing was less than 10 minutes, this method of attenuation correction and image fusion for SPECT images is expected to become popular in clinical practice.
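The triple energy window (TEW) scatter correction used above estimates the scatter in the photopeak window from two narrow flanking windows. A minimal per-pixel sketch follows; the window widths are illustrative defaults for a 99mTc acquisition, not the authors' actual settings:

```python
def tew_scatter_correct(c_main, c_low, c_high,
                        w_main=28.0, w_low=3.5, w_high=3.5):
    """Triple-energy-window correction: estimate the scatter counts in the
    main (photopeak) window by trapezoidal interpolation between the two
    narrow sub-windows, then subtract. Counts per window; widths in keV."""
    scatter = (c_low / w_low + c_high / w_high) * w_main / 2.0
    return max(c_main - scatter, 0.0)
```

For example, 1000 photopeak counts with 30 and 10 counts in the lower and upper sub-windows yield an estimated 160 scatter counts and 840 primary counts.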
Flat epithelial atypia and atypical ductal hyperplasia: carcinoma underestimation rate.
Ingegnoli, Anna; d'Aloia, Cecilia; Frattaruolo, Antonia; Pallavera, Lara; Martella, Eugenia; Crisi, Girolamo; Zompatori, Maurizio
2010-01-01
This study was carried out to determine the underestimation rate of carcinoma upon surgical biopsy after a diagnosis of flat epithelial atypia or atypical ductal hyperplasia at 11-gauge vacuum-assisted breast biopsy. A retrospective review was conducted of 476 vacuum-assisted breast biopsies performed from May 2005 to January 2007, and a total of 70 cases of atypia were identified. Fifty cases (71%) were categorized as pure atypical ductal hyperplasia, 18 (26%) as pure flat epithelial atypia and two (3%) as concomitant flat epithelial atypia and atypical ductal hyperplasia. Each group was compared with the subsequent open surgical specimens. Surgical biopsy was performed in 44 patients with atypical ductal hyperplasia, 15 patients with flat epithelial atypia, and two patients with flat epithelial atypia and atypical ductal hyperplasia. Five cases of atypical ductal hyperplasia were upgraded to ductal carcinoma in situ, three cases of flat epithelial atypia yielded one ductal carcinoma in situ and two cases of invasive ductal carcinoma, and one case of flat epithelial atypia/atypical ductal hyperplasia had invasive ductal carcinoma. The overall rate of malignancy was 16% for atypical ductal hyperplasia (including flat epithelial atypia/atypical ductal hyperplasia patients) and 20% for flat epithelial atypia. The presence of flat epithelial atypia and atypical ductal hyperplasia at biopsy requires careful consideration, and surgical excision should be suggested.
Magnetotelluric Data, Mid Valley, Nevada Test Site, Nevada.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jackie M. Williams; Erin L. Wallin; Brian D. Rodriguez
2007-08-15
The United States Department of Energy (DOE) and the National Nuclear Security Administration (NNSA) at their Nevada Site Office (NSO) are addressing ground-water contamination resulting from historical underground nuclear testing through the Environmental Management (EM) program and, in particular, the Underground Test Area (UGTA) project. One issue of concern is the nature of the somewhat poorly constrained pre-Tertiary geology and its effects on ground-water flow. Ground-water modelers would like to know more about the hydrostratigraphy and geologic structure to support a hydrostratigraphic framework model that is under development for the Rainier Mesa/Shoshone Mountain Corrective Action Unit (CAU) (Bechtel Nevada, 2006). During 2003, the U.S. Geological Survey (USGS), in cooperation with the DOE and NNSA-NSO, collected and processed data at the Nevada Test Site in and near Yucca Flat (YF) to help define the character, thickness, and lateral extent of the pre-Tertiary confining units. We collected 51 magnetotelluric (MT) and audio-magnetotelluric (AMT) stations for that research (Williams and others, 2005a, 2005b, 2005c, 2005d, 2005e, 2005f). In early 2005 we extended that research with 26 additional MT data stations (Williams and others, 2006), located on and near Rainier Mesa and Shoshone Mountain (RM-SM). The new stations extended the area of the hydrogeologic study previously conducted in Yucca Flat. This work was done to help refine what is known about the character, thickness, and lateral extent of pre-Tertiary confining units. In particular, a major goal was to define the upper clastic confining unit (UCCU). The UCCU comprises late Devonian to Mississippian siliciclastic rocks assigned to the Eleana Formation and Chainman Shale. The UCCU underlies the Yucca Flat area and extends westward towards Shoshone Mountain, southward to Buckboard Mesa, and northward to Rainier Mesa.
Late in 2005 we collected another 14 MT stations in Mid Valley and in northern Yucca Flat basin. That work was done to better determine the extent and thickness of the UCCU near the southeastern RM-SM CAU boundary with the southwestern YF CAU, and also in the northern YF CAU. The purpose of this report is to release the MT data at those 14 stations shown in figure 1. No interpretation of the data is included here.
Zhang, Rongxiao; Glaser, Adam K.; Gladstone, David J.; Fox, Colleen J.; Pogue, Brian W.
2013-01-01
Purpose: Čerenkov radiation emission occurs in all tissue when charged particles (either primary or secondary) travel at velocity above the threshold for the Čerenkov effect (about 220 keV in tissue for electrons). This study presents the first examination of optical Čerenkov emission as a surrogate for the absorbed superficial dose for MV x-ray beams. Methods: In this study, Monte Carlo simulations of flat and curved surfaces were studied to analyze the energy spectra of charged particles produced in different regions near the surfaces when irradiated by MV x-ray beams. Čerenkov emission intensity and radiation dose were directly simulated in voxelized flat and cylindrical phantoms. The sampling region of superficial dosimetry based on Čerenkov radiation was simulated in layered skin models. Angular distributions of optical emission from the surfaces were investigated. Tissue-mimicking phantoms with flat and curved surfaces were imaged with a time domain gating system. The beam field sizes (50 × 50–200 × 200 mm2), incident angles (0°–70°), and imaging regions were all varied. Results: The entrance or exit region of the tissue has nearly homogeneous energy spectra across the beam, such that their Čerenkov emission is proportional to dose. Directly simulated local intensity of Čerenkov emission and radiation dose in voxelized flat and cylindrical phantoms further validate that this signal is proportional to radiation dose, with an absolute average discrepancy within 2% and the largest within 5%, typically at the beam edges. The effective sampling depth could be tuned from near 0 up to 6 mm by spectral filtering. The angular profiles were close to the theoretical Lambertian emission distribution for a perfect diffusive medium, suggesting that angular correction of Čerenkov images may not be required even for curved surfaces.
The acquisition speed and signal-to-noise ratio of the time domain gating system were investigated for different acquisition procedures, and the results show there is good potential for real-time superficial dose monitoring. Dose imaging under normal ambient room lighting was validated using gated detection and a breast phantom. Conclusions: This study indicates that Čerenkov emission imaging might provide a valuable means of real-time superficial dosimetry imaging for external beam radiotherapy with megavoltage x-ray beams. PMID:24089916
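The ~220 keV electron threshold quoted above follows from requiring the particle speed to exceed the phase velocity of light in the medium (β > 1/n). A quick check, assuming a soft-tissue refractive index of about 1.4 (an illustrative value):

```python
import math

def cerenkov_threshold_kev(n, rest_energy_kev=511.0):
    """Kinetic-energy threshold for Cerenkov emission by an electron in a
    medium of refractive index n, from the condition beta > 1/n."""
    gamma = 1.0 / math.sqrt(1.0 - 1.0 / n**2)   # Lorentz factor at threshold
    return rest_energy_kev * (gamma - 1.0)       # kinetic energy = (gamma-1) m c^2
```

`cerenkov_threshold_kev(1.4)` gives roughly 219 keV, consistent with the ~220 keV figure in the abstract.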
NASA Astrophysics Data System (ADS)
Burkholder, E. F.
2016-12-01
One way to address the challenges of replacing NAD 83, NAVD 88, and IGLD 85 is to exploit the characteristics of 3-D digital spatial data. This presentation describes the 3-D global spatial data model (GSDM), which accommodates rigorous scientific endeavors while simultaneously supporting a local flat-earth view of the world. The GSDM is based upon the assumption of a single origin for 3-D spatial data and uses rules of solid geometry for manipulating spatial data components. This approach exploits the characteristics of 3-D digital spatial data and preserves the quality of geodetic measurements while providing spatial data users the option of working with rectangular flat-earth components and computational procedures for local applications. This flexibility is provided by using a bidirectional rotation matrix that allows any 3-D vector to be used in a geodetic reference frame for high-end applications and/or the local frame for flat-earth users. The GSDM is viewed as compatible with the datum products being developed by NGS and provides for unambiguous exchange of 3-D spatial data between disciplines and users worldwide. Three geometrical models will be summarized: geodetic, map projection, and 3-D. Geodetic computations are performed on an ellipsoid and are without equal in providing rigorous coordinate values for latitude, longitude, and ellipsoid height. Members of the user community have, for generations, sought ways to "flatten the world" to accommodate a flat-earth view and to avoid the complexity of working on an ellipsoid. Map projections have been defined for a wide variety of applications and remain very useful for visualizing spatial data. But the GSDM supports computations based on 3-D components that have not been distorted in a 2-D map projection. The GSDM does not invalidate either geodesy or cartographic computational processes but provides a geometrically correct view of any point cloud from any point selected by the user.
As a bonus, the GSDM also defines spatial data accuracy and includes procedures for establishing, tracking and using spatial data accuracy - increasingly important in many applications but especially relevant given development of procedures for tracking drones (primarily absolute) and intelligent vehicles (primarily relative).
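The bidirectional rotation described above, between a geocentric frame and a local flat-earth frame, is an orthogonal matrix, so the inverse rotation is simply the transpose. A minimal sketch of the standard ECEF-to-local east-north-up rotation (a generic illustration, not the GSDM's exact formulation):

```python
import math
import numpy as np

def ecef_to_enu_matrix(lat_deg, lon_deg):
    """Rotation taking ECEF vector components to local east-north-up
    components at the given geodetic latitude and longitude (degrees)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    sphi, cphi = math.sin(lat), math.cos(lat)
    slam, clam = math.sin(lon), math.cos(lon)
    return np.array([
        [-slam,         clam,         0.0],   # east
        [-sphi * clam, -sphi * slam, cphi],   # north
        [ cphi * clam,  cphi * slam, sphi],   # up
    ])
```

Because the matrix is orthogonal, its transpose carries local flat-earth components back to the geodetic frame, which is the "bidirectional" property the GSDM exploits.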
VizieR Online Data Catalog: Spectra of 7 Hα emission line stars in MBM 18 (Brand+ 2012)
NASA Astrophysics Data System (ADS)
Brand, J.; Wouterloot, J. G. A.; Magnani, L.
2012-10-01
Data in tabular form (wavelength and flux) are presented of the spectra of seven candidate Hα emission line stars in the direction of the translucent cloud MBM 18. The data were obtained on 5 different nights in 2009 and 2010 with the 3.58-m Telescopio Nazionale Galileo (TNG; La Palma, Canary Islands, Spain). The spectra are shown in the appendix of the paper, only visible in the online version. The spectra were taken with the low-resolution spectrograph DOLORES on the TNG, using long-slit spectroscopy. We used grism VHR-R, which covers a wavelength range of 6240-7720 Angstrom with a dispersion of 0.80Å/pix. The scale of the CCD detector is 0.252 arcsec/pixel. The observations were carried out with a slit width of 1 or 1.5 arcsec, depending on the seeing, resulting in a spectral resolution of 3.2Å and 4.8Å, respectively. To avoid problems with cosmic rays, 2 to 4 separate spectra per star were obtained. Two of the stars (Ha4 and Ha6) were observed simultaneously with another target (Ha1 and Ha5, respectively) by positioning the slit at an appropriate angle. The integration time was based on the brighter star in the slit, thus the signal-to-noise ratio for the other target is smaller than for the primary one. To allow absolute flux calibration, the standard star Feige24 or Feige34 (for Ha5-Ha6) was observed immediately before or after the target observations, using the same instrumental setup as for the target observations. Flat-fielding was performed using 10 (5 for Ha5-Ha6) frames, which were uniformly illuminated by a halogen lamp. Wavelength calibration was performed using an arc-spectrum of an Ar, Ne+Hg, and Kr lamp, or a Ne+Hg (for Ha7) comparison lamp. A bias frame, to be subtracted from the other frames before analysis, was constructed from ten individual bias frames. Flat-, arc-, and bias-frames were obtained on the same day as the science observations and with the same instrumental setup. Data were reduced with the IRAF package.
From all science frames a bias was subtracted, after which they were divided by the normalised flat field. From each of the science frames the trace(s) of the star(s) were extracted and these were wavelength-calibrated using one of the frames with the arc-spectrum. Each target was wavelength-calibrated with the arc-spectrum extracted at the same location on the detector, to compensate for small deviations that might occur in the alignment of the reference emission lines across the detector. The spectra were then corrected for extinction, and flux-calibrated using the standard star observations. The individual one-dimensional wavelength- and flux-calibrated spectra of each target were then averaged into a final spectrum. To further correct the wavelength calibration, we used the sky lines that were subtracted from the stellar spectra. For each spectrum, Gaussian fits were made to tens of sky lines, and their wavelengths were compared to those listed in Osterbrock et al. (1996PASP..108..277O, Cat. III/211). Three stars were found to need a small correction: Ha2 (-1.5Å) and Ha5 and Ha6 (both -2.2Å); these corrections have been applied in the tables. For the other four stars the difference was negligible, although for the sky lines in Ha1 and Ha4 (which were observed in the same slit) the deviation between measured and literature wavelengths varied slightly, but systematically, with wavelengths between 6250Å and 7600Å, while at longer wavelengths the deviations became rapidly larger (up to several Angstroms). (8 data files).
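The first reduction steps described above (bias subtraction followed by division by the normalised flat field) can be sketched with synthetic frames; this is a generic illustration of the arithmetic, not the IRAF tasks actually used:

```python
import numpy as np

def reduce_frame(science, bias, flat):
    """Bias-subtract a science frame and divide by the flat field,
    normalising the (bias-subtracted) flat to unit mean so the overall
    flux scale of the science frame is preserved."""
    flat_resp = flat - bias                 # pixel-to-pixel response
    flat_norm = flat_resp / flat_resp.mean()
    return (science - bias) / flat_norm
```

Dividing by the unit-mean flat removes pixel-to-pixel sensitivity variations: a uniformly illuminated source comes out uniform after reduction.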
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haro, Jaume; Amorós, Jaume, E-mail: jaime.haro@upc.edu, E-mail: jaume.amoros@upc.edu
2014-12-01
We consider the matter bounce scenario in F(T) gravity and Loop Quantum Cosmology (LQC) for phenomenological potentials that at early times provide a nearly matter dominated Universe in the contracting phase, have a reheating mechanism in the expanding or contracting phase, i.e., are able to release the energy of the scalar field creating particles that thermalize in order to match with the hot Friedmann Universe, and finally at late times lead to the current cosmic acceleration. For these potentials, numerically solving the dynamical perturbation equations, we have seen that, for the particular F(T) model that we will name the teleparallel version of LQC, and whose modified Friedmann equation coincides with the corresponding one in holonomy corrected LQC when one deals with the flat Friedmann-Lemaître-Robertson-Walker (FLRW) geometry, the corresponding equations obtained from the well-known perturbed equations in F(T) gravity lead to theoretical results that fit well with current observational data. More precisely, in this teleparallel version of LQC there is a set of solutions which leads to theoretical results that match correctly with the latest BICEP2 data, and there is another set whose theoretical results fit well with Planck's experimental data. On the other hand, in the standard holonomy corrected LQC, using the perturbed equations obtained by replacing the Ashtekar connection with a suitable sine function and inserting some counter-terms in order to preserve the algebra of constraints, the theoretical value of the tensor/scalar ratio is smaller than in the teleparallel version, which means that there is always a set of solutions that matches Planck's data, but for some potentials the BICEP2 experimental results disfavour holonomy corrected LQC.
MO-DE-BRA-02: SIMAC: A Simulation Tool for Teaching Linear Accelerator Physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlone, M; Harnett, N; Department of Radiation Oncology, University of Toronto, Toronto, Ontario
Purpose: The first goal of this work is to develop software that can simulate the physics of linear accelerators (linacs). The second goal is to show that this simulation tool is effective in teaching linac physics to medical physicists and linac service engineers. Methods: Linacs were modeled using analytical expressions that can correctly describe the physical response of a linac to parameter changes in real time. These expressions were programmed with a graphical user interface in order to produce an environment similar to that of linac service mode. The software, “SIMAC”, has been used as a learning aid in a professional development course 3 times (2014–2016) as well as in a physics graduate program. Exercises were developed to supplement the didactic components of the courses, consisting of activities designed to reinforce the concepts of beam loading; the effect of steering coil currents on beam symmetry; and the relationship between beam energy and flatness. Results: SIMAC was used to teach 35 professionals (medical physicists; regulators; service engineers; 1-week course) as well as 20 graduate students (1-month project). In the student evaluations, 85% of the students rated the effectiveness of SIMAC as very good or outstanding, and 70% rated the software as the most effective part of the courses. Exercise results were collected showing that 100% of the students were able to use the software correctly. In exercises involving gross changes to linac operating points (i.e., energy changes), the majority of students were able to correctly perform these beam adjustments. Conclusion: Software simulation (SIMAC) can be used to effectively teach linac physics. In short courses, students were able to correctly make gross parameter adjustments that typically require much longer training times using conventional training methods.
A method for obtaining distributed surface flux measurements in complex terrain
NASA Astrophysics Data System (ADS)
Daniels, M. H.; Pardyjak, E.; Nadeau, D. F.; Barrenetxea, G.; Brutsaert, W. H.; Parlange, M. B.
2011-12-01
Sonic anemometers and gas analyzers can be used to measure fluxes of momentum, heat, and moisture over flat terrain, and with the proper corrections, over sloping terrain as well. While this method of obtaining fluxes is currently the most accurate available, the instruments themselves are costly, making installation of many stations impossible for most campaign budgets. Small, commercial automatic weather stations (Sensorscope) are available at a fraction of the cost of sonic anemometers or gas analyzers. Sensorscope stations use slow-response instruments to measure standard meteorological variables, including wind speed and direction, air temperature, humidity, surface skin temperature, and incoming solar radiation. The method presented here makes use of one sonic anemometer and one gas analyzer along with a dozen Sensorscope stations installed throughout the Val Ferret catchment in southern Switzerland in the summers of 2009, 2010 and 2011. Daytime fluxes are calculated using Monin-Obukhov similarity theory in conjunction with the surface energy balance at each Sensorscope station as well as at the location of the sonic anemometer and gas analyzer, where a suite of additional slow-response instruments were co-located. Corrections related to slope angle were made for wind speeds and incoming shortwave radiation measured by the horizontally-mounted cup anemometers and incoming solar radiation sensors respectively. A temperature correction was also applied to account for daytime heating inside the radiation shield on the slow-response temperature/humidity sensors. With these corrections, we find a correlation coefficient of 0.77 between u* derived using Monin-Obukhov similarity theory and that of the sonic anemometer. Calculated versus measured heat fluxes also compare well and local patterns of latent heat flux and measured surface soil moisture are correlated.
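As a simplified illustration of the similarity-theory step, under neutral stratification the friction velocity u* follows directly from the logarithmic wind profile; the full method used in the campaign also applies stability corrections via the Obukhov length, which are omitted in this sketch:

```python
import math

KAPPA = 0.4  # von Karman constant

def friction_velocity_neutral(wind_speed, z, z0):
    """u* from the neutral log-law U(z) = (u*/kappa) * ln(z / z0),
    with z the measurement height and z0 the roughness length (both in m)."""
    return KAPPA * wind_speed / math.log(z / z0)
```

For example, a 5 m/s cup-anemometer wind at 2 m height over a surface with z0 = 0.01 m gives u* of roughly 0.38 m/s; the station-derived values would then be compared against the sonic anemometer, as in the 0.77 correlation reported above.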
Non-linear power spectra in the synchronous gauge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hwang, Jai-chan; Noh, Hyerim; Jeong, Donghui
2015-05-01
We study the non-linear corrections to the matter and velocity power spectra in the synchronous gauge (SG). For the leading correction to the non-linear power spectra, we consider the perturbations up to third order in a zero-pressure fluid in a flat cosmological background. Although the equations in the SG happen to coincide with those in the comoving gauge (CG) to linear order, they differ from second order. In particular, the second order hydrodynamic equations in the SG are apparently in the Lagrangian form, whereas those in the CG are in the Eulerian form. The non-linear power spectra naively presented in the original SG show rather pathological behavior quite different from the result of the Newtonian theory even on sub-horizon scales. We show that the pathology in the non-linear power spectra is due to the absence of the convective terms in, thus the Lagrangian nature of, the SG. We show that there are many different ways of introducing the corrective convective terms in the SG equations. However, the convective terms (Eulerian modification) can be introduced only through gauge transformations to other gauges which should be the same as the CG to the second order. In our previous works we have shown that the density and velocity perturbation equations in the CG exactly coincide with the Newtonian equations to the second order, and the pure general relativistic correction terms starting to appear from the third order are substantially suppressed compared with the relativistic/Newtonian terms in the power spectra. As a result, we conclude that the SG per se is an inappropriate coordinate choice in handling the non-linear matter and velocity power spectra of the large-scale structure where observations meet with theories.
49 CFR 173.175 - Permeation devices.
Code of Federal Regulations, 2011 CFR
2011-10-01
..., flat and horizontal surface from a height of 1.8 m (5.9 feet): (i) One drop flat on the bottom; (ii) One drop flat on the top; (iii) One drop flat on the long side; (iv) One drop flat on the short side... stacked to a height of 3 m (10 feet) (including the test sample). (3) Each of the above tests may be...
49 CFR 178.609 - Test requirements for packagings for infectious substances.
Code of Federal Regulations, 2011 CFR
2011-10-01
... free-fall drops onto a rigid, nonresilient, flat, horizontal surface from a height of 9 m (30 feet... must be dropped, one in each of the following orientation: (i) Flat on the base; (ii) Flat on the top; (iii) Flat on the longest side; (iv) Flat on the shortest side; and (v) On a corner. (2) Where the...
49 CFR 173.175 - Permeation devices.
Code of Federal Regulations, 2014 CFR
2014-10-01
..., flat and horizontal surface from a height of 1.8 m (5.9 feet): (i) One drop flat on the bottom; (ii) One drop flat on the top; (iii) One drop flat on the long side; (iv) One drop flat on the short side; (v) One drop on a corner at the junction of three intersecting edges; and (2) A force applied to the...
49 CFR 173.175 - Permeation devices.
Code of Federal Regulations, 2013 CFR
2013-10-01
..., flat and horizontal surface from a height of 1.8 m (5.9 feet): (i) One drop flat on the bottom; (ii) One drop flat on the top; (iii) One drop flat on the long side; (iv) One drop flat on the short side; (v) One drop on a corner at the junction of three intersecting edges; and (2) A force applied to the...
NASA Astrophysics Data System (ADS)
Tsuchiya, Yuichiro; Kodera, Yoshie
2006-03-01
In the picture archiving and communication system (PACS) environment, it is important that all images be stored in the correct location. However, if information such as the patient's name or identification number has been entered incorrectly, the error is difficult to notice. The present study was performed to develop a system for automatic patient collation in dynamic radiographic examinations by kinetic analysis, and to evaluate its performance. Dynamic chest radiographs during respiration were obtained using a modified flat panel detector system. The computer algorithm developed in this study consisted of two main procedures: kinetic map image processing and collation processing. Kinetic map processing is a new algorithm to visualize movement in dynamic radiography; it performs direction classification of optical flows followed by an intensity-density transformation. Collation processing consisted of analysis with an artificial neural network (ANN) and discrimination based on Mahalanobis' generalized distance; these procedures evaluate the similarity of image combinations from the same person. Finally, we investigated the performance of our system using radiographs of eight healthy volunteers, expressed as sensitivity and specificity. The sensitivity and specificity of our system were both 100%, indicating excellent performance in recognizing a patient. Our system will be useful in PACS management for dynamic chest radiography.
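The abstract does not give implementation details, but the Mahalanobis-distance discrimination step can be sketched minimally as follows. The feature vectors, class statistics, and threshold logic here are purely illustrative assumptions, not the authors' actual algorithm:

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis generalized distance of feature vector x from a class
    described by its mean vector and covariance matrix."""
    diff = x - mean
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

# Illustrative feature vectors (e.g., ANN similarity scores per image pair)
same_person = np.array([[0.90, 0.80], [0.85, 0.90], [0.95, 0.85]])
mean = same_person.mean(axis=0)
cov = np.cov(same_person, rowvar=False)

query = np.array([0.88, 0.86])   # close to the "same person" class
outlier = np.array([0.20, 0.30]) # a mismatched combination
print(mahalanobis(query, mean, cov) < mahalanobis(outlier, mean, cov))  # True
```

A real system would set a distance threshold on validation data; here the comparison merely shows that a matching combination scores a much smaller distance than a mismatch.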
An Improved Interferometric Calibration Method Based on Independent Parameter Decomposition
NASA Astrophysics Data System (ADS)
Fan, J.; Zuo, X.; Li, T.; Chen, Q.; Geng, X.
2018-04-01
Interferometric SAR is sensitive to earth surface undulation. The accuracy of interferometric parameters plays a significant role in producing a precise digital elevation model (DEM). Interferometric calibration aims to obtain a high-precision global DEM by calculating the interferometric parameters using ground control points (GCPs). However, interferometric parameters are always calculated jointly, making them difficult to decompose precisely. In this paper, we propose an interferometric calibration method based on independent parameter decomposition (IPD). Firstly, the parameters related to the interferometric SAR measurement are determined based on the three-dimensional reconstruction model. Secondly, the sensitivity of the interferometric parameters is quantitatively analyzed after the geometric parameters are completely decomposed. Finally, each interferometric parameter is calculated based on IPD and an interferometric calibration model is established. We take Weinan of Shanxi province as an example and choose 4 TerraDEM-X image pairs to carry out an interferometric calibration experiment. The results show that the elevation accuracy of all SAR images is better than 2.54 m after interferometric calibration. Furthermore, the proposed method can obtain an accuracy of DEM products better than 2.43 m in the flat area and 6.97 m in the mountainous area, which demonstrates the correctness and effectiveness of the proposed IPD-based interferometric calibration method. The results provide a technical basis for topographic mapping at 1:50,000 and even larger scales in flat and mountainous areas.
Geodetic Constraints on Fault Slip Rates and Seismic Hazard in the Greater Las Vegas Area
NASA Astrophysics Data System (ADS)
Hammond, W. C.; Kreemer, C.; Blewitt, G.; Broermann, J.; Bennett, R. A.
2014-12-01
We address fundamental questions about how contemporary tectonic deformation of the crust in the southern Great Basin occurs in the region around Las Vegas (LV) Nevada, western Arizona and eastern California. This area lies in the intersection of the eastern Walker Lane Belt, southern Great Basin and western Colorado Plateau (CP), sharing features of transtensional and extensional deformation associated with Pacific/North America relative motion. We use GPS data collected from 48 stations of the MAGNET semi-continuous network and 77 stations from continuous networks including BARGEN and EarthScope Plate Boundary Observatory. MAGNET stations have been observed for a minimum of 7 years, while most continuous stations have longer records. From these data we estimate the velocity of crustal motion for all stations with respect to the stable North America reference frame NA12. To correct for transients from recent large earthquakes including the 1999 Hector Mine and 2010 El Mayor-Cucapah events we use models of co- and post-seismic deformation, subtracting the predicted motions from the time series before estimating interseismic strain rates. We find approximately 2 mm/yr of relative motion distributed over 200 km centered on Las Vegas, with a mean strain accumulation rate of 10 × 10⁻⁹ yr⁻¹, with lower rates of predominantly extensional strain to the east and higher rates of predominantly shear deformation to the west. The mean strain rate is lower than that of the western Walker Lane but about twice that of eastern Nevada where e.g., the Wells, NV MW 6.0 earthquake occurred in 2008. From this new velocity field we generated a horizontal tensor strain rate map and a crustal block motion model to portray the transition of active strain from the CP into the Walker Lane.
For faults in the Las Vegas Valley, including the Eglington Fault and Frenchman Mountain Fault, the observed velocity gradients and model results are consistent with normal slip rates of 0.2 mm/yr, which are typical for the region. The Stateline Fault system experiences dextral slip of at least 0.4 mm/yr while normal faults south of LV collectively accommodate 0.9 mm/yr of east-west extension across a zone ~150 km wide. We see no evidence for concentrations of deformation or isolated rigid microplates within this zone.
NASA Astrophysics Data System (ADS)
Nutku, Y.; Sheftel, M. B.
2014-02-01
This is a corrected and essentially extended version of the unpublished manuscript by Y Nutku and M Sheftel which contains new results. It is proposed to be published in honour of Y Nutku’s memory. All corrections and new results in sections 1, 2 and 4 are due to M Sheftel. We present new anti-self-dual exact solutions of the Einstein field equations with Euclidean and neutral (ultra-hyperbolic) signatures that admit only one rotational Killing vector. Such solutions of the Einstein field equations are determined by non-invariant solutions of Boyer-Finley (BF) equation. For the case of Euclidean signature such a solution of the BF equation was first constructed by Calderbank and Tod. Two years later, Martina, Sheftel and Winternitz applied the method of group foliation to the BF equation and reproduced the Calderbank-Tod solution together with new solutions for the neutral signature. In the case of Euclidean signature we obtain new metrics which asymptotically locally look like a flat space and have a non-removable singular point at the origin. In the case of ultra-hyperbolic signature there exist three inequivalent forms of metric. Only one of these can be obtained by analytic continuation from the Calderbank-Tod solution whereas the other two are new.
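For reference, the Boyer-Finley (BF) equation referred to above, in one commonly used form (sign and variable conventions differ between authors, so this is indicative only):

```latex
% Boyer-Finley (SU(infinity) Toda) equation for u = u(x, y, t):
u_{xx} + u_{yy} = \left(e^{u}\right)_{tt}
% Non-invariant solutions u determine anti-self-dual metrics with one
% rotational Killing vector.
```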
An eigenvalue approach to quantum plasmonics based on a self-consistent hydrodynamics method
NASA Astrophysics Data System (ADS)
Ding, Kun; Chan, C. T.
2018-02-01
Plasmonics has attracted much attention not only because it has useful properties such as strong field enhancement, but also because it reveals the quantum nature of matter. To handle quantum plasmonics effects, ab initio packages or empirical Feibelman d-parameters have been used to explore the quantum correction of plasmonic resonances. However, most of these methods are formulated within the quasi-static framework. The self-consistent hydrodynamics model offers a reliable approach to study quantum plasmonics because it can incorporate the quantum effect of the electron gas into classical electrodynamics in a consistent manner. Instead of the standard scattering method, we formulate the self-consistent hydrodynamics method as an eigenvalue problem to study quantum plasmonics with electrons and photons treated on the same footing. We find that the eigenvalue approach must involve a global operator, which originates from the energy functional of the electron gas. This manifests the intrinsic nonlocality of the response of quantum plasmonic resonances. Our model gives the analytical forms of quantum corrections to plasmonic modes, incorporating quantum electron spill-out effects and electrodynamical retardation. We apply our method to study the quantum surface plasmon polariton for a single flat interface.
Asymptotic theory of two-dimensional trailing-edge flows
NASA Technical Reports Server (NTRS)
Melnik, R. E.; Chow, R.
1975-01-01
Problems of laminar and turbulent viscous interaction near trailing edges of streamlined bodies are considered. Asymptotic expansions of the Navier-Stokes equations in the limit of large Reynolds numbers are used to describe the local solution near the trailing edge of cusped or nearly cusped airfoils at small angles of attack in compressible flow. A complicated inverse iterative procedure, involving finite-difference solutions of the triple-deck equations coupled with asymptotic solutions of the boundary values, is used to accurately solve the viscous interaction problem. Results are given for the correction to the boundary-layer solution for drag of a finite flat plate at zero angle of attack and for the viscous correction to the lift of an airfoil at incidence. A rational asymptotic theory is developed for treating turbulent interactions near trailing edges and is shown to lead to a multilayer structure of turbulent boundary layers. The flow over most of the boundary layer is described by a Lighthill model of inviscid rotational flow. The main features of the model are discussed and a sample solution for the skin friction is obtained and compared with the data of Schubauer and Klebanoff for a turbulent flow in a moderately large adverse pressure gradient.
Direct measurements of local bed shear stress in the presence of pressure gradients
NASA Astrophysics Data System (ADS)
Pujara, Nimish; Liu, Philip L.-F.
2014-07-01
This paper describes the development of a shear plate sensor capable of directly measuring the local mean bed shear stress in small-scale and large-scale laboratory flumes. The sensor is capable of measuring bed shear stress in the range 200 Pa with an accuracy up to 1 %. Its size, 43 mm in the flow direction, is designed to be small enough to give spatially local measurements, and its bandwidth, 75 Hz, is high enough to resolve time-varying forcing. Typically, shear plate sensors are restricted to use in zero pressure gradient flows because secondary forces on the edge of the shear plate caused by pressure gradients can introduce large errors. However, by analysis of the pressure distribution at the edges of the shear plate in mild pressure gradients, we introduce a new methodology for correcting for the pressure gradient force. The developed sensor includes pressure tappings to measure the pressure gradient in the flow, and the methodology for correction is applied to obtain accurate measurements of bed shear stress under solitary waves in a small-scale wave flume. The sensor is also validated by measurements in a turbulent flat plate boundary layer in open channel flow.
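The pressure-gradient correction concept can be sketched as follows: the force measured by the plate includes a contribution from pressure acting on its exposed edges, so the true bed shear stress is recovered by subtracting an edge force estimated from the measured pressure gradient. The geometry, symbol names, and numbers below are illustrative assumptions, not the paper's actual calibration:

```python
def bed_shear_stress(F_measured, dpdx, plate_length, plate_width, edge_height):
    """Recover local mean bed shear stress from a shear plate measurement.

    F_measured  : total streamwise force on the plate [N]
    dpdx        : streamwise pressure gradient from the tappings [Pa/m]
    edge_height : exposed edge height of the plate [m]

    The pressure difference across the plate (dpdx * plate_length) acts on
    the edge area (edge_height * plate_width); subtracting that edge force
    leaves the shear contribution. Simplified illustrative model only.
    """
    area = plate_length * plate_width
    F_edge = dpdx * plate_length * edge_height * plate_width
    return (F_measured - F_edge) / area

# Example: 43 mm plate in a mild adverse pressure gradient
tau = bed_shear_stress(F_measured=0.05, dpdx=100.0,
                       plate_length=0.043, plate_width=0.043,
                       edge_height=0.002)
```

Without the edge-force subtraction, the same measurement would overstate the shear stress, which is why the sensor carries its own pressure tappings.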
NASA Technical Reports Server (NTRS)
Neigh, Christopher; McCorkel, Joel; Campbell, Petya; Ong, Laurence; Ly, Vuong; Landis, David; Fry, Stuart; Middleton, Elizabeth
2016-01-01
Spaceborne spectrometers require spectral-temporal stability characterization to aid validation of derived data products. EO-1 began orbital precession in 2011 after exhausting onboard fuel resources. In the Libya-4 Pseudo Invariant Calibration Site (PICS) this resulted in a progressive shift from a mean local equatorial crossing time of approx. 10:00 AM in 2011 to approx. 8:30 AM in late 2015. Here, we studied precession impacts to Hyperion surface reflectance products using three atmospheric correction approaches from 2004 to 2015. Combined difference estimates of surface reflectance were < 5% in the visible near infrared (VNIR) and < 10% for most of the shortwave infrared (SWIR). Combined coefficient of variation (CV) estimates in the VNIR ranged from 0.025–0.095, and in the SWIR ranged from 0.025–0.06, excluding bands near atmospheric absorption features. Reflectances produced with different atmospheric models were correlated (R²) in VNIR from 0.25–0.94 and SWIR from 0.12–0.88 (p < 0.01). The uncertainties in all models increased with terrain slope up to 15°, and selecting dune flats could reduce errors. We conclude that these data remain a useful resource over this period.
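The coefficient of variation used as the stability metric above is simply the standard deviation over the mean of a band's reflectance time series; a minimal sketch with made-up illustrative values (not the study's data):

```python
import numpy as np

def coefficient_of_variation(series):
    """CV = sample standard deviation / mean; a unitless measure of
    temporal stability for a reflectance time series."""
    series = np.asarray(series, dtype=float)
    return series.std(ddof=1) / series.mean()

# Illustrative Libya-4-style VNIR band reflectances over several acquisitions
vnir_band = [0.42, 0.44, 0.43, 0.45, 0.41]
cv = coefficient_of_variation(vnir_band)
print(round(cv, 3))  # ≈ 0.037, within the VNIR range quoted above
```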
NASA Astrophysics Data System (ADS)
Bristow, Quentin
1990-03-01
The occurrence rates of pulse strings, or sequences of pulses with interarrival times less than the resolving time of the pulse-height analysis system used to acquire spectra, are derived from theoretical considerations. Logic circuits were devised to make experimental measurements of multiple pulse string occurrence rates in the output from a scintillation detector over a wide range of count rates. Markov process theory was used to predict state transition rates in the logic circuits, enabling the experimental data to be checked rigorously for conformity with those predicted for a Poisson distribution. No fundamental discrepancies were observed. Monte Carlo simulations, incorporating criteria for pulse pileup inherent in the operation of modern analog-to-digital converters, were used to generate pileup spectra due to coincidences between two pulses (first-order pileup) and three pulses (second-order pileup) for different semi-Gaussian pulse shapes. Coincidences between pulses in a single channel produced a basic probability density function spectrum. Using a flat input spectrum, first-order pileup was shown to distort the spectrum into a linear ramp with a pileup tail. A correction algorithm was successfully applied to correct entire spectra (simulated and real) for first- and second-order pileup.
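For Poisson arrivals at rate r, the probability that a pulse's successor arrives within the resolving time τ is 1 − exp(−rτ), which sets the first-order pileup fraction. A quick Monte Carlo check of that relation (a simplified sketch, not the paper's circuit-level model):

```python
import math
import random

def pileup_fraction(rate, tau, n=200_000, seed=1):
    """Fraction of Poisson-distributed pulses whose successor arrives within
    the resolving time tau (first-order pileup). Interarrival times for a
    Poisson process are exponentially distributed with parameter `rate`."""
    rng = random.Random(seed)
    piled = sum(1 for _ in range(n) if rng.expovariate(rate) < tau)
    return piled / n

rate, tau = 50_000.0, 2e-6            # 50 kcps, 2 microsecond resolving time
simulated = pileup_fraction(rate, tau)
theory = 1.0 - math.exp(-rate * tau)  # ≈ 0.095 at these illustrative values
```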
Surface Control of Cold Hibernated Elastic Memory Self-Deployable Structure
NASA Technical Reports Server (NTRS)
Sokolowski, Witold M.; Ghaffarian, Reza
2006-01-01
A new class of simple, reliable, lightweight, low packaging volume and cost, self-deployable structures has been developed for use in space and commercial applications. This technology, called 'cold hibernated elastic memory' (CHEM), utilizes shape memory polymers (SMP) in open cellular (foam) structures or sandwich structures made of shape memory polymer foam cores and polymeric composite skins. Some of the many potential CHEM space applications require high-precision deployment and surface accuracy during operation. However, a CHEM structure can be slightly distorted by thermo-mechanical processing as well as by the thermal space environment. Therefore, a sensor system is desirable to monitor and correct potential surface imperfections. During these studies, the surface control of CHEM smart structures was demonstrated using a Macro-Fiber Composite (MFC) actuator developed by NASA LaRC and the US Army ARL. The test results indicate that the MFC actuator performed well before and after processing cycles. It reduced some residual compressive strain, which in turn corrected very small shape distortions after each processing cycle. The integrated precision strain gages detected only a small flat-shape imperfection, indicating good recoverability of the original shape of the CHEM test structure.
NASA MUST Paper: Infrared Thermography of Graphite/Epoxy
NASA Technical Reports Server (NTRS)
Comeaux, Kayla; Koshti, Ajay
2010-01-01
The focus of this project is to use Infrared Thermography, a non-destructive test, to detect detrimental cracks and voids beneath the surface of materials used in the space program. The project consists of developing a simulation model of the Infrared Thermography inspection of a Graphite/Epoxy specimen. The simulation entails finding the correct physical properties for the specimen as well as programming the model for thick voids or flat bottom holes. After the simulation is completed, an Infrared Thermography inspection of the actual specimen will be made. Upon acquiring the experimental test data, the data will be analyzed, including image analysis, graphical analysis, and analysis of the numerical data received from the infrared camera. The simulation will then be corrected for any discrepancies between it and the actual experiment. The optimized simulation material property inputs can then be used in a new simulation for thin voids. The comparison of the two simulations, one for the thick void and one for the thin void, provides a correlation between the peak contrast ratio and the peak time ratio. This correlation is used in the evaluation of flash thermography data during the evaluation of delaminations.
The Snapshot A-Star SurveY (SASSY)
NASA Astrophysics Data System (ADS)
Garani, Jasmine; Nielsen, Eric L.; Marchis, Franck; Liu, Michael C.; Macintosh, Bruce; Rajan, Abhijith; De Rosa, Robert J.; Wang, Jason; Esposito, Thomas; Best, William M. J.; Bowler, Brendan P.; Dupuy, Trent J.; Ruffio, Jean-Baptise
2017-01-01
We present the Snapshot A-Star SurveY (SASSY), an adaptive optics survey conducted using NIRC2 on the Keck II telescope to search for young, self-luminous planets and brown dwarfs (M > 5MJup) around high mass stars (M > 1.5 M⊙). We describe a custom data-reduction pipeline developed for the coronagraphic observations of our 200 target stars. Our data analysis method includes basic near infrared data processing (flat-field correction, bad pixel removal, distortion correction) as well as performing PSF subtraction through a Reference Differential Imaging algorithm based on a library of PSFs derived from the observations using the pyKLIP routine. We present early results from the survey including planet and brown dwarf candidates and the status of ongoing follow-up observations. Utilizing the high contrast of Keck NIRC2 coronagraphic observations, SASSY reaches sensitivity to brown dwarfs and planetary mass companions at separations between 0.6'' and 4''. With over 200 stars observed we are tripling the number of high-mass stars imaged at these contrasts and sensitivities compared to previous surveys. This work was supported by the NSF REU program at the SETI Institute and NASA grant NNX14AJ80G.
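The basic reduction steps named above (flat-field correction, bad pixel removal) can be sketched as follows. This is a generic minimal example, not the actual SASSY pipeline; a real pipeline would use calibrated darks/flats per filter and local-neighborhood bad-pixel interpolation:

```python
import numpy as np

def reduce_frame(raw, dark, flat, bad_pixel_mask):
    """Basic near-IR frame reduction: dark subtraction, flat-field division,
    and replacement of flagged bad pixels. Illustrative only."""
    flat_norm = (flat - dark) / np.median(flat - dark)  # normalized flat
    corrected = (raw - dark) / flat_norm
    # Replace bad pixels with the median of the good pixels (a real pipeline
    # would use a local neighborhood median instead of the global one).
    corrected[bad_pixel_mask] = np.median(corrected[~bad_pixel_mask])
    return corrected

# Tiny synthetic frames: uniform scene, one hot pixel in the flat
raw = np.full((4, 4), 110.0)
dark = np.full((4, 4), 10.0)
flat = np.full((4, 4), 60.0); flat[0, 0] = 110.0
bad = np.zeros((4, 4), dtype=bool); bad[0, 0] = True
out = reduce_frame(raw, dark, flat, bad)   # uniform output after correction
```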
Carr, W.J.; Byers, F.M.; Orkild, Paul P.
1984-01-01
The Crater Flat Tuff is herein revised to include a newly recognized lowest unit, the Tram Member, exposed at scattered localities in the southwest Nevada Test Site region, and in several drill holes in the Yucca Mountain area. The overlying Bullfrog and Prow Pass Members are well exposed at the type locality of the formation near the southeast edge of Crater Flat, just north of U.S. Highway 95. In previous work, the Tram Member was thought to be the Bullfrog Member, and therefore was shown as Bullfrog or as undifferentiated Crater Flat Tuff on published maps. The revised Crater Flat Tuff is stratigraphically below the Topopah Spring Member of the Paintbrush Tuff and above the Grouse Canyon Member of the Belted Range Tuff, and is approximately 13.6 m.y. old. Drill holes on Yucca Mountain and near Fortymile Wash penetrate all three members of the Crater Flat as well as an underlying quartz-poor unit, which is herein defined as the Lithic Ridge Tuff from exposures on Lithic Ridge near the head of Topopah Wash. In outcrops between Calico Hills and Yucca Flat, the Lithic Ridge Tuff overlies a Bullfrog-like unit of reverse magnetic polarity that probably correlates with a widespread unit around and under Yucca Flat, referred to previously as Crater Flat Tuff. This unit is here informally designated as the tuff of Yucca Flat. Although older, it may be genetically related to the Crater Flat Tuff. Although the rocks are poorly exposed, geophysical and geologic evidence to date suggests that (1) the source of the Crater Flat Tuff is a caldera complex in the Crater Flat area between Yucca Mountain and Bare Mountain, and (2) there are at least two cauldrons within this complex--one probably associated with eruption of the Tram, the other with the Bullfrog and Prow Pass Members. The complex is named the Crater Flat-Prospector Pass caldera complex. 
The northern part of the Yucca Mountain area is suggested as the general location of the source of pre-Crater Flat tuffs, but a caldera related to the Lithic Ridge Tuff has not been specifically identified.
NASA Technical Reports Server (NTRS)
Carrere, Veronique; Abrams, Michael J.
1988-01-01
Airborne Visible and Infrared Imaging Spectrometer (AVIRIS) data were acquired over the Goldfield Mining District, Nevada, in September 1987. Goldfield is one of the group of large epithermal precious metal deposits in Tertiary volcanic rocks, associated with silicic volcanism and caldera formation. Hydrothermal alteration consists of silicification along fractures, advanced argillic and argillic zones further away from veins, and more widespread propylitic zones. An evaluation of AVIRIS data quality was performed. Faults in the data, related to engineering problems and a different behavior of the instrument while on board the U2, were encountered. Consequently, a decision was made to use raw data and correct them only for dark current variations and detector read-out delays. New software was written to that effect. Atmospheric correction was performed using the flat field correction technique. Analysis of the data was then performed to extract spectral information, mainly concentrating on the 2 to 2.45 micron window, as the alteration minerals of interest have their distinctive spectral reflectance features in this region. Principally kaolinite and alunite spectra were clearly obtained. Mapping of the different minerals and alteration zones was attempted using ratios and clustering techniques. Poor signal-to-noise performance of the instrument and the lack of appropriate software prevented the production of an alteration map of the area. Spectra extracted locally from the AVIRIS data were checked in the field by collecting representative samples of the outcrops.
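The flat field correction technique divides each pixel's spectrum by the mean spectrum of a spectrally bland, bright reference area, cancelling multiplicative atmospheric and instrument effects so that relative absorption features stand out. A minimal sketch (array shapes and values are illustrative assumptions):

```python
import numpy as np

def flat_field_correction(cube, flat_rows, flat_cols):
    """Relative-reflectance retrieval by the flat field method: divide every
    pixel spectrum by the mean spectrum of a spectrally flat reference area.
    cube: (rows, cols, bands) radiance array. Illustrative only."""
    reference = cube[flat_rows, flat_cols, :].mean(axis=0)
    return cube / reference  # broadcasts over (rows, cols, bands)

# Tiny synthetic cube: 2x2 pixels, 3 bands, sharing one atmospheric shape
atmosphere = np.array([1.0, 0.6, 0.8])
cube = np.ones((2, 2, 3)) * atmosphere
cube[0, 0, :] *= [1.0, 0.5, 1.0]  # an "absorption feature" in one pixel
rel = flat_field_correction(cube, flat_rows=[1, 1], flat_cols=[0, 1])
# rel[0, 0] retains the feature; the reference pixels flatten to 1.0
```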
Stiffness of frictional contact of dissimilar elastic solids
Lee, Jin Haeng; Gao, Yanfei; Bower, Allan F.; ...
2017-12-22
The classic Sneddon relationship between the normal contact stiffness and the contact size is valid for axisymmetric, frictionless contact, in which the two contacting solids are approximated by elastic half-spaces. Deviation from this result critically affects the accuracy of the load and displacement sensing nanoindentation techniques. This study gives a thorough numerical and analytical investigation of corrections needed to the Sneddon solution when finite Coulomb friction exists between an elastic half-space and a flat-ended rigid punch with circular or noncircular shape. Because of linearity of the Coulomb friction, the correction factor is found to be a function of the friction coefficient, Poisson's ratio, and the contact shape, but independent of the contact size. Two issues are of primary concern in the finite element simulations – adequacy of the mesh near the contact edge and the friction implementation methodology. Although the stick or slip zone sizes are quite different from the penalty or Lagrangian methods, the calculated contact stiffnesses are almost the same and may be considerably larger than those in Sneddon's solution. For circular punch contact, the numerical solutions agree remarkably well with a previous analytical solution. For non-circular punch contact, the results can be represented using the equivalence between the contact problem and bi-material fracture mechanics. Finally, the correction factor is found to be a product of that for the circular contact and a multiplicative factor that depends only on the shape of the punch but not on the friction coefficient or Poisson's ratio.
Stiffness of frictional contact of dissimilar elastic solids
NASA Astrophysics Data System (ADS)
Lee, Jin Haeng; Gao, Yanfei; Bower, Allan F.; Xu, Haitao; Pharr, George M.
2018-03-01
The classic Sneddon relationship between the normal contact stiffness and the contact size is valid for axisymmetric, frictionless contact, in which the two contacting solids are approximated by elastic half-spaces. Deviation from this result critically affects the accuracy of the load and displacement sensing nanoindentation techniques. This paper gives a thorough numerical and analytical investigation of corrections needed to the Sneddon solution when finite Coulomb friction exists between an elastic half-space and a flat-ended rigid punch with circular or noncircular shape. Because of linearity of the Coulomb friction, the correction factor is found to be a function of the friction coefficient, Poisson's ratio, and the contact shape, but independent of the contact size. Two issues are of primary concern in the finite element simulations - adequacy of the mesh near the contact edge and the friction implementation methodology. Although the stick or slip zone sizes are quite different from the penalty or Lagrangian methods, the calculated contact stiffnesses are almost the same and may be considerably larger than those in Sneddon's solution. For circular punch contact, the numerical solutions agree remarkably well with a previous analytical solution. For non-circular punch contact, the results can be represented using the equivalence between the contact problem and bi-material fracture mechanics. The correction factor is found to be a product of that for the circular contact and a multiplicative factor that depends only on the shape of the punch but not on the friction coefficient or Poisson's ratio.
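Sneddon's frictionless result for a circular flat-ended punch is S = 2aE_r (contact radius a, reduced modulus E_r), and the frictional correction described above multiplies this by a size-independent factor ≥ 1. A sketch of that relationship; the numeric correction factor below is purely illustrative, not a value from the paper:

```python
def sneddon_stiffness(contact_radius, reduced_modulus):
    """Frictionless Sneddon contact stiffness for a circular flat punch:
    S = 2 * a * E_r."""
    return 2.0 * contact_radius * reduced_modulus

def corrected_stiffness(contact_radius, reduced_modulus, friction_factor=1.0):
    """Apply a multiplicative friction correction factor (>= 1; per the paper
    it depends on the friction coefficient, Poisson's ratio, and punch shape
    but not on contact size; the value used here is illustrative)."""
    return friction_factor * sneddon_stiffness(contact_radius, reduced_modulus)

a = 1e-6     # 1 micron contact radius
E_r = 100e9  # 100 GPa reduced modulus
S0 = sneddon_stiffness(a, E_r)                       # 2e5 N/m, frictionless
S = corrected_stiffness(a, E_r, friction_factor=1.05)  # stiffer with friction
```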
Kanatlı, Ulunay; Aktas, Erdem; Yetkin, Haluk
2016-09-01
Flexible flatfoot, the most prevalent foot deformity in the pediatric population, still has no standardized management strategy; hence some orthopedic surgeons tend to use orthotic devices. The objective of this study is to evaluate whether orthotic shoes affect the natural course of the developing medial longitudinal arch in children diagnosed with moderate flexible flatfoot. Forty-five children (33 boys and 12 girls) with moderate flexible flatfoot were enrolled in this study. They were followed up for 34.6 ± 10.9 months (24-57 months). Patients in group 1 were treated with corrective shoes, whereas group 2 was left untreated. Patients were evaluated according to: general joint laxity, arch index, lateral talo-first metatarsal (TM), talo-horizontal (TH), calcaneal pitch (CP), and lateral and anterior talocalcaneal (TC) angles. Although there was a significant decrease in general laxity in both groups, the decrease in laxity percentage was not significant between groups (p = 0.812). TM, TH, and anterior TC angles were found to be decreased in both groups, whereas there was no difference between groups 1 and 2. The arch index was found to be correlated with TM and TH angles in both groups (p = 0.004, p = 0.013). Corrective shoes were found to be ineffective on the development of foot arches in flexible flatfoot. Therefore, they should be limited only to selected cases. Copyright © 2016 The Japanese Orthopaedic Association. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Goldring, Nicholas
The impending Advanced Photon Source Upgrade (APS-U) will introduce a hard x-ray source that is set to surpass the current APS in brightness and coherence by two to three orders of magnitude. To achieve this, the storage ring light source will be equipped with a multi-bend achromat (MBA) lattice. In order to fully exploit and preserve the integrity of new beams realized by upgraded storage ring components, improved beamline optics must also be introduced. The design process of new optics for the APS-U and other fourth generation synchrotrons involves the challenge of accommodating unprecedented heat loads. This dissertation presents an ex-situ analysis of heat load deformation and the subsequent mechanical bending correction of a 400 mm long, grazing-incidence, H2O side-cooled, reflecting mirror subjected to x-ray beams produced by the APS-U undulator source. Bending correction is measured as the smallest rms slope error, sigma_rms, that can be resolved over a given length of the heat deformed geometry due to mechanical bending. Values of sigma_rms in the <0.1 microrad regime represent a given mirror length over which incident x-ray beams from modern sources can be reflected without significant loss of quality. This study assumes a perfectly flat mirror surface and does not account for finish errors or other contributions to sigma_rms beyond the scope of thermal deformation and elastic bending. The methodology of this research includes finite element analysis (FEA) employed conjointly with an analytical solution for mechanical bending deflection by means of an end couple. Additionally, the study will focus on two beam power density profiles predicted by the APS-U which were created using the software SRCalc.
The profiles account for a 6 GeV electron beam with second-moment widths of 0.058 and 0.011 mm in the x- and y-directions, respectively; the electron beam is passed through a 4.8 m long, 28 mm period APS-U undulator, which produces the x-ray beam incident at a 3 mrad grazing angle on the flat mirror surface for both cases. The first power density profile is the most extreme case created by the undulator at its closest gap, with a critical energy of 3 keV (ky = 2.459); the second profile is generated for the case in which the undulator is tuned to emit at 8 keV (ky = 1.026). The 3 keV case is of particular interest as it represents one of the most intense peak heat loads predicted to be incident on first optics at the APS-U. The FEA results revealed that the deflection due to the 3 keV heat load yields a 10.9 microrad rms slope error over the full mirror length. The projected correction via elastic bending of the substrate yields a 0.10 microrad sigma_rms within the central 300 mm. The FEA also predicts that the 8 keV heat load deflection can be corrected to a sigma_rms of 0.11 microrad within the central 300 mm, from 1.50 microrad over the entire length. Attempts to optimize the end couple to correct over the entire 400 mm mirror length were unable to resolve the heat load deflection rms slope error to within <0.1 microrad for either case. However, if a larger corrected surface is required, a longer mirror can be implemented so as to absorb the heat load of a larger beam than necessary, which can then be cut by an aperture to the desired size and energy range.
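The figure of merit used throughout, sigma_rms, and the effect of an end-couple correction can be sketched numerically. An end couple adds constant curvature, so its contribution to the surface slope is linear in position; the best achievable correction over a chosen window is therefore a least-squares line fit to the slope. The Gaussian bump standing in for the thermal deformation below is an assumption for illustration, not APS-U FEA output.

```python
import numpy as np

# Surface height of a 400 mm mirror with an assumed Gaussian
# thermal bump (illustrative, not the FEA-predicted deformation).
L = 400e-3                                          # mirror length, m
x = np.linspace(-L / 2, L / 2, 4001)                # m
height = 50e-9 * np.exp(-x**2 / (2 * (60e-3)**2))   # m

slope = np.gradient(height, x)                      # rad

# Evaluate over the central 300 mm, as in the corrected cases above.
mask = np.abs(x) <= 150e-3
sigma_raw = np.sqrt(np.mean(slope[mask] ** 2))

# End-couple bending adds constant curvature, i.e. a slope that is
# linear in x; subtract the least-squares line to model the best
# correction achievable with a single end couple.
coeffs = np.polyfit(x[mask], slope[mask], 1)
residual = slope[mask] - np.polyval(coeffs, x[mask])
sigma_corr = np.sqrt(np.mean(residual ** 2))

print(sigma_raw, sigma_corr)
```

Because the fit space contains the zero function, the corrected sigma_rms can never exceed the raw value; how much of the deformation it removes depends on how closely the thermal slope resembles a straight line over the window.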
A new approach for modeling gravitational radiation from the inspiral of two neutron stars
NASA Astrophysics Data System (ADS)
Luke, Stephen A.
In this dissertation, a new method of applying the ADM formalism of general relativity to model the gravitational radiation emitted from the realistic inspiral of a neutron star binary is described. The conformally flat condition (CFC) approach is summarized, and the ADM equations are solved by use of the CFC approach for a neutron star binary. The advantages and limitations of this approach are discussed, and the need for a more accurate improvement to this approach is described. To address this need, a linearized perturbation of the CFC spatial three-metric is then introduced. The general relativistic hydrodynamic equations are then allowed to evolve against this basis under the assumption that the first-order corrections to the hydrodynamic variables are negligible compared to their CFC values. As a first approximation, the linear corrections to the conformal factor, lapse function, and shift vector are also assumed to be small compared to the extrinsic curvature and the three-metric. A boundary matching method is then introduced as a way of computing the gravitational radiation of this relativistic system without use of the multipole expansion employed by earlier applications of the CFC approach. It is assumed that at a location far from the source, the three-metric is accurately described by a linear correction to Minkowski spacetime. The two polarizations of gravitational radiation can then be computed at that point in terms of the linearized correction to the metric. The evolution equations obtained from the linearized perturbative correction to the CFC approach and the method for recovery of the gravity wave signal are then tested by use of a three-dimensional numerical simulation. This code is used to compute the gravity wave signal emitted by a pair of equal-mass neutron stars in quasi-stable circular orbits at a point early in their inspiral phase.
From this simple numerical analysis, the correct general trend of the gravitational radiation is recovered. Comparisons with (5/2) post-Newtonian solutions show a similar gravitational waveform, although inaccuracies remain in this computation. Finally, several areas for improvement and potential future applications of this technique are discussed.
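The far-zone extraction step described above amounts to reading the two polarizations off the transverse-traceless part of the linear correction to the Minkowski metric. For a wave propagating along the z-axis this is the standard identification (written here in conventional TT-gauge notation, which may differ from the dissertation's):

```latex
h_{+} = \tfrac{1}{2}\left(h^{\mathrm{TT}}_{xx} - h^{\mathrm{TT}}_{yy}\right),
\qquad
h_{\times} = h^{\mathrm{TT}}_{xy}.
```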
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elzibak, A; Loblaw, A; Morton, G
Purpose: To investigate the usefulness of metal artifact reduction in CT images of patients with bilateral hip prostheses (BHP) for contouring the prostate, and to determine whether the inclusion of MR images provides additional benefits. Methods: Five patients with BHP were CT scanned using our clinical protocol (140 kV, 300 mAs, 3 mm slices, 1.5 mm increment; Philips Medical Systems, OH). Images were reconstructed with the orthopaedic metal artifact reduction (O-MAR) algorithm. MRI scanning was then performed (1.5 T, GE Healthcare, WI) with a flat table-top (T2-weighted, inherent body coil, FRFSE, 3 mm slices with 0 mm gap). All images were transferred to Pinnacle (Version 9.2, Philips Medical Systems). For each patient, two data sets were produced: one containing the O-MAR-corrected CT images and another containing fused MRI and O-MAR-corrected CT images. Four genitourinary radiation oncologists contoured the prostate of each patient on the O-MAR-corrected CT data. Two weeks later, they contoured the prostate on the fused data set, blinded to all other contours. During each contouring session, the oncologists reported their confidence in the contours (1 = very confident, 3 = not confident) and the contouring difficulty that they experienced (1 = really easy, 4 = very challenging). Prostate volumes were computed from the contours, and the conformity index was used to evaluate inter-observer variability. Results: Larger prostate volumes were found on the O-MAR-corrected CT set than on the fused set (p < 0.05, median = 36.9 cm³ vs. 26.63 cm³). No significant differences were noted in the inter-observer variability between the two data sets (p = 0.3). Contouring difficulty decreased with the addition of MRI (p < 0.05), while the radiation oncologists reported more confidence in their contours when MRI was fused with the O-MAR-corrected CT data (p < 0.05).
Conclusion: This preliminary work demonstrated that, while O-MAR correction to CT images improves visualization of anatomy, the addition of MRI enhanced the oncologists’ confidence in contouring the prostate in patients with BHP.
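The conformity index used above to quantify inter-observer variability is commonly computed as the ratio of the volume common to all observers' contours to the volume enclosed by any of them. A minimal voxel-mask sketch, using toy spherical contours rather than patient data, might be:

```python
import numpy as np

SHAPE = (20, 64, 64)   # toy CT volume: slices, rows, columns

def sphere_mask(center, radius):
    """Boolean mask of voxels inside a sphere (a toy 'contour')."""
    z, y, x = np.indices(SHAPE)
    return ((z - center[0]) ** 2 + (y - center[1]) ** 2
            + (x - center[2]) ** 2) <= radius ** 2

# Four observers' contours of the same structure, slightly shifted
# to mimic inter-observer variability.
contours = [sphere_mask((10, 32 + dy, 32 + dx), 8)
            for dy, dx in [(0, 0), (1, 0), (0, 1), (-1, -1)]]

common = np.logical_and.reduce(contours)   # inside every contour
union = np.logical_or.reduce(contours)     # inside any contour

# Conformity index: 1.0 means perfect agreement, 0.0 means none.
ci = common.sum() / union.sum()
print(round(ci, 3))
```

To convert voxel counts to absolute volumes one would multiply by the voxel dimensions (here, the 3 mm slice thickness times the in-plane pixel area).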
Computer programs to predict induced effects of jets exhausting into a crossflow
NASA Technical Reports Server (NTRS)
Perkins, S. C., Jr.; Mendenhall, M. R.
1984-01-01
This user's manual describes two computer programs developed to predict the induced effects of jets exhausting into a crossflow. Program JETPLT predicts pressures induced on an infinite flat plate by a jet exhausting at angles to the plate, and Program JETBOD, in conjunction with a panel code, predicts pressures induced on a body of revolution by a jet exhausting normal to the surface. Both codes use a potential model of the jet and adjacent surface with empirical corrections for viscous or nonpotential effects. The manual contains a description of the use of both programs, instructions for preparation of input, descriptions of the output, limitations of the codes, and sample cases. In addition, procedures to extend both codes to include additional empirical correlations are described.
Intraocular lens design for treating high myopia based on individual eye model
NASA Astrophysics Data System (ADS)
Wang, Yang; Wang, Zhaoqi; Wang, Yan; Zuo, Tong
2007-02-01
In this research, we first design a phakic intraocular lens (PIOL) based on an individual eye model with the optical design software ZEMAX. The individual PIOL is designed to correct defocus and astigmatism; we then compare the PIOL power calculated from the individual eye model with that from the experiential formula. The PIOL powers obtained from the individual eye model and from the formula are close, but the suggested method is more accurate and offers more functions. The impact of PIOL decentration on the human eye is evaluated, including rotational decentration, flat-axis decentration, steep-axis decentration, and axial movement of the PIOL, which is impossible with the traditional method. To control PIOL decentration errors, we give the limit values of PIOL decentration for the specific eye in this study.
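For context, one widely cited experiential formula for phakic IOL power is van der Heijde's vergence formula. The sketch below is a hedged recollection of that formula; the exact expression, variable conventions, and example values are assumptions for illustration and are not necessarily the formula used in the paper.

```python
def piol_power(K, R, d_mm):
    """Phakic IOL power (D), van der Heijde-style vergence formula.

    K    : corneal power in diopters
    R    : spectacle refraction at the corneal plane in diopters
    d_mm : effective lens position behind the cornea in millimeters

    All of the above conventions are assumptions for illustration.
    """
    n = 1336.0  # aqueous refractive index * 1000, so distances are in mm
    return n / (n / (K + R) - d_mm) - n / (n / K - d_mm)

# Illustrative high-myopia case: K = 43 D, R = -10 D, ELP = 4 mm.
p = piol_power(43.0, -10.0, 4.0)
print(round(p, 2))
```

For a myopic refraction the formula yields a negative (myopia-correcting) lens power, which is the sanity check one would apply before comparing against a ray-traced individual eye model.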
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brundage, Aaron L.; Nicolette, Vernon F.; Donaldson, A. Burl
2005-09-01
A joint experimental and computational study was performed to evaluate the capability of the Sandia Fire Code VULCAN to predict thermocouple response temperature. Thermocouple temperatures recorded by an Inconel-sheathed thermocouple inserted into a near-adiabatic flat flame were predicted by companion VULCAN simulations. The predicted thermocouple temperatures were within 6% of the measured values, with the error primarily attributable to uncertainty in Inconel 600 emissivity and axial conduction losses along the length of the thermocouple assembly. Hence, it is recommended that future thermocouple models (for Inconel-sheathed designs) include a correction for axial conduction. Given the remarkable agreement between experiment and simulation, it is recommended that the analysis be repeated for thermocouples in flames with pollutants such as soot.
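The radiative part of the thermocouple error described above follows from a steady-state bead energy balance, h(T_gas - T_tc) = εσ(T_tc⁴ - T_surr⁴). A hedged sketch is below; all property values are illustrative, and axial conduction, the very term the study recommends adding, is neglected here.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def gas_temperature(T_tc, T_surr, emissivity, h):
    """Correct a thermocouple reading (K) for radiative loss.

    Steady-state balance, axial conduction neglected:
        h * (T_gas - T_tc) = emissivity * SIGMA * (T_tc**4 - T_surr**4)
    """
    q_rad = emissivity * SIGMA * (T_tc ** 4 - T_surr ** 4)
    return T_tc + q_rad / h

# Illustrative values only: a 1600 K reading, 300 K surroundings,
# Inconel emissivity ~0.85 (the uncertain quantity noted above),
# and an assumed convective coefficient h = 500 W m^-2 K^-1.
T_gas = gas_temperature(1600.0, 300.0, 0.85, 500.0)
print(round(T_gas, 1))
```

Because the hot bead radiates to cooler surroundings, the inferred gas temperature always exceeds the thermocouple reading; the size of the correction is sensitive to the assumed emissivity and convective coefficient.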
A 360-degree floating 3D display based on light field regeneration.
Xia, Xinxing; Liu, Xu; Li, Haifeng; Zheng, Zhenrong; Wang, Han; Peng, Yifan; Shen, Weidong
2013-05-06
Using a light field reconstruction technique, we can display a floating 3D scene in the air that is viewable from all 360 degrees around it with correct occlusion effects. A high-frame-rate color projector and a flat light-field scanning screen are used in the system to create the light field of a real 3D scene in the air above the spinning screen. The principle and display performance of this approach are investigated in this paper. The image synthesis method for all surrounding viewpoints is analyzed, and the 3D spatial resolution and angular resolution of the common display zone are employed to evaluate display performance. A prototype was built, and real 3D color animated images were presented vividly. The experimental results verified the capability of this method.
Nonsingular solutions and instabilities in Einstein-scalar-Gauss-Bonnet cosmology
NASA Astrophysics Data System (ADS)
Sberna, Laura; Pani, Paolo
2017-12-01
It is generically believed that higher-order curvature corrections to the Einstein-Hilbert action might cure the curvature singularities that plague general relativity. Here we consider Einstein-scalar-Gauss-Bonnet gravity, the only four-dimensional, ghost-free theory with quadratic curvature terms. For any choice of the coupling function and of the scalar potential, we show that the theory does not allow for bouncing solutions in the flat and open Friedmann universe. For the case of a closed universe, using a reverse-engineering method, we explicitly provide a bouncing solution which is nevertheless linearly unstable in the scalar gravitational sector. Moreover, we show that the expanding, singularity-free, early-time cosmologies allowed in the theory are unstable. These results rely only on analyticity and finiteness of cosmological variables at early times.
Correlation function of the luminosity distances
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biern, Sang Gyu; Yoo, Jaiyul, E-mail: sgbiern@physik.uzh.ch, E-mail: jyoo@physik.uzh.ch
We present the correlation function of the luminosity distances in a flat ΛCDM universe. Decomposing the luminosity distance fluctuation into the velocity, the gravitational potential, and the lensing contributions in linear perturbation theory, we study their individual contributions to the correlation function. The lensing contribution is important at large redshift (z ≳ 0.5) but only for small angular separation (θ ≲ 3°), while the velocity contribution dominates over the other contributions at low redshift or at larger separation. However, the gravitational potential contribution is always subdominant on all scales if the correct gauge-invariant expression is used. The correlation function of the luminosity distances depends significantly on the matter content, especially for the lensing contribution, thus providing a novel tool for estimating cosmological parameters.
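The decomposition studied above can be summarized schematically (in our notation, which may differ from the paper's): the fractional luminosity distance fluctuation splits into velocity, gravitational potential, and lensing terms, and the correlation function collects their auto- and cross-correlations:

```latex
\frac{\delta D_L(\hat{n}, z)}{\bar{D}_L(z)}
  = \delta_{v} + \delta_{\phi} + \delta_{\kappa},
\qquad
\xi_{D}(\theta; z_1, z_2)
  = \big\langle \delta D_L(\hat{n}_1, z_1)\,
                \delta D_L(\hat{n}_2, z_2) \big\rangle
  = \sum_{i,j \,\in\, \{v,\phi,\kappa\}}
    \langle \delta_i \, \delta_j \rangle .
```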