DOE Office of Scientific and Technical Information (OSTI.GOV)
Lucas, Robert G.; Taylor, Zachary T.; Mendon, Vrushali V.
2012-04-01
The 2009 and 2012 International Energy Conservation Codes (IECC) yield positive benefits for Minnesota homeowners. Moving to either the 2009 or 2012 IECC from the current Minnesota Residential Energy Code is cost-effective over a 30-year life cycle. On average, Minnesota homeowners will save $1,277 over 30 years under the 2009 IECC, with savings still higher at $9,873 with the 2012 IECC. After accounting for upfront costs and additional costs financed in the mortgage, homeowners should see net positive cash flows (i.e., cumulative savings exceeding cumulative cash outlays) in 3 years for the 2009 IECC and 1 year for the 2012 IECC. Average annual energy savings are $122 for the 2009 IECC and $669 for the 2012 IECC.
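The payback figures quoted in these state analyses follow from a simple cumulative cash-flow comparison. The sketch below illustrates that arithmetic under a fixed-rate mortgage assumption; the incremental cost, down payment, interest rate, and savings figures are hypothetical placeholders, not PNNL's actual inputs.

```python
# Illustrative payback sketch (not from the report). All inputs are
# hypothetical placeholders; PNNL's actual assumptions are not reproduced.

def payback_year(extra_cost, down_payment_rate, annual_energy_savings,
                 mortgage_rate=0.05, term_years=30):
    """First year in which cumulative savings exceed cumulative cash outlays."""
    loan = extra_cost * (1.0 - down_payment_rate)
    # Standard fixed-rate annuity payment on the financed portion.
    annual_payment = loan * mortgage_rate / (1.0 - (1.0 + mortgage_rate) ** -term_years)
    outlays = extra_cost * down_payment_rate   # up-front cash at purchase
    savings = 0.0
    for year in range(1, term_years + 1):
        outlays += annual_payment
        savings += annual_energy_savings
        if savings >= outlays:
            return year
    return None

# Hypothetical example: $2,000 incremental cost, 10% down, $600/yr energy savings.
print(payback_year(2000.0, 0.10, 600.0))   # -> 1
```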
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lucas, Robert G.; Taylor, Zachary T.; Mendon, Vrushali V.
2012-07-03
The 2012 International Energy Conservation Code (IECC) yields positive benefits for Michigan homeowners. Moving to the 2012 IECC from the Michigan Uniform Energy Code is cost-effective over a 30-year life cycle. On average, Michigan homeowners will save $10,081 with the 2012 IECC. Each year, the reduction to energy bills will significantly exceed increased mortgage costs. After accounting for up-front costs and additional costs financed in the mortgage, homeowners should see net positive cash flows (i.e., cumulative savings exceeding cumulative cash outlays) in 1 year for the 2012 IECC. Average annual energy savings are $604 for the 2012 IECC.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lucas, Robert G.; Taylor, Zachary T.; Mendon, Vrushali V.
2012-04-01
The 2009 and 2012 International Energy Conservation Codes (IECC) yield positive benefits for Wisconsin homeowners. Moving to either the 2009 or 2012 IECC from the current Wisconsin state code is cost-effective over a 30-year life cycle. On average, Wisconsin homeowners will save $2,484 over 30 years under the 2009 IECC, with savings still higher at $10,733 with the 2012 IECC. After accounting for upfront costs and additional costs financed in the mortgage, homeowners should see net positive cash flows (i.e., cumulative savings exceeding cumulative cash outlays) in 1 year for both the 2009 and 2012 IECC. Average annual energy savings are $149 for the 2009 IECC and $672 for the 2012 IECC.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lucas, Robert G.; Taylor, Zachary T.; Mendon, Vrushali V.
2012-06-15
The 2012 International Energy Conservation Code (IECC) yields positive benefits for Iowa homeowners. Moving to the 2012 IECC from the 2009 IECC is cost-effective over a 30-year life cycle. On average, Iowa homeowners will save $7,573 with the 2012 IECC. After accounting for upfront costs and additional costs financed in the mortgage, homeowners should see net positive cash flows (i.e., cumulative savings exceeding cumulative cash outlays) in 1 year for the 2012 IECC. Average annual energy savings are $454 for the 2012 IECC.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lucas, Robert G.; Taylor, Zachary T.; Mendon, Vrushali V.
2012-06-15
The 2012 International Energy Conservation Code (IECC) yields positive benefits for Texas homeowners. Moving to the 2012 IECC from the 2009 IECC is cost-effective over a 30-year life cycle. On average, Texas homeowners will save $3,456 with the 2012 IECC. After accounting for upfront costs and additional costs financed in the mortgage, homeowners should see net positive cash flows (i.e., cumulative savings exceeding cumulative cash outlays) in 2 years for the 2012 IECC. Average annual energy savings are $259 for the 2012 IECC.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lucas, Robert G.; Taylor, Zachary T.; Mendon, Vrushali V.
2012-07-03
The 2012 International Energy Conservation Code (IECC) yields positive benefits for Ohio homeowners. Moving to the 2012 IECC from the 2009 IECC is cost-effective over a 30-year life cycle. On average, Ohio homeowners will save $5,151 with the 2012 IECC. Each year, the reduction to energy bills will significantly exceed increased mortgage costs. After accounting for up-front costs and additional costs financed in the mortgage, homeowners should see net positive cash flows (i.e., cumulative savings exceeding cumulative cash outlays) in 1 year for the 2012 IECC. Average annual energy savings are $330 for the 2012 IECC.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lucas, Robert G.; Taylor, Zachary T.; Mendon, Vrushali V.
2012-06-15
The 2009 and 2012 International Energy Conservation Codes (IECC) yield positive benefits for Alabama homeowners. Moving to either the 2009 or 2012 IECC from the 2006 IECC is cost-effective over a 30-year life cycle. On average, Alabama homeowners will save $2,117 over 30 years under the 2009 IECC, with savings still higher at $6,182 with the 2012 IECC. After accounting for upfront costs and additional costs financed in the mortgage, homeowners should see net positive cash flows (i.e., cumulative savings exceeding cumulative cash outlays) in 2 years for both the 2009 and 2012 IECC. Average annual energy savings are $168 for the 2009 IECC and $462 for the 2012 IECC.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lucas, Robert G.; Taylor, Zachary T.; Mendon, Vrushali V.
2012-04-01
The 2009 and 2012 International Energy Conservation Codes (IECC) yield positive benefits for Arizona homeowners. Moving to either the 2009 or 2012 IECC from the 2006 IECC is cost-effective over a 30-year life cycle. On average, Arizona homeowners will save $3,245 over 30 years under the 2009 IECC, with savings still higher at $6,550 with the 2012 IECC. After accounting for upfront costs and additional costs financed in the mortgage, homeowners should see net positive cash flows (i.e., cumulative savings exceeding cumulative cash outlays) in 1 year for the 2009 IECC and 2 years for the 2012 IECC. Average annual energy savings are $231 for the 2009 IECC and $486 for the 2012 IECC.
Cost-Effectiveness Analysis of the Residential Provisions of the 2015 IECC for Idaho
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendon, Vrushali V.; Zhao, Mingjie; Taylor, Zachary T.
The 2015 IECC provides cost-effective savings for residential buildings in Idaho. Moving to the 2015 IECC from the 2015 Idaho State Code base code is cost-effective for residential buildings in all climate zones in Idaho.
Cost-Effectiveness Analysis of the Residential Provisions of the 2015 IECC for Montana
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendon, Vrushali V.; Zhao, Mingjie; Taylor, Zachary T.
2016-02-15
The 2015 IECC provides cost-effective savings for residential buildings in Montana. Moving to the 2015 IECC from the 2014 Montana State Code base code is cost-effective for residential buildings in all climate zones in Montana.
Cost-Effectiveness Analysis of the Residential Provisions of the 2015 IECC for Iowa
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendon, Vrushali V.; Zhao, Mingjie; Taylor, Zachary T.
2016-02-15
The 2015 IECC provides cost-effective savings for residential buildings in Iowa. Moving to the 2015 IECC from the 2014 Iowa State Code base code is cost-effective for residential buildings in all climate zones in Iowa.
Cost-Effectiveness Analysis of the Residential Provisions of the 2015 IECC for Utah
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendon, Vrushali V.; Zhao, Mingjie; Taylor, Zachary T.
The 2015 IECC provides cost-effective savings for residential buildings in Utah. Moving to the 2015 IECC from the 2012 Utah State Code base code is cost-effective for residential buildings in all climate zones in Utah.
Cost-Effectiveness Analysis of the Residential Provisions of the 2015 IECC for New Hampshire
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendon, Vrushali V.; Zhao, Mingjie; Taylor, Zachary T.
2016-02-15
The 2015 IECC provides cost-effective savings for residential buildings in New Hampshire. Moving to the 2015 IECC from the 2010 New Hampshire State Code base code is cost-effective for residential buildings in all climate zones in New Hampshire.
Cost-Effectiveness Analysis of the Residential Provisions of the 2015 IECC for North Carolina
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendon, Vrushali V.; Zhao, Mingjie; Taylor, Zachary T.
2016-02-15
The 2015 IECC provides cost-effective savings for residential buildings in North Carolina. Moving to the 2015 IECC from the 2012 North Carolina State Code base code is cost-effective for residential buildings in all climate zones in North Carolina.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendon, Vrushali V.; Zhao, Mingjie; Taylor, Zachary T.
2016-02-15
The 2015 IECC provides cost-effective savings for residential buildings in the District of Columbia. Moving to the 2015 IECC from the 2013 Washington DC Code base code is cost-effective for residential buildings in all climate zones in the District of Columbia.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lucas, Robert G.; Mendon, Vrushali V.; Goel, Supriya
2012-06-01
The 2009 and 2012 International Energy Conservation Codes (IECC) require a substantial improvement in energy efficiency compared to the 2006 IECC. This report presents the average energy savings for a typical new residential dwelling unit built to the 2009 and 2012 IECC, relative to the 2006 IECC. Results are reported for each of the eight IECC climate zones and as a national average.
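One plausible way to roll per-zone results into a national average, as the report mentions, is a construction-weighted mean. The sketch below assumes that approach; both the per-zone savings and the zone weights are invented for illustration and are not the report's values.

```python
# Hypothetical construction-weighted national average of per-zone savings.
zone_savings_pct = {1: 8.0, 2: 10.0, 3: 12.0, 4: 15.0,
                    5: 18.0, 6: 20.0, 7: 22.0, 8: 24.0}    # invented values
zone_weight = {1: 0.02, 2: 0.17, 3: 0.22, 4: 0.20,
               5: 0.25, 6: 0.11, 7: 0.02, 8: 0.01}         # invented shares, sum to 1
national_avg = sum(zone_savings_pct[z] * zone_weight[z] for z in zone_savings_pct)
print(f"National average savings: {national_avg:.1f}%")
```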
Cost-Effectiveness Analysis of the Residential Provisions of the 2015 IECC for Arkansas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendon, Vrushali V.; Zhao, Mingjie; Taylor, Zachary T.
2016-02-15
The 2015 IECC provides cost-effective savings for residential buildings in Arkansas. Moving to the 2015 IECC from the 2009 IECC base code is cost-effective for residential buildings in all climate zones in Arkansas.
Cost-Effectiveness Analysis of the Residential Provisions of the 2015 IECC for Texas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendon, Vrushali V.; Zhao, Mingjie; Taylor, Zachary T.
The 2015 IECC provides cost-effective savings for residential buildings in Texas. Moving to the 2015 IECC from the 2009 IECC base code is cost-effective for residential buildings in all climate zones in Texas.
Cost-Effectiveness Analysis of the Residential Provisions of the 2015 IECC for Minnesota
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendon, Vrushali V.; Zhao, Mingjie; Taylor, Zachary T.
2016-02-15
The 2015 IECC provides cost-effective savings for residential buildings in Minnesota. Moving to the 2015 IECC from the 2012 IECC base code is cost-effective for residential buildings in all climate zones in Minnesota.
Cost-Effectiveness Analysis of the Residential Provisions of the 2015 IECC for Indiana
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendon, Vrushali V.; Zhao, Mingjie; Taylor, Zachary T.
The 2015 IECC provides cost-effective savings for residential buildings in Indiana. Moving to the 2015 IECC from the 2009 IECC base code is cost-effective for residential buildings in all climate zones in Indiana.
Cost-Effectiveness Analysis of the Residential Provisions of the 2015 IECC for Oklahoma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendon, Vrushali V.; Zhao, Mingjie; Taylor, Zachary T.
2016-02-15
The 2015 IECC provides cost-effective savings for residential buildings in Oklahoma. Moving to the 2015 IECC from the 2009 IECC base code is cost-effective for residential buildings in all climate zones in Oklahoma.
Cost-Effectiveness Analysis of the Residential Provisions of the 2015 IECC for Florida
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendon, Vrushali V.; Zhao, Mingjie; Taylor, Zachary T.
2016-02-15
The 2015 IECC provides cost-effective savings for residential buildings in Florida. Moving to the 2015 IECC from the 2012 IECC base code is cost-effective for residential buildings in all climate zones in Florida.
Cost-Effectiveness Analysis of the Residential Provisions of the 2015 IECC for Maine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendon, Vrushali V.; Zhao, Mingjie; Taylor, Zachary T.
2016-02-15
The 2015 IECC provides cost-effective savings for residential buildings in Maine. Moving to the 2015 IECC from the 2009 IECC base code is cost-effective for residential buildings in all climate zones in Maine.
Cost-Effectiveness Analysis of the Residential Provisions of the 2015 IECC for Vermont
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendon, Vrushali V.; Zhao, Mingjie; Taylor, Zachary T.
2016-02-15
The 2015 IECC provides cost-effective savings for residential buildings in Vermont. Moving to the 2015 IECC from the 2012 IECC base code is cost-effective for residential buildings in all climate zones in Vermont.
Cost-Effectiveness Analysis of the Residential Provisions of the 2015 IECC for Louisiana
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendon, Vrushali V.; Zhao, Mingjie; Taylor, Zachary T.
2016-02-15
The 2015 IECC provides cost-effective savings for residential buildings in Louisiana. Moving to the 2015 IECC from the 2009 IECC base code is cost-effective for residential buildings in all climate zones in Louisiana.
Cost-Effectiveness Analysis of the Residential Provisions of the 2015 IECC for Alabama
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendon, Vrushali V.; Zhao, Mingjie; Taylor, Zachary T.
2016-02-15
The 2015 IECC provides cost-effective savings for residential buildings in Alabama. Moving to the 2015 IECC from the 2009 IECC base code is cost-effective for residential buildings in all climate zones in Alabama.
Cost-Effectiveness Analysis of the Residential Provisions of the 2015 IECC for Colorado
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendon, Vrushali V.; Zhao, Mingjie; Taylor, Zachary T.
The 2015 IECC provides cost-effective savings for residential buildings in Colorado. Moving to the 2015 IECC from the 2009 IECC base code is cost-effective for residential buildings in all climate zones in Colorado.
Cost-Effectiveness Analysis of the Residential Provisions of the 2015 IECC for Michigan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendon, Vrushali V.; Zhao, Mingjie; Taylor, Zachary T.
The 2015 IECC provides cost-effective savings for residential buildings in Michigan. Moving to the 2015 IECC from the 2009 IECC base code is cost-effective for residential buildings in all climate zones in Michigan.
Cost-Effectiveness Analysis of the Residential Provisions of the 2015 IECC for Maryland
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendon, Vrushali V.; Zhao, Mingjie; Taylor, Zachary T.
2016-02-15
The 2015 IECC provides cost-effective savings for residential buildings in Maryland. Moving to the 2015 IECC from the 2012 IECC base code is cost-effective for residential buildings in all climate zones in Maryland.
Cost-Effectiveness Analysis of the Residential Provisions of the 2015 IECC for Pennsylvania
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendon, Vrushali V.; Zhao, Mingjie; Taylor, Zachary T.
2016-02-15
The 2015 IECC provides cost-effective savings for residential buildings in Pennsylvania. Moving to the 2015 IECC from the 2009 IECC base code is cost-effective for residential buildings in all climate zones in Pennsylvania.
Cost-Effectiveness Analysis of the Residential Provisions of the 2015 IECC for Massachusetts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendon, Vrushali V.; Zhao, Mingjie; Taylor, Zachary T.
2016-02-15
The 2015 IECC provides cost-effective savings for residential buildings in Massachusetts. Moving to the 2015 IECC from the 2012 IECC base code is cost-effective for residential buildings in all climate zones in Massachusetts.
Cost-Effectiveness Analysis of the Residential Provisions of the 2015 IECC for Wisconsin
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendon, Vrushali V.; Zhao, Mingjie; Taylor, Zachary T.
2016-02-15
The 2015 IECC provides cost-effective savings for residential buildings in Wisconsin. Moving to the 2015 IECC from the 2006 IECC base code is cost-effective for residential buildings in all climate zones in Wisconsin.
Cost-Effectiveness Analysis of the Residential Provisions of the 2015 IECC for Ohio
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendon, Vrushali V.; Zhao, Mingjie; Taylor, Zachary T.
2016-02-15
The 2015 IECC provides cost-effective savings for residential buildings in Ohio. Moving to the 2015 IECC from the 2009 IECC base code is cost-effective for residential buildings in all climate zones in Ohio.
Cost-Effectiveness Analysis of the Residential Provisions of the 2015 IECC for Illinois
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendon, Vrushali V.; Zhao, Mingjie; Taylor, Zachary T.
The 2015 IECC provides cost-effective savings for residential buildings in Illinois. Moving to the 2015 IECC from the 2012 IECC base code is cost-effective for residential buildings in all climate zones in Illinois.
Cost-Effectiveness Analysis of the Residential Provisions of the 2015 IECC for Rhode Island
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendon, Vrushali V.; Zhao, Mingjie; Taylor, Zachary T.
The 2015 IECC provides cost-effective savings for residential buildings in Rhode Island. Moving to the 2015 IECC from the 2012 IECC base code is cost-effective for residential buildings in all climate zones in Rhode Island.
Cost-Effectiveness Analysis of the Residential Provisions of the 2015 IECC for South Carolina
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendon, Vrushali V.; Zhao, Mingjie; Taylor, Zachary T.
2016-02-15
The 2015 IECC provides cost-effective savings for residential buildings in South Carolina. Moving to the 2015 IECC from the 2009 IECC base code is cost-effective for residential buildings in all climate zones in South Carolina.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendon, Vrushali V.; Zhao, Mingjie; Taylor, Zachary T.
The 2015 IECC provides cost-effective savings for residential buildings in Arizona. Moving to the 2015 IECC from the 2009 IECC base code is cost-effective for residential buildings in all climate zones in Arizona.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendon, Vrushali V.; Zhao, Mingjie; Taylor, Zachary T.
The 2015 IECC provides cost-effective savings for residential buildings in Hawaii. Moving to the 2015 IECC from the 2006 IECC base code is cost-effective for residential buildings in all climate zones in Hawaii.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendon, Vrushali V.; Zhao, Mingjie; Taylor, Zachary T.
The 2015 IECC provides cost-effective savings for residential buildings in Connecticut. Moving to the 2015 IECC from the 2009 IECC base code is cost-effective for residential buildings in all climate zones in Connecticut.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendon, Vrushali V.; Zhao, Mingjie; Taylor, Zachary T.
The 2015 IECC provides cost-effective savings for residential buildings in New York. Moving to the 2015 IECC from the 2009 IECC base code is cost-effective for residential buildings in all climate zones in New York.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendon, Vrushali V.; Zhao, Mingjie; Taylor, Zachary T.
The 2015 IECC provides cost-effective savings for residential buildings in New Mexico. Moving to the 2015 IECC from the 2009 IECC base code is cost-effective for residential buildings in all climate zones in New Mexico.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Yunzhi; Gowri, Krishnan
2011-02-28
This report summarizes code requirements and energy savings of commercial buildings in Climate Zone 2B built to the 2009 IECC and ASHRAE Standard 90.1-2007 when compared to the 2003 IECC and the 2006 IECC. In general, the 2009 IECC and ASHRAE Standard 90.1-2007 have higher insulation requirements for exterior walls, roof, and windows and have higher efficiency requirements for HVAC equipment. HVAC equipment efficiency requirements are governed by the National Appliance Energy Conservation Act of 1987 (NAECA) and are applicable irrespective of the IECC version adopted. The energy analysis results show that commercial buildings meeting the 2009 IECC requirements save 4.4% to 9.5% in site energy and 4.1% to 9.9% in energy cost when compared to the 2006 IECC, and save 10.6% to 29.4% in site energy and 10.3% to 29.3% in energy cost when compared to the 2003 IECC. A similar analysis comparing ASHRAE Standard 90.1-2007 requirements to the 2006 IECC shows that the energy savings are in the 4.0% to 10.7% range for multi-family and retail buildings, but less than 2% for office buildings. Further comparison of ASHRAE Standard 90.1-2007 requirements to the 2003 IECC shows site energy savings in the range of 7.7% to 30.6% and energy cost savings ranging from 7.9% to 30.3%. Both the 2009 IECC and ASHRAE Standard 90.1-2007 have the potential to save energy by comparable levels for most building types.
Challenges of Achieving 2012 IECC Air Sealing Requirements in Multifamily Dwellings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klocke, S.; Faakye, O.; Puttagunta, S.
2014-10-01
While previous versions of the International Energy Conservation Code (IECC) have included provisions to improve the air tightness of dwellings, for the first time, the 2012 IECC mandates compliance verification through blower door testing. Simply completing the Air Barrier and Insulation Installation checklist through visual inspection is no longer sufficient by itself. In addition, the 2012 IECC mandates a significantly stricter air sealing requirement. In Climate Zones 3 through 8, air leakage may not exceed 3 ACH50, a significant reduction from the 2009 IECC requirement of 7 ACH50. This requirement applies to all residential buildings, including low-rise multifamily dwellings. While this air leakage rate requirement is an important component of achieving an efficient building thermal envelope, the code language currently doesn't explicitly address differences between single-family and multifamily applications. In addition, the 2012 IECC does not provide an option to sample dwellings in larger multifamily buildings, so compliance would have to be verified on every unit. With enforcement of the 2012 IECC air leakage requirements on the horizon, several of the Consortium for Advanced Residential Building's (CARB's) multifamily builder partners are evaluating how best to comply with this requirement. Builders are not sure whether it is more practical or beneficial to simply pay for guarded testing or to revise their air sealing strategies to improve compartmentalization to comply with code requirements based on unguarded blower door testing. This report summarizes CARB's research assessing the feasibility of meeting the 2012 IECC air leakage requirements in three multifamily buildings.
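For reference, the ACH50 metric that the 2012 IECC caps at 3 is derived from a blower door reading and the dwelling's conditioned volume. A minimal sketch of that arithmetic follows; the apartment geometry and test reading are hypothetical.

```python
# Blower door arithmetic behind the ACH50 limit: air changes per hour at
# 50 Pa = (CFM50 * 60 min/h) / conditioned volume in cubic feet.
# Geometry and reading below are assumptions for illustration.

def ach50(cfm50, volume_ft3):
    return cfm50 * 60.0 / volume_ft3

unit_volume = 800 * 8.5      # assumed 800 ft2 apartment with 8.5 ft ceilings
measured_cfm50 = 450.0       # assumed unguarded blower door reading
rate = ach50(measured_cfm50, unit_volume)
print(f"{rate:.2f} ACH50 -> {'pass' if rate <= 3.0 else 'fail'} (2012 IECC, CZ 3-8)")
```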
75 FR 54131 - Updating State Residential Building Energy Efficiency Codes
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-03
... and 95 degrees F for heating (for heat pumps), the 2000 IECC insulation requirement for supply ducts in unconditioned spaces is R-5 (minimum) for nearly all cases. Insulation required by the 2000 IECC... Duct Insulation Requirements Duct insulation requirements generally increased in the 2003 IECC. The...
Energy and Energy Cost Savings Analysis of the 2015 IECC for Commercial Buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jian; Xie, YuLong; Athalye, Rahul A.
As required by statute (42 USC 6833), DOE recently issued a determination that ANSI/ASHRAE/IES Standard 90.1-2013 would achieve greater energy efficiency in buildings subject to the code compared to the 2010 edition of the standard. Pacific Northwest National Laboratory (PNNL) conducted an energy savings analysis for Standard 90.1-2013 in support of its determination. While Standard 90.1 is the model energy standard for commercial and multi-family residential buildings over three floors (42 USC 6833), many states have historically adopted the International Energy Conservation Code (IECC) for both residential and commercial buildings. This report provides an assessment as to whether buildings constructed to the commercial energy efficiency provisions of the 2015 IECC would save energy and energy costs as compared to the 2012 IECC. PNNL also compared the energy performance of the 2015 IECC with the corresponding Standard 90.1-2013. The goal of this analysis is to help states and local jurisdictions make informed decisions regarding model code adoption.
The Marriage of Residential Energy Codes and Rating Systems: Conflict Resolution or Just Conflict?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, Zachary T.; Mendon, Vrushali V.
2014-08-21
After three decades of coexistence at a distance, model residential energy codes and residential energy rating systems have come together in the 2015 International Energy Conservation Code. At the International Code Council's October 2013 Public Comment Hearing, a new compliance path based on an Energy Rating Index was added to the IECC. Although not specifically named in the code, RESNET's HERS rating system is the likely candidate Index for most jurisdictions. While HERS has been a mainstay in various beyond-code programs for many years, its direct incorporation into the most popular model energy code raises questions about the equivalence of a HERS-based compliance path and the traditional IECC performance compliance path, especially because the two approaches use different efficiency metrics, are governed by different simulation rules, and have different scopes with regard to energy-impacting house features. A detailed simulation analysis of more than 15,000 house configurations reveals a very large range of HERS Index values that achieve equivalence with the IECC's performance path. This paper summarizes the results of that analysis and evaluates those results against the specific Energy Rating Index values required by the 2015 IECC. Based on the home characteristics most likely to result in disparities between HERS-based compliance and performance path compliance, the paper discusses potential impacts on the compliance process, state and local adoption of the new code, energy efficiency in the next generation of homes subject to this new code, and the future evolution of model code formats.
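In broad strokes, an Energy Rating Index expresses a rated home's energy use relative to a reference home, scaled so the reference scores 100 and lower is better. The sketch below assumes that simplified definition; the actual HERS method weights end uses and normalizes differently, and the MBtu values are invented.

```python
# Simplified rating-index sketch: rated-home energy relative to a reference
# home, reference scoring 100 (lower is better). This is not the full HERS
# calculation; the annual-energy figures are hypothetical.

def rating_index(rated_mbtu, reference_mbtu):
    return 100.0 * rated_mbtu / reference_mbtu

# A proposed home using 62 MBtu/yr against an 85 MBtu/yr reference design:
print(f"Index = {rating_index(62.0, 85.0):.0f}")   # ~73, compared against the code's ERI cap
```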
24 CFR 905.312 - Design and construction.
Code of Federal Regulations, 2014 CFR
2014-04-01
... constructed in compliance with: (1) A national building code, such as those developed by the International Code Council or the National Fire Protection Association; and the IECC or ASHRAE 90.1-2010 (both... a successor energy code or standard that has been adopted by HUD pursuant to 42 U.S.C. 12709 or...
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 3 2014-01-01 2014-01-01 false Definitions. 435.2 Section 435.2 Energy DEPARTMENT OF... Mandatory Energy Efficiency Standards for Federal Low-Rise Residential Buildings. § 435.2 Definitions. For... Loan Mortgage Corporation. ICC means International Code Council. IECC means International Energy...
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 3 2012-01-01 2012-01-01 false Definitions. 435.2 Section 435.2 Energy DEPARTMENT OF... Mandatory Energy Efficiency Standards for Federal Low-Rise Residential Buildings. § 435.2 Definitions. For... Loan Mortgage Corporation. ICC means International Code Council. IECC means International Energy...
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 3 2013-01-01 2013-01-01 false Definitions. 435.2 Section 435.2 Energy DEPARTMENT OF... Mandatory Energy Efficiency Standards for Federal Low-Rise Residential Buildings. § 435.2 Definitions. For... Loan Mortgage Corporation. ICC means International Code Council. IECC means International Energy...
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Changes in the International Energy Conservation Code (IECC) from 2009 to 2012 have made exterior rigid insulation part of the prescriptive code requirements. With more jurisdictions adopting the 2012 IECC, builders will be required to incorporate exterior insulation in the construction of their exterior wall assemblies. For thick layers of exterior insulation (levels greater than 1.5 inches), wood furring strips attached through the insulation back to the structure have been used by many contractors and designers as a means to provide a convenient cladding attachment location. However, there has been resistance to its widespread implementation due to a lack of research and understanding of the mechanisms involved and potential creep effects of the assembly under the sustained dead load of a cladding. This research, conducted by Building Science Corporation, evaluated the system mechanics and long-term performance of this technique.
Technical Support Document for Version 3.9.1 of the COMcheck Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartlett, Rosemarie; Connell, Linda M.; Gowri, Krishnan
2012-09-01
COMcheck provides an optional way to demonstrate compliance with commercial and high-rise residential building energy codes. Commercial buildings include all use groups except single-family and multifamily buildings not over three stories in height. COMcheck was originally based on ANSI/ASHRAE/IES Standard 90.1-1989 (Standard 90.1-1989) requirements and is intended for use with various codes based on Standard 90.1, including the Codification of ASHRAE/IES Standard 90.1-1989 (90.1-1989 Code) (ASHRAE 1989a, 1993b) and ASHRAE/IESNA Standard 90.1-1999 (Standard 90.1-1999). This includes jurisdictions that have adopted the 90.1-1989 Code, Standard 90.1-1989, Standard 90.1-1999, or their own code based on one of these. We view Standard 90.1-1989 and the 90.1-1989 Code as having equivalent technical content and have used both as source documents in developing COMcheck. This technical support document (TSD) is designed to explain the technical basis for the COMcheck software as originally developed based on Standard 90.1-1989. Documentation for other national model codes and standards and specific state energy codes supported in COMcheck has been added to this report as appendices. These appendices are intended to provide technical documentation for features specific to the supported codes and for any changes made for state-specific codes that differ from the standard features that support compliance with the national model codes and standards. Beginning with COMcheck version 3.8.0, support for 90.1-1989, 90.1-1999, and the 1998 IECC is no longer included, and beginning with version 3.9.0, support for the 2000 and 2001 IECC is no longer included; those sections remain in this document for reference purposes.
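Envelope compliance in trade-off tools of this kind is commonly described as a total-UA comparison: the proposed envelope passes when the sum of U-factor times area over all assemblies does not exceed the code-maximum total. The sketch below assumes that simplified view of the procedure; the assemblies and U-factors are hypothetical.

```python
# Total-UA trade-off sketch: pass when sum(U * area) for the proposed
# envelope does not exceed the code-maximum total. Assemblies are invented.

def total_ua(assemblies):
    """assemblies: iterable of (U-factor in Btu/h-ft2-F, area in ft2) pairs."""
    return sum(u * area for u, area in assemblies)

code_max = total_ua([(0.064, 2400), (0.032, 1200), (0.35, 300)])   # baseline wall/roof/windows
proposed = total_ua([(0.058, 2400), (0.030, 1200), (0.38, 300)])   # as-designed envelope
print("PASS" if proposed <= code_max else "FAIL")
```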
Effects from the Reduction of Air Leakage on Energy and Durability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hun, Diana E.; Childs, Phillip W.; Atchley, Jerald Allen
2014-01-01
Buildings are responsible for approximately 40% of the energy used in the US. Codes have been increasing building envelope requirements, and in particular those related to improving airtightness, in order to reduce energy consumption. The main goal of this research was to evaluate the effects from reductions in air leakage on energy loads and material durability. To this end, we focused on the airtightness and thermal resistance criteria set by the 2012 International Energy Conservation Code (IECC).
Methodology for Evaluating Cost-effectiveness of Commercial Energy Code Changes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Liu, Bing
This document lays out the U.S. Department of Energy's (DOE's) method for evaluating the cost-effectiveness of energy code proposals and editions. The evaluation is applied to provisions or editions of the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) Standard 90.1 and the International Energy Conservation Code (IECC). The method follows standard life-cycle cost (LCC) economic analysis procedures. Cost-effectiveness evaluation requires three steps: 1) evaluating the energy and energy cost savings of code changes, 2) evaluating the incremental and replacement costs related to the changes, and 3) determining the cost-effectiveness of energy code changes based on those costs and savings over time.
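A minimal sketch of the three-step LCC test described above, assuming a standard net-present-value formulation; the discount rate, study period, and dollar figures are hypothetical placeholders, not DOE's published parameters.

```python
# Three-step LCC sketch: (1) annual energy cost savings, (2) incremental
# first cost plus replacement costs, (3) net present value over the study
# period. Rate, period, and dollar figures are hypothetical.

def net_present_value(annual_savings, first_cost, replacements,
                      discount_rate=0.03, years=30):
    """replacements: {year: cost} for components replaced mid-period."""
    npv = -first_cost
    for year in range(1, years + 1):
        cash_flow = annual_savings - replacements.get(year, 0.0)
        npv += cash_flow / (1.0 + discount_rate) ** year
    return npv

# Hypothetical code change: $1,500 first cost, $180/yr savings, $250 replacement in year 15.
npv = net_present_value(180.0, 1500.0, {15: 250.0})
print(f"NPV = ${npv:,.0f} -> {'cost-effective' if npv > 0 else 'not cost-effective'}")
```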
Initial and Long-Term Movement of Cladding Installed Over Exterior Rigid Insulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Peter
Changes in the International Energy Conservation Code (IECC) from 2009 to 2012 have made exterior rigid insulation part of the prescriptive code requirements. With more jurisdictions adopting the 2012 IECC, builders will be required to incorporate exterior insulation in the construction of their exterior wall assemblies. For thick layers of exterior insulation (levels greater than 1.5 inches), wood furring strips attached through the insulation back to the structure have been used by many contractors and designers as a means to provide a convenient cladding attachment location. This research was an extension of previous research conducted by Building Science Corporation in 2011 and 2012. Each year, the understanding of the system's discrete load component interactions, as well as the impacts of environmental loading, has increased. The focus of the research was to examine more closely the impacts of screw fastener bending on total system capacity and the effects of thermal expansion and contraction of materials on the compressive forces in the assembly, as well as to analyze a full year's worth of cladding movement data from assemblies constructed in an exposed outdoor environment.
Building America Top Innovations 2012: Building Science-Based Climate Maps
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2013-01-01
This Building America Top Innovations profile describes the Building America-developed climate zone map, which serves as a consistent framework for energy-efficiency requirements in the national model energy code starting with the 2004 IECC Supplement and the ASHRAE 90.1 2004 edition. The map also provides a critical foundation for climate-specific guidance in the widely disseminated EEBA Builder Guides and Building America Best Practice Guides.
Field Testing of Compartmentalization Methods for Multifamily Construction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ueno, K.; Lstiburek, J. W.
2015-03-01
The 2012 International Energy Conservation Code (IECC) has an airtightness requirement of 3 air changes per hour at 50 Pascals test pressure (3 ACH50) for single-family and multifamily construction (in climate zones 3–8). The Leadership in Energy & Environmental Design certification program and ASHRAE Standard 189 have comparable compartmentalization requirements. ASHRAE Standard 62.2 will soon be responsible for all multifamily ventilation requirements (low rise and high rise); it has an exceptionally stringent compartmentalization requirement. These code and program requirements are driving the need for easier and more effective methods of compartmentalization in multifamily buildings.
Measure Guideline: Implementing a Plenum Truss for a Compact Air Distribution System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burdick, A.
2013-10-01
This Measure Guideline presents the steps to implement a compact duct system inside an attic bulkhead (plenum truss) of a one-story, slab-on-grade (SOG) home. In a compact duct design, ductwork runs are reduced in length to yield a smaller and more compact duct system. Less energy will be lost through ductwork if the ducts are contained within the thermal enclosure of the house. These measures are intended for the production builder working to meet the 2012 International Energy Conservation Code (IECC) requirements and keep the ductwork within the thermal enclosure of the house. This measure of bringing the heating, ventilation and air conditioning (HVAC) equipment and ductwork within the thermal enclosure of the house is appropriate for the builder wishing to avoid cathedralizing the insulation in the attic space (i.e., locating it at the underside of the roof deck rather than along the attic floor) or adding dropped soffits.
Initial and Long-Term Movement of Cladding Installed Over Exterior Rigid Insulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, P.
Changes in the International Energy Conservation Code (IECC) from 2009 to 2012 have made exterior rigid insulation part of the prescriptive code requirements. With more jurisdictions adopting the 2012 IECC, builders will be required to incorporate exterior insulation in the construction of their exterior wall assemblies. For thick layers of exterior insulation (levels greater than 1.5 inches), wood furring strips attached through the insulation back to the structure have been used by many contractors and designers as a means to provide a convenient cladding attachment location. However, there has been significant resistance to its widespread implementation due to a lack of research and understanding of the mechanisms involved and potential creep effects of the assembly under the sustained dead load of a cladding. This research was an extension of previous research conducted by BSC in 2011 and 2012. Each year, the understanding of the system's discrete load component interactions, as well as the impacts of environmental loading, has increased. The focus of the research was to examine more closely the impacts of screw fastener bending on total system capacity and the effects of thermal expansion and contraction of materials on the compressive forces in the assembly, as well as to analyze a full year's worth of cladding movement data from assemblies constructed in an exposed outdoor environment.
Potential Job Creation in Rhode Island as a Result of Adopting New Residential Building Energy Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott, Michael J.; Niemeyer, Jackie M.
Are there advantages to states that adopt the most recent model building energy codes other than saving energy? For example, can the construction activity and energy savings associated with code-compliant housing units become significant sources of job creation for states if new building energy codes are adopted to cover residential construction? The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) asked Pacific Northwest National Laboratory (PNNL) to research and ascertain whether jobs would be created in individual states based on their adoption of model building energy codes. Each state in the country is dealing with high levels of unemployment, so job creation has become a top priority. Many programs have been created to combat unemployment, with varying degrees of failure and success. At the same time, many states still have not adopted the most current versions of the International Energy Conservation Code (IECC) model building energy code, when doing so could be a very effective tool in creating jobs to assist states in recovering from this economic downturn.
Potential Job Creation in Minnesota as a Result of Adopting New Residential Building Energy Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott, Michael J.; Niemeyer, Jackie M.
Are there advantages to states that adopt the most recent model building energy codes other than saving energy? For example, can the construction activity and energy savings associated with code-compliant housing units become significant sources of job creation for states if new building energy codes are adopted to cover residential construction? The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) asked Pacific Northwest National Laboratory (PNNL) to research and ascertain whether jobs would be created in individual states based on their adoption of model building energy codes. Each state in the country is dealing with high levels of unemployment, so job creation has become a top priority. Many programs have been created to combat unemployment, with varying degrees of failure and success. At the same time, many states still have not adopted the most current versions of the International Energy Conservation Code (IECC) model building energy code, when doing so could be a very effective tool in creating jobs to assist states in recovering from this economic downturn.
Potential Job Creation in Tennessee as a Result of Adopting New Residential Building Energy Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott, Michael J.; Niemeyer, Jackie M.
Are there advantages to states that adopt the most recent model building energy codes other than saving energy? For example, can the construction activity and energy savings associated with code-compliant housing units become significant sources of job creation for states if new building energy codes are adopted to cover residential construction? The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) asked Pacific Northwest National Laboratory (PNNL) to research and ascertain whether jobs would be created in individual states based on their adoption of model building energy codes. Each state in the country is dealing with high levels of unemployment, so job creation has become a top priority. Many programs have been created to combat unemployment, with varying degrees of failure and success. At the same time, many states still have not adopted the most current versions of the International Energy Conservation Code (IECC) model building energy code, when doing so could be a very effective tool in creating jobs to assist states in recovering from this economic downturn.
Potential Job Creation in Nevada as a Result of Adopting New Residential Building Energy Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott, Michael J.; Niemeyer, Jackie M.
Are there advantages to states that adopt the most recent model building energy codes other than saving energy? For example, can the construction activity and energy savings associated with code-compliant housing units become significant sources of job creation for states if new building energy codes are adopted to cover residential construction? The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) asked Pacific Northwest National Laboratory (PNNL) to research and ascertain whether jobs would be created in individual states based on their adoption of model building energy codes. Each state in the country is dealing with high levels of unemployment, so job creation has become a top priority. Many programs have been created to combat unemployment, with varying degrees of failure and success. At the same time, many states still have not adopted the most current versions of the International Energy Conservation Code (IECC) model building energy code, when doing so could be a very effective tool in creating jobs to assist states in recovering from this economic downturn.
Advanced Envelope Research for Factory Built Housing, Phase 3. Design Development and Prototyping
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levy, E.; Kessler, B.; Mullens, M.
2014-01-01
The Advanced Envelope Research effort will provide factory homebuilders with high-performance, cost-effective alternative envelope designs. In the near term, these technologies will play a central role in meeting stringent energy code requirements. For manufactured homes, the thermal requirements, last updated by statute in 1994, will move up to the more rigorous 2012 IECC levels in 2013, requirements consistent with site-built and modular housing. This places added urgency on identifying envelope technologies that the industry can implement in the short timeframe. The primary goal of this research is to develop wall designs that meet the thermal requirements based on 2012 IECC standards. Given the affordable nature of manufactured homes, impact on first cost is a major consideration in developing the new envelope technologies. This work is part of a four-phase, multi-year effort. Phase 1 identified seven envelope technologies and provided a preliminary assessment of three selected methods for building high-performance wall systems. Phase 2 focused on the development of viable product designs, manufacturing strategies, addressing code and structural issues, and cost analysis of the three selected options. An industry advisory committee helped critique and select the most viable solution to move further in the research -- stud walls with continuous exterior insulation. Phase 3, the subject of the current report, focused on the design development of the selected wall concept and explored variations on the use of exterior foam insulation. The scope also included material selection, manufacturing and cost analysis, and prototyping and testing.
Integrating risk assessment and life cycle assessment: a case study of insulation.
Nishioka, Yurika; Levy, Jonathan I; Norris, Gregory A; Wilson, Andrew; Hofstetter, Patrick; Spengler, John D
2002-10-01
Increasing residential insulation can decrease energy consumption and provide public health benefits, given changes in emissions from fuel combustion, but it also has cost implications and ancillary risks and benefits. Risk assessment or life cycle assessment can be used to calculate the net impacts and determine whether more stringent energy codes or other conservation policies would be warranted, but few analyses have combined the critical elements of both methodologies. In this article, we present the first portion of a combined analysis, with the goal of estimating the net public health impacts of increasing residential insulation for new housing from current practice to the latest International Energy Conservation Code (IECC 2000). We model state-by-state residential energy savings and evaluate particulate matter less than 2.5 μm in diameter (PM2.5), NOx, and SO2 emission reductions. We use past dispersion modeling results to estimate reductions in exposure, and we apply concentration-response functions for premature mortality and selected morbidity outcomes using current epidemiological knowledge of effects of PM2.5 (primary and secondary). We find that an insulation policy shift would save 3 x 10^14 British thermal units (BTU; 3 x 10^17 J) over a 10-year period, resulting in reduced emissions of 1,000 tons of PM2.5, 30,000 tons of NOx, and 40,000 tons of SO2. These emission reductions yield an estimated 60 fewer fatalities during this period, with the geographic distribution of health benefits differing from the distribution of energy savings because of differences in energy sources, population patterns, and meteorology. We discuss the methodology to be used to integrate life cycle calculations, which can ultimately yield estimates that can be compared with costs to determine the influence of external costs on benefit-cost calculations.
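To make the abstract's accounting chain concrete, here is a minimal Python sketch of the energy-to-emissions-to-health calculation it describes. The emission reductions are the figures quoted above; the per-ton mortality factors are hypothetical placeholders chosen only for illustration, since the study derives such factors from dispersion modeling and concentration-response functions.

```python
# Minimal sketch of the benefit chain: energy savings -> emission
# reductions -> avoided premature deaths. The mortality factors are
# hypothetical placeholders, not values from the study.

energy_saved_btu = 3e14  # 10-year savings reported in the abstract

# Emission reductions reported in the abstract (tons over 10 years)
emission_reductions = {"PM2.5": 1_000, "NOx": 30_000, "SO2": 40_000}

# Hypothetical deaths avoided per ton of pollutant removed (assumed)
mortality_per_ton = {"PM2.5": 0.03, "NOx": 0.0004, "SO2": 0.0004}

avoided_deaths = sum(tons * mortality_per_ton[p]
                     for p, tons in emission_reductions.items())
print(f"Estimated avoided premature deaths: {avoided_deaths:.0f}")
```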
Small Changes Yield Large Results at NIST's Net-Zero Energy Residential Test Facility.
Fanney, A Hunter; Healy, William; Payne, Vance; Kneifel, Joshua; Ng, Lisa; Dougherty, Brian; Ullah, Tania; Omar, Farhad
2017-12-01
The Net-Zero Energy Residential Test Facility (NZERTF) was designed to be approximately 60% more energy efficient than homes meeting the 2012 International Energy Conservation Code (IECC) requirements. The thermal envelope minimizes heat loss/gain through the use of advanced framing and enhanced insulation. A continuous air/moisture barrier resulted in an air exchange rate of 0.6 air changes per hour at 50 Pa. The home incorporates a vast array of extensively monitored renewable and energy efficient technologies including an air-to-air heat pump system with a dedicated dehumidification cycle; a ducted heat-recovery ventilation system; a whole house dehumidifier; a photovoltaic system; and a solar domestic hot water system. During its first year of operation the NZERTF produced an energy surplus of 1023 kWh. Based on observations during the first year, changes were made to determine if further improvements in energy performance could be obtained. The changes consisted of installing a thermostat that incorporated control logic to minimize the use of auxiliary heat, using a whole house dehumidifier in lieu of the heat pump's dedicated dehumidification cycle, and reducing the ventilation rate to a value that met but did not exceed code requirements. During the second year of operation the NZERTF produced an energy surplus of 2241 kWh. This paper describes the facility, compares the performance data for the two years, and quantifies the energy impact of the weather conditions and operational changes.
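The 0.6 air changes per hour at 50 Pa figure comes from a blower-door test. As a small illustrative sketch (the flow and house volume below are assumptions, not NIST's measured values), the conversion is:

```python
# Hedged sketch: converting a blower-door reading into ACH50, the
# airtightness metric quoted for the NZERTF. CFM50 and volume are
# illustrative assumptions.

def ach50(cfm50: float, volume_ft3: float) -> float:
    """Air changes per hour at 50 Pa: (CFM50 * 60 min/h) / volume."""
    return cfm50 * 60.0 / volume_ft3

# A hypothetical 30,000 ft3 house would need a blower-door flow of
# 300 CFM50 to reach the 0.6 ACH50 reported for the NZERTF.
print(ach50(300.0, 30_000.0))  # -> 0.6
```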
Implementation of Energy Code Controls Requirements in New Commercial Buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenberg, Michael I.; Hart, Philip R.; Hatten, Mike
Most state energy codes in the United States are based on one of two national model codes: ANSI/ASHRAE/IES 90.1 (Standard 90.1) or the International Code Council (ICC) International Energy Conservation Code (IECC). Since 2004, covering the last four cycles of Standard 90.1 updates, about 30% of all new requirements have been related to building controls. These requirements can be difficult to implement, and verification is beyond the expertise of most building code officials, yet the assumption in studies that measure the savings from energy codes is that they are implemented and working correctly. The objective of the current research is to evaluate the degree to which high-impact controls requirements included in commercial energy codes are properly designed, commissioned, and implemented in new buildings. This study also evaluates the degree to which these control requirements are realizing their savings potential. This was done using a three-step process. The first step involved interviewing commissioning agents to get a better understanding of their activities as they relate to energy code required controls measures. The second involved field audits of a sample of commercial buildings to determine whether the code required control measures are being designed, commissioned, and correctly implemented and functioning in new buildings. The third step includes compilation and analysis of the information gathered during the first two steps. Information gathered during these activities could be valuable to code developers, energy planners, designers, building owners, and building officials.
Energy analysis of cool, medium, and dark roofs on residential buildings in the U.S
NASA Astrophysics Data System (ADS)
Dunbar, Michael A.
This study reports an energy analysis of cool, medium, and dark roofs on residential buildings in the U.S. Three analyses were undertaken in this study: energy consumption, an economic analysis, and an environmental analysis. The energy consumption analysis reports the electricity and natural gas consumption of the simulations. The economic analysis uses tools such as simple payback period (SPP) and net present value (NPV) to determine the profitability of the cool roof and the medium roof. The variable changed in each simulation model was the roof color. The default color was a dark roof, and the results were focused on the changes produced by the cool roof and the medium roof. The environmental analysis uses CO2 emissions to assess the environmental impact of the cool roof and the medium roof. The analysis uses the U.S. Department of Energy (DOE) EnergyPlus software to produce simulations of a typical, two-story residential home in the U.S. The building details of the typical, two-story U.S. residential home and the International Energy Conservation Code (IECC) building code standards used are discussed in this study. This study indicates that, when material and labor costs are assessed, neither the cool roof nor the medium roof yields a SPP less than 10 years. Furthermore, the NPV results show that neither the cool roof nor the medium roof is a profitable investment in any climate zone in the U.S. The environmental analysis demonstrates that both the cool roof and the medium roof have a positive impact in warmer climates by reducing CO2 emissions by as much as 264 kg and 129 kg, respectively.
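The two economic metrics named above are straightforward to compute. A minimal sketch follows; the installed cost and annual savings are hypothetical, since the study's inputs are not given in the abstract:

```python
# Sketch of the roof study's two economic metrics, with invented inputs.

def simple_payback_period(installed_cost: float, annual_savings: float) -> float:
    """Years until cumulative savings equal the upfront cost."""
    return installed_cost / annual_savings

def net_present_value(installed_cost: float, annual_savings: float,
                      discount_rate: float, years: int) -> float:
    """NPV of a uniform annual savings stream minus the upfront cost."""
    pv = sum(annual_savings / (1 + discount_rate) ** t
             for t in range(1, years + 1))
    return pv - installed_cost

# A hypothetical cool roof costing $2,000 extra and saving $150/year
# pays back in 13.3 years and has a negative NPV at 5% over 10 years,
# consistent in spirit with the study's SPP > 10 years finding.
print(simple_payback_period(2000, 150))          # -> 13.33...
print(net_present_value(2000, 150, 0.05, 10))    # -> about -842
```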
Technical Support Document for Version 3.9.0 of the COMcheck Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartlett, Rosemarie; Connell, Linda M.; Gowri, Krishnan
2011-09-01
COMcheck provides an optional way to demonstrate compliance with commercial and high-rise residential building energy codes. Commercial buildings include all use groups except single-family and multifamily not over three stories in height. COMcheck was originally based on ANSI/ASHRAE/IES Standard 90.1-1989 (Standard 90.1-1989) requirements and is intended for use with various codes based on Standard 90.1, including the Codification of ASHRAE/IES Standard 90.1-1989 (90.1-1989 Code) (ASHRAE 1989a, 1993b) and ASHRAE/IESNA Standard 90.1-1999 (Standard 90.1-1999). This includes jurisdictions that have adopted the 90.1-1989 Code, Standard 90.1-1989, Standard 90.1-1999, or their own code based on one of these. We view Standard 90.1-1989 and the 90.1-1989 Code as having equivalent technical content and have used both as source documents in developing COMcheck. This technical support document (TSD) is designed to explain the technical basis for the COMcheck software as originally developed based on Standard 90.1-1989. Documentation for other national model codes and standards and specific state energy codes supported in COMcheck has been added to this report as appendices. These appendices are intended to provide technical documentation for features specific to the supported codes and for any changes made for state-specific codes that differ from the standard features that support compliance with the national model codes and standards. Beginning with COMcheck version 3.8.0, support for 90.1-1989, 90.1-1999, and the 1998 IECC is no longer included, but those sections remain in this document for reference purposes.
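Broadly speaking, an envelope compliance check of the kind COMcheck automates is a UA trade-off: the area-weighted sum of U-factor times area for the proposed design must not exceed the same sum computed with the code's prescriptive U-factors. A hedged sketch with invented assemblies and U-factors follows:

```python
# Hedged sketch of a whole-envelope UA trade-off. All assemblies and
# U-factors below are invented for illustration; they are not values
# from any edition of Standard 90.1 or the IECC.

def total_ua(envelope):
    """Sum of U-factor x area over all envelope assemblies."""
    return sum(area * u for area, u in envelope)

proposed = [        # (area ft2, proposed U-factor)
    (1_200, 0.060),   # walls upgraded with continuous insulation
    (800,   0.035),   # roof slightly worse than prescriptive
    (300,   0.300),   # better windows make up the difference
]
prescriptive = [    # same areas, code prescriptive U-factors (assumed)
    (1_200, 0.064),
    (800,   0.027),
    (300,   0.350),
]

print("Proposed UA:", total_ua(proposed))                       # 190.0
print("Budget UA:  ", total_ua(prescriptive))                   # 203.4
print("Complies:", total_ua(proposed) <= total_ua(prescriptive))  # True
```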
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baechler, Michael C.; Gilbride, Theresa L.; Hefty, Marye G.
2011-09-01
This best practices guide is the 15th in a series of guides for builders produced by PNNL for the U.S. Department of Energy’s Building America program. This guide book is a resource to help builders design and construct homes that are among the most energy-efficient available, while addressing issues such as building durability, indoor air quality, and occupant health, safety, and comfort. With the measures described in this guide, builders in the hot-humid climate can build homes that have whole-house energy savings of 40% over the Building America benchmark with no added overall costs for consumers. The best practices described in this document are based on the results of research and demonstration projects conducted by Building America’s research teams. Building America brings together the nation’s leading building scientists with over 300 production builders to develop, test, and apply innovative, energy-efficient construction practices. Building America builders have found they can build homes that meet these aggressive energy-efficiency goals at no net increased costs to the homeowners. Currently, Building America homes achieve energy savings 40% greater than the Building America benchmark home (a home built to mid-1990s building practices roughly equivalent to the 1993 Model Energy Code). The recommendations in this document meet or exceed the requirements of the 2009 IECC and 2009 IRC, and those requirements are highlighted in the text. Requirements of the 2012 IECC and 2012 IRC are also noted in text and tables throughout the guide. This document will be distributed via the DOE Building America website: www.buildingamerica.gov.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baechler, Michael C.; Gilbride, Theresa L.; Hefty, Marye G.
2011-09-01
This best practices guide is the 16th in a series of guides for builders produced by PNNL for the U.S. Department of Energy’s Building America program. This guide book is a resource to help builders design and construct homes that are among the most energy-efficient available, while addressing issues such as building durability, indoor air quality, and occupant health, safety, and comfort. With the measures described in this guide, builders in the mixed-humid climate can build homes that have whole-house energy savings of 40% over the Building America benchmark with no added overall costs for consumers. The best practices described in this document are based on the results of research and demonstration projects conducted by Building America’s research teams. Building America brings together the nation’s leading building scientists with over 300 production builders to develop, test, and apply innovative, energy-efficient construction practices. Building America builders have found they can build homes that meet these aggressive energy-efficiency goals at no net increased costs to the homeowners. Currently, Building America homes achieve energy savings 40% greater than the Building America benchmark home (a home built to mid-1990s building practices roughly equivalent to the 1993 Model Energy Code). The recommendations in this document meet or exceed the requirements of the 2009 IECC and 2009 IRC, and those requirements are highlighted in the text. Requirements of the 2012 IECC and 2012 IRC are also noted in text and tables throughout the guide. This document will be distributed via the DOE Building America website: www.buildingamerica.gov.
Deep Energy Retrofits - Eleven California Case Studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Less, Brennan; Fisher, Jeremy; Walker, Iain
2012-10-01
This research documents and demonstrates viable approaches using existing materials, tools, and technologies in owner-conducted deep energy retrofits (DERs). These retrofits are meant to reduce energy use by 70% or more, and include extensive upgrades to the building enclosure, heating, cooling, and hot water equipment, and often incorporate appliance and lighting upgrades as well as the addition of renewable energy. In this report, 11 Northern California (IECC climate zone 3) DER case studies are described and analyzed in detail, including building diagnostic tests and end-use energy monitoring results. All projects recognized the need to improve the home and its systems approximately to current building code levels, and then pursued deeper energy reductions through either enhanced technology/building enclosure measures or through occupant conservation efforts, both of which achieved impressive energy performance and reductions. The beyond-code incremental DER costs averaged $25,910 for the six homes where cost data were available. DERs were affordable when these incremental costs were financed as part of a remodel, averaging a $30 per month increase in the net cost of home ownership.
DOE Office of Scientific and Technical Information (OSTI.GOV)
none,
Tom Walsh & Company’s homes in an urban infill project in Portland meet 2012 IECC insulation requirements in the marine climate with R-21 fiberglass batt walls, R-25 slab insulation, and R-49 spray foam and cellulose attic floors.
Measuring effects of climate change and energy efficiency regulations in U.S. households
NASA Astrophysics Data System (ADS)
Koirala, Bishwa Shakha
The first chapter explains the human causes of climate change and its costs, which are estimated to be about 3.6% of GDP by the end of the 21st century (NRDC, 2008). The second chapter investigates how projected July temperatures will increase the demand for electricity in the U.S. by 0.8%, while projected January temperatures will decrease the demand for natural gas and heating oil by 1% and 2.3%, respectively. This chapter further examines the effects of the energy-efficiency building codes IECC 2003 and IECC 2006 in reducing energy consumption in U.S. households. This study finds that these state-level building codes are effective in reducing energy demand: adoption of these codes reduces electricity demand by 1.8%, natural gas by 1.3%, and heating oil by 2.8%. A total emission reduction of about 7.54 MMT of CO2 per year is possible from the residential sector by applying such energy-efficiency building codes. This chapter further estimates average consumption per household of 1,342 kWh/month of electricity, 3,429 cubic feet/month of natural gas, and 277 gallons/year of heating oil. It also identifies the existence of state heterogeneity that affects household-level energy demand, and finds that the assumption of independence of the error term is violated. Chapter 3 estimates the implicit prices of climate in dollars by analyzing hedonic rent and wage models for homeowners and apartment renters. The estimated results show that January temperature is a disamenity for which both homeowners and renters are being compensated (negative marginal willingness to pay) by $16 and $25 per month, respectively, at the 2004 price level. It also finds that January temperature is productive, whereas July temperatures and annual precipitation are amenities and less productive. This study suggests that households would be willing to pay for higher temperature and increased precipitation; the estimated threshold point for July temperature is 75°F and for annual precipitation is 50 inches. It further reports that homeowners pay more than renters for climate amenities in the Northeast and West with reference to the Midwest, whereas in the South these values do not differ much, suggesting that firms have an incentive to invest in those regions. This chapter also identifies that both the housing and labor markets are segmented across regions in the U.S. Chapter 4 uses meta-analysis to explore the environmental Kuznets curve (EKC) relationship for CO2 and several other environmental quality measures. Results indicate the presence of an EKC-type relationship for CO2 and other environmental quality measures in relative terms. However, the predicted value of the income turning point for CO2 is both extremely large in relative terms (about 10 times the world GDP per capita at the 2007 price level) and far outside the range of the data. Therefore, this study cannot accept the existence of the EKC relationship for CO2.
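Chapter 2's headline percentages translate directly into per-household savings. A back-of-envelope sketch using only the averages quoted above:

```python
# Back-of-envelope sketch combining the per-household consumption
# averages and code-effect percentages quoted in the abstract.

elec_kwh_per_month = 1_342
gas_cuft_per_month = 3_429
oil_gal_per_year = 277

reduction = {"electricity": 0.018, "natural gas": 0.013, "heating oil": 0.028}

print(f"electricity: {elec_kwh_per_month * 12 * reduction['electricity']:.0f} kWh/yr")
print(f"natural gas: {gas_cuft_per_month * 12 * reduction['natural gas']:.0f} cu ft/yr")
print(f"heating oil: {oil_gal_per_year * reduction['heating oil']:.1f} gal/yr")
```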
Best Practices Case Study: Schneider Homes, Inc. - Village at Miller Creek, Burien, WA
DOE Office of Scientific and Technical Information (OSTI.GOV)
none,
2010-09-01
Case study of Schneider Homes, who achieved 50% savings over the 2004 IECC with analysis and recommendations from DOE’s Building America including moving ducts and furnace into conditioned space, R-23 blown fiberglass in the walls and R-38 in the attics, and high-performance HVAC, lighting, appliances, and windows.
DOE Zero Energy Ready Home Case Study: Caldwell and Johnson, Exeter, Rhode Island
DOE Office of Scientific and Technical Information (OSTI.GOV)
none,
This house, constructed by Caldwell and Johnson, should save its owners $600 per year over the 2009 IECC with the help of efficiency measures such as walls with OSB sheathing and R-13 open cell spray foam insulation. The home garnered a 2013 Housing Innovation Award in the custom builder category.
Best Practices Case Study: Tom Walsh and Co. - New Columbus, Portland, OR
DOE Office of Scientific and Technical Information (OSTI.GOV)
none,
2010-09-01
Case study of Tom Walsh, who achieved 50% in heating and cooling energy savings over the 2004 IECC with advanced framing, superior air sealing, extra insulation, and ducts in conditioned space. Surface water runoff in the large urban rebuild development was handled with pervious pavers, swales, retention of existing trees, and green spaces.
New Whole-House Solutions Case Study: Schneider Homes, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
none,
2013-02-01
Schneider Homes cut energy use by 50% over the 2004 IECC on 28 homes built near Seattle in 2008. Schneider packed the walls with R-23 blown fiberglass and blanketed the ceiling with R-38 blown cellulose. Ducts went into conditioned space through open-web floor trusses between floors, and air handlers went into an air-sealed garage closet.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2015-01-01
The 2012 IECC has an airtightness requirement of 3 air changes per hour at 50 Pascals test pressure for both single family and multifamily construction in Climate Zones 3-8. Other programs (LEED, ASHRAE 189, ASHRAE 62.2) have similar or tighter compartmentalization requirements, thus driving the need for easier and more effective methods of compartmentalization in multifamily buildings.
Go, Michael R; Masterson, Loren; Veerman, Brent; Satiani, Bhagwan
2016-02-01
To curb increasing volumes of diagnostic imaging and costs, reimbursement for carotid duplex ultrasound (CDU) is dependent on "appropriate" indications as documented by International Classification of Diseases (ICD) codes entered by ordering physicians. Historically, asymptomatic indications for CDU yield lower rates of abnormal results than symptomatic indications, and consensus documents agree that most asymptomatic indications for CDU are inappropriate. In our vascular laboratory, we perceived an increased rate of incorrect or inappropriate ICD codes. We therefore sought to determine if ICD codes were useful in predicting the frequency of abnormal CDU. We hypothesized that asymptomatic or nonspecific ICD codes would yield a lower rate of abnormal CDU than symptomatic codes, validating efforts to limit reimbursement in asymptomatic, low-yield groups. We reviewed all outpatient CDU done in 2011 at our institution. ICD codes were recorded, and each medical record was then reviewed by a vascular surgeon to determine if the assigned ICD code appropriately reflected the clinical scenario. CDU findings categorized as abnormal (>50% stenosis) or normal (<50% stenosis) were recorded. Each individual ICD code and group 1 (asymptomatic), group 2 (nonhemispheric symptoms), group 3 (hemispheric symptoms), group 4 (preoperative cardiovascular examination), and group 5 (nonspecific) ICD codes were analyzed for correlation with CDU results. Nine hundred ninety-four patients had 74 primary ICD codes listed as indications for CDU. Of assigned ICD codes, 17.4% were deemed inaccurate. Overall, 14.8% of CDU were abnormal. Of the 13 highest frequency ICD codes, only 433.10, an asymptomatic code, was associated with abnormal CDU. Four symptomatic codes were associated with normal CDU; none of the other high frequency codes were associated with CDU result. Patients in group 1 (asymptomatic) were significantly more likely to have an abnormal CDU compared to each of the other groups (P < 0.001, P < 0.001, P = 0.020, P = 0.002) and to all other groups combined (P < 0.001). Asymptomatic indications by ICD codes yielded higher rates of abnormal CDU than symptomatic indications. This finding is inconsistent with clinical experience and historical data, and we suggest that inaccurate coding may play a role. Limiting reimbursement for CDU in low-yield groups is reasonable. However, reimbursement policies based on ICD coding, for example, limiting payment for asymptomatic ICD codes, may impede use of CDU in high-yield patient groups.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levy, E.; Mullens, M.; Rath, P.
The Advanced Envelope Research effort will provide factory homebuilders with high-performance, cost-effective envelope designs that can be effectively integrated into the plant production process while meeting the thermal requirements of the 2012 IECC standards. This work is part of a multiphase effort. Phase 1 identified seven envelope technologies and provided a preliminary assessment of three methods for building high-performance walls. Phase 2 focused on developing viable product designs, manufacturing strategies, addressing code and structural issues, and cost analysis of the three selected options. An industry advisory committee helped narrow the research focus to perfecting a stud wall design with exterior continuous insulation (CI). This report describes Phase 3, which was completed in two stages and continued the design development effort, exploring and evaluating a range of methods for applying CI to factory-built homes. The scope also included material selection, manufacturing and cost analysis, and prototyping and testing. During this phase, a home was built with CI, evaluated, and placed in service. The experience of building a mock-up wall section with CI and then constructing a prototype home on the production line resolved important concerns about how to integrate the material into the production process. First steps were taken toward finding the least expensive approaches for incorporating CI in standard factory building practices, and a preliminary assessment suggested that even at this early stage the technology is attractive when viewed from a life-cycle cost perspective.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Puttagunta, S.; Grab, J.; Williamson, J.
Working with builder partners on test homes allows for vetting of whole-house building strategies to eliminate any potential unintended consequences prior to implementing these solution packages on a production scale. To support this research, CARB partnered with Preferred Builders Inc. on a high-performance test home in Old Greenwich, CT. The philosophy and science behind the 2,700 ft2 "Performance House" was based on the premise that homes should be safe, healthy, comfortable, durable, efficient, and adapt with the homeowners. The technologies and strategies used in the "Performance House" were not cutting-edge, but simply "best practices practiced". The focus was on simplicity in construction, maintenance, and operation. When seeking a 30% source energy savings target over a comparable 2009 IECC code-built home in the cold climate zone, nearly all components of a home must be optimized. Careful planning and design are critical. To help builders and architects seeking to match the performance of this home, a step-by-step guide through the building shell components of DOE's Challenge Home is provided in a pictorial story book. The end result was a DOE Challenge Home that achieved a HERS Index Score of 20 (43 without PV; the minimum target was 55 for compliance). This home was also awarded the 2012 HOBI for Best Green Energy Efficient Home from the Home Builders & Remodelers Association of Connecticut.
Evolution of plastic anisotropy for high-strain-rate computations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schiferl, S.K.; Maudlin, P.J.
1994-12-01
A model for anisotropic material strength, and for changes in the anisotropy due to plastic strain, is described. This model has been developed for use in high-rate, explicit, Lagrangian multidimensional continuum-mechanics codes. The model handles anisotropies in single-phase materials, in particular the anisotropies due to crystallographic texture--preferred orientations of the single-crystal grains. Textural anisotropies, and the changes in these anisotropies, depend overwhelmingly on the crystal structure of the material and on the deformation history. The changes, particularly for complex deformations, are not amenable to simple analytical forms. To handle this problem, the material model described here includes a texture code, or micromechanical calculation, coupled to a continuum code. The texture code updates grain orientations as a function of tensor plastic strain, and calculates the yield strength in different directions. A yield function is fitted to these yield points. For each computational cell in the continuum simulation, the texture code tracks a particular set of grain orientations. The orientations will change due to the tensor strain history, and the yield function will change accordingly. Hence, the continuum code supplies a tensor strain to the texture code, and the texture code supplies an updated yield function to the continuum code. Since significant texture changes require relatively large strains--typically, a few percent or more--the texture code is not called very often, and the increase in computer time is not excessive. The model was implemented using a finite-element continuum code and a texture code specialized for hexagonal-close-packed crystal structures. The results for several uniaxial stress problems and an explosive-forming problem are shown.
Development of Yield and Tensile Strength Design Curves for Alloy 617
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nancy Lybeck; T. -L. Sham
2013-10-01
The U.S. Department of Energy Very High Temperature Reactor Program is acquiring data in preparation for developing an Alloy 617 Code Case for inclusion in the nuclear section of the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel (B&PV) Code. A draft code case was previously developed, but effort was suspended before acceptance by ASME. As part of the draft code case effort, a database was compiled of yield and tensile strength data from tests performed in air. Yield strength and tensile strength at temperature are used to set time-independent allowable stress for construction materials in B&PV Code, Section III, Subsection NH. The yield and tensile strength data used for the draft code case have been augmented with additional data generated by Idaho National Laboratory and Oak Ridge National Laboratory in the U.S. and CEA in France. The standard ASME Section II procedure for generating yield and tensile strength at temperature is presented, along with alternate methods that accommodate the change in temperature trends seen at high temperatures, resulting in a more consistent design margin over the temperature range of interest.
Vector Adaptive/Predictive Encoding Of Speech
NASA Technical Reports Server (NTRS)
Chen, Juin-Hwey; Gersho, Allen
1989-01-01
Vector adaptive/predictive technique for digital encoding of speech signals yields decoded speech of very good quality after transmission at a coding rate of 9.6 kb/s and of reasonably good quality at 4.8 kb/s. Requires 3 to 4 million multiplications and additions per second. Combines advantages of adaptive/predictive coding and code-excited linear prediction; the latter yields speech of high quality but requires 600 million multiplications and additions per second at an encoding rate of 4.8 kb/s. Vector adaptive/predictive coding bridges the gaps in performance and complexity between adaptive/predictive coding and code-excited linear prediction.
Fission yield and criticality excursion code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blanchard, A.
2000-06-30
The ANSI/ANS 8.3 standard allows a maximum yield not to exceed 2 x 10 fissions to be used in calculations that determine whether the alarm system will be effective. It is common practice to use this allowance or to develop some other yield based on past criticality accident history or excursion experiments. The literature on the subject of yields discusses maximum yields larger and somewhat smaller than the ANS 8.3 permissive value. The ability to model criticality excursions and vary the various parameters to determine a credible maximum yield for operation-specific cases has been available for some time but is not in common use by criticality safety specialists. The topic of yields for various solutions, metals, oxide powders, etc., in various geometries and containers has been published by laboratory specialists or university staff and students for many decades but has not been available to practitioners. The need for best-estimate calculations of fission yields with a well-validated criticality excursion code has long been recognized, but no coordinated effort has been made so far to develop a generalized and well-validated excursion code for different types of systems. In this paper, the current practices to estimate fission yields are summarized along with their shortcomings for the 12-Rad zone (at SRS) and Criticality Alarm System (CAS) calculations. Finally, the need for a user-friendly excursion code is reemphasized.
(I Can't Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research.
van Rijnsoever, Frank J
2017-01-01
I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: "random chance," which is based on probability sampling, "minimal information," which yields at least one new code per sampling step, and "maximum information," which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximal information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimum information scenario.
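A minimal simulation in the spirit of the paper's "random chance" scenario can illustrate why saturation depends more on the probability of observing codes than on the number of codes. The population parameters below are illustrative assumptions, not values from the paper:

```python
# Sketch of a "random chance" saturation simulation: sample sources
# until every code in a hypothetical population has been seen once.

import random

def sample_until_saturation(n_codes: int, p_observe: float,
                            seed: int = 0) -> int:
    """Number of sources sampled before all codes appear at least once.

    Each source independently exhibits each code with probability
    p_observe (the 'mean probability of observing codes').
    """
    rng = random.Random(seed)
    unseen = set(range(n_codes))
    n_sources = 0
    while unseen:
        n_sources += 1
        unseen = {c for c in unseen if rng.random() > p_observe}
    return n_sources

# Saturation sample size is driven mainly by p_observe, mirroring the
# paper's headline finding.
for p in (0.05, 0.10, 0.20):
    print(p, sample_until_saturation(n_codes=30, p_observe=p))
```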
NASA Astrophysics Data System (ADS)
Kajimoto, T.; Shigyo, N.; Sanami, T.; Iwamoto, Y.; Hagiwara, M.; Lee, H. S.; Soha, A.; Ramberg, E.; Coleman, R.; Jensen, D.; Leveling, A.; Mokhov, N. V.; Boehnlein, D.; Vaziri, K.; Sakamoto, Y.; Ishibashi, K.; Nakashima, H.
2014-10-01
The energy spectra of neutrons were measured by a time-of-flight method for 120 GeV protons on thick graphite, aluminum, copper, and tungsten targets with an NE213 scintillator at the Fermilab Test Beam Facility. Neutron energy spectra were obtained between 25 and 3000 MeV at emission angles of 30°, 45°, 120°, and 150°. The spectra were parameterized as neutron emissions from three moving sources and then compared with theoretical spectra calculated by the PHITS and FLUKA codes. The theoretical spectra substantially underestimated the measured yields: the neutron yields integrated from 25 to 3000 MeV calculated with the PHITS code were 16-36% of the experimental yields, and those calculated with the FLUKA code were 26-57% of the experimental yields, for all targets and emission angles.
TEMPEST code modifications and testing for erosion-resisting sludge simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Onishi, Y.; Trent, D.S.
The TEMPEST computer code has been used to address many waste retrieval operational and safety questions regarding waste mobilization, mixing, and gas retention. Because the amount of sludge retrieved from the tank is directly related to the sludge yield strength and the shear stress acting upon it, it is important to incorporate the sludge yield strength into simulations of erosion-resisting tank waste retrieval operations. This report describes current efforts to modify the TEMPEST code to simulate pump jet mixing of erosion-resisting tank wastes and the models used to test for erosion of waste sludge with yield strength. Test results for solid deposition and diluent/slurry jet injection into sludge layers in simplified tank conditions show that the modified TEMPEST code has a basic ability to simulate both the mobility and immobility of sludges with yield strength. Further testing, modification, calibration, and verification of the sludge mobilization/immobilization model are planned using erosion data as they apply to waste tank sludges.
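The mobilization test described above reduces to a pointwise comparison between the jet-induced shear stress and the sludge yield strength. A tiny sketch (the stress field and yield strength are invented for illustration, not TEMPEST inputs):

```python
# Hedged sketch of a yield-strength mobilization criterion: sludge
# erodes only where local shear stress exceeds its yield strength.

import numpy as np

tau_yield = 12.0                       # sludge yield strength, Pa (assumed)
tau_jet = np.array([[3.0, 8.0, 15.0],  # jet-induced shear stress, Pa
                    [6.0, 14.0, 22.0]])

mobilized = tau_jet > tau_yield        # True where sludge can erode
print(mobilized)
```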
Scherzinger, William M.
2016-05-01
The numerical integration of constitutive models in computational solid mechanics codes allows for the solution of boundary value problems involving complex material behavior. Metal plasticity models, in particular, have been instrumental in the development of these codes. Here, most plasticity models implemented in computational codes use an isotropic von Mises yield surface. The von Mises, or J2, yield surface has a simple predictor-corrector algorithm - the radial return algorithm - to integrate the model.
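For readers unfamiliar with the radial return algorithm mentioned here, the following is a textbook-style sketch for small-strain J2 plasticity with linear isotropic hardening; it is not code from any production solver, and the material constants are illustrative:

```python
# Sketch of the elastic-predictor / radial-return corrector for a von
# Mises (J2) material with linear isotropic hardening. All constants
# are assumed illustration values.

import numpy as np

def radial_return(strain_inc, stress_n, eps_p_n,
                  E=200e9, nu=0.3, sigma_y0=250e6, H=1e9):
    """One integration step.

    strain_inc : (3,3) strain increment tensor
    stress_n   : (3,3) stress at start of step
    eps_p_n    : accumulated equivalent plastic strain
    """
    G = E / (2 * (1 + nu))          # shear modulus
    K = E / (3 * (1 - 2 * nu))      # bulk modulus
    I = np.eye(3)

    # Elastic trial stress
    de_vol = np.trace(strain_inc)
    de_dev = strain_inc - de_vol / 3 * I
    stress_tr = stress_n + K * de_vol * I + 2 * G * de_dev

    # Yield check on the trial deviator
    s_tr = stress_tr - np.trace(stress_tr) / 3 * I
    q_tr = np.sqrt(1.5 * np.tensordot(s_tr, s_tr))
    f_tr = q_tr - (sigma_y0 + H * eps_p_n)
    if f_tr <= 0.0:
        return stress_tr, eps_p_n   # elastic step

    # Plastic corrector: return radially to the expanded yield surface
    dgamma = f_tr / (3 * G + H)
    n = 1.5 * s_tr / q_tr           # flow direction
    stress = stress_tr - 2 * G * dgamma * n
    return stress, eps_p_n + dgamma

# Smoke test: one uniaxial strain increment from a stress-free state
eps = np.zeros((3, 3)); eps[0, 0] = 2e-3
sigma, ep = radial_return(eps, np.zeros((3, 3)), 0.0)
print(np.round(sigma / 1e6), ep)
```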
Annual Performance Evaluation of a Pair of Energy Efficient Houses (WC3 and WC4) in Oak Ridge, TN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biswas, Kaushik; Christian, Jeffrey E; Gehl, Anthony C
2012-04-01
Beginning in 2008, two pairs of energy-saver houses were built at Wolf Creek in Oak Ridge, TN. These houses were designed to maximize energy efficiency using new ultra-high-efficiency components emerging from ORNL's Cooperative Research and Development Agreement (CRADA) partners and others. The first two houses contained 3,713 square feet of conditioned area and were designated WC1 and WC2; the second pair consisted of 2,721 square feet of conditioned area with a crawlspace foundation, and they are called WC3 and WC4. This report is focused on the annual energy performance of WC3 and WC4, and how they compare against a previously benchmarked maximum-energy-efficiency house of a similar footprint. WC3 and WC4 are both about 55-60% more efficient than traditional new construction. Each house showcases a different envelope system: WC3 is built with advanced framing featuring cellulose insulation partially mixed with phase change materials (PCM), and WC4 has cladding composed of an exterior insulation and finish system (EIFS). The previously benchmarked house was one of three built at the Campbell Creek subdivision in Knoxville, TN. This house (CC3) was designed as a transformation of a builder house (CC1) with the most advanced energy-efficiency features, including solar electricity and hot water, which market conditions are likely to permit within the 2012-2015 period. The builder house itself was representative of a standard, IECC 2006 code-certified, all-electric house built by the builder to sell around 2005-2008.
Compact Buried Ducts in a Hot-Humid Climate House
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mallay, Dave
2016-01-07
"9A system of compact, buried ducts provides a high-performance and cost-effective solution for delivering conditioned air throughout the building. This report outlines research activities that are expected to facilitate adoption of compact buried duct systems by builders. The results of this research would be scalable to many new house designs in most climates and markets, leading to wider industry acceptance and building code and energy program approval. The primary research question with buried ducts is potential condensation at the outer jacket of the duct insulation in humid climates during the cooling season. Current best practices for buried ducts rely onmore » encapsulating the insulated ducts with closed-cell spray polyurethane foam insulation to control condensation and improve air sealing. The encapsulated buried duct concept has been analyzed and shown to be effective in hot-humid climates. The purpose of this project is to develop an alternative buried duct system that performs effectively as ducts in conditioned space - durable, energy efficient, and cost-effective - in a hot-humid climate (IECC warm-humid climate zone 3A) with three goals that distinguish this project: 1) Evaluation of design criteria for buried ducts that use common materials and do not rely on encapsulation using spray foam or disrupt traditional work sequences; 2) Establishing design criteria for compact ducts and incorporate those with the buried duct criteria to further reduce energy losses and control installed costs; 3) Developing HVAC design guidance for performing accurate heating and cooling load calculations for compact buried ducts.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baechler, Michael C.; Gilbride, Theresa L.; Hefty, Marye G.
2011-02-01
This best practices guide is the twelfth in a series of guides for builders produced by PNNL for the U.S. Department of Energy’s Building America program. This guide book is a resource to help builders design and construct homes that are among the most energy-efficient available, while addressing issues such as building durability, indoor air quality, and occupant health, safety, and comfort. With the measures described in this guide, builders in the cold and very cold climates can build homes that have whole-house energy savings of 40% over the Building America benchmark with no added overall costs for consumers. The best practices described in this document are based on the results of research and demonstration projects conducted by Building America’s research teams. Building America brings together the nation’s leading building scientists with over 300 production builders to develop, test, and apply innovative, energy-efficient construction practices. Building America builders have found they can build homes that meet these aggressive energy-efficiency goals at no net increased costs to the homeowners. Currently, Building America homes achieve energy savings 40% greater than the Building America benchmark home (a home built to mid-1990s building practices roughly equivalent to the 1993 Model Energy Code). The recommendations in this document meet or exceed the requirements of the 2009 IECC and 2009 IRC, and those requirements are highlighted in the text. This document will be distributed via the DOE Building America website: www.buildingamerica.gov.
NASA Technical Reports Server (NTRS)
Dolinar, S.
1988-01-01
Over the past six to eight years, an extensive research effort was conducted to investigate advanced coding techniques which promised to yield more coding gain than is available with current NASA standard codes. The delay in Galileo's launch due to the temporary suspension of the shuttle program provided the Galileo project with an opportunity to evaluate the possibility of including some version of the advanced codes as a mission enhancement option. A study was initiated last summer to determine if substantial coding gain was feasible for Galileo and, if so, to recommend a suitable experimental code for use as a switchable alternative to the current NASA-standard code. The Galileo experimental code study resulted in the selection of a code with constraint length 15 and rate 1/4. The code parameters were chosen to optimize performance within cost and risk constraints consistent with retrofitting the new code into the existing Galileo system design and launch schedule. The particular code was recommended after a very limited search among good codes with the chosen parameters. It will theoretically yield about 1.5 dB enhancement under idealizing assumptions relative to the current NASA-standard code at Galileo's desired bit error rates. This ideal predicted gain includes enough cushion to meet the project's target of at least 1 dB enhancement under real, non-ideal conditions.
Fission yields data generation and benchmarks of decay heat estimation of a nuclear fuel
NASA Astrophysics Data System (ADS)
Gil, Choong-Sup; Kim, Do Heon; Yoo, Jae Kwon; Lee, Jounghwa
2017-09-01
Fission yields data in the ENDF-6 format for 235U, 239Pu, and several other actinides, dependent on incident neutron energy, have been generated using the GEF code. In addition, fission yields data libraries for the ORIGEN-S and ORIGEN-ARP modules in the SCALE code have been generated with the new data. Decay heats calculated by ORIGEN-S using the new fission yields data have been compared with measured data for validation in this study. Fission yields ORIGEN-S libraries based on ENDF/B-VII.1, JEFF-3.1.1, and JENDL/FPY-2011 have also been generated, and decay heats were calculated using these libraries for analysis and comparison.
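At bottom, decay-heat estimation of the ORIGEN-S type is a summation over the fission-product inventory, P(t) = sum_i lambda_i * N_i(t) * Q_i. The three-nuclide inventory below is entirely illustrative, not an evaluated data set, and parent-daughter feeding is ignored:

```python
# Minimal sketch of a decay-heat summation over a fission-product
# inventory. All nuclide data are invented for illustration.

import math

# (initial atoms, decay constant 1/s, recoverable energy J/decay)
inventory = [
    (1.0e20, 2.9e-7, 8.0e-14),
    (5.0e19, 1.2e-6, 2.4e-13),
    (2.0e19, 2.1e-5, 1.1e-13),
]

def decay_heat(t_seconds: float) -> float:
    """Watts at time t, ignoring parent-daughter feeding for simplicity."""
    return sum(n0 * math.exp(-lam * t_seconds) * lam * q
               for n0, lam, q in inventory)

for t in (0.0, 3600.0, 86400.0):
    print(f"t = {t:8.0f} s : {decay_heat(t):.3e} W")
```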
Estimating the Depth of the Navy Recruiting Market
2016-09-01
We recommend that NRC make use of the Poisson regression model in order to determine high-yield ZIP codes for market depth. (Master's thesis by Emilie M. Monaghan, September 2016; Thesis Advisor: Lyn R. Whitaker; Second Reader: Jonathan K. Alt.)
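As a hedged sketch of the recommended approach (the data are simulated; the thesis's actual covariates and ZIP-level counts are not reproduced here), a Poisson regression can be fit and used to rank ZIP codes by predicted yield:

```python
# Sketch: fit a Poisson regression of enlistment counts per ZIP code
# on local covariates, then rank ZIPs by predicted yield. Simulated
# data; covariates are hypothetical stand-ins.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500                                          # hypothetical ZIP codes
X = sm.add_constant(rng.normal(size=(n, 2)))     # two demographic covariates
true_beta = np.array([0.5, 0.8, -0.3])
y = rng.poisson(np.exp(X @ true_beta))           # enlistment counts

fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
predicted = fit.predict(X)
top_zips = np.argsort(predicted)[::-1][:10]      # ten highest-yield "ZIPs"
print(fit.params, top_zips)
```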
Covariance Matrix Evaluations for Independent Mass Fission Yields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terranova, N., E-mail: nicholas.terranova@unibo.it; Serot, O.; Archier, P.
2015-01-15
Recent needs for more accurate fission product yields include covariance information to allow improved uncertainty estimations of the parameters used by design codes. The aim of this work is to investigate the possibility to generate more reliable and complete uncertainty information on independent mass fission yields. Mass yields covariances are estimated through a convolution between the multi-Gaussian empirical model based on Brosa's fission modes, which describes the pre-neutron mass yields, and the average prompt neutron multiplicity curve. The covariance generation task has been approached using the Bayesian generalized least squares method through the CONRAD code. Preliminary results on the mass yields variance-covariance matrix will be presented and discussed from physical grounds in the case of the 235U(nth, f) and 239Pu(nth, f) reactions.
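One simple way to realize the idea described above is Monte Carlo: draw the parameters of a multi-Gaussian (Brosa-mode-like) mass distribution from assumed uncertainties and take the sample covariance of the resulting yield curves. The mode parameters below are illustrative, not evaluated values, and this sketch is not the Bayesian procedure implemented in CONRAD:

```python
# Hedged sketch: Monte Carlo covariance of a two-Gaussian-mode mass
# yield curve (symmetric reflection about A = 236 for 235U + n).
# Mode parameters and their spreads are invented for illustration.

import numpy as np

A = np.arange(70, 171)                # fragment mass numbers
rng = np.random.default_rng(42)

def mass_yield(mu, sigma):
    """Two-humped yield curve from one Gaussian mode and its mirror."""
    y = (np.exp(-0.5 * ((A - mu) / sigma) ** 2)
         + np.exp(-0.5 * ((A - (236 - mu)) / sigma) ** 2))
    return 200.0 * y / y.sum()        # normalize to 200% total yield

samples = np.array([
    mass_yield(mu=rng.normal(96.0, 0.5), sigma=rng.normal(5.5, 0.2))
    for _ in range(2000)
])
cov = np.cov(samples, rowvar=False)   # (101 x 101) covariance matrix
print(cov.shape, np.sqrt(np.diag(cov))[:5])
```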
DD3MAT - a code for yield criteria anisotropy parameters identification.
NASA Astrophysics Data System (ADS)
Barros, P. D.; Carvalho, P. D.; Alves, J. L.; Oliveira, M. C.; Menezes, L. F.
2016-08-01
This work presents the main strategies and algorithms adopted in the DD3MAT in-house code, specifically developed for identifying the anisotropy parameters. The algorithm adopted is based on the minimization of an error function, using a downhill simplex method. The set of experimental values can consider yield stresses and r-values obtained from in-plane tension, for different angles to the rolling direction (RD), the yield stress and r-value obtained for the biaxial stress state, and yield stresses from shear tests performed also for different angles to the RD. All these values can be defined for a specific value of plastic work. Moreover, it can also include the yield stresses obtained from in-plane compression tests. The anisotropy parameters are identified for an AA2090-T3 aluminium alloy, highlighting the importance of user intervention to improve the numerical fit.
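The identification loop described above pairs an error function with a downhill simplex search. A minimal sketch using SciPy's Nelder-Mead implementation follows; the directional yield-stress "model" is a stand-in placeholder, not the actual criterion implemented in DD3MAT, and the measured values are invented:

```python
# Hedged sketch: least-squares fit of anisotropy parameters to
# directional yield stresses via downhill simplex (Nelder-Mead).

import numpy as np
from scipy.optimize import minimize

angles = np.radians([0, 15, 30, 45, 60, 75, 90])  # tensile directions to RD
measured = np.array([1.00, 0.98, 0.95, 0.93, 0.95, 0.99, 1.02])  # assumed

def model_yield(params, theta):
    """Hypothetical 3-parameter directional yield-stress model."""
    a, b, c = params
    return a + b * np.cos(2 * theta) + c * np.cos(4 * theta)

def error(params):
    """Sum of squared residuals between model and 'experiment'."""
    return np.sum((model_yield(params, angles) - measured) ** 2)

res = minimize(error, x0=[1.0, 0.0, 0.0], method="Nelder-Mead")
print(res.x, res.fun)
```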
Perceptually-Based Adaptive JPEG Coding
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Rosenholtz, Ruth; Null, Cynthia H. (Technical Monitor)
1996-01-01
An extension to the JPEG standard (ISO/IEC DIS 10918-3) allows spatial adaptive coding of still images. As with baseline JPEG coding, one quantization matrix applies to an entire image channel, but in addition the user may specify a multiplier for each 8 x 8 block, which multiplies the quantization matrix, yielding the new matrix for the block. MPEG 1 and 2 use much the same scheme, except there the multiplier changes only on macroblock boundaries. We propose a method for perceptual optimization of the set of multipliers. We compute the perceptual error for each block based upon DCT quantization error adjusted according to contrast sensitivity, light adaptation, and contrast masking, and pick the set of multipliers which yield maximally flat perceptual error over the blocks of the image. We investigate the bitrate savings due to this adaptive coding scheme and the relative importance of the different sorts of masking on adaptive coding.
A Radiation Chemistry Code Based on the Green's Function of the Diffusion Equation
NASA Technical Reports Server (NTRS)
Plante, Ianik; Wu, Honglu
2014-01-01
Stochastic radiation track structure codes are of great interest for space radiation studies and hadron therapy in medicine. These codes are used for many purposes, notably for microdosimetry and DNA damage studies. In the last two decades, they were also used with the Independent Reaction Times (IRT) method in the simulation of chemical reactions, to calculate the yield of various radiolytic species produced during the radiolysis of water and in chemical dosimeters. Recently, we have developed a Green's function based code to simulate reversible chemical reactions with an intermediate state, which yielded results in excellent agreement with those obtained by using the IRT method. This code was also used to simulate the interaction of particles with membrane receptors. We are in the process of including this program for use with the Monte-Carlo track structure code Relativistic Ion Tracks (RITRACKS). This recent addition should greatly expand the capabilities of RITRACKS, notably to simulate DNA damage by both the direct and indirect effect.
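For orientation, the basic building block such codes start from is the free-space Green's function of the diffusion equation: the probability density for a particle with diffusion coefficient D to be displaced by r after time t. A minimal sketch (the diffusion coefficient is an illustrative value typical of small radiolytic species in water, not taken from the paper):

```python
# Sketch of the 3-D free-space Green's function of the diffusion
# equation: G(r, t) = (4 pi D t)^(-3/2) * exp(-r^2 / (4 D t)).

import numpy as np

def diffusion_green(r, t, D):
    """Probability density (1/m^3) of displacement r at time t."""
    return np.exp(-r**2 / (4 * D * t)) / (4 * np.pi * D * t) ** 1.5

r = np.linspace(0.0, 5e-9, 6)       # displacements in meters
D = 2.3e-9                          # m^2/s, assumed illustrative value
print(diffusion_green(r, t=1e-9, D=D))
```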
Moisture Durability Assessment of Selected Well-insulated Wall Assemblies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pallin, Simon B.; Boudreaux, Philip R.; Kehrer, Manfred
2015-12-01
This report presents the results from studying the hygrothermal performance of two well-insulated wall assemblies, both complying with and exceeding international building codes (IECC 2015, IRC 2015). The hygrothermal performance of walls is affected by a large number of influential parameters (e.g., outdoor and indoor climates, workmanship, material properties). This study was based on a probabilistic risk assessment in which a number of these influential parameters were simulated with their natural variability. The purpose of this approach was to generate simulation results based on laboratory chamber measurements that represent a variety of performances and thus better mimic realistic conditions. In total, laboratory measurements and 6,000 simulations were completed for five different US climate zones. A mold growth indicator (MGI) was used to estimate the risk of mold, which can potentially cause moisture durability problems in the selected wall assemblies. Analyzing the possible impact of mold on the indoor climate was not part of this study. The following conclusions can be reached from analyzing the simulation results. In a hot-humid climate, a higher R-value increases the importance of airtightness because interior wall materials are at lower temperatures. In a cold climate, indoor humidity levels increase with increased airtightness. Air leakage must be considered in a hygrothermal risk assessment, since air efficiently brings moisture into buildings from either the interior or exterior environment. The sensitivity analysis of this study identifies mitigation strategies. Again, it is important to remark that the MGI is an indicator of mold, not an indicator of indoor air quality, and that mold is the most conservative indicator for moisture durability issues.
Categorical Variables in Multiple Regression: Some Cautions.
ERIC Educational Resources Information Center
O'Grady, Kevin E.; Medoff, Deborah R.
1988-01-01
Limitations of dummy coding and nonsense coding as methods of coding categorical variables for use as predictors in multiple regression analysis are discussed. The combination of these approaches often yields estimates and tests of significance that are not intended by researchers for inclusion in their models. (SLD)
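A minimal illustration of the dummy-coding construction being cautioned about, assuming pandas is available:

import pandas as pd

# Dummy coding: a k-level categorical predictor becomes k-1 indicator
# columns, with one level serving as the reference category.
df = pd.DataFrame({"region": ["north", "south", "west", "south", "north"],
                   "score": [4.1, 3.8, 5.0, 3.5, 4.4]})
dummies = pd.get_dummies(df["region"], prefix="region", drop_first=True)
X = pd.concat([df["score"], dummies], axis=1)
print(X)
# Nonsense (arbitrary numeric) coding would instead map levels to arbitrary
# numbers, which silently changes the hypotheses actually being tested.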
Subscale Development of Advanced ABM Graphite/Epoxy Composite Structure
1978-01-01
laminate analysis computer code (Reference 5). The output of this code yields lamina stresses and strains, equivalent elastic and shear moduli for the...was not accounted for. Therefore the net effect was that the analysis tended to yield conservative results. For design purposes, this conservative...extracted using a Soxhlet extraction apparatus, recycling the solvent at least 4 to 10 times every hour for a minimum of 6 hours. (4) All samples are
Effective Use of Weld Metal Yield Strength for HY-Steels
1983-01-01
Boiler and Pressure Vessel Code The ASME Boiler and Pressure Vessel Code (B&PV Code) is divided...As noted earlier, the ASME Boiler and Pressure Vessel Code makes only one exception to its overall philosophy of matching weld-metal strength and...material where toughness is of primary importance. REFERENCES American Society of Mechanical Engineers, Boiler and Pressure Vessel
DOE Office of Scientific and Technical Information (OSTI.GOV)
none,
The Passive House Challenge Home located in River Forest, Illinois, is a 5-bedroom, 4.5-bath, 3,600-ft2 two-story home (plus basement) that costs about $237 less per month to operate than a similar-sized home built to the 2009 IECC. Without any solar photovoltaic panels installed, it achieved a remarkably low Home Energy Rating System (HERS) score of 27. An ENERGY STAR-rated dishwasher, clothes washer, and refrigerator; an induction cooktop; a condensing clothes dryer; and LED lighting are among the energy-saving devices inside the home. All plumbing fixtures comply with EPA WaterSense criteria. The home was awarded a 2013 Housing Innovation Award in the "systems builder" category.
26 CFR 1.804-4 - Investment yield of a life insurance company.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 26 Internal Revenue 8 2010-04-01 2010-04-01 false Investment yield of a life insurance company. 1... insurance company for purposes of part I, subchapter L, chapter 1 of the Code. Investment yield means gross... deduction of investment expenses by a life insurance company in determining investment yield. “Investment...
26 CFR 1.804-4 - Investment yield of a life insurance company.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 26 Internal Revenue 8 2013-04-01 2013-04-01 false Investment yield of a life insurance company. 1... insurance company. (a) Investment yield defined. Section 804(c) defines the term “investment yield” of a life insurance company for purposes of part I, subchapter L, chapter 1 of the Code. Investment yield...
26 CFR 1.804-4 - Investment yield of a life insurance company.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 26 Internal Revenue 8 2014-04-01 2014-04-01 false Investment yield of a life insurance company. 1... insurance company. (a) Investment yield defined. Section 804(c) defines the term “investment yield” of a life insurance company for purposes of part I, subchapter L, chapter 1 of the Code. Investment yield...
26 CFR 1.804-4 - Investment yield of a life insurance company.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 26 Internal Revenue 8 2012-04-01 2012-04-01 false Investment yield of a life insurance company. 1... insurance company. (a) Investment yield defined. Section 804(c) defines the term “investment yield” of a life insurance company for purposes of part I, subchapter L, chapter 1 of the Code. Investment yield...
26 CFR 1.804-4 - Investment yield of a life insurance company.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 26 Internal Revenue 8 2011-04-01 2011-04-01 false Investment yield of a life insurance company. 1...) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Investment Income § 1.804-4 Investment yield of a life... life insurance company for purposes of part I, subchapter L, chapter 1 of the Code. Investment yield...
The POPOP4 library and codes for preparing secondary gamma-ray production cross sections
NASA Technical Reports Server (NTRS)
Ford, W. E., III
1972-01-01
The POPOP4 code for converting secondary gamma ray yield data to multigroup secondary gamma ray production cross sections and the POPOP4 library of secondary gamma ray yield data are described. Recent results of the testing of uranium and iron data sets from the POPOP4 library are given. The data sets were tested by comparing calculated secondary gamma ray pulse-height spectra with spectra measured at the ORNL TSR-II reactor.
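Conceptually, the conversion POPOP4 performs can be sketched as folding group-wise photon yields into a production matrix; the numbers below are made up.

import numpy as np

# Sketch: combine a neutron group cross section with per-group gamma yields
# (photons per reaction) to obtain a multigroup secondary-gamma production
# matrix. All numbers are illustrative.
sigma_n = np.array([2.1, 1.4, 0.9])          # barns, per neutron group
yields = np.array([[0.50, 0.30, 0.20],       # photons/reaction: rows are
                   [0.40, 0.35, 0.25],       # neutron groups, columns are
                   [0.30, 0.30, 0.40]])      # gamma energy groups

# production[g_n, g_gamma] = sigma_n[g_n] * yields[g_n, g_gamma]
production = sigma_n[:, None] * yields
print(production)       # barns * photons/reaction, per (n-group, g-group)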
Development of 1D Particle-in-Cell Code and Simulation of Plasma-Wall Interactions
NASA Astrophysics Data System (ADS)
Rose, Laura P.
This thesis discusses the development of a 1D particle-in-cell (PIC) code and the analysis of plasma-wall interactions. The 1D code (Plasma and Wall Simulation -- PAWS) is a kinetic simulation of plasma in which both electrons and ions are treated as particles. The goal of this thesis is to study near-wall plasma interaction to better understand the mechanisms that occur in this region. The main focus of this investigation is the effect that secondary electrons have on the sheath profile. The 1D code is modeled using the PIC method: treating both electrons and ions as macroparticles, the field is solved on each node and weighted to each macroparticle. A pre-ionized plasma was loaded into the domain, and particle velocities were sampled from a Maxwellian distribution. An important part of this code is the boundary condition at the wall: if a particle hits the wall, a secondary electron may be produced based on the incident energy. To study the sheath profile, simulations were run for various cases. Varying background neutral gas densities were run with the 2D code and compared to experimental values. Different wall materials were simulated to show their effect on secondary electron emission (SEE). In addition, different SEE yields were run, including one study with very high SEE yields to show the presence of a space-charge-limited sheath. Wall roughness was also studied with the 1D code using random angles of incidence. In addition to the 1D code, an external 2D code was used to investigate wall roughness without secondary electrons. The roughness profiles were created based on investigations of wall roughness inside Hall thrusters, drawing on studies of lifetime erosion of the inner and outer walls of these devices. The 2D code, Starfish[33], is a general 2D axisymmetric/Cartesian code for modeling a wide range of plasma and rarefied gas problems. These results show that a higher SEE yield produces a smaller sheath profile and that wall roughness produces a lower SEE yield. Modeling near-wall interactions is not a simple or perfected task; due to the lack of a second dimension and a sputtering model, this study cannot show the positive effects wall roughness could have on Hall thruster performance, since roughness arises from the negative effect of sputtering.
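A minimal 1D electrostatic PIC step in the spirit of the method described (linear weighting, grounded-wall field solve, particle push, absorbing walls) is sketched below; it is not the PAWS code and omits SEE.

import numpy as np

# Minimal 1D electrostatic PIC step (leapfrog-style): linear weighting of
# particle charge to the grid, a finite-difference Poisson solve with
# grounded walls, a field gather, and a push with absorbing walls.
nx, L, dt, q_m = 64, 1.0, 1e-3, -1.0          # grid, length, step, q/m
dx = L / (nx - 1)
x = np.random.default_rng(2).uniform(0, L, 5000)      # electron positions
v = np.random.default_rng(3).normal(0, 0.1, x.size)   # Maxwellian velocities

def step(x, v):
    # 1) weight particles to grid (linear/CIC weighting)
    i = np.minimum((x / dx).astype(int), nx - 2)
    w = x / dx - i
    rho = np.zeros(nx)
    np.add.at(rho, i, 1 - w)
    np.add.at(rho, i + 1, w)
    rho = rho / dx - np.mean(rho / dx)        # neutralizing ion background
    # 2) solve phi'' = -rho with phi = 0 at both walls
    A = (np.diag(np.full(nx - 2, -2.0)) + np.diag(np.ones(nx - 3), 1)
         + np.diag(np.ones(nx - 3), -1)) / dx**2
    phi = np.zeros(nx)
    phi[1:-1] = np.linalg.solve(A, -rho[1:-1])
    E = -np.gradient(phi, dx)
    # 3) gather field to particles and push; absorb particles at the walls
    Ep = (1 - w) * E[i] + w * E[i + 1]
    v = v + q_m * Ep * dt
    x = x + v * dt
    keep = (x > 0) & (x < L)                  # wall hit: absorbed (no SEE here)
    return x[keep], v[keep]

for _ in range(10):
    x, v = step(x, v)
print("particles remaining:", x.size)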
Entropy-Based Bounds On Redundancies Of Huffman Codes
NASA Technical Reports Server (NTRS)
Smyth, Padhraic J.
1992-01-01
Report presents extension of theory of redundancy of binary prefix code of Huffman type which includes derivation of variety of bounds expressed in terms of entropy of source and size of alphabet. Recent developments yielded bounds on redundancy of Huffman code in terms of probabilities of various components in source alphabet. In practice, redundancies of optimal prefix codes are often closer to 0 than to 1.
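The quantity being bounded can be computed directly; the sketch below builds a Huffman code and reports its redundancy (expected length minus source entropy).

import heapq
from math import log2

def huffman_lengths(probs):
    # heap items: (weight, unique id, list of (symbol, depth))
    heap = [(p, i, [(i, 0)]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    uid = len(probs)
    while len(heap) > 1:
        p1, _, l1 = heapq.heappop(heap)
        p2, _, l2 = heapq.heappop(heap)
        merged = [(s, d + 1) for s, d in l1 + l2]   # merging deepens leaves
        heapq.heappush(heap, (p1 + p2, uid, merged))
        uid += 1
    return dict(heap[0][2])

probs = [0.5, 0.25, 0.15, 0.1]
lengths = huffman_lengths(probs)
avg_len = sum(p * lengths[i] for i, p in enumerate(probs))
entropy = -sum(p * log2(p) for p in probs)
print("redundancy =", avg_len - entropy)   # lies in [0, 1) for Huffman codes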
NASA Astrophysics Data System (ADS)
Fourtakas, G.; Rogers, B. D.
2016-06-01
A two-phase numerical model using Smoothed Particle Hydrodynamics (SPH) is applied to two-phase liquid-sediment flows. The absence of a mesh in SPH is ideal for interfacial and highly non-linear flows with changing fragmentation of the interface, mixing and resuspension. The rheology of sediment under rapid flows passes through several states which are only partially described by previous research in SPH. This paper attempts to bridge the gap between geotechnics, non-Newtonian and Newtonian flows by proposing a model that combines the yielding, shear and suspension layers which are needed to accurately predict the global erosion phenomena from a hydrodynamics perspective. The numerical SPH scheme is based on the explicit treatment of both phases using Newtonian and non-Newtonian Bingham-type Herschel-Bulkley-Papanastasiou constitutive models. This is supplemented by the Drucker-Prager yield criterion to predict the onset of yielding of the sediment surface and a concentration suspension model. The multi-phase model has been compared with experimental and 2-D reference numerical models for scour following a dry-bed dam break, yielding satisfactory results and improvements over well-known SPH multi-phase models. With 3-D simulations requiring a large number of particles, the code is accelerated with a graphics processing unit (GPU) in the open-source DualSPHysics code. The implementation and optimisation of the code achieved a speed-up of 58x over an optimised single-thread serial code. A 3-D dam break over a non-cohesive erodible bed simulation with over 4 million particles yields close agreement with experimental scour and water surface profiles.
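The constitutive combination described above can be sketched as follows; parameter values are illustrative, and the Drucker-Prager form used for the yield stress is a simplified assumption.

import numpy as np

# Sketch of the constitutive pieces: a Herschel-Bulkley-Papanastasiou (HBP)
# effective viscosity whose yield stress is supplied by a Drucker-Prager-style
# criterion from the local pressure. Parameters are illustrative.
K, n, m = 1.0, 0.8, 100.0            # consistency, power index, HBP exponent
phi, cohesion = np.radians(30), 0.0  # friction angle, cohesion (assumed)

def yield_stress(p):
    """Drucker-Prager-style yield stress from pressure (compression > 0)."""
    return cohesion + max(p, 0.0) * np.tan(phi)

def effective_viscosity(shear_rate, p):
    """HBP: eta = K*g^(n-1) + tau_y*(1 - exp(-m*g))/g, regularized at g -> 0."""
    g = max(shear_rate, 1e-9)
    return K * g ** (n - 1) + yield_stress(p) * (1 - np.exp(-m * g)) / g

for g in [1e-4, 1e-2, 1.0, 10.0]:
    print(g, effective_viscosity(g, p=2.0))  # stiff below yield, fluid above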
Shaped Charge Jet Penetration of Discontinuous Media
1977-07-01
operational at the Ballistic Research Laboratory. These codes are OIL, TOIL, DORF, and HELP, which are Eulerian formulated, and HEMP, which...ELastic Plastic) is a FORTRAN code developed by Systems, Science and Software, Inc. It evolved from three major hydrodynamic codes previously developed...introduced into the treatment of moving surfaces. The HELP code, using the von Mises yield condition, treats materials as being elastic-plastic. The input for
A Literature Review of Sealed and Insulated Attics—Thermal, Moisture and Energy Performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Less, Brennan; Walker, Iain; Levinson, Ronnen
In this literature review and analysis, we focus on the thermal, moisture and energy performance of sealed and insulated attics in California climates. Thermal. Sealed and insulated attics are expected to maintain attic air temperatures within +/- 10°F of those in the house. Thermal stress on the assembly, namely high shingle and sheathing temperatures, is of minimal concern. In the past, many sealed and insulated attics were constructed with insufficient insulation levels (~R-20) and with too much air leakage to outside, leading to poor thermal performance. To ensure high performance, sealed and insulated attics in new California homes should be insulated at levels at least equivalent to the flat-ceiling requirements in the code, and attic envelopes and ducts should be airtight. We expect that duct systems in well-constructed sealed and insulated attics should have less than 2% HVAC system leakage to outside. Moisture. Moisture risk in sealed and insulated California attics will increase in colder climate regions and with more humid outside air in marine zones. Risk is considered low in the hot-dry, highly populated regions of the state, where most new home construction occurs. Indoor humidity levels should be controlled by following code requirements for continuous whole-house ventilation and local exhaust. Pending development of further guidance, we recommend that the air-impermeable insulation requirements of the International Residential Code (2012) be used, as they vary with IECC climate region and roof finish. Energy. Sealed and insulated attics provide energy benefits only if HVAC equipment is located in the attic volume, and the benefits depend strongly on the insulation and airtightness of the attic and ducts. Existing homes with leaky, uninsulated ducts in the attic should see major savings. When compared with modern, airtight duct systems in a vented attic, sealed and insulated attics in California may still provide substantial benefit. Energy performance is expected to be roughly equivalent between sealed and insulated attics and prescriptive advanced roof/attic options in Title 24 2016. System performance can also be expected to improve, in areas such as pull-down time and performance at peak load. We expect benefits to be reduced for all advanced roof/attic approaches, relative to a traditional vented attic, as duct system leakage is reduced close to 0. The most recent assessments, comparing advanced roof/attic assemblies to code-compliant vented attics, suggest average 13% TDV energy savings, with substantial variation by climate zone (more savings in more extreme climates). Similar 6-11% reductions in seasonally adjusted HVAC duct thermal losses have been measured in a small subset of such California homes using the ducts-in-conditioned-space approach. Given the limited nature of energy and moisture monitoring in sealed and insulated attic homes, there is a crucial need for long-term data and advanced modeling of these approaches in the California new and existing home contexts.
Bremsstrahlung Dose Yield for High-Intensity Short-Pulse Laser–Solid Experiments
Liang, Taiee; Bauer, Johannes M.; Liu, James C.; ...
2016-12-01
A bremsstrahlung source term has been developed by the Radiation Protection (RP) group at SLAC National Accelerator Laboratory for high-intensity short-pulse laser–solid experiments between 10{sup 17} and 10{sup 22} W cm{sup -2}. This source term couples the particle-in-cell plasma code EPOCH and the radiation transport code FLUKA to estimate the bremsstrahlung dose yield from laser–solid interactions. EPOCH characterizes the energy distribution, angular distribution, and laser-to-electron conversion efficiency of the hot electrons from laser–solid interactions, and FLUKA utilizes this hot electron source term to calculate a bremsstrahlung dose yield (mSv per J of laser energy on target). The goal of this paper is to provide RP guidelines and hazard analysis for high-intensity laser facilities. Finally, a comparison of the calculated bremsstrahlung dose yields with radiation measurement data is also made.
Multiple Trellis Coded Modulation (MTCM): An MSAT-X report
NASA Technical Reports Server (NTRS)
Divsalar, D.; Simon, M. K.
1986-01-01
Conventional trellis coding outputs one channel symbol per trellis branch. The notion of multiple trellis coding is introduced, wherein more than one channel symbol per trellis branch is transmitted. It is shown that the combination of multiple trellis coding with M-ary modulation yields, with a symmetric signal set, a performance gain comparable to that previously achieved only with signal constellation asymmetry. The advantage of multiple trellis coding over the conventional trellis coded asymmetric modulation technique is that the potential for code catastrophe associated with the latter has been eliminated with no additional cost in complexity (as measured by the number of states in the trellis diagram).
Separable concatenated codes with iterative map decoding for Rician fading channels
NASA Technical Reports Server (NTRS)
Lodge, J. H.; Young, R. J.
1993-01-01
Very efficient signalling in radio channels requires the design of very powerful codes having special structure suitable for practical decoding schemes. In this paper, powerful codes are obtained by combining comparatively simple convolutional codes to form multi-tiered 'separable' convolutional codes. The decoding of these codes, using separable symbol-by-symbol maximum a posteriori (MAP) 'filters', is described. It is known that this approach yields impressive results in non-fading additive white Gaussian noise channels. Interleaving is an inherent part of the code construction, and consequently, these codes are well suited for fading channel communications. Here, simulation results for communications over Rician fading channels are presented to support this claim.
A combinatorial model for dentate gyrus sparse coding
Severa, William; Parekh, Ojas; James, Conrad D.; ...
2016-12-29
The dentate gyrus forms a critical link between the entorhinal cortex and CA3 by providing a sparse version of the signal. Concurrent with this increase in sparsity, a widely accepted theory suggests the dentate gyrus performs pattern separation: similar inputs yield decorrelated outputs. Although an active region of study and theory, few logically rigorous arguments detail the dentate gyrus's (DG) coding. We suggest a theoretically tractable, combinatorial model for this action. The model provides formal methods for a highly redundant, arbitrarily sparse, and decorrelated output signal. To explore the value of this model framework, we assess how suitable it is for two notable aspects of DG coding: how it can handle the highly structured grid cell representation in the input entorhinal cortex region, and the presence of adult neurogenesis, which has been proposed to produce a heterogeneous code in the DG. We find tailoring the model to grid cell input yields expansion parameters consistent with the literature. In addition, the heterogeneous coding reflects activity gradation observed experimentally. Lastly, we connect this approach with more conventional binary threshold neural circuit models via a formal embedding.
NASA Astrophysics Data System (ADS)
Rohling, Heide; Sihver, Lembit; Priegnitz, Marlen; Enghardt, Wolfgang; Fiedler, Fine
2013-09-01
For quality assurance in particle therapy, a non-invasive, in vivo range verification is highly desired. Particle therapy positron-emission-tomography (PT-PET) is the only clinically proven method up to now for this purpose. It makes use of the β+-activity produced during the irradiation by the nuclear fragmentation processes between the therapeutic beam and the irradiated tissue. Since a direct comparison of β+-activity and dose is not feasible, a simulation of the expected β+-activity distribution is required. For this reason it is essential to have a quantitatively reliable code for the simulation of the yields of the β+-emitting nuclei at every position of the beam path. In this paper results of the three-dimensional Monte-Carlo simulation codes PHITS, GEANT4, and the one-dimensional deterministic simulation code HIBRAC are compared to measurements of the yields of the most abundant β+-emitting nuclei for carbon, lithium, helium, and proton beams. In general, PHITS underestimates the yields of positron-emitters. With GEANT4 the overall most accurate results are obtained. HIBRAC and GEANT4 provide comparable results for carbon and proton beams. HIBRAC is considered as a good candidate for the implementation to clinical routine PT-PET.
Resistor-logic demultiplexers for nanoelectronics based on constant-weight codes.
Kuekes, Philip J; Robinett, Warren; Roth, Ron M; Seroussi, Gadiel; Snider, Gregory S; Stanley Williams, R
2006-02-28
The voltage margin of a resistor-logic demultiplexer can be improved significantly by basing its connection pattern on a constant-weight code. Each distinct code determines a unique demultiplexer, and therefore a large family of circuits is defined. We consider using these demultiplexers for building nanoscale crossbar memories, and determine the voltage margin of the memory system based on a particular code. We determine a purely code-theoretic criterion for selecting codes that will yield memories with large voltage margins, which is to minimize the ratio of the maximum to the minimum Hamming distance between distinct codewords. For the specific example of a 64 × 64 crossbar, we discuss what codes provide optimal performance for a memory.
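The code-theoretic selection criterion is easy to state in code: score candidate constant-weight codes by the ratio of maximum to minimum pairwise Hamming distance and prefer the smallest ratio. A toy scoring sketch, with invented candidate codes:

from itertools import combinations

# Score candidate constant-weight binary codes by
# (max Hamming distance) / (min Hamming distance); smaller is better.
def hamming(a, b):
    return bin(a ^ b).count("1")

def distance_ratio(code):
    dists = [hamming(a, b) for a, b in combinations(code, 2)]
    return max(dists) / min(dists)

# weight-2 codewords of length 4; the second code contains a complementary
# pair, which inflates its maximum distance and hence its ratio
candidates = [
    [0b0011, 0b0101, 0b0110],   # ratio 1.0: all pairwise distances equal
    [0b0011, 0b1100, 0b0101],   # ratio 2.0: contains a complementary pair
]
for code in candidates:
    print([f"{c:04b}" for c in code], "ratio =", distance_ratio(code))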
Sputtering of rough surfaces: a 3D simulation study
NASA Astrophysics Data System (ADS)
von Toussaint, U.; Mutzke, A.; Manhard, A.
2017-12-01
The lifetime of plasma-facing components is critical for future magnetic confinement fusion power plants. A key process limiting the lifetime of the first wall is sputtering by energetic ions. To provide consistent modeling of the sputtering process for realistic geometries, the SDTrimSP code has been extended to enable the processing of analytic as well as measured arbitrary 3D surface morphologies. The code has been applied to study how the impact angle of ions on rough surfaces affects the sputter yield, as well as how the aspect ratio of surface structures influences the 2D distribution of the local sputtering yields. Depending on the surface morphology, reductions of the effective sputter yield to less than 25% have been observed in the simulation results.
NASA Technical Reports Server (NTRS)
Lawson, Gary; Sosonkina, Masha; Baurle, Robert; Hammond, Dana
2017-01-01
In many fields, real-world applications for High Performance Computing have already been developed. For these applications to stay up-to-date, new parallel strategies must be explored to yield the best performance; however, restructuring or modifying a real-world application may be daunting depending on the size of the code. In this case, a mini-app may be employed to quickly explore such options without modifying the entire code. In this work, several mini-apps have been created to enhance the performance of a real-world application, namely the VULCAN code for complex flow analysis developed at the NASA Langley Research Center. These mini-apps explore hybrid parallel programming paradigms with Message Passing Interface (MPI) for distributed memory access and either Shared MPI (SMPI) or OpenMP for shared memory accesses. Performance testing shows that MPI+SMPI yields the best execution performance, while requiring the largest number of code changes. A maximum speedup of 23 was measured for MPI+SMPI, but only 11 was measured for MPI+OpenMP.
Trinker, Horst
2011-10-28
We study the distribution of triples of codewords of codes and ordered codes. Schrijver [A. Schrijver, New code upper bounds from the Terwilliger algebra and semidefinite programming, IEEE Trans. Inform. Theory 51 (8) (2005) 2859-2866] used the triple distribution of a code to establish a bound on the number of codewords based on semidefinite programming. In the first part of this work, we generalize this approach for ordered codes. In the second part, we consider linear codes and linear ordered codes and present a MacWilliams-type identity for the triple distribution of their dual code. Based on the non-negativity of this linear transform, we establish a linear programming bound and conclude with a table of parameters for which this bound yields better results than the standard linear programming bound.
Air Leakage of US Homes: Regression Analysis and Improvements from Retrofit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chan, Wanyu R.; Joh, Jeffrey; Sherman, Max H.
2012-08-01
LBNL Residential Diagnostics Database (ResDB) contains blower door measurements and other diagnostic test results of homes in the United States. Of these, approximately 134,000 single-family detached homes have sufficient information for the analysis of air leakage in relation to a number of housing characteristics. We performed regression analysis to consider the correlation between normalized leakage and a number of explanatory variables: IECC climate zone, floor area, height, year built, foundation type, duct location, and other characteristics. The regression model explains 68% of the observed variability in normalized leakage. ResDB also contains the before and after retrofit air leakage measurements of approximately 23,000 homes that participated in weatherization assistance programs (WAPs) or residential energy efficiency programs. The two types of programs achieve rather similar reductions in normalized leakage: 30% for WAPs and 20% for other energy programs.
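The flavor of such a regression can be sketched with ordinary least squares on synthetic stand-in variables (these are not ResDB data or coefficients):

import numpy as np

# OLS of normalized leakage (log scale) on housing characteristics.
# Variables and coefficients are synthetic stand-ins, not ResDB results.
rng = np.random.default_rng(4)
n = 1000
floor_area = rng.uniform(80, 400, n)          # m^2
year_built = rng.integers(1950, 2012, n)
stories = rng.integers(1, 3, n)
log_nl = (2.0 - 0.012 * (year_built - 1950) + 0.001 * floor_area
          + 0.15 * stories + rng.normal(0, 0.4, n))

X = np.column_stack([np.ones(n), floor_area, year_built, stories])
beta, *_ = np.linalg.lstsq(X, log_nl, rcond=None)
resid = log_nl - X @ beta
r2 = 1 - resid.var() / log_nl.var()           # share of variance explained
print("coefficients:", np.round(beta, 4), " R^2 =", round(r2, 3))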
A direct comparison of exoEarth yields for starshades and coronagraphs
NASA Astrophysics Data System (ADS)
Stark, Christopher C.; Cady, Eric J.; Clampin, Mark; Domagal-Goldman, Shawn; Lisman, Doug; Mandell, Avi M.; McElwain, Michael W.; Roberge, Aki; Robinson, Tyler D.; Savransky, Dmitry; Shaklan, Stuart B.; Stapelfeldt, Karl R.
2016-07-01
The scale and design of a future mission capable of directly imaging extrasolar planets will be influenced by the detectable number (yield) of potentially Earth-like planets. Currently, coronagraphs and starshades are being considered as instruments for such a mission. We will use a novel code to estimate and compare the yields for starshade- and coronagraph-based missions. We will show yield scaling relationships for each instrument and discuss the impact of astrophysical and instrumental noise on yields. Although the absolute yields are dependent on several yet-unknown parameters, we will present several limiting cases allowing us to bound the yield comparison.
Lower Limits on Aperture Size for an ExoEarth Detecting Coronagraphic Mission
NASA Technical Reports Server (NTRS)
Stark, Christopher C.; Roberge, Aki; Mandell, Avi; Clampin, Mark; Domagal-Goldman, Shawn D.; McElwain, Michael W.; Stapelfeldt, Karl R.
2015-01-01
The yield of Earth-like planets will likely be a primary science metric for future space-based missions that will drive telescope aperture size. Maximizing the exoEarth candidate yield is therefore critical to minimizing the required aperture. Here we describe a method for exoEarth candidate yield maximization that simultaneously optimizes, for the first time, the targets chosen for observation, the number of visits to each target, the delay time between visits, and the exposure time of every observation. This code calculates both the detection time and multiwavelength spectral characterization time required for planets. We also refine the astrophysical assumptions used as inputs to these calculations, relying on published estimates of planetary occurrence rates as well as theoretical and observational constraints on terrestrial planet sizes and classical habitable zones. Given these astrophysical assumptions, optimistic telescope and instrument assumptions, and our new completeness code that produces the highest yields to date, we suggest lower limits on the aperture size required to detect and characterize a statistically motivated sample of exoEarths.
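The optimization idea, choosing where the next unit of exposure time buys the most completeness, can be caricatured with a greedy allocator; the completeness curves below are invented placeholders, not the paper's model.

import numpy as np

# Greedy time allocation: always observe the target with the largest
# marginal completeness gain per unit exposure time until the budget runs
# out. Completeness curves are invented placeholders.
rng = np.random.default_rng(5)
n_targets, budget, dt = 30, 100.0, 0.5       # counts, days, days
rate = rng.uniform(0.05, 0.6, n_targets)     # per-day completeness rates
t = np.zeros(n_targets)                      # exposure time given so far

def completeness(t):                         # saturating completeness curve
    return 1.0 - np.exp(-rate * t)

used = 0.0
while used < budget:
    gain = (completeness(t + dt) - completeness(t)) / dt   # marginal benefit
    best = int(np.argmax(gain))
    t[best] += dt
    used += dt

print("expected yield =", round(float(completeness(t).sum()), 2),
      "targets observed =", int((t > 0).sum()))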
Generalized type II hybrid ARQ scheme using punctured convolutional coding
NASA Astrophysics Data System (ADS)
Kallel, Samir; Haccoun, David
1990-11-01
A method is presented to construct rate-compatible convolutional (RCC) codes from known high-rate punctured convolutional codes, obtained from the best rate-1/2 codes. The construction method is rather simple and straightforward, and still yields good codes. Moreover, low-rate codes can be obtained without any limit on the lowest achievable code rate. Based on the RCC codes, a generalized type-II hybrid ARQ scheme, which combines the benefits of the modified type-II hybrid ARQ strategy of Hagenauer (1988) with the code-combining ARQ strategy of Chase (1985), is proposed and analyzed. With the proposed generalized type-II hybrid ARQ strategy, the throughput increases as the starting coding rate increases; as the channel degrades, it tends to merge with the throughput of rate-1/2 type-II hybrid ARQ schemes with code combining, thus allowing the system to be flexible and adaptive to channel conditions, even under wide noise variations and severe degradations.
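The puncturing mechanics behind rate compatibility can be sketched briefly: higher-rate patterns keep a subset of the bits kept by lower-rate patterns, so retransmissions only add bits. Patterns below are illustrative.

import numpy as np

# Rate-compatible puncturing: bits kept by a higher-rate pattern are a subset
# of bits kept by every lower-rate pattern, so an ARQ retransmission can send
# just the extra bits. Patterns here are illustrative.
coded = np.arange(12)                      # stand-in rate-1/2 encoder output
p_high = np.array([1, 1, 0, 0])            # keep 2 of every 4 -> rate 1
p_mid = np.array([1, 1, 1, 0])             # keep 3 of every 4 -> rate 2/3
p_low = np.array([1, 1, 1, 1])             # keep all          -> rate 1/2

def puncture(bits, pattern):
    mask = np.tile(pattern, len(bits) // len(pattern)).astype(bool)
    return bits[mask]

# rate-compatibility check: kept positions are nested across patterns
assert np.all(p_high <= p_mid) and np.all(p_mid <= p_low)
print(puncture(coded, p_high), puncture(coded, p_mid), puncture(coded, p_low))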
Chromium and titanium isotopes produced in photonuclear reactions of vanadium, revisited
NASA Astrophysics Data System (ADS)
Sakamoto, K.; Yoshida, M.; Kubota, Y.; Fukasawa, T.; Kunugise, A.; Hamajima, Y.; Shibata, S.; Fujiwara, I.
1989-10-01
Photonuclear production yields of 51Ti and 51,49,48Cr from 51V were redetermined for bremsstrahlung end-point energies (E0) of 30 to 1000 or 1050 MeV with the aid of radiochemical separation of Cr. The yield curves for 51Ti, 51Cr, 49Cr and 48Cr show clear evidence for two components in the production process: one for secondary-proton reactions at E0 < Qπ and the other for photopion reactions at E0 > Qπ, Qπ being the Q-values for (γ, π+) and (γ, π+xn) reactions. The contributions of the secondary reactions to the production of the Ti and Cr isotopes at E0 > Qπ were then estimated by fitting calculated secondary yields to the observed ones at E0 < Qπ, and found to be about 40%, 20%, 4% and 4% for 51Ti, 51Cr, 49Cr and 48Cr, respectively, at E0 = 400 to 1000 MeV. The calculation of the secondary yields was based on the excitation functions for 51V(n, p) and (p, x'n) calculated with the ALICE code and the reported photoneutron and photoproton spectra from 12C and some other complex nuclei. The present results for 49Cr are close to the reported ones, while the present 48Cr yields differ by a factor of about 50. For the 51Ti and 51Cr yields, there are some discrepancies between the present and reported values. The yields corrected for the secondaries, in units of μb/equivalent quantum, were unfolded into cross sections per photon, in units of μb, as a function of monochromatic photon energy with the LOUHI-82 code. The results for 51Ti and 49Cr are in disagreement in both magnitude and shape with the theoretical predictions based on DWIA and PWIA. A Monte Carlo calculation based on the PICA code by Gabriel and Alsmiller does reproduce the gross features of the present results.
Understanding Yield Anomalies in ICF Implosions via Fully Kinetic Simulations
NASA Astrophysics Data System (ADS)
Taitano, William
2017-10-01
In the quest towards ICF ignition, plasma kinetic effects are among the prime candidates for explaining some significant discrepancies between experimental observations and rad-hydro simulations. To assess their importance, high-fidelity fully kinetic simulations of ICF capsule implosions are needed. Owing to the extremely multi-scale nature of the problem, kinetic codes have to overcome nontrivial numerical and algorithmic challenges, and very few options are currently available. Here, we present resolutions of some long-standing yield discrepancy conundrums using a novel, LANL-developed, 1D-2V Vlasov-Fokker-Planck code, iFP. iFP possesses unprecedented fidelity and features fully implicit time-stepping; exact mass, momentum, and energy conservation; and optimal grid adaptation in phase space, all of which are critically important for ensuring long-time numerical accuracy of the implosion simulations. Specifically, we concentrate on several anomalous yield degradation instances observed in Omega campaigns, with the so-called "Rygg effect", an anomalous yield scaling with the fuel composition, being a prime example. Understanding the physical mechanisms responsible for such degradations in non-ignition-grade Omega experiments is of great interest, as such experiments are often used for platform and diagnostic development, which are then used in ignition-grade experiments on NIF. In the case of Rygg's experiments, effects of kinetic stratification of fuel ions on the yield had been previously proposed as the anomaly explanation, studied with the kinetic code FPION, and found unimportant. We have revisited this issue with iFP and obtained excellent yield-over-clean agreement with the original Rygg results and several subsequent experiments. This validates iFP and confirms that kinetic fuel stratification is indeed at the root of the observed yield degradation. This work was sponsored by the Metropolis Postdoctoral Fellowship, the LDRD office, the Thermonuclear Burn Initiative of ASC, and LANL Institutional Computing. This work was performed under the NNSA of the USDOE at LANL under contract DE-AC52-06NA25396.
Anisotropic Effective Moduli of Microcrack Damaged Media
2010-01-01
18) vanish. In this case applying L'Hospital's rule to Eq. (18) when h2 → h1 yields the following: C44 l2 = 1 C55 + pg l(l + C44)(l + C44)[1...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rice, C. Keith; Shen, Bo; Shrestha, Som S.
This report describes an analysis to investigate representative heating loads for single-family detached homes using current EnergyPlus simulations (DOE 2014a). Hourly delivered load results are used to determine binned load lines using US Department of Energy (DOE) residential prototype building models (DOE 2014b) developed by Pacific Northwest National Laboratory (PNNL). The selected residential single-family prototype buildings are based on the 2006 International Energy Conservation Code (IECC 2006) in the DOE climate regions. The resulting load lines are compared with the American National Standards Institute (ANSI)/Air-Conditioning, Heating, and Refrigeration Institute (AHRI) Standard 210/240 (AHRI 2008) minimum and maximum design heating requirement (DHR) load lines of the heating seasonal performance factor (HSPF) ratings procedure for each region. The results indicate that a heating load line closer to the maximum DHR load line, and with a lower zero-load ambient temperature, is more representative of heating loads predicted for EnergyPlus prototype residential buildings than the minimum DHR load line presently used to determine HSPF ratings. An alternative heating load line equation was developed and compared to binned load lines obtained from the EnergyPlus simulation results. The effect on HSPF of the alternative heating load line was evaluated for single-speed and two-capacity heat pumps, and an average HSPF reduction of 16% was found. The alternative heating load line relationship is tied to the rated cooling capacity of the heat pump based on EnergyPlus autosizing, which is more representative of the house load characteristics than the rated heating capacity. The alternative heating load line equation was found to be independent of climate for the six DOE climate regions investigated, provided an adjustable zero-load ambient temperature is used. For Region IV, the default DOE climate region used for HSPF ratings, the higher load line results in an ~28% increase in delivered heating load and an ~52% increase in the estimated heating operating cost over that given in the AHRI directory (AHRI 2014).
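To make the two load-line notions concrete, the sketch below contrasts an AHRI-style DHR-based line with an alternative line tied to rated cooling capacity and an adjustable zero-load temperature; the functional form and constants are assumptions, not the report's fitted equation.

import numpy as np

# Contrast of two heating load lines. The AHRI-style line is proportional to
# the design heating requirement (DHR); the alternative ties the slope to
# rated cooling capacity with an adjustable zero-load ambient temperature.
# Functional form and constants here are illustrative assumptions.
T = np.arange(-10, 66)                       # outdoor temperature, deg F

def ahri_load(T, dhr):
    """AHRI-style load line: zero at 65 F, equal to DHR at 5 F."""
    return np.maximum(0.0, dhr * (65.0 - T) / 60.0)

def alternative_load(T, q_cool_rated, c=1.15, t_zero=60.0):
    """Alternative line: slope tied to rated cooling capacity (assumed form)."""
    return np.maximum(0.0, c * q_cool_rated * (t_zero - T) / (t_zero - 5.0))

dhr, q_cool = 30000.0, 36000.0               # Btu/h, example sizes
print("load at 17 F: AHRI =", round(float(ahri_load(np.array([17.0]), dhr)[0])),
      " alternative =", round(float(alternative_load(np.array([17.0]), q_cool)[0])))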
Maximized exoEarth candidate yields for starshades
NASA Astrophysics Data System (ADS)
Stark, Christopher C.; Shaklan, Stuart; Lisman, Doug; Cady, Eric; Savransky, Dmitry; Roberge, Aki; Mandell, Avi M.
2016-10-01
The design and scale of a future mission to directly image and characterize potentially Earth-like planets will be impacted, to some degree, by the expected yield of such planets. Recent efforts to increase the estimated yields, by creating observation plans optimized for the detection and characterization of Earth-twins, have focused solely on coronagraphic instruments; starshade-based missions could benefit from a similar analysis. Here we explore how to prioritize observations for a starshade given the limiting resources of both fuel and time, present analytic expressions to estimate fuel use, and provide efficient numerical techniques for maximizing the yield of starshades. We implemented these techniques to create an approximate design reference mission code for starshades and used this code to investigate how exoEarth candidate yield responds to changes in mission, instrument, and astrophysical parameters for missions with a single starshade. We find that a starshade mission operates most efficiently somewhere between the fuel- and exposure-time-limited regimes and, as a result, is less sensitive to photometric noise sources as well as parameters controlling the photon collection rate, in comparison to a coronagraph. We produced optimistic yield curves for starshades, assuming our optimized observation plans are schedulable and future starshades are not thrust-limited. Given these yield curves, detecting and characterizing several dozen exoEarth candidates requires either multiple starshades or η ≳ 0.3.
Molecular dynamics and dynamic Monte-Carlo simulation of irradiation damage with focused ion beams
NASA Astrophysics Data System (ADS)
Ohya, Kaoru
2017-03-01
The focused ion beam (FIB) has become an important tool for micro- and nanostructuring of samples, including milling, deposition and imaging. However, this leads to damage of the surface on the nanometer scale from implanted projectile ions and recoiled material atoms. It is therefore important to investigate each kind of damage quantitatively. We present a dynamic Monte-Carlo (MC) simulation code to simulate the morphological and compositional changes of a multilayered sample under ion irradiation and a molecular dynamics (MD) simulation code to simulate dose-dependent changes in the backscattering-ion (BSI)/secondary-electron (SE) yields of a crystalline sample. Recent progress with these codes, used to simulate the surface morphology and Mo/Si layer intermixing in an EUV lithography mask irradiated with FIBs, and the crystalline-orientation effect on BSI and SE yields relating to the channeling contrast in scanning ion microscopes, is also presented.
Optimizing Dense Plasma Focus Neutron Yields With Fast Gas Jets
NASA Astrophysics Data System (ADS)
McMahon, Matthew; Stein, Elizabeth; Higginson, Drew; Kueny, Christopher; Link, Anthony; Schmidt, Andrea
2017-10-01
We report a study using the particle-in-cell code LSP to perform fully kinetic simulations modeling dense plasma focus (DPF) devices with high-density gas jets on axis. The high-density jets are modeled in the large-eddy Navier-Stokes code CharlesX, which is suitable for modeling both subsonic and supersonic gas flow. The gas pattern, which is essentially static on z-pinch time scales, is imported from CharlesX to LSP for neutron yield predictions. Fast gas puffs allow for more mass on axis while maintaining the optimal pressure for the DPF. As the density of a subsonic jet increases relative to the background fill, we find the neutron yield increases, as does the variability in the neutron yield. Introducing perturbations in the jet density via supersonic flow (also known as Mach diamonds) allows for consistent seeding of the m = 0 instability, leading to more consistent ion acceleration and higher neutron yields with less variability. Jets with higher on-axis density are found to have the greatest yield. The optimal jet configuration and the necessary jet conditions for increasing neutron yield and reducing yield variability are explored. Simulations of realistic jet profiles are performed and compared to the ideal scenario. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and supported by the Laboratory Directed Research and Development Program (15-ERD-034) at LLNL.
Mode-dependent templates and scan order for H.264/AVC-based intra lossless coding.
Gu, Zhouye; Lin, Weisi; Lee, Bu-Sung; Lau, Chiew Tong; Sun, Ming-Ting
2012-09-01
In H.264/advanced video coding (AVC), lossless coding and lossy coding share the same entropy coding module. However, the entropy coders in the H.264/AVC standard were originally designed for lossy video coding and do not yield adequate performance for lossless video coding. In this paper, we analyze the problem with the current lossless coding scheme and propose a mode-dependent template (MD-template) based method for intra lossless coding. By exploiting the statistical redundancy of the prediction residual in the H.264/AVC intra prediction modes, more zero coefficients are generated. By designing a new scan order for each MD-template, the scanned coefficient sequence fits the H.264/AVC entropy coders better. A fast implementation algorithm is also designed. With little increase in computation, experimental results confirm that the proposed fast algorithm achieves about 7.2% bit saving compared with the current H.264/AVC Fidelity Range Extensions high profile.
Systematic screening for mutations in the promoter and the coding region of the 5-HT{sub 1A} gene
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erdmann, J.; Shimron-Abarbanell, D.; Cichon, S.
1995-10-09
In the present study we sought to identify genetic variation in the 5-HT{sub 1A} receptor gene which, through alteration of protein function or level of expression, might contribute to the genetic predisposition to neuropsychiatric diseases. Genomic DNA samples from 159 unrelated subjects (including 45 schizophrenic, 46 bipolar affective, and 43 patients with Tourette's syndrome, as well as 25 healthy controls) were investigated by single-strand conformation analysis. Overlapping PCR (polymerase chain reaction) fragments covered the whole coding sequence as well as the 5{prime} untranslated region of the 5-HT{sub 1A} gene. The region upstream of the coding sequence that we investigated contains a functional promoter. We found two rare nucleotide sequence variants. Both mutations are located in the coding region of the gene: a coding mutation (A{yields}G) in nucleotide position 82 which leads to an amino acid exchange (Ile{yields}Val) in position 28 of the receptor protein, and a silent mutation (C{yields}T) in nucleotide position 549. The occurrence of the Ile-28-Val substitution was studied in an extended sample of patients (n = 352) and controls (n = 210) but was found at similar frequencies in all groups. Thus, this mutation is unlikely to play a significant role in the genetic predisposition to the diseases investigated. In conclusion, our study does not provide evidence that the 5-HT{sub 1A} gene plays either a major or a minor role in the genetic predisposition to schizophrenia, bipolar affective disorder, or Tourette's syndrome. 29 refs., 4 figs., 1 tab.
FISPACT-II: An Advanced Simulation System for Activation, Transmutation and Material Modelling
NASA Astrophysics Data System (ADS)
Sublet, J.-Ch.; Eastwood, J. W.; Morgan, J. G.; Gilbert, M. R.; Fleming, M.; Arter, W.
2017-01-01
Fispact-II is a code system and library database for modelling activation-transmutation processes, depletion-burn-up, time dependent inventory and radiation damage source terms caused by nuclear reactions and decays. The Fispact-II code, written in object-style Fortran, follows the evolution of material irradiated by neutrons, alphas, gammas, protons, or deuterons, and provides a wide range of derived radiological output quantities to satisfy most needs for nuclear applications. It can be used with any ENDF-compliant group library data for nuclear reactions, particle-induced and spontaneous fission yields, and radioactive decay (including but not limited to TENDL-2015, ENDF/B-VII.1, JEFF-3.2, JENDL-4.0u, CENDL-3.1 processed into fine-group-structure files, GEFY-5.2 and UKDD-16), as well as resolved and unresolved resonance range probability tables for self-shielding corrections and updated radiological hazard indices. The code has many novel features including: extension of the energy range up to 1 GeV; additional neutron physics including self-shielding effects, temperature dependence, thin and thick target yields; pathway analysis; and sensitivity and uncertainty quantification and propagation using full covariance data. The latest ENDF libraries such as TENDL encompass thousands of target isotopes. Nuclear data libraries for Fispact-II are prepared from these using processing codes PREPRO, NJOY and CALENDF. These data include resonance parameters, cross sections with covariances, probability tables in the resonance ranges, PKA spectra, kerma, dpa, gas and radionuclide production and energy-dependent fission yields, supplemented with all 27 decay types. All such data for the five most important incident particles are provided in evaluated data tables. The Fispact-II simulation software is described in detail in this paper, together with the nuclear data libraries. The Fispact-II system also includes several utility programs for code-use optimisation, visualisation and production of secondary radiological quantities. Included in the paper are summaries of results from the suite of verification and validation reports available with the code.
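At its core, the inventory problem such codes solve is a linear ODE system dN/dt = AN; a three-nuclide sketch with invented rates, solved by matrix exponential, is shown below (production codes use far larger nuclide sets and stiff solvers):

import numpy as np
from scipy.linalg import expm

# Tiny activation-transmutation chain (target -> product -> stable daughter)
# with made-up rates, solved as N(t) = expm(A*t) @ N0.
lam = 1e-6                 # decay constant of the product, 1/s (assumed)
sig_phi = 5e-9             # transmutation rate sigma*phi, 1/s (assumed)

A = np.array([[-sig_phi, 0.0,  0.0],   # target burned by the flux
              [ sig_phi, -lam, 0.0],   # radioactive product builds and decays
              [ 0.0,      lam, 0.0]])  # stable daughter accumulates

N0 = np.array([1e24, 0.0, 0.0])        # initial atoms
t = 3.15e7                             # one year of irradiation, seconds
N = expm(A * t) @ N0
print("target, product, daughter:", N)
print("product activity [Bq]:", lam * N[1])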
Texture-induced anisotropy and high-strain rate deformation in metals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schiferl, S.K.; Maudlin, P.J.
1990-01-01
We have used crystallographic texture calculations to model anisotropic yielding behavior for polycrystalline materials with strong preferred orientations and strong plastic anisotropy. Fitted yield surfaces were incorporated into an explicit Lagrangian finite-element code. We consider different anisotropic orientations, as well as different yield-surface forms, for Taylor cylinder impacts of hcp metals such as titanium and zirconium. Some deformed shapes are intrinsic to anisotropic response. Also, yield surface curvature, as distinct from strength anisotropy, has a strong influence on plastic flow. 13 refs., 5 figs.
Fission Reaction Event Yield Algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hagmann, Christian; Verbeke, Jerome; Vogt, Ramona
FREYA (Fission Reaction Event Yield Algorithm) is a code that simulates the decay of a fissionable nucleus at a specified excitation energy. In its present form, FREYA models spontaneous fission and neutron-induced fission up to 20 MeV. It includes the possibility of neutron emission from the nucleus prior to its fission (nth-chance fission).
The analysis of a nonsimilar laminar boundary layer
NASA Technical Reports Server (NTRS)
Stalmach, D. D.; Bertin, J. J.
1978-01-01
A computer code is described which yields accurate solutions for a broad range of laminar, nonsimilar boundary layers, provided the inviscid flow field is known. The boundary layer may be subject to mass injection for perfect-gas, nonreacting flows. If no mass injection is present, the code can be used with either perfect-gas or real-gas thermodynamic models. Solutions, ranging from two-dimensional similarity solutions to solutions for the boundary layer on the Space Shuttle Orbiter during reentry conditions, have been obtained with the code. Comparisons of these solutions, and others, with solutions presented in the literature and with solutions obtained from other codes demonstrate the accuracy of the present code.
Changes among Israeli Youth Movements: A Structural Analysis Based on Kahane's Code of Informality
ERIC Educational Resources Information Center
Cohen, Erik H.
2015-01-01
Multi-dimensional data analysis tools are applied to Reuven Kahane's data on the informality of youth organizations, yielding a graphic portrayal of Kahane's code of informality. This structure helps address the question of whether the eight structural components exhaustively cover the field without redundancy. Further, the structure is used to…
Wu, F C; Zhang, H; Zhou, Q; Wu, M; Ballard, Z; Tian, Y; Wang, J Y; Niu, Z W; Huang, Y
2014-04-18
A method for site-specific and high-yield modification of tobacco mosaic virus coat protein (TMVCP) utilizing genetic code expansion technology and a copper-free cycloaddition reaction has been established, and biotin-functionalized virus-like particles were built by the self-assembly of the protein monomers.
Probabilistic Amplitude Shaping With Hard Decision Decoding and Staircase Codes
NASA Astrophysics Data System (ADS)
Sheikh, Alireza; Amat, Alexandre Graell i.; Liva, Gianluigi; Steiner, Fabian
2018-05-01
We consider probabilistic amplitude shaping (PAS) as a means of increasing the spectral efficiency of fiber-optic communication systems. In contrast to previous works in the literature, we consider probabilistic shaping with hard decision decoding (HDD). In particular, we apply the PAS recently introduced by Böcherer et al. to a coded modulation (CM) scheme with bit-wise HDD that uses a staircase code as the forward error correction code. We show that the CM scheme with PAS and staircase codes yields significant gains in spectral efficiency with respect to the baseline scheme using a staircase code and a standard constellation with uniformly distributed signal points. Using a single staircase code, the proposed scheme achieves performance within 0.57–1.44 dB of the corresponding achievable information rate for a wide range of spectral efficiencies.
NASA Astrophysics Data System (ADS)
Engle, J. W.; Kelsey, C. T.; Bach, H.; Ballard, B. D.; Fassbender, M. E.; John, K. D.; Birnbaum, E. R.; Nortier, F. M.
2012-12-01
In order to ascertain the potential for radioisotope production and material science studies using the Isotope Production Facility at Los Alamos National Lab, a two-pronged investigation has been initiated. The Monte Carlo N-Particle eXtended (MCNPX) code has been used in conjunction with the CINDER 90 burnup code to predict neutron flux energy distributions resulting from routine irradiations and to estimate yields of radioisotopes of interest for hypothetical irradiation conditions. A threshold foil activation experiment is planned to study the neutron flux using measured yields of radioisotopes, quantified by HPGe gamma spectroscopy, from representative nuclear reactions with known thresholds up to 50 MeV.
NASA Astrophysics Data System (ADS)
Sheikh, Alireza; Amat, Alexandre Graell i.; Liva, Gianluigi
2017-12-01
We analyze the achievable information rates (AIRs) for coded modulation schemes with QAM constellations with both bit-wise and symbol-wise decoders, corresponding to the case where a binary code is used in combination with a higher-order modulation using the bit-interleaved coded modulation (BICM) paradigm and to the case where a nonbinary code over a field matched to the constellation size is used, respectively. In particular, we consider hard decision decoding, which is the preferable option for fiber-optic communication systems where decoding complexity is a concern. Recently, Liga et al. analyzed the AIRs for bit-wise and symbol-wise decoders considering what the authors called a hard decision decoder which, however, exploits soft information of the transition probabilities of the discrete-input discrete-output channel resulting from the hard detection. As such, the complexity of the decoder is essentially the same as the complexity of a soft decision decoder. In this paper, we analyze instead the AIRs for the standard hard decision decoder, commonly used in practice, where the decoding is based on the Hamming distance metric. We show that if standard hard decision decoding is used, bit-wise decoders yield significantly higher AIRs than symbol-wise decoders. As a result, contrary to the conclusion by Liga et al., binary decoders together with the BICM paradigm are preferable for spectrally-efficient fiber-optic systems. We also design binary and nonbinary staircase codes and show that, in agreement with the AIRs, binary codes yield better performance.
Automated Detection and Analysis of Interplanetary Shocks with Real-Time Application
NASA Astrophysics Data System (ADS)
Vorotnikov, V.; Smith, C. W.; Hu, Q.; Szabo, A.; Skoug, R. M.; Cohen, C. M.
2006-12-01
The ACE real-time data stream provides web-based nowcasting capabilities for solar wind conditions upstream of Earth. Our goal is to provide an automated code that finds and analyzes interplanetary shocks as they occur, for possible real-time application to space weather nowcasting. Shock analysis algorithms based on the Rankine-Hugoniot jump conditions exist and are in widespread use today for the interactive analysis of interplanetary shocks, yielding parameters such as shock speed, propagation direction, and shock strength in the form of compression ratios. Although these codes can be automated in a reasonable manner to yield solutions not far from those obtained by user-directed interactive analysis, event detection presents an added obstacle and is the first step in a fully automated analysis. We present a fully automated Rankine-Hugoniot analysis code that can scan the ACE science data, find shock candidates, analyze the events, obtain solutions in good agreement with those derived from interactive applications, and dismiss false-positive shock candidates on the basis of the conservation equations. The intent is to make this code available to NOAA for use in real-time space weather applications. The code has the added advantage of being able to scan spacecraft data sets to provide shock solutions for use outside real-time applications and can easily be applied to science-quality data sets from other missions. Use of the code for this purpose will also be explored.
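A minimal sketch of the screening logic described above (hypothetical numbers in consistent arbitrary units, not the production algorithm): mass conservation across a candidate shock fixes the shock speed, and a residual in another conserved flux is then used to dismiss false positives.

```python
import numpy as np

# Rankine-Hugoniot screening sketch: [rho*(vn - Vs)] = 0 gives the shock
# speed Vs along an assumed normal; the normal momentum-flux residual
# rho*(vn - Vs)^2 + p flags candidates that violate conservation.
def shock_speed(rho1, rho2, vn1, vn2):
    return (rho2 * vn2 - rho1 * vn1) / (rho2 - rho1)

def momentum_residual(rho1, rho2, vn1, vn2, p1, p2, Vs):
    f1 = rho1 * (vn1 - Vs)**2 + p1
    f2 = rho2 * (vn2 - Vs)**2 + p2
    return abs(f1 - f2) / max(f1, f2)

rho1, rho2 = 1.0, 3.2          # upstream/downstream densities (compression ~3.2)
vn1, vn2 = 400.0, 480.0        # normal flow speeds
p1, p2 = 10.0, 60.0            # thermal pressures (hypothetical)
Vs = shock_speed(rho1, rho2, vn1, vn2)
res = momentum_residual(rho1, rho2, vn1, vn2, p1, p2, Vs)
print(f"Vs = {Vs:.1f}; accept candidate: {res < 0.1} (residual {res:.2f})")
```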
USDA-ARS?s Scientific Manuscript database
Coding/functional SNPs change the biological function of a gene and, therefore, could serve as “large-effect” genetic markers. In this study, we used two bioinformatics pipelines, GATK and SAMtools, for discovering coding/functional SNPs with allelic-imbalances associated with total body weight, mus...
Modification and benchmarking of MCNP for low-energy tungsten spectra.
Mercier, J R; Kopp, D T; McDavid, W D; Dove, S B; Lancaster, J L; Tucker, D M
2000-12-01
The MCNP Monte Carlo radiation transport code was modified for diagnostic medical physics applications. In particular, the modified code was thoroughly benchmarked for the production of polychromatic tungsten x-ray spectra in the 30-150 kV range. Validating the modified code for coupled electron-photon transport with benchmark spectra was supplemented with independent electron-only and photon-only transport benchmarks. Major revisions to the code included the proper treatment of characteristic K x-ray production and scoring, new impact ionization cross sections, and new bremsstrahlung cross sections. Minor revisions included updated photon cross sections, electron-electron bremsstrahlung production, and K x-ray yield. The modified MCNP code is benchmarked against electron backscatter factors, x-ray spectrum production, and primary and scattered photon transport.
Yield and Blast Analyses with a Unified Theory of Explosions
1982-08-01
and afterburning of PBXN-103. The ambient conditions are for the test site at Socorro, NM, altitude approximately 5200 feet. The input mass was 1038 pounds... essentially the warhead, most of which is PBXN-103. This was the very first test of the code. The relative yield is plotted against T (time). The TOA yield, .90... Fragment of the report's BASIC input listing: YO = Y0*1.0 'Relative yields from earlier runs or fits ... ' 1 G = 1e6 cal = 4pi/3*1e6 kg m^2/m^3/sec^2 ... AB = .00 'Afterburning fraction ... YO = Y0*(1...
Fission yield covariances for JEFF: A Bayesian Monte Carlo method
NASA Astrophysics Data System (ADS)
Leray, Olivier; Rochman, Dimitri; Fleming, Michael; Sublet, Jean-Christophe; Koning, Arjan; Vasiliev, Alexander; Ferroukhi, Hakim
2017-09-01
The JEFF library does not contain fission yield covariances, but simply best estimates and uncertainties. This situation is not unique as all libraries are facing this deficiency, firstly due to the lack of a defined format. An alternative approach is to provide a set of random fission yields, themselves reflecting covariance information. In this work, these random files are obtained combining the information from the JEFF library (fission yields and uncertainties) and the theoretical knowledge from the GEF code. Examples of this method are presented for the main actinides together with their impacts on simple burn-up and decay heat calculations.
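The "random files" idea can be sketched in a few lines (our illustration with hypothetical yields; the actual workflow samples full evaluated files and takes correlations from the GEF model):

```python
import numpy as np

rng = np.random.default_rng(42)

# Sample each independent fission yield from a normal distribution built
# from the library's best estimate and uncertainty, truncate at zero, and
# renormalize; the set of random files then carries covariance information.
best = np.array([0.058, 0.061, 0.040, 0.032])   # hypothetical yields
unc = 0.05 * best                                # hypothetical 5% (1-sigma)

def sample_yield_file(best, unc, total):
    y = np.clip(rng.normal(best, unc), 0.0, None)
    return y * total / y.sum()                   # enforce the yield sum rule

files = np.array([sample_yield_file(best, unc, best.sum()) for _ in range(500)])
cov = np.cov(files, rowvar=False)                # covariance recovered from samples
print("sample mean :", files.mean(axis=0).round(4))
print("sample stdev:", files.std(axis=0).round(5))
```

The normalization step deliberately introduces the anti-correlations that a covariance-free library cannot express, which is the point of distributing random files.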
NASA Astrophysics Data System (ADS)
Jafari, Mehdi; Kasaei, Shohreh
2012-01-01
Automatic brain tissue segmentation of medical images is a crucial task in diagnosis and treatment. This paper presents a new algorithm to segment different brain tissues, such as white matter (WM), gray matter (GM), cerebral spinal fluid (CSF), background (BKG), and tumor tissues. The proposed technique uses the modified intraframe coding of H.264/AVC for feature extraction. Extracted features are then fed to an artificial back-propagation neural network (BPN) classifier to assign each block to its appropriate class. Since the newest coding standard, H.264/AVC, has the highest compression ratio, it decreases the dimension of the extracted features and thus yields a more accurate classifier with low computational complexity. The performance of the BPN classifier is evaluated in terms of classification accuracy and computational complexity. The results show that the proposed technique is more robust and effective, with low computational complexity, compared to other recent works.
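A rough sketch of the feature-extraction step (our reading of the abstract, with an assumed top-k coefficient selection, not the authors' code): each 4x4 block is passed through the H.264 forward integer transform, and the strongest coefficients form a compact feature vector for the classifier.

```python
import numpy as np

# H.264 4x4 forward integer transform applied blockwise; the largest
# transform coefficients per block serve as low-dimensional features.
C = np.array([[1, 1, 1, 1],
              [2, 1, -1, -2],
              [1, -1, -1, 1],
              [1, -2, 2, -1]])   # H.264 4x4 integer transform matrix

def block_features(img, keep=6):
    h, w = (s - s % 4 for s in img.shape)
    feats = []
    for i in range(0, h, 4):
        for j in range(0, w, 4):
            X = img[i:i+4, j:j+4].astype(np.int32)
            W = C @ X @ C.T                         # forward integer transform
            feats.append(np.sort(np.abs(W).ravel())[::-1][:keep])
    return np.array(feats)

img = np.random.default_rng(0).integers(0, 256, (64, 64))
print(block_features(img).shape)   # (256, 6): one feature row per 4x4 block
```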
NASA Astrophysics Data System (ADS)
Jaffke, Patrick; Möller, Peter; Stetcu, Ionel; Talou, Patrick; Schmitt, Christelle
2018-03-01
We implement fission fragment yields, calculated using Brownian shape-motion on a macroscopic-microscopic potential energy surface in six dimensions, into the Hauser-Feshbach statistical decay code CGMF. This combination allows us to test the impact of utilizing theoretically calculated fission fragment yields on the subsequent prompt neutron and γ-ray emission. We draw connections between the fragment yields and the total kinetic energy TKE of the fission fragments and demonstrate that the use of calculated yields can introduce a difference in the 〈TKE〉 and, thus, in the prompt neutron multiplicity.
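As a toy numerical aside (invented numbers), 〈TKE〉 is just the yield-weighted average over the fragment TKE distribution, which is why replacing evaluated yields with calculated ones shifts 〈TKE〉 and, through the energy balance, the prompt neutron multiplicity:

```python
import numpy as np

# Yield-weighted mean total kinetic energy from a hypothetical Y(TKE).
tke = np.linspace(150.0, 200.0, 6)                   # TKE grid [MeV]
Y = np.array([0.02, 0.10, 0.30, 0.35, 0.18, 0.05])   # hypothetical yields
Y = Y / Y.sum()
print(f"<TKE> = {np.sum(Y * tke):.1f} MeV")
```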
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Thermal and moisture problems in existing basements create a unique challenge as the exterior face of the wall is not easily or inexpensively accessible. This approach by the NorthernSTAR Building America Partnership team addresses thermal and moisture management from the interior face of the wall without disturbing the exterior soil and landscaping. It is effective at reducing energy loss through the wall principally during the heating season. The team conducted experiments at the Cloquet Residential Research Facility to test the heat and moisture performance of four hollow masonry block wall systems and two rim-joist systems. These systems were retrofitted with interior insulation in compliance with the 2012 IECC. The research showed for the first time that, for masonry block walls in a cold climate, a solid bond beam or equivalent provides adequate resistance to moisture transport from a hollow core to the rim-joist cavity. Thus, a solid top course is a minimum requirement for an interior retrofit insulation system.
Potential of coded excitation in medical ultrasound imaging.
Misaridis, T X; Gammelmark, K; Jørgensen, C H; Lindberg, N; Thomsen, A H; Pedersen, M H; Jensen, J A
2000-03-01
Improvement in signal-to-noise ratio (SNR) and/or penetration depth can be achieved in medical ultrasound by using long coded waveforms, in a similar manner as in radar or sonar. However, the time-bandwidth product (TB) improvement, and thereby the SNR improvement, is considerably lower in medical ultrasound due to the lower available bandwidth. There is still room for about 20 dB of improvement in the SNR, which would yield a penetration depth up to 20 cm at 5 MHz [M. O'Donnell, IEEE Trans. Ultrason. Ferroelectr. Freq. Contr., 39(3) (1992) 341]. The limited TB additionally yields unacceptably high range sidelobes. However, the frequency weighting from the ultrasonic transducer's bandwidth, although suboptimal, can be beneficial for sidelobe reduction. The purpose of this study is an experimental evaluation of the above considerations in a coded excitation ultrasound system. A coded excitation system based on a modified commercial scanner is presented. A predistorted FM signal is proposed in order to keep the resulting range sidelobes at acceptably low levels. The effect of the transducer is taken into account in the design of the compression filter. Intensity levels have been considered, and simulations of the expected improvement in SNR are also presented. Images of a wire phantom and clinical images have been taken with the coded system. The images show a significant improvement in penetration depth, and they preserve both axial resolution and contrast.
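The compression and sidelobe trade-off is easy to reproduce numerically. This self-contained sketch (assumed pulse parameters, not the modified scanner) matched-filters a linear FM pulse and shows how an amplitude taper, standing in for the transducer's band-shaping, lowers the range sidelobes:

```python
import numpy as np

fs, T = 100e6, 20e-6                     # sample rate, pulse length
f0, B = 5e6, 4e6                         # center frequency, sweep bandwidth
t = np.arange(int(T * fs)) / fs
chirp = np.sin(2*np.pi*(f0 - B/2)*t + np.pi*(B/T)*t**2)   # linear FM pulse

taper = np.hanning(t.size)               # stand-in for transducer weighting
for name, tx in [("unweighted", chirp), ("weighted", chirp * taper)]:
    comp = np.abs(np.correlate(tx, tx, mode="full"))      # matched filter
    comp /= comp.max()
    main = np.argmax(comp)
    sidelobe = comp[:main - 50].max()                     # outside main lobe
    print(f"{name:10s} peak range sidelobe = {20*np.log10(sidelobe):6.1f} dB")
```

The weighted case buys its lower sidelobes with a slightly wider compressed main lobe, the same resolution/contrast trade-off the abstract discusses.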
Williams, David M
2010-09-01
Comments on the original article 'Are interventions theory-based? Development of a theory coding scheme' by Susan Michie and Andrew Prestwich (see record 2010-00152-001). In their admirable effort to develop a coding scheme for the theoretical contribution of intervention research, Michie and Prestwich rightly point out the importance of the presence of a comparison condition when examining the effect of an intervention on targeted theoretical variables and behavioral outcomes (Table 2, item 15). However, they fail to discuss the critical importance of the nature of the comparison condition. Weaker comparison conditions will yield stronger intervention effects; stronger comparison conditions will yield a stronger science of behavior change. (c) 2010 APA, all rights reserved.
Coding gains and error rates from the Big Viterbi Decoder
NASA Technical Reports Server (NTRS)
Onyszchuk, I. M.
1991-01-01
A prototype hardware Big Viterbi Decoder (BVD) was completed for an experiment with the Galileo Spacecraft. Searches for new convolutional codes, studies of Viterbi decoder hardware designs and architectures, mathematical formulations, decompositions of the deBruijn graph into identical and hierarchical subgraphs, and very large scale integration (VLSI) chip design are just a few examples of tasks completed for this project. The BVD bit error rates (BER), measured from hardware and software simulations, are plotted as a function of bit signal-to-noise ratio E_b/N_0 on the additive white Gaussian noise channel. Using the constraint length 15, rate 1/4, experimental convolutional code for the Galileo mission, the BVD gains 1.5 dB over the NASA standard (7,1/2) Maximum Likelihood Convolutional Decoder (MCD) at a BER of 0.005. At this BER, the same gain results when the (255,223) NASA standard Reed-Solomon decoder is used, which yields a word error rate of 2.1 x 10^-8 and a BER of 1.4 x 10^-9. The (15,1/6) code to be used by the Cometary Rendezvous Asteroid Flyby (CRAF)/Cassini Missions yields 1.7 dB of coding gain. These gains are measured with respect to symbols input to the BVD and increase with decreasing BER. Also, 8-bit input symbol quantization makes the BVD resistant to demodulated signal-level variations. Although these codes may require higher bandwidth than the NASA (7,1/2) code, these gains are offset by about 0.1 dB of expected additional receiver losses. Coding gains of several decibels are possible by compressing all spacecraft data.
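For readers unfamiliar with how such gains are quoted, the sketch below (hypothetical BER values, not the BVD measurements) reads a coding gain off two BER curves by interpolating the Eb/N0 each scheme needs to reach the target BER:

```python
import numpy as np

# Coding gain at a target BER = horizontal distance between two BER curves.
def ebn0_at(ber_target, ebn0_db, ber):
    # interpolate log10(BER) vs Eb/N0 to find the crossing point
    return np.interp(np.log10(ber_target), np.log10(ber[::-1]), ebn0_db[::-1])

ebn0 = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
ber_ref = np.array([2e-2, 1e-2, 5e-3, 2e-3, 8e-4, 3e-4])   # reference code (made up)
ber_new = np.array([8e-3, 3e-3, 1e-3, 3e-4, 8e-5, 2e-5])   # stronger code (made up)

gain = ebn0_at(5e-3, ebn0, ber_ref) - ebn0_at(5e-3, ebn0, ber_new)
print(f"coding gain at BER 5e-3: {gain:.2f} dB")
```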
Deep generative learning of location-invariant visual word recognition.
Di Bono, Maria Grazia; Zorzi, Marco
2013-01-01
It is widely believed that orthographic processing implies an approximate, flexible coding of letter position, as shown by relative-position and transposition priming effects in visual word recognition. These findings have inspired alternative proposals about the representation of letter position, ranging from noisy coding across the ordinal positions to relative-position coding based on open bigrams. This debate can be cast within the broader problem of learning location-invariant representations of written words, that is, a coding scheme abstracting the identity and position of letters (and combinations of letters) from their eye-centered (i.e., retinal) locations. We asked whether location invariance would emerge from deep unsupervised learning on letter strings and what type of intermediate coding would emerge in the resulting hierarchical generative model. We trained a deep network with three hidden layers on an artificial dataset of letter strings presented at five possible retinal locations. Though word-level information (i.e., word identity) was never provided to the network during training, linear decoding from the activity of the deepest hidden layer yielded near-perfect accuracy in location-invariant word recognition. Conversely, decoding from lower layers yielded a large number of transposition errors. Analyses of emergent internal representations showed that word selectivity and location invariance increased as a function of layer depth. Word tuning and location invariance were found at the level of single neurons, but there was no evidence for bigram coding. Finally, the distributed internal representation of words at the deepest layer showed higher similarity to the representation elicited by the two exterior letters than by other combinations of two contiguous letters, in agreement with the hypothesis that word edges have special status. These results reveal that the efficient coding of written words (the model's learning objective) is largely based on letter-level information.
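The "linear decoding" analysis can be sketched with synthetic stand-in activations (not the authors' network or data): a linear classifier is trained on hidden-layer vectors, and high held-out accuracy across presentations indicates a location-invariant word code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for deepest-layer activations: one pattern per word,
# observed at several locations with noise; a linear readout recovers
# word identity if the representation is location-invariant.
rng = np.random.default_rng(0)
n_words, n_locations, dim = 20, 5, 100

prototypes = rng.standard_normal((n_words, dim))
X, y = [], []
for w in range(n_words):
    for loc in range(n_locations):
        for _ in range(10):                            # noisy presentations
            X.append(prototypes[w] + 0.5 * rng.standard_normal(dim))
            y.append(w)
X, y = np.array(X), np.array(y)

clf = LogisticRegression(max_iter=2000).fit(X[::2], y[::2])  # train/test split
print(f"decoding accuracy: {clf.score(X[1::2], y[1::2]):.2f}")
```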
Analysis of electrophoresis performance
NASA Technical Reports Server (NTRS)
Roberts, Glyn O.
1988-01-01
A flexible, efficient computer code is being developed to simulate electrophoretic separation phenomena in either a cylindrical or a rectangular geometry. The code will compute the evolution in time of the concentrations of an arbitrary number of chemical species, and of the temperature, pH distribution, conductivity, electric field, and fluid motion. Use of nonuniform meshes and fast, accurate implicit time-stepping will yield accurate answers at economical cost.
Dual neutral particle induced transmutation in CINDER2008
NASA Astrophysics Data System (ADS)
Martin, W. J.; de Oliveira, C. R. E.; Hecht, A. A.
2014-12-01
Although nuclear transmutation methods for fission have existed for decades, the focus has been on neutron-induced reactions. Recent novel concepts have sought to use both neutrons and photons for purposes such as active interrogation of cargo to detect the smuggling of highly enriched uranium, a concept that would require modeling the transmutation caused by both incident particles. As photonuclear transmutation has yet to be modeled alongside neutron-induced transmutation in a production code, new methods need to be developed. The CINDER2008 nuclear transmutation code from Los Alamos National Laboratory is extended from neutron applications to dual neutral particle applications, allowing both neutron- and photon-induced reactions for this modeling with a focus on fission. Following standard reaction modeling, the induced fission reaction is understood as a two-part reaction, with an entrance channel to the excited compound nucleus and an exit channel from the excited compound nucleus to the fission fragmentation. Because photofission yield data (the exit channel from the compound nucleus) are sparse, neutron fission yield data are used in this work. With a different compound nucleus and excitation, the translation to the excited compound state is modified, as appropriate. A verification and validation of these methods and data has been performed. This has shown that the translation of neutron-induced fission product yield sets, and their use in photonuclear applications, is appropriate, and that the code has been extended correctly.
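Conceptually, the extension amounts to adding one flux-weighted reaction-rate term per incident particle to the transmutation matrix of the Bateman equations. A toy sketch (hypothetical two-nuclide data, not CINDER2008 itself):

```python
import numpy as np
from scipy.linalg import expm

# dN/dt = A N with A = decay + phi_n*sigma_n + phi_g*sigma_g, so both
# neutron- and photon-induced channels drive the same depletion solve.
decay = np.array([[-1e-9, 0.0],
                  [ 1e-9, -1e-6]])       # decay constants [1/s]
sig_n = np.array([[-5e-24, 0.0],
                  [ 5e-24, 0.0]])        # neutron channel [cm^2]
sig_g = np.array([[-1e-25, 0.0],
                  [ 1e-25, 0.0]])        # photonuclear channel [cm^2]
phi_n, phi_g = 1e14, 1e13                # particle fluxes [1/cm^2/s]

A = decay + phi_n * sig_n + phi_g * sig_g
N0 = np.array([1e20, 0.0])               # initial inventory [atoms]
N = expm(A * 86400.0) @ N0               # one day of irradiation
print(f"parent: {N[0]:.3e}  product: {N[1]:.3e}")
```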
Perkins, S.P.; Sophocleous, M.
1999-01-01
We developed a model code to simulate a watershed's hydrology and the hydraulic response of an interconnected stream-aquifer system, and applied the model code to the Lower Republican River Basin in Kansas. The model code links two well-known computer programs: MODFLOW (modular 3-D flow model), which simulates ground water flow and stream-aquifer interaction; and SWAT (soil water assessment tool), a soil water budget simulator for an agricultural watershed. SWAT represents a basin as a collection of subbasins in terms of soil, land use, and weather data, and simulates each subbasin on a daily basis to determine runoff, percolation, evaporation, irrigation, pond seepages and crop growth. Because SWAT applies a lumped hydrologic model to each subbasin, spatial heterogeneities with respect to factors such as soil type and land use are not resolved geographically, but can instead be represented statistically. For the Republican River Basin model, each combination of six soil types and three land uses, referred to as a hydrologic response unit (HRU), was simulated with a separate execution of SWAT. A spatially weighted average was then taken over these results for each hydrologic flux and time step by a separate program, SWBAVG. We wrote a package for MODFLOW to associate each subbasin with a subset of aquifer grid cells and stream reaches, and to distribute the hydrologic fluxes given for each subbasin by SWAT and SWBAVG over MODFLOW's stream-aquifer grid to represent tributary flow, surface and ground water diversions, ground water recharge, and evapotranspiration from ground water. The Lower Republican River Basin model was calibrated with respect to measured ground water levels, streamflow, and reported irrigation water use. The model was used to examine the relative contributions of stream yield components and the impact on stream yield and base flow of administrative measures to restrict irrigation water use during droughts. Model results indicate that tributary flow is the dominant component of stream yield and that reduction of irrigation water use produces a corresponding increase in base flow and stream yield. However, the increase in stream yield resulting from reduced water use does not appear to be of sufficient magnitude to restore minimum desirable streamflows.
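The statistical HRU treatment attributed to SWBAVG above boils down to an area-weighted average; a minimal sketch with hypothetical fractions and fluxes:

```python
import numpy as np

# Basin-scale fluxes as area-weighted averages over per-HRU simulations.
area_frac = np.array([0.35, 0.25, 0.20, 0.12, 0.08])   # HRU area fractions
recharge = np.array([45., 60., 30., 80., 20.])         # mm/yr per HRU
runoff = np.array([120., 100., 150., 90., 200.])       # mm/yr per HRU

print(f"basin recharge = {np.sum(area_frac * recharge):.1f} mm/yr, "
      f"runoff = {np.sum(area_frac * runoff):.1f} mm/yr")
```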
Cross sections and differential spectra for reactions of 2-20 MeV neutrons on 27Al
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blann, M.; Komoto, T.T.
1988-01-01
This report summarizes product yields, secondary n, p, and α spectra, and γ-ray spectra calculated for incident neutrons of 2-20 MeV on 27Al targets. Results are all from the code ALICE, using the version ALISO, which weights results for targets that are a mix of isotopes. Where natural isotopic targets are involved, yields and n, p, α spectra will be reported weighted over isotopic yields. Gamma-ray spectra, however, will be reported for the most abundant isotope.
Conducting Retrospective Ontological Clinical Trials in ICD-9-CM in the Age of ICD-10-CM.
Venepalli, Neeta K; Shergill, Ardaman; Dorestani, Parvaneh; Boyd, Andrew D
2014-01-01
To quantify the impact of the International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM) transition in cancer clinical trials by comparing coding accuracy and data discontinuity in backward ICD-10-CM to ICD-9-CM mapping via two tools, and to develop a standard ICD-9-CM and ICD-10-CM bridging methodology for retrospective analyses. While the transition to ICD-10-CM has been delayed until October 2015, its impact on cancer-related studies utilizing ICD-9-CM diagnoses has been inadequately explored. Three high-impact journals with broad national and international readerships were reviewed for cancer-related studies utilizing ICD-9-CM diagnosis codes in study design, methods, or results. Forward ICD-9-CM to ICD-10-CM mapping was performed using a translational methodology with the Motif web portal ICD-9-CM conversion tool. Backward mapping from ICD-10-CM to ICD-9-CM was performed using both Centers for Medicare and Medicaid Services (CMS) general equivalence mappings (GEMs) files and the Motif web portal tool. Generated ICD-9-CM codes were compared with the original ICD-9-CM codes to assess data accuracy and discontinuity. While both methods yielded additional ICD-9-CM codes, the CMS GEMs method provided incomplete coverage with 16 of the original ICD-9-CM codes missing, whereas the Motif web portal method provided complete coverage. Of these 16 codes, 12 ICD-9-CM codes were present in 2010 Illinois Medicaid data, and accounted for 0.52% of patient encounters and 0.35% of total Medicaid reimbursements. Extraneous ICD-9-CM codes from both methods (CMS GEMs, n = 161; Motif web portal, n = 246) in excess of the original ICD-9-CM codes accounted for 2.1% and 2.3% of total patient encounters and 3.4% and 4.1% of total Medicaid reimbursements from the 2010 Illinois Medicaid database. Longitudinal data analyses post-ICD-10-CM transition will require backward ICD-10-CM to ICD-9-CM coding and data comparison for accuracy. Researchers must be aware that all methods for backward coding are not comparable in yielding original ICD-9-CM codes. The mandated delay is an opportunity for organizations to better understand areas of financial risk with regard to data management via backward coding. Our methodology is relevant for all healthcare-related coding data, and can be replicated by organizations as a strategy to mitigate financial risk.
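The backward-mapping comparison reduces to set operations over mapping tables. The sketch below uses toy tables (not the actual CMS GEMs files or the Motif portal output) to show how missing and extraneous ICD-9-CM codes are counted:

```python
# Toy backward-mapping comparison: which original ICD-9-CM codes does
# each ICD-10-CM -> ICD-9-CM table recover, and what extras appear?
original_icd9 = {"153.9", "154.1", "155.0", "197.7"}

gems_backward = {"C18.9": {"153.9"}, "C20": {"154.1"}, "C22.0": {"155.0"}}
motif_backward = {"C18.9": {"153.9"}, "C20": {"154.1"},
                  "C22.0": {"155.0"}, "C78.7": {"197.7"}}

def recovered(backward_map):
    return set().union(*backward_map.values())

for name, table in [("CMS GEMs", gems_backward), ("Motif", motif_backward)]:
    missing = original_icd9 - recovered(table)
    extra = recovered(table) - original_icd9
    print(f"{name:8s} missing={sorted(missing)} extraneous={sorted(extra)}")
```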
Gilmore-Bykovskyi, Andrea L
2015-01-01
Mealtime behavioral symptoms are distressing and frequently interrupt eating for the individual experiencing them and others in the environment. A computer-assisted coding scheme was developed to measure caregiver person-centeredness and behavioral symptoms for nursing home residents with dementia during mealtime interactions. The purpose of this pilot study was to determine the feasibility, ease of use, and inter-observer reliability of the coding scheme, and to explore the clinical utility of the coding scheme. Trained observers coded 22 observations. Data collection procedures were acceptable to participants. Overall, the coding scheme proved to be feasible, easy to execute and yielded good to very good inter-observer agreement following observer re-training. The coding scheme captured clinically relevant, modifiable antecedents to mealtime behavioral symptoms, but would be enhanced by the inclusion of measures for resident engagement and consolidation of items for measuring caregiver person-centeredness that co-occurred and were difficult for observers to distinguish. Published by Elsevier Inc.
Towers of generalized divisible quantum codes
NASA Astrophysics Data System (ADS)
Haah, Jeongwan
2018-04-01
A divisible binary classical code is one in which every code word has weight divisible by a fixed integer. If the divisor is 2^ν for a positive integer ν, then one can construct a Calderbank-Shor-Steane (CSS) code, where the X-stabilizer space is the divisible classical code, that admits a transversal gate in the νth level of the Clifford hierarchy. We consider a generalization of the divisibility by allowing a coefficient vector of odd integers with which every code word has zero dot product modulo the divisor. In this generalized sense, we construct a CSS code with divisor 2^(ν+1) and code distance d from any CSS code of code distance d and divisor 2^ν where the transversal X is a nontrivial logical operator. The encoding rate of the new code is approximately d times smaller than that of the old code. In particular, for large d and ν ≥ 2, our construction yields a CSS code of parameters [[O(d^(ν-1)), Ω(d), d]] admitting a transversal gate at the νth level of the Clifford hierarchy. For our construction we introduce a conversion from magic state distillation protocols based on Clifford measurements to those based on codes with transversal T gates. Our tower contains, as a subclass, generalized triply even CSS codes that have appeared in so-called gauge fixing or code switching methods.
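The divisibility property is easy to check numerically for a small example. The sketch below (our illustration, not the paper's construction) verifies that every word of the [8,4] extended Hamming code, a doubly even code, has weight divisible by 2^2, i.e., the ν = 2 case with the all-ones coefficient vector:

```python
import itertools
import numpy as np

# Generator matrix of the [8,4,4] extended Hamming code (doubly even).
G = np.array([[1, 0, 0, 0, 0, 1, 1, 1],
              [0, 1, 0, 0, 1, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 0, 1],
              [0, 0, 0, 1, 1, 1, 1, 0]])
c = np.ones(8, dtype=int)   # all-ones coefficient vector: plain divisibility
nu = 2

ok = True
for msg in itertools.product([0, 1], repeat=G.shape[0]):
    word = np.mod(np.array(msg) @ G, 2)
    ok &= (int(word @ c) % 2**nu == 0)   # zero dot product mod 2^nu
print("divisible by", 2**nu, ":", ok)
```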
A test of the validity of the motivational interviewing treatment integrity code.
Forsberg, Lars; Berman, Anne H; Kallmén, Håkan; Hermansson, Ulric; Helgason, Asgeir R
2008-01-01
To evaluate the Swedish version of the Motivational Interviewing Treatment Code (MITI), MITI coding was applied to tape-recorded counseling sessions. Construct validity was assessed using factor analysis on 120 MITI-coded sessions. Discriminant validity was assessed by comparing MITI coding of motivational interviewing (MI) sessions with information- and advice-giving sessions as well as by comparing MI-trained practitioners with untrained practitioners. A principal-axis factoring analysis yielded some evidence for MITI construct validity. MITI differentiated between practitioners with different levels of MI training as well as between MI practitioners and advice-giving counselors, thus supporting discriminant validity. MITI may be used as a training tool together with supervision to confirm and enhance MI practice in clinical settings. MITI can also serve as a tool for evaluating MI integrity in clinical research.
Quantum-dot-tagged microbeads for multiplexed optical coding of biomolecules.
Han, M; Gao, X; Su, J Z; Nie, S
2001-07-01
Multicolor optical coding for biological assays has been achieved by embedding different-sized quantum dots (zinc sulfide-capped cadmium selenide nanocrystals) into polymeric microbeads at precisely controlled ratios. Their novel optical properties (e.g., size-tunable emission and simultaneous excitation) render these highly luminescent quantum dots (QDs) ideal fluorophores for wavelength-and-intensity multiplexing. The use of 10 intensity levels and 6 colors could theoretically code one million nucleic acid or protein sequences. Imaging and spectroscopic measurements indicate that the QD-tagged beads are highly uniform and reproducible, yielding bead identification accuracies as high as 99.99% under favorable conditions. DNA hybridization studies demonstrate that the coding and target signals can be simultaneously read at the single-bead level. This spectral coding technology is expected to open new opportunities in gene expression studies, high-throughput screening, and medical diagnostics.
NASA Technical Reports Server (NTRS)
Norment, H. G.
1980-01-01
Calculations can be performed for any atmospheric conditions and for all water drop sizes, from the smallest cloud droplet to large raindrops. Any subsonic, external, non-lifting flow can be accommodated; flow into, but not through, inlets also can be simulated. Experimental water drop drag relations are used in the water drop equations of motion and effects of gravity settling are included. Seven codes are described: (1) a code used to debug and plot body surface description data; (2) a code that processes the body surface data to yield the potential flow field; (3) a code that computes flow velocities at arrays of points in space; (4) a code that computes water drop trajectories from an array of points in space; (5) a code that computes water drop trajectories and fluxes to arbitrary target points; (6) a code that computes water drop trajectories tangent to the body; and (7) a code that produces stereo pair plots which include both the body and trajectories. Code descriptions include operating instructions, card inputs and printouts for example problems, and listing of the FORTRAN codes. Accuracy of the calculations is discussed, and trajectory calculation results are compared with prior calculations and with experimental data.
A verification of the gyrokinetic microstability codes GEM, GYRO, and GS2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bravenec, R. V.; Chen, Y.; Wan, W.
2013-10-15
A previous publication [R. V. Bravenec et al., Phys. Plasmas 18, 122505 (2011)] presented favorable comparisons of linear frequencies and nonlinear fluxes from the Eulerian gyrokinetic codes gyro [J. Candy and R. E. Waltz, J. Comput. Phys. 186, 545 (2003)] and gs2 [W. Dorland et al., Phys. Rev. Lett. 85, 5579 (2000)]. The motivation was to verify the codes, i.e., demonstrate that they correctly solve the gyrokinetic-Maxwell equations. The premise was that it is highly unlikely for both codes to yield the same incorrect results. In this work, we add the Lagrangian particle-in-cell code gem [Y. Chen and S. Parker, J. Comput. Phys. 220, 839 (2007)] to the comparisons, not simply to add another code, but also to demonstrate that the codes' algorithms do not matter. We find good agreement of gem with gyro and gs2 for the plasma conditions considered earlier, thus establishing confidence that the codes are verified and that ongoing validation efforts for these plasma parameters are warranted.
NASA Astrophysics Data System (ADS)
Lasa, Ane; Safi, Elnaz; Nordlund, Kai
2015-11-01
Recent experiments and Molecular Dynamics (MD) simulations show erosion rates of Be exposed to deuterium (D) plasma varying with surface temperature and the correlated D concentration. Little is understood about how these three parameters relate for Be surfaces, despite being essential for reliable prediction of impurity transport and plasma-facing material lifetime in current (JET) and future (ITER) devices. A multi-scale exercise is presented here to relate Be surface temperatures, concentrations, and sputtering yields. The kinetic Monte Carlo (KMC) code MMonCa is used to estimate equilibrium D concentrations in Be at different temperatures. Then, mixed Be-D surfaces that correspond to the KMC profiles are generated in MD to calculate Be-D molecular erosion yields due to D irradiation. With this new database implemented in the 3D MC impurity transport code ERO, modeling scenarios studying wall erosion, such as RF-induced enhanced limiter erosion or main-wall surface temperature scans run at JET, can be revisited with higher confidence. Work supported by U.S. DOE under Contract DE-AC05-00OR22725.
NASA Astrophysics Data System (ADS)
Satoh, D.; Kajimoto, T.; Shigyo, N.; Itashiki, Y.; Imabayashi, Y.; Koba, Y.; Matsufuji, N.; Sanami, T.; Nakao, N.; Uozumi, Y.
2016-11-01
Double-differential neutron yields from a water phantom bombarded with 290-MeV/nucleon and 430-MeV/nucleon carbon ions were measured at emission angles of 15°, 30°, 45°, 60°, 75°, and 90°, and angular distributions of neutron yields and doses around the phantom were obtained. The experimental data were compared with results of the Monte-Carlo simulation code PHITS. The PHITS results showed good agreement with the measured data. On the basis of the PHITS simulation, we estimated the angular distributions of neutron yields and doses from 0° to 180° including thermal neutrons.
Rotsch, David A; Brown, M Alex; Nolen, Jerry A; Brossard, Thomas; Henning, Walter F; Chemerisov, Sergey D; Gromov, Roman G; Greene, John
2018-01-01
The photonuclear production of no-carrier-added (NCA) 47Sc from solid natTiO2 and the subsequent chemical processing and purification have been developed. Scandium-47 was produced by the 48Ti(γ,p)47Sc reaction with Bremsstrahlung photons produced from the braking of electrons in a high-Z (W or Ta) convertor. Production yields were simulated with the PHITS code (Particle and Heavy Ion Transport code System) and compared to experimental results. Irradiated TiO2 targets were dissolved in fuming H2SO4 in the presence of Na2SO4, and 47Sc was purified using the commercially available Eichrom DGA resin. Typical 47Sc recovery yields were >90% with excellent specific activity for small batches (<185 MBq batches). Copyright © 2017 Elsevier Ltd. All rights reserved.
Electron linear accelerator production and purification of scandium-47 from titanium dioxide targets
Rotsch, David A.; Brown, M. Alex; Nolen, Jerry A.; ...
2017-11-06
Here, the photonuclear production of no-carrier-added (NCA) 47Sc from solid natTiO2 and the subsequent chemical processing and purification have been developed. Scandium-47 was produced by the 48Ti(γ,p)47Sc reaction with Bremsstrahlung photons produced from the braking of electrons in a high-Z (W or Ta) convertor. Production yields were simulated with the PHITS code (Particle and Heavy Ion Transport code System) and compared to experimental results. Irradiated TiO2 targets were dissolved in fuming H2SO4 in the presence of Na2SO4, and 47Sc was purified using the commercially available Eichrom DGA resin. Typical 47Sc recovery yields were >90% with excellent specific activity for small batches (<185 MBq batches).
Ignition and combustion characteristics of metallized propellants
NASA Technical Reports Server (NTRS)
Mueller, D. C.; Turns, Stephen R.
1992-01-01
During this reporting period, theoretical work on the secondary atomization process was continued and the experimental apparatus was improved. A one-dimensional model of a rocket combustor, incorporating multiple droplet size classes, slurry combustion, secondary atomization, radiation heat transfer, and two-phase slip between slurry droplets and the gas flow, was derived, and a computer code was written to implement this model. The STANJAN chemical equilibrium solver was coupled with this code to yield gas temperature, density, and composition as functions of axial location. Preliminary results indicate that the model is performing correctly, given current model assumptions. Radiation heat transfer in the combustion chamber is treated as an optically thick participating-media problem requiring a solution of the radiative transfer equation. A cylindrical P1 approximation was employed to yield an analytical expression for chamber-wall heat flux at each axial location. The code was exercised to determine the effects of secondary atomization intensity, defined as the number of secondary drops produced per initial drop, on chamber burnout distance and final Al2O3 agglomerate diameter. These results indicate that only weak secondary atomization is required to significantly reduce these two parameters. Stronger atomization intensities were found to yield decreasing marginal benefits. The experimental apparatus was improved to reduce building vibration effects on the optical system alignment. This was accomplished by mounting the burner and the transmitting/receiving optics on a single frame supported by vibration-isolation legs. Calibration and shakedown tests indicate that vibration problems were eliminated and that the system is performing correctly.
Amino acid fermentation at the origin of the genetic code.
de Vladar, Harold P
2012-02-10
There is evidence that the genetic code was established prior to the existence of proteins, when metabolism was powered by ribozymes. Also, early proto-organisms had to rely on simple anaerobic bioenergetic processes. In this work I propose that amino acid fermentation powered metabolism in the RNA world, and that this was facilitated by proto-adapters, the precursors of the tRNAs. Amino acids were used as carbon sources rather than as catalytic or structural elements. In modern bacteria, amino acid fermentation is known as the Stickland reaction. This pathway involves two amino acids: the first undergoes oxidative deamination, and the second acts as an electron acceptor through reductive deamination. This redox reaction results in two keto acids that are employed to synthesise ATP via substrate-level phosphorylation. The Stickland reaction is the basic bioenergetic pathway of some bacteria of the genus Clostridium. Two other facts support Stickland fermentation in the RNA world. First, several Stickland amino acid pairs are synthesised in abiotic amino acid synthesis. This suggests that amino acids that could be used as an energy substrate were freely available. Second, anticodons that have complementary sequences often correspond to amino acids that form Stickland pairs. The main hypothesis of this paper is that pairs of complementary proto-adapters were assigned to Stickland amino acid pairs. There are signatures of this hypothesis in the genetic code. Furthermore, it is argued that the proto-adapters formed double strands that brought amino acid pairs into proximity to facilitate their mutual redox reaction, structurally constraining the anticodon pairs that are assigned to these amino acid pairs. Significance tests which randomise the code are performed to study the extent of the variability of the energetic (ATP) yield. Random assignments can lead to a substantial yield of ATP and maintain enough variability, thus selection can act and refine the assignments into a proto-code that optimises the energetic yield. Monte Carlo simulations are performed to evaluate the establishment of these simple proto-codes, based on amino acid substitutions and codon swapping. In all cases, donor amino acids are assigned to anticodons composed of U+G, and have low redundancy (1-2 codons), whereas acceptor amino acids are assigned to the remaining codons. These bioenergetic and structural constraints allow for a metabolic role for amino acids before their co-option as catalyst cofactors.
Accuracy comparison among different machine learning techniques for detecting malicious codes
NASA Astrophysics Data System (ADS)
Narang, Komal
2016-03-01
In this paper, a machine learning based model for malware detection is proposed. It can detect newly released malware, i.e., zero-day attacks, by analyzing operation codes (opcodes) on the Android operating system. The accuracies of Naïve Bayes, Support Vector Machine (SVM), and Neural Network classifiers for detecting malicious code are compared for the proposed model. In the experiment, 400 benign files, 100 system files, and 500 malicious files were used to construct the model. The model yields its best accuracy, 88.9%, when a neural network is used as the classifier, and achieves 95% sensitivity and 82.8% specificity.
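A schematic version of such a comparison (synthetic opcode traces, not the study's 1000-file dataset) can be put together with scikit-learn: each file becomes a vector of opcode n-gram counts, and several classifiers are cross-validated:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
benign_ops, malicious_ops = ["mov", "push", "call", "ret"], ["xor", "jmp", "int", "nop"]

def fake_trace(bias_ops, n=60):
    # synthetic opcode sequence biased toward one vocabulary
    pool = bias_ops * 3 + benign_ops + malicious_ops
    return " ".join(rng.choice(pool, size=n))

docs = [fake_trace(benign_ops) for _ in range(300)] + \
       [fake_trace(malicious_ops) for _ in range(300)]
labels = np.array([0] * 300 + [1] * 300)

X = CountVectorizer(ngram_range=(1, 2)).fit_transform(docs)  # opcode n-grams
for clf in [MultinomialNB(), LinearSVC(), MLPClassifier(max_iter=500)]:
    acc = cross_val_score(clf, X, labels, cv=3).mean()
    print(f"{type(clf).__name__:15s} accuracy = {acc:.3f}")
```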
Code development for ships -- A demonstration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ayyub, B.; Mansour, A.E.; White, G.
1996-12-31
A demonstration summary of a reliability-based structural design code for ships is presented for two ship types, a cruiser and a tanker. For both ship types, code requirements cover four failure modes: hull girder buckling, unstiffened plate yielding and buckling, stiffened plate buckling, and fatigue of critical details. Both serviceability and ultimate limit states are considered. Because of length limitations, only hull girder modes are presented in this paper. Code requirements for the other modes will be presented in a future publication. A specific provision of the code will be a safety check expression. The design variables are to be taken at their nominal values, typically values on the safe side of the respective distributions. Other safety check expressions for hull girder failure that include load combination factors, as well as consequence-of-failure factors, are considered. This paper provides a summary of safety check expressions for the hull girder modes.
CACTI: free, open-source software for the sequential coding of behavioral interactions.
Glynn, Lisa H; Hallgren, Kevin A; Houck, Jon M; Moyers, Theresa B
2012-01-01
The sequential analysis of client and clinician speech in psychotherapy sessions can help to identify and characterize potential mechanisms of treatment and behavior change. Previous studies required coding systems that were time-consuming, expensive, and error-prone. Existing software can be expensive and inflexible, and furthermore, no single package allows for pre-parsing, sequential coding, and assignment of global ratings. We developed a free, open-source, and adaptable program to meet these needs: The CASAA Application for Coding Treatment Interactions (CACTI). Without transcripts, CACTI facilitates the real-time sequential coding of behavioral interactions using WAV-format audio files. Most elements of the interface are user-modifiable through a simple XML file, and can be further adapted using Java through the terms of the GNU Public License. Coding with this software yields interrater reliabilities comparable to previous methods, but at greatly reduced time and expense. CACTI is a flexible research tool that can simplify psychotherapy process research, and has the potential to contribute to the improvement of treatment content and delivery.
NASA Astrophysics Data System (ADS)
Lou, Tak Pui; Ludewigt, Bernhard
2015-09-01
The simulation of the emission of beta-delayed gamma rays following nuclear fission and the calculation of time-dependent energy spectra is a computational challenge. The widely used radiation transport code MCNPX includes a delayed gamma-ray routine that is inefficient and not suitable for simulating complex problems. This paper describes the code "MMAPDNG" (Memory-Mapped Delayed Neutron and Gamma), an optimized delayed gamma module written in C, discusses the usage and merits of the code, and presents results. The approach is based on storing the required Fission Product Yield (FPY) data, decay data, and delayed particle data in a memory-mapped file. When compared to the original delayed gamma-ray code in MCNPX, memory utilization is reduced by two orders of magnitude and delayed gamma-ray sampling is sped up by three orders of magnitude. Other delayed particles such as neutrons and electrons can be implemented in future versions of the MMAPDNG code using its existing framework.
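The storage idea can be demonstrated with numpy's memmap (toy layout and numbers; the real MMAPDNG file format is more involved): precomputed cumulative data live on disk and are sampled by inverse CDF without loading the whole library into each process.

```python
import numpy as np

# Build a toy memory-mapped table of (energy, cumulative intensity) pairs.
lines = np.memmap("dg_lines.dat", dtype=np.float64, mode="w+", shape=(1000, 2))
rng = np.random.default_rng(7)
lines[:, 0] = np.sort(rng.uniform(0.1, 8.0, 1000))   # gamma energies [MeV]
lines[:, 1] = np.cumsum(rng.random(1000))            # cumulative intensity
lines[:, 1] /= lines[-1, 1]                          # normalize CDF to 1
lines.flush()                                        # persist to disk

# Any reader maps the file and samples without loading it into memory.
view = np.memmap("dg_lines.dat", dtype=np.float64, mode="r", shape=(1000, 2))
u = rng.random(5)
idx = np.searchsorted(view[:, 1], u)                 # inverse-CDF sampling
print("sampled energies [MeV]:", np.round(view[idx, 0], 3))
```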
Schroeder, H; Hoeltken, A M; Fladung, M
2012-03-01
Within the genus Populus several species belonging to different sections are cross-compatible. Hence, high numbers of interspecies hybrids occur naturally and, additionally, have been artificially produced in huge breeding programmes during the last 100 years. Therefore, determination of a single poplar species used for the production of 'multi-species hybrids' is often difficult and represents a great challenge for the use of molecular markers in species identification. Within this study, over 20 chloroplast regions, both intergenic spacers and coding regions, have been tested for their ability to differentiate poplar species, using 23 already published barcoding primer combinations and 17 newly designed primer combinations. About half of the published barcoding primers yielded amplification products, whereas the new primers, designed on the basis of the fully sequenced cpDNA genome of Populus trichocarpa Torr. & Gray, yielded much higher amplification success. Intergenic spacers were found to be more variable than coding regions within the genus Populus. The highest discrimination power for Populus species was found in the combination of two intergenic spacers (trnG-psbK, psbK-psbI) and the coding region rpoC. In barcoding projects, the coding regions matK and rbcL are often recommended, but within the genus Populus they show only moderate variability and are not efficient in species discrimination. © 2011 German Botanical Society and The Royal Botanical Society of the Netherlands.
Performance and structure of single-mode bosonic codes
NASA Astrophysics Data System (ADS)
Albert, Victor V.; Noh, Kyungjoo; Duivenvoorden, Kasper; Young, Dylan J.; Brierley, R. T.; Reinhold, Philip; Vuillot, Christophe; Li, Linshu; Shen, Chao; Girvin, S. M.; Terhal, Barbara M.; Jiang, Liang
2018-03-01
The early Gottesman, Kitaev, and Preskill (GKP) proposal for encoding a qubit in an oscillator has recently been followed by cat- and binomial-code proposals. Numerically optimized codes have also been proposed, and we introduce codes of this type here. These codes have yet to be compared using the same error model; we provide such a comparison by determining the entanglement fidelity of all codes with respect to the bosonic pure-loss channel (i.e., photon loss) after the optimal recovery operation. We then compare achievable communication rates of the combined encoding-error-recovery channel by calculating the channel's hashing bound for each code. Cat and binomial codes perform similarly, with binomial codes outperforming cat codes at small loss rates. Despite not being designed to protect against the pure-loss channel, GKP codes significantly outperform all other codes for most values of the loss rate. We show that the performance of GKP and some binomial codes increases monotonically with increasing average photon number of the codes. In order to corroborate our numerical evidence of the cat-binomial-GKP order of performance occurring at small loss rates, we analytically evaluate the quantum error-correction conditions of those codes. For GKP codes, we find an essential singularity in the entanglement fidelity in the limit of vanishing loss rate. In addition to comparing the codes, we draw parallels between binomial codes and discrete-variable systems. First, we characterize one- and two-mode binomial as well as multiqubit permutation-invariant codes in terms of spin-coherent states. Such a characterization allows us to introduce check operators and error-correction procedures for binomial codes. Second, we introduce a generalization of spin-coherent states, extending our characterization to qudit binomial codes and yielding a multiqudit code.
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
1990-01-01
All vision systems, both human and machine, transform the spatial image into a coded representation. Particular codes may be optimized for efficiency or to extract useful image features. Researchers explored image codes based on primary visual cortex in man and other primates. Understanding these codes will advance the art in image coding, autonomous vision, and computational human factors. In cortex, imagery is coded by features that vary in size, orientation, and position. Researchers have devised a mathematical model of this transformation, called the Hexagonal oriented Orthogonal quadrature Pyramid (HOP). In a pyramid code, features are segregated by size into layers, with fewer features in the layers devoted to large features. Pyramid schemes provide scale invariance, and are useful for coarse-to-fine searching and for progressive transmission of images. The HOP Pyramid is novel in three respects: (1) it uses a hexagonal pixel lattice, (2) it uses oriented features, and (3) it accurately models most of the prominent aspects of primary visual cortex. The transform uses seven basic features (kernels), which may be regarded as three oriented edges, three oriented bars, and one non-oriented blob. Application of these kernels to non-overlapping seven-pixel neighborhoods yields six oriented, high-pass pyramid layers, and one low-pass (blob) layer.
Efficient preparation of large-block-code ancilla states for fault-tolerant quantum computation
NASA Astrophysics Data System (ADS)
Zheng, Yi-Cong; Lai, Ching-Yi; Brun, Todd A.
2018-03-01
Fault-tolerant quantum computation (FTQC) schemes that use multiqubit large block codes can potentially reduce the resource overhead to a great extent. A major obstacle is the requirement for a large number of clean ancilla states of different types without correlated errors inside each block. These ancilla states are usually logical stabilizer states of the data-code blocks, which are generally difficult to prepare if the code size is large. Previously, we proposed an ancilla distillation protocol for Calderbank-Shor-Steane (CSS) codes using classical error-correcting codes. It was assumed that the quantum gates in the distillation circuit were perfect; however, in reality, noisy quantum gates may introduce correlated errors that are not treatable by the protocol. In this paper, we show that additional postselection by another classical error-detecting code can be applied to remove almost all correlated errors. Consequently, the revised protocol is fully fault tolerant and capable of preparing a large set of stabilizer states sufficient for FTQC using large block codes. At the same time, the yield rate can be boosted from O(t^-2) to O(1) in practice for an [[n, k, d = 2t+1]] code.
QR Codes in Education and Communication
ERIC Educational Resources Information Center
Durak, Gurhan; Ozkeskin, E. Emre; Ataizi, Murat
2016-01-01
Technological advances have brought innovative applications to education. Conventional education increasingly flourishes with new technologies, accompanied by more learner-active environments. In this continuum, there are learners who prefer self-learning. Traditional learning materials yield attractive, motivating and technologically enhanced…
Decay heat uncertainty quantification of MYRRHA
NASA Astrophysics Data System (ADS)
Fiorito, Luca; Buss, Oliver; Hoefer, Axel; Stankovskiy, Alexey; Eynde, Gert Van den
2017-09-01
MYRRHA is a lead-bismuth cooled MOX-fueled accelerator driven system (ADS) currently in the design phase at SCK·CEN in Belgium. The correct evaluation of the decay heat and of its uncertainty level is very important for the safety demonstration of the reactor. In the first part of this work we assessed the decay heat released by the MYRRHA core using the ALEPH-2 burnup code. The second part of the study focused on the nuclear data uncertainty and covariance propagation to the MYRRHA decay heat. Radioactive decay data, independent fission yield and cross section uncertainties/covariances were propagated using two nuclear data sampling codes, namely NUDUNA and SANDY. According to the results, 238U cross sections and fission yield data are the largest contributors to the MYRRHA decay heat uncertainty. The calculated uncertainty values are deemed acceptable from the safety point of view as they are well within the available regulatory limits.
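In miniature, the sampling-based propagation works as below (toy two-nuclide inventory and assumed uncertainties; NUDUNA and SANDY perturb full evaluated files): each random draw of nuclear data yields one decay heat value, and the spread of the sample is the propagated uncertainty.

```python
import numpy as np

rng = np.random.default_rng(3)

# Decay heat H(t) = sum_i lambda_i * N_i(t) * Q_i for a toy inventory,
# with decay constants and decay energies sampled around best estimates.
lam0 = np.array([1e-3, 5e-5])     # decay constants [1/s], best estimates
Q0 = np.array([0.8, 1.6])         # decay energies [MeV], best estimates
N0 = np.array([1e18, 4e17])       # inventories [atoms]
rel = np.array([0.05, 0.10])      # assumed relative 1-sigma uncertainties

t = 3600.0                        # cooling time [s]
samples = []
for _ in range(1000):
    lam = lam0 * (1 + rel * rng.standard_normal(2))
    Q = Q0 * (1 + 0.02 * rng.standard_normal(2))
    samples.append(np.sum(lam * N0 * np.exp(-lam * t) * Q))   # MeV/s
samples = np.array(samples)
print(f"decay heat = {samples.mean():.3e} MeV/s "
      f"+/- {100 * samples.std() / samples.mean():.1f}%")
```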
NASA Astrophysics Data System (ADS)
Cai, Libing; Wang, Jianguo; Zhu, Xiangqin; Wang, Yue; Zhang, Dianhui
2015-01-01
Based on the secondary electron emission avalanche (SEEA) model, the SEEA discharge on the vacuum insulator surface is simulated by using a 2D PIC-MCC code developed by the authors. The evolutions of the number of discharge electrons, insulator surface charge, current, and 2D particle distribution are obtained. The effects of the strength of the applied electric field, the secondary electron yield coefficient, the rise time of the pulse, and the length of the insulator on the discharge are investigated. The results show that the number of SEEA electrons has a quadratic dependence on the applied field strength. The SEEA current, which is on the order of amperes, is directly proportional to the field strength and the secondary electron yield coefficient. Finally, electron-stimulated outgassing is included in the simulation code, and a three-phase discharge curve is produced by the simulation, which agrees with the experimental data.
NASA Astrophysics Data System (ADS)
Azoulay, M.; George, M. A.; Burger, A.; Collins, W. E.; Silberman, E.
A two-dimensional bounce-averaged Fokker-Planck code is used to study the fusion yield and the wave absorption by residual hydrogen ions in higher harmonic ICRF heating of D (120 keV) and 3He (80 keV) beams in the JT-60U tokamak. Both for the fourth harmonic resonance of 3He (ω = 4ωc,3He(0)), which is accompanied by the third harmonic resonance of hydrogen (ω = 3ωc,H) at the low field side, and for the third harmonic resonance of 3He (ω = 4ωc,D(0) = 3ωc,3He(0) = 2ωc,H(0)), a few per cent of hydrogen ions are found to absorb a large fraction of the ICRF power and to degrade the fusion output power. In the latter case, D beam acceleration due to the fourth harmonic resonance in the 3He(D) regime can enhance the fusion yield more effectively. A discussion is given of the effect of D beam acceleration due to the fifth harmonic resonance (ω = 5ωc,D) at the high field side in the case of ω = 4ωc,3He(0), and of the optimization of the fusion yield in the case of lower electron density and higher electron temperature.
A Caltech MURI Center for Quantum Networks
2006-05-31
the code. Thus the dimension of the code space is n = |Pf A| (Eq. 64), where Pf A denotes the Pfaffian, the square root of the determinant of the... material properties, such as bulk absorption and surface scattering. However, as one moves to very small spheres with radius a ≲ 10 μm, the intrinsic... 1550 nm, which yields a quality factor of Q_bulk ≈ 3.8×10^11. The quality factor due to surface scattering Q_s.s. and absorption by adsorbed water Q_w has...
Fission Activities of the Nuclear Reactions Group in Uppsala
NASA Astrophysics Data System (ADS)
Al-Adili, A.; Alhassan, E.; Gustavsson, C.; Helgesson, P.; Jansson, K.; Koning, A.; Lantz, M.; Mattera, A.; Prokofiev, A. V.; Rakopoulos, V.; Sjöstrand, H.; Solders, A.; Tarrío, D.; Österlund, M.; Pomp, S.
This paper highlights some of the main fission-related activities of the nuclear reactions group at Uppsala University. The group is involved, for instance, in fission yield experiments at the IGISOL facility, cross-section measurements at the NFS facility, and fission dynamics studies at the IRMM JRC-EC. Moreover, work is ongoing on the Total Monte Carlo (TMC) methodology and on including the GEF fission code in the TALYS nuclear reaction code. Selected results from these projects are discussed.
Environmental Mapping by a HERO-1 Robot Using Sonar and a Laser Barcode Scanner.
1983-12-01
can be labeled with an x-y type coordinate grid allowing the rover to directly read its location as it moves along. A different approach is to... uses a two-dimensional grid of two-character barcodes as reference objects. Since bar codes are designed to be read in either of two orientations (top... Processing Laboratory at AFIT (see Appendix B for listing). Navigation grid codes consist of two digits running consecutively from 00 to FF, yielding 256 distinct codes.
Numerical simulation of the early-time high altitude electromagnetic pulse
NASA Astrophysics Data System (ADS)
Meng, Cui; Chen, Yu-Sheng; Liu, Shun-Kun; Xie, Qin-Chuan; Chen, Xiang-Yue; Gong, Jian-Cheng
2003-12-01
In this paper, the finite difference method is used to develop the Fortran software MCHII. The physical process in which the electromagnetic signal is generated by the interaction of nuclear-explosion-induced Compton currents with the geomagnetic field is numerically simulated. The electromagnetic pulse waveforms below the burst point are investigated. The effects of the height of burst, yield and the time-dependence of gamma-rays are calculated by using the MCHII code. The results agree well with those obtained by using the code CHAP.
Tensile strength/yield strength (TS/YS) ratios of high-strength steel (HSS) reinforcing bars
NASA Astrophysics Data System (ADS)
Tavio; Anggraini, Retno; Raka, I. Gede Putu; Agustiar
2018-05-01
Building codes such as American Concrete Institute (ACI) 318M-14 and Standar Nasional Indonesia (SNI) 2847:2013 require that the ratio of tensile strength (TS) to yield strength (YS) be not less than 1.25. The requirement is based on the assumption that the capability of a structural member to develop inelastic rotation capacity is a function of the length of the yield region. This paper reports an investigation of various steel grades, namely Grades 420, 550, 650, and 700 MPa, to examine the impact of TS/YS ratios falling below or above the required value. Grades 550, 650, and 700 MPa were purposely selected to examine whether these higher grades are still promising for special structural systems, since they are prohibited by the building codes for longitudinal reinforcement, whereas Grade 420 MPa is the maximum yield strength of reinforcing bars allowed for longitudinal reinforcement of special structural systems. Tensile tests of these steel samples were conducted in displacement-controlled mode to capture the complete stress-strain curves, and particularly the post-yield response of the bars. From the study, it can be concluded that Grade 420 bars exhibited higher TS/YS ratios, reaching values above 1.25. However, the high-strength steel (HSS) bars (Grades 550, 650, and 700 MPa) showed lower TS/YS ratios (less than 1.25) compared with those of Grade 420 MPa.
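The code requirement itself reduces to a one-line ratio check; a trivial sketch with hypothetical mill-test values:

```python
def tsys_ratio_ok(tensile_mpa, yield_mpa, min_ratio=1.25):
    """ACI 318M-14 / SNI 2847:2013 style requirement: TS/YS >= 1.25."""
    return tensile_mpa / yield_mpa >= min_ratio

# hypothetical mill-test results for a Grade 420 and a Grade 700 bar
print(tsys_ratio_ok(tensile_mpa=550.0, yield_mpa=430.0))  # True  (ratio ~1.28)
print(tsys_ratio_ok(tensile_mpa=800.0, yield_mpa=710.0))  # False (ratio ~1.13)
```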
Aerodynamic and heat transfer analysis of the low aspect ratio turbine
NASA Astrophysics Data System (ADS)
Sharma, O. P.; Nguyen, P.; Ni, R. H.; Rhie, C. M.; White, J. A.
1987-06-01
The available two- and three-dimensional codes are used to estimate external heat loads and aerodynamic characteristics of a highly loaded turbine stage in order to demonstrate state-of-the-art methodologies in turbine design. Using data for a low aspect ratio turbine, it is found that a three-dimensional multistage Euler code gives good overall predictions for the turbine stage, yielding good estimates of the stage pressure ratio, mass flow, and exit gas angles. The nozzle vane loading distribution is well predicted by both the three-dimensional multistage Euler and three-dimensional Navier-Stokes codes. The vane airfoil surface Stanton number distributions, however, are underpredicted by both the two- and three-dimensional boundary value analyses.
Duchemin, C; Guertin, A; Haddad, F; Michel, N; Métivier, V
2015-09-07
HIGHLIGHTS • Production of Sc-44m, Sc-44g and contaminants. • Experimental values determined using the stacked-foil technique. • Thick-target production yield (TTY) calculations. • Comparison with the TALYS code version 1.6. Among the large number of radionuclides of medical interest, Sc-44 is promising for PET imaging. Either the ground state Sc-44g or the metastable state Sc-44m can be used for such applications, depending on the molecule used as vector. This study compares the production rates of both Sc-44 states when protons or deuterons are used as projectiles on an enriched calcium-44 target. This work presents the first set of data for the deuteron route. The results are compared with the TALYS code. The thick-target production yields of Sc-44m and Sc-44g are calculated and compared with those for the proton route for three different scenarios: the production of Sc-44g for conventional PET imaging, its production for the new 3γ imaging technique developed at the SUBATECH laboratory, and the production of a Sc-44m/Sc-44g in vivo generator for antibody labelling.
NASA Astrophysics Data System (ADS)
Alves, J. L.; Oliveira, M. C.; Menezes, L. F.
2004-06-01
Two constitutive models used to describe the plastic behavior of sheet metals in the numerical simulation of sheet metal forming processes are studied: a recently proposed advanced constitutive model, based on the Teodosiu microstructural model and the Cazacu-Barlat yield criterion, is compared with a more classical one based on the Swift law and the Hill 1948 yield criterion. These constitutive models are implemented in DD3IMP, a finite element home code developed specifically to simulate sheet metal forming processes: a 3-D elastoplastic finite element code with an updated Lagrangian formulation and a fully implicit time integration scheme, accounting for large elastoplastic strains and rotations. Solid finite elements and parametric surfaces are used to model the blank sheet and the tool surfaces, respectively. Some details of the numerical implementation of the constitutive models are given. Finally, the theory is illustrated with the numerical simulation of the deep drawing of a cylindrical cup. The results show that the proposed advanced constitutive model predicts the final shape of the formed part (average cup height and ears profile) more accurately, as can be concluded from the comparison with the experimental results.
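For readers unfamiliar with the classical pair named above, a minimal sketch of the Swift hardening law and the Hill 1948 equivalent stress in plane stress; the coefficients are generic placeholders (F = G = H = 0.5, N = 1.5 recovers von Mises), not the paper's fitted values:

```python
import numpy as np

def swift_flow_stress(eps_p, K=500.0, eps0=0.01, n=0.2):
    """Swift hardening law: sigma_y = K * (eps0 + eps_p)**n  (MPa)."""
    return K * (eps0 + eps_p) ** n

def hill48_eq_stress(sxx, syy, sxy, F=0.5, G=0.5, H=0.5, N=1.5):
    """Hill 1948 equivalent stress for a plane-stress state (sxx, syy, sxy)."""
    f2 = G * sxx**2 + F * syy**2 + H * (sxx - syy) ** 2 + 2.0 * N * sxy**2
    return np.sqrt(f2)

# plastic flow occurs once the equivalent stress reaches the current flow stress
print(hill48_eq_stress(200.0, 50.0, 30.0) >= swift_flow_stress(0.05))
```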
NASA Technical Reports Server (NTRS)
Norment, H. G.
1985-01-01
Subsonic, external flow about nonlifting bodies, lifting bodies or combinations of lifting and nonlifting bodies is calculated by a modified version of the Hess lifting code. Trajectory calculations can be performed for any atmospheric conditions and for all water drop sizes, from the smallest cloud droplet to large raindrops. Experimental water drop drag relations are used in the water drop equations of motion and effects of gravity settling are included. Inlet flow can be accommodated, and high Mach number compressibility effects are corrected for approximately. Seven codes are described: (1) a code used to debug and plot body surface description data; (2) a code that processes the body surface data to yield the potential flow field; (3) a code that computes flow velocities at arrays of points in space; (4) a code that computes water drop trajectories from an array of points in space; (5) a code that computes water drop trajectories and fluxes to arbitrary target points; (6) a code that computes water drop trajectories tangent to the body; and (7) a code that produces stereo pair plots which include both the body and trajectories. Accuracy of the calculations is discussed, and trajectory calculation results are compared with prior calculations and with experimental data.
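A stripped-down sketch of the kind of integration the trajectory codes perform: a water drop's equation of motion with an empirical drag correction and gravity settling. The Schiller-Naumann correction below is a stand-in for the experimental drag relations of the original codes, and the flow field is supplied by the caller (in the real codes, by the potential-flow solution):

```python
import numpy as np

def drop_trajectory(x0, v0, flow_velocity, d=1e-4, rho_w=1000.0, rho_a=1.2,
                    mu=1.8e-5, g=9.81, dt=1e-4, t_end=0.5):
    """Integrate a drop of diameter d (m) through a 2-D flow field (explicit Euler)."""
    x, v = np.array(x0, float), np.array(v0, float)
    m = rho_w * np.pi * d**3 / 6.0
    for _ in range(int(t_end / dt)):
        u_rel = flow_velocity(x) - v
        re = rho_a * np.linalg.norm(u_rel) * d / mu
        drag = 3.0 * np.pi * mu * d * (1.0 + 0.15 * re**0.687) * u_rel
        v += dt * (drag / m + np.array([0.0, -g]))
        x += dt * v
    return x, v

# uniform 10 m/s horizontal flow, drop released from rest
print(drop_trajectory([0.0, 0.0], [0.0, 0.0], lambda x: np.array([10.0, 0.0])))
```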
Optimizing Dense Plasma Focus Neutron Yields with Fast Gas Jets
NASA Astrophysics Data System (ADS)
McMahon, Matthew; Kueny, Christopher; Stein, Elizabeth; Link, Anthony; Schmidt, Andrea
2016-10-01
We report a study using the particle-in-cell code LSP to perform fully kinetic simulations modeling dense plasma focus (DPF) devices with high density gas jets on axis. The high density jet models fast gas puffs, which allow for more mass on axis while maintaining the optimal pressure for the DPF. As the density of the jet relative to the background fill increases, we find that the neutron yield increases, as does the variability in the neutron yield. Introducing perturbations in the jet density allows for consistent seeding of the m = 0 instability, leading to more consistent ion acceleration and higher neutron yields with less variability. Jets with higher on-axis density are found to have the greatest yield. The optimal jet configuration is explored. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Temporal Correlations and Neural Spike Train Entropy
NASA Astrophysics Data System (ADS)
Schultz, Simon R.; Panzeri, Stefano
2001-06-01
Sampling considerations limit the experimental conditions under which information theoretic analyses of neurophysiological data yield reliable results. We develop a procedure for computing the full temporal entropy and information of ensembles of neural spike trains, which performs reliably for limited samples of data. This approach also yields insight into the role of correlations between spikes in temporal coding mechanisms. The method, when applied to recordings from complex cells of the monkey primary visual cortex, results in lower rms error information estimates in comparison to a "brute force" approach.
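As a point of reference for the comparison above, the naive "brute force" estimator simply plugs observed word frequencies into the entropy formula; a sketch on synthetic data, with no bias correction:

```python
import numpy as np

def word_entropy(spike_words):
    """Plug-in entropy (bits) of binary spike 'words'; biased low for small samples."""
    _, counts = np.unique(spike_words, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
words = (rng.random((1000, 8)) < 0.2).astype(int)  # 1000 words of 8 time bins
print(word_entropy(words))
```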
Poem: A Fast Monte Carlo Code for the Calculation of X-Ray Transition Zone Dose and Current
1975-01-15
stored on the photon interaction data tape. Following the photoelectric ionization the atom will relax, emitting either a fluorescent photon or an Auger ... shell fluorescence yield ω_L have been obtained from the Storm and Israel and Bambynek et al. compilations, with preference given to the ... Bambynek compilation, and stored on the photon interaction data tape. The mean M fluorescence yield ω_M is approximated by zero. The total electron source
Source terms, shielding calculations and soil activation for a medical cyclotron.
Konheiser, J; Naumann, B; Ferrari, A; Brachem, C; Müller, S E
2016-12-01
Calculations of the shielding and estimates of soil activation for a medical cyclotron are presented in this work. Based on the neutron source term from the 18O(p,n)18F reaction produced by a 28 MeV proton beam, neutron and gamma dose rates outside the building were estimated with the Monte Carlo code MCNP6 (Goorley et al 2012 Nucl. Technol. 180 298-315). The neutron source term was calculated with the MCNP6 and FLUKA (Ferrari et al 2005 INFN/TC_05/11, SLAC-R-773) codes as well as with data supplied by the manufacturer. MCNP and FLUKA calculations yielded comparable results, while the neutron yield obtained using the manufacturer-supplied information is about a factor of 5 smaller. The difference is attributed to missing channels in the manufacturer-supplied neutron source term, which considers only the 18O(p,n)18F reaction, whereas the MCNP and FLUKA calculations include additional neutron reaction channels. Soil activation was calculated using the FLUKA code. The estimated dose rate based on MCNP6 calculations in the public area is about 0.035 µSv h⁻¹ and thus significantly below the reference value of 0.5 µSv h⁻¹ (2011 Strahlenschutzverordnung, 9. Auflage vom 01.11.2011, Bundesanzeiger Verlag). After 5 years of continuous beam operation and a subsequent decay time of 30 d, the activity concentration of the soil is about 0.34 Bq g⁻¹.
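The quoted "five years of operation, 30 days of decay" figure follows the standard activation buildup-and-decay relation; a sketch with a placeholder saturation activity and half-life, not the actual nuclide inventory of the FLUKA calculation:

```python
import math

def soil_activity(a_sat, half_life_d, t_irr_d, t_decay_d):
    """a = a_sat * (1 - exp(-lam*t_irr)) * exp(-lam*t_decay), times in days."""
    lam = math.log(2.0) / half_life_d
    return a_sat * (1.0 - math.exp(-lam * t_irr_d)) * math.exp(-lam * t_decay_d)

# hypothetical nuclide: saturation activity 0.5 Bq/g, 312 d half-life
print(soil_activity(0.5, 312.0, t_irr_d=5 * 365, t_decay_d=30))
```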
A multi-institution evaluation of clinical profile anonymization
Heatherly, Raymond; Rasmussen, Luke V; Peissig, Peggy L; Pacheco, Jennifer A; Harris, Paul; Denny, Joshua C
2016-01-01
Background and objective: There is an increasing desire to share de-identified electronic health records (EHRs) for secondary uses, but there are concerns that clinical terms can be exploited to compromise patient identities. Anonymization algorithms mitigate such threats while enabling novel discoveries, but their evaluation has been limited to single institutions. Here, we study how an existing clinical profile anonymization fares at multiple medical centers. Methods: We apply a state-of-the-art k-anonymization algorithm, with k set to the standard value 5, to the International Classification of Diseases, ninth edition codes for patients in a hypothyroidism association study at three medical centers: Marshfield Clinic, Northwestern University, and Vanderbilt University. We assess utility when anonymizing at three population levels: all patients in 1) the EHR system; 2) the biorepository; and 3) a hypothyroidism study. We evaluate utility using 1) changes to the number included in the dataset, 2) the number of codes included, and 3) the regions where generalization and suppression were required. Results: Our findings yield several notable results. First, we show that anonymizing in the context of the entire EHR yields a significantly greater quantity of data by reducing the amount of generalized regions from ∼15% to ∼0.5%. Second, ∼70% of codes that needed generalization only generalized two or three codes in the largest anonymization. Conclusions: Sharing large volumes of clinical data in support of phenome-wide association studies is possible while safeguarding privacy to the underlying individuals. PMID:26567325
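The property being enforced is easy to state in code: every diagnosis-code profile must be indistinguishable from at least k-1 others. A toy verification sketch (not the published anonymization algorithm, which additionally chooses the generalizations and suppressions):

```python
from collections import Counter

def is_k_anonymous(profiles, k=5):
    """True if every distinct ICD-9 code profile is shared by >= k patients."""
    counts = Counter(frozenset(p) for p in profiles)
    return all(c >= k for c in counts.values())

print(is_k_anonymous([{"244.9"}, {"244.9"}, {"244.9", "250.00"}], k=2))  # False
```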
The Modeling of Advanced BWR Fuel Designs with the NRC Fuel Depletion Codes PARCS/PATHS
Ward, Andrew; Downar, Thomas J.; Xu, Y.; ...
2015-04-22
The PATHS (PARCS Advanced Thermal Hydraulic Solver) code was developed at the University of Michigan in support of U.S. Nuclear Regulatory Commission research to solve the steady-state, two-phase, thermal-hydraulic equations for a boiling water reactor (BWR) and to provide thermal-hydraulic feedback for BWR depletion calculations with the neutronics code PARCS (Purdue Advanced Reactor Core Simulator). The simplified solution methodology, including a three-equation drift flux formulation and an optimized iteration scheme, yields very fast run times in comparison to conventional thermal-hydraulic systems codes used in the industry, while still retaining sufficient accuracy for applications such as BWR depletion calculations. Lastly, the capability to model advanced BWR fuel designs with part-length fuel rods and heterogeneous axial channel flow geometry has been implemented in PATHS, and the code has been validated against previously benchmarked advanced core simulators as well as BWR plant and experimental data. We describe the modifications to the codes and the results of the validation in this paper.
CACTI: Free, Open-Source Software for the Sequential Coding of Behavioral Interactions
Glynn, Lisa H.; Hallgren, Kevin A.; Houck, Jon M.; Moyers, Theresa B.
2012-01-01
The sequential analysis of client and clinician speech in psychotherapy sessions can help to identify and characterize potential mechanisms of treatment and behavior change. Previous studies required coding systems that were time-consuming, expensive, and error-prone. Existing software can be expensive and inflexible, and furthermore, no single package allows for pre-parsing, sequential coding, and assignment of global ratings. We developed a free, open-source, and adaptable program to meet these needs: The CASAA Application for Coding Treatment Interactions (CACTI). Without transcripts, CACTI facilitates the real-time sequential coding of behavioral interactions using WAV-format audio files. Most elements of the interface are user-modifiable through a simple XML file, and can be further adapted using Java through the terms of the GNU Public License. Coding with this software yields interrater reliabilities comparable to previous methods, but at greatly reduced time and expense. CACTI is a flexible research tool that can simplify psychotherapy process research, and has the potential to contribute to the improvement of treatment content and delivery. PMID:22815713
General phase spaces: from discrete variables to rotor and continuum limits
NASA Astrophysics Data System (ADS)
Albert, Victor V.; Pascazio, Saverio; Devoret, Michel H.
2017-12-01
We provide a basic introduction to discrete-variable, rotor, and continuous-variable quantum phase spaces, explaining how the latter two can be understood as limiting cases of the first. We extend the limit-taking procedures used to travel between phase spaces to a general class of Hamiltonians (including many local stabilizer codes) and provide six examples: the Harper equation, the Baxter parafermionic spin chain, the Rabi model, the Kitaev toric code, the Haah cubic code (which we generalize to qudits), and the Kitaev honeycomb model. We obtain continuous-variable generalizations of all models, some of which are novel. The Baxter model is mapped to a chain of coupled oscillators and the Rabi model to the optomechanical radiation pressure Hamiltonian. The procedures also yield rotor versions of all models, five of which are novel many-body extensions of the almost Mathieu equation. The toric and cubic codes are mapped to lattice models of rotors, with the toric code case related to U(1) lattice gauge theory.
Quantum-capacity-approaching codes for the detected-jump channel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grassl, Markus; Wei Zhaohui; Ji Zhengfeng
2010-12-15
The quantum-channel capacity gives the ultimate limit for the rate at which quantum data can be reliably transmitted through a noisy quantum channel. Degradable quantum channels are among the few channels whose quantum capacities are known. Given the quantum capacity of a degradable channel, it remains challenging to find a practical coding scheme which approaches capacity. Here we discuss code designs for the detected-jump channel, a degradable channel with practical relevance describing the physics of spontaneous decay of atoms with detected photon emission. We show that this channel can be used to simulate a binary classical channel with both erasures and bit flips. The capacity of the simulated classical channel gives a lower bound on the quantum capacity of the detected-jump channel. When the jump probability is small, it almost equals the quantum capacity. Hence using a classical capacity-approaching code for the simulated classical channel yields a quantum code which approaches the quantum capacity of the detected-jump channel.
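The lower bound described here is numerically accessible: the capacity of the simulated classical channel (bit flips plus erasures) can be evaluated with the standard Blahut-Arimoto algorithm. A sketch with illustrative flip and erasure probabilities:

```python
import numpy as np

def blahut_arimoto(P, iters=500):
    """Capacity (bits/use) of a DMC with transition matrix P[x, y] = p(y|x)."""
    r = np.full(P.shape[0], 1.0 / P.shape[0])        # input distribution
    for _ in range(iters):
        q = r[:, None] * P
        q /= q.sum(axis=0, keepdims=True)            # posterior q(x|y)
        logq = np.log(q, where=q > 0, out=np.zeros_like(q))
        w = np.exp((P * logq).sum(axis=1))
        r = w / w.sum()
    joint = r[:, None] * P                           # mutual information at final r
    py = joint.sum(axis=0)
    ratio = np.divide(joint, r[:, None] * py[None, :],
                      where=joint > 0, out=np.ones_like(joint))
    return float((joint * np.log2(ratio)).sum())

f, e = 0.01, 0.1  # bit-flip and erasure probabilities (illustrative)
P = np.array([[(1 - e) * (1 - f), (1 - e) * f, e],
              [(1 - e) * f, (1 - e) * (1 - f), e]])
print(blahut_arimoto(P))
```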
Label consistent K-SVD: learning a discriminative dictionary for recognition.
Jiang, Zhuolin; Lin, Zhe; Davis, Larry S
2013-11-01
A label consistent K-SVD (LC-KSVD) algorithm to learn a discriminative dictionary for sparse coding is presented. In addition to using class labels of training data, we also associate label information with each dictionary item (columns of the dictionary matrix) to enforce discriminability in sparse codes during the dictionary learning process. More specifically, we introduce a new label consistency constraint called "discriminative sparse-code error" and combine it with the reconstruction error and the classification error to form a unified objective function. The optimal solution is efficiently obtained using the K-SVD algorithm. Our algorithm learns a single overcomplete dictionary and an optimal linear classifier jointly. The incremental dictionary learning algorithm is presented for the situation of limited memory resources. It yields dictionaries so that feature points with the same class labels have similar sparse codes. Experimental results demonstrate that our algorithm outperforms many recently proposed sparse-coding techniques for face, action, scene, and object category recognition under the same learning conditions.
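The unified objective sketched in the abstract can be written compactly. In the paper's notation, Y is the training data, D the dictionary, X the sparse codes, Q the target "discriminative" codes with linear transform A, H the label matrix with classifier W, and T the sparsity budget:

```latex
\langle D, W, A, X\rangle = \arg\min_{D,W,A,X}
  \|Y - DX\|_F^2 + \alpha\,\|Q - AX\|_F^2 + \beta\,\|H - WX\|_F^2
  \quad \text{s.t. } \forall i,\ \|x_i\|_0 \le T .
```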
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, Pengcheng; Mcclure, Mark; Shiozawa, Sogo
A series of experiments performed at the Fenton Hill hot dry rock site after stage 2 drilling of the Phase I reservoir provided intriguing field observations on the reservoir's responses to injection and venting under various conditions. Two teams participating in the US DOE Geothermal Technologies Office (GTO)'s Code Comparison Study (CCS) used different numerical codes to model these five experiments with the objective of inferring the hydraulic stimulation mechanism involved. The codes used by the two teams are based on different numerical principles, and the assumptions made were also different, due to intrinsic limitations in the codes and the modelers' personal interpretations of the field observations. Both sets of models were able to reproduce the most important field observations, and both found that it was the combination of the vertical gradient of the fracture opening pressure, injection volume, and the use/absence of proppant that yielded the different outcomes of the five experiments.
NESSY: NLTE spectral synthesis code for solar and stellar atmospheres
NASA Astrophysics Data System (ADS)
Tagirov, R. V.; Shapiro, A. I.; Schmutz, W.
2017-07-01
Context. Physics-based models of solar and stellar magnetically-driven variability are based on the calculation of synthetic spectra for various surface magnetic features as well as quiet regions, which are a function of their position on the solar or stellar disc. Such calculations are performed with radiative transfer codes tailored for modeling broad spectral intervals. Aims: We aim to present the NLTE Spectral SYnthesis code (NESSY), which can be used for modeling the entire (UV-visible-IR and radio) spectra of solar and stellar magnetic features and quiet regions. Methods: NESSY is a further development of the COde for Solar Irradiance (COSI), in which we have implemented an accelerated Λ-iteration (ALI) scheme for co-moving frame (CMF) line radiation transfer based on a new estimate of the local approximate Λ-operator. Results: We show that the new version of the code performs substantially faster than the previous one and yields a reliable calculation of the entire solar spectrum. This calculation is in good agreement with the available observations.
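To illustrate the ALI idea named above: with a local (diagonal) approximate operator Λ*, each iteration solves a cheaply invertible system instead of repeatedly applying the full Λ. A generic two-level-atom sketch; the operator and parameters are placeholders, not NESSY's co-moving-frame implementation:

```python
import numpy as np

def ali_iterate(S, B, Lam, eps=1e-2, n_iter=200):
    """ALI update: (I - (1-eps)*Lstar) @ S_new = (1-eps)*(Lam - Lstar) @ S + eps*B."""
    Lstar = np.diag(np.diag(Lam))            # local approximate Lambda operator
    M = np.eye(len(S)) - (1.0 - eps) * Lstar
    for _ in range(n_iter):
        S = np.linalg.solve(M, (1.0 - eps) * (Lam - Lstar) @ S + eps * B)
    return S

n = 50
Lam = np.full((n, n), 1.0 / n)               # toy scattering operator
print(ali_iterate(np.zeros(n), np.ones(n), Lam)[:3])
```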
ERIC Educational Resources Information Center
Whitney, Tim
2000-01-01
Examines how tight urban sites can yield sports spaces that favorably compare to their more rural campus counterparts. Potential areas of concern when recreation centers are reconfigured into high-rise structures are highlighted, including building codes, building access, noise control, building costs, and lighting. (GR)
Amino acid fermentation at the origin of the genetic code
2012-01-01
There is evidence that the genetic code was established prior to the existence of proteins, when metabolism was powered by ribozymes. Also, early proto-organisms had to rely on simple anaerobic bioenergetic processes. In this work I propose that amino acid fermentation powered metabolism in the RNA world, and that this was facilitated by proto-adapters, the precursors of the tRNAs. Amino acids were used as carbon sources rather than as catalytic or structural elements. In modern bacteria, amino acid fermentation is known as the Stickland reaction. This pathway involves two amino acids: the first undergoes oxidative deamination, and the second acts as an electron acceptor through reductive deamination. This redox reaction results in two keto acids that are employed to synthesise ATP via substrate-level phosphorylation. The Stickland reaction is the basic bioenergetic pathway of some bacteria of the genus Clostridium. Two other facts support Stickland fermentation in the RNA world. First, several Stickland amino acid pairs are synthesised in abiotic amino acid synthesis. This suggests that amino acids that could be used as an energy substrate were freely available. Second, anticodons that have complementary sequences often correspond to amino acids that form Stickland pairs. The main hypothesis of this paper is that pairs of complementary proto-adapters were assigned to Stickland amino acid pairs. There are signatures of this hypothesis in the genetic code. Furthermore, it is argued that the proto-adapters formed double strands that brought amino acid pairs into proximity to facilitate their mutual redox reaction, structurally constraining the anticodon pairs that are assigned to these amino acid pairs. Significance tests which randomise the code are performed to study the extent of the variability of the energetic (ATP) yield. Random assignments can lead to a substantial yield of ATP and maintain enough variability, thus selection can act and refine the assignments into a proto-code that optimises the energetic yield. Monte Carlo simulations are performed to evaluate the establishment of these simple proto-codes, based on amino acid substitutions and codon swapping. In all cases, donor amino acids are assigned to anticodons composed of U+G, and have low redundancy (1-2 codons), whereas acceptor amino acids are assigned to the remaining codons. These bioenergetic and structural constraints allow for a metabolic role for amino acids before their co-option as catalyst cofactors. Reviewers: this article was reviewed by Prof. William Martin, Prof. Eörs Szathmáry (nominated by Dr. Gáspár Jékely) and Dr. Ádám Kun (nominated by Dr. Sandor Pongor). PMID:22325238
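The significance tests mentioned above amount to a code-randomization procedure: shuffle the amino-acid assignments many times and compare each shuffled code's ATP-yield statistic with the observed one. A generic sketch, with the yield statistic left as a caller-supplied placeholder:

```python
import random

def randomization_test(stat_fn, assignments, n=10000, seed=0):
    """One-sided p-value: fraction of shuffled codes scoring >= the observed code."""
    rng = random.Random(seed)
    observed = stat_fn(assignments)
    anticodons, aas = zip(*assignments.items())
    hits = 0
    for _ in range(n):
        shuffled = list(aas)
        rng.shuffle(shuffled)
        if stat_fn(dict(zip(anticodons, shuffled))) >= observed:
            hits += 1
    return hits / n

# toy three-entry 'code' and a dummy yield statistic, purely illustrative
toy = {"GCU": "Ala", "GGU": "Gly", "CCU": "Pro"}
stat = lambda code: sum(a == "Gly" for c, a in code.items() if c.startswith("G"))
print(randomization_test(stat, toy, n=1000))
```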
10Gbps 2D MGC OCDMA Code over FSO Communication System
NASA Astrophysics Data System (ADS)
Bhanja, Urmila; Khuntia, Arpita; Alamasety, Swati
2017-08-01
Currently, wide-bandwidth signal transmission with low latency is a leading requirement in various applications. Free space optical wireless communication has been introduced as a realistic technology for bridging the gap in present high data rate fiber connectivity and as a provisional backbone for rapidly deployable wireless communication infrastructure. The manuscript focuses on the implementation of 10 Gbps SAC-OCDMA FSO communications using a modified two-dimensional Golomb code (2D MGC) that possesses better autocorrelation, minimal cross-correlation, and high cardinality. A comparison based on the pseudo orthogonal (PSO) matrix code and the modified two-dimensional Golomb code (2D MGC) is developed in the proposed SAC-OCDMA FSO communication module, taking different parameters into account. The simulation results indicate that the communication range is bounded by multiple access interference (MAI). In this work, a comparison is made in terms of bit error rate (BER) and quality factor (Q) between the modified two-dimensional Golomb code (2D MGC) and the PSO matrix code. It is observed that the 2D MGC yields better results than the PSO matrix code. The simulation results are validated using OptiSystem version 14.
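The Q factor and BER reported by such simulations are related, under the usual Gaussian-noise approximation, by BER = ½ erfc(Q/√2); a one-line converter:

```python
from math import erfc, sqrt

def ber_from_q(q):
    """Gaussian-noise approximation: BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * erfc(q / sqrt(2.0))

print(ber_from_q(6.0))  # ~1e-9, a commonly used acceptance threshold
```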
Gilmore-Bykovskyi, Andrea L.
2015-01-01
Mealtime behavioral symptoms are distressing and frequently interrupt eating for the individual experiencing them and others in the environment. In order to enable identification of potential antecedents to mealtime behavioral symptoms, a computer-assisted coding scheme was developed to measure caregiver person-centeredness and behavioral symptoms for nursing home residents with dementia during mealtime interactions. The purpose of this pilot study was to determine the acceptability and feasibility of procedures for video-capturing naturally-occurring mealtime interactions between caregivers and residents with dementia, to assess the feasibility, ease of use, and inter-observer reliability of the coding scheme, and to explore the clinical utility of the coding scheme. Trained observers coded 22 observations. Data collection procedures were feasible and acceptable to caregivers, residents and their legally authorized representatives. Overall, the coding scheme proved to be feasible, easy to execute and yielded good to very good inter-observer agreement following observer re-training. The coding scheme captured clinically relevant, modifiable antecedents to mealtime behavioral symptoms, but would be enhanced by the inclusion of measures for resident engagement and consolidation of items for measuring caregiver person-centeredness that co-occurred and were difficult for observers to distinguish. PMID:25784080
An Overview of the XGAM Code and Related Software for Gamma-ray Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Younes, W.
2014-11-13
The XGAM spectrum-fitting code and associated software were developed specifically to analyze the complex gamma-ray spectra that can result from neutron-induced reactions. The XGAM code is designed to fit a spectrum over the entire available gamma-ray energy range as a single entity, in contrast to the more traditional piecewise approaches. This global-fit philosophy enforces background continuity as well as consistency between local and global behavior throughout the spectrum, and in a natural way. This report presents XGAM and the suite of programs built around it with an emphasis on how they fit into an overall analysis methodology for complex gamma-ray data. An application to the analysis of time-dependent delayed gamma-ray yields from 235U fission is shown in order to showcase the codes and how they interact.
Fusion PIC code performance analysis on the Cori KNL system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koskela, Tuomas S.; Deslippe, Jack; Friesen, Brian
We study the attainable performance of particle-in-cell codes on the Cori KNL system by analyzing a miniature particle push application based on the fusion PIC code XGC1. We start from the most basic building blocks of a PIC code and build up the complexity to identify the kernels that cost the most in performance, and we focus optimization efforts there. Particle push kernels operate at high arithmetic intensity and are not likely to be memory bandwidth or even cache bandwidth bound on KNL. Therefore, we see only minor benefits from the high bandwidth memory available on KNL, and achieving good vectorization is shown to be the most beneficial optimization path, with a theoretical yield of up to 8× speedup on KNL. In practice we are able to obtain up to a 4× gain from vectorization due to limitations set by the data layout and memory latency.
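The kernel at issue is easy to caricature: a particle push applies identical arithmetic to every particle, so with a structure-of-arrays layout the loop maps directly onto vector units. A NumPy sketch of a simplified (non-Boris) push; the fields and time step are placeholders, not XGC1 quantities:

```python
import numpy as np

def push(x, v, E, B, q_over_m, dt):
    """Advance all particles at once; x and v have shape (n_particles, 3)."""
    v += q_over_m * (E + np.cross(v, B)) * dt   # simplified Lorentz-force kick
    x += v * dt                                 # drift
    return x, v

n = 1_000_000
x = np.zeros((n, 3))
v = np.random.default_rng(0).standard_normal((n, 3))
E = np.array([0.0, 0.0, 1.0]); B = np.array([0.0, 0.0, 5.0])
push(x, v, E, B, q_over_m=1.0, dt=1e-3)
```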
Spallation yield of neutrons produced in thick lead target bombarded with 250 MeV protons
NASA Astrophysics Data System (ADS)
Chen, L.; Ma, F.; Zhanga, X. Y.; Ju, Y. Q.; Zhang, H. B.; Ge, H. L.; Wang, J. G.; Zhou, B.; Li, Y. Y.; Xu, X. W.; Luo, P.; Yang, L.; Zhang, Y. B.; Li, J. Y.; Xu, J. K.; Liang, T. J.; Wang, S. L.; Yang, Y. W.; Gu, L.
2015-01-01
The neutron yield from a thick Pb target irradiated with 250 MeV protons has been studied experimentally. The neutron production was measured with the water-bath gold method. The thermal neutron distributions in the water were determined from the measured activities of Au foils. Corresponding results calculated with the Monte Carlo code MCNPX were compared with the experimental data. It was found that the Au foils with cadmium cover significantly changed the spatial distribution of the thermal neutron field. The corrected neutron yield was deduced to be 2.23 ± 0.19 n/proton by considering the influence of the Cd cover on the thermal neutron flux.
1993-10-06
1975) was also used to determine cooked yields from raw ingredients, and appropriate USDA processing codes were selected from the CAN System to estimate ... be assumed that there were some losses of the heat-labile vitamins (particularly thiamin and vitamin C). While the USDA processing codes provided for ... cups / 1-2 cups / Less than 1 cup / Don't know. 29. I consume... Coffee / Decaffeinated Coffee / Kool-Aid / Cola / Diet Cola / Decaffeinated Cola
Kinetic neoclassical calculations of impurity radiation profiles
Stotler, D. P.; Battaglia, D. J.; Hager, R.; ...
2016-12-30
Modifications of the drift-kinetic transport code XGC0 to include the transport, ionization, and recombination of individual charge states, as well as the associated radiation, are described. The code is first applied to a simulation of an NSTX H-mode discharge with carbon impurity to demonstrate the approach to coronal equilibrium. The effects of neoclassical phenomena on the radiated power profile are examined sequentially through the activation of individual physics modules in the code. Orbit squeezing and the neoclassical inward pinch result in increased radiation for temperatures above a few hundred eV and changes to the ratios of charge state emissions at a given electron temperature. As a result, analogous simulations with a neon impurity yield qualitatively similar results.
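The coronal-equilibrium limit the simulation approaches has a closed form: in steady state each ratio n_{z+1}/n_z equals the ionization-to-recombination rate ratio S_z/α_{z+1}. A sketch with made-up rates:

```python
import numpy as np

def coronal_fractions(S, alpha):
    """Charge-state fractions from ionization rates S[z] (z -> z+1) and
    recombination rates alpha[z] (z+1 -> z); the rates here are illustrative."""
    ratios = np.asarray(S, float) / np.asarray(alpha, float)  # n_{z+1}/n_z
    n = np.concatenate(([1.0], np.cumprod(ratios)))
    return n / n.sum()

print(coronal_fractions(S=[1.0, 0.5, 0.1], alpha=[0.2, 0.4, 0.8]))
```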
Dual-camera design for coded aperture snapshot spectral imaging.
Wang, Lizhi; Xiong, Zhiwei; Gao, Dahua; Shi, Guangming; Wu, Feng
2015-02-01
Coded aperture snapshot spectral imaging (CASSI) provides an efficient mechanism for recovering 3D spectral data from a single 2D measurement. However, since the reconstruction problem is severely underdetermined, the quality of recovered spectral data is usually limited. In this paper we propose a novel dual-camera design to improve the performance of CASSI while maintaining its snapshot advantage. Specifically, a beam splitter is placed in front of the objective lens of CASSI, which allows the same scene to be simultaneously captured by a grayscale camera. This uncoded grayscale measurement, in conjunction with the coded CASSI measurement, greatly eases the reconstruction problem and yields high-quality 3D spectral data. Both simulation and experimental results demonstrate the effectiveness of the proposed method.
Srivastava, Rishi; Singh, Mohar; Bajaj, Deepak; Parida, Swarup K.
2016-01-01
Development and large-scale genotyping of user-friendly informative genome/gene-derived InDel markers in natural and mapping populations is vital for accelerating genomics-assisted breeding applications of chickpea with minimal resource expense. The present investigation employed a high-throughput whole genome next-generation resequencing strategy in low and high pod number parental accessions and homozygous individuals constituting the bulks from each of two inter-specific mapping populations [(Pusa 1103 × ILWC 46) and (Pusa 256 × ILWC 46)] to develop non-erroneous InDel markers at a genome-wide scale. Comparing these high-quality genomic sequences, 82,360 InDel markers with reference to the kabuli genome and 13,891 InDel markers exhibiting differentiation between the low and high pod number parental accessions and bulks of the aforementioned mapping populations were developed. These informative markers were structurally and functionally annotated in diverse coding and non-coding sequence components of the genome/genes of kabuli chickpea. The functional significance of regulatory and coding (frameshift and large-effect mutations) InDel markers for establishing marker-trait linkages through association/genetic mapping was apparent. The markers showed high amplification success (97%) and intra-specific polymorphic potential (58-87%) among a diverse panel of cultivated desi, kabuli, and wild accessions, even with a simpler, cost-efficient agarose gel-based assay, indicating their utility in large-scale genetic analysis, especially in domesticated chickpea with a narrow genetic base. Two high-density inter-specific genetic linkage maps generated using the aforesaid mapping populations were integrated to construct a consensus 1479 InDel marker-anchored high-resolution (inter-marker distance: 0.66 cM) genetic map for efficient molecular mapping of major QTLs governing pod number and seed yield per plant in chickpea. Utilizing these high-density genetic maps as anchors, three major genomic regions harboring each of the pod number and seed yield robust QTLs (15-28% phenotypic variation explained) were identified on chromosomes 2, 4, and 6. The integration of genetic and physical maps at these QTLs scaled down the long major QTL intervals into high-resolution, short pod number and seed yield QTL physical intervals (0.89-2.94 Mb), which were validated in multiple genetic backgrounds of the two chickpea mapping populations. The genome-wide InDel markers, including natural allelic variants and genomic loci/genes delineated at the six major robust pod number and seed yield QTLs (especially one colocalized novel congruent QTL) mapped on the high-density consensus genetic map, were found most promising in chickpea. These functionally relevant molecular tags can drive marker-assisted genetic enhancement to develop high-yielding cultivars with increased seed/pod number and yield in chickpea. PMID:27695461
NASA Technical Reports Server (NTRS)
Jaffe, Richard L.; Pattengill, Merle D.; Schwenke, David W.
1989-01-01
Strategies for constructing global potential energy surfaces from a limited number of accurate ab initio electronic energy calculations are discussed. Generally, these data are concentrated in small regions of configuration space (e.g., in the vicinity of saddle points and energy minima) and difficulties arise in generating a potential function that is globally well-behaved. Efficient computer codes for carrying out classical trajectory calculations on vector and parallel processors are also described. Illustrations are given from recent work on the following chemical systems: Ca + HF → CaF + H, H + H + H2 → H2 + H2, N + O2 → NO + O, and O + N2 → NO + N. The dynamics and kinetics of metathesis, dissociation, recombination, energy transfer and complex formation processes will be discussed.
The PARTRAC code: Status and recent developments
NASA Astrophysics Data System (ADS)
Friedland, Werner; Kundrat, Pavel
Biophysical modeling is of particular value for predictions of radiation effects due to manned space missions. PARTRAC is an established tool for Monte Carlo-based simulations of radiation track structures, damage induction in cellular DNA and its repair [1]. Dedicated modules describe interactions of ionizing particles with the traversed medium, the production and reactions of reactive species, and score DNA damage determined by overlapping track structures with multi-scale chromatin models. The DNA repair module describes the repair of DNA double-strand breaks (DSB) via the non-homologous end-joining pathway; the code explicitly simulates the spatial mobility of individual DNA ends in parallel with their processing by major repair enzymes [2]. To simulate the yields and kinetics of radiation-induced chromosome aberrations, the repair module has been extended by tracking the information on the chromosome origin of ligated fragments as well as the presence of centromeres [3]. PARTRAC calculations have been benchmarked against experimental data on various biological endpoints induced by photon and ion irradiation. The calculated DNA fragment distributions after photon and ion irradiation reproduce corresponding experimental data and their dose- and LET-dependence. However, in particular for high-LET radiation many short DNA fragments are predicted below the detection limits of the measurements, so that the experiments significantly underestimate DSB yields by high-LET radiation [4]. The DNA repair module correctly describes the LET-dependent repair kinetics after 60Co gamma-rays and different N-ion radiation qualities [2]. First calculations on the induction of chromosome aberrations have overestimated the absolute yields of dicentrics, but correctly reproduced their relative dose-dependence and the difference between gamma- and alpha-particle irradiation [3]. Recent developments of the PARTRAC code include a model of hetero- vs euchromatin structures to enable accounting for variations in DNA damage yields, complexity and repair between these regions. Second, the applicability of the code to low-energy ions has been extended to full stopping by using a modified Barkas scaling of proton cross sections for ions heavier than helium. Third, ongoing studies aim at hitherto unprecedented benchmarking of the code against experiments with sub-µm focused bunches of low-LET ions mimicking single high-LET ion tracks [5], which separate effects of damage clustering on a sub-µm scale from DNA damage complexity on a nanometer scale. Fourth, motivated by implications for the involvement of mitochondria in intercellular signaling and radiation-induced bystander effects, ongoing work extends the range of PARTRAC DNA models to radiation effects on mitochondrial DNA. The contribution will discuss the PARTRAC modules, benchmarks to experimental data, and recent and ongoing developments of the code, with special attention to its implications and potential applications in radiation protection and space research. Acknowledgement. This work was partially funded by the EU (Contract FP7-249689 'DoReMi'). References 1. Friedland et al., Mutat. Res. 711, 28 (2011) 2. Friedland et al., Int. J. Radiat. Biol. 88, 129 (2012) 3. Friedland et al., Mutat. Res. 756, 213 (2013) 4. Alloni et al., Radiat. Res. 179, 690 (2013) 5. Schmid et al., Phys. Med. Biol. 57, 5889 (2012)
Causal Reasoning with Mental Models
2014-08-08
The initial rubric is equivalent to an exclusive disjunction between the two causal assertions. It yields the following two mental models: ... are important, whereas the functions of artifacts are important (Ahn, 1998). A genetic code is accordingly more critical to being a goat than
Damage-plasticity model of the host rock in a nuclear waste repository
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koudelka, Tomáš; Kruis, Jaroslav, E-mail: kruis@fsv.cvut.cz
The paper describes a damage-plasticity model for the modelling of the host rock environment of a nuclear waste repository. The Radioactive Waste Repository Authority in the Czech Republic assumes the repository will be sited in a granite rock mass, which exhibits anisotropic behaviour in which the strength in tension is lower than in compression. In order to describe this phenomenon, the damage-plasticity model is formulated with the help of the Drucker-Prager yield criterion, which can be set to capture the compressive behaviour, while tensile stress states are described with a scalar isotropic damage model. The damage-plasticity model was implemented in the SIFEL finite element code, and the code was subsequently used for the simulation of the Äspö Pillar Stability Experiment (APSE), which was performed in order to determine yield strength under various conditions in granite rocks similar to those in the Czech Republic. The results of the analysis are presented and discussed in the paper.
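For reference, the criterion named above is a two-parameter, pressure-dependent yield surface; a minimal evaluation sketch, with illustrative constants in place of values fitted to the granite's compressive data:

```python
import numpy as np

def drucker_prager(sig, alpha=0.3, k=10.0):
    """Drucker-Prager yield function f = alpha*I1 + sqrt(J2) - k; f >= 0 means yield.
    sig is a symmetric 3x3 stress tensor (MPa); alpha and k are placeholders."""
    I1 = np.trace(sig)
    s = sig - I1 / 3.0 * np.eye(3)          # deviatoric stress
    J2 = 0.5 * float(np.tensordot(s, s))    # second deviatoric invariant
    return alpha * I1 + np.sqrt(J2) - k

print(drucker_prager(np.diag([-50.0, 0.0, 0.0])))  # uniaxial compression state
```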
Ab-initio Calculation of the XANES of Lithium Phosphates and LiFePO4
NASA Astrophysics Data System (ADS)
Yiu, Y. M.; Yang, Songlan; Wang, Dongniu; Sun, Xueliang; Sham, T. K.
2013-04-01
Lithium iron phosphate has been regarded as a promising cathode material for the next generation lithium ion batteries due to its high specific capacity, superior thermal and cyclic stability [1]. In this study, the XANES (X-ray Absorption Near Edge Structure) spectra of lithium iron phosphate and lithium phosphates of various compositions at the Li K, P L3,2, Fe M3,2 and O K-edges have been simulated self-consistently using ab-initio calculations based on multiple scattering theory (the FEFF9 code) and DFT (Density Functional Theory, the Wien2k code). The lithium phosphates under investigation include LiFePO4, γ-Li3PO4, Li4P2O7 and LiPO3. The calculated spectra are compared to the experimental XANES recorded in total electron yield (TEY) and fluorescence yield (FLY). This work was carried out to assess the XANES of possible phases present in LiFePO4-based Li-ion battery applications [2].
Validation of a Laser-Ray Package in an Eulerian Code
NASA Astrophysics Data System (ADS)
Bradley, Paul; Hall, Mike; McKenty, Patrick; Collins, Tim; Keller, David
2014-10-01
A laser-ray absorption package was recently installed in the RAGE code by the Laboratory for Laser Energetics (LLE). In this presentation, we describe our use of this package to implode Omega 60-beam symmetric direct-drive capsules. The capsules have outer diameters of about 860 microns, CH plastic shell thicknesses between 8 and 32 microns, DD or DT gas fills between 5 and 20 atmospheres, and a 1 ns square pulse of 23 to 27 kJ. These capsule implosions were previously modeled with a calibrated energy source in the outer layer of the capsule, where we matched bang time and burn ion temperature well, but the simulated yields were two to three times higher than the data. We will compare simulations with laser-ray energy deposition to the experiments and compare the results to the yield and spectroscopic data. Work performed by Los Alamos National Laboratory under Contract DE-AC52-06NA25396 for the National Nuclear Security Administration of the U.S. Department of Energy.
A multi-institution evaluation of clinical profile anonymization.
Heatherly, Raymond; Rasmussen, Luke V; Peissig, Peggy L; Pacheco, Jennifer A; Harris, Paul; Denny, Joshua C; Malin, Bradley A
2016-04-01
There is an increasing desire to share de-identified electronic health records (EHRs) for secondary uses, but there are concerns that clinical terms can be exploited to compromise patient identities. Anonymization algorithms mitigate such threats while enabling novel discoveries, but their evaluation has been limited to single institutions. Here, we study how an existing clinical profile anonymization fares at multiple medical centers. We apply a state-of-the-art k-anonymization algorithm, with k set to the standard value 5, to the International Classification of Diseases, ninth edition codes for patients in a hypothyroidism association study at three medical centers: Marshfield Clinic, Northwestern University, and Vanderbilt University. We assess utility when anonymizing at three population levels: all patients in 1) the EHR system; 2) the biorepository; and 3) a hypothyroidism study. We evaluate utility using 1) changes to the number included in the dataset, 2) the number of codes included, and 3) the regions where generalization and suppression were required. Our findings yield several notable results. First, we show that anonymizing in the context of the entire EHR yields a significantly greater quantity of data by reducing the amount of generalized regions from ∼15% to ∼0.5%. Second, ∼70% of codes that needed generalization only generalized two or three codes in the largest anonymization. Sharing large volumes of clinical data in support of phenome-wide association studies is possible while safeguarding privacy to the underlying individuals.
NASA Technical Reports Server (NTRS)
Mashnik, S. G.; Gudima, K. K.; Sierk, A. J.; Moskalenko, I. V.
2002-01-01
Space radiation shield applications and studies of cosmic ray propagation in the Galaxy require reliable cross sections to calculate spectra of secondary particles and yields of the isotopes produced in nuclear reactions induced both by particles and nuclei at energies from threshold to hundreds of GeV per nucleon. Since the data often exist in a very limited energy range or sometimes not at all, the only way to obtain an estimate of the production cross sections is to use theoretical models and codes. Recently, we have developed improved versions of the Cascade-Exciton Model (CEM) of nuclear reactions: the codes CEM97 and CEM2k for description of particle-nucleus reactions at energies up to about 5 GeV. In addition, we have developed a LANL version of the Quark-Gluon String Model (LAQGSM) to describe reactions induced both by particles and nuclei at energies up to hundreds of GeV/nucleon. We have tested and benchmarked the CEM and LAQGSM codes against a large variety of experimental data and have compared their results with predictions by other currently available models and codes. Our benchmarks show that the CEM and LAQGSM codes have predictive powers no worse than other currently used codes and describe many reactions better than other codes; therefore both our codes can be used as reliable event-generators for space radiation shield and cosmic ray propagation applications. The CEM2k code is being incorporated into the transport code MCNPX (and several other transport codes), and we plan to incorporate LAQGSM into MCNPX in the near future. Here, we present the current status of the CEM2k and LAQGSM codes, and show results and applications to studies of cosmic ray propagation in the Galaxy.
Flowfield Comparisons from Three Navier-Stokes Solvers for an Axisymmetric Separate Flow Jet
NASA Technical Reports Server (NTRS)
Koch, L. Danielle; Bridges, James; Khavaran, Abbas
2002-01-01
To meet new noise reduction goals, many concepts to enhance mixing in the exhaust jets of turbofan engines are being studied. Accurate steady state flowfield predictions from state-of-the-art computational fluid dynamics (CFD) solvers are needed as input to the latest noise prediction codes. The main intent of this paper was to ascertain that similar Navier-Stokes solvers run at different sites would yield comparable results for an axisymmetric two-stream nozzle case. Predictions from the WIND and the NPARC codes are compared to previously reported experimental data and results from the CRAFT Navier-Stokes solver. Similar k-epsilon turbulence models were employed in each solver, and identical computational grids were used. Agreement between experimental data and predictions from each code was generally good for mean values. All three codes underpredict the maximum value of turbulent kinetic energy. The predicted locations of the maximum turbulent kinetic energy were farther downstream than seen in the data. A grid study was conducted using the WIND code, and comments about convergence criteria and grid requirements for CFD solutions to be used as input for noise prediction computations are given. Additionally, noise predictions from the MGBK code, using the CFD results from the CRAFT code, NPARC, and WIND as input are compared to data.
Kivisalu, Trisha M; Lewey, Jennifer H; Shaffer, Thomas W; Canfield, Merle L
2016-01-01
The Rorschach Performance Assessment System (R-PAS) aims to provide an evidence-based approach to administration, coding, and interpretation of the Rorschach Inkblot Method (RIM). R-PAS analyzes individualized communications given by respondents to each card to code a wide pool of possible variables. Due to the large number of possible codes that can be assigned to these responses, it is important to consider the concordance rates among different assessors. This study investigated interrater reliability for R-PAS protocols. Data were analyzed from a nonpatient convenience sample of 50 participants who were recruited through networking, local marketing, and advertising efforts from January 2013 through October 2014. Blind recoding was used and discrepancies between the initial and blind coders' ratings were analyzed for each variable with SPSS yielding percent agreement and intraclass correlation values. Data for Location, Space, Contents, Synthesis, Vague, Pairs, Form Quality, Populars, Determinants, and Cognitive and Thematic codes are presented. Rates of agreement for 1,168 responses were higher for more simplistic coding (e.g., Location), whereas agreement was lower for more complex codes (e.g., Cognitive and Thematic codes). Overall, concordance rates achieved good to excellent agreement. Results suggest R-PAS is an effective method with high interrater reliability supporting its empirical basis.
Bayesian Analogy with Relational Transformations
ERIC Educational Resources Information Center
Lu, Hongjing; Chen, Dawn; Holyoak, Keith J.
2012-01-01
How can humans acquire relational representations that enable analogical inference and other forms of high-level reasoning? Using comparative relations as a model domain, we explore the possibility that bottom-up learning mechanisms applied to objects coded as feature vectors can yield representations of relations sufficient to solve analogy…
Brotherhood and College Latinos: A Phenomenological Study
ERIC Educational Resources Information Center
Estrada, Fernando; Mejia, Araceli; Hufana, Alyssa Mae
2017-01-01
An understudied topic is the social experiences of college Latinos. In this study, six men shared their experience of brotherhood or "hermandad". Individual interviews yielded qualitative data that were subjected to inductive coding resulting in seven descriptive themes conveying the essence of brotherhood. The findings and implications…
Parallel DSMC Solution of Three-Dimensional Flow Over a Finite Flat Plate
NASA Technical Reports Server (NTRS)
Nance, Robert P.; Wilmoth, Richard G.; Moon, Bongki; Hassan, H. A.; Saltz, Joel
1994-01-01
This paper describes a parallel implementation of the direct simulation Monte Carlo (DSMC) method. Runtime library support is used for scheduling and execution of communication between nodes, and domain decomposition is performed dynamically to maintain a good load balance. Performance tests are conducted using the code to evaluate various remapping and remapping-interval policies, and it is shown that a one-dimensional chain-partitioning method works best for the problems considered. The parallel code is then used to simulate the Mach 20 nitrogen flow over a finite-thickness flat plate. It is shown that the parallel algorithm produces results which compare well with experimental data. Moreover, it yields significantly faster execution times than the scalar code, as well as very good load-balance characteristics.
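A minimal sketch of the idea behind the one-dimensional chain-partitioning policy found best above: keep cells in their 1-D order and cut the chain into contiguous pieces of roughly equal work. This greedy cut is a stand-in; the runtime library's actual remapping is more sophisticated:

```python
def chain_partition(weights, n_parts):
    """Split an ordered list of per-cell work estimates into contiguous chunks."""
    target = sum(weights) / n_parts
    parts, current, acc = [], [], 0.0
    for w in weights:
        current.append(w)
        acc += w
        if acc >= target and len(parts) < n_parts - 1:
            parts.append(current)
            current, acc = [], 0.0
    parts.append(current)
    return parts

print(chain_partition([3, 1, 4, 1, 5, 9, 2, 6], 3))  # [[3,1,4,1,5],[9,2],[6]]
```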
History of the Nuclei Important for Cosmochemistry
NASA Technical Reports Server (NTRS)
Meyer, Bradley S.
2004-01-01
An essential aspect of studying the nuclei important for cosmochemistry is their production in stars. Over the grant period, we have further developed the Clemson/American University of Beirut stellar evolution code. Through use of a biconjugate-gradient matrix solver, we now routinely solve 10^6 × 10^6 sparse matrices on our desktop computers. This has allowed us to couple nucleosynthesis and convection fully in the 1-D star, which, in turn, provides better estimates of nuclear yields when the mixing and nuclear burning timescales are comparable. We also have incorporated radiation transport into our 1-D supernova explosion code. We used the stellar evolution and explosion codes to compute iron abundances in a 25 solar mass star and compared the results to data from RIMS.
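Solving sparse systems of that scale with a biconjugate-gradient-type Krylov method is routine in modern libraries; a sketch with a synthetic tridiagonal matrix standing in for the stellar-structure Jacobian:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicgstab

n = 1_000_000  # one million unknowns, as in the 10^6 x 10^6 systems above
A = diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x, info = bicgstab(A, b)  # info == 0 signals convergence
print(info, np.linalg.norm(A @ x - b))
```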
O'Leary, Mel; Boscolo, Daria; Breslin, Nicole; Brown, Jeremy M C; Dolbnya, Igor P; Emerson, Chris; Figueira, Catarina; Fox, Oliver J L; Grimes, David Robert; Ivosev, Vladimir; Kleppe, Annette K; McCulloch, Aaron; Pape, Ian; Polin, Chris; Wardlow, Nathan; Currell, Fred J
2018-03-16
Absolute measurements of the radiolytic yield of Fe3+ in a ferrous sulphate dosimeter formulation (6 mM Fe2+), with a 20 keV monoenergetic x-ray beam, are reported. Dose-rate suppression of the radiolytic yield was observed at dose rates lower than, and of a different nature to, those previously reported with x-rays. We present evidence that this effect is most likely due to recombination of free radicals radiolytically produced from water. The method used to make these measurements is also new, and it provides radiolytic yields that are directly traceable to the SI standards system. The data presented provide new and exacting tests of radiation chemistry codes.
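For context on what a radiolytic-yield measurement delivers: the Fe3+ concentration produced scales linearly with dose through the G value. A sketch using the standard Co-60 Fricke G value purely for illustration; the paper's point is precisely that the 20 keV, dose-rate-dependent yield differs:

```python
def fe3_umol_per_l(dose_gy, g_umol_per_j=1.61, density_kg_per_l=1.024):
    """[Fe3+] in umol/L from dose D (Gy): c = G * D * rho (Fricke-type relation)."""
    return g_umol_per_j * dose_gy * density_kg_per_l

print(fe3_umol_per_l(50.0))  # ~82 umol/L for a 50 Gy delivery
```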
Adding kinetics and hydrodynamics to the CHEETAH thermochemical code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fried, L.E., Howard, W.M., Souers, P.C.
1997-01-15
In FY96 we released CHEETAH 1.40, which made extensive improvements in the stability and user friendliness of the code. CHEETAH now has over 175 users in government, academia, and industry. Efforts have also been focused on adding new advanced features to CHEETAH 2.0, which is scheduled for release in FY97. We have added a new chemical kinetics capability to CHEETAH. In the past, CHEETAH assumed complete thermodynamic equilibrium and independence of time. The addition of a chemical kinetic framework will allow for modeling of time-dependent phenomena, such as partial combustion and detonation in composite explosives with large reaction zones. We have implemented a Wood-Kirkwood detonation framework in CHEETAH, which allows for the treatment of nonideal detonations and explosive failure. A second major effort in the project this year has been linking CHEETAH to hydrodynamic codes to yield an improved HE product equation of state. We have linked CHEETAH to 1- and 2-D hydrodynamic codes and have compared the code to experimental data. 15 refs., 13 figs., 1 tab.
Ali, F; Waker, A J; Waller, E J
2014-10-01
Tissue-equivalent proportional counters (TEPC) can potentially be used as a portable and personal dosemeter in mixed neutron and gamma-ray fields, but what hinders this use is their typically large physical size. To formulate compact TEPC designs, the use of a Monte Carlo transport code is necessary to predict the performance of compact designs in these fields. To perform this modelling, three candidate codes were assessed: MCNPX 2.7.E, FLUKA 2011.2 and PHITS 2.24. In each code, benchmark simulations were performed involving the irradiation of a 5-in. TEPC with monoenergetic neutron fields and a 4-in. wall-less TEPC with monoenergetic gamma-ray fields. The frequency and dose mean lineal energies and dose distributions calculated from each code were compared with experimentally determined data. For the neutron benchmark simulations, PHITS produces data closest to the experimental values and for the gamma-ray benchmark simulations, FLUKA yields data closest to the experimentally determined quantities.
Optimal block cosine transform image coding for noisy channels
NASA Technical Reports Server (NTRS)
Vaishampayan, V.; Farvardin, N.
1986-01-01
The two-dimensional block transform coding scheme based on the discrete cosine transform has been studied extensively for image coding applications. While this scheme has proven to be efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is to be transmitted over a memoryless noisy channel; the method involves the design of the quantizers used for encoding the transform coefficients. This algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm based on the steepest-descent method was used, which, under certain convexity conditions on the performance of the channel-optimized quantizers, yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained, and appropriate comparisons against a reference system designed for no channel errors are presented.
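A minimal sketch of the underlying block-transform stage, assuming 8x8 blocks and a plain uniform quantizer (the paper's channel-optimized quantizers and steepest-descent bit allocation are not reproduced):

```python
# Minimal sketch of 2-D block cosine transform coding: per-block forward
# DCT, uniform quantization of the coefficients, inverse DCT. The single
# step size is a stand-in for the per-coefficient bit allocation above.
import numpy as np
from scipy.fft import dctn, idctn

B = 8
def encode_decode(img, step=16.0):
    h, w = (d - d % B for d in img.shape)       # crop to whole blocks
    out = np.empty((h, w))
    for i in range(0, h, B):
        for j in range(0, w, B):
            coef = dctn(img[i:i+B, j:j+B].astype(float), norm="ortho")
            out[i:i+B, j:j+B] = idctn(np.round(coef / step) * step, norm="ortho")
    return out

img = np.random.default_rng(0).integers(0, 256, (64, 64))
rec = encode_decode(img)
print("PSNR:", 10 * np.log10(255**2 / np.mean((img - rec) ** 2)), "dB")
```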
Chen, Yongzhong; Wang, Baoming; Chen, Jianjun; Wang, Xiangnan; Wang, Rui; Peng, Shaofeng; Chen, Longsheng; Ma, Li; Luo, Jian
2015-01-01
Tea oil derived from seeds of Camellia oleifera Abel. is a high-quality edible oil in China. This study isolated full-length cDNAs of Rubisco subunits rbcL and rbcS from C. oleifera. The rbcL has 1,522 bp with a 1,425 bp coding region, encoding 475 amino acids; and the rbcS has 615 bp containing a 528 bp coding region, encoding 176 amino acids. The expression level of the two genes, designated as Co-rbcL and Co-rbcS, was determined in three C. oleifera cultivars: Hengchong 89, Xianglin 1, and Xianglin 14, whose annual oil yields were 546.9, 591.4, and 657.7 kg ha-1, respectively. The Co-rbcL expression in ‘Xianglin 14’ was significantly higher than in ‘Xianglin 1’, and ‘Xianglin 1’ was greater than ‘Hengchong 89’. The expression levels of Co-rbcS in ‘Xianglin 1’ and ‘Xianglin 14’ were similar but were significantly greater than in ‘Hengchong 89’. The net photosynthetic rate of ‘Xianglin 14’ was significantly higher than that of ‘Xianglin 1’, and ‘Xianglin 1’ was higher than ‘Hengchong 89’. Pearson’s correlation analysis showed that seed yields and oil yields were highly correlated with the expression level of Co-rbcL at the P < 0.001 level; and the expression of Co-rbcS was correlated with oil yield at the P < 0.01 level. Net photosynthetic rate was also correlated with oil yields and seed yields at the P < 0.001 and P < 0.01 levels, respectively. Our results suggest that Co-rbcS and Co-rbcL in particular could potentially be molecular markers for early selection of high oil yield cultivars. In combination with the measurement of net photosynthetic rates, the early identification of potential high oil production cultivars would significantly shorten plant breeding time and increase breeding efficiency. PMID:25873921
Baskar, Gurunathan; Sathya, Shree Rajesh K Lakshmi Jai; Jinnah, Riswana Begum; Sahadevan, Renganathan
2011-01-01
Response surface methodology was employed to optimize the concentration of four important cultivation media components, cottonseed oil cake, glucose, NH4Cl, and MgSO4, for maximum medicinal polysaccharide yield by Lingzhi or Reishi medicinal mushroom, Ganoderma lucidum MTCC 1039 in submerged culture. The second-order polynomial model describing the relationship between media components and polysaccharide yield was fitted in coded units of the variables. The high coefficient of determination (R2 = 0.953) indicated an excellent correlation between media components and polysaccharide yield, and the model fitted well with high statistical reliability and significance. The predicted optimum concentration of the media components was 3.0% cottonseed oil cake, 3.0% glucose, 0.15% NH4Cl, and 0.045% MgSO4, with a maximum predicted polysaccharide yield of 819.76 mg/L. The experimental polysaccharide yield at the predicted optimum media components was 854.29 mg/L, which was 4.22% higher than the predicted yield.
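The RSM step itself is compact: fit a second-order polynomial in coded units to the observed yields and solve for the stationary point. The sketch below uses made-up data, not the paper's four-factor design.

```python
# Toy response-surface fit: second-order polynomial in coded units
# (factors scaled to [-1, 1]) and its stationary point. Two factors are
# used for brevity; the study above optimized four.
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, (30, 2))                    # 30 runs, 2 coded factors
y = 800 + 10*X[:, 0] - 50*X[:, 0]**2 - 30*X[:, 1]**2 + rng.normal(0, 5, 30)

# design matrix: 1, x1, x2, x1^2, x2^2, x1*x2
D = np.column_stack([np.ones(30), X[:, 0], X[:, 1],
                     X[:, 0]**2, X[:, 1]**2, X[:, 0] * X[:, 1]])
beta, *_ = np.linalg.lstsq(D, y, rcond=None)

b = beta[1:3]                                      # linear coefficients
B = np.array([[beta[3], beta[5] / 2], [beta[5] / 2, beta[4]]])
x_opt = np.linalg.solve(-2 * B, b)                 # stationary point
print("optimum (coded units):", x_opt,
      "| predicted yield:", beta[0] + b @ x_opt + x_opt @ B @ x_opt)
```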
Predicted Exoplanet Yields for the HabEx Mission Concept
NASA Astrophysics Data System (ADS)
Stark, Christopher; Mennesson, Bertrand; HabEx STDT
2018-01-01
The Habitable Exoplanet Imaging Mission (HabEx) is a concept for a flagship mission to directly image and characterize extrasolar planets around nearby stars and to enable a broad range of general astrophysics. The HabEx Science and Technology Definition Team (STDT) is currently studying two architectures for HabEx. Here we summarize the exoplanet science yield of Architecture A, a 4 m monolithic off-axis telescope that uses a vortex coronagraph and a 72 m external starshade occulter. We summarize the instruments' capabilities, present science goals and observation strategies, and discuss astrophysical assumptions. Using a yield optimization code, we predict the yield of potentially Earth-like extrasolar planets that could be detected, characterized, and searched for signs of habitability and/or life by HabEx. We demonstrate that HabEx could also detect and characterize a wide variety of exoplanets while searching for potentially Earth-like planets.
NASA Technical Reports Server (NTRS)
Whitlow, W., Jr.; Bennett, R. M.
1982-01-01
Since the aerodynamic theory is nonlinear, the method requires the coupling of two iterative processes - an aerodynamic analysis and a structural analysis. A full potential analysis code, FLO22, is combined with a linear structural analysis to yield aerodynamic load distributions on and deflections of elastic wings. This method was used to analyze an aeroelastically-scaled wind tunnel model of a proposed executive-jet transport wing and an aeroelastic research wing. The results are compared with the corresponding rigid-wing analyses, and some effects of elasticity on the aerodynamic loading are noted.
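The coupling of the two iterative processes can be pictured as a fixed-point loop: evaluate aerodynamic loads on the current deflected shape, solve the linear structural problem for new deflections, and repeat to convergence. The sketch below is a toy illustration with an invented load model; FLO22 itself is not involved.

```python
# Toy sketch of the aeroelastic coupling loop: a stand-in "aerodynamic"
# load evaluation alternates with a linear structural solve until the
# wing deflection converges. All numbers are illustrative.
import numpy as np

n = 20                                        # spanwise structural nodes
K = 1e3 * (np.diag(np.full(n, 2.0))
           - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))

def aero_loads(defl):
    """Stand-in for the full-potential solve: loads soften as the wing
    deflects, a crude surrogate for the nonlinear aerodynamics."""
    return 500.0 * np.linspace(1.0, 0.2, n) / (1.0 + 0.5 * np.abs(defl))

defl = np.zeros(n)
for it in range(100):
    loads = aero_loads(defl)                  # "aerodynamic analysis" step
    new_defl = np.linalg.solve(K, loads)      # linear "structural" step
    if np.max(np.abs(new_defl - defl)) < 1e-8:
        break
    defl = 0.5 * defl + 0.5 * new_defl        # under-relaxation for stability
print(f"converged after {it} iterations; tip deflection = {defl[-1]:.4f}")
```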
Augmenting Qualitative Text Analysis with Natural Language Processing: Methodological Study.
Guetterman, Timothy C; Chang, Tammy; DeJonckheere, Melissa; Basu, Tanmay; Scruggs, Elizabeth; Vydiswaran, V G Vinod
2018-06-29
Qualitative research methods are increasingly being used across disciplines because of their ability to help investigators understand the perspectives of participants in their own words. However, qualitative analysis is a laborious and resource-intensive process. To achieve depth, researchers are limited to smaller sample sizes when analyzing text data. One potential method to address this concern is natural language processing (NLP). Qualitative text analysis involves researchers reading data, assigning code labels, and iteratively developing findings; NLP has the potential to automate part of this process. Unfortunately, little methodological research has been done to compare automatic coding using NLP techniques and qualitative coding, which is critical to establish the viability of NLP as a useful, rigorous analysis procedure. The purpose of this study was to compare the utility of a traditional qualitative text analysis, an NLP analysis, and an augmented approach that combines qualitative and NLP methods. We conducted a 2-arm cross-over experiment to compare qualitative and NLP approaches to analyze data generated through 2 text (short message service) message survey questions, one about prescription drugs and the other about police interactions, sent to youth aged 14-24 years. We randomly assigned a question to each of the 2 experienced qualitative analysis teams for independent coding and analysis before receiving NLP results. A third team separately conducted NLP analysis of the same 2 questions. We examined the results of our analyses to compare (1) the similarity of findings derived, (2) the quality of inferences generated, and (3) the time spent in analysis. The qualitative-only analysis for the drug question (n=58) yielded 4 major findings, whereas the NLP analysis yielded 3 findings that missed contextual elements. The qualitative and NLP-augmented analysis was the most comprehensive. For the police question (n=68), the qualitative-only analysis yielded 4 primary findings and the NLP-only analysis yielded 4 slightly different findings. Again, the augmented qualitative and NLP analysis was the most comprehensive and produced the highest quality inferences, increasing our depth of understanding (ie, details and frequencies). In terms of time, the NLP-only approach was quicker than the qualitative-only approach for the drug (120 vs 270 minutes) and police (40 vs 270 minutes) questions. An approach beginning with qualitative analysis followed by qualitative- or NLP-augmented analysis took longer than one beginning with NLP for both the drug (450 vs 240 minutes) and police (390 vs 220 minutes) questions. NLP provides both a foundation to code qualitatively more quickly and a method to validate qualitative findings. NLP methods were able to identify major themes found with traditional qualitative analysis but were not useful in identifying nuances. Traditional qualitative text analysis added important details and context.
USDA-ARS?s Scientific Manuscript database
Higher-level relationships within the Lepidoptera, and particularly within the species-rich subclade Ditrysia, are generally not well understood, although recent studies have yielded progress. 483 taxa spanning 115 of 124 families were sampled for 19 protein-coding nuclear genes. Their aligned nucle...
Development of Switchable Polarity Solvent Draw Solutes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, Aaron D.
Results of a computational fluid dynamic (CFD) study of flow and heat transfer in a printed circuit heat exchanger (PCHE) geometry are presented. CFD results obtained from a two-plate model are compared to corresponding experimental results for the validation. This process provides the basis for further application of the CFD code to PCHE design and performance analysis in a variety of internal flow geometries. As a part of the code verification and validation (V&V) process, CFD simulation of a single semicircular straight channel under laminar isothermal conditions was also performed and compared to theoretical results. This comparison yielded excellent agreement with the theoretical values. The two-plate CFD model based on the experimental PCHE design overestimated the effectiveness and underestimated the pressure drop. However, it is found that the discrepancy between the CFD result and experimental data was mainly caused by the uncertainty in the geometry of heat exchanger during the fabrication. The CFD results obtained using a slightly smaller channel diameter yielded good agreement with the experimental data. A separate investigation revealed that the average channel diameter of the OSU PCHE after the diffusion-bonding was 1.93 mm on the cold fluid side and 1.90 mm on the hot fluid side which are both smaller than the nominal design value. Consequently, the CFD code was shown to have sufficient capability to evaluate the heat exchanger thermal-hydraulic performance.
Simulation studies of chemical erosion on carbon based materials at elevated temperatures
NASA Astrophysics Data System (ADS)
Kenmotsu, T.; Kawamura, T.; Li, Zhijie; Ono, T.; Yamamura, Y.
1999-06-01
We simulated the fluence dependence of the methane reaction yield for carbon under hydrogen bombardment using the ACAT-DIFFUSE code, a simulation code based on a Monte Carlo binary-collision approximation coupled with the solution of diffusion equations. Chemical reaction models for carbon have been studied by Roth and other researchers. Roth's model is suitable for the steady-state methane reaction, but it cannot estimate the fluence dependence of the methane reaction. We therefore derived an empirical formula, based on Roth's model, for the methane reaction. In this empirical formula, we assumed a reaction region where chemical sputtering due to methane formation takes place; the reaction region corresponds to the peak of the incident hydrogen range distribution in the target material. We incorporated this empirical formula into the ACAT-DIFFUSE code. The simulation results show a fluence dependence similar to the experimental one, but the fluence required to reach steady state differs between experiment and simulation.
AutoBayes Program Synthesis System Users Manual
NASA Technical Reports Server (NTRS)
Schumann, Johann; Jafari, Hamed; Pressburger, Tom; Denney, Ewen; Buntine, Wray; Fischer, Bernd
2008-01-01
Program synthesis is the systematic, automatic construction of efficient executable code from high-level declarative specifications. AutoBayes is a fully automatic program synthesis system for the statistical data analysis domain; in particular, it solves parameter estimation problems. It has seen many successful applications at NASA and is currently being used, for example, to analyze simulation results for Orion. The input to AutoBayes is a concise description of a data analysis problem composed of a parameterized statistical model and a goal that is a probability term involving parameters and input data. The output is optimized and fully documented C/C++ code computing the values for those parameters that maximize the probability term. AutoBayes can solve many subproblems symbolically rather than having to rely on numeric approximation algorithms, thus yielding effective, efficient, and compact code. Statistical analysis is faster and more reliable, because effort can be focused on model development and validation rather than manual development of solution algorithms and code.
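The "solve symbolically where possible" idea can be shown with plain SymPy (this is not AutoBayes input syntax): the maximum-likelihood estimates of a Gaussian follow in closed form, exactly the kind of subproblem AutoBayes resolves before emitting numeric code.

```python
# Symbolic solution of a parameter-estimation subproblem: closed-form
# maximum-likelihood estimates for a Gaussian from five symbolic samples.
import sympy as sp

mu = sp.Symbol("mu")
sigma = sp.Symbol("sigma", positive=True)
xs = sp.symbols("x1:6")                       # five symbolic data points

# log-likelihood up to an additive constant
loglik = -5 * sp.log(sigma) - sum((xi - mu) ** 2 for xi in xs) / (2 * sigma**2)

mu_hat = sp.solve(sp.diff(loglik, mu), mu)[0]
sigma_hat = sp.solve(sp.diff(loglik.subs(mu, mu_hat), sigma), sigma)[0]
print(mu_hat)      # (x1 + x2 + x3 + x4 + x5)/5 -- the sample mean
print(sigma_hat)   # square root of the mean squared deviation
```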
Kazachenko, Sergey; Giovinazzo, Mark; Hall, Kyle Wm; Cann, Natalie M
2015-09-15
A custom code for molecular dynamics simulations has been designed to run on CUDA-enabled NVIDIA graphics processing units (GPUs). The double-precision code simulates multicomponent fluids, with intramolecular and intermolecular forces, coarse-grained and atomistic models, holonomic constraints, Nosé-Hoover thermostats, and the generation of distribution functions. Algorithms to compute Lennard-Jones and Gay-Berne interactions, and the electrostatic force using Ewald summations, are discussed. A neighbor list is introduced to improve scaling with respect to system size. Three test systems are examined: SPC/E water; an n-hexane/2-propanol mixture; and a liquid crystal mesogen, 2-(4-butyloxyphenyl)-5-octyloxypyrimidine. Code performance is analyzed for each system. With one GPU, a 33-119 fold increase in performance is achieved compared with the serial code while the use of two GPUs leads to a 69-287 fold improvement and three GPUs yield a 101-377 fold speedup.
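The neighbor-list idea behind the improved scaling can be sketched on the CPU with NumPy (an illustration, not the CUDA code): build the list of pairs within the cutoff plus a skin once, then evaluate Lennard-Jones forces only over that list.

```python
# CPU/NumPy sketch of a neighbor list for Lennard-Jones forces in a cubic
# periodic box: pair candidates within r_cut + skin are listed once, then
# forces are evaluated over the list instead of all O(N^2) pairs.
import numpy as np

rng = np.random.default_rng(1)
L, N, r_cut, skin = 10.0, 200, 2.5, 0.3
pos = rng.uniform(0, L, (N, 3))

def build_neighbor_list(pos):
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)                    # minimum-image convention
    r2 = (d ** 2).sum(-1)
    i, j = np.triu_indices(N, k=1)
    mask = r2[i, j] < (r_cut + skin) ** 2
    return i[mask], j[mask]

def lj_forces(pos, i, j, eps=1.0, sig=1.0):
    d = pos[i] - pos[j]
    d -= L * np.round(d / L)
    r2 = (d ** 2).sum(-1)
    keep = r2 < r_cut ** 2                      # list may include skin pairs
    d, r2 = d[keep], r2[keep]
    s6 = (sig ** 2 / r2) ** 3
    fmag = 24 * eps * (2 * s6 ** 2 - s6) / r2   # |F|/r for an LJ pair
    f = np.zeros_like(pos)
    np.add.at(f, i[keep], fmag[:, None] * d)
    np.add.at(f, j[keep], -fmag[:, None] * d)
    return f

i, j = build_neighbor_list(pos)
print("pairs listed:", len(i), "| max |F|:", np.abs(lj_forces(pos, i, j)).max())
```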
An object-based visual attention model for robotic applications.
Yu, Yuanlong; Mann, George K I; Gosine, Raymond G
2010-10-01
By extending the integrated competition hypothesis, this paper presents an object-based visual attention model that selects one object of interest using low-dimensional features, so that visual perception starts from a fast attentional selection procedure. The proposed attention model involves seven modules: learning of object representations stored in a long-term memory (LTM), preattentive processing, top-down biasing, bottom-up competition, mediation between top-down and bottom-up ways, generation of saliency maps, and perceptual completion processing. It works in two phases: a learning phase and an attending phase. In the learning phase, the corresponding object representation is trained statistically when one object is attended. A dual-coding object representation consisting of local and global codings is proposed. Intensity, color, and orientation features are used to build the local coding, and a contour feature is employed to constitute the global coding. In the attending phase, the model first preattentively segments the visual field into discrete proto-objects using Gestalt rules. If a task-specific object is given, the model recalls the corresponding representation from LTM and deduces the task-relevant feature(s) to evaluate top-down biases. The mediation between automatic bottom-up competition and conscious top-down biasing is then performed to yield a location-based saliency map. By combining location-based saliency within each proto-object, the proto-object-based saliency is evaluated. The most salient proto-object is selected for attention, and it is finally put into the perceptual completion processing module to yield a complete object region. This model has been applied to distinct robotic tasks: detection of task-specific stationary and moving objects. Experimental results under different conditions are shown to validate this model.
Space-time encoding for high frame rate ultrasound imaging.
Misaridis, Thanassis X; Jensen, Jørgen A
2002-05-01
Frame rate in ultrasound imaging can be dramatically increased by using sparse synthetic transmit aperture (STA) beamforming techniques. The two main drawbacks of the method are the low signal-to-noise ratio (SNR) and the motion artifacts that degrade the image quality. In this paper we propose a spatio-temporal encoding for STA imaging based on simultaneous transmission of two quasi-orthogonal tapered linear FM signals. The excitation signals are an up- and a down-chirp with frequency division and a cross-talk of -55 dB. The received signals are first cross-correlated with the appropriate code, then spatially decoded and finally beamformed for each code, yielding two images per emission. The spatial encoding is a Hadamard encoding previously suggested by Chiao et al. [in: Proceedings of the IEEE Ultrasonics Symposium, 1997, p. 1679]. The Hadamard matrix has half the size of the transmit element groups, due to the orthogonality of the temporal encoded wavefronts. Thus, with this method, the frame rate is doubled compared to previous systems. Another advantage is the utilization of temporal codes which are more robust to attenuation. With the proposed technique it is possible to obtain images dynamically focused in both transmit and receive with only two firings. This reduces the problem of motion artifacts. The method has been tested with extensive simulations using Field II. Resolution and SNR are compared with uncoded STA imaging and conventional phased-array imaging. The range resolution remains the same for coded STA imaging with four emissions and is slightly degraded for STA imaging with two emissions due to the -55 dB cross-talk between the signals. The additional proposed temporal encoding adds more than 15 dB to the SNR gain, yielding an SNR of the same order as that of phased-array imaging.
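The spatial (Hadamard) half of the scheme is easy to verify numerically: each emission transmits a Hadamard-weighted combination of element-group signals, and multiplying the received ensemble by H^T/N undoes the mixing in a linear medium. Dimensions below are toy assumptions.

```python
# Toy sketch of Hadamard spatial encoding for synthetic transmit aperture:
# H mixes the per-element responses across emissions; H^T/N recovers them
# because H^T H = N * I for a Hadamard matrix.
import numpy as np
from scipy.linalg import hadamard

Nel, Ns = 4, 256                             # element groups, samples per trace
H = hadamard(Nel)                            # +/-1 encoding matrix

rng = np.random.default_rng(0)
elem_resp = rng.standard_normal((Nel, Ns))   # unknown per-element echoes

received = H @ elem_resp                     # Nel encoded emissions
decoded = (H.T @ received) / Nel             # spatial decoding

print("max reconstruction error:", np.abs(decoded - elem_resp).max())
```

The paper's temporal coding (quasi-orthogonal up/down chirps) then halves the number of emissions again by letting two such encoded wavefronts share each firing.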
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Shi-Hoon; Kim, Dae-Wan; Yang, Hoe-Seok
Planar anisotropy and cup-drawing behavior were investigated for high-strength steel sheets containing different volume fractions of martensite. Macrotexture analysis using XRD was conducted to capture the effect of crystallographic orientation on the planar anisotropy of high-strength steel sheets. A phenomenological yield function, Yld96, which accounts for the anisotropy of yield stress and r-values, was implemented into ABAQUS using the user subroutine UMAT. Cup drawing of high-strength steel sheets was simulated using the FEM code. The profiles of earing and thickness strain were compared with the experimentally measured results.
NASA Astrophysics Data System (ADS)
Davis, S. J.; Egolf, T. A.
1980-07-01
Acoustic characteristics predicted using a recently developed computer code were correlated with measured acoustic data for two helicopter rotors. The analysis is based on a solution of the Ffowcs-Williams-Hawkings (FW-H) equation and includes terms accounting for both the thickness and loading components of the rotational noise. Computations are carried out in the time domain and assume free field conditions. Results of the correlation show that the Farrassat/Nystrom analysis, when using predicted airload data as input, yields fair but encouraging correlation for the first 6 harmonics of blade passage. It also suggests that although the analysis represents a valuable first step towards developing a truly comprehensive helicopter rotor noise prediction capability, further work remains to be done in identifying and incorporating additional noise mechanisms into the code.
New higher-order Godunov code for modelling performance of two-stage light gas guns
NASA Technical Reports Server (NTRS)
Bogdanoff, D. W.; Miller, R. J.
1995-01-01
A new quasi-one-dimensional Godunov code for modeling two-stage light gas guns is described. The code is third-order accurate in space and second-order accurate in time. A very accurate Riemann solver is used. Friction and heat transfer to the tube wall for gases and dense media are modeled and a simple nonequilibrium turbulence model is used for gas flows. The code also models gunpowder burn in the first-stage breech. Realistic equations of state (EOS) are used for all media. The code was validated against exact solutions of Riemann's shock-tube problem, impact of dense media slabs at velocities up to 20 km/sec, flow through a supersonic convergent-divergent nozzle and burning of gunpowder in a closed bomb. Excellent validation results were obtained. The code was then used to predict the performance of two light gas guns (1.5 in. and 0.28 in.) in service at the Ames Research Center. The code predictions were compared with measured pressure histories in the powder chamber and pump tube and with measured piston and projectile velocities. Very good agreement between computational fluid dynamics (CFD) predictions and measurements was obtained. Actual powder-burn rates in the gun were found to be considerably higher (60-90 percent) than predicted by the manufacturer and the behavior of the piston upon yielding appears to differ greatly from that suggested by low-strain rate tests.
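For a flavor of the validation problems mentioned, here is a minimal first-order finite-volume solver of Godunov type for Sod's shock-tube problem; the simple Rusanov flux stands in for the paper's accurate Riemann solver and higher-order reconstruction.

```python
# First-order finite-volume sketch of a Godunov-type scheme on Sod's shock
# tube (1-D Euler equations, ideal gas). Rusanov (local Lax-Friedrichs)
# interface fluxes replace the accurate Riemann solver of the paper.
import numpy as np

g = 1.4                                      # ratio of specific heats
N, tmax, cfl = 400, 0.2, 0.5
dx = 1.0 / N
x = (np.arange(N) + 0.5) * dx
rho = np.where(x < 0.5, 1.0, 0.125)
p = np.where(x < 0.5, 1.0, 0.1)
u = np.zeros(N)
U = np.stack([rho, rho * u, p / (g - 1) + 0.5 * rho * u ** 2])

def flux_and_speed(U):
    rho, mom, E = U
    u = mom / rho
    p = (g - 1) * (E - 0.5 * rho * u ** 2)
    F = np.stack([mom, mom * u + p, (E + p) * u])
    return F, np.abs(u) + np.sqrt(g * p / rho)   # max signal speed per cell

t = 0.0
while t < tmax:
    F, s = flux_and_speed(U)
    dt = min(cfl * dx / s.max(), tmax - t)
    a = np.maximum(s[:-1], s[1:])                # interface wave speed
    Fi = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * a * (U[:, 1:] - U[:, :-1])
    U[:, 1:-1] -= dt / dx * (Fi[:, 1:] - Fi[:, :-1])  # ends held fixed
    t += dt

print("density at x=0.75:", U[0, int(0.75 * N)])      # ~0.27 for Sod at t=0.2
```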
Residential building codes, affordability, and health protection: a risk-tradeoff approach.
Hammitt, J K; Belsky, E S; Levy, J I; Graham, J D
1999-12-01
Residential building codes intended to promote health and safety may produce unintended countervailing risks by adding to the cost of construction. Higher construction costs increase the price of new homes and may increase health and safety risks through "income" and "stock" effects. The income effect arises because households that purchase a new home have less income remaining for spending on other goods that contribute to health and safety. The stock effect arises because suppression of new-home construction leads to slower replacement of less safe housing units. These countervailing risks are not presently considered in code debates. We demonstrate the feasibility of estimating the approximate magnitude of countervailing risks by combining the income effect with three relatively well understood and significant home-health risks. We estimate that a code change that increases the nationwide cost of constructing and maintaining homes by $150 (0.1% of the average cost to build a single-family home) would induce offsetting risks yielding between 2 and 60 premature fatalities or, including morbidity effects, between 20 and 800 lost quality-adjusted life years (both discounted at 3%) each year the code provision remains in effect. To provide a net health benefit, the code change would need to reduce risk by at least this amount. Future research should refine these estimates, incorporate quantitative uncertainty analysis, and apply a full risk-tradeoff approach to real-world case studies of proposed code changes.
Ex-Vessel Core Melt Modeling Comparison between MELTSPREAD-CORQUENCH and MELCOR 2.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robb, Kevin R.; Farmer, Mitchell; Francis, Matthew W.
System-level code analyses by both United States and international researchers predict major core melting, bottom head failure, and corium-concrete interaction for Fukushima Daiichi Unit 1 (1F1). Although system codes such as MELCOR and MAAP are capable of capturing a wide range of accident phenomena, they currently do not contain detailed models for evaluating some ex-vessel core melt behavior. However, specialized codes containing more detailed modeling are available for melt spreading, such as MELTSPREAD, as well as for long-term molten corium-concrete interaction (MCCI) and debris coolability, such as CORQUENCH. In a preceding study, Enhanced Ex-Vessel Analysis for Fukushima Daiichi Unit 1: Melt Spreading and Core-Concrete Interaction Analyses with MELTSPREAD and CORQUENCH, the MELTSPREAD-CORQUENCH codes predicted that the 1F1 core melt readily cooled, in contrast to predictions by MELCOR. The user community has taken notice and is in the process of updating their system codes, specifically MAAP and MELCOR, to improve and reduce conservatism in their ex-vessel core melt models. This report investigates why the MELCOR v2.1 code, compared to the MELTSPREAD and CORQUENCH 3.03 codes, yields differing predictions of ex-vessel melt progression. To accomplish this, the differences in the treatment of the ex-vessel melt with respect to melt spreading and long-term coolability are examined. The differences in modeling approaches are summarized, and a comparison of example code predictions is provided.
From Theory to Practice: Measuring end-of-life communication quality using multiple goals theory.
Van Scoy, L J; Scott, A M; Reading, J M; Chuang, C H; Chinchilli, V M; Levi, B H; Green, M J
2017-05-01
To describe how multiple goals theory can be used as a reliable and valid measure (i.e., coding scheme) of the quality of conversations about end-of-life issues. We analyzed transcripts from 17 conversations in which 68 participants (mean age = 51 years) played a game that prompted discussion in response to open-ended questions about end-of-life issues. Conversations (mean duration = 91 min) were audio-recorded and transcribed. Communication quality was assessed by three coders who assigned numeric scores rating how well individuals accomplished task, relational, and identity goals in the conversation. The coding measure, which results in a quantifiable outcome, yielded strong reliability (intra-class correlation range = 0.73-0.89 and Cronbach's alpha range = 0.69-0.89 for each of the coded domains) and validity (using multilevel nonlinear modeling, we detected significant variability in scores between games for each of the coded domains, all p-values <0.02). Our coding scheme provides a theory-based measure of end-of-life conversation quality that is superior to other methods of measuring communication quality. Our description of the coding method enables researchers to adapt and apply this measure to communication interventions in other clinical contexts.
Prediction task guided representation learning of medical codes in EHR.
Cui, Liwen; Xie, Xiaolei; Shen, Zuojun
2018-06-18
There have been rapidly growing applications using machine learning models for predictive analytics in Electronic Health Records (EHR) to improve the quality of hospital services and the efficiency of healthcare resource utilization. A fundamental and crucial step in developing such models is to convert medical codes in EHR to feature vectors. These medical codes are used to represent diagnoses or procedures. Their vector representations have a tremendous impact on the performance of machine learning models. Recently, some researchers have utilized representation learning methods from Natural Language Processing (NLP) to learn vector representations of medical codes. However, most previous approaches are unsupervised, i.e. the generation of medical code vectors is independent from prediction tasks. Thus, the obtained feature vectors may be inappropriate for a specific prediction task. Moreover, unsupervised methods often require a lot of samples to obtain reliable results, but most practical problems have very limited patient samples. In this paper, we develop a new method called Prediction Task Guided Health Record Aggregation (PTGHRA), which aggregates health records guided by prediction tasks, to construct training corpus for various representation learning models. Compared with unsupervised approaches, representation learning models integrated with PTGHRA yield a significant improvement in predictive capability of generated medical code vectors, especially for limited training samples.
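The unsupervised baseline that PTGHRA improves on can be sketched in a few lines: treat each patient's record as a "sentence" of medical codes and learn vectors with word2vec. The ICD-like tokens below are invented for illustration.

```python
# Sketch of the *unsupervised* baseline discussed above: word2vec over
# patient records treated as sentences of medical codes (gensim 4 API).
# The codes are made-up ICD-10-like tokens, not real patient data.
from gensim.models import Word2Vec

patient_records = [
    ["E11.9", "I10", "Z79.4"],        # diabetes, hypertension, insulin use
    ["I10", "I50.9", "N18.3"],        # hypertension, heart failure, CKD
    ["E11.9", "N18.3", "Z79.4"],
    ["I50.9", "I10", "E11.9"],
]

model = Word2Vec(sentences=patient_records, vector_size=16,
                 window=5, min_count=1, sg=1, epochs=200, seed=0)

print(model.wv["E11.9"].shape)                  # feature vector for one code
print(model.wv.most_similar("I10", topn=2))     # codes with similar contexts
```

PTGHRA's point is that such vectors are task-agnostic; aggregating the records according to the prediction task before training is what yields the reported gains.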
Relative fission product yield determination in the USGS TRIGA Mark I reactor
NASA Astrophysics Data System (ADS)
Koehl, Michael A.
Fission product yield data sets are one of the most important and fundamental compilations of basic information in the nuclear industry. These data have a wide range of applications, which include nuclear fuel burnup and nonproliferation safeguards. Relative fission yields constitute a major fraction of the reported yield data and reduce the number of required absolute measurements. Radiochemical separations of fission products reduce interferences, facilitate the measurement of low-level radionuclides, and are instrumental in the analysis of low-yielding symmetrical fission products. They are especially useful in the measurement of the valley nuclides and those on the extreme wings of the mass yield curve, including lanthanides, where absolute yields have high errors. This overall project was conducted in three stages: characterization of the neutron flux in irradiation positions within the U.S. Geological Survey TRIGA Mark I Reactor (GSTR), determining the mass attenuation coefficients of precipitates used in radiochemical separations, and measuring the relative fission products in the GSTR. Using the Westcott convention, the Westcott flux, modified spectral index, neutron temperature, and gold-based cadmium ratios were determined for various sampling positions in the USGS TRIGA Mark I reactor. The differential neutron energy spectrum measurement was obtained using the computer iterative code SAND-II-SNL. The mass attenuation coefficients for molecular precipitates were determined through experiment and compared to results using the EGS5 Monte Carlo computer code. Difficulties associated with sufficient production of fission product isotopes in research reactors limit the ability to complete a direct, experimental assessment of mass attenuation coefficients for these isotopes. Experimental attenuation coefficients of radioisotopes produced through neutron activation agree well with the EGS5 calculated results. This suggests mass attenuation coefficients of molecular precipitates can be approximated using EGS5, especially in the instance of radioisotopes produced predominantly through uranium fission. Relative fission product yields were determined for three sampling positions in the USGS TRIGA Mark I reactor through radiochemical analysis. The relative mass yield distribution for valley nuclides decreases with epithermal neutrons compared to thermal neutrons. Additionally, a proportionality constant which related the measured beta activity of a fission product to the number of fissions that occur in a sample of irradiated uranium was determined for the detector used in this study and used to determine the thermal and epithermal flux. These values agree well with a previous study which used activation foils to determine the flux. The results of this project clearly demonstrate that R-values can be measured in the GSTR.
21st Century Cyber Security: Legal Authorities and Requirements
2012-03-22
Cyber warfare has risen to the level of strategic effect. Exigent threats in cyberspace are a critical U.S. strategic vulnerability for which U.S...operations cross many sections of United States Code. But, they have not yielded a genuine whole-of-government approach. This SRP argues that cyber warfare has
Efficient Bit-to-Symbol Likelihood Mappings
NASA Technical Reports Server (NTRS)
Moision, Bruce E.; Nakashima, Michael A.
2010-01-01
This innovation is an efficient algorithm designed to perform bit-to-symbol and symbol-to-bit likelihood mappings that represent a significant portion of the complexity of an error-correction code decoder for high-order constellations. Recent implementation of the algorithm in hardware has yielded an 8-percent reduction in overall area relative to the prior design.
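A symbol-to-bit likelihood mapping of the kind being optimized can be sketched for a small Gray-labelled constellation with the max-log approximation (an illustrative stand-in, not the innovation's algorithm):

```python
# Max-log symbol-to-bit likelihood mapping for a Gray-labelled 4-PAM
# constellation: each bit LLR is the difference of the best symbol metrics
# with that bit equal to 1 versus 0.
import numpy as np

points = np.array([-3.0, -1.0, 1.0, 3.0])             # 4-PAM amplitudes
labels = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])   # Gray bit labels

def bit_llrs(y, noise_var):
    """Max-log LLR (log p(b=0)/p(b=1)) of each bit for received sample y."""
    m = (y - points) ** 2 / (2 * noise_var)           # negative log-metrics
    llrs = []
    for b in range(labels.shape[1]):
        m0 = m[labels[:, b] == 0].min()               # best symbol with bit 0
        m1 = m[labels[:, b] == 1].min()               # best symbol with bit 1
        llrs.append(m1 - m0)                          # positive favors bit 0
    return np.array(llrs)

print(bit_llrs(y=0.8, noise_var=0.5))
```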
ERIC Educational Resources Information Center
Cunningham, Charles E.; Rimas, Heather; Mielko, Stephanie; Mapp, Cailin; Cunningham, Lesley; Buchanan, Don; Vaillancourt, Tracy; Chen, Yvonne; Deal, Ken; Marcus, Madalyn
2016-01-01
Prevention programs yield modest reductions in bullying in North American schools. This study explored the perspective of educators regarding factors limiting the impact of these initiatives. Transcripts from nineteen 90-min focus groups with 103 educators were coded thematically. Educators felt that off-site incidents, cyberbullying, and the…
NASA Astrophysics Data System (ADS)
Woo, K. M.; Betti, R.; Shvarts, D.; Bose, A.; Patel, D.; Yan, R.; Chang, P.-Y.; Mannion, O. M.; Epstein, R.; Delettrez, J. A.; Charissis, M.; Anderson, K. S.; Radha, P. B.; Shvydky, A.; Igumenshchev, I. V.; Gopalaswamy, V.; Christopherson, A. R.; Sanz, J.; Aluie, H.
2018-05-01
The study of Rayleigh-Taylor instability in the deceleration phase of inertial confinement fusion implosions is carried out using the three-dimensional (3-D) radiation-hydrodynamic Eulerian parallel code DEC3D. We show that the yield-over-clean is a strong function of the residual kinetic energy (RKE) for low modes. Our analytical models indicate that the behavior of larger hot-spot volumes observed in low modes and the consequential pressure degradation can be explained in terms of increasing the RKE. These results are derived using a simple adiabatic implosion model of the deceleration phase as well as through an extensive set of 3-D single-mode simulations using the code DEC3D. The effect of the bulk velocity broadening on ion temperature asymmetries is analyzed for different mode numbers ℓ = 1-12. The jet observed in the low mode ℓ = 1 is shown to cause the largest ion temperature variation in the mode spectrum. The vortices of high modes within the cold bubbles are shown to cause lower ion temperature variations than low modes.
Hourly simulation of a Ground-Coupled Heat Pump system
NASA Astrophysics Data System (ADS)
Naldi, C.; Zanchini, E.
2017-01-01
In this paper, we present a MATLAB code for the hourly simulation of a whole Ground-Coupled Heat Pump (GCHP) system, based on the g-functions previously obtained by Zanchini and Lazzari. The code applies both to on-off heat pumps and to inverter-driven ones. It is employed to analyse the effects of the inverter and of the total length of the Borehole Heat Exchanger (BHE) field on the mean seasonal COP (SCOP) and on the mean seasonal EER (SEER) of a GCHP system designed for a residential house with 6 apartments in Bologna, north-central Italy, with dominant heating loads. A BHE field with 3 in-line boreholes is considered, with the length of each BHE either 75 m or 105 m. The results show that the increase of the BHE length yields a SCOP enhancement of about 7%, while the SEER remains nearly unchanged. The replacement of the on-off heat pump by an inverter-driven one yields a SCOP enhancement of about 30% and a SEER enhancement of about 50%. The results demonstrate the importance of employing inverter-driven heat pumps for GCHP systems.
Analysis of direct-drive capsule compression experiments on the Iskra-5 laser facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gus'kov, S. Yu.; Demchenko, N. N.; Zhidkov, N. V.
2010-09-15
We have analyzed and numerically simulated our experiments on the compression of DT-gas-filled glass capsules under irradiation by a small number of beams on the Iskra-5 facility (12 beams) at the second harmonic of an iodine laser (λ = 0.66 μm) for a laser pulse energy of 2 kJ and duration of 0.5 ns in the case of asymmetric irradiation and compression. Our simulations include the construction of a target illumination map and a histogram of the target surface illumination distribution; 1D capsule compression simulations based on the DIANA code corresponding to various target surface regions; and 2D compression simulations based on the NUTCY code corresponding to the illumination conditions. We have succeeded in reproducing the shape of the compressed region at the time of maximum compression and the reduction in neutron yield (compared to the 1D simulations) to the experimentally observed values. For the Iskra-5 conditions, we have considered targets that can provide a more symmetric compression and a higher neutron yield.
Castro-Chavez, Fernando
2012-01-01
Background: Three binary representations of the genetic code according to the ancient I Ching of Fu-Xi will be presented, depending on their defragging capabilities by pairing based on three biochemical properties of the nucleic acids: H-bonds, Purine/Pyrimidine rings, and the Keto-enol/Amino-imino tautomerism, the last pair yielding a 32/32 single-strand self-annealed genetic code and I Ching tables. Methods: Our working tool is the ancient binary I Ching's resulting genetic code chromosomes defragged by vertical and by horizontal pairing, reverse engineered into non-binaries of 2D rotating 4×4×4 circles and 8×8 squares and into one 3D 100% symmetrical 16×4 tetrahedron coupled to a functional tetrahedron with apical signaling and central hydrophobicity (codon formula: 4[1(1)+1(3)+1(4)+4(2)]; 5:5, 6:6 in man) forming a stella octangula, and compared to Nirenberg's 16×4 codon table (1965) pairing the first two nucleotides of the 64 codons in axis y. Results: One horizontal and one vertical defragging had the start Met at the center. Two, both horizontal and vertical pairings produced two pairs of 2×8×4 genetic code chromosomes naturally arranged (M and I), rearranged by semi-introversion of central purines or pyrimidines (M' and I') and by clustering hydrophobic amino acids; their quasi-identity was disrupted by amino acids with odd codons (Met and Tyr pairing to Ile and TGA Stop); in all instances, the 64-grid 90° rotational ability was restored. Conclusions: We defragged three I Ching representations of the genetic code while emphasizing Nirenberg's historical finding. The synthetic genetic code chromosomes obtained reflect the protective strategy of enzymes with a similar function, having both humans and mammals a biased G-C dominance of three H-bonds in the third nucleotide of their most used codons per amino acid, as seen in one chromosome of the i, M and M' genetic codes, while a two H-bond A-T dominance was found in their complementary chromosome, as seen in invertebrates and plants. The reverse engineering of chromosome I' into 2D rotating circles and squares was undertaken, yielding a 100% symmetrical 3D geometry which was coupled to a previously obtained genetic code tetrahedron in order to differentiate the start methionine from the methionine that is acting as a codifying non-start codon. PMID:23431415
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pigni, M.T., E-mail: pignimt@ornl.gov; Francis, M.W.; Gauld, I.C.
A recent implementation of ENDF/B-VII.1 independent fission product yields and nuclear decay data identified inconsistencies in the data caused by the use of updated nuclear schemes in the decay sub-library that are not reflected in legacy fission product yield data. Recent changes in the decay data sub-library, particularly the delayed neutron branching fractions, result in calculated fission product concentrations that do not agree with the cumulative fission yields in the library as well as with experimental measurements. To address these issues, a comprehensive set of independent fission product yields was generated for thermal and fission spectrum neutron-induced fission for ²³⁵,²³⁸U and ²³⁹,²⁴¹Pu in order to provide a preliminary assessment of the updated fission product yield data consistency. These updated independent fission product yields were utilized in the ORIGEN code to compare the calculated fission product inventories with experimentally measured inventories, with particular attention given to the noble gases. Another important outcome of this work is the development of fission product yield covariance data necessary for fission product uncertainty quantification. The evaluation methodology combines a sequential Bayesian method to guarantee consistency between independent and cumulative yields along with the physical constraints on the independent yields. This work was motivated by the need to improve the performance of the ENDF/B-VII.1 library for stable and long-lived fission products. The revised fission product yields and the new covariance data are proposed as a revision to the fission yield data currently in ENDF/B-VII.1.
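The sequential Bayesian step has a compact generalized-least-squares form: a prior vector of independent yields and its covariance are updated by a linear constraint or measurement. The numbers below are purely illustrative, not evaluation data.

```python
# Toy sketch of a sequential Bayesian (GLS) update: prior independent
# yields y with covariance V, constrained by a measurement m = A y with
# covariance R (e.g., a measured cumulative yield along a decay chain).
import numpy as np

y = np.array([0.030, 0.061, 0.029])          # prior independent yields
V = np.diag((0.10 * y) ** 2)                 # 10% prior uncertainties

A = np.array([[1.0, 1.0, 1.0]])              # chain sum -> cumulative yield
m = np.array([0.118])                        # measured cumulative yield
R = np.array([[(0.02 * 0.118) ** 2]])        # 2% measurement uncertainty

S = A @ V @ A.T + R                          # innovation covariance
K = V @ A.T @ np.linalg.inv(S)               # gain
y_post = y + K @ (m - A @ y)                 # updated yields
V_post = V - K @ A @ V                       # updated covariance

print(y_post, np.sqrt(np.diag(V_post)))
```

Note that V_post acquires off-diagonal terms even though V was diagonal; constraints of this kind are exactly how an evaluation generates correlated fission-yield covariance data.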
Toburen, L. H.; McLawhorn, S. L.; McLawhorn, R. A.; Carnes, K. D.; Dingfelder, M.; Shinpaugh, J. L.
2013-01-01
Absolute doubly differential electron emission yields were measured from thin films of amorphous solid water (ASW) after the transmission of 6 MeV protons and 19 MeV (1 MeV/nucleon) fluorine ions. The ASW films were frozen on thin (1-μm) copper foils cooled to approximately 50 K. Electrons emitted from the films were detected as a function of angle in both the forward and backward direction and as a function of the film thickness. Electron energies were determined by measuring the ejected electron time of flight, a technique that optimizes the accuracy of measuring low-energy electron yields, where the effects of molecular environment on electron transport are expected to be most evident. Relative electron emission yields were normalized to an absolute scale by comparison of the integrated total yields for proton-induced electron emission from the copper substrate to values published previously. The absolute doubly differential yields from ASW are presented along with integrated values, providing single differential and total electron emission yields. These data may provide benchmark tests of Monte Carlo track structure codes commonly used for assessing the effects of radiation quality on biological effectiveness. PMID:20681805
NASA Astrophysics Data System (ADS)
Titarenko, Yu. E.; Batyaev, V. F.; Pavlov, K. V.; Titarenko, A. Yu.; Zhivun, V. M.; Chauzova, M. V.; Balyuk, S. A.; Bebenin, P. V.; Ignatyuk, A. V.; Mashnik, S. G.; Leray, S.; Boudard, A.; David, J. C.; Mancusi, D.; Cugnon, J.; Yariv, Y.; Nishihara, K.; Matsuda, N.; Kumawat, H.; Stankovskiy, A. Yu.
2016-06-01
The paper presents the measured cumulative yields of 44Ti for natCr, 56Fe, natNi and 93Nb samples irradiated by protons at the energy range 0.04-2.6 GeV. The obtained excitation functions are compared with calculations of the well-known codes: ISABEL, Bertini, INCL4.2+ABLA, INCL4.5+ABLA07, PHITS, CASCADE07 and CEM03.02. The predictive power of these codes regarding the studied nuclides is analyzed.
Basic biology and therapeutic implications of lncRNA.
Khorkova, O; Hsiao, J; Wahlestedt, C
2015-06-29
Long non-coding RNAs (lncRNA), a class of non-coding RNA molecules recently identified largely due to the efforts of the FANTOM, and later the GENCODE and ENCODE consortia, have been a subject of intense investigation in the past decade. Extensive efforts to gain a deeper understanding of lncRNA biology have yielded evidence of their diverse structural and regulatory roles in protecting chromosome integrity, maintaining genomic architecture, X chromosome inactivation, imprinting, transcription, translation and epigenetic regulation. Here we briefly review recent studies in the field of lncRNA biology, focusing mostly on mammalian species, and discuss their therapeutic implications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nasrabadi, M. N., E-mail: mnnasrabadi@ast.ui.ac.ir; Sepiani, M.
2015-03-30
Production of medical radioisotopes is one of the most important tasks in the field of nuclear technology. These radioactive isotopes are mainly produced through a variety of nuclear processes. In this research, excitation functions and nuclear reaction mechanisms are studied for simulation of the production of these radioisotopes in the TALYS, EMPIRE and LISE++ reaction codes; then, parameters and different models of nuclear level density, one of the most important components in statistical reaction models, are adjusted for optimum production of the desired radioactive yields.
NASA Astrophysics Data System (ADS)
Nasrabadi, M. N.; Sepiani, M.
2015-03-01
Production of medical radioisotopes is one of the most important tasks in the field of nuclear technology. These radioactive isotopes are mainly produced through a variety of nuclear processes. In this research, excitation functions and nuclear reaction mechanisms are studied for simulation of the production of these radioisotopes in the TALYS, EMPIRE & LISE++ reaction codes; then, parameters and different models of nuclear level density, one of the most important components in statistical reaction models, are adjusted for optimum production of the desired radioactive yields.
NASA Technical Reports Server (NTRS)
Lawson, Gary; Poteat, Michael; Sosonkina, Masha; Baurle, Robert; Hammond, Dana
2016-01-01
In this work, several mini-apps have been created to enhance a real-world application performance, namely the VULCAN code for complex flow analysis developed at the NASA Langley Research Center. These mini-apps explore hybrid parallel programming paradigms with Message Passing Interface (MPI) for distributed memory access and either Shared MPI (SMPI) or OpenMP for shared memory accesses. Performance testing shows that MPI+SMPI yields the best execution performance, while requiring the largest number of code changes. A maximum speedup of 23X was measured for MPI+SMPI, but only 10X was measured for MPI+OpenMP.
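The Shared-MPI idea can be sketched with mpi4py's MPI-3 shared-memory windows (a minimal sketch with assumed sizes, not the VULCAN mini-apps): ranks on one node map a single array instead of each holding a private copy.

```python
# Hedged sketch of MPI + shared memory (SMPI) with mpi4py: ranks on the
# same node share one array through an MPI-3 window. Run with, e.g.:
#   mpiexec -n 4 python shared_mpi_demo.py
import numpy as np
from mpi4py import MPI

world = MPI.COMM_WORLD
node = world.Split_type(MPI.COMM_TYPE_SHARED)   # one comm per shared node

n = 1_000_000
itemsize = MPI.DOUBLE.Get_size()
nbytes = n * itemsize if node.rank == 0 else 0  # rank 0 allocates per node
win = MPI.Win.Allocate_shared(nbytes, itemsize, comm=node)

buf, _ = win.Shared_query(0)                    # all ranks map rank 0's block
arr = np.ndarray(buffer=buf, dtype="d", shape=(n,))

if node.rank == 0:
    arr[:] = np.arange(n)                       # fill once per node
node.Barrier()

lo = node.rank * n // node.size                 # each rank works on a slice
hi = (node.rank + 1) * n // node.size
total = node.reduce(arr[lo:hi].sum(), op=MPI.SUM, root=0)
if node.rank == 0:
    print("node-shared sum:", total)
```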
NASA Astrophysics Data System (ADS)
Boscolo, D.; Krämer, M.; Durante, M.; Fuss, M. C.; Scifoni, E.
2018-04-01
The production, diffusion, and interaction of particle-beam-induced water-derived radicals is studied with the pre-chemical and chemical modules of the Monte Carlo particle track structure code TRAX, based on a step-by-step approach. After a description of the model implemented, the chemical evolution of the most important products of water radiolysis is studied for electron, proton, helium, and carbon ion radiation at different energies. The validity of the model is verified by comparing the calculated time- and LET-dependent yields with experimental data from the literature and other simulation approaches.
Recursive time-varying filter banks for subband image coding
NASA Technical Reports Server (NTRS)
Smith, Mark J. T.; Chung, Wilson C.
1992-01-01
Filter banks and wavelet decompositions that employ recursive filters have been considered previously and are recognized for their efficiency in partitioning the frequency spectrum. This paper presents an analysis of a new infinite impulse response (IIR) filter bank in which these computationally efficient filters may be changed adaptively in response to the input. The filter bank is presented and discussed in the context of finite-support signals with the intended application in subband image coding. In the absence of quantization errors, exact reconstruction can be achieved, and it is shown that, with the proper choice of an adaptation scheme, IIR time-varying filter banks can yield improvement over conventional ones.
Elliptical orbit performance computer program
NASA Technical Reports Server (NTRS)
Myler, T. R.
1981-01-01
A FORTRAN-coded computer program that generates and plots the elliptical-orbit performance capability of space boosters for presentation purposes is described. Orbital performance capability of space boosters is typically presented as payload weight as a function of perigee and apogee altitudes. These parameters are derived from a parametric computer simulation of the booster flight, which yields the payload weight as a function of velocity and altitude at insertion. The process of converting from velocity and altitude to apogee and perigee altitude and plotting the results as a function of payload weight is mechanized in the ELOPE program. Program theory, user instructions, input/output definitions, subroutine descriptions, and detailed FORTRAN coding information are included.
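The core conversion that ELOPE mechanizes is standard two-body orbit arithmetic. A minimal sketch, assuming the insertion state is given as altitude, speed, and flight-path angle:

```python
# Insertion velocity/altitude -> apogee/perigee altitudes via the vis-viva
# energy and angular-momentum relations (two-body, Earth-centered).
import numpy as np

MU = 398_600.4418      # km^3/s^2, Earth's gravitational parameter
RE = 6378.137          # km, Earth's equatorial radius

def apogee_perigee(alt_km, v_kms, gamma_deg=0.0):
    r = RE + alt_km
    eps = v_kms ** 2 / 2 - MU / r                    # specific orbital energy
    a = -MU / (2 * eps)                              # semi-major axis
    h = r * v_kms * np.cos(np.radians(gamma_deg))    # specific ang. momentum
    e = np.sqrt(max(0.0, 1 + 2 * eps * h ** 2 / MU ** 2))
    return a * (1 + e) - RE, a * (1 - e) - RE        # apogee, perigee alts

print(apogee_perigee(alt_km=200.0, v_kms=7.85))      # slightly super-circular
```

Sweeping insertion velocity and altitude over the payload-weight map from the flight simulation then yields exactly the apogee/perigee performance plots described.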
Benchmark Testing of a New 56Fe Evaluation for Criticality Safety Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leal, Luiz C; Ivanov, E.
2015-01-01
The SAMMY code was used to evaluate resonance parameters of the 56Fe cross section in the resolved resonance energy range of 0–2 MeV using transmission data and capture, elastic, inelastic, and double-differential elastic cross sections. The resonance analysis was performed with SAMMY, which fits R-matrix resonance parameters using the generalized least-squares technique (Bayes' theory). The evaluation yielded a set of resonance parameters that reproduced the experimental data very well, along with a resonance parameter covariance matrix for data uncertainty calculations. Benchmark tests were conducted to assess the evaluation's performance in benchmark calculations.
The primary transcriptome of the marine diazotroph Trichodesmium erythraeum IMS101
NASA Astrophysics Data System (ADS)
Pfreundt, Ulrike; Kopf, Matthias; Belkin, Natalia; Berman-Frank, Ilana; Hess, Wolfgang R.
2014-08-01
Blooms of the dinitrogen-fixing marine cyanobacterium Trichodesmium considerably contribute to new nitrogen inputs into tropical oceans. Intriguingly, only 60% of the Trichodesmium erythraeum IMS101 genome sequence codes for protein, compared with ~85% in other sequenced cyanobacterial genomes. The extensive non-coding genome fraction suggests space for an unusually high number of unidentified, potentially regulatory non-protein-coding RNAs (ncRNAs). To identify the transcribed fraction of the genome, here we present a genome-wide map of transcriptional start sites (TSS) at single nucleotide resolution, revealing the activity of 6,080 promoters. We demonstrate that T. erythraeum has the highest number of actively splicing group II introns and the highest percentage of TSS yielding ncRNAs of any bacterium examined to date. We identified a highly transcribed retroelement that serves as template repeat for the targeted mutation of at least 12 different genes by mutagenic homing. Our findings explain the non-coding portion of the T. erythraeum genome by the transcription of an unusually high number of non-coding transcripts in addition to the known high incidence of transposable elements. We conclude that riboregulation and RNA maturation-dependent processes constitute a major part of the Trichodesmium regulatory apparatus.
Adaptive coded aperture imaging in the infrared: towards a practical implementation
NASA Astrophysics Data System (ADS)
Slinger, Chris W.; Gilholm, Kevin; Gordon, Neil; McNie, Mark; Payne, Doug; Ridley, Kevin; Strens, Malcolm; Todd, Mike; De Villiers, Geoff; Watson, Philip; Wilson, Rebecca; Dyer, Gavin; Eismann, Mike; Meola, Joe; Rogers, Stanley
2008-08-01
An earlier paper [1] discussed the merits of adaptive coded apertures for use as lensless imaging systems in the thermal infrared and visible. It was shown how diffractive (rather than the more conventional geometric) coding could be used, and that 2D intensity measurements from multiple mask patterns could be combined and decoded to yield enhanced imagery. Initial experimental results in the visible band were presented. Unfortunately, radiosity calculations, also presented in that paper, indicated that the signal to noise performance of systems using this approach was likely to be compromised, especially in the infrared. This paper will discuss how such limitations can be overcome, and some of the tradeoffs involved. Experimental results showing tracking and imaging performance of these modified, diffractive, adaptive coded aperture systems in the visible and infrared will be presented. The subpixel imaging and tracking performance is compared to that of conventional imaging systems and shown to be superior. System size, weight and cost calculations indicate that the coded aperture approach, employing novel photonic MOEMS micro-shutter architectures, has significant merits for a given level of performance in the MWIR when compared to more conventional imaging approaches.
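The (non-adaptive) coded-aperture principle behind this work can be sketched numerically: the detector records the scene circularly convolved with the mask, and correlating with a ±1 decoding array approximately recovers the scene. The random mask below only illustrates the idea; practical systems use carefully designed (and, here, adaptive diffractive) patterns.

```python
# Toy correlation-decoded coded aperture: shadowgram = scene (*) mask;
# reconstruction = shadowgram correlated with the +/-1 decoding array G.
# For a balanced random mask, mask-corr-G approximates a delta function.
import numpy as np

rng = np.random.default_rng(2)
n = 64
mask = (rng.random((n, n)) < 0.5).astype(float)    # open/closed cells
G = 2 * mask - 1                                   # decoding array

scene = np.zeros((n, n))
scene[20, 30] = 1.0                                # two point sources
scene[40, 12] = 0.6

F = np.fft.fft2
shadow = np.fft.ifft2(F(scene) * F(mask)).real     # circular convolution
recon = np.fft.ifft2(F(shadow) * np.conj(F(G))).real / mask.sum()

print("brightest pixel:", np.unravel_index(recon.argmax(), recon.shape))
```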
The Scylla Multi-Code Comparison Project
NASA Astrophysics Data System (ADS)
Maller, Ariyeh; Stewart, Kyle; Bullock, James; Oñorbe, Jose; Scylla Team
2016-01-01
Cosmological hydrodynamical simulations are one of the main techniques used to understand galaxy formation and evolution. However, it is far from clear to what extent different numerical techniques and different implementations of feedback yield different results. The Scylla Multi-Code Comparison Project seeks to address this issue by running identical initial condition simulations with different popular hydrodynamic galaxy formation codes. Here we compare simulations of a Milky Way mass halo using the codes enzo, ramses, art, arepo and gizmo-psph. The different runs produce galaxies with a variety of properties. There are many differences, but also many similarities. For example we find that in all runs cold flow disks exist; extended gas structures, far beyond the galactic disk, that show signs of rotation. Also, the angular momentum of warm gas in the halo is much larger than the angular momentum of the dark matter. We also find notable differences between runs. The temperature and density distribution of hot gas can differ by over an order of magnitude between codes and the stellar mass to halo mass relation also varies widely. These results suggest that observations of galaxy gas halos and the stellar mass to halo mass relation can be used to constrain the correct model of feedback.
The historical biogeography of Mammalia
Springer, Mark S.; Meredith, Robert W.; Janecka, Jan E.; Murphy, William J.
2011-01-01
Palaeobiogeographic reconstructions are underpinned by phylogenies, divergence times and ancestral area reconstructions, which together yield ancestral area chronograms that provide a basis for proposing and testing hypotheses of dispersal and vicariance. Methods for area coding include multi-state coding with a single character, binary coding with multiple characters and string coding. Ancestral reconstruction methods are divided into parsimony versus Bayesian/likelihood approaches. We compared nine methods for reconstructing ancestral areas for placental mammals. Ambiguous reconstructions were a problem for all methods. Important differences resulted from coding areas based on the geographical ranges of extant species versus the geographical provenance of the oldest fossil for each lineage. Africa and South America were reconstructed as the ancestral areas for Afrotheria and Xenarthra, respectively. Most methods reconstructed Eurasia as the ancestral area for Boreoeutheria, Euarchontoglires and Laurasiatheria. The coincidence of molecular dates for the separation of Afrotheria and Xenarthra at approximately 100 Ma with the plate tectonic sundering of Africa and South America hints at the importance of vicariance in the early history of Placentalia. Dispersal has also been important including the origins of Madagascar's endemic mammal fauna. Further studies will benefit from increased taxon sampling and the application of new ancestral area reconstruction methods. PMID:21807730
The primitive code and repeats of base oligomers as the primordial protein-encoding sequence.
Ohno, S; Epplen, J T
1983-01-01
Even if the prebiotic self-replication of nucleic acids and the subsequent emergence of primitive, enzyme-independent tRNAs are accepted as plausible, the origin of life by spontaneous generation still appears improbable. This is because the just-emerged primitive translational machinery had to cope with base sequences that were not preselected for their coding potentials. Particularly if the primitive mitochondria-like code with four chain-terminating base triplets preceded the universal code, the translation of long, randomly generated base sequences at this critical stage would have merely resulted in the production of short oligopeptides instead of long polypeptide chains. We present the base sequence of a mouse transcript containing tetranucleotide repeats conserved during evolution. Even if translated in accordance with the primitive mitochondria-like code, this transcript in its three reading frames can yield 245-, 246-, and 251-residue-long tetrapeptidic periodical polypeptides that are already acquiring longer periodicities. We contend that the first set of base sequences translated at the beginning of life were such oligonucleotide repeats. By quickly acquiring longer periodicities, their products must have soon gained characteristic secondary structures: alpha-helical, beta-sheet, or both. PMID:6574491
Decoding DNA labels by melting curve analysis using real-time PCR.
Balog, József A; Fehér, Liliána Z; Puskás, László G
2017-12-01
Synthetic DNA has been used as an authentication code for a diverse number of applications. However, existing decoding approaches are based on either DNA sequencing or the determination of DNA length variations. Here, we present a simple alternative protocol for labeling different objects using a small number of short DNA sequences that differ in their melting points. Code amplification and decoding can be done in two steps using quantitative PCR (qPCR). To obtain a DNA barcode with high complexity, we defined 8 template groups, each having 4 different DNA templates, yielding 15^8 (>2.5 billion) combinations of different individual melting temperature (Tm) values and corresponding ID codes. The reproducibility and specificity of the decoding was confirmed by using the most complex template mixture, which had 32 different products in 8 groups with different Tm values. The industrial applicability of our protocol was also demonstrated by labeling a drone with an oil-based paint containing a predefined DNA code, which was then successfully decoded. The method presented here consists of a simple code system based on a small number of synthetic DNA sequences and a cost-effective, rapid decoding protocol using a few qPCR reactions, enabling a wide range of authentication applications.
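The code-space arithmetic follows from each group contributing one of the 15 non-empty subsets of its 4 templates; the short Python sketch below makes this concrete (group and template identities are hypothetical, not taken from the paper).

    from itertools import chain, combinations

    TEMPLATES_PER_GROUP = 4
    GROUPS = 8

    def nonempty_subsets(items):
        """All non-empty subsets of a group's templates (15 for 4 templates)."""
        return list(chain.from_iterable(
            combinations(items, r) for r in range(1, len(items) + 1)))

    options_per_group = len(nonempty_subsets(range(TEMPLATES_PER_GROUP)))  # 15
    code_space = options_per_group ** GROUPS                               # 15**8
    print(code_space)  # 2562890625, i.e. the ">2.5 billion" combinations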
An adapted yield criterion for the evolution of subsequent yield surfaces
NASA Astrophysics Data System (ADS)
Küsters, N.; Brosius, A.
2017-09-01
In numerical analysis of sheet metal forming processes, the anisotropic material behaviour is often modelled with isotropic work hardening and an average Lankford coefficient. In contrast, experimental observations show an evolution of the Lankford coefficients, which can be associated with a change of the yield surface due to kinematic and distortional hardening. Commonly, extensive efforts are required to describe these phenomena. In this paper, an isotropic material model based on the Yld2000-2d criterion is adapted with an evolving yield exponent in order to change the shape of the yield surface. The yield exponent is linked to the accumulated plastic strain. This change has the effect of rotating the yield surface normal. As the normal is directly related to the Lankford coefficient, the change can be used to model the evolution of the Lankford coefficient during yielding. The paper focuses on the numerical implementation of the adapted material model for the FE-code LS-Dyna, mpi-version R7.1.2-d. A recently introduced identification scheme [1] is used to obtain the parameters for the evolving yield surface and is briefly described for the proposed model. The suitability for numerical analysis is discussed for deep drawing processes in general. Efforts for material characterization and modelling are compared to other common yield surface descriptions. Besides experimental effort and achieved accuracy, the potential flexibility of material models and the risk of ambiguity during identification are of major interest in this paper.
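One plausible way to realize such a strain-linked exponent is a saturating evolution law; the functional form and constants below are illustrative assumptions only, not the calibration reported for the LS-Dyna implementation.

    import numpy as np

    # Hypothetical saturating law for the Yld2000-2d exponent 'a' as a
    # function of accumulated plastic strain eps_p; the abstract states the
    # linkage but not the form, so this is a sketch, not the paper's model.
    def yield_exponent(eps_p, a0=8.0, a_inf=6.0, c=5.0):
        """Exponent evolves from a0 toward a_inf as plastic strain accumulates."""
        return a_inf + (a0 - a_inf) * np.exp(-c * eps_p)

    for eps in (0.0, 0.1, 0.5):
        print(eps, yield_exponent(eps))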
McBee, Morgan P; Laor, Tal; Pryor, Rebecca M; Smith, Rachel; Hardin, Judy; Ulland, Lisa; May, Sally; Zhang, Bin; Towbin, Alexander J
2018-02-01
The purpose of this study was to adapt our radiology reports to provide the documentation required for specific International Classification of Diseases, Tenth Revision (ICD-10) diagnosis coding. Baseline data were analyzed to identify the reports with the greatest number of unspecified ICD-10 codes assigned by computer-assisted coding software. A two-part quality improvement initiative was subsequently implemented. The first component involved improving clinical histories by utilizing technologists to obtain information directly from the patients or caregivers, which was then imported into the radiologist's report within the speech recognition software. The second component involved standardization of report terminology and creation of four different structured report templates to determine which yielded the fewest reports with an unspecified ICD-10 code assigned by an automated coding engine. In all, 12,077 reports were included in the baseline analysis. Of these, 5,151 (43%) had an unspecified ICD-10 code. The majority of deficient reports were for radiographs (n = 3,197; 62%). Inadequacies included insufficient clinical histories and a lack of detailed fracture descriptions. Therefore, the focus was standardizing terminology and testing different structured reports for radiographs obtained for fractures. At baseline, 58% of radiography reports contained a complete clinical history, with improvement to >95% 8 months later. The total number of reports that contained an unspecified ICD-10 code improved from 43% at baseline to 27% at completion of this study (P < .0001). The number of radiology studies with a specific ICD-10 code can be improved through quality improvement methodology, specifically through the use of technologist-acquired clinical histories and structured reporting. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pigni, Marco T; Francis, Matthew W; Gauld, Ian C
A recent implementation of ENDF/B-VII.1 independent fission product yields and nuclear decay data identified inconsistencies in the data caused by the use of updated nuclear schemes in the decay sub-library that are not reflected in legacy fission product yield data. Recent changes in the decay data sub-library, particularly the delayed neutron branching fractions, result in calculated fission product concentrations that are incompatible with the cumulative fission yields in the library, and also with experimental measurements. A comprehensive set of independent fission product yields was generated for thermal and fission spectrum neutron induced fission of 235,238U and 239,241Pu in order to provide a preliminary assessment of the consistency of the updated fission product yield data. These updated independent fission product yields were utilized in the ORIGEN code to compare calculated fission product inventories with experimentally measured inventories, with particular attention given to the noble gases. An important outcome of this work is the development of the fission product yield covariance data necessary for fission product uncertainty quantification. The evaluation methodology combines a sequential Bayesian method to guarantee consistency between independent and cumulative yields along with the physical constraints on the independent yields. This work was motivated by the need to improve the performance of the ENDF/B-VII.1 library for stable and long-lived cumulative yields, given the inconsistency of the ENDF/B-VII.1 fission product yield and decay data sub-libraries. The revised fission product yields and the new covariance data are proposed as a revision to the fission yield data currently in ENDF/B-VII.1.
Measuring the ionization balance of gold in a low-density plasma of importance to ICF
DOE Office of Scientific and Technical Information (OSTI.GOV)
May, M; Beiersdorfer, P; Schneider, M
Charge state distributions (CSDs) have been determined in low density (~10^12 cm^-3) gold plasmas having either a monoenergetic beam (E_Beam = 2.66, 3.53 and 4.54 keV) or experimentally simulated thermal electron distributions (T_e = 2.0, 2.5 and 3.0 keV). These plasmas were created in the Livermore electron beam ion traps EBIT-I and EBIT-II. Line emission and radiative recombination features of Ni- to Kr-like gold ions were recorded in the x-ray region with a crystal spectrometer and a photometrically calibrated microcalorimeter. The CSDs in the experimentally simulated thermal plasmas were inferred by fitting the observed 4f→3d and 5f→3d lines with synthetic spectra from the Hebrew University Lawrence Livermore Atomic Code (HULLAC). Additionally, the CSDs in the beam plasmas were inferred both from fitting the line emission and fitting the radiative recombination emission to calculations from the General Relativistic Atomic Structure Program (GRASP). Despite the relatively simple atomic physics in the low density plasma, differences existed between the experimental CSDs and the simulations from several available codes (e.g. RIGEL). Our experimental CSD relied upon accurate electron impact cross sections provided by HULLAC. To determine their reliability, we have experimentally determined the cross sections for several of the n=3→4 and n=3→5 excitations in Ni- to Ga-like Au and compared them to distorted wave calculations. Recent Au spectra recorded during experiments at the HELEN laser facility are presented and compared with those from EBIT-I and EBIT-II.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mezghani, Najla; Mnif, Mouna; Mkaouar-Rebai, Emna, E-mail: emna_mkaouar@mail2world.com
Highlights: We report a patient with Wolfram syndrome and dilated cardiomyopathy. We detected the mitochondrial ND1 m.3337G>A mutation in 3 tested tissues (blood leukocytes, buccal mucosa and skeletal muscle). Long-range PCR amplification revealed the presence of multiple mitochondrial deletions in the skeletal muscle. The deletions remove several tRNA and protein-coding genes. Abstract: Wolfram syndrome (WFS) is a rare hereditary disorder also known as DIDMOAD (diabetes insipidus, diabetes mellitus, optic atrophy, and deafness). It is a heterogeneous disease, and full characterization of all clinical and biological features of this disorder is difficult. The wide spectrum of clinical expression, affecting several organs and tissues, and the similarity in phenotype between patients with Wolfram syndrome and those with certain types of respiratory chain diseases suggest mitochondrial DNA (mtDNA) involvement in Wolfram syndrome patients. We report a Tunisian patient with clinical features of moderate Wolfram syndrome including diabetes, dilated cardiomyopathy and neurological complications. The results showed the presence of the mitochondrial ND1 m.3337G>A mutation in almost homoplasmic form in 3 tested tissues of the proband (blood leukocytes, buccal mucosa and skeletal muscle). In addition, long-range PCR amplification revealed the presence of multiple deletions of the mitochondrial DNA extracted from the patient's skeletal muscle, removing several tRNA and protein-coding genes. Our study reports a Tunisian patient with clinical features of moderate Wolfram syndrome associated with cardiomyopathy, in whom we detected the ND1 m.3337G>A mutation with multiple mitochondrial deletions.
Linard, Joshua I.
2013-01-01
Mitigating the effects of salt and selenium on water quality in the Grand Valley and lower Gunnison River Basin in western Colorado is a major concern for land managers. Previous modeling indicated that the models could be improved by including more detailed geospatial data and by using a more rigorous method for model development. After evaluating all possible combinations of geospatial variables, four multiple linear regression models were developed to estimate irrigation-season salt yield, nonirrigation-season salt yield, irrigation-season selenium yield, and nonirrigation-season selenium yield. The adjusted r-squared and residual standard error (in units of log-transformed yield) of the models were, respectively, 0.87 and 2.03 for the irrigation-season salt model, 0.90 and 1.25 for the nonirrigation-season salt model, 0.85 and 2.94 for the irrigation-season selenium model, and 0.93 and 1.75 for the nonirrigation-season selenium model. The four models were used to estimate yields and loads from contributing areas corresponding to 12-digit hydrologic unit codes in the lower Gunnison River Basin study area. Each of the 175 contributing areas was ranked according to its estimated mean seasonal yield of salt and selenium.
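As a schematic of the model form only (the predictors, coefficients, and data below are synthetic stand-ins, not the study's variables), a log-transformed yield regression of this kind can be fit by ordinary least squares:

    import numpy as np

    # Minimal sketch: multiple linear regression on log-transformed seasonal
    # yield, with the residual standard error computed as in the abstract.
    rng = np.random.default_rng(0)
    n = 175                               # contributing areas in the study area
    X = rng.normal(size=(n, 3))           # hypothetical geospatial predictors
    log_yield = X @ np.array([0.8, -0.3, 0.5]) + rng.normal(scale=1.5, size=n)

    A = np.column_stack([np.ones(n), X])  # add an intercept column
    beta, *_ = np.linalg.lstsq(A, log_yield, rcond=None)
    resid = log_yield - A @ beta
    rse = np.sqrt(resid @ resid / (n - A.shape[1]))  # residual standard error
    print(beta, rse)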
Weng, Jianfeng; Li, Bo; Liu, Changlin; Yang, Xiaoyan; Wang, Hongwei; Hao, Zhuanfang; Li, Mingshun; Zhang, Degui; Ci, Xiaoke; Li, Xinhai; Zhang, Shihuang
2013-07-05
Kernel weight, controlled by quantitative trait loci (QTL), is an important component of grain yield in maize. Cytokinins (CKs) participate in determining grain morphology and final grain yield in crops. ZmIPT2, which is expressed mainly in the basal transfer cell layer, endosperm, and embryo during maize kernel development, encodes an isopentenyl transferase (IPT) that is involved in CK biosynthesis. The coding region of ZmIPT2 was sequenced across a panel of 175 maize inbred lines that are currently used in Chinese maize breeding programs. Only 16 single nucleotide polymorphisms (SNPs) and seven haplotypes were detected among these inbred lines. Nucleotide diversity (π) within the ZmIPT2 window and coding region were 0.347 and 0.0047, respectively, and they were significantly lower than the mean nucleotide diversity value of 0.372 for maize Chromosome 2 (P < 0.01). Association mapping revealed that a single nucleotide change from cytosine (C) to thymine (T) in the ZmIPT2 coding region, which converted a proline residue into a serine residue, was significantly associated with hundred kernel weight (HKW) in three environments (P < 0.05), and explained 4.76% of the total phenotypic variation. In vitro characterization suggests that the dimethylallyl diphosphate (DMAPP) IPT activity of ZmIPT2-T is higher than that of ZmIPT2-C, as the amounts of adenosine triphosphate (ATP), adenosine diphosphate (ADP), and adenosine monophosphate (AMP) consumed by ZmIPT2-T were 5.48-, 2.70-, and 1.87-fold, respectively, greater than those consumed by ZmIPT2-C. The effects of artificial selection on the ZmIPT2 coding region were evaluated using Tajima's D tests across six subgroups of Chinese maize germplasm, with the most frequent favorable allele identified in subgroup PB (Partner B). These results showed that ZmIPT2, which is associated with kernel weight, was subjected to artificial selection during the maize breeding process. ZmIPT2-T had higher IPT activity than ZmIPT2-C, and this favorable allele for kernel weight could be used in molecular marker-assisted selection for improvement of grain yield components in Chinese maize breeding programs.
Airborne monitoring of crop canopy temperatures for irrigation scheduling and yield prediction
NASA Technical Reports Server (NTRS)
Millard, J. P.; Jackson, R. D.; Goettelman, R. C.; Reginato, R. J.; Idso, S. B.; Lapado, R. L.
1977-01-01
Airborne and ground measurements were made on April 1 and 29, 1976, over a USDA test site consisting mostly of wheat in various stages of water stress, but also including alfalfa and bare soil. These measurements were made to evaluate the feasibility of measuring crop temperatures from aircraft so that a parameter termed stress degree day, SDD, could be computed. Ground studies have shown that SDD is a valuable indicator of a crop's water needs, and that it can be related to irrigation scheduling and yield. The aircraft measurement program required predawn and afternoon flights coincident with minimum and maximum crop temperatures. Airborne measurements were made with an infrared line scanner and with color IR photography. The scanner data were registered, subtracted, and color-coded to yield pseudo-colored temperature-difference images. Pseudo-colored images reading directly in daily SDD increments were also produced. These maps enable a user to assess plant water status and thus determine irrigation needs and crop yield potentials.
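For orientation, the stress degree day index in this line of work accumulates daily canopy-minus-air temperature differences taken near midday; a minimal sketch with made-up temperatures (the flights measured the canopy temperatures remotely):

    # Sketch of the stress-degree-day (SDD) accumulation: one canopy-minus-air
    # temperature difference per day, summed over the season. All values below
    # are hypothetical, for illustration only.
    canopy_temp = [31.2, 32.5, 30.8, 33.1]   # deg C, near midday
    air_temp    = [29.0, 29.5, 30.0, 29.8]   # deg C at the same times

    sdd = sum(tc - ta for tc, ta in zip(canopy_temp, air_temp))
    print(round(sdd, 1))  # persistently positive totals suggest water stress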
Lee, Hee-Seock; Ban, Syuichi; Sanami, Toshiya; Takahashi, Kazutoshi; Sato, Tatsuhiko; Shin, Kazuo; Chung, Chinwha
2005-01-01
A study of differential photo-neutron yields by irradiation with 2 GeV electrons has been carried out. In this extension of a previous study in which measurements were made at an angle of 90 degrees relative to incident electrons, the differential photo-neutron yield was obtained at two other angles, 48 degrees and 140 degrees, to study its angular characteristics. Photo-neutron spectra were measured using a pulsed beam time-of-flight method and a BC418 plastic scintillator. The reliable range of neutron energy measurement was 8-250 MeV. The neutron spectra were measured for 10 X0-thick Cu, Sn, W and Pb targets. The angular distribution characteristics, together with the previous results for 90 degrees, are presented in the study. The experimental results are compared with Monte Carlo calculation results. The yields predicted by MCNPX 2.5 tend to underestimate the measured ones. The same trend holds for the comparison results using the EGS4 and PICA3 codes.
NASA Technical Reports Server (NTRS)
Davis, S. J.; Egolf, T. A.
1980-01-01
Acoustic characteristics predicted using a recently developed computer code were correlated with measured acoustic data for two helicopter rotors. The analysis is based on a solution of the Ffowcs Williams-Hawkings (FW-H) equation and includes terms accounting for both the thickness and loading components of the rotational noise. Computations are carried out in the time domain and assume free field conditions. Results of the correlation show that the Farassat/Nystrom analysis, when using predicted airload data as input, yields fair but encouraging correlation for the first 6 harmonics of blade passage. It also suggests that although the analysis represents a valuable first step towards developing a truly comprehensive helicopter rotor noise prediction capability, further work remains to be done in identifying and incorporating additional noise mechanisms into the code.
NASA Technical Reports Server (NTRS)
Carpenter, M. H.
1988-01-01
The generalized chemistry version of the computer code SPARK is extended to include two higher-order numerical schemes, yielding fourth-order spatial accuracy for the inviscid terms. The new and old formulations are used to study the influences of finite rate chemical processes on nozzle performance. A determination is made of the computationally optimum reaction scheme for use in high-enthalpy nozzles. Finite rate calculations are compared with the frozen and equilibrium limits to assess the validity of each formulation. In addition, the finite rate SPARK results are compared with the constant ratio of specific heats (gamma) SEAGULL code, to determine its accuracy in variable gamma flow situations. Finally, the higher-order SPARK code is used to calculate nozzle flows having species stratification. Flame quenching occurs at low nozzle pressures, while for high pressures, significant burning continues in the nozzle.
Some practical universal noiseless coding techniques, part 2
NASA Technical Reports Server (NTRS)
Rice, R. F.; Lee, J. J.
1983-01-01
This report is an extension of earlier work (Part 1) which provided practical adaptive techniques for the efficient noiseless coding of a broad class of data sources characterized by only partially known and varying statistics (JPL Publication 79-22). The results here, while still claiming such general applicability, focus primarily on the noiseless coding of image data. A fairly complete and self-contained treatment is provided. Particular emphasis is given to the requirements of the forthcoming Voyager II encounters of Uranus and Neptune. Performance evaluations are supported both graphically and pictorially. Expanded definitions of the algorithms in Part 1 yield a computationally improved set of options for applications requiring efficient performance at entropies above 4 bits/sample. These expanded definitions include as an important subset, a somewhat less efficient but extremely simple "FAST" compressor which will be used at the Voyager Uranus encounter.
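These options build on the Rice/Golomb family of codes; the sketch below shows a basic Golomb-Rice encoder for illustration of the family only, not the specific algorithm definitions or the FAST compressor in the report.

    def rice_encode(n, k):
        """Golomb-Rice code for a non-negative integer n with parameter k:
        a unary quotient (n >> k), a 0 stop bit, then the k remainder bits."""
        q = n >> k
        remainder_bits = format(n & ((1 << k) - 1), "b").zfill(k) if k else ""
        return "1" * q + "0" + remainder_bits

    # Small residuals cost few bits; adaptive schemes switch k to track the
    # local statistics of the source.
    for n in (0, 3, 9):
        print(n, rice_encode(n, k=2))   # -> 000, 011, 11001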
NASA Astrophysics Data System (ADS)
Riera-Palou, Felip; den Brinker, Albertus C.
2007-12-01
This paper introduces a new audio and speech broadband coding technique based on the combination of a pulse excitation coder and a standardized parametric coder, namely, MPEG-4 high-quality parametric coder. After presenting a series of enhancements to regular pulse excitation (RPE) to make it suitable for the modeling of broadband signals, it is shown how pulse and parametric codings complement each other and how they can be merged to yield a layered bit stream scalable coder able to operate at different points in the quality bit rate plane. The performance of the proposed coder is evaluated in a listening test. The major result is that the extra functionality of the bit stream scalability does not come at the price of a reduced performance since the coder is competitive with standardized coders (MP3, AAC, SSC).
NASA Technical Reports Server (NTRS)
Mcbeath, Giorgio; Ghorashi, Bahman; Chun, Kue
1993-01-01
A thermal NO(x) prediction model is developed to interface with a CFD, k-epsilon based code. A converged solution from the CFD code is the input to the postprocessing model for prediction of thermal NO(x). The model uses a decoupled analysis to estimate the equilibrium level (NO(x))_e, which is the constant-rate limit. This value is used to estimate the flame NO(x) and in turn predict the rate of formation at each node using a two-step Zeldovich mechanism. The rate is fixed on the NO(x) production rate plot by estimating the time to reach equilibrium by a differential analysis based on the reaction O + N2 = NO + N. The rate is integrated in the nonequilibrium time space based on the residence time at each node in the computational domain. The sum of all nodal predictions yields the total NO(x) level.
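For illustration, the rate-limiting Zeldovich step O + N2 -> NO + N gives a thermal NO formation rate of roughly d[NO]/dt = 2 k1 [O][N2] under the usual partial-equilibrium assumption; the Arrhenius constants below are commonly quoted illustrative values, not the constants used in the paper's model.

    import numpy as np

    def dNO_dt(T, O, N2):
        """Thermal NO formation rate (mol/cm^3/s), partial-equilibrium form.
        The factor 2 reflects that the N atom produced is rapidly consumed
        by N + O2 -> NO + O, yielding a second NO."""
        k1 = 1.8e14 * np.exp(-38370.0 / T)   # cm^3/(mol*s), illustrative fit
        return 2.0 * k1 * O * N2

    # Hypothetical nodal values: temperature and O, N2 concentrations (mol/cm^3)
    print(dNO_dt(T=2000.0, O=1e-10, N2=1e-5))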
FPGA acceleration of rigid-molecule docking codes
Sukhwani, B.; Herbordt, M.C.
2011-01-01
Modelling the interactions of biological molecules, or docking, is critical both to understanding basic life processes and to designing new drugs. The field programmable gate array (FPGA) based acceleration of a recently developed, complex, production docking code is described. The authors found that it is necessary to extend their previous three-dimensional (3D) correlation structure in several ways, most significantly to support simultaneous computation of several correlation functions. The result for small-molecule docking is a 100-fold speed-up of a section of the code that represents over 95% of the original run-time. An additional 2% is accelerated through a previously described method, yielding a total acceleration of 36× over a single core and 10× over a quad-core. This approach is found to be an ideal complement to graphics processing unit (GPU) based docking, which excels in the protein–protein domain. PMID:21857870
Constellation labeling optimization for bit-interleaved coded APSK
NASA Astrophysics Data System (ADS)
Xiang, Xingyu; Mo, Zijian; Wang, Zhonghai; Pham, Khanh; Blasch, Erik; Chen, Genshe
2016-05-01
This paper investigates the constellation and mapping optimization for amplitude phase shift keying (APSK) modulation, which is deployed in Digital Video Broadcasting Satellite - Second Generation (DVB-S2) and Digital Video Broadcasting - Satellite services to Handhelds (DVB-SH) broadcasting standards due to its merits of power and spectral efficiency together with robustness against nonlinear distortion. The mapping optimization is performed for 32-APSK according to combined cost functions related to Euclidean distance and mutual information. A binary switching algorithm and its modified version are used to minimize the cost function and the estimated error between the original and received data. The optimized constellation mapping is tested by combining DVB-S2 standard Low-Density Parity-Check (LDPC) codes in both Bit-Interleaved Coded Modulation (BICM) and BICM with iterative decoding (BICM-ID) systems. The simulated results validate the proposed constellation labeling optimization scheme, which yields better performance than the conventional 32-APSK constellation defined in the DVB-S2 standard.
The social value of research: interrogating the paradoxes.
Ghoshal, Rakhi
2018-01-01
The relation between science and society is, simply put, very complex. In the history of global bioethics, it is the Code of Nuremberg which foregrounded the acute ways in which biomedical/scientific research could (negatively) impact society; this 1947 Code became the point of reference for subsequent research concerning humans. The Code "required that medical experiments on human beings must have the potential to yield fruitful results for the good of society". The Declaration of Helsinki (DoH), 1964 reinstated this concern by stressing that "clinical research cannot be legitimately carried out unless the risks to participants are justified by the importance of the research" - invoking the idea of the "social value" of research. However, in these initial days, "social value" of research was interpreted more in terms of the moral balance of research, a balance to ensure that the benefits of research unambiguously outweighed its risks as far as its participants were concerned.
Structural Code Considerations for Solar Rooftop Installations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dwyer, Stephen F.; Dwyer, Brian P.; Sanchez, Alfred
2014-12-01
Residential rooftop solar panel installations are limited in part by the high cost of structure-related code requirements for field installation. Permitting solar installations is difficult because there is a belief among residential permitting authorities that typical residential rooftops may be structurally inadequate to support the additional load associated with a photovoltaic (PV) solar installation. Typical engineering methods used to calculate stresses on a roof structure involve simplifying assumptions that reduce a complex non-linear structure to a basic determinate beam. This method of analysis neglects the composite action of the entire roof structure, yielding an analysis based on a single rafter or truss top chord and, consequently, an overly conservative structural assessment. A literature review was conducted to gain a better understanding of the conservative nature of the regulations and codes governing residential construction and the associated structural system calculations.
Region-Based Prediction for Image Compression in the Cloud.
Begaint, Jean; Thoreau, Dominique; Guillotel, Philippe; Guillemot, Christine
2018-04-01
Thanks to the increasing number of images stored in the cloud, external image similarities can be leveraged to efficiently compress images by exploiting inter-images correlations. In this paper, we propose a novel image prediction scheme for cloud storage. Unlike current state-of-the-art methods, we use a semi-local approach to exploit inter-image correlation. The reference image is first segmented into multiple planar regions determined from matched local features and super-pixels. The geometric and photometric disparities between the matched regions of the reference image and the current image are then compensated. Finally, multiple references are generated from the estimated compensation models and organized in a pseudo-sequence to differentially encode the input image using classical video coding tools. Experimental results demonstrate that the proposed approach yields significant rate-distortion performance improvements compared with the current image inter-coding solutions such as high efficiency video coding.
A Simple Secure Hash Function Scheme Using Multiple Chaotic Maps
NASA Astrophysics Data System (ADS)
Ahmad, Musheer; Khurana, Shruti; Singh, Sushmita; AlSharari, Hamed D.
2017-06-01
Chaotic maps possess high parameter sensitivity, random-like behavior and one-way computations, which favor the construction of cryptographic hash functions. In this paper, we present a novel hash function scheme which uses multiple chaotic maps to generate efficient variable-sized hash codes. The message is divided into four parts, and each part is processed by a different 1D chaotic map unit, yielding an intermediate hash code. The four codes are concatenated into two blocks, then each block is processed through a 2D chaotic map unit separately. The final hash value is generated by combining the two partial hash codes. Simulation analyses such as distribution of hashes, statistical properties of confusion and diffusion, message and key sensitivity, collision resistance and flexibility are performed. The results reveal that the proposed hash scheme is simple, efficient and holds comparable capabilities when compared with some recent chaos-based hash algorithms.
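A minimal sketch in the spirit of this construction (not the authors' exact scheme; the split, map parameters, and mixing below are illustrative assumptions) shows how message parts can drive chaotic maps to produce partial codes that are then combined:

    # Toy chaotic hash: four message parts drive 1D logistic maps; the four
    # 32-bit partial codes are merged into one value. Illustrative only.
    def logistic_iterate(x, r=3.99, rounds=64):
        for _ in range(rounds):
            x = r * x * (1.0 - x)
        return x

    def chaotic_hash(message: bytes, bits=128):
        parts = [message[i::4] for i in range(4)]   # split into 4 parts
        codes = []
        for p in parts:
            x = 0.3141592653589793                  # fixed initial condition
            for b in p:
                x = logistic_iterate((x + (b + 1) / 257.0) % 1.0 or 0.5)
            codes.append(int(x * (1 << 32)) & 0xFFFFFFFF)
        h = 0
        for c in codes:                             # combine partial codes
            h = (h << 32) | c
        return h & ((1 << bits) - 1)

    print(hex(chaotic_hash(b"hello world")))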
Gas stripping and mixing in galaxy clusters: a numerical comparison study
NASA Astrophysics Data System (ADS)
Heß, Steffen; Springel, Volker
2012-11-01
The ambient hot intrahalo gas in clusters of galaxies is constantly fed and stirred by infalling galaxies, a process that can be studied in detail with cosmological hydrodynamical simulations. However, different numerical methods yield discrepant predictions for crucial hydrodynamical processes, leading for example to different entropy profiles in clusters of galaxies. In particular, the widely used Lagrangian smoothed particle hydrodynamics (SPH) scheme is suspected to strongly damp fluid instabilities and turbulence, which are both crucial to establish the thermodynamic structure of clusters. In this study, we test to what extent our recently developed Voronoi particle hydrodynamics (VPH) scheme yields different results for the stripping of gas out of infalling galaxies and for the bulk gas properties of clusters. We consider both the evolution of isolated galaxy models that are exposed to a stream of intracluster medium or are dropped into cluster models, as well as non-radiative cosmological simulations of cluster formation. We also compare our particle-based method with results obtained with a fundamentally different discretization approach as implemented in the moving-mesh code AREPO. We find that VPH leads to noticeably faster stripping of gas out of galaxies than SPH, in better agreement with the mesh code than with SPH. We show that despite the fact that VPH in its present form is not as accurate as the moving-mesh code in our investigated cases, its improved accuracy of gradient estimates makes VPH an attractive alternative to SPH.
Conservative multizonal interface algorithm for the 3-D Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Klopfer, G. H.; Molvik, G. A.
1991-01-01
A conservative zonal interface algorithm using features of both structured and unstructured mesh CFD technology is presented. The flow solver within each of the zones is based on structured mesh CFD technology. The interface algorithm was implemented into two three-dimensional Navier-Stokes finite volume codes and was found to yield good results.
ERIC Educational Resources Information Center
Namaghi, Seyyed Ali Ostovar; Moghaddam, Mohammad Reza Saboor; Tajzad, Maryam
2014-01-01
The purpose of this study is to explore language teachers' perspectives on Iranian third grade senior high school EFL textbook, which is prescribed by the Ministry of Education. In data collection and analysis, the researchers used theoretical sampling and the coding schemes presented in grounded theory. Final analysis yielded "Negative…
The Effects of Spatial Diversity and Imperfect Channel Estimation on Wideband MC-DS-CDMA and MC-CDMA
2009-10-01
In our previous work, we compared the theoretical bit error rates of multi-carrier direct sequence code division multiple access (MC-DS-CDMA) and...consider only those cases where MC-CDMA has higher frequency diversity than MC-DS-CDMA. Since increases in diversity yield diminishing gains, we conclude
The Effectiveness of Second Language Pronunciation Instruction: A Meta-Analysis
ERIC Educational Resources Information Center
Lee, Junkyu; Jang, Juhyun; Plonsky, Luke
2015-01-01
The goal of this study was to determine the overall effects of pronunciation instruction (PI) as well as the sources and extent of variance in observed effects. Toward this end, a comprehensive search for primary studies was conducted, yielding 86 unique reports testing the effects of PI. Each study was then coded on substantive and methodological…
ERIC Educational Resources Information Center
Aleman, Enrique, Jr.
2007-01-01
Purpose: The purpose of this article is to conduct a critical race policy analysis of Texas school finance policy. This empirical article examines three chapters of the Texas education code (TEC) and identifies the racial effects that the school funding system has on seven majority-Mexican American school districts. Methodology: Critical Race…
Narrating Linguistic Conflict: A Storytelling Analysis of the Language Conflict in Belgium
ERIC Educational Resources Information Center
De Keere, Kobe; Elchardus, Mark
2011-01-01
Few studies have addressed the question of how the two main linguistic groups in Belgium (French and Flemish speakers) code each other. The research reported in this article is based on a storytelling forum of 56 persons that gathered five times. The storytelling sessions yielded 91 different stories about living in a bilingual society. These were…
NASA Astrophysics Data System (ADS)
Singh, Arwinder; Heoh, Saw Sor; Sing, Lee
2017-03-01
In this paper, we use Lee's 5-phase model code to configure both the Bhabha Atomic Research Centre (BARC) plasma focus machine in India, operating in the pressure (P0) range from 1 Torr to 14 Torr, and the Imperial College plasma focus machine, operating in the pressure range from 0.5 Torr to 6 Torr, in order to compare the computed neutron yield with the experimental neutron yield and to obtain the relationships of axial speed va, radial shock speed vs, piston speed vp and pinch temperature with P0 for these machines.
Lam, Raymond; Kruger, Estie; Tennant, Marc
2014-12-01
One disadvantage of the remarkable achievements in dentistry is that treatment options have never been more varied or confusing. This has made the concept of Evidence-Based Dentistry more applicable to modern dental practice. Despite merit in the concept, whereby clinical decisions are guided by scientific evidence, there are problems with establishing a scientific base. Nowhere is this more challenging than in modern dentistry, where the gap between rapidly developing products/procedures and their evidence base is widening. Furthermore, the burden of oral disease remains high at the population level. These problems have prompted new approaches to enhancing research. The aim of this paper is to outline how a modified approach to dental coding may benefit clinical and population level research. Using publicly accessible data obtained from the Australian Chronic Disease Dental Scheme (CDDS) and item codes contained within the Australian Schedule of Dental Services and Glossary, a suggested approach to dental informatics is illustrated. A selection of item codes is expanded with the addition of suffixes. These suffixes provide circumstantial information that assists in assessing clinical outcomes such as success rates and prognosis. The use of item codes in administering the CDDS yielded a large database of item codes. These codes are amenable to dental informatics, which has been shown to enhance research at both the clinical and population level. This is a cost-effective method to supplement existing research methods. Copyright © 2014 Elsevier Inc. All rights reserved.
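A hypothetical illustration of suffix-augmented item codes follows; the suffix letters and meanings below are invented for the example and are not the suffixes proposed in the paper.

    # Parse an item number plus appended context suffixes (all hypothetical).
    SUFFIXES = {"R": "replacement of a failed restoration",
                "E": "emergency presentation",
                "G": "placed under general anaesthesia"}

    def parse_item(code: str):
        """Split a schedule-style item number from any appended suffixes."""
        digits = "".join(ch for ch in code if ch.isdigit())
        flags = [SUFFIXES[ch] for ch in code if ch in SUFFIXES]
        return digits, flags

    print(parse_item("531R"))  # restorative item carrying a 'replacement' flag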
Benchmarking Heavy Ion Transport Codes FLUKA, HETC-HEDS, MARS15, MCNPX, and PHITS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronningen, Reginald Martin; Remec, Igor; Heilbronn, Lawrence H.
Powerful accelerators such as spallation neutron sources, muon-collider/neutrino facilities, and rare isotope beam facilities must be designed with the consideration that they handle the beam power reliably and safely, and they must be optimized to yield maximum performance relative to their design requirements. The simulation codes used for design purposes must produce reliable results. If not, component and facility designs can become costly, have limited lifetime and usefulness, and could even be unsafe. The objective of this proposal is to assess the performance of the currently available codes PHITS, FLUKA, MARS15, MCNPX, and HETC-HEDS that could be used for design simulations involving heavy ion transport. We plan to assess their performance by performing simulations and comparing results against experimental data of benchmark quality. Quantitative knowledge of the biases and the uncertainties of the simulations is essential, as this potentially impacts the safe, reliable and cost effective design of any future radioactive ion beam facility. Further benchmarking of heavy-ion transport codes was one of the actions recommended in the Report of the 2003 RIA R&D Workshop.
NASA Technical Reports Server (NTRS)
Flores, J.; Gundy, K.
1986-01-01
A fast diagonalized Beam-Warming algorithm is coupled with a zonal approach to solve the three-dimensional Euler/Navier-Stokes equations. The computer code, called Transonic Navier-Stokes (TNS), uses a total of four zones for wing configurations (and can be extended to complete aircraft configurations by adding zones). In the inner blocks near the wing surface, the thin-layer Navier-Stokes equations are solved, while in the outer two blocks the Euler equations are solved. The diagonal algorithm yields a speedup of as much as a factor of 40 over the original algorithm/zonal method code. The TNS code, in addition, has the capability to model wind tunnel walls. Transonic viscous solutions are obtained on a 150,000-point mesh for a NACA 0012 wing. A three-order-of-magnitude drop in the L2-norm of the residual requires approximately 500 iterations, which takes about 45 min of CPU time on a Cray X-MP processor. Simulations are also conducted for a different wing geometry, designated WING C. All cases show good agreement with experimental data.
Caro, I; Stiles, W B
1997-01-01
Translating a verbal coding system from one language to another can yield unexpected insights into the process of communication in different cultures. This paper describes the problems and understandings we encountered as we translated a verbal response modes (VRM) taxonomy from English into Spanish. Standard translations of text (e.g., psychotherapeutic dialogue) systematically change the form of certain expressions, so supposedly equivalent expressions had different VRM codings in the two languages. Prominent examples of English forms whose translation had different codes in Spanish included tags, question forms, and "let's" expressions. Insofar as participants use such forms to convey nuances of their relationship, standard translations of counseling or psychotherapy sessions or other conversations may systematically misrepresent the relationship between the participants. The differences revealed in translating the VRM system point to subtle but important differences in the degrees of verbal directiveness and inclusion in English versus Spanish, which converge with other observations of differences in individualism and collectivism between Anglo and Hispanic cultures.
Silla, Toomas; Karadoulama, Evdoxia; Mąkosa, Dawid; Lubas, Michal; Jensen, Torben Heick
2018-05-15
Mammalian genomes are promiscuously transcribed, yielding protein-coding and non-coding products. Many transcripts are short lived due to their nuclear degradation by the ribonucleolytic RNA exosome. Here, we show that abolished nuclear exosome function causes the formation of distinct nuclear foci, containing polyadenylated (pA+) RNA secluded from nucleocytoplasmic export. We asked whether exosome co-factors could serve such nuclear retention. Co-localization studies revealed the enrichment of pA+ RNA foci with "pA-tail exosome targeting (PAXT) connection" components MTR4, ZFC3H1, and PABPN1, but no overlap with known nuclear structures such as Cajal bodies, speckles, paraspeckles, or nucleoli. Interestingly, ZFC3H1 is required for foci formation, and in its absence, selected pA+ RNAs, including coding and non-coding transcripts, are exported to the cytoplasm in a process dependent on the mRNA export factor AlyREF. Our results establish ZFC3H1 as a central nuclear pA+ RNA retention factor, counteracting nuclear export activity. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
TRIAD IV: Nationwide Survey of Medical Students' Understanding of Living Wills and DNR Orders.
Mirarchi, Ferdinando L; Ray, Matthew; Cooney, Timothy
2016-12-01
Living wills are a form of advance directives that help to protect patient autonomy. They are frequently encountered in the conduct of medicine. Because of their impact on care, it is important to understand the adequacy of current medical school training in preparing physicians to interpret these directives. Between April and August 2011, third- and fourth-year medical students participated in an internet survey involving the interpretation of living wills. The survey presented a standard living will as a "stand-alone" document, a standard living will with the addition of an emergent clinical scenario, and then variations of the standard living will that included a code status designation ("DNR," "Full Code," or "Comfort Care"). For each version/scenario, respondents were asked to assign a code status and choose interventions based on the cases presented. Four hundred twenty-five students from medical schools throughout the country responded. The majority indicated they had received some form of advance directive training and understood the concept of code status and the term "DNR." Based on the stand-alone document, 15% of respondents correctly denoted "full code" as the appropriate code status; adding a clinical scenario yielded negligible improvement. When a code designation was added to the living will, correct code status responses ranged from 68% to 93%, whereas correct treatment decisions ranged from 18% to 78%. Previous training in advance directives had no impact on these results. Our data indicate that the majority of students failed to understand the key elements of a living will; adding a code status designation improved correct responses, with the exception of the term "DNR." Misunderstanding of advance directives is a nationwide problem and jeopardizes patient safety. Medical school ethics curricula need to be improved to ensure competency in understanding advance directives.
George, Jaiben; Newman, Jared M; Ramanathan, Deepak; Klika, Alison K; Higuera, Carlos A; Barsoum, Wael K
2017-09-01
Research using large administrative databases has substantially increased in recent years. The accuracy with which comorbidities are represented in these databases has been questioned. The purpose of this study was to evaluate the extent of errors in obesity coding and their impact on arthroplasty research. Eighteen thousand thirty primary total knee arthroplasties (TKAs) and 10,475 total hip arthroplasties (THAs) performed at a single healthcare system from 2004-2014 were included. Patients were classified as obese or nonobese using 2 methods: (1) body mass index (BMI) ≥30 kg/m^2 and (2) International Classification of Diseases, 9th Revision (ICD-9) codes. Length of stay, operative time, and 90-day complications were collected. The effect of obesity on various outcomes was analyzed separately for both BMI- and coding-based obesity. From 2004 to 2014, the prevalence of BMI-based obesity increased from 54% to 63% and 40% to 45% in TKA and THA, respectively. The prevalence of coding-based obesity increased from 15% to 28% and 8% to 17% in TKA and THA, respectively. Coding overestimated the growth of obesity in TKA and THA by 5.6 and 8.4 times, respectively. When obesity was defined by coding, obesity was falsely shown to be a significant risk factor for deep vein thrombosis (TKA), pulmonary embolism (THA), and longer hospital stay (TKA and THA). The growth in obesity observed in administrative databases may be an artifact of improvements in coding over the years. Obesity defined by coding can overestimate the actual effect of obesity on complications after arthroplasty. Therefore, studies using large databases should be interpreted with caution, especially when variables prone to coding errors are involved. Copyright © 2017 Elsevier Inc. All rights reserved.
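The study's two obesity definitions can be applied to the same records to expose the discordance; the patient values below are made up for illustration:

    # Classify the same records by BMI threshold vs. diagnosis coding.
    patients = [
        {"bmi": 34.2, "icd9_obesity_code": True},
        {"bmi": 31.0, "icd9_obesity_code": False},  # obese by BMI, missed by coding
        {"bmi": 27.5, "icd9_obesity_code": False},
    ]

    bmi_based    = sum(p["bmi"] >= 30 for p in patients)
    coding_based = sum(p["icd9_obesity_code"] for p in patients)
    print(bmi_based, coding_based)  # coding undercounts, as the study found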
Methodology for extracting local constants from petroleum cracking flows
Chang, Shen-Lin; Lottes, Steven A.; Zhou, Chenn Q.
2000-01-01
A methodology provides for the extraction of local chemical kinetic model constants for use in a reacting flow computational fluid dynamics (CFD) computer code with chemical kinetic computations to optimize the operating conditions or design of the system, including retrofit design improvements to existing systems. The coupled CFD and kinetics computer code is used in combination with data obtained from a matrix of experimental tests to extract the kinetic constants. Local fluid dynamic effects are implicitly included in the extracted local kinetic constants for each particular application system to which the methodology is applied. The extracted local kinetic model constants work well over a fairly broad range of operating conditions for specific and complex reaction sets in specific and complex reactor systems. While disclosed in terms of use in a Fluid Catalytic Cracking (FCC) riser, the inventive methodology has application in virtually any reaction set to extract constants for any particular application and reaction set formulation. The methodology includes the steps of: (1) selecting the test data sets for various conditions; (2) establishing the general trend of the parametric effect on the measured product yields; (3) calculating product yields for the selected test conditions using coupled computational fluid dynamics and chemical kinetics; (4) adjusting the local kinetic constants to match calculated product yields with experimental data; and (5) validating the determined set of local kinetic constants by comparing the calculated results with experimental data from additional test runs at different operating conditions.
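Step (4) is, in essence, a parameter-fitting problem; a minimal sketch follows, where predict_yields is a toy stand-in for the coupled CFD/kinetics calculation (the measured yields and two-constant model are hypothetical, and the real forward model is far more expensive):

    import numpy as np
    from scipy.optimize import least_squares

    measured = np.array([0.52, 0.31, 0.17])          # hypothetical product yields

    def predict_yields(k):
        """Toy 2-step sequential cracking model; yields sum to one."""
        k1, k2 = k
        return np.array([1 - np.exp(-k1),
                         np.exp(-k1) * (1 - np.exp(-k2)),
                         np.exp(-k1) * np.exp(-k2)])

    fit = least_squares(lambda k: predict_yields(k) - measured, x0=[1.0, 1.0])
    print(fit.x)                                     # extracted local constants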
Impact of Stellar Convection Criteria on the Nucleosynthetic Yields of Population III Supernovae.
NASA Astrophysics Data System (ADS)
Teffs, Jacob; Young, Tim; Lawlor, Tim
2018-01-01
A grid of 15-80 solar mass Z=0 stellar models is evolved to pre-core collapse using the stellar evolution code BRAHAMA. Each initial zero-age main sequence mass model star is evolved with two different convection criteria, Ledoux and Schwarzschild. The choice of convection criterion produces significant changes in the evolutionary model tracks on the HR diagram, mass loss, and interior core and envelope structures. At the onset of core collapse, a supernova explosion is initiated using a one-dimensional radiation-hydrodynamics code and followed for 400 days. The explosion energy is varied between 1-10 foes depending on the model, as there are no observationally determined energies for Population III supernovae. Due to structure differences, the Schwarzschild models resemble Type II-P SNe in their light curves while the Ledoux models resemble SN 1987A, a Type II-pec. The nucleosynthesis is calculated using TORCH, a 3,208 isotope network, in a post-process method using the hydrodynamic history. The Ledoux models have, on average, higher yields for elements above Fe compared to the Schwarzschild models. Using a Salpeter IMF and other recently published Population III IMFs, the net integrated yields per solar mass are calculated and compared to published theoretical results and to published observations of extremely metal poor halo stars of [Fe/H] < -3. Preliminary results show that the lower mass models of both criteria follow similar trends to the extremely metal poor halo stars, but more work and analysis is required.
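The IMF-weighted net yield per solar mass of stars formed can be sketched as the ratio of two integrals over the initial mass function; the Salpeter slope is standard, but the yield table below is a hypothetical stand-in, not the TORCH output:

    import numpy as np

    # Net yield per solar mass formed, weighting y(M) by a Salpeter IMF
    # (dN/dM ~ M^-2.35) over the model grid's mass range.
    masses = np.linspace(15.0, 80.0, 200)    # Msun, matching the grid range
    yields = 0.05 * masses                   # stand-in y(M): Msun of ejecta

    imf = masses ** -2.35
    num = np.trapz(yields * imf, masses)     # IMF-weighted total yield
    den = np.trapz(masses * imf, masses)     # total stellar mass formed
    print(num / den)                         # net yield per solar mass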
Results of the Simulation of the HTR-Proteus Core 4.2 Using PEBBED-COMBINE: FY10 Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hans Gougar
2010-07-01
The Idaho National Laboratory's deterministic neutronics analysis codes and methods were applied to the computation of the core multiplication factor of the HTR-Proteus pebble bed reactor critical facility. This report is a follow-on to INL/EXT-09-16620, in which the same calculation was performed using earlier versions of the codes and less developed methods. In that report, results indicated that the cross sections generated using COMBINE-7.0 did not yield satisfactory estimates of keff, and it was concluded that the modeling of control rods was not satisfactory. In the past year, improvements to the homogenization capability in COMBINE have enabled the explicit modeling of TRISO particles, pebbles, and heterogeneous core zones including control rod regions using a new multi-scale version of COMBINE in which the 1-dimensional discrete ordinates transport code ANISN has been integrated. The new COMBINE is shown to yield benchmark quality results for pebble unit cell models, the first step in preparing few-group diffusion parameters for core simulations. In this report, the full critical core is modeled once again, but with cross sections generated using the capabilities and physics of the improved COMBINE code. The new PEBBED-COMBINE model enables the exact modeling of the pebbles and control rod region along with a better approximation to structures in the reflector. Initial results for the core multiplication factor indicate significant improvement in the INL's tools for modeling the neutronic properties of a pebble bed reactor. Errors on the order of 1.6-2.5% in keff are obtained, a significant improvement over the 5-6% error observed in the earlier analysis. This is acceptable for a code system and model in the early stages of development, but still too high for a production code. Analysis of a simpler core model indicates an over-prediction of the flux in the low end of the thermal spectrum. Causes of this discrepancy are under investigation. New homogenization techniques and assumptions were used in this analysis and, as such, they require further confirmation and validation. Further refinement and review of the complex Proteus core model are likely to reduce the errors even further.
Vaerenberg, Bart; Péan, Vincent; Lesbros, Guillaume; De Ceulaer, Geert; Schauwers, Karen; Daemers, Kristin; Gnansia, Dan; Govaerts, Paul J
2013-06-01
To assess the auditory performance of Digisonic® cochlear implant users with electric stimulation (ES) and electro-acoustic stimulation (EAS), with special attention to the processing of low-frequency temporal fine structure. Six patients implanted with a Digisonic® SP implant and showing low-frequency residual hearing were fitted with the Zebra® speech processor providing both electric and acoustic stimulation. Assessment consisted of monosyllabic speech identification tests in quiet and in noise at different presentation levels, and a pitch discrimination task using harmonic and disharmonic intonating complex sounds (Vaerenberg et al., 2011). These tests investigate place and time coding through pitch discrimination. All tasks were performed with ES only and with EAS. Speech results in noise showed significant improvement with EAS when compared to ES. Whereas EAS did not yield better results in the harmonic intonation test, the improvements in the disharmonic intonation test were remarkable, suggesting better coding of pitch cues requiring phase locking. These results suggest that patients with residual hearing in the low-frequency range still have good phase-locking capacities, allowing them to process fine temporal information. ES relies mainly on place coding but provides poor low-frequency temporal coding, whereas EAS also provides temporal coding in the low-frequency range. Patients with residual phase-locking capacities can make use of these cues.
Blake, Margaret Lehman; Tompkins, Connie A.; Scharp, Victoria L.; Meigh, Kimberly M.; Wambaugh, Julie
2014-01-01
Coarse coding is the activation of broad semantic fields that can include multiple word meanings and a variety of features, including those peripheral to a word’s core meaning. It is a partially domain-general process related to general discourse comprehension and contributes to both literal and non-literal language processing. Adults with damage to the right cerebral hemisphere (RHD) and a coarse coding deficit are particularly slow to activate features of words that are relatively distant or peripheral. This manuscript reports a pre-efficacy study of Contextual Constraint Treatment (CCT), a novel, implicit treatment designed to increase the efficiency of coarse coding with the goal of improving narrative comprehension and other language performance that relies on coarse coding. Participants were four adults with RHD. The study used a single-subject controlled experimental design across subjects and behaviors. The treatment involves pre-stimulation, using a hierarchy of strong- and moderately-biased contexts, to prime the intended distantly-related features of critical stimulus words. Three of the four participants exhibited gains in auditory narrative discourse comprehension, the primary outcome measure. All participants exhibited generalization to untreated items. No strong generalization to processing nonliteral language was evident. The results indicate that CCT yields both improved efficiency of the coarse coding process and generalization to narrative comprehension. PMID:24983133
Ishikawa, Sohta A; Inagaki, Yuji; Hashimoto, Tetsuo
2012-01-01
In phylogenetic analyses of nucleotide sequences, 'homogeneous' substitution models, which assume the stationarity of base composition across a tree, are widely used, even though individual sequences may bear distinctive base frequencies. In the worst-case scenario, a homogeneous model-based analysis can yield an artifactual union of two distantly related sequences that achieved similar base frequencies in parallel. Such potential difficulty can be countered by two approaches, 'RY-coding' and 'non-homogeneous' models. The former approach converts the four bases into purine and pyrimidine to normalize base frequencies across a tree, while the heterogeneity in base frequency is explicitly incorporated in the latter approach. The two approaches have been applied to real-world sequence data; however, their basic properties have not been fully examined by simulation studies. Here, we assessed the performances of maximum-likelihood analyses incorporating RY-coding and a non-homogeneous model (RY-coding and non-homogeneous analyses) on simulated data with parallel convergence to similar base composition. Both RY-coding and non-homogeneous analyses showed superior performance compared with homogeneous model-based analyses. Curiously, the performance of the RY-coding analysis appeared to be significantly affected by the setting of the substitution process for sequence simulation relative to that of the non-homogeneous analysis. The performance of the non-homogeneous analysis was also validated by analyzing a real-world sequence data set with significant base heterogeneity.
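As a concrete illustration of the RY-coding approach described above, the minimal Python sketch below maps purines (A, G) to R and pyrimidines (C, T/U) to Y; the function name and the handling of ambiguous characters are our own choices, not the authors' tooling.

```python
RY_MAP = {"A": "R", "G": "R", "C": "Y", "T": "Y", "U": "Y"}

def ry_encode(seq: str) -> str:
    """Two-state RY recoding of a nucleotide sequence; gaps/ambiguities kept as '?'."""
    return "".join(RY_MAP.get(base, "?") for base in seq.upper())

print(ry_encode("ATGCGTAA"))   # -> RYRYRYRR
```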
A distributed code for color in natural scenes derived from center-surround filtered cone signals
Kellner, Christian J.; Wachtler, Thomas
2013-01-01
In the retina of trichromatic primates, chromatic information is encoded in an opponent fashion and transmitted to the lateral geniculate nucleus (LGN) and visual cortex via parallel pathways. Chromatic selectivities of neurons in the LGN form two separate clusters, corresponding to two classes of cone opponency. In the visual cortex, however, the chromatic selectivities are more distributed, which is in accordance with a population code for color. Previous studies of cone signals in natural scenes typically found opponent codes with chromatic selectivities corresponding to two directions in color space. Here we investigated how the non-linear spatio-chromatic filtering in the retina influences the encoding of color signals. Cone signals were derived from hyper-spectral images of natural scenes and preprocessed by center-surround filtering and rectification, resulting in parallel ON and OFF channels. Independent Component Analysis (ICA) on these signals yielded a highly sparse code with basis functions that showed spatio-chromatic selectivities. In contrast to previous analyses of linear transformations of cone signals, chromatic selectivities were not restricted to two main chromatic axes, but were more continuously distributed in color space, similar to the population code of color in the early visual cortex. Our results indicate that spatio-chromatic processing in the retina leads to a more distributed and more efficient code for natural scenes. PMID:24098289
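The pipeline described above (center-surround filtering, ON/OFF rectification, then ICA) can be sketched as follows. This is a hedged stand-in using synthetic data and scikit-learn's FastICA, not the authors' code; real inputs would be cone responses derived from hyperspectral image patches.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
signals = rng.standard_normal((5000, 64))                 # stand-in cone signals (patches x pixels)
centered = signals - signals.mean(axis=1, keepdims=True)  # crude center-surround step
on, off = np.maximum(centered, 0), np.maximum(-centered, 0)  # rectified parallel ON/OFF channels
X = np.hstack([on, off])

ica = FastICA(n_components=32, random_state=0)
sources = ica.fit_transform(X)     # sparse component activations
basis = ica.mixing_                # columns ~ basis functions (cf. spatio-chromatic selectivities)
print(basis.shape)                 # (128, 32)
```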
Pesch, Megan H; Lumeng, Julie C
2017-12-15
Behavioral coding of videotaped eating and feeding interactions can provide researchers with rich observational data and unique insights into eating behaviors, food intake, food selection as well as interpersonal and mealtime dynamics of children and their families. Unlike self-report measures of eating and feeding practices, the coding of videotaped eating and feeding behaviors can allow for the quantitative and qualitative examination of behaviors and practices that participants may not self-report. While this methodology is increasingly common, behavioral coding protocols and methodology are not widely shared in the literature. This has important implications for the validity and reliability of coding schemes across settings. Additional guidance on how to design, implement, code and analyze videotaped eating and feeding behaviors could contribute to advancing the science of behavioral nutrition. The objectives of this narrative review are to review methodology for the design, operationalization, and coding of videotaped behavioral eating and feeding data in children and their families, and to highlight best practices. When capturing eating and feeding behaviors through analysis of videotapes, it is important for the study and coding to be hypothesis driven. Study design considerations include how to best capture the target behaviors through selection of a controlled experimental laboratory environment versus home mealtime, duration of video recording, number of observations needed to achieve reliability across eating episodes, as well as technical issues in video recording and sound quality. Study design must also take into account plans for coding the target behaviors, which may include behavior frequency, duration, categorization or qualitative descriptors. Coding scheme creation and refinement occur through an iterative process. Reliability between coders can be challenging to achieve but is paramount to the scientific rigor of the methodology. The analysis approach depends on how the data were coded and collapsed. Behavioral coding of videotaped eating and feeding behaviors can capture rich data "in vivo" that are otherwise unobtainable from self-report measures. While data collection and coding are time-intensive, the data yielded can be extremely valuable. Additional sharing of methodology and coding schemes around eating and feeding behaviors could advance the science and field.
Aiello, Francesco A; Judelson, Dejah R; Messina, Louis M; Indes, Jeffrey; FitzGerald, Gordon; Doucet, Danielle R; Simons, Jessica P; Schanzer, Andres
2016-08-01
Vascular surgery procedural reimbursement depends on accurate procedural coding and documentation. Despite the critical importance of correct coding, there has been a paucity of research focused on the effect of direct physician involvement. We hypothesize that direct physician involvement in procedural coding will lead to improved coding accuracy, increased work relative value unit (wRVU) assignment, and increased physician reimbursement. This prospective observational cohort study evaluated procedural coding accuracy of fistulograms at an academic medical institution (January-June 2014). All fistulograms were coded by institutional coders (traditional coding) and by a single vascular surgeon whose codes were verified by two institution coders (multidisciplinary coding). The coding methods were compared, and differences were translated into revenue and wRVUs using the Medicare Physician Fee Schedule. Comparison between traditional and multidisciplinary coding was performed for three discrete study periods: baseline (period 1), after a coding education session for physicians and coders (period 2), and after a coding education session with implementation of an operative dictation template (period 3). The accuracy of surgeon operative dictations during each study period was also assessed. An external validation at a second academic institution was performed during period 1 to assess and compare coding accuracy. During period 1, traditional coding resulted in a 4.4% (P = .004) loss in reimbursement and a 5.4% (P = .01) loss in wRVUs compared with multidisciplinary coding. During period 2, no significant difference was found between traditional and multidisciplinary coding in reimbursement (1.3% loss; P = .24) or wRVUs (1.8% loss; P = .20). During period 3, traditional coding yielded a higher overall reimbursement (1.3% gain; P = .26) than multidisciplinary coding. This increase, however, was due to errors by institution coders, with six inappropriately used codes resulting in a higher overall reimbursement that was subsequently corrected. Assessment of physician documentation showed improvement, with decreased documentation errors at each period (11% vs 3.1% vs 0.6%; P = .02). Overall, between period 1 and period 3, multidisciplinary coding resulted in a significant increase in additional reimbursement ($17.63 per procedure; P = .004) and wRVUs (0.50 per procedure; P = .01). External validation at a second academic institution was performed to assess coding accuracy during period 1. Similar to institution 1, traditional coding revealed an 11% loss in reimbursement ($13,178 vs $14,630; P = .007) and a 12% loss in wRVU (293 vs 329; P = .01) compared with multidisciplinary coding. Physician involvement in the coding of endovascular procedures leads to improved procedural coding accuracy, increased wRVU assignments, and increased physician reimbursement. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
JPEG 2000 Encoding with Perceptual Distortion Control
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Liu, Zhen; Karam, Lina J.
2008-01-01
An alternative approach has been devised for encoding image data in compliance with JPEG 2000, the most recent still-image data-compression standard of the Joint Photographic Experts Group. Heretofore, JPEG 2000 encoding has been implemented by several related schemes classified as rate-based distortion-minimization encoding. In each of these schemes, the end user specifies a desired bit rate and the encoding algorithm strives to attain that rate while minimizing a mean squared error (MSE). While rate-based distortion minimization is appropriate for transmitting data over a limited-bandwidth channel, it is not the best approach for applications in which the perceptual quality of reconstructed images is a major consideration. A better approach for such applications is the present alternative one, denoted perceptual distortion control, in which the encoding algorithm strives to compress data to the lowest bit rate that yields at least a specified level of perceptual image quality. Some additional background information on JPEG 2000 is prerequisite to a meaningful summary of JPEG 2000 encoding with perceptual distortion control. The JPEG 2000 encoding process includes two subprocesses known as tier-1 and tier-2 coding. In order to minimize the MSE for the desired bit rate, a rate-distortion-optimization subprocess is introduced between the tier-1 and tier-2 subprocesses. In tier-1 coding, each coding block is independently bit-plane coded from the most-significant-bit (MSB) plane to the least-significant-bit (LSB) plane, using three coding passes (except for the MSB plane, which is coded using only one "clean up" coding pass). For M bit planes, this subprocess involves a total of (3M - 2) coding passes. An embedded bit stream is then generated for each coding block. Information on the reduction in distortion and the increase in the bit rate associated with each coding pass is collected. This information is then used in a rate-control procedure to determine the contribution of each coding block to the output compressed bit stream.
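The tier-1 pass count quoted above follows from three passes per bit plane minus the two passes skipped on the MSB plane. A trivial check:

```python
def tier1_coding_passes(num_bit_planes: int) -> int:
    """Three passes per bit plane, but the MSB plane is coded in a single pass."""
    return 3 * num_bit_planes - 2

for m in (1, 8, 12):
    print(m, "bit planes ->", tier1_coding_passes(m), "passes")   # 1, 22, 34
```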
Neale, Dave; Clackson, Kaili; Georgieva, Stanimira; Dedetas, Hatice; Scarpate, Melissa; Wass, Sam; Leong, Victoria
2018-01-01
Play during early life is a ubiquitous activity, and an individual’s propensity for play is positively related to cognitive development and emotional well-being. Play behavior (which may be solitary or shared with a social partner) is diverse and multi-faceted. A challenge for current research is to converge on a common definition and measurement system for play – whether examined at a behavioral, cognitive or neurological level. Combining these different approaches in a multimodal analysis could yield significant advances in understanding the neurocognitive mechanisms of play, and provide the basis for developing biologically grounded play models. However, there is currently no integrated framework for conducting a multimodal analysis of play that spans brain, cognition and behavior. The proposed coding framework uses grounded and observable behaviors along three dimensions (sensorimotor, cognitive and socio-emotional), to compute inferences about playful behavior in a social context, and related social interactional states. Here, we illustrate the sensitivity and utility of the proposed coding framework using two contrasting dyadic corpora (N = 5) of mother-infant object-oriented interactions during experimental conditions that were either non-conducive (Condition 1) or conducive (Condition 2) to the emergence of playful behavior. We find that the framework accurately identifies the modal form of social interaction as being either non-playful (Condition 1) or playful (Condition 2), and further provides useful insights about differences in the quality of social interaction and temporal synchronicity within the dyad. It is intended that this fine-grained coding of play behavior will be easily assimilated with, and inform, future analysis of neural data that is also collected during adult–infant play. In conclusion, here, we present a novel framework for analyzing the continuous time-evolution of adult–infant play patterns, underpinned by biologically informed state coding along sensorimotor, cognitive and socio-emotional dimensions. We expect that the proposed framework will have wide utility amongst researchers wishing to employ an integrated, multimodal approach to the study of play, and lead toward a greater understanding of the neuroscientific basis of play. It may also yield insights into a new biologically grounded taxonomy of play interactions. PMID:29618994
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morris, R; Albanese, K; Lakshmanan, M
Purpose: This study intends to characterize the spectral and spatial resolution limits of various fan beam geometries for differentiation of normal and neoplastic breast structures via coded aperture coherent scatter spectral imaging techniques. In previous studies, pencil beam raster scanning methods using coherent scatter computed tomography and selected volume tomography have yielded excellent results for tumor discrimination. However, these methods don’t readily conform to clinical constraints; primarily prolonged scan times and excessive dose to the patient. Here, we refine a fan beam coded aperture coherent scatter imaging system to characterize the tradeoffs between dose, scan time and image quality for breast tumor discrimination. Methods: An X-ray tube (125kVp, 400mAs) illuminated the sample with collimated fan beams of varying widths (3mm to 25mm). Scatter data was collected via two linear-array energy-sensitive detectors oriented parallel and perpendicular to the beam plane. An iterative reconstruction algorithm yields images of the sample’s spatial distribution and respective spectral data for each location. To model in-vivo tumor analysis, surgically resected breast tumor samples were used in conjunction with lard, which has a form factor comparable to adipose (fat). Results: Quantitative analysis with current setup geometry indicated optimal performance for beams up to 10mm wide, with wider beams producing poorer spatial resolution. Scan time for a fixed volume was reduced by a factor of 6 when scanned with a 10mm fan beam compared to a 1.5mm pencil beam. Conclusion: The study demonstrates the utility of fan beam coherent scatter spectral imaging for differentiation of normal and neoplastic breast tissues has successfully reduced dose and scan times whilst sufficiently preserving spectral and spatial resolution. Future work to alter the coded aperture and detector geometries could potentially allow the use of even wider fans, thereby making coded aperture coherent scatter imaging a clinically viable method for breast cancer detection. United States Department of Homeland Security; Duke University Medical Center - Department of Radiology; Carl E Ravin Advanced Imaging Laboratories; Duke University Medical Physics Graduate Program.
McEvoy, Matthew D.; Smalley, Jeremy C.; Nietert, Paul J.; Field, Larry C.; Furse, Cory M.; Blenko, John W.; Cobb, Benjamin G.; Walters, Jenna L.; Pendarvis, Allen; Dalal, Nishita S.; Schaefer, John J.
2012-01-01
Introduction Defining valid, reliable, defensible, and generalizable standards for the evaluation of learner performance is a key issue in assessing both baseline competence and mastery in medical education. However, prior to setting these standards of performance, the reliability of the scores yielded by a grading tool must be assessed. Accordingly, the purpose of this study was to assess the reliability of scores generated from a set of grading checklists used by non-expert raters during simulations of American Heart Association (AHA) MegaCodes. Methods The reliability of scores generated from a detailed set of checklists, when used by four non-expert raters, was tested by grading team leader performance in eight MegaCode scenarios. Videos of the scenarios were reviewed and rated by trained faculty facilitators and by a group of non-expert raters. The videos were reviewed “continuously” and “with pauses.” Two content experts served as the reference standard for grading, and four non-expert raters were used to test the reliability of the checklists. Results Our results demonstrate that non-expert raters are able to produce reliable grades when using the checklists under consideration, demonstrating excellent intra-rater reliability and agreement with a reference standard. The results also demonstrate that non-expert raters can be trained in the proper use of the checklist in a short amount of time, with no discernible learning curve thereafter. Finally, our results show that a single trained rater can achieve reliable scores of team leader performance during AHA MegaCodes when using our checklist in continuous mode, as measures of agreement in total scoring were very strong (Lin’s Concordance Correlation Coefficient = 0.96; Intraclass Correlation Coefficient = 0.97). Discussion We have shown that our checklists can yield reliable scores, are appropriate for use by non-expert raters, and are able to be employed during continuous assessment of team leader performance during the review of a simulated MegaCode. This checklist may be more appropriate for use by Advanced Cardiac Life Support (ACLS) instructors during MegaCode assessments than current tools provided by the AHA. PMID:22863996
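Lin's concordance correlation coefficient, one of the agreement measures reported above, is simple to compute. The sketch below uses hypothetical checklist totals, not the study's data:

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's CCC: 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

rater     = [38, 41, 35, 44, 40, 37, 42, 39]   # hypothetical checklist totals, trained rater
reference = [39, 42, 34, 45, 41, 36, 43, 40]   # hypothetical reference-standard totals
print(round(lins_ccc(rater, reference), 3))
```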
Studying the response of a plastic scintillator to gamma rays using the Geant4 Monte Carlo code.
Ghadiri, Rasoul; Khorsandi, Jamshid
2015-05-01
To determine the gamma ray response function of an NE-102 scintillator and to investigate the gamma spectra due to the transport of optical photons, we simulated an NE-102 scintillator using Geant4 code. The results of the simulation were compared with experimental data. Good consistency between the simulation and data was observed. In addition, the time and spatial distributions, along with the energy distribution and surface treatments of scintillation detectors, were calculated. This simulation makes us capable of optimizing the photomultiplier tube (or photodiodes) position to yield the best coupling to the detector. Copyright © 2015 Elsevier Ltd. All rights reserved.
Strategies for vectorizing the sparse matrix vector product on the CRAY XMP, CRAY 2, and CYBER 205
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Partridge, Harry
1987-01-01
Large, randomly sparse matrix vector products are important in a number of applications in computational chemistry, such as matrix diagonalization and the solution of simultaneous equations. Vectorization of this process is considered for the CRAY XMP, CRAY 2, and CYBER 205, using a matrix of dimension 20,000 with from 1 percent to 6 percent nonzeros. Efficient scatter/gather capabilities add coding flexibility and yield significant improvements in performance. For the CYBER 205, it is shown that minor changes in the I/O can reduce the CPU time by a factor of 50. Similar changes in the CRAY codes make a far smaller improvement.
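The kernel in question is the gather-heavy operation y = A·x for a random sparse A. A modern sketch in Python/SciPy (illustrative only; the paper targets Fortran on 1980s vector machines) shows the same scatter/gather structure in CSR form, with a smaller dimension so the explicit loop stays fast:

```python
import numpy as np
from scipy.sparse import random as sprandom

n, density = 2_000, 0.03                      # the paper used n = 20,000 with 1-6% nonzeros
A = sprandom(n, n, density=density, format="csr", random_state=1)
x = np.ones(n)

y = A @ x                                     # library CSR matrix-vector product

y2 = np.empty(n)                              # explicit gather-based row loop (illustrative)
for i in range(n):
    lo, hi = A.indptr[i], A.indptr[i + 1]
    y2[i] = np.dot(A.data[lo:hi], x[A.indices[lo:hi]])   # gather x at column indices, then reduce
assert np.allclose(y, y2)
```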
Probabilistic analysis of structures involving random stress-strain behavior
NASA Technical Reports Server (NTRS)
Millwater, H. R.; Thacker, B. H.; Harren, S. V.
1991-01-01
The present methodology for analysis of structures with random stress-strain behavior characterizes the uniaxial stress-strain curve in terms of (1) elastic modulus, (2) engineering stress at initial yield, (3) initial plastic-hardening slope, (4) engineering stress at point of ultimate load, and (5) engineering strain at point of ultimate load. The methodology is incorporated into the Numerical Evaluation of Stochastic Structures Under Stress code for probabilistic structural analysis. The illustrative problem of a thick cylinder under internal pressure, where both the internal pressure and the stress-strain curve are random, is addressed by means of the code. The response value is the cumulative distribution function of the equivalent plastic strain at the inner radius.
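A hedged sketch of the probabilistic idea, not the code itself: treat the curve parameters and the load as random, push each sample through a simplified bilinear stress-strain model, and read off the empirical CDF of plastic strain. All distributions below are invented.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
sig_y  = rng.normal(250e6, 20e6, n)    # stress at initial yield (Pa), invented distribution
H      = rng.normal(2e9, 0.2e9, n)     # initial plastic-hardening slope (Pa), invented
stress = rng.normal(300e6, 15e6, n)    # applied stress (Pa), invented

plastic = np.where(stress > sig_y, (stress - sig_y) / H, 0.0)   # bilinear response model
x = np.sort(plastic)
cdf = np.arange(1, n + 1) / n          # empirical CDF of equivalent plastic strain
print(f"P(plastic strain > 2%) ~ {np.mean(plastic > 0.02):.3f}")
```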
Comparative study of viscoelastic properties using virgin yogurt
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dimonte, G.; Nelson, D.; Weaver, S.
We describe six different tests used to obtain a consistent set of viscoelastic properties for yogurt. Prior to yield, the shear modulus μ and viscosity η are measured nondestructively using the speed and damping of elastic waves. Although new to foodstuffs, this technique has been applied to diverse materials from metals to the earth's crust. The resultant shear modulus agrees with μ ≈ E/3 for incompressible materials, where the Young's modulus E is obtained from a stress–strain curve in compression. The tensile yield stress τo is measured in compression and tension, with good agreement. The conventional vane and cone/plate rheometers measured a shear stress yield τos ≈ τo/√3, as expected theoretically, but the inferred "apparent" viscosity from the cone/plate rheometer is much larger than the wave measurement due to the finite yield (τos ≠ 0). Finally, we inverted an open container of yogurt for 10⁶ s > η/μ and observed no motion. This demonstrates unequivocally that yogurt possesses a finite yield stress rather than a large viscosity. We present a constitutive model with a pre-yield viscosity to describe the damping of the elastic waves and use a simulation code to describe yielding in complex geometry. © 1998 Society of Rheology.
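Two of the relations used above are easy to reproduce numerically: the wave-based modulus μ = ρc_s² and the theoretical ratio τos = τo/√3. The numbers in this sketch are assumptions, not the paper's measurements.

```python
import numpy as np

rho = 1030.0            # assumed yogurt density (kg/m^3)
c_s = 1.5               # assumed shear wave speed (m/s)
mu = rho * c_s ** 2     # shear modulus from the elastic-wave measurement
print(f"mu ~ {mu:.0f} Pa; for incompressible media E ~ 3*mu = {3 * mu:.0f} Pa")

tau_o = 100.0           # assumed tensile yield stress (Pa)
print(f"expected shear yield tau_os = tau_o/sqrt(3) ~ {tau_o / np.sqrt(3):.1f} Pa")
```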
FPGA implementation of low complexity LDPC iterative decoder
NASA Astrophysics Data System (ADS)
Verma, Shivani; Sharma, Sanjay
2016-07-01
Low-density parity-check (LDPC) codes, proposed by Gallager, emerged as a class of codes which can yield very good performance on the additive white Gaussian noise channel as well as on the binary symmetric channel. LDPC codes have gained considerable importance due to their capacity-achieving property and excellent performance in noisy channels. The belief propagation (BP) algorithm and its approximations, most notably min-sum, are popular iterative decoding algorithms used for LDPC and turbo codes. The trade-off between hardware complexity and decoding throughput is a critical factor in the implementation of a practical decoder. This article presents an introduction to LDPC codes and their various decoding algorithms, followed by the realisation of an LDPC decoder using a simplified message passing algorithm and a partially parallel decoder architecture. The simplified message passing algorithm is proposed as a trade-off between low decoding complexity and decoder performance; it greatly reduces the routing and check node complexity of the decoder. The partially parallel decoder architecture offers high speed and reduced complexity. The improved design of the decoder achieves a maximum symbol throughput of 92.95 Mbps and a maximum of 18 decoding iterations. The article presents an implementation of a 9216-bit, rate-1/2, (3, 6) LDPC decoder on the Xilinx XC3SD3400A device from the Spartan-3A DSP family.
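A software sketch of min-sum message passing, the decoding family discussed above, is given below for a toy parity-check matrix. This illustrates the algorithm only; it is not the article's simplified message passing variant or its partially parallel FPGA architecture.

```python
import numpy as np

def min_sum_decode(H, llr, max_iter=18):
    """Min-sum decoding sketch. H: (m, n) binary parity-check matrix;
    llr: channel log-likelihood ratios (positive means bit 0 more likely)."""
    m, n = H.shape
    M = np.zeros((m, n))                                    # check-to-variable messages
    hard = (llr < 0).astype(int)
    for _ in range(max_iter):
        V = np.where(H == 1, llr + M.sum(axis=0) - M, 0.0)  # variable-to-check messages
        for i in range(m):                                  # check-node min-sum update
            idx = np.flatnonzero(H[i])
            v = np.where(V[i, idx] == 0.0, 1e-12, V[i, idx])
            total_sign = np.prod(np.sign(v))
            for k, j in enumerate(idx):
                M[i, j] = total_sign * np.sign(v[k]) * np.abs(np.delete(v, k)).min()
        hard = (llr + M.sum(axis=0) < 0).astype(int)
        if not (H @ hard % 2).any():                        # all parity checks satisfied
            break
    return hard

# Toy (7, 4) code for illustration; the article's code is a 9216-bit (3, 6) LDPC code.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llr = np.array([2.0, 1.5, -0.4, 3.0, 1.0, 0.8, 1.2])        # one bit received unreliably
print(min_sum_decode(H, llr))                               # -> [0 0 0 0 0 0 0]
```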
NASA Astrophysics Data System (ADS)
Zin, M. F. M.; Baijan, A. H.; Damideh, V.; Hashim, S. A.; Sabri, R. M.
2017-03-01
In this work, preliminary results for the MNA-PF device operated as a slow focus mode device are presented. Four different Rogowski coils, designed and constructed for dI/dt signal measurements, show that the response frequency of a Rogowski coil can affect signal time resolution and delay, which can change the discharge circuit inductance. Experimental results for 10 to 20 mbar deuterium and 0.5 to 6 mbar argon, captured by the 630 MHz Rogowski coil and correlated with the Lee Model Code, are presented. Proper current fitting using the Lee Model Code shows that the speed factor for the MNA-PF device working with 13 mbar deuterium is 30 kA/(cm·torr^(1/2)) at 14 kV, which indicates that the device is operating in slow focus mode. The model parameters fm and fmr predicted by the Lee Model Code during current fitting for 13 mbar deuterium at 14 kV were 0.025 and 0.31, respectively. A Microspec-4 neutron detector was used to measure the dose rate, which peaked at 4.78 μSv/h, and the maximum neutron yield calculated from the Lee Model Code is 7.5 × 10³ neutrons per shot.
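The speed (drive) factor quoted above is S = I_peak/(a·√p) for peak current I_peak, anode radius a, and fill pressure p. A worked-arithmetic sketch with assumed values chosen to land near the quoted 30 kA/(cm·torr^(1/2)); the actual MNA-PF current and electrode radius are not given here:

```python
import math

I_peak = 94.0                    # assumed peak current (kA)
a = 1.0                          # assumed anode radius (cm)
p = 13 * 0.7501                  # 13 mbar deuterium converted to torr
S = I_peak / (a * math.sqrt(p))
print(f"S ~ {S:.1f} kA/(cm*torr^0.5)")   # ~30, the slow-focus-mode value quoted
```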
Witt, Jana; Elwyn, Glyn; Wood, Fiona; Rogers, Mark T; Menon, Usha; Brain, Kate
2014-11-01
To test whether the coping in deliberation (CODE) framework can be adapted to a specific preference-sensitive medical decision: risk-reducing bilateral salpingo-oophorectomy (RRSO) in women at increased risk of ovarian cancer. We performed a systematic literature search to identify issues important to women during deliberations about RRSO. Three focus groups with patients (most were pre-menopausal and untested for genetic mutations) and 11 interviews with health professionals were conducted to determine which issues mattered in the UK context. Data were used to adapt the generic CODE framework. The literature search yielded 49 relevant studies, which highlighted various issues and coping options important during deliberations, including mutation status, risks of surgery, family obligations, physician recommendation, peer support and reliable information sources. Consultations with UK stakeholders confirmed most of these factors as pertinent influences on deliberations. Questions in the generic framework were adapted to reflect the issues and coping options identified. The generic CODE framework was readily adapted to a specific preference-sensitive medical decision, showing that deliberations and coping are linked during deliberations about RRSO. Adapted versions of the CODE framework may be used to develop tailored decision support methods and materials in order to improve patient-centred care. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
ERIC Educational Resources Information Center
Barcus, F. Earle
Some 25-1/2 hours of Boston commercial television for children were monitored on a Saturday and Sunday in April 1975. The monitoring covered three network affiliated stations and two independent UHF stations. Monitoring, coding, and editing provided much statistical data, which was analyzed to yield findings in the areas of distribution of…
DNA Mapping Made Simple: An Intellectual Activity about the Genetic Modification of Organisms
ERIC Educational Resources Information Center
Marques, Miguel; Arrabaca, Joao; Chagas, Isabel
2004-01-01
Since the discovery of the DNA double helix (in 1953 by Watson and Crick), technologies have been developed that allow scientists to manipulate the genome of bacteria to produce human hormones, as well as the genome of crop plants to achieve high yield and enhanced flavor. The universality of the genetic code has allowed DNA isolated from a…
Strategic Bombing and the Thermonuclear Breakthrough: An Example of Disconnected Defense Planning,
1981-04-01
30 March 1944 attack against Nuremberg that saw the loss of 94 of about 800 aircraft (with serious damage to 72 others). However, while the British...tested a very powerful fission bomb, with the code name KING, that had an explosive yield of 500 kilotons. Its purpose was to provide the U.S. with an ex
NASA Astrophysics Data System (ADS)
Soare, S.; Yoon, J. W.; Cazacu, O.
2007-05-01
With few exceptions, non-quadratic homogeneous polynomials have received little attention as possible candidates for yield functions. One reason might be that not every such polynomial is a convex function. In this paper we show that homogeneous polynomials can be used to develop powerful anisotropic yield criteria, and that imposing simple constraints on the identification process leads, a posteriori, to the desired convexity property. It is shown that combinations of such polynomials allow for modeling the yielding properties of metallic materials with any crystal structure, i.e., both cubic and hexagonal, which display strength differential effects. Extensions of the proposed criteria to 3D stress states are also presented. We apply these criteria to the description of the aluminum alloy AA2090-T3. We prove that a sixth order orthotropic homogeneous polynomial is capable of a satisfactory description of this alloy. Next, applications to the deep drawing of a cylindrical cup are presented. The newly proposed criteria were implemented as UMAT subroutines in the commercial FE code ABAQUS. We were able to predict six ears on the AA2090-T3 cup's profile. Finally, we show that a tension/compression asymmetry in yielding can have an important effect on the earing profile.
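The two properties at issue, homogeneity of degree six and convexity, can be spot-checked numerically. The sketch below uses an invented convex degree-6 polynomial (a von Mises-like quadratic, cubed), not the identified AA2090-T3 coefficients; midpoint convexity on random pairs is a necessary check, not a proof.

```python
import numpy as np

def f(s):
    """Invented plane-stress yield polynomial: convex quadratic raised to the 3rd power."""
    sx, sy, sxy = s
    q = sx**2 - sx * sy + sy**2 + 3.0 * sxy**2   # convex, positive-definite quadratic
    return q**3                                   # convex and homogeneous of degree 6

rng = np.random.default_rng(3)
s = rng.standard_normal(3)
print(np.isclose(f(2.0 * s), 2.0**6 * f(s)))      # degree-6 homogeneity: f(t*s) = t^6 f(s)

ok = all(
    f((a + b) / 2) <= (f(a) + f(b)) / 2 + 1e-9    # midpoint convexity on random pairs
    for a, b in (rng.standard_normal((2, 3)) for _ in range(10_000))
)
print("no midpoint-convexity violation found:", ok)
```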
NASA Astrophysics Data System (ADS)
Lukey, B. T.; Sheffield, J.; Bathurst, J. C.; Lavabre, J.; Mathys, N.; Martin, C.
1995-08-01
The sediment yield of two catchments in southern France was modelled using the newly developed sediment code of SHETRAN. A fire in August 1990 denuded the Rimbaud catchment, providing an opportunity to study the effect of vegetation cover on sediment yield by running the model for both pre- and post-fire cases. Model output is in the form of upper and lower bounds on sediment discharge, reflecting the uncertainty in the erodibility of the soil. The results are encouraging since measured sediment discharge falls largely between the predicted bounds, and simulated sediment yield is dramatically lower for the catchment before the fire, which matches observation. SHETRAN is also applied to the Laval catchment, which is subject to badland gully erosion. Again using the principle of generating upper and lower bounds on sediment discharge, the model is shown to be capable of predicting the bulk sediment discharge over periods of months. To simulate the effect of reforestation, the model is run with vegetation cover equivalent to a neighbouring fully forested basin. The results obtained indicate that SHETRAN provides a powerful tool for predicting the impact of environmental change and land management on sediment yield.
Woo, K. M.; Betti, R.; Shvarts, D.; ...
2018-05-09
The study of Rayleigh–Taylor instability in the deceleration phase of inertial confinement fusion implosions is carried out using the three-dimensional (3-D) radiation-hydrodynamic Eulerian parallel code DEC3D. In this paper, we show that the yield-over-clean is a strong function of the residual kinetic energy (RKE) for low modes. Our analytical models indicate that the behavior of larger hot-spot volumes observed in low modes and the consequential pressure degradation can be explained in terms of increasing the RKE. These results are derived using a simple adiabatic implosion model of the deceleration phase as well as through an extensive set of 3-D single-mode simulations using the code DEC3D. The effect of the bulk velocity broadening on ion temperature asymmetries is analyzed for different mode numbers ℓ = 1–12. The jet observed in low mode ℓ = 1 is shown to cause the largest ion temperature variation in the mode spectrum. Finally, the vortices of high modes within the cold bubbles are shown to cause lower ion temperature variations than low modes.
3D Material Response Analysis of PICA Pyrolysis Experiments
NASA Technical Reports Server (NTRS)
Oliver, A. Brandon
2017-01-01
The PICA decomposition experiments of Bessire and Minton are investigated using 3D material response analysis. The steady thermoelectric equations have been added to the CHAR code to enable analysis of the Joule-heated experiments, and the DAKOTA optimization code is used to define the voltage boundary condition that yields the experimentally observed temperature response. This analysis has identified a potential spatial non-uniformity in the PICA sample temperature driven by the cooled copper electrodes and thermal radiation from the surface of the test article (Figure 1). The non-uniformity leads to a variable heating rate throughout the sample volume that has an effect on the quantitative results of the experiment. Averaging the results of integrating a kinetic reaction mechanism over the heating rates seen across the sample volume yields a shift of peak species production to lower temperatures that is more significant for higher heating rates (Figure 2) when compared to integrating the same mechanism at the reported heating rate. The analysis supporting these conclusions will be presented along with a proposed analysis procedure that permits quantitative use of the existing data. Time permitting, a status on the in-development kinetic decomposition mechanism based on this data will be presented as well.
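The heating-rate argument can be reproduced with a toy first-order Arrhenius mechanism: integrating da/dt = A·exp(-Ea/RT)(1 - a) at different constant heating rates moves the temperature of peak species production. The kinetic constants below are invented, not the in-development PICA mechanism.

```python
import numpy as np

A, Ea, R = 1e10, 1.6e5, 8.314            # 1/s, J/mol, J/(mol K) -- invented kinetics

def peak_temperature(beta, T0=300.0, T1=1600.0, n=20_000):
    """Temperature of peak species production at constant heating rate beta (K/s)."""
    T = np.linspace(T0, T1, n)
    dt = (T[1] - T[0]) / beta
    a = np.zeros(n)
    for i in range(n - 1):               # exact update for constant k over each step
        k = A * np.exp(-Ea / (R * T[i]))
        a[i + 1] = 1.0 - (1.0 - a[i]) * np.exp(-k * dt)
    return T[np.argmax(np.diff(a))]

for beta in (10.0, 100.0, 1000.0):       # heating rates (K/s)
    print(f"{beta:7.1f} K/s -> peak production near {peak_temperature(beta):.0f} K")
```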
Decay Properties of K-Vacancy States in Fe X-Fe XVII
NASA Technical Reports Server (NTRS)
Mendoza, C.; Kallman, T. R.; Bautista, M. A.; Palmeri, P.
2003-01-01
We report extensive calculations of the decay properties of fine-structure K-vacancy levels in Fe X-Fe XVII. A large set of level energies, wavelengths, radiative and Auger rates, and fluorescence yields has been computed using three different standard atomic codes, namely Cowan's HFR, AUTOSTRUCTURE and the Breit-Pauli R-matrix package. This multi-code approach is used to study the effects of core relaxation, configuration interaction and the Breit interaction, and enables the estimate of statistical accuracy ratings. The Ksigma and KLL Auger widths have been found to be nearly independent of both the outer-electron configuration and electron occupancy, keeping a constant ratio of 1.53 +/- 0.06. By comparing with previous theoretical and measured wavelengths, the accuracy of the present set is determined to be within 2 mÅ. Also, the good agreement found between the different radiative and Auger data sets that have been computed allows us to propose with confidence an accuracy rating of 20% for line fluorescence yields greater than 0.01. Emission and absorption spectral features are predicted, finding good correlation with measurements in both laboratory and astrophysical plasmas.
Deep Drawing Simulations With Different Polycrystalline Models
NASA Astrophysics Data System (ADS)
Duchêne, Laurent; de Montleau, Pierre; Bouvier, Salima; Habraken, Anne Marie
2004-06-01
The goal of this research is to study the anisotropic material behavior during forming processes, represented by both complex yield loci and kinematic-isotropic hardening models. A first part of this paper describes the main concepts of the `Stress-strain interpolation' model that has been implemented in the non-linear finite element code Lagamine. This model consists of a local description of the yield locus based on the texture of the material through the full constraints Taylor's model. The texture evolution due to plastic deformations is computed throughout the FEM simulations. This `local yield locus' approach was initially linked to the classical isotropic Swift hardening law. Recently, a more complex hardening model was implemented: the physically-based microstructural model of Teodosiu. It takes into account intergranular heterogeneity due to the evolution of dislocation structures, that affects isotropic and kinematic hardening. The influence of the hardening model is compared to the influence of the texture evolution thanks to deep drawing simulations.
Experimental study and simulation of 63Zn production via proton-induced reaction.
Rostampour, Malihe; Sadeghi, Mahdi; Aboudzadeh, Mohammadreza; Hamidi, Saeid; Soltani, Naser; Novin, Fatemeh Bolouri; Rahiminejad, Ali; Rajabifar, Saeid
2018-06-01
63Zn was produced by 16.8 MeV proton irradiation of natural copper. The thick target yield for 63Zn in the energy range of 16.8 → 12.2 MeV was 2.47 ± 0.12 GBq/(μA·h). Reasonable agreement between the achieved experimental data and the theoretical value of the thick target yield for 63Zn was observed. A simple separation procedure of 63Zn from the copper target was developed using cation exchange chromatography. About 88 ± 5% of the loaded activity was recovered. The ability of FLUKA to reproduce the experimental thick target yield of 63Zn is validated. The results obtained from this code were compared with the corresponding experimental data. This comparison demonstrated that FLUKA provides a suitable tool for the simulation of radionuclide production using proton irradiation. Copyright © 2018 Elsevier Ltd. All rights reserved.
Fission fragment yield distribution in the heavy-mass region from the 239Pu(nth,f) reaction
NASA Astrophysics Data System (ADS)
Gupta, Y. K.; Biswas, D. C.; Serot, O.; Bernard, D.; Litaize, O.; Julien-Laferrière, S.; Chebboubi, A.; Kessedjian, G.; Sage, C.; Blanc, A.; Faust, H.; Köster, U.; Ebran, A.; Mathieu, L.; Letourneau, A.; Materna, T.; Panebianco, S.
2017-07-01
The fission fragment yield distribution has been measured in the 239Pu(nth,f) reaction in the mass region A = 126 to 150 using the Lohengrin recoil-mass spectrometer. Three independent experimental campaigns were performed, allowing a significant reduction of the uncertainties compared to evaluated nuclear data libraries. The long-standing discrepancy of around 10% for the relative yield of A = 134 reported in the JEF-2.2 and JEFF-3.1.1 data libraries is finally solved. Moreover, the measured mass distribution in thermal neutron-induced fission does not show any significant dip around the shell closure (A = 136) as seen in heavy-ion fission data of 208Pb(18O,f) and 238U(18O,f) reactions. Lastly, comparisons between our experimental data and the predictions from Monte Carlo codes (gef and fifrelin) are presented and discussed.
Dickinson, Dwight; Ramsey, Mary E; Gold, James M
2007-05-01
In focusing on potentially localizable cognitive impairments, the schizophrenia meta-analytic literature has overlooked the largest single impairment: on digit symbol coding tasks. To compare the magnitude of the schizophrenia impairment on coding tasks with impairments on other traditional neuropsychological instruments. MEDLINE and PsycINFO electronic databases and reference lists from identified articles. English-language studies from 1990 to present, comparing performance of patients with schizophrenia and healthy controls on coding tasks and cognitive measures representing at least 2 other cognitive domains. Of 182 studies identified, 40 met all criteria for inclusion in the meta-analysis. Means, standard deviations, and sample sizes were extracted for digit symbol coding and 36 other cognitive variables. In addition, we recorded potential clinical moderator variables, including chronicity/severity, medication status, age, and education, and potential study design moderators, including coding task variant, matching, and study publication date. Main analyses synthesized data from 37 studies comprising 1961 patients with schizophrenia and 1444 comparison subjects. Combination of mean effect sizes across studies by means of a random effects model yielded a weighted mean effect for digit symbol coding of g = -1.57 (95% confidence interval, -1.66 to -1.48). This effect compared with a grand mean effect of g = -0.98 and was significantly larger than effects for widely used measures of episodic memory, executive functioning, and working memory. Moderator variable analyses indicated that clinical and study design differences between studies had little effect on the coding task effect. Comparison with previous meta-analyses suggested that current results were representative of the broader literature. Subsidiary analysis of data from relatives of patients with schizophrenia also suggested prominent coding task impairments in this group. The 5-minute digit symbol coding task, reliable and easy to administer, taps an information processing inefficiency that is a central feature of the cognitive deficit in schizophrenia and deserves systematic investigation.
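The random-effects synthesis used above can be sketched with the DerSimonian-Laird estimator. The per-study g values and variances below are invented (the actual analysis pooled 37 study-level effects):

```python
import numpy as np

g = np.array([-1.4, -1.7, -1.5, -1.8, -1.3])   # hypothetical per-study Hedges' g
v = np.array([0.04, 0.06, 0.05, 0.08, 0.03])   # hypothetical within-study variances

w = 1 / v                                             # fixed-effect weights
Q = np.sum(w * (g - np.sum(w * g) / w.sum()) ** 2)    # heterogeneity statistic
C = w.sum() - np.sum(w ** 2) / w.sum()
tau2 = max(0.0, (Q - (len(g) - 1)) / C)               # between-study variance (DL estimate)

w_re = 1 / (v + tau2)                                 # random-effects weights
g_mean = np.sum(w_re * g) / w_re.sum()
se = np.sqrt(1 / w_re.sum())
print(f"pooled g = {g_mean:.2f} (95% CI {g_mean - 1.96 * se:.2f} to {g_mean + 1.96 * se:.2f})")
```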
Mapping and DOWNFLOW simulation of recent lava flow fields at Mount Etna
NASA Astrophysics Data System (ADS)
Tarquini, Simone; Favalli, Massimiliano
2011-07-01
In recent years, progress in geographic information systems (GIS) and remote sensing techniques have allowed the mapping and studying of lava flows in unprecedented detail. A composite GIS technique is introduced to obtain high resolution boundaries of lava flow fields. This technique is mainly based on the processing of LIDAR-derived maps and digital elevation models (DEMs). The probabilistic code DOWNFLOW is then used to simulate eight large flow fields formed at Mount Etna in the last 25 years. Thanks to the collection of 6 DEMs representing Mount Etna at different times from 1986 to 2007, simulated outputs are obtained by running the DOWNFLOW code over pre-emplacement topographies. Simulation outputs are compared with the boundaries of the actual flow fields obtained here or derived from the existing literature. Although the selected fields formed in accordance with different emplacement mechanisms, flowed on different zones of the volcano over different topographies and were fed by different lava supplies of different durations, DOWNFLOW yields results close to the actual flow fields in all the cases considered. This outcome is noteworthy because DOWNFLOW has been applied by adopting a default calibration, without any specific tuning for the new cases considered here. This extensive testing proves that, if the pre-emplacement topography is available, DOWNFLOW yields a realistic simulation of a future lava flow based solely on a knowledge of the vent position. In comparison with deterministic codes, which require accurate knowledge of a large number of input parameters, DOWNFLOW turns out to be simple, fast and undemanding, proving to be ideal for systematic hazard and risk analyses.
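The DOWNFLOW algorithm is, at heart, a stochastic steepest-descent walk: perturb the DEM with random noise, trace the descent path from the vent, and repeat many times to build an inundation probability map. A hedged sketch under those assumptions (not the authors' calibrated code, whose perturbation amplitude and number of paths come from the default calibration mentioned above):

```python
import numpy as np

rng = np.random.default_rng(7)

def downflow(dem, vent, n_paths=1000, dh=2.0, max_steps=5000):
    """Accumulate steepest-descent paths over randomly perturbed copies of the DEM."""
    hits = np.zeros_like(dem, dtype=int)
    for _ in range(n_paths):
        z = dem + rng.uniform(-dh, dh, dem.shape)   # one stochastic perturbation per path
        r, c = vent
        for _ in range(max_steps):
            hits[r, c] += 1
            r0, c0 = max(r - 1, 0), max(c - 1, 0)
            window = z[r0:r + 2, c0:c + 2]          # 3x3 neighbourhood (clipped at edges)
            dr, dc = np.unravel_index(np.argmin(window), window.shape)
            nr, nc = r0 + dr, c0 + dc
            if (nr, nc) == (r, c):                  # local minimum: the path stops
                break
            r, c = nr, nc
    return hits / n_paths                           # relative inundation frequency

dem = np.add.outer(np.linspace(100.0, 0.0, 200), np.zeros(200))  # toy inclined plane
prob = downflow(dem, vent=(5, 100))
```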
Barnado, April; Casey, Carolyn; Carroll, Robert J; Wheless, Lee; Denny, Joshua C; Crofford, Leslie J
2017-05-01
To study systemic lupus erythematosus (SLE) in the electronic health record (EHR), we must accurately identify patients with SLE. Our objective was to develop and validate novel EHR algorithms that use International Classification of Diseases, Ninth Revision (ICD-9), Clinical Modification codes, laboratory testing, and medications to identify SLE patients. We used Vanderbilt's Synthetic Derivative, a de-identified version of the EHR, with 2.5 million subjects. We selected all individuals with at least 1 SLE ICD-9 code (710.0), yielding 5,959 individuals. To create a training set, 200 subjects were randomly selected for chart review. A subject was defined as a case if diagnosed with SLE by a rheumatologist, nephrologist, or dermatologist. Positive predictive values (PPVs) and sensitivity were calculated for combinations of code counts of the SLE ICD-9 code, a positive antinuclear antibody (ANA), ever use of medications, and a keyword of "lupus" in the problem list. The algorithms with the highest PPV were each internally validated using a random set of 100 individuals from the remaining 5,759 subjects. The algorithm with the highest PPV at 95% in the training set and 91% in the validation set was 3 or more counts of the SLE ICD-9 code, ANA positive (≥1:40), and ever use of both disease-modifying antirheumatic drugs and steroids, while excluding individuals with systemic sclerosis and dermatomyositis ICD-9 codes. We developed and validated the first EHR algorithm that incorporates laboratory values and medications with the SLE ICD-9 code to identify patients with SLE accurately. © 2016, American College of Rheumatology.
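The best-performing rule translates directly into a boolean filter. A hedged sketch over a hypothetical per-patient table (the column names are our own, not the Synthetic Derivative schema):

```python
import pandas as pd

patients = pd.DataFrame({
    "sle_icd9_count": [5, 1, 4, 7],
    "ana_titer":      [160, 40, 0, 320],            # reciprocal titer; >= 40 counts as positive
    "ever_dmard":     [True, False, True, True],
    "ever_steroid":   [True, True, False, True],
    "ssc_or_dm_icd9": [False, False, False, True],  # systemic sclerosis / dermatomyositis codes
})

is_sle = (
    (patients.sle_icd9_count >= 3)
    & (patients.ana_titer >= 40)
    & patients.ever_dmard
    & patients.ever_steroid
    & ~patients.ssc_or_dm_icd9
)
print(patients[is_sle].index.tolist())   # -> [0]
```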
Learning-Based Just-Noticeable-Quantization- Distortion Modeling for Perceptual Video Coding.
Ki, Sehwan; Bae, Sung-Ho; Kim, Munchurl; Ko, Hyunsuk
2018-07-01
Conventional predictive video coding-based approaches are reaching the limit of their potential coding efficiency improvements, because of severely increasing computation complexity. As an alternative approach, perceptual video coding (PVC) has attempted to achieve high coding efficiency by eliminating perceptual redundancy, using just-noticeable-distortion (JND) directed PVC. The previous JNDs were modeled by adding white Gaussian noise or specific signal patterns into the original images, which were not appropriate in finding JND thresholds due to distortion with energy reduction. In this paper, we present a novel discrete cosine transform-based energy-reduced JND model, called ERJND, that is more suitable for JND-based PVC schemes. Then, the proposed ERJND model is extended to two learning-based just-noticeable-quantization-distortion (JNQD) models as preprocessing that can be applied for perceptual video coding. The two JNQD models can automatically adjust JND levels based on given quantization step sizes. One of the two JNQD models, called LR-JNQD, is based on linear regression and determines the model parameter for JNQD based on extracted handcrafted features. The other JNQD model is based on a convolution neural network (CNN), called CNN-JNQD. To the best of our knowledge, our paper is the first approach to automatically adjust JND levels according to quantization step sizes for preprocessing the input to video encoders. In experiments, both the LR-JNQD and CNN-JNQD models were applied to high efficiency video coding (HEVC) and yielded maximum (average) bitrate reductions of 38.51% (10.38%) and 67.88% (24.91%), respectively, with little subjective video quality degradation, compared with the input without preprocessing applied.
Topological quantum error correction in the Kitaev honeycomb model
NASA Astrophysics Data System (ADS)
Lee, Yi-Chan; Brell, Courtney G.; Flammia, Steven T.
2017-08-01
The Kitaev honeycomb model is an approximate topological quantum error correcting code in the same phase as the toric code, but requiring only a 2-body Hamiltonian. As a frustrated spin model, it is well outside the commuting models of topological quantum codes that are typically studied, but its exact solubility makes it more amenable to analysis of effects arising in this noncommutative setting than a generic topologically ordered Hamiltonian. Here we study quantum error correction in the honeycomb model using both analytic and numerical techniques. We first prove explicit exponential bounds on the approximate degeneracy, local indistinguishability, and correctability of the code space. These bounds are tighter than can be achieved using known general properties of topological phases. Our proofs are specialized to the honeycomb model, but some of the methods may nonetheless be of broader interest. Following this, we numerically study noise caused by thermalization processes in the perturbative regime close to the toric code renormalization group fixed point. The appearance of non-topological excitations in this setting has no significant effect on the error correction properties of the honeycomb model in the regimes we study. Although the behavior of this model is found to be qualitatively similar to that of the standard toric code in most regimes, we find numerical evidence of an interesting effect in the low-temperature, finite-size regime where a preferred lattice direction emerges and anyon diffusion is geometrically constrained. We expect this effect to yield an improvement in the scaling of the lifetime with system size as compared to the standard toric code.
Zaneb, H; Hussain, M; Amjad, N; Qaim, S M
2016-06-01
Proton, deuteron and alpha-particle induced reactions on (87,88)Sr, (nat)Zr and (85)Rb targets were evaluated for the production of (87,88)Y. The literature data were compared with nuclear model calculations using the codes ALICE-IPPE, TALYS 1.6 and EMPIRE 3.2. Evaluated cross sections were generated; therefrom, thick target yields of (87,88)Y were calculated. Analysis of radio-yttrium impurities and yields showed that the (87)Sr(p,n)(87)Y and (88)Sr(p,n)(88)Y reactions are the best routes for the production of (87)Y and (88)Y, respectively. The calculated yield for the (87)Sr(p,n)(87)Y reaction is 104 MBq/μAh in the energy range of 14 → 2.7 MeV. Similarly, the calculated yield for the (88)Sr(p,n)(88)Y reaction is 3.2 MBq/μAh in the energy range of 15 → 7 MeV. Copyright © 2016 Elsevier Ltd. All rights reserved.
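Thick target yields like those quoted above follow from the standard integral of the excitation function over the slowing-down path, Y ∝ (N_A/M)∫σ(E)/(dE/dx) dE between the exit and entrance energies. A sketch with placeholder σ(E) and stopping power, not the evaluated (87)Sr(p,n) data:

```python
import numpy as np

E = np.linspace(2.7, 14.0, 200)                      # proton energy grid (MeV)
sigma = 500e-27 * np.exp(-((E - 9.0) / 3.0) ** 2)    # assumed excitation function (cm^2)
dEdx = 350.0 / E                                     # assumed stopping power (MeV cm^2/g)

n_per_gram = 6.022e23 / 87.0                         # (87)Sr nuclei per gram of target
reactions_per_proton = n_per_gram * np.trapz(sigma / dEdx, E)
print(f"~{reactions_per_proton:.1e} reactions per incident proton")
```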
a Proposed Benchmark Problem for Scatter Calculations in Radiographic Modelling
NASA Astrophysics Data System (ADS)
Jaenisch, G.-R.; Bellon, C.; Schumm, A.; Tabary, J.; Duvauchelle, Ph.
2009-03-01
Code validation is a permanent concern in computer modelling, and has been addressed repeatedly in eddy current and ultrasonic modelling. A good benchmark problem is sufficiently simple to be taken into account by various codes without strong requirements on geometry representation capabilities, focuses on few or even a single aspect of the problem at hand to facilitate interpretation and to avoid compound errors compensating one another, yields a quantitative result, and is experimentally accessible. In this paper we attempt to address code validation for one aspect of radiographic modelling, the scattered radiation prediction. Many NDT applications cannot neglect scattered radiation, and the scatter calculation is thus important to faithfully simulate the inspection situation. Our benchmark problem covers the wall thickness range of 10 to 50 mm for single wall inspections, with energies ranging from 100 to 500 keV in the first stage, and up to 1 MeV with wall thicknesses up to 70 mm in the extended stage. A simple plate geometry is sufficient for this purpose, and the scatter data is compared on a photon level, without a film model, which allows for comparisons with reference codes like MCNP. We compare results of three Monte Carlo codes (McRay, Sindbad and Moderato) as well as an analytical first order scattering code (VXI), and confront them to results obtained with MCNP.
Total reaction cross sections in CEM and MCNP6 at intermediate energies
Kerby, Leslie M.; Mashnik, Stepan G.
2015-05-14
Accurate total reaction cross section models are important to achieving reliable predictions from spallation and transport codes. The latest version of the Cascade Exciton Model (CEM) as incorporated in the code CEM03.03, and the Monte Carlo N-Particle transport code (MCNP6), both developed at Los Alamos National Laboratory (LANL), each use such cross sections. Having accurate total reaction cross section models in the intermediate energy region (50 MeV to 5 GeV) is very important for different applications, including analysis of space environments, use in medical physics, and accelerator design, to name just a few. The current inverse cross sections used in the preequilibrium and evaporation stages of CEM are based on the Dostrovsky et al. model, published in 1959. Better cross section models are now available. Implementing better cross section models in CEM and MCNP6 should yield improved predictions for particle spectra and total production cross sections, among other results.
Amalian, Jean-Arthur; Trinh, Thanh Tam; Lutz, Jean-François; Charles, Laurence
2016-04-05
Tandem mass spectrometry was evaluated as a reliable sequencing methodology to read codes encrypted in monodisperse sequence-coded oligo(triazole amide)s. The studied oligomers were composed of monomers containing a triazole ring, a short ethylene oxide segment, and an amide group as well as a short alkyl chain (propyl or isobutyl) which defined the 0/1 molecular binary code. Using electrospray ionization, oligo(triazole amide)s were best ionized as protonated molecules and were observed to adopt a single charge state, suggesting that adducted protons were located on every other monomer unit. Upon collisional activation, cleavages of the amide bond and of one ether bond were observed to proceed in each monomer, yielding two sets of complementary product ions. Distribution of protons over the precursor structure was found to remain unchanged upon activation, allowing charge state to be anticipated for product ions in the four series and hence facilitating their assignment for a straightforward characterization of any encoded oligo(triazole amide)s.
Third order harmonic imaging for biological tissues using three phase-coded pulses.
Ma, Qingyu; Gong, Xiufen; Zhang, Dong
2006-12-22
Compared with fundamental and second harmonic imaging, third harmonic imaging shows significant improvements in image quality due to better resolution, but it is degraded by lower sound pressure and signal-to-noise ratio (SNR). In this study, a phase-coded pulse technique is proposed to selectively enhance the sound pressure of the third harmonic by 9.5 dB while the fundamental and second harmonic components are efficiently suppressed; SNR is also increased by 4.7 dB. Based on the solution of the KZK nonlinear equation, the axial and lateral beam profiles of harmonics radiated from a planar piston transducer were theoretically simulated and experimentally examined. Finally, third harmonic imaging using this technique was performed for several biological tissues and compared with images obtained by fundamental and second harmonic imaging. Results demonstrate that the phase-coded pulse technique yields a dramatically cleaner and sharper contrast image.
Study of neoclassical effects on the pedestal structure in ELMy H-mode plasmas
NASA Astrophysics Data System (ADS)
Pankin, A. Y.; Bateman, G.; Kritz, A. H.; Rafiq, T.; Park, G. Y.; Ku, S.; Chang, C. S.; Snyder, P. B.
2009-11-01
The neoclassical effects on the H-mode pedestal structure are investigated in this study. First-principles kinetic simulations of the neoclassical pedestal dynamics are combined with the MHD stability conditions for triggering ELM crashes that limit the pedestal width and height in H-mode plasmas. The neoclassical kinetic XGC0 code [1] is used to produce systematic scans over plasma parameters including plasma current, elongation, and triangularity. As plasma profiles evolve, the MHD stability limits of these profiles are analyzed with the ideal MHD stability ELITE code [2]. The scalings of the pedestal width and height are presented as a function of the scanned plasma parameters. Simulations with the XGC0 code, which include coupled ion-electron dynamics, yield predictions for both ion and electron pedestal profiles. Differences in the electron and ion pedestal scalings are investigated. [1] C.S. Chang et al, Phys. Plasmas 11 (2004) 2649. [2] P.B. Snyder et al, Phys. Plasmas, 9 (2002) 2037.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larson, N.M.
1984-02-01
This report describes a computer code (ALEX) developed to assist in AnaLysis of EXperimental data at the Oak Ridge Electron Linear Accelerator (ORELA). Reduction of data from raw numbers (counts per channel) to physically meaningful quantities (such as cross sections) is in itself a complicated procedure; propagation of experimental uncertainties through that reduction procedure has in the past been viewed as even more difficult, if not impossible. The purpose of the code ALEX is to correctly propagate all experimental uncertainties through the entire reduction procedure, yielding the complete covariance matrix for the reduced data, while requiring little additional input from the experimentalist beyond that which is required for the data reduction itself. This report describes ALEX in detail, with special attention given to the case of transmission measurements (the code itself is applicable, with few changes, to any type of data). Application to the natural iron measurements of D.C. Larson et al. is described in some detail.
Unsteady transonic flow calculations for realistic aircraft configurations
NASA Technical Reports Server (NTRS)
Batina, John T.; Seidel, David A.; Bland, Samuel R.; Bennett, Robert M.
1987-01-01
A transonic unsteady aerodynamic and aeroelasticity code has been developed for application to realistic aircraft configurations. The new code is called CAP-TSD, an acronym for Computational Aeroelasticity Program - Transonic Small Disturbance. The CAP-TSD code uses a time-accurate approximate factorization (AF) algorithm for solution of the unsteady transonic small-disturbance equation. The AF algorithm is very efficient for solution of steady and unsteady transonic flow problems. It can provide accurate solutions in only several hundred time steps, yielding significant computational cost savings when compared to alternative methods. The new code can treat complete aircraft geometries with multiple lifting surfaces and bodies including canard, wing, tail, control surfaces, launchers, pylons, fuselage, stores, and nacelles. Applications are presented for a series of five configurations of increasing complexity to demonstrate the wide range of geometrical applicability of CAP-TSD. These results are in good agreement with available experimental steady and unsteady pressure data. Calculations for the General Dynamics one-ninth scale F-16C aircraft model are presented to demonstrate application to a realistic configuration. Unsteady results for the entire F-16C aircraft undergoing a rigid pitching motion illustrate the capability required to perform transonic unsteady aerodynamic and aeroelastic analyses for such configurations.
Statistical core design methodology using the VIPRE thermal-hydraulics code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lloyd, M.W.; Feltus, M.A.
1994-12-31
This Penn State Statistical Core Design Methodology (PSSCDM) is unique because it not only includes the EPRI correlation/test data standard deviation but also the computational uncertainty for the VIPRE code model and the new composite box design correlation. The resultant PSSCDM equation mimics the EPRI DNBR correlation results well, with an uncertainty of 0.0389. The combined uncertainty yields a new DNBR limit of 1.18 that will provide more plant operational flexibility. This methodology and its associated correlation and unique coefficients are for a very particular VIPRE model; thus, the correlation will be specifically linked with the lumped channel and subchannel layout. The results of this research and methodology, however, can be applied to plant-specific VIPRE models.
The role of visual imagery in the retention of information from sentences.
Drose, G S; Allen, G L
1994-01-01
We conducted two experiments to evaluate a multiple-code model for sentence memory that posits both propositional and visual representational systems. Both experiments involved recognition memory. The results of Experiment 1 indicated that subjects' recognition memory for concrete sentences was superior to their recognition memory for abstract sentences. Instructions to use visual imagery to enhance recognition performance yielded no effects. Experiment 2 tested the prediction that interference by a visual task would differentially affect recognition memory for concrete sentences. Results showed the interference task to have had a detrimental effect on recognition memory for both concrete and abstract sentences. Overall, the evidence provided partial support for both a multiple-code model and a semantic integration model of sentence memory.
Simulations of a molecular plasma in collisional-radiative nonequilibrium
NASA Technical Reports Server (NTRS)
Cambier, Jean-Luc; Moreau, Stephane
1993-01-01
A code for the simulation of nonequilibrium plasmas is being developed, with the capability to couple the plasma fluid-dynamics for a single fluid with a collisional-radiative model, where electronic states are treated as separate species. The model allows for non-Boltzmann distribution of the electronic states. Deviations from the Boltzmann distributions are expected to occur in the rapidly ionizing regime behind a strong shock or in the recombining regime during a fast expansion. This additional step in modeling complexity is expected to yield more accurate predictions of the nonequilibrium state and the radiation spectrum and intensity. An attempt at extending the code to molecular plasma flows is presented. The numerical techniques used, the thermochemical model, and the results of some numerical tests are described.
The measurement of boundary layers on a compressor blade in cascade. Volume 2: Data tables
NASA Technical Reports Server (NTRS)
Zierke, William C.; Deutsch, Steven
1989-01-01
Measurements were made of the boundary layers and wakes about a highly loaded, double-circular-arc compressor blade in cascade. These laser Doppler velocimetry measurements have yielded a very detailed and precise database with which to test the application of viscous computational codes to turbomachinery. In order to test the computational codes at off-design conditions, the data were acquired at a chord Reynolds number of 500,000 and at three incidence angles. Average values and 95 percent confidence bands were tabulated for the velocity, local turbulence intensity, skewness, kurtosis, and percent backflow. Tables also exist for the blade static-pressure distributions and boundary layer velocity profiles reconstructed to account for the normal pressure gradient.
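Such 95 percent confidence bands are conventionally formed from the sample mean and the Student t distribution. The following is a minimal sketch of that generic computation, not the report's documented procedure; the velocity samples are hypothetical.

import numpy as np
from scipy import stats

def mean_with_95ci(samples):
    """Return the sample mean and its 95% confidence half-width.

    Assumes independent, approximately normal samples; a generic
    illustration, not the report's exact statistical procedure.
    """
    samples = np.asarray(samples, dtype=float)
    n = samples.size
    mean = samples.mean()
    sem = samples.std(ddof=1) / np.sqrt(n)           # standard error of the mean
    half_width = stats.t.ppf(0.975, df=n - 1) * sem  # two-sided 95% band
    return mean, half_width

# Hypothetical repeated LDV velocity samples (m/s) at one measurement point
velocity_samples = [34.1, 33.8, 34.5, 34.0, 33.9, 34.3]
m, hw = mean_with_95ci(velocity_samples)
print(f"velocity = {m:.2f} +/- {hw:.2f} m/s (95% CI)")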
Véliz, David; Vega-Retter, Caren; Quezada-Romegialli, Claudio
2016-01-01
The complete sequence of the mitochondrial genome for the Chilean silverside Basilichthys microlepidotus is reported for the first time. The entire mitochondrial genome was 16,544 bp in length (GenBank accession no. KM245937); gene composition and arrangement conformed to that reported for most fishes and contained the typical structure of 2 rRNAs, 13 protein-coding genes, 22 tRNAs and a non-coding region. The assembled mitogenome was validated against sequences of COI and the control region previously sequenced in our lab, functional genes from RNA-Seq data for the same species, and the mitogenomes of two other atherinopsid species available in GenBank.
Optimizing Aspect-Oriented Mechanisms for Embedded Applications
NASA Astrophysics Data System (ADS)
Hundt, Christine; Stöhr, Daniel; Glesner, Sabine
As applications for small embedded mobile devices are getting larger and more complex, it becomes inevitable to adopt more advanced software engineering methods from the field of desktop application development. Aspect-oriented programming (AOP) is a promising approach due to its advanced modularization capabilities. However, existing AOP languages tend to add a substantial overhead in both execution time and code size which restricts their practicality for small devices with limited resources. In this paper, we present optimizations for aspect-oriented mechanisms at the level of the virtual machine. Our experiments show that these optimizations yield a considerable performance gain along with a reduction of the code size. Thus, our optimizations establish the base for using advanced aspect-oriented modularization techniques for developing Java applications on small embedded devices.
Photoactivatable Mussel-Based Underwater Adhesive Proteins by an Expanded Genetic Code.
Hauf, Matthias; Richter, Florian; Schneider, Tobias; Faidt, Thomas; Martins, Berta M; Baumann, Tobias; Durkin, Patrick; Dobbek, Holger; Jacobs, Karin; Möglich, Andreas; Budisa, Nediljko
2017-09-19
Marine mussels exhibit potent underwater adhesion abilities under hostile conditions by employing 3,4-dihydroxyphenylalanine (DOPA)-rich mussel adhesive proteins (MAPs). However, their recombinant production is a major biotechnological challenge. Herein, a novel strategy based on genetic code expansion has been developed by engineering efficient aminoacyl-transfer RNA synthetases (aaRSs) for the photocaged noncanonical amino acid ortho-nitrobenzyl DOPA (ONB-DOPA). The engineered ONB-DOPARS enables in vivo production of MAP type 5 site-specifically equipped with multiple instances of ONB-DOPA to yield photocaged, spatiotemporally controlled underwater adhesives. Upon exposure to UV light, these proteins feature elevated wet adhesion properties. This concept offers new perspectives for the production of recombinant bioadhesives. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Plans for wind energy system simulation
NASA Technical Reports Server (NTRS)
Dreier, M. E.
1978-01-01
A digital computer code and a special-purpose hybrid computer are introduced. The digital computer program, the Root Perturbation Method (RPM), is an implementation of the classic Floquet procedure that circumvents numerical problems associated with the extraction of Floquet roots. The hybrid computer, the Wind Energy System Time-domain simulator (WEST), yields real-time loads and deformation information essential to design and system stability investigations.
Imaging Gallium Nitride High Electron Mobility Transistors to Identify Point Defects
2014-03-01
The purpose of this thesis is to streamline the sample preparation procedure to maximize the yield of successful samples to be analyzed chemically in an energy dispersive spectrometry scanning transmission electron microscope (STEM).
Self-Shielded Flux Cored Wire Evaluation
1980-12-01
Naval Surface Warfare Center, Code 2230 (Design Integration Tools). Approved for public release. Tensile and yield strength, percent elongation, and percent reduction of area are reported; testing was performed with a Satec 400 WHVP tensile testing machine.
Haines, Brian M.; Aldrich, C. H.; Campbell, J. M.; ...
2017-04-24
In this study, we present the results of high-resolution simulations of the implosion of high-convergence layered indirect-drive inertial confinement fusion capsules of the type fielded on the National Ignition Facility using the xRAGE radiation-hydrodynamics code. In order to evaluate the suitability of xRAGE to model such experiments, we benchmark simulation results against available experimental data, including shock-timing, shock-velocity, and shell trajectory data, as well as hydrodynamic instability growth rates. We discuss the code improvements that were necessary in order to achieve favorable comparisons with these data. Due to its use of adaptive mesh refinement and Eulerian hydrodynamics, xRAGE is particularly well suited for high-resolution study of multi-scale engineering features such as the capsule support tent and fill tube, which are known to impact the performance of high-convergence capsule implosions. High-resolution two-dimensional (2D) simulations including accurate and well-resolved models for the capsule fill tube, support tent, drive asymmetry, and capsule surface roughness are presented. These asymmetry seeds are isolated in order to study their relative importance and the resolution of the simulations enables the observation of details that have not been previously reported. We analyze simulation results to determine how the different asymmetries affect hotspot reactivity, confinement, and confinement time and how these combine to degrade yield. Yield degradation associated with the tent occurs largely through decreased reactivity due to the escape of hot fuel mass from the hotspot. Drive asymmetries and the fill tube, however, degrade yield primarily via burn truncation, as associated instability growth accelerates the disassembly of the hotspot. Finally, modeling all of these asymmetries together in 2D leads to improved agreement with experiment but falls short of explaining the experimentally observed yield degradation, consistent with previous 2D simulations of such capsules.
Robertson, Dale M.; Saad, David A.; Schwarz, Gregory E.
2014-01-01
Nitrogen (N) and phosphorus (P) loading from the Mississippi/Atchafalaya River Basin (MARB) has been linked to hypoxia in the Gulf of Mexico. With geospatial datasets for 2002, including inputs from wastewater treatment plants (WWTPs), and monitored loads throughout the MARB, SPAtially Referenced Regression On Watershed attributes (SPARROW) watershed models were constructed specifically for the MARB, which reduced simulation errors from previous models. Based on these models, N loads/yields were highest from the central part (centered over Iowa and Indiana) of the MARB (Corn Belt), and the highest P yields were scattered throughout the MARB. Spatial differences in yields from previous studies resulted from different descriptions of the dominant sources (N yields are highest with crop-oriented agriculture and P yields are highest with crop and animal agriculture and major WWTPs) and different descriptions of downstream transport. Delivered loads/yields from the MARB SPARROW models are used to rank subbasins, states, and eight-digit Hydrologic Unit Code basins (HUC8s) by N and P contributions and then rankings are compared with those from other studies. Changes in delivered yields result in an average absolute change of 1.3 (N) and 1.9 (P) places in state ranking and 41 (N) and 69 (P) places in HUC8 ranking from those made with previous national-scale SPARROW models. This information may help managers decide where efforts could have the largest effects (highest ranked areas) and thus reduce hypoxia in the Gulf of Mexico.
NASA Astrophysics Data System (ADS)
Li, Jun-Li; Li, Chun-Yan; Qiu, Rui; Yan, Cong-Chong; Xie, Wen-Zhang; Zeng, Zhi; Tung, Chuan-Jong
2013-09-01
In order to study the influence of inelastic cross sections on the simulation of direct DNA strand breaks induced by low energy electrons, six different sets of inelastic cross section data were calculated and loaded into the Geant4-DNA code to calculate the DNA strand break yields under the same conditions. The six sets of inelastic cross sections were calculated by applying the dielectric function method of Emfietzoglou's optical-data treatments, with two different optical datasets and three different dispersion models, using the same Born corrections. Results show that the inelastic cross sections have a notable influence on the direct DNA strand break yields. The yields simulated with the inelastic cross sections based on Hayashi's optical data are greater than those based on Heller's optical data. The discrepancies are about 30-45% for the single strand break yields and 45-80% for the double strand break yields. Among the yields simulated with cross sections of the three dispersion models, the greatest are generally those of the extended-Drude dispersion model, followed by those of the extended-oscillator-Drude dispersion model, and then those of Ashley's δ-oscillator dispersion model. For the single strand break yields, the differences between the first two are very small and the differences between the last two are about 6-57%. For the double strand break yields, the biggest difference between the first two can be about 90% and the differences between the last two are about 17-70%.
NASA Astrophysics Data System (ADS)
Boss, Alan P.
2009-03-01
The disk instability mechanism for giant planet formation is based on the formation of clumps in a marginally gravitationally unstable protoplanetary disk, which must lose thermal energy through a combination of convection and radiative cooling if they are to survive and contract to become giant protoplanets. While there is good observational support for forming at least some giant planets by disk instability, the mechanism has become theoretically contentious, with different three-dimensional radiative hydrodynamics codes often yielding different results. Rigorous code testing is required to make further progress. Here we present two new analytical solutions for radiative transfer in spherical coordinates, suitable for testing the code employed in all of the Boss disk instability calculations. The testing shows that the Boss code radiative transfer routines do an excellent job of relaxing to and maintaining the analytical results for the radial temperature and radiative flux profiles for a spherical cloud with high or moderate optical depths, including the transition from optically thick to optically thin regions. These radial test results are independent of whether the Eddington approximation, diffusion approximation, or flux-limited diffusion approximation routines are employed. The Boss code does an equally excellent job of relaxing to and maintaining the analytical results for the vertical (θ) temperature and radiative flux profiles for a disk with a height proportional to the radial distance. These tests strongly support the disk instability mechanism for forming giant planets.
Comparison of ENDF/B-VII.1 and JEFF-3.2 in VVER-1000 operational data calculation
NASA Astrophysics Data System (ADS)
Frybort, Jan
2017-09-01
Safe operation of a nuclear reactor requires extensive computational support. Operational data are determined by full-core calculations during the design phase of a fuel loading. The loading pattern and the design of fuel assemblies are adjusted to meet safety requirements and optimize reactor operation. The nodal diffusion code ANDREA is used for this task in the case of Czech VVER-1000 reactors. Nuclear data for this diffusion code are prepared regularly by the lattice code HELIOS; these calculations are conducted in 2D at the fuel assembly level. The macroscopic data can also be calculated with the Monte Carlo code Serpent, which can make use of alternative evaluated libraries. All calculations are affected by inherent uncertainties in nuclear data. It is therefore useful to compare full-core calculations based on two sets of diffusion data obtained by Serpent calculations with the ENDF/B-VII.1 and JEFF-3.2 nuclear data, including the decay data libraries and fission yield data. The comparison is based both on fuel-assembly-level macroscopic data and on the resulting operational data. This study illustrates the effect of the evaluated nuclear data library on full-core calculations of a large PWR core. The level of difference that results exclusively from the nuclear data selection can help in understanding the inherent uncertainties of such full-core calculations.
Application guide for AFINCH (Analysis of Flows in Networks of Channels) described by NHDPlus
Holtschlag, David J.
2009-01-01
AFINCH (Analysis of Flows in Networks of CHannels) is a computer application that can be used to generate a time series of monthly flows at stream segments (flowlines) and water yields for catchments defined in the National Hydrography Dataset Plus (NHDPlus) value-added attribute system. AFINCH provides a basis for integrating monthly flow data from streamgages, water-use data, monthly climatic data, and land-cover characteristics to estimate natural monthly water yields from catchments by user-defined regression equations. Images of monthly water yields for active streamgages are generated in AFINCH and provide a basis for detecting anomalies in water yields, which may be associated with undocumented flow diversions or augmentations. Water yields are multiplied by the drainage areas of the corresponding catchments to estimate monthly flows. Flows from catchments are accumulated downstream through the streamflow network described by the stream segments. For stream segments where streamgages are active, ratios of measured to accumulated flows are computed. These ratios are applied to upstream water yields to proportionally adjust estimated flows to match measured flows. Flow is conserved through the NHDPlus network. A time series of monthly flows can be generated for stream segments that average about 1 mile long, or monthly water yields from catchments that average about 1 square mile. Estimated monthly flows can be displayed within AFINCH, examined for nonstationarity, and tested for monotonic trends. Monthly flows also can be used to estimate flow-duration characteristics at stream segments. AFINCH generates output files of monthly flows and water yields that are compatible with ArcMap, a geographical information system analysis and display environment. Choropleth maps of monthly water yield and flow can be generated and analyzed within ArcMap by joining NHDPlus data structures with AFINCH output. Matlab code for the AFINCH application is presented.
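The accumulate-and-adjust logic described above (yield times catchment area, summed downstream, rescaled at gaged segments) can be sketched in a few lines. This Python illustration is not AFINCH's Matlab code; all data structures and names are hypothetical, and the proportional upstream re-adjustment is only noted, not implemented.

# Minimal sketch of AFINCH-style downstream flow accumulation.
# Segments are assumed to be listed in upstream-to-downstream order.

def accumulate_flows(segments, upstream, water_yield, area, gage_flow):
    """Accumulate monthly flows downstream and pin flows at gaged segments.

    segments    : list of segment ids in topological order
    upstream    : dict mapping segment id -> list of immediately upstream ids
    water_yield : dict of estimated water yield per unit area (e.g., cfs/mi2)
    area        : dict of catchment drainage area per segment (mi2)
    gage_flow   : dict of measured flow at gaged segments (may be empty)
    """
    flow = {}
    for seg in segments:
        local = water_yield[seg] * area[seg]  # catchment contribution
        flow[seg] = local + sum(flow[u] for u in upstream.get(seg, []))
        if seg in gage_flow and flow[seg] > 0:
            # AFINCH computes the measured/accumulated ratio here and applies
            # it proportionally to upstream yields; in this sketch we only
            # pin the gaged segment so flow is conserved downstream of it.
            flow[seg] = gage_flow[seg]
    return flow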
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soare, S.; Cazacu, O.; Yoon, J. W.
With few exceptions, non-quadratic homogeneous polynomials have received little attention as possible candidates for yield functions. One reason might be that not every such polynomial is a convex function. In this paper we show that homogeneous polynomials can be used to develop powerful anisotropic yield criteria, and that imposing simple constraints on the identification process leads, a posteriori, to the desired convexity property. It is shown that combinations of such polynomials allow for modeling the yielding properties of metallic materials with any crystal structure, i.e., both cubic and hexagonal, the latter displaying strength differential effects. Extensions of the proposed criteria to 3D stress states are also presented. We apply these criteria to the description of the aluminum alloy AA2090T3 and prove that a sixth order orthotropic homogeneous polynomial is capable of a satisfactory description of this alloy. Next, applications to the deep drawing of a cylindrical cup are presented. The newly proposed criteria were implemented as UMAT subroutines in the commercial FE code ABAQUS. We were able to predict six ears on the AA2090T3 cup's profile. Finally, we show that a tension/compression asymmetry in yielding can have an important effect on the earing profile.
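For orientation, a hedged sketch of the framework in our own notation (not reproduced from the paper): a degree-n homogeneous polynomial yield function f satisfies

\[
  f(\lambda\,\boldsymbol{\sigma}) = \lambda^{n} f(\boldsymbol{\sigma}) \quad (\lambda > 0),
  \qquad \text{with yielding when } f(\boldsymbol{\sigma}) = \sigma_Y^{\,n},
\]

and convexity of the resulting yield surface requires the Hessian \(\partial^{2} f / \partial \sigma_i \, \partial \sigma_j\) to be positive semi-definite there, which is the property the constrained identification procedure is shown to deliver a posteriori.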
Exoplanet Yield Estimation for Decadal Study Concepts using EXOSIMS
NASA Astrophysics Data System (ADS)
Morgan, Rhonda; Lowrance, Patrick; Savransky, Dmitry; Garrett, Daniel
2016-01-01
The anticipated upcoming large mission study concepts for the direct imaging of exo-earths present an exciting opportunity for exoplanet discovery and characterization. While these telescope concepts would also be capable of conducting a broad range of astrophysical investigations, the most difficult technology challenges are driven by the requirements for imaging exo-earths. The exoplanet science yield for these mission concepts will drive design trades and mission concept comparisons. To assist in these trade studies, the Exoplanet Exploration Program Office (ExEP) is developing a yield estimation tool that emphasizes transparency and consistent comparison of various design concepts. The tool will provide a parametric estimate of science yield of various mission concepts using contrast curves from physics-based model codes and Monte Carlo simulations of design reference missions using realistic constraints, such as solar avoidance angles, the observatory orbit, propulsion limitations of star shades, the accessibility of candidate targets, local and background zodiacal light levels, and background confusion by stars and galaxies. The Python tool utilizes Dmitry Savransky's EXOSIMS (Exoplanet Open-Source Imaging Mission Simulator) design reference mission simulator that is being developed for the WFIRST Preliminary Science program. ExEP is extending and validating the tool for future mission concepts under consideration for the upcoming 2020 decadal review. We present a validation plan and preliminary yield results for a point design.
Activity-Dependent Human Brain Coding/Noncoding Gene Regulatory Networks
Lipovich, Leonard; Dachet, Fabien; Cai, Juan; Bagla, Shruti; Balan, Karina; Jia, Hui; Loeb, Jeffrey A.
2012-01-01
While most gene transcription yields RNA transcripts that code for proteins, a sizable proportion of the genome generates RNA transcripts that do not code for proteins, but may have important regulatory functions. The brain-derived neurotrophic factor (BDNF) gene, a key regulator of neuronal activity, is overlapped by a primate-specific, antisense long noncoding RNA (lncRNA) called BDNFOS. We demonstrate reciprocal patterns of BDNF and BDNFOS transcription in highly active regions of human neocortex removed as a treatment for intractable seizures. A genome-wide analysis of activity-dependent coding and noncoding human transcription using a custom lncRNA microarray identified 1288 differentially expressed lncRNAs, of which 26 had expression profiles that matched activity-dependent coding genes and an additional 8 were adjacent to or overlapping with differentially expressed protein-coding genes. The functions of most of these protein-coding partner genes, such as ARC, include long-term potentiation, synaptic activity, and memory. The nuclear lncRNAs NEAT1, MALAT1, and RPPH1, composing an RNAse P-dependent lncRNA-maturation pathway, were also upregulated. As a means to replicate human neuronal activity, repeated depolarization of SY5Y cells resulted in sustained CREB activation and produced an inverse pattern of BDNF-BDNFOS co-expression that was not achieved with a single depolarization. RNAi-mediated knockdown of BDNFOS in human SY5Y cells increased BDNF expression, suggesting that BDNFOS directly downregulates BDNF. Temporal expression patterns of other lncRNA-messenger RNA pairs validated the effect of chronic neuronal activity on the transcriptome and implied various lncRNA regulatory mechanisms. lncRNAs, some of which are unique to primates, thus appear to have potentially important regulatory roles in activity-dependent human brain plasticity. PMID:22960213
Hu, Yu; Zylberberg, Joel; Shea-Brown, Eric
2014-01-01
Over repeat presentations of the same stimulus, sensory neurons show variable responses. This “noise” is typically correlated between pairs of cells, and a question with rich history in neuroscience is how these noise correlations impact the population's ability to encode the stimulus. Here, we consider a very general setting for population coding, investigating how information varies as a function of noise correlations, with all other aspects of the problem (neural tuning curves, etc.) held fixed. This work yields unifying insights into the role of noise correlations. These are summarized in the form of theorems, and illustrated with numerical examples involving neurons with diverse tuning curves. Our main contributions are as follows. (1) We generalize previous results to prove a sign rule (SR): if noise correlations between pairs of neurons have opposite signs vs. their signal correlations, then coding performance will improve compared to the independent case. This holds for three different metrics of coding performance, and for arbitrary tuning curves and levels of heterogeneity. This generality is true for our other results as well. (2) As also pointed out in the literature, the SR does not provide a necessary condition for good coding. We show that a diverse set of correlation structures can improve coding. Many of these violate the SR, as do experimentally observed correlations. There is structure to this diversity: we prove that the optimal correlation structures must lie on boundaries of the possible set of noise correlations. (3) We provide a novel set of necessary and sufficient conditions under which the coding performance (in the presence of noise) will be as good as it would be if there were no noise present at all. PMID:24586128
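A small numerical illustration of the sign rule, using linear Fisher information as the coding metric; this is our own construction with hypothetical tuning values, not the paper's examples.

import numpy as np

# Two neurons whose tuning-curve derivatives f' share a sign (positive
# signal correlation), unit noise variances, and noise correlation rho.
fprime = np.array([1.0, 0.8])

def linear_fisher_info(rho):
    cov = np.array([[1.0, rho], [rho, 1.0]])       # noise covariance
    return fprime @ np.linalg.solve(cov, fprime)   # I = f'^T C^{-1} f'

for rho in (-0.3, 0.0, 0.3):
    print(f"rho = {rho:+.1f}:  I = {linear_fisher_info(rho):.3f}")
# Negative noise correlation (opposite in sign to the positive signal
# correlation) gives higher information than independence (rho = 0),
# as the sign rule predicts; positive noise correlation gives less.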
Unified analytic representation of physical sputtering yield
NASA Astrophysics Data System (ADS)
Janev, R. K.; Ralchenko, Yu. V.; Kenmotsu, T.; Hosaka, K.
2001-03-01
A generalized energy parameter η = η(ε, δ) and a normalized sputtering yield Ỹ(η), where ε = E/E_TF and δ = E_th/E_TF, are introduced to achieve a unified representation of all available experimental sputtering data at normal ion incidence. The sputtering data in the new Ỹ(η) representation retain their original uncertainties. The Ỹ(η) data can be fitted to a simple three-parameter analytic expression with an rms deviation of 32%, well within the uncertainties of the original data. Both η and Ỹ(η) have the correct physical behavior in the threshold and high-energy regions. The available theoretical data produced by the TRIM.SP code can also be represented by the same single analytic function Ỹ(η) with similar accuracy.
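In display form (reconstructed from the notation above, taking E_TF as the Thomas-Fermi energy scale and E_th as the sputtering threshold energy), the reduced variables are

\[
  \varepsilon = \frac{E}{E_{\mathrm{TF}}}, \qquad
  \delta = \frac{E_{\mathrm{th}}}{E_{\mathrm{TF}}}, \qquad
  \eta = \eta(\varepsilon, \delta),
\]

with the normalized yield Ỹ = Ỹ(η) fitted by the paper's three-parameter analytic expression (rms deviation 32%); the specific functional forms of η and Ỹ are given in the paper and are not reproduced here.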
Activation cross-sections of proton induced reactions on vanadium in the 37-65 MeV energy range
NASA Astrophysics Data System (ADS)
Ditrói, F.; Tárkányi, F.; Takács, S.; Hermanne, A.
2016-08-01
Experimental excitation functions for proton induced reactions on natural vanadium in the 37-65 MeV energy range were measured with the activation method using a stacked foil irradiation technique. By using high resolution gamma spectrometry cross-section data for the production of 51,48Cr, 48V, 48,47,46,44m,44g,43Sc and 43,42K were determined. Comparisons with the earlier published data are presented and results predicted by different theoretical codes (EMPIRE and TALYS) are included. Thick target yields were calculated from a fit to our experimental excitation curves and compared with the earlier experimental yield data. Depth distribution curves to be used for thin layer activation (TLA) are also presented.
Ally, Moonis Raza; Munk, Jeffrey D.; Baxter, Van D.; ...
2015-06-26
This twelve-month field study analyzes the performance of a 7.56 kW (2.16-ton) water-to-air ground source heat pump (WA-GSHP) used to satisfy domestic space conditioning loads in a 253 m² house in a mixed-humid climate in the United States. The practical feasibility of using the ground as a source of renewable energy is clearly demonstrated. Better than 75% of the energy needed for space heating was extracted from the ground. The average monthly electricity consumption for space conditioning was only 40 kWh at summer and winter thermostat set points of 24.4°C and 21.7°C, respectively. The WA-GSHP shared the same 94.5 m vertical bore ground loop with a separate water-to-water ground-source heat pump (WW-GSHP) for meeting domestic hot water needs in the same house. Sources of systemic irreversibility, the main cause of lost work, are identified using exergy and energy analysis. Quantifying the sources of exergy and energy losses is essential for further systemic improvements. The research findings suggest that WA-GSHPs are a practical and viable technology to reduce primary energy consumption and greenhouse gas emissions under the IECC 2012 Standard, as well as the European Union (EU) 2020 targets of using renewable energy resources.
Field Testing of Compartmentalization Methods for Multifamily Construction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ueno, K.; Lstiburek, J.
The 2012 IECC has an airtightness requirement of 3 air changes per hour at 50 Pascals test pressure for both single-family and multifamily construction in Climate Zones 3-8. Other programs (LEED, ASHRAE 189, ASHRAE 62.2) have similar or tighter compartmentalization requirements, driving the need for easier and more effective methods of compartmentalization in multifamily buildings. Builders and practitioners have found that fire-resistance rated wall assemblies are a major source of difficulty in air sealing/compartmentalization, particularly in townhouse construction. This problem is exacerbated when garages are “tucked in” to the units and living space is located over the garages. In this project, Building Science Corporation examined the taping of exterior sheathing details to improve air sealing results in townhouse and multifamily construction, when coupled with a better understanding of air leakage pathways. Current approaches are cumbersome, expensive, time consuming, and ineffective; these details were proposed as a more effective and efficient method. The effectiveness of these air sealing methods was tested with blower door testing, including “nulled” or “guarded” testing (adjacent units run at equal test pressure to null out inter-unit air leakage, or “pressure neutralization”). Pressure diagnostics were used to evaluate unit-to-unit connections and series leakage pathways (i.e., air leakage from exterior, into the fire-resistance rated wall assembly, and to the interior).
Apsidal rotation in the eclipsing binary AG Persei
NASA Technical Reports Server (NTRS)
Koch, Robert H.; Woodward, Edith J.
1987-01-01
New three-filter light curves of AG Per are given. These yield times of minimum light in accord with the known rate of apsidal rotation but do not improve that rate. These light curves and all other published historical ones have been treated with the code EBOP and are shown to give largely consistent geometric and photometric parameters no matter which orientation of the orbit is displayed to the observer.
Child Support Enforcement: A Framework for Evaluating Costs, Benefits, and Effects.
1991-03-01
While efforts to gain and enforce child support awards might yield additional collections on behalf of these children, they would surely entail additional costs ... a framework for evaluating the full costs and net effects of child support enforcement. This framework could assist your office and others in planning ... the following results of our developmental work: (1) models of the child support enforcement system activities.
Reactions of Hydrogen Chloride and Boron Trichloride with Trimethylsilylamino Groups
1989-04-04
... nitride preceramic polymers. Due to the low-yield multistage synthesis, alternate routes to isomeric compositions and intermediates needed to be ... Organometallic Polymers; Zeldin, M., Wynne, K. J., Allcock, H. R., Eds.; ACS Symposium Series 360.
2010-01-01
Seemingly not. Repeated measures analysis of variance (ANOVA) for the posttest - pretest score gain × training product interaction yielded a non-significant ... Work was accomplished under approved task AM-A-07-HRR-521 ... This research has two main ...
Ion Kinetics in Silane Plasmas
1988-04-20
... field and orthogonal to the excite plates. The image current is amplified, digitized, and Fourier analyzed to yield a spectrum of cyclotron ... d.c., microwave, or capacitively coupled radiofrequency electric fields. Alternatively, hollow cathode or electron beam approaches may be employed.
Solving Semantic Searches for Source Code
2012-11-01
... but of input and expected output pairs. In this domain, those inputs take the form of strings and outputs could be one of several datatypes ... for some relaxation of C_Pi that yields C′_Pi. Encoding weakening is performed by systematically making the constraints on a particular datatype ... the datatypes that can hold concrete or symbolic values: integers, characters, booleans, and strings. The Java implementation uses all the data types.
Photofission of 197Au and 209Bi at intermediate energies
NASA Astrophysics Data System (ADS)
Haba, H.; Sakamoto, K.; Igarashi, M.; Kasaoka, M.; Washiyama, K.; Matsumura, H.; Oura, Y.; Shibata, S.; Furukawa, M.; Fujiwara, I.
2003-01-01
Recoil properties and yields of radionuclides formed in the photofission of 197Au and 209Bi by bremsstrahlung of end-point energies (E0) from 300 to 1100 MeV have been investigated using the thick-target thick-catcher method. The kinetic energies T of the residual nuclei were deduced based on the two-step vector model and discussed by comparison with the reported results on proton-induced reactions as well as those on photospallation. The charge distribution was reproduced by a Gaussian function with the most probable charge Zp expressed as a linear function of the product mass number A and with an A-independent width FWHM_CD. Based on the charge distribution parameters, a symmetric mass yield distribution with the most probable mass Ap of 92 m.u. and a width FWHM_MD of 39 m.u. was obtained for 197Au at E0 ≥ 600 MeV. The Ap value for 209Bi was larger by 4 m.u. than that for 197Au, and the FWHM_MD was smaller by 6 m.u. A comparison with calculations using the Photon-induced Intranuclear Cascade Analysis 3 code combined with the Generalized Evaporation Model code (PICA3/GEM) was also performed.
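Written out, the parametrization described above (our reconstruction; the fit constants are not reproduced from the paper) is a Gaussian charge dispersion about a most probable charge linear in the product mass number:

\[
  \sigma(Z, A) \;\propto\; \exp\!\left[ -\frac{\bigl(Z - Z_p(A)\bigr)^{2}}{2\sigma_Z^{2}} \right],
  \qquad Z_p(A) = \alpha A + \beta,
  \qquad \mathrm{FWHM}_{CD} = 2\sqrt{2\ln 2}\;\sigma_Z,
\]

with α and β fit constants and FWHM_CD independent of A, as stated above.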
Numerical simulation of exploding pusher targets
NASA Astrophysics Data System (ADS)
Atzeni, S.; Rosenberg, M. J.; Gatu Johnson, M.; Petrasso, R. D.
2017-10-01
Exploding pusher targets, i.e. gas-filled large aspect-ratio glass or plastic shells driven by a strong laser-generated shock, are widely used as pulsed sources of neutrons and fast charged particles. Recent experiments on exploding pushers provided evidence for the transition from a purely fluid behavior to a kinetic one. Indeed, fluid models largely overpredict yield and temperature when the Knudsen number Kn (the ratio of the ion mean free path to the compressed gas radius) is comparable to or larger than one. At Kn = 0.3 - 1, fluid codes reasonably estimate integral quantities such as yield and neutron-averaged temperature, but do not reproduce burn radii, burn profiles, or the DD/D3He yield ratio. This motivated a detailed simulation study of intermediate-Kn exploding pushers. We will show how simulation results depend on models for laser interaction, electron conductivity (flux-limited local vs nonlocal), viscosity (physical vs artificial), and ion mixing. Work partially supported by Sapienza Project C26A15YTMA, Sapienza 2016 (n. 257584), and Eurofusion Project AWP17-ENR-IFE-CEA-01.
Bioterrorism-related Inhalational Anthrax in an Elderly Woman, Connecticut, 2001
Mead, Paul; Armstrong, Gregory L.; Painter, John; Kelley, Katherine A.; Hoffmaster, Alex R.; Mayo, Donald; Barden, Diane; Ridzon, Renee; Parashar, Umesh; Teshale, Eyasu Habtu; Williams, Jen; Noviello, Stephanie; Perz, Joseph F.; Mast, Eric E.; Swerdlow, David L.; Hadler, James L.
2003-01-01
On November 20, 2001, inhalational anthrax was confirmed in an elderly woman from rural Connecticut. To determine her exposure source, we conducted an extensive epidemiologic, environmental, and laboratory investigation. Molecular subtyping showed that her isolate was indistinguishable from isolates associated with intentionally contaminated letters. No samples from her home or community yielded Bacillus anthracis, and she received no first-class letters from facilities known to have processed intentionally contaminated letters. Environmental sampling in the regional Connecticut postal facility yielded B. anthracis spores from 4 (31%) of 13 sorting machines. One extensively contaminated machine primarily processes bulk mail. A second machine that does final sorting of bulk mail for her zip code yielded B. anthracis on the column of bins for her carrier route. The evidence suggests she was exposed through a cross-contaminated bulk mail letter. Such cross-contamination of letters and postal facilities has implications for managing the response to future B. anthracis–contaminated mailings. PMID:12781007
Effect on the Lunar Exosphere of a CME Passage
NASA Technical Reports Server (NTRS)
Killen, Rosemary M.; Hurley, Dana M.; Farrell, William M.; Sarantos, Menelaos
2011-01-01
It has long been recognized that solar wind bombardment onto exposed surfaces in the solar system will produce an energetic component to the exospheres about those bodies. Laboratory experiments have shown that the sputter yield can be noticeably increased in the case of a good insulating surface. It is now known that the solar wind composition is highly dependent on the origin of the particular plasma. Using the measured composition of the slow wind, fast wind, solar energetic particle (SEP) population, and coronal mass ejection (CME), broken down into its various components, we have estimated the total sputter yield for each type of solar wind. The heavy ion component, especially the He++ component, greatly enhances the total sputter yield during times when the heavy ion population is enhanced, most notably during a coronal mass ejection. To simulate the effect on the lunar exosphere of a CME passage past the Moon, we ran a Monte Carlo code for the species Na, K, Mg and Ca.
NEST: a comprehensive model for scintillation yield in liquid xenon
Szydagis, M.; Barry, N.; Kazkaz, K.; ...
2011-10-03
Here, a comprehensive model for explaining scintillation yield in liquid xenon is introduced. We unify various definitions of work function which abound in the literature and incorporate all available data on electron recoil scintillation yield. This results in a better understanding of electron recoil, and facilitates an improved description of nuclear recoil. An incident gamma energy range of O(1 keV) to O(1 MeV) and electric fields between 0 and O(10 kV/cm) are incorporated into this heuristic model. We show results from a Geant4 implementation, but because the model has a few free parameters, implementation in any simulation package should be simple. We use a quasi-empirical approach, with an objective of improving detector calibrations and performance verification. The model will aid in the design and optimization of future detectors. This model is also easy to extend to other noble elements. In this paper we lay the foundation for an exhaustive simulation code which we call NEST (Noble Element Simulation Technique).
Fast neutron production from lithium converters and laser driven protons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Storm, M.; Jiang, S.; Wertepny, D.
2013-05-15
Experiments to generate neutrons from the 7Li(p,n)7Be reaction with 60 J, 180 fs laser pulses have been performed at the Texas Petawatt Laser Facility at the University of Texas at Austin. The protons were accelerated from the rear surface of a thin target membrane using the target-normal-sheath-acceleration mechanism. The neutrons were generated in nuclear reactions caused by the subsequent proton bombardment of a pure lithium foil of natural isotopic abundance. The neutron energy ranged up to 2.9 MeV. The total yield was estimated to be 1.6 × 10^7 neutrons per steradian. An extreme ultraviolet light camera, used to image the target rear surface, correlated variations in the proton yield and peak energy to target rear surface ablation. Calculations using the hydrodynamics code FLASH indicated that the ablation resulted from a laser pre-pulse of prolonged intensity. The ablation severely limited the proton acceleration and neutron yield.
Self-Recirculating Casing Treatment Concept for Enhanced Compressor Performance
NASA Technical Reports Server (NTRS)
Hathaway, Michael D.
2002-01-01
A state-of-the-art CFD code (APNASA) was employed in a computationally based investigation of the impact of casing bleed and injection on the stability and performance of a moderate speed fan rotor wherein the stalling mass flow is controlled by tip flow field breakdown. The investigation was guided by observed trends in endwall flow characteristics (e.g., increasing endwall aerodynamic blockage) as stall is approached and based on the hypothesis that application of bleed or injection can mitigate these trends. The "best" bleed and injection configurations were then combined to yield a self-recirculating casing treatment concept. The results of this investigation yielded: 1) identification of the fluid mechanisms which precipitate stall of tip critical blade rows, and 2) an approach to recirculated casing treatment which results in increased compressor stall range with minimal or no loss in efficiency. Subsequent application of this approach to a high speed transonic rotor successfully yielded significant improvements in stall range with no loss in compressor efficiency.
Kavuluru, Ramakanth; Rios, Anthony; Lu, Yuan
2015-01-01
Background Diagnosis codes are assigned to medical records in healthcare facilities by trained coders who review all physician-authored documents associated with a patient's visit. This is a necessary and complex task involving coders adhering to coding guidelines and coding all assignable codes. With the popularity of electronic medical records (EMRs), computational approaches to code assignment have been proposed in recent years. However, most efforts have focused on single and often short clinical narratives, while realistic scenarios warrant full EMR level analysis for code assignment. Objective We evaluate supervised learning approaches to automatically assign international classification of diseases (ninth revision) - clinical modification (ICD-9-CM) codes to EMRs by experimenting with a large realistic EMR dataset. The overall goal is to identify methods that offer superior performance in this task when considering such datasets. Methods We use a dataset of 71,463 EMRs corresponding to in-patient visits with discharge date falling in a two-year period (2011–2012) from the University of Kentucky (UKY) Medical Center. We curate a smaller subset of this dataset and also use a third gold standard dataset of radiology reports. We conduct experiments using different problem transformation approaches with feature and data selection components and employing suitable label calibration and ranking methods with novel features involving code co-occurrence frequencies and latent code associations. Results Over all codes with at least 50 training examples we obtain a micro F-score of 0.48. On the set of codes that occur at least in 1% of the two-year dataset, we achieve a micro F-score of 0.54. For the smaller radiology report dataset, the classifier chaining approach yields best results. For the smaller subset of the UKY dataset, feature selection, data selection, and label calibration offer best performance. Conclusions We show that datasets at different scales (size of the EMRs, number of distinct codes) and with different characteristics warrant different learning approaches. For shorter narratives pertaining to a particular medical subdomain (e.g., radiology, pathology), classifier chaining is ideal given the codes are highly related to each other. For realistic in-patient full EMRs, feature and data selection methods offer high performance for smaller datasets. However, for large EMR datasets, we observe that the binary relevance approach with learning-to-rank based code reranking offers the best performance. Regardless of the training dataset size, for general EMRs, label calibration to select the optimal number of labels is an indispensable final step. PMID:26054428
Kavuluru, Ramakanth; Rios, Anthony; Lu, Yuan
2015-10-01
Diagnosis codes are assigned to medical records in healthcare facilities by trained coders who review all physician-authored documents associated with a patient's visit. This is a necessary and complex task involving coders adhering to coding guidelines and coding all assignable codes. With the popularity of electronic medical records (EMRs), computational approaches to code assignment have been proposed in recent years. However, most efforts have focused on single and often short clinical narratives, while realistic scenarios warrant full EMR level analysis for code assignment. We evaluate supervised learning approaches to automatically assign international classification of diseases (ninth revision) - clinical modification (ICD-9-CM) codes to EMRs by experimenting with a large realistic EMR dataset. The overall goal is to identify methods that offer superior performance in this task when considering such datasets. We use a dataset of 71,463 EMRs corresponding to in-patient visits with discharge date falling in a two-year period (2011-2012) from the University of Kentucky (UKY) Medical Center. We curate a smaller subset of this dataset and also use a third gold standard dataset of radiology reports. We conduct experiments using different problem transformation approaches with feature and data selection components and employing suitable label calibration and ranking methods with novel features involving code co-occurrence frequencies and latent code associations. Over all codes with at least 50 training examples we obtain a micro F-score of 0.48. On the set of codes that occur at least in 1% of the two-year dataset, we achieve a micro F-score of 0.54. For the smaller radiology report dataset, the classifier chaining approach yields best results. For the smaller subset of the UKY dataset, feature selection, data selection, and label calibration offer best performance. We show that datasets at different scales (size of the EMRs, number of distinct codes) and with different characteristics warrant different learning approaches. For shorter narratives pertaining to a particular medical subdomain (e.g., radiology, pathology), classifier chaining is ideal given the codes are highly related to each other. For realistic in-patient full EMRs, feature and data selection methods offer high performance for smaller datasets. However, for large EMR datasets, we observe that the binary relevance approach with learning-to-rank based code reranking offers the best performance. Regardless of the training dataset size, for general EMRs, label calibration to select the optimal number of labels is an indispensable final step. Copyright © 2015 Elsevier B.V. All rights reserved.
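A minimal sketch of the binary relevance baseline with a simple label calibration step, using scikit-learn; this is our own illustration with hypothetical toy data, and the paper's feature engineering, co-occurrence features, and learning-to-rank reranker are not reproduced here.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical training data: EMR note texts and their ICD-9-CM code sets.
notes = ["chest pain shortness of breath ...", "type 2 diabetes follow-up ..."]
codes = [["786.50", "786.05"], ["250.00"]]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(codes)
vec = TfidfVectorizer(max_features=50000)
X = vec.fit_transform(notes)

# Binary relevance: one logistic regression per code.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)

def predict_calibrated(texts, k=5):
    """Rank codes by probability and keep the top k per record.

    A fixed k is a stand-in for label calibration, which in the paper
    selects the optimal number of labels per EMR rather than a constant.
    """
    probs = clf.predict_proba(vec.transform(texts))
    top = np.argsort(-probs, axis=1)[:, :k]
    return [[mlb.classes_[j] for j in row] for row in top]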
Wong, Ngo Yin; Xing, Hang; Tan, Li Huey; Lu, Yi
2013-02-27
While much work has been devoted to nanoscale assembly of functional materials, selective reversible assembly of components in the nanoscale pattern at selective sites has received much less attention. Exerting such a reversible control of the assembly process will make it possible to fine-tune the functional properties of the assembly and to realize more complex designs. Herein, by taking advantage of different binding affinities of biotin and desthiobiotin toward streptavidin, we demonstrate selective and reversible decoration of DNA origami tiles with streptavidin, including revealing an encrypted Morse code "NANO" and reversible exchange of uppercase letter "I" with lowercase "i". The yields of the conjugations are high (>90%), and the process is reversible. We expect this versatile conjugation technique to be widely applicable with different nanomaterials and templates.
Association rule mining on grid monitoring data to detect error sources
NASA Astrophysics Data System (ADS)
Maier, Gerhild; Schiffers, Michael; Kranzlmueller, Dieter; Gaidioz, Benjamin
2010-04-01
Error handling is a crucial task in an infrastructure as complex as a grid. Several monitoring tools are in place that report failing grid jobs, including exit codes. However, the exit codes do not always denote the actual fault that caused the job failure. Human time and knowledge are required to manually trace errors back to the real fault underlying them. We perform association rule mining on grid job monitoring data to automatically retrieve knowledge about the behavior of grid components, taking dependencies between grid job characteristics into account. In this way, problematic grid components are located automatically, and this information, expressed by association rules, is visualized in a web interface. This work decreases the time needed for fault recovery and improves grid reliability.
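As a hedged sketch of the mining step on job records, here is a minimal example using the mlxtend library; the job attributes, values, and thresholds are hypothetical, and this is not the authors' implementation.

import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Hypothetical grid job monitoring records: each row is a failed job,
# each column a job characteristic (site, exit code, computing element).
jobs = pd.DataFrame([
    {"site": "CERN", "exit_code": "137", "ce": "ce01"},
    {"site": "CERN", "exit_code": "137", "ce": "ce01"},
    {"site": "FZK",  "exit_code": "0",   "ce": "ce02"},
    {"site": "CERN", "exit_code": "137", "ce": "ce03"},
])
onehot = pd.get_dummies(jobs).astype(bool)  # one item per attribute value

# Frequent attribute combinations among the monitored jobs ...
frequent = apriori(onehot, min_support=0.4, use_colnames=True)
# ... and rules such as {site_CERN} -> {exit_code_137}, whose confidence
# points at the grid component most associated with a failure pattern.
rules = association_rules(frequent, metric="confidence", min_threshold=0.8)
print(rules[["antecedents", "consequents", "support", "confidence"]])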
Prosdocimi, Francisco; Souto, Helena Magarinos; Ruschi, Piero Angeli; Furtado, Carolina; Jennings, W Bryan
2016-09-01
The genome of the versicoloured emerald hummingbird (Amazilia versicolor) was partially sequenced in one-sixth of an Illumina HiSeq lane. The mitochondrial genome was assembled using MIRA and MITObim software, yielding a circular molecule 16,861 bp in length, deposited in GenBank under the accession number KF624601. The mitogenome contained 13 protein-coding genes, 22 transfer RNAs, 2 ribosomal RNAs and 1 non-coding control region. The molecule was assembled using 21,927 sequencing reads of 100 bp each, resulting in ∼130× coverage of uniformly distributed reads along the genome. This is the fourth mitochondrial genome described for this highly diverse family of birds and may benefit further phylogenetic, phylogeographic, population genetic and species delimitation studies of hummingbirds.
Shen, Kang-Ning; Chen, Ching-Hung; Hsiao, Chung-Der
2016-05-01
In this study, the complete mitogenome sequence of the hornlip mullet Plicomugil labiosus (Teleostei: Mugilidae) was determined by a next-generation sequencing method. The assembled mitogenome, consisting of 16,829 bp, had the typical vertebrate mitochondrial gene arrangement, including 13 protein-coding genes, 22 transfer RNAs, 2 ribosomal RNA genes and a non-coding control region (D-loop). The D-loop, 1057 bp in length, is located between tRNA-Pro and tRNA-Phe. The overall base composition of P. labiosus is 28.0% A, 29.3% C, 15.5% G and 27.2% T. The complete mitogenome may provide essential and important DNA molecular data for further population, phylogenetic and evolutionary analyses of Mugilidae.
Shen, Kang-Ning; Tsai, Shiou-Yi; Chen, Ching-Hung; Hsiao, Chung-Der; Durand, Jean-Dominique
2016-11-01
In this study, the complete mitogenome sequence of the largescale mullet (Teleostei: Mugilidae) was determined by a next-generation sequencing method. The assembled mitogenome, consisting of 16,832 bp, had the typical vertebrate mitochondrial gene arrangement, including 13 protein-coding genes, 22 transfer RNAs, two ribosomal RNA genes, and a non-coding control region (D-loop). The D-loop, which has a length of 1094 bp, is located between tRNA-Pro and tRNA-Phe. The overall base composition of the largescale mullet is 27.8% A, 30.1% C, 16.2% G, and 25.9% T. The complete mitogenome may provide essential and important DNA molecular data for further phylogenetic and evolutionary analyses of Mugilidae.
Locating relationship and communication issues among stressors associated with breast cancer.
Weber, Kirsten M; Solomon, Denise Haunani
2008-11-01
This article clarifies how the social contexts in which breast cancer survivors live can contribute to the stress they experience because of the disease. Guided by Solomon and Knobloch's (2004) relational turbulence model and Petronio's (2002) communication privacy management theory, this study explores personal relationship and communication boundary issues within stressors that are associated with the diagnosis, treatment, and early survivorship of breast cancer. A qualitative analysis of discourse posted on breast cancer discussion boards and weblogs using the constant comparative method and open-coding techniques revealed 12 sources of stress. Using axial coding methods and probing these topics for underlying relationship and communication issues yielded 5 themes. The discussion highlights the implications of the findings for the theories that guided this investigation and for breast cancer survivorship more generally.
(86)Y production via (86)Sr(p,n) for PET imaging at a cyclotron.
Sadeghi, M; Aboudzadeh, M; Zali, A; Zeinali, B
2009-01-01
Excitation functions for (86)Y production via the (86)Sr(p,xn), (86)Sr(d,xn), (85)Rb(alpha,xn), (85)Rb((3)He,xn), and (nat)Zr(d,alphaxn) reactions were studied by means of the ALICE-ASH code, and the results were compared with the ALICE-91 code and experimental data. The most favorable nuclear reaction for cyclotron production of (86)Y was found to be the (86)Sr(p,n)(86)Y process. The (86)Y production yield was also calculated. A thick SrCO(3) film was deposited on a copper substrate by a sedimentation method. The deposited (nat)SrCO(3) was irradiated with 15 MeV protons at a 30 microA beam current. The separation of Y from Cu and Sr was carried out by means of dual ion exchange chromatography.
High-Energy Activation Simulation Coupling TENDL and SPACS with FISPACT-II
NASA Astrophysics Data System (ADS)
Fleming, Michael; Sublet, Jean-Christophe; Gilbert, Mark
2018-06-01
To address the needs of activation-transmutation simulation in incident-particle fields with energies above a few hundred MeV, the FISPACT-II code has been extended to splice TENDL standard ENDF-6 nuclear data with extended nuclear data forms. The JENDL-2007/HE and HEAD-2009 libraries were processed for FISPACT-II and used to demonstrate the capabilities of the new code version. Tests of the libraries and comparisons against both experimental yield data and the most recent intra-nuclear cascade model results demonstrate that there is a need for improved nuclear data libraries up to and above 1 GeV. Simulations on lead targets show that important radionuclides, such as 148Gd, can vary by more than an order of magnitude, where more advanced models find agreement within the experimental uncertainties.
Pion Production from 5-15 GeV Beam for the Neutrino Factory Front-End Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prior, Gersende
2010-03-30
For the neutrino factory front-end study, the production of pions from a proton beam of 5-8 and 14 GeV kinetic energy on a Hg jet target has been simulated. The pion yields for two versions of the MARS15 code and two different field configurations have been compared. The particles have also been tracked from the target position down to the end of the cooling channel using the ICOOL code and the neutrino factory baseline lattice. The momentum-angle region of pions producing muons that survive to the end of the cooling channel has been compared with the region covered by HARP data, and the number of pions/muons as a function of the incoming beam energy is also reported.
INL Results for Phases I and III of the OECD/NEA MHTGR-350 Benchmark
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhard Strydom; Javier Ortensi; Sonat Sen
2013-09-01
The Idaho National Laboratory (INL) Very High Temperature Reactor (VHTR) Technology Development Office (TDO) Methods Core Simulation group led the construction of the Organization for Economic Cooperation and Development (OECD) Modular High Temperature Reactor (MHTGR) 350 MW benchmark for comparing and evaluating prismatic VHTR analysis codes. The benchmark is sponsored by the OECD's Nuclear Energy Agency (NEA), and the project will yield a set of reference steady-state, transient, and lattice depletion problems that can be used by the Department of Energy (DOE), the Nuclear Regulatory Commission (NRC), and vendors to assess their code suites. The Methods group is responsible for defining the benchmark specifications, leading the data collection and comparison activities, and chairing the annual technical workshops. This report summarizes the latest INL results for Phase I (steady state) and Phase III (lattice depletion) of the benchmark. The INSTANT, Pronghorn, and RattleSnake codes were used for the standalone core neutronics modeling of Exercise 1, and the results obtained from these codes are compared in Section 4. Exercise 2 of Phase I requires the standalone steady-state thermal fluids modeling of the MHTGR-350 design, and the results for the systems code RELAP5-3D are discussed in Section 5. The coupled neutronics and thermal fluids steady-state solution for Exercise 3 is reported in Section 6, utilizing the newly developed Parallel and Highly Innovative Simulation for INL Code System (PHISICS)/RELAP5-3D code suite. Finally, the lattice depletion models and results obtained for Phase III are compared in Section 7. The MHTGR-350 benchmark proved to be a challenging set of simulation problems to model accurately, and even with the simplifications introduced in the benchmark specification this activity is an important step in the code-to-code verification of modern prismatic VHTR codes. A final OECD/NEA comparison report will compare the Phase I and III results of all international participants in 2014, while the remaining Phase II transient case results will be reported in 2015.
NASA Astrophysics Data System (ADS)
Ali, Y.; Tabassam, U.; Suleymanov, M.; Bhatti, A. S.
2017-10-01
Transverse momentum (pT) distributions of primary charged particles were compared to simulations using the Ultra-relativistic Quantum Molecular Dynamics (UrQMD) transport model and the HIJING 1.0 model in minimum bias p-Pb collisions at √sNN = 5.02 TeV in the pseudorapidity (η) regions |η| < 0.3, 0.3 < η < 0.8 and 0.8 < η < 1.3, and in the transverse momentum range 0.5 < pT < 20 GeV/c. The simulated distributions were then compared with the ALICE data, and it was observed that UrQMD predicts systematically higher yields than HIJING 1.0. Neither code can describe the experimental data over the full range 0.5 < pT < 20 GeV/c, though for pT > 5 GeV/c the model predictions are very close to the experimental results for particles with |η| < 0.3 and 0.3 < η < 0.8. The ratio of the yield at forward pseudorapidity to that at |η| < 0.3 was also studied. It was observed that the predictions of the models depend on η. In the experiment there is no essential difference between the yields for particles from the intervals |η| < 0.3, 0.3 < η < 0.8 and 0.8 < η < 1.3. The differences are significant for the models, where the ratios are systematically less than 1. This means that the results are not connected to a medium effect but reflect the Cronin effect. We are led to conclude that the codes cannot satisfactorily take into account the leading effect due to the asymmetric p-Pb fragmentation.
An anisotropic elastoplastic constitutive formulation generalised for orthotropic materials
NASA Astrophysics Data System (ADS)
Mohd Nor, M. K.; Ma'at, N.; Ho, C. S.
2018-03-01
This paper presents a finite strain constitutive model to predict the complex elastoplastic deformation behaviour that involves very high pressures and shockwaves in orthotropic materials, using an anisotropic Hill's yield criterion by means of evolving structural tensors. The yield surface of this hyperelastic-plastic constitutive model is aligned uniquely within the principal stress space owing to the combination of the Mandel stress tensor and a new generalised orthotropic pressure. The formulation is developed in the isoclinic configuration and allows for a unique treatment of elastic and plastic orthotropy. Isotropic hardening is adopted to define the evolution of plastic orthotropy. An important feature of the proposed hyperelastic-plastic constitutive model is the introduction of an anisotropic effect in the Mie-Gruneisen equation of state (EOS). The formulation is further combined with the Grady spall failure model to predict spall failure in the materials. The proposed constitutive model is implemented as a new material model, named Material Type 92 (Mat92), in the UTHM version of the Lawrence Livermore National Laboratory (LLNL) DYNA3D code. The combination of the proposed stress tensor decomposition and the Mie-Gruneisen EOS requires some modifications in the code to reflect the formulation of the generalised orthotropic pressure. The validation approach is also presented in this paper for guidance. The ψ tensor used to define the alignment of the adopted yield surface is validated first. This is followed by internal validations of the elastic isotropic, elastic orthotropic and elastic-plastic orthotropic parts of the proposed formulation, before a comparison against a range of plate impact test data at impact velocities of 234, 450 and 895 m s⁻¹ is performed. A good agreement is obtained in each test.
Chumney, Elinor C G; Biddle, Andrea K; Simpson, Kit N; Weinberger, Morris; Magruder, Kathryn M; Zelman, William N
2004-01-01
As cost-effectiveness analyses (CEAs) are increasingly used to inform policy decisions, there is a need for more information on how different cost determination methods affect cost estimates and the degree to which the resulting cost-effectiveness ratios (CERs) may be affected. The lack of specificity of diagnosis-related groups (DRGs) could mean that they are ill-suited for costing applications in CEAs, yet the implications of using International Classification of Diseases, 9th edition (ICD-9) codes or a form of disease-specific risk group stratification instead of DRGs have yet to be clearly documented. Our objective was to demonstrate the implications of different disease coding mechanisms on costs and the magnitude of error that could be introduced in head-to-head comparisons of the resulting CERs. We based our analyses on a previously published Markov model for HIV/AIDS therapies. We used the Healthcare Cost and Utilisation Project Nationwide Inpatient Sample (HCUP-NIS) data release 6, which contains all-payer data on hospital inpatient stays from selected states. We added costs for the mean number of hospitalisations, derived from analyses based on either DRG or ICD-9 codes or risk group stratification cost weights, to the standard outpatient and prescription drug costs to yield an estimate of total charges for each AIDS-defining illness (ADI). Finally, we estimated the Markov model three times with the appropriate ADI cost weights to obtain CERs specific to the use of DRG codes, ICD-9 codes or risk groups. Contrary to expectations, we found that disease-specific coding/grouping by DRG codes, ICD-9 codes or risk group resulted in very similar CER estimates for highly active antiretroviral therapy. The large variations in the specific ADI cost weights across the three coding approaches were especially interesting; however, because no one approach produced consistently higher estimates than the others, the Markov model's weighted cost per event and resulting CERs were remarkably close in value to one another. Although DRG codes are based on broader categories and contain less information than ICD-9 codes, in practice the choice of whether to use DRGs or ICD-9 codes may have little effect on CEA results in heterogeneous conditions such as HIV/AIDS.
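The following toy computation illustrates the paper's central observation: even when per-illness cost weights differ across coding schemes, the frequency-weighted cost per event, and hence the CER, can end up nearly identical. All numbers are invented for illustration and are not from the study.

```python
# Illustrative only: three coding schemes assign different cost weights to
# the same AIDS-defining illnesses (ADIs), yet the weighted cost per event
# and the resulting CER barely differ because no scheme is uniformly higher.
adi_freq = {"PCP": 0.40, "MAC": 0.35, "CMV": 0.25}    # hypothetical ADI mix
cost_weights = {                                       # per-hospitalization
    "DRG":  {"PCP":  9000, "MAC": 14000, "CMV": 11000},
    "ICD9": {"PCP": 11000, "MAC": 12000, "CMV": 10000},
    "risk": {"PCP": 10000, "MAC": 13000, "CMV": 10500},
}
delta_effect = 0.8                                     # incremental QALYs (toy)
for scheme, w in cost_weights.items():
    weighted_cost = sum(adi_freq[a] * w[a] for a in adi_freq)
    print(scheme, round(weighted_cost / delta_effect, 1))
```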
Modeling of hydrocarbon sputtering in Tore Supra
NASA Astrophysics Data System (ADS)
Hogan, J.; Gauthier, E.; Cambe, A.; Layet, J.-M.
2002-11-01
The use of carbon in fusion devices introduces problems of erosion and tritium retention that are related to chemical sputtering. The in-situ chemical sputtering yield of carbon has recently been measured in a well-diagnosed SOL plasma near the neutralizer plate in the Tore Supra Outboard Pump Limiter. Methane and heavier hydrocarbon (C2DX and C3DY) emission has been measured in ohmic and lower hybrid heated discharges, using mass and optical molecular spectroscopy [1]. The Monte Carlo code BBQ has been used both to validate the method used to obtain the sputtering yields and for direct comparison with available accelerator-based sputtering yields reported in the literature. A comparison with the predicted surface temperature and particle flux dependence is also presented, for both CD4 and the heavier hydrocarbon yields. The particle flux dependence is found to be complex, since changes in mean free path also accompany variations in particle flux. For the temperature dependence of methane erosion, the Roth annealing model is found to provide a better fit than the hydrogenation-moderated model. [1] A. Cambe, thesis, 2002. ORNL: Supported by U.S. DOE Contract DE-AC05-00OR22725.
Update of the α-n Yields for Reactor Fuel Materials for the Interest of Nuclear Safeguards
NASA Astrophysics Data System (ADS)
Simakov, S. P.; van den Berg, Q. Y.
2017-01-01
The neutron yields caused by spontaneous α-decay of actinides and subsequent (α,xn) reactions were re-evaluated for the reactor fuel materials UO2, UF6, PuO2 and PuF4. For this purpose, the most recent reference data for decay parameters, α-particle stopping powers and (α,xn) cross sections were collected, analysed and used in the calculations. The input data and the developed code were validated against available thick-target neutron yields in pure and compound materials measured at accelerators or with radioactive sources. This paper provides the specific neutron yields and their uncertainties resulting from the α-decay of the actinides 241Am, 249Bk, 252Cf, 242,244Cm, 237Np, 238-242Pu, 232Th and 232-236,238U in oxide and fluoride compounds. The results update the previous reference tables issued by the Los Alamos National Laboratory in 1991, which were used for the safeguarding of radioactive materials by passive non-destructive techniques. A comparison of the updated values with the previous ones shows agreement within one estimated uncertainty (≈10%) for oxides, and deviations of up to 50% for fluorides.
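The quantity being re-evaluated is, at its core, a thick-target yield integral. A minimal numerical sketch follows, with placeholder cross-section and stopping-power curves standing in for the evaluated data the authors actually used.

```python
# Numerical sketch of the thick-target neutron yield the evaluation rests on:
#   Y(E0) = integral_0^E0 sigma(E) / S(E) dE,
# where sigma is the (alpha,xn) cross section [cm^2] and S is the stopping
# cross section per atom [MeV cm^2]. Both curves below are toy placeholders.
import numpy as np

E = np.linspace(0.1, 5.5, 400)                  # alpha energy grid [MeV]
sigma = 1e-25 * np.clip(E - 2.0, 0.0, None)**2  # toy cross section [cm^2]
S = 1.5e-15 * (1.0 + 4.0 / E)                   # toy stopping power [MeV cm^2]

f = sigma / S                                   # integrand [1/MeV]
yield_per_alpha = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(E))  # trapezoid rule
print(f"toy thick-target yield: {yield_per_alpha:.2e} neutrons per alpha")
```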
Studies of fission fragment yields via high-resolution γ-ray spectroscopy
NASA Astrophysics Data System (ADS)
Wilson, J. N.; Lebois, M.; Qi, L.; Amador-Celdran, P.; Bleuel, D.; Briz, J. A.; Carroll, R.; Catford, W.; Witte, H. De; Doherty, D. T.; Eloirdi, R.; Georgiev, G.; Gottardo, A.; Goasduff, A.; Hadyñska-Klek, K.; Hauschild, K.; Hess, H.; Ingeberg, V.; Konstantinopoulos, T.; Ljungvall, J.; Lopez-Martens, A.; Lorusso, G.; Lozeva, R.; Lutter, R.; Marini, P.; Matea, I.; Materna, T.; Mathieu, L.; Oberstedt, A.; Oberstedt, S.; Panebianco, S.; Podolyak, Zs.; Porta, A.; Regan, P. H.; Reiter, P.; Rezynkina, K.; Rose, S. J.; Sahin, E.; Seidlitz, M.; Serot, O.; Shearman, R.; Siebeck, B.; Siem, S.; Smith, A. G.; Tveten, G. M.; Verney, D.; Warr, N.; Zeiser, F.; Zielinska, M.
2018-03-01
Precise spectroscopic information on fast-neutron-induced fission via the 238U(n,f) reaction was recently obtained using a new technique that couples the Miniball high-resolution γ-ray spectrometer to the LICORNE directional neutron source. The experiment allowed measurement of the isotopic fission yields for around 40 even-even nuclei at an incident neutron energy of around 2 MeV, where yield data are very sparse. In addition, spectroscopic information on very neutron-rich fission products was obtained. Results were compared to models, both the JEFF-3.1.1 data base and the GEF code, and large discrepancies for the S1 fission mode in the Sn/Mo isotope pair were discovered. This suggests that current models overestimate the role played by spherical shell effects in fast-neutron-induced fission. In late 2017 and 2018 the nu-ball hybrid spectrometer will be constructed at the IPN Orsay to perform further experimental investigations with directional neutrons coupled to a powerful hybrid Ge/LaBr3 detector array. This will open up new possibilities for measurements of fission yields for fast-neutron-induced fission using the spectroscopic technique and will be complementary to other methods being developed.
Tradeoffs between vigor and yield for crops grown under different management systems
NASA Astrophysics Data System (ADS)
Simic Milas, Anita; Keller Vincent, Robert; Romanko, Matthew; Feitl, Melina; Rupasinghe, Prabha
2016-04-01
Remote sensing can provide an effective means for rapid and non-destructive monitoring of crop status and biochemistry. Monitoring patterns of traditional vigor algorithms generated from Landsat 8 OLI satellite data provides a robust method that can be widely used to differentiate the status of crops, as well as to monitor the nutrient-uptake functionality of differently treated seeds grown under different managements. This study considers 24 factorial parcels of winter wheat in 2013, corn in 2014, and soybeans in 2015, grown under four different types of agricultural management. The parcels are located at the Kellogg Biological Station, a Long-Term Ecological Research site in the State of Michigan, USA. At maturity, the organic crops exhibit significantly higher vigor and significantly lower yield than conventionally managed crops under different treatments. While organic crops invest in their metabolism at the expense of their yield, the conventional crops manage to increase their yield at the expense of their vigor. Landsat 8 OLI is capable of (1) differentiating the biochemical status of crops under different treatments at maturity, (2) monitoring the tradeoff between crop yield and vigor that can be controlled by seed treatments and proper conventional applications, with the ultimate goal of increasing food yield and availability, and (3) distinguishing between organically and conventionally treated crops. Timing, quantity and types of herbicide applications have a great impact on the early and pre-harvest vigor, maturity and yield of conventionally treated crops. Satellite monitoring using Landsat 8 is an optimal tool for coordinating agricultural applications, soil practices and the genetic coding of the crop to produce higher yield as well as earlier crop maturity, desirable in northern climates.
Park, Jinhyoung; Li, Xiang; Zhou, Qifa; Shung, K. Kirk
2013-01-01
The application of chirp coded excitation to pulse inversion tissue harmonic imaging can increase the signal to noise ratio. On the other hand, the elevation of the range side lobe level, caused by leakage of the fundamental signal, has been problematic in mechanical scanners, which are still the most prevalent in high frequency intravascular ultrasound imaging. Fundamental chirp coded excitation imaging can achieve range side lobe levels lower than -60 dB with a Hanning window, but it yields higher side lobe levels than pulse inversion chirp coded tissue harmonic imaging (PI-CTHI). Therefore, in this paper a combined pulse inversion chirp coded tissue harmonic and fundamental imaging mode (CPI-CTHI) is proposed to retain the advantages of both chirp coded harmonic and fundamental imaging modes, demonstrated by 20-60 MHz phantom and ex vivo results. A simulation study shows that the range side lobe level of CPI-CTHI is 16 dB lower than that of PI-CTHI, assuming that the transducer translates incident positions by 50 μm when the two beamlines of a pulse inversion pair are acquired. CPI-CTHI is implemented on a prototyped intravascular ultrasound scanner capable of combined data acquisition in real time. A wire phantom study shows that CPI-CTHI has a 12 dB lower range side lobe level and a 7 dB higher echo signal to noise ratio than PI-CTHI, while the lateral resolution and side lobe level are 50 μm finer and 3 dB lower, respectively, than fundamental chirp coded excitation imaging. Ex vivo scanning of a rabbit trachea demonstrates that CPI-CTHI is capable of visualizing blood vessels as small as 200 μm in diameter with 6 dB better tissue contrast than either PI-CTHI or fundamental chirp coded excitation imaging. These results clearly indicate that CPI-CTHI may enhance tissue contrast with a lower range side lobe level than PI-CTHI. PMID:22871273
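The pulse-inversion principle underlying PI-CTHI can be demonstrated in a few lines: summing a medium's response to a chirp and to its inverted copy cancels the fundamental and retains the second harmonic. The toy quadratic medium and chirp parameters below are assumptions for illustration only.

```python
# Toy demonstration of pulse inversion: a weakly nonlinear medium
# y = p + a*p^2 cancels the fundamental when the echoes of a pulse and its
# inverted copy are summed, leaving (twice) the quadratic harmonic term.
import numpy as np

fs, f0, a = 400e6, 40e6, 0.1                       # sample rate, center freq
t = np.arange(0, 2e-6, 1 / fs)
chirp = np.sin(2 * np.pi * (f0 * t + 2.5e12 * t**2))   # 40 -> 50 MHz sweep
medium = lambda p: p + a * p**2                    # quadratic nonlinearity
pi_sum = medium(chirp) + medium(-chirp)            # pulse-inversion pair

spec = np.abs(np.fft.rfft(pi_sum))
f = np.fft.rfftfreq(len(t), 1 / fs)
print("energy near f0 :", spec[(f > 35e6) & (f < 55e6)].sum())   # ~0
print("energy near 2f0:", spec[(f > 75e6) & (f < 105e6)].sum())  # large
```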
Toward enhancing the distributed video coder under a multiview video codec framework
NASA Astrophysics Data System (ADS)
Lee, Shih-Chieh; Chen, Jiann-Jone; Tsai, Yao-Hong; Chen, Chin-Hua
2016-11-01
The advance of video coding technology enables multiview video (MVV) or three-dimensional television (3-D TV) display for users with or without glasses. For mobile devices or wireless applications, a distributed video coder (DVC) can be utilized to shift the encoder complexity to the decoder under the MVV coding framework, denoted as multiview distributed video coding (MDVC). We propose to exploit both inter- and intraview video correlations to enhance the side information (SI) and improve MDVC performance: (1) based on the multiview motion estimation (MVME) framework, a categorized block matching prediction with fidelity weights (COMPETE) is proposed to yield a high quality SI frame for better DVC reconstructed images; (2) the block transform coefficient properties, i.e., DCs and ACs, are exploited to design a priority rate control for the turbo code, such that DVC decoding can be carried out with the fewest parity bits. In comparison, the proposed COMPETE method demonstrated lower time complexity while presenting better reconstructed video quality. Simulations show that the proposed COMPETE can reduce the time complexity of MVME by a factor of 1.29 to 2.56 compared to previous hybrid MVME methods, while the peak signal to noise ratios (PSNRs) of the decoded video can be improved by 0.2 to 3.5 dB compared to H.264/AVC intracoding.
Neural network for image compression
NASA Astrophysics Data System (ADS)
Panchanathan, Sethuraman; Yeap, Tet H.; Pilache, B.
1992-09-01
In this paper, we propose a new scheme for image compression using neural networks. Image data compression deals with minimizing the amount of data required to represent an image while maintaining acceptable quality. Several image compression techniques have been developed in recent years, and their coding performance may be improved by employing adaptivity. Over the last few years, neural networks have emerged as an effective tool for solving a wide range of problems involving adaptivity and learning. A multilayer feed-forward neural network trained using the backward error propagation algorithm is used in many applications. However, this model is not suitable for image compression because of its poor coding performance. Recently, a self-organizing feature map (SOFM) algorithm has been proposed which yields good coding performance; however, it requires a long training time because the network starts with random initial weights. In this paper we use the backward error propagation (BEP) algorithm to quickly obtain initial weights, which are then used to speed up the training time required by the SOFM algorithm. The proposed approach (BEP-SOFM) combines the advantages of the two techniques and, hence, achieves good coding performance in a shorter training time. Our simulation results demonstrate the potential gains of the proposed technique.
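A compact sketch of the two-stage BEP-SOFM idea on random stand-in data: a tiny linear autoencoder trained by gradient descent supplies the initial codebook, which a self-organizing feature map then refines for vector quantization of image blocks. Network sizes, learning rates, and schedules are illustrative guesses, not the paper's settings.

```python
# Stage 1 ("BEP"): linear autoencoder trained by gradient descent.
# Stage 2 ("SOFM"): decoder rows seed a 4x4 self-organizing map codebook.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((2000, 16))                    # flattened 4x4 image blocks

k = 16                                        # codebook size
W = rng.normal(0, 0.1, (16, k))               # encoder weights
V = rng.normal(0, 0.1, (k, 16))               # decoder weights
for _ in range(300):                          # gradient descent on ||XWV - X||^2
    H = X @ W
    E = H @ V - X
    V -= 0.1 * H.T @ E / len(X)
    W -= 0.1 * X.T @ (E @ V.T) / len(X)

book = V.copy()                               # BEP-derived initial codebook
grid = np.array([(i, j) for i in range(4) for j in range(4)])  # 4x4 map
for t, x in enumerate(X[rng.permutation(len(X))]):
    lr = 0.5 * (1 - t / len(X))               # decaying learning rate
    sig = 2.0 * (1 - t / len(X)) + 0.1        # shrinking neighborhood
    bmu = np.argmin(((book - x) ** 2).sum(axis=1))   # best matching unit
    h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * sig**2))
    book += lr * h[:, None] * (x - book)

# Compression: each block is replaced by the index of its nearest codeword.
idx = np.argmin(((X[:, None, :] - book[None]) ** 2).sum(-1), axis=1)
print("codebook usage:", np.bincount(idx, minlength=k))
```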
NASA Technical Reports Server (NTRS)
Lacey, J. C., Jr.; Stephens, D. P.; Fox, S. W.
1979-01-01
The formation of phase-separated microparticles following the mixing of solutions of homopolyribonucleotides with solutions of several basic thermal proteinoids, each rich in an individual amino acid, has been studied. Three of the four proteinoids studied yielded results consistent with a matrix of anticodonicity; the fourth did not. The meaning of these results, and others, relative to a postulated matrix for the genetic coding mechanism is discussed.
Progress Toward a Multidimensional Representation of the 5.56-mm Interior Ballistics
2009-08-01
Cheetah thermochemical calculations (Cheetah Code, version 4.0; Lawrence Livermore National Laboratory: Livermore, CA, 2005) were performed as a check of all the major species formed at one atmosphere pressure. Impermeable boundaries that yield to gas-dynamic flow only after a prescribed pressure load is reached act as rigid bodies within the chamber.
Williams, M. L.; Wiarda, D.; Ilas, G.; ...
2014-06-15
Recently, we processed a new covariance data library based on ENDF/B-VII.1 for the SCALE nuclear analysis code system. The multigroup covariance data are discussed here, along with testing and application results for critical benchmark experiments. Moreover, the cross section covariance library, along with covariances for fission product yields and decay data, is used to compute uncertainties in the decay heat produced by a burned reactor fuel assembly.
A Generalized DBMS to Support Diversified Data.
1987-07-21
...interest on bonds). Hence, they require a definition of subtraction which yields 30 days as the answer to the above computation. Only a user-defined... [STON85]. Alternately, one can follow the standard scheduler model [BERNS1] in which a module is callable by code in the access methods when a... direction for evolution. These could include when to cease investigating alternate plans, and the ability to specify one's own optimizer parameters.
Operational advances in ring current modeling using RAM-SCB
DOE Office of Scientific and Technical Information (OSTI.GOV)
Welling, Daniel T; Jordanova, Vania K; Zaharia, Sorin G
The Ring current Atmosphere interaction Model with Self-Consistently calculated 3D Magnetic field (RAM-SCB) combines a kinetic model of the ring current with a force-balanced model of the magnetospheric magnetic field to create an inner magnetospheric model that is magnetically self-consistent. RAM-SCB produces a wealth of outputs that are valuable to space weather applications. For example, the anisotropic particle distribution of the KeV-energy population calculated by the code is key for predicting surface charging on spacecraft. Furthermore, radiation belt codes stand to benefit substantially from RAM-SCB calculated magnetic field values and plasma wave growth rates - both important for determining the evolution of relativistic electron populations. RAM-SCB is undergoing development to bring these benefits to the space weather community. Data-model validation efforts are underway to assess the performance of the system. 'Virtual satellite' capability has been added to yield satellite-specific particle distribution and magnetic field output. The code's outer boundary is being expanded to 10 Earth radii to encompass previously neglected geosynchronous orbits and to allow the code to be driven completely by either empirical or first-principles based inputs. These advances are culminating in a new, real-time version of the code, rtRAM-SCB, that can monitor inner magnetosphere conditions on both a global and a spacecraft-specific level. This paper summarizes these new features as well as the benefits they provide to the space weather community.
The availability of web sites offering to sell opioid medications without prescriptions.
Forman, Robert F; Woody, George E; McLellan, Thomas; Lynch, Kevin G
2006-07-01
This study was designed to determine the availability of web sites offering to sell opioid medications without prescriptions. Forty-seven Internet searches were conducted with a variety of opioid medication terms, including "codeine," "no prescription Vicodin," and "OxyContin." Two independent raters examined the links generated in each search and resolved any coding disagreements. The resulting links were coded as "no prescription web sites" (NPWs) if they offered to sell opioid medications without prescriptions. In searches with terms such as "no prescription codeine" and "Vicodin," over 50% of the links obtained were coded as "NPWs." The proportion of links yielding NPWs was greater when the phrase "no prescription" was added to the opioid term. More than 300 opioid NPWs were identified and entered into a database. Three national drug-use monitoring studies have cited significant increases in prescription opioid use over the past 5 years, particularly among young people. The emergence of NPWs introduces a new vector for unregulated access to opioids. Research is needed to determine the effect of NPWs on prescription opioid use initiation, misuse, and dependence.
Fujisawa, Takatomo; Narikawa, Rei; Okamoto, Shinobu; Ehira, Shigeki; Yoshimura, Hidehisa; Suzuki, Iwane; Masuda, Tatsuru; Mochimaru, Mari; Takaichi, Shinichi; Awai, Koichiro; Sekine, Mitsuo; Horikawa, Hiroshi; Yashiro, Isao; Omata, Seiha; Takarada, Hiromi; Katano, Yoko; Kosugi, Hiroki; Tanikawa, Satoshi; Ohmori, Kazuko; Sato, Naoki; Ikeuchi, Masahiko; Fujita, Nobuyuki; Ohmori, Masayuki
2010-01-01
A filamentous non-N2-fixing cyanobacterium, Arthrospira (Spirulina) platensis, is an important organism for industrial applications and as a food supply. Almost the complete genome of A. platensis NIES-39 was determined in this study. The genome structure of A. platensis is estimated to be a single, circular chromosome of 6.8 Mb, based on optical mapping. Annotation of this 6.7 Mb sequence yielded 6630 protein-coding genes as well as two sets of rRNA genes and 40 tRNA genes. Of the protein-coding genes, 78% are similar to those of other organisms; the remaining 22% are currently unknown. A total of 612 kb of the genome comprises group II introns, insertion sequences and some repetitive elements. Group I introns are located in a protein-coding region. Abundant restriction-modification systems were identified. Unique features in the gene composition were noted, particularly a large number of genes for adenylate cyclases and haemolysin-like Ca2+-binding proteins, and in chemotaxis proteins. Filament-specific genes were highlighted by comparative genomic analysis. PMID:20203057
Real-time chirp-coded imaging with a programmable ultrasound biomicroscope.
Bosisio, Mattéo R; Hasquenoph, Jean-Michel; Sandrin, Laurent; Laugier, Pascal; Bridal, S Lori; Yon, Sylvain
2010-03-01
Ultrasound biomicroscopy (UBM) of mice can provide a testing ground for new imaging strategies. The UBM system presented in this paper facilitates the development of imaging and measurement methods with programmable design, arbitrary waveform coding, broad bandwidth (2-80 MHz), digital filtering, programmable processing, RF data acquisition, multithread/multicore real-time display, and rapid mechanical scanning (
Injecting Errors for Testing Built-In Test Software
NASA Technical Reports Server (NTRS)
Gender, Thomas K.; Chow, James
2010-01-01
Two algorithms have been conceived to enable automated, thorough testing of built-in test (BIT) software. The first algorithm applies to BIT routines that define pass/fail criteria based on values of data read from such hardware devices as memories, input ports, or registers. This algorithm simulates the effects of errors in a device under test by (1) intercepting data from the device and (2) performing AND operations between the data and a data mask specific to the device. This operation yields values not expected by the BIT routine. This algorithm entails very small, permanent instrumentation of the software under test (SUT) for performing the AND operations. The second algorithm applies to BIT programs that provide services to users' application programs via commands or callable interfaces, and requires a capability for test-driver software to read and write the memory used in execution of the SUT. This algorithm identifies all SUT code execution addresses where errors are to be injected, temporarily replaces the code at those addresses with small test code sequences that inject latent severe errors, and then determines whether, as desired, the SUT detects the errors and recovers.
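A hedged sketch of the first algorithm in ordinary Python (the flight software itself is not shown in the abstract): a small permanent hook intercepts device reads and ANDs them with a device-specific mask, so the BIT routine sees unexpected values. Device names, masks, and register values are hypothetical.

```python
# Error injection by AND-masking intercepted device reads (illustrative).
DEVICE_ERROR_MASKS = {"status_reg": 0x00FF, "ram_page0": 0x0000}
INJECT = True                                  # the small permanent test hook

def read_device(name):
    raw = {"status_reg": 0xA5A5, "ram_page0": 0x1234}[name]  # stand-in read
    if INJECT and name in DEVICE_ERROR_MASKS:
        raw &= DEVICE_ERROR_MASKS[name]        # simulate a faulty device
    return raw

def bit_check(name, expected):
    value = read_device(name)
    return "PASS" if value == expected else f"FAIL (read {value:#06x})"

print("status_reg:", bit_check("status_reg", 0xA5A5))  # should now FAIL
print("ram_page0 :", bit_check("ram_page0", 0x1234))   # should now FAIL
```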
Measurement and prediction of model-rotor flow fields
NASA Technical Reports Server (NTRS)
Owen, F. K.; Tauber, M. E.
1985-01-01
This paper shows that a laser velocimeter can be used to measure accurately the three-component velocities induced by a model rotor at transonic tip speeds. The measurements, which were made at Mach numbers from 0.85 to 0.95 and at zero advance ratio, yielded high-resolution, orthogonal velocity values. The measured velocities were used to check the ability of the ROT22 full-potential rotor code to predict accurately the transonic flow field in the crucial region around and beyond the tip of a high-speed rotor blade. The good agreement between the calculated and measured velocities established the code's ability to predict the off-blade flow field at transonic tip speeds. This supplements previous comparisons in which surface pressures were shown to be well predicted on two different tips at advance ratios to 0.45, especially at the critical 90 deg azimuthal blade position. These results demonstrate that the ROT22 code can be used with confidence to predict the important tip-region flow field, including the occurrence, strength, and location of shock waves causing high drag and noise.
Tools for Designing and Analyzing Structures
NASA Technical Reports Server (NTRS)
Luz, Paul L.
2005-01-01
Structural Design and Analysis Toolset is a collection of approximately 26 Microsoft Excel spreadsheet programs, each of which performs calculations within a different subdiscipline of structural design and analysis. These programs present input and output data in user-friendly, menu-driven formats. Although these programs cannot solve complex cases like those treated by larger finite element codes, they do yield quick solutions to numerous common problems more rapidly than the finite element codes, thereby making it possible to quickly perform multiple preliminary analyses - e.g., to establish approximate limits prior to detailed analyses by the larger finite element codes. These programs perform different types of calculations, as follows: (1) determination of geometric properties for a variety of standard structural components; (2) analysis of static, vibrational, and thermal-gradient loads and deflections in certain structures (mostly beams and, in the case of thermal gradients, mirrors); (3) kinetic energies of fans; (4) detailed analysis of stress and buckling in beams, plates, columns, and a variety of shell structures; and (5) temperature-dependent properties of materials, including figures of merit that characterize strength, stiffness, and deformation response to thermal gradients.
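As an example of the kind of closed-form check these spreadsheets automate, the sketch below computes tip deflection and root bending stress for a tip-loaded cantilever. It mirrors the spirit of the toolset rather than any specific spreadsheet, and the input numbers are arbitrary.

```python
# Quick preliminary-analysis check: tip-loaded cantilever beam.
def cantilever_tip(P, L, E, I, c):
    """P: tip load [N], L: length [m], E: modulus [Pa],
    I: second moment of area [m^4], c: distance to outer fiber [m]."""
    deflection = P * L**3 / (3 * E * I)        # delta = P L^3 / (3 E I)
    stress = P * L * c / I                     # sigma = M c / I at the root
    return deflection, stress

d, s = cantilever_tip(P=500.0, L=1.2, E=70e9, I=8.3e-8, c=0.02)
print(f"tip deflection {d*1000:.2f} mm, root stress {s/1e6:.1f} MPa")
```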
An Approach for Assessing Delamination Propagation Capabilities in Commercial Finite Element Codes
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2007-01-01
An approach for assessing the delamination propagation capabilities in commercial finite element codes is presented and demonstrated for one code. For this investigation, the Double Cantilever Beam (DCB) specimen and the Single Leg Bending (SLB) specimen were chosen for full three-dimensional finite element simulations. First, benchmark results were created for both specimens. Second, starting from an initially straight front, the delamination was allowed to propagate. Good agreement between the load-displacement relationship obtained from the propagation analysis results and the benchmark results could be achieved by selecting the appropriate input parameters. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Qualitatively, the delamination front computed for the DCB specimen did not take the shape of a curved front as expected. However, the analysis of the SLB specimen yielded a curved front as may be expected from the distribution of the energy release rate and the failure index across the width of the specimen. Overall, the results are encouraging but further assessment on a structural level is required.
Zucchelli, Silvia; Patrucco, Laura; Persichetti, Francesca; Gustincich, Stefano; Cotella, Diego
2016-01-01
Mammalian cells are an indispensable tool for the production of recombinant proteins in contexts where function depends on post-translational modifications. Among them, Chinese Hamster Ovary (CHO) cells are the primary factories for the production of therapeutic proteins, including monoclonal antibodies (MAbs). To improve expression and stability, several methodologies have been adopted, including methods based on media formulation, selective pressure and cell- or vector engineering. This review presents current approaches aimed at improving mammalian cell factories that are based on the enhancement of translation. Among well-established techniques (codon optimization and improvement of mRNA secondary structure), we describe SINEUPs, a family of antisense long non-coding RNAs that are able to increase translation of partially overlapping protein-coding mRNAs. By exploiting their modular structure, SINEUP molecules can be designed to target virtually any mRNA of interest, and thus to increase the production of secreted proteins. Thus, synthetic SINEUPs represent a new versatile tool to improve the production of secreted proteins in biomanufacturing processes.
Plasma viscosity with mass transport in spherical inertial confinement fusion implosion simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vold, E. L.; Molvig, K.; Joglekar, A. S.
2015-11-15
The effects of viscosity and small-scale atomic-level mixing on plasmas in inertial confinement fusion (ICF) currently represent challenges in ICF research. Many current ICF hydrodynamic codes ignore the effects of viscosity, though recent research indicates viscosity and mixing by classical transport processes may have a substantial impact on implosion dynamics. We have implemented a Lagrangian hydrodynamic code in one-dimensional spherical geometry with plasma viscosity and mass transport, including a three-temperature model for ions, electrons, and radiation treated in a gray radiation diffusion approximation. The code is used to study ICF implosion differences with and without plasma viscosity and to determine the impacts of viscosity on temperature histories and neutron yield. It was found that plasma viscosity has substantial impacts on ICF shock dynamics, characterized by shock burn timing, maximum burn temperatures, convergence ratio, and the time history of neutron production rates. Plasma viscosity reduces the need for artificial viscosity to maintain numerical stability in the Lagrangian formulation and also modifies the flux-limiting needed for electron thermal conduction.
NASA Technical Reports Server (NTRS)
Boldman, D. R.; Iek, C.; Hwang, D. P.; Jeracki, R. J.; Larkin, M.; Sorin, G.
1991-01-01
An axisymmetric panel code was used to evaluate a series of ducted propeller inlets. The inlets were tested in the Lewis 9- by 15-Foot Low Speed Wind Tunnel. Three basic inlets having ratios of shroud length to propeller diameter of 0.2, 0.4, and 0.5 were tested with the Pratt and Whitney ducted prop/fan simulator. A fourth, hybrid inlet consisting of the shroud from the shortest basic inlet coupled with the spinner from the largest basic inlet was also tested; this latter configuration represented the shortest overall inlet. The simulator duct diameter at the propeller face was 17.25 inches. The short and long spinners provided hub-to-tip ratios of 0.44 at the propeller face. The four inlets were tested at a nominal free stream Mach number of 0.2 and at angles of attack from 0 to 35 degrees. The panel code method incorporated a simple two-part separation model which yielded conservative estimates of inlet separation.
Wilke, Russell A; Berg, Richard L; Peissig, Peggy; Kitchner, Terrie; Sijercic, Bozana; McCarty, Catherine A; McCarty, Daniel J
2007-03-01
Diabetes mellitus is a rapidly increasing and costly public health problem. Large studies are needed to understand the complex gene-environment interactions that lead to diabetes and its complications. The Marshfield Clinic Personalized Medicine Research Project (PMRP) represents one of the largest population-based DNA biobanks in the United States. As part of an effort to begin phenotyping common diseases within the PMRP, we report on the construction of a diabetes case-finding algorithm using electronic medical record data from adult subjects aged ≥50 years living in one of the target PMRP ZIP codes. Based upon diabetic diagnostic codes alone, we observed a false positive case rate ranging from 3.0% (in subjects with the highest glycosylated hemoglobin values) to 44.4% (in subjects with the lowest glycosylated hemoglobin values). We therefore developed an improved case-finding algorithm that utilizes diabetic diagnostic codes in combination with clinical laboratory data and medication history. This algorithm yielded an estimated prevalence of 24.2% for diabetes mellitus in adult subjects aged ≥50 years.
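A sketch of a combined case-finding rule of the kind described, with diagnostic codes corroborated by laboratory values and medication history. The field names, drug list, and HbA1c threshold are illustrative assumptions, not the published algorithm.

```python
# Diagnostic codes alone over-call diabetes; require corroboration from
# labs or medications. ICD-9 250.xx denotes diabetes mellitus.
def is_diabetes_case(patient):
    has_dx = any(code.startswith("250") for code in patient["icd9_codes"])
    high_a1c = any(a1c >= 6.5 for a1c in patient["hba1c_results"])
    on_meds = bool(set(patient["medications"]) &
                   {"metformin", "insulin", "glipizide"})
    # A diagnostic code plus at least one corroborating source.
    return has_dx and (high_a1c or on_meds)

pt = {"icd9_codes": ["250.00", "401.9"],
      "hba1c_results": [5.6, 6.9],
      "medications": ["metformin"]}
print(is_diabetes_case(pt))   # True: code corroborated by lab and meds
```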
Practical somewhat-secure quantum somewhat-homomorphic encryption with coherent states
NASA Astrophysics Data System (ADS)
Tan, Si-Hui; Ouyang, Yingkai; Rohde, Peter P.
2018-04-01
We present a scheme for implementing homomorphic encryption on coherent states encoded using phase-shift keys. The encryption operations require only rotations in phase space, which commute with computations in the code space performed via passive linear optics, and with generalized nonlinear phase operations that are polynomials of the photon-number operator in the code space. This encoding scheme can thus be applied to any computation with coherent-state inputs, and the computation proceeds via a combination of passive linear optics and generalized nonlinear phase operations. An example of such a computation is matrix multiplication, whereby a vector representing coherent-state amplitudes is multiplied by a matrix representing a linear optics network, yielding a new vector of coherent-state amplitudes. By finding an orthogonal partitioning of the support of our encoded states, we quantify the security of our scheme via the indistinguishability of the encrypted code words. While we focus on coherent-state encodings, we expect that this phase-key encoding technique could apply to any continuous-variable computation scheme where the phase-shift operator commutes with the computation.
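The key algebraic fact behind the scheme, that the phase-key rotation commutes with a passive linear optics network acting on coherent-state amplitudes, can be checked numerically in a few lines. The 4-mode network below is a random unitary, not a specific computation from the paper.

```python
# Check that a phase-space rotation (the encryption key) commutes with a
# passive linear optics network: U @ (e^{i*theta} alpha) == e^{i*theta} (U @ alpha).
import numpy as np

rng = np.random.default_rng(1)
alpha = rng.normal(size=4) + 1j * rng.normal(size=4)  # coherent amplitudes

# Random unitary (a lossless linear optics network) via QR decomposition.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(A)

theta = 0.73                                   # secret phase key
encrypt = lambda v: np.exp(1j * theta) * v

print(np.allclose(U @ encrypt(alpha), encrypt(U @ alpha)))   # True
```

The commutation is exact because the key acts as a scalar phase on the amplitude vector, which is precisely why the server can compute on the encrypted state without knowing theta.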
Updated User's Guide for Sammy: Multilevel R-Matrix Fits to Neutron Data Using Bayes' Equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larson, Nancy M
2008-10-01
In 1980 the multilevel multichannel R-matrix code SAMMY was released for use in the analysis of neutron-induced cross section data at the Oak Ridge Electron Linear Accelerator. Since that time, SAMMY has evolved to the point where it is now in use around the world for the analysis of many different types of data. SAMMY is not limited to incident neutrons but can also be used for incident protons, alpha particles, or other charged particles; likewise, Coulomb exit channels can be included. Corrections for a wide variety of experimental conditions are available in the code: Doppler and resolution broadening, multiple-scattering corrections for capture or reaction yields, and normalizations and backgrounds, to name but a few. The fitting procedure is Bayes' method, and data and parameter covariance matrices are properly treated within the code. Pre- and post-processing capabilities are also available, including (but not limited to) connections with the Evaluated Nuclear Data Files. Though originally designed for use in the resolved resonance region, SAMMY also includes a treatment for data analysis in the unresolved resonance region.
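The Bayes equations referred to here are the standard generalized-least-squares update of parameters and their covariance. A toy numerical sketch follows, with invented parameters, sensitivities, and data standing in for real resonance-analysis quantities.

```python
# Bayes / generalized-least-squares update of the form used in R-matrix fits:
#   P' = P + M G^T (V + G M G^T)^{-1} (D - T)
#   M' = M - M G^T (V + G M G^T)^{-1} G M
# P: prior parameters, M: their covariance, D: data with covariance V,
# T: model prediction, G: parameter sensitivities dT/dP. Toy numbers only.
import numpy as np

P = np.array([1.0, 0.5])                       # prior resonance parameters
M = np.diag([0.04, 0.01])                      # prior parameter covariance
D = np.array([1.9, 2.6, 3.4])                  # measured cross sections
V = 0.01 * np.eye(3)                           # data covariance
G = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 0.5]])  # sensitivities
T = G @ P                                      # linear model prediction

S = V + G @ M @ G.T                            # innovation covariance
K = M @ G.T @ np.linalg.inv(S)
P_new = P + K @ (D - T)
M_new = M - K @ G @ M
print(P_new, np.diag(M_new))
```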
NASA Astrophysics Data System (ADS)
Oyarzabal, Eider
Exit-angle resolved Mo atom sputtering yields under Xe ion bombardment, and carbon atom and cluster (C2 and C3) sputtering yields under Xe, Kr, Ar, Ne and He ion bombardment from a plasma, are measured for low incident energies (75-225 eV). An energy-resolved quadrupole mass spectrometer (QMS) is used to detect the fraction of un-scattered sputtered neutrals that become ionized in the plasma; the angular distribution is obtained by changing the angle between the target and the QMS aperture. A one-dimensional Monte Carlo code is used to simulate the interaction of the plasma and the sputtered particles between the sample and the QMS. The elastic scattering cross sections of C, C2 and C3 with the different bombarding-gas neutrals are obtained by varying the distance between the sample and the QMS and performing a best fit of the simulation results to the experimental results. Because the results obtained with the QMS are relative, the Mo atom sputtering results are normalized to existing data in the literature, and the total sputtering yield for carbon (C + C2 + C3) for each bombarding gas is obtained from weight loss measurements. The absolute sputtering yields for C, C2 and C3 are then calculated from the integration of the measured angular distribution, taking into account the scattering and ionization of the sputtered particles between the sample and the QMS. The angular sputtering distribution for Mo has a maximum at θ = 60°, and this maximum becomes less pronounced as the incident ion energy increases. The results of Monte Carlo TRIDYN code simulations of the angular distribution of Mo atoms sputtered by Xe bombardment are in agreement with the experiments. For carbon sputtering, under-cosine angular distributions of the sputtered atoms and clusters are also observed for all the studied bombarding gases. The C, C2 and C3 sputtering yield data show a clear decrease of the atom-to-cluster (C/C2 and C/C3) sputtering ratio as the incident ion mass increases, changing from preferential erosion of carbon atoms for the lower incident ion masses (He, Ne and Ar) to preferential erosion of clusters for the higher incident ion masses (Kr and Xe).
Alotiby, M; Greguric, I; Kibédi, T; Lee, B Q; Roberts, M; Stuchbery, A E; Tee, Pi; Tornyi, T; Vos, M
2018-03-21
Auger electrons emitted after nuclear decay have potential application in targeted cancer therapy. For this purpose it is important to know the Auger electron yield per nuclear decay. In this work we describe a measurement of the ratio of the number of conversion electrons (emitted as part of the nuclear decay process) to the number of Auger electrons (emitted as part of the atomic relaxation process after the nuclear decay) for the case of 125I. Results are compared with Monte-Carlo type simulations of the relaxation cascade using the BrIccEmis code. Our results indicate that for 125I the calculations based on rates from the Evaluated Atomic Data Library underestimate the K Auger yields by 20%.
Alpha-induced reactions on selenium between 11 and 15 MeV
NASA Astrophysics Data System (ADS)
Fiebiger, Stefan; Slavkovská, Zuzana; Giesen, Ulrich; Göbel, Kathrin; Heftrich, Tanja; Heiske, Annett; Reifarth, René; Schmidt, Stefan; Sonnabend, Kerstin; Thomas, Benedikt; Weigand, Mario
2017-07-01
The production of 77,79,85,85mKr and 77Br via the reaction Se(α,x) was investigated between Eα = 11 and 15 MeV using the activation technique. The irradiation of natural selenium targets on aluminum backings was conducted at the Physikalisch-Technische Bundesanstalt (PTB) in Braunschweig, Germany. The spectroscopic analysis of the reaction products was performed using a high-purity germanium detector located at PTB and a low-energy photon spectrometer at the Goethe University Frankfurt, Germany. Thick-target yields were determined, and the corresponding energy-dependent production cross sections of 77,79,85,85mKr and 77Br were calculated from them. Good agreement between the experimental data and theoretical predictions using the TALYS-1.6 code was found.
Denburg, Michelle R.; Haynes, Kevin; Shults, Justine; Lewis, James D.; Leonard, Mary B.
2011-01-01
Purpose Chronic kidney disease (CKD) is a prevalent and important outcome and covariate in pharmacoepidemiology. The Health Improvement Network (THIN) in the U.K. represents a unique resource for population-based studies of CKD. We compiled a valid list of Read codes to identify subjects with moderate to advanced CKD. Methods A cross-sectional validation study was performed to identify codes that best define CKD stages 3–5. All subjects with at least one non-zero measure of serum creatinine after 1-Jan-2002 were included. Estimated glomerular filtration rate (eGFR) was calculated according to the Schwartz formula for subjects <18 years and the Modification of Diet in Renal Disease formula for subjects ≥18 years of age. CKD was defined as an eGFR <60 ml/min/1.73m2 on at least two occasions, more than 90 days apart. Results The laboratory definition identified 230,426 subjects with CKD, for a period prevalence in 2008 of 4.56% (95% CI: 4.54, 4.58). A list of 45 Read codes was compiled which yielded a positive predictive value of 88.9% (95% CI: 88.7, 89.1), sensitivity of 48.8%, negative predictive value of 86.5%, and specificity of 98.2%. Of the 11.1% of subjects with a code who did not meet the laboratory definition, 83.6% had at least one eGFR <60. The most commonly used code was for CKD stage 3. Conclusions The proposed list of codes can be used to accurately identify CKD when serum creatinine data are limited. The most sensitive approach for the detection of CKD is to use this list to supplement creatinine measures. PMID:22020900
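A sketch of the laboratory definition used as the validation standard, restricted to the adult (MDRD) branch; the patient data are invented, and the code simply encodes the two-measurements-more-than-90-days-apart rule stated above.

```python
# Laboratory CKD definition: MDRD eGFR < 60 mL/min/1.73 m^2 on at least two
# occasions more than 90 days apart (4-variable MDRD equation, adults only).
from datetime import date

def egfr_mdrd(scr_mg_dl, age, female, black):
    egfr = 186.0 * scr_mg_dl**-1.154 * age**-0.203
    if female: egfr *= 0.742
    if black:  egfr *= 1.212
    return egfr

def has_ckd_3_to_5(creatinine_series, age, female, black):
    low = [d for d, scr in creatinine_series
           if egfr_mdrd(scr, age, female, black) < 60]
    return any((b - a).days > 90 for a in low for b in low)

series = [(date(2005, 1, 10), 1.6), (date(2005, 8, 2), 1.7)]
print(has_ckd_3_to_5(series, age=67, female=True, black=False))  # True
```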
Fendler, Wojciech; Hogendorf, Anna; Szadkowska, Agnieszka; Młynarski, Wojciech
2011-01-01
Self-monitoring of blood glucose (SMBG) is one of the cornerstones of diabetes management. Our aims were to evaluate the potential for miscoding of a personal glucometer, to define a target population for a non-coding glucometer among pediatric patients with diabetes, and to assess the accuracy of the Contour TS non-coding system. The potential for miscoding during self-monitoring of blood glucose was evaluated by means of an anonymous questionnaire, with worst- and best-case scenarios evaluated depending on the response patterns. Testing of the Contour TS system was performed according to guidelines set by the national committee for clinical laboratory standards. The estimated frequency of individuals prone to miscoding ranged from 68.21% (95%CI 60.70-75.72%) to 7.95% (95%CI 3.86-12.31%) for the worst- and best-case scenarios, respectively. Factors associated with an increased likelihood of miscoding were: a smaller number of tests per day, a greater number of individuals involved in testing, and self-testing by the patient with diabetes. The Contour TS device showed intra- and inter-assay accuracy of 95%, a linear association with laboratory measurements (R² = 0.99, p < 0.0001) and a consistent but small bias of -1.12% (95% confidence interval -3.27 to 1.02%). Clarke error grid analysis showed 4% of values within the benign error zone (B), with the other measurements yielding an acceptably accurate result (zone A). The Contour TS system showed sufficient accuracy to be safely used in the monitoring of pediatric diabetic patients. Patients from families with a high throughput of test strips or multiple individuals involved in SMBG using the same meter are candidates for clinical use of such devices due to an increased risk of calibration errors.
Dey, Avishek; Samanta, Milan Kumar; Gayen, Srimonta; Sen, Soumitra K.; Maiti, Mrinal K.
2016-01-01
Drought is one of the major limiting factors for the productivity of crops including rice (Oryza sativa L.). Understanding the role of allelic variations of key regulatory genes involved in stress tolerance is essential for developing an effective strategy to combat drought. The bZIP transcription factors play a crucial role in abiotic-stress adaptation in plants via the abscisic acid (ABA) signaling pathway. The present study aimed to search for allelic polymorphism in the OsbZIP23 gene across selected drought-tolerant and drought-sensitive rice genotypes, and to characterize the new allele through overexpression (OE) and gene silencing (RNAi). Analyses of the coding DNA sequence (CDS) of the cloned OsbZIP23 gene revealed single-nucleotide polymorphisms at four positions and a 15-nucleotide deletion at one position. The single-copy OsbZIP23 gene is expressed at a relatively higher level in leaf tissues of drought-tolerant genotypes, and its abundance is greater at the reproductive stage. Cloning and sequence analyses of the OsbZIP23 promoter from drought-tolerant O. rufipogon and the drought-sensitive IR20 cultivar showed variation in the number of stress-responsive cis-elements and a 35-nucleotide deletion in the 5'-UTR of IR20. Analysis of GFP reporter gene function revealed that the promoter activity of O. rufipogon is higher than that of IR20. Overexpression of either of the two polymorphic forms (1083 bp and 1068 bp CDS) of OsbZIP23 significantly improved drought tolerance and yield-related traits by retaining higher contents of cellular water, soluble sugar and proline, and decreased membrane lipid peroxidation in comparison to RNAi lines and non-transgenic plants. The OE lines showed higher expression of the target genes OsRab16B, OsRab21 and OsLEA3-1 and increased ABA sensitivity, indicating that OsbZIP23 is a positive transcriptional regulator of the ABA-signaling pathway. Taken together, the present study concludes that enhanced gene expression, rather than natural polymorphism in the coding sequence of OsbZIP23, accounts for the improved drought tolerance and yield performance in these rice genotypes. PMID:26959651
NASA Technical Reports Server (NTRS)
Howlett, James T.; Bland, Samuel R.
1987-01-01
A method is described for calculating unsteady transonic flow with viscous interaction by coupling a steady integral boundary-layer code with an unsteady, transonic, inviscid small-disturbance computer code in a quasi-steady fashion. Explicit coupling of the equations, together with viscous-inviscid iterations at each time step, yields converged solutions with computer times about double those required to obtain inviscid solutions. The accuracy and range of applicability of the method are investigated by applying it to four AGARD standard airfoils. The first-harmonic components of both the unsteady pressure distributions and the lift and moment coefficients have been calculated. Comparisons with inviscid calculations and experimental data are presented. The results demonstrate that accurate solutions for transonic flows with viscous effects can be obtained for flows involving moderate-strength shock waves.
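The coupling strategy, as described, alternates an inviscid solve and a steady boundary-layer solve inside each time step until the displacement thickness stops changing. The Python sketch below mimics that inner iteration with deliberately toy one-parameter stand-ins for the two solvers, only to make the control flow concrete; none of the functions correspond to the actual codes.

# Toy stand-ins: a one-parameter "inviscid" pressure response and a
# "boundary-layer" closure, chosen only to make the loop runnable.
def inviscid_step(delta_star):
    return 1.0 - 0.3 * delta_star          # toy pressure coefficient

def boundary_layer_solve(pressure):
    return 0.1 + 0.2 * pressure            # toy displacement thickness

def coupled_time_step(delta_star, tol=1e-8, max_inner=50):
    """One quasi-steady time step: iterate the inviscid and boundary-layer
    solves until the displacement thickness is converged."""
    for i in range(max_inner):
        new_delta = boundary_layer_solve(inviscid_step(delta_star))
        if abs(new_delta - delta_star) < tol:
            return new_delta, i + 1
        delta_star = new_delta
    return delta_star, max_inner

delta, iters = coupled_time_step(0.0)
print(f"converged displacement thickness {delta:.6f} after {iters} iterations")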
Wong, Ngo Yin; Xing, Hang; Tan, Li Huey; Lu, Yi
2013-01-01
While much work has been devoted to the nanoscale assembly of functional materials, the selective and reversible assembly of components at chosen sites within a nanoscale pattern has received much less attention. Exerting such reversible control over the assembly process will make it possible to fine-tune the functional properties of the assembly and to realize more complex designs. Herein, by taking advantage of the different binding affinities of biotin and desthiobiotin toward streptavidin, we demonstrate selective and reversible decoration of DNA origami tiles with streptavidin, including revealing an encrypted Morse code "NANO" and reversibly exchanging the uppercase letter "I" with the lowercase "i". The yields of the conjugations are high (>90%) and the process is reversible. We expect this versatile conjugation technique to be widely applicable with different nanomaterials and templates. PMID:23373425
Carbon Back Sputter Modeling for Hall Thruster Testing
NASA Technical Reports Server (NTRS)
Gilland, James H.; Williams, George J.; Burt, Jonathan M.; Yim, John T.
2016-01-01
In support of wear testing for the Hall Effect Rocket with Magnetic Shielding (HERMeS) program, the back sputter from a Hall effect thruster plume has been modeled for the NASA Glenn Research Center's Vacuum Facility 5. The predicted wear at a near-worst-case condition of 600 V, 12.5 kW was found to be on the order of 3-4 microns/khour in a fully carbon-lined chamber. A more detailed Monte Carlo code was also modified to estimate back sputter for a detailed facility and pumping configuration. This code demonstrated similar back sputter rate distributions, but does not yet accurately model the magnitudes. The modeling has been benchmarked against recent HERMeS wear testing, using multiple microbalance measurements. These recent measurements have yielded values on the order of 1.5-2 microns/khour.
Design with high strength steel: A case of failure and its implications
NASA Astrophysics Data System (ADS)
Rahka, Klaus
1992-10-01
A recent proof test failure of a high strength steel pressure vessel is scrutinized. Apparent deficiencies in the procedures used to account for elasto-plastic local strain are indicated for the applicable routine (code) strength calculations. Tentative guidance is given for the use of material tensile fracture strain, and its strain-state (plane strain) correction, in fracture margin estimation. A hypothesis is advanced that fracture initiates at a notch root when the calculated local strain reaches a gauge-length-weighted tensile ductility. The actual implications of the failure case and the suggested remedy are discussed in the light of Sections III and VIII of the ASME Boiler and Pressure Vessel Code. Further needs for research and development are delineated. Possible yield- and ductility-related design limits and their use as material quality indices are discussed.
NASA Astrophysics Data System (ADS)
Van, Vinh; Bruckhuisen, Jonas; Stahl, Wolfgang; Ilyushin, Vadim; Nguyen, Ha Vinh Lam
2018-01-01
The microwave spectrum of 2,5-dimethylfuran was recorded using two pulsed molecular jet Fourier transform microwave spectrometers which cover the frequency range from 2 to 40 GHz. The internal rotations of two equivalent methyl tops with a barrier height of approximately 439.15 cm⁻¹ introduce torsional splittings of all rotational transitions in the spectrum. For the spectral analysis, two different computer programs were applied and compared: the PAM-C2v-2tops code, based on the principal axis method, which treats several torsional states simultaneously, and the XIAM code, based on the combined axis method, yielding accurate molecular parameters. The experimental work was supplemented by quantum chemical calculations. Two-dimensional potential energy surfaces depending on the torsional angles of both methyl groups were calculated and parametrized.
Structural response of existing spatial truss roof construction based on Cosserat rod theory
NASA Astrophysics Data System (ADS)
Miśkiewicz, Mikołaj
2018-04-01
This paper presents the application of Cosserat rod theory and a newly developed associated finite element code as tools that support expert engineering design practice. The mechanical principles of 3D spatially curved rods, the laws of dynamics (statics), and the principle of virtual work are discussed. The corresponding FEM approach, with interpolation and accumulation techniques for the state variables, is shown to enable the formulation of C0 Lagrangian rod elements with 6 degrees of freedom per node. Two test examples demonstrate the correctness and suitability of the proposed formulation. Next, the developed FEM code is applied to assess the structural response of the spatial truss roof of the "Olivia" Sports Arena in Gdansk, Poland. The numerical results are compared with load test results. It is shown that the proposed FEM approach yields correct results.
NASA Astrophysics Data System (ADS)
Stoltz, Peter; Veitzer, Seth
2008-04-01
We present a new Web 2.0-based interface to physics routines for High Energy Density Physics applications. These routines include models for ion stopping power, sputtering, secondary electron yields and energies, impact ionization cross sections, and atomic radiated power. The Web 2.0 interface allows users to easily explore the results of the models before using the routines within other codes or to analyze experimental results. We discuss how we used various Web 2.0 tools, including Python 2.5, Django, and the Yahoo User Interface library. Finally, we demonstrate the interface by showing as an example the stopping power algorithms researchers are currently using within the Hydra code to analyze warm, dense matter experiments underway at the Neutralized Drift Compression Experiment facility at Lawrence Berkeley National Laboratory.
Female Sex Offenders' Relationship Experiences
Lawson, Louanne
2010-01-01
Interventions for child sexual abusers should take into account their perspectives on the context of their offenses, but no descriptions of everyday life from the offender's point of view have been published. This study therefore explored female offenders' views of their strengths and challenges. Documented risk assessments of 20 female offenders were analyzed using inductive content analysis (Cavanagh, 1997; Priest, Roberts & Woods, 2002; Woods, Priest & Roberts, 2002). The Good Lives Model provided the initial coding framework and Atlas/ti software (Muhr, 1997) was used for simultaneous data collection and analysis. The content analysis yielded 999 coding decisions organized in three themes. The global theme was relationship experiences. Offenders described the quality of their relationship experiences, including their personal perspectives, intimate relationships and social lives. These descriptions have implications for treatment planning and future research with women who have molested children. PMID:18624098
Simulation of ion-temperature-gradient turbulence in tokamaks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cohen, B I; Dimits, A M; Kim, C
Results are presented from nonlinear gyrokinetic simulations of toroidal ion temperature gradient (ITG) turbulence and transport. The gyrokinetic simulations are found to yield values of the thermal diffusivity significantly lower than gyrofluid or IFS-PPPL-model predictions. A new phenomenon of nonlinear effective critical gradients larger than the linear instability threshold gradients is observed, and is associated with undamped flux-surface-averaged shear flows. The nonlinear gyrokinetic codes have passed extensive validity tests which include comparison against independent linear calculations, a series of nonlinear convergence tests, and a comparison between two independent nonlinear gyrokinetic codes. Our most realistic simulations to date have actual reconstructed equilibria from experiments and a model for dilution by impurity and beam ions. These simulations highlight the need for still more physics to be included in the simulations.
NASA Technical Reports Server (NTRS)
Ballarini, F.; Biaggi, M.; De Biaggi, L.; Ferrari, A.; Ottolenghi, A.; Panzarasa, A.; Paretzke, H. G.; Pelliccioni, M.; Sala, P.; Scannicchio, D.; Zankl, M.
2004-01-01
Distributions of absorbed dose and DNA clustered damage yields in various organs and tissues following the October 1989 solar particle event (SPE) were calculated by coupling the FLUKA Monte Carlo transport code with two anthropomorphic phantoms (a mathematical model and a voxel model), with the main aim of quantifying the role of the shielding features in modulating organ doses. The phantoms, which were assumed to be in deep space, were inserted into a shielding box of variable thickness and material and were irradiated with the proton spectra of the October 1989 event. Average numbers of DNA lesions per cell in different organs were calculated by adopting a technique already tested in previous works, consisting of integrating into "condensed-history" Monte Carlo transport codes (such as FLUKA) yields of radiobiological damage, either calculated with "event-by-event" track structure simulations, or taken from experimental works available in the literature. More specifically, the yields of "Complex Lesions" (or "CL", defined and calculated as a clustered DNA damage in a previous work) per unit dose and DNA mass (CL Gy⁻¹ Da⁻¹) due to the various beam components, including those derived from nuclear interactions with the shielding and the human body, were integrated in FLUKA. This provided spatial distributions of CL/cell yields in different organs, as well as distributions of absorbed doses. The contributions of primary protons and secondary hadrons were calculated separately, and the simulations were repeated for values of Al shielding thickness ranging between 1 and 20 g/cm². Slight differences were found between the two phantom types. Skin and eye lenses were found to receive larger doses with respect to internal organs; however, shielding was more effective for skin and lenses. Secondary particles arising from nuclear interactions were found to have a minor role, although their relative contribution was found to be larger for the Complex Lesions than for the absorbed dose, due to their higher LET and thus higher biological effectiveness. © 2004 COSPAR. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ballarini, F.; Biaggi, M.; De Biaggi, L.; Ferrari, A.; Ottolenghi, A.; Panzarasa, A.; Paretzke, H. G.; Pelliccioni, M.; Sala, P.; Scannicchio, D.; Zankl, M.
2004-01-01
Distributions of absorbed dose and DNA clustered damage yields in various organs and tissues following the October 1989 solar particle event (SPE) were calculated by coupling the FLUKA Monte Carlo transport code with two anthropomorphic phantoms (a mathematical model and a voxel model), with the main aim of quantifying the role of the shielding features in modulating organ doses. The phantoms, which were assumed to be in deep space, were inserted into a shielding box of variable thickness and material and were irradiated with the proton spectra of the October 1989 event. Average numbers of DNA lesions per cell in different organs were calculated by adopting a technique already tested in previous works, consisting of integrating into "condensed-history" Monte Carlo transport codes (such as FLUKA) yields of radiobiological damage, either calculated with "event-by-event" track structure simulations, or taken from experimental works available in the literature. More specifically, the yields of "Complex Lesions" (or "CL", defined and calculated as a clustered DNA damage in a previous work) per unit dose and DNA mass (CL Gy⁻¹ Da⁻¹) due to the various beam components, including those derived from nuclear interactions with the shielding and the human body, were integrated in FLUKA. This provided spatial distributions of CL/cell yields in different organs, as well as distributions of absorbed doses. The contributions of primary protons and secondary hadrons were calculated separately, and the simulations were repeated for values of Al shielding thickness ranging between 1 and 20 g/cm². Slight differences were found between the two phantom types. Skin and eye lenses were found to receive larger doses with respect to internal organs; however, shielding was more effective for skin and lenses. Secondary particles arising from nuclear interactions were found to have a minor role, although their relative contribution was found to be larger for the Complex Lesions than for the absorbed dose, due to their higher LET and thus higher biological effectiveness.
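Because the integrated damage yields are expressed per unit dose and per unit DNA mass, converting them into lesions per cell only requires the DNA mass of a cell. A minimal Python sketch of that conversion follows, assuming roughly 6 pg of DNA for a diploid human cell; the CL yield and dose values are placeholders, not the paper's results.

# Convert a clustered-lesion yield in CL Gy^-1 Da^-1 into lesions per cell.
DALTON_G = 1.66054e-24              # grams per dalton
dna_mass_da = 6.0e-12 / DALTON_G    # ~6 pg of DNA per cell, expressed in Da

cl_per_gy_per_da = 2.0e-12          # hypothetical CL yield (CL Gy^-1 Da^-1)
dose_gy = 0.5                       # hypothetical organ dose, one shielding case

cl_per_cell = cl_per_gy_per_da * dose_gy * dna_mass_da
print(f"DNA mass per cell ~ {dna_mass_da:.2e} Da")      # ~3.6e+12 Da
print(f"complex lesions per cell ~ {cl_per_cell:.2f}")  # ~1.8 for these inputs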
Growth, Yield and Fruit Quality of Grapevines under Organic and Biodynamic Management
Döring, Johanna; Frisch, Matthias; Tittmann, Susanne; Stoll, Manfred; Kauer, Randolf
2015-01-01
The main objective of this study was to determine growth, yield and fruit quality of grapevines under organic and biodynamic management in relation to integrated viticultural practices. Furthermore, the mechanisms for the observed changes in growth, yield and fruit quality were investigated by determining nutrient status, physiological performance of the plants and disease incidence on bunches in three consecutive growing seasons. A field trial (Vitis vinifera L. cv. Riesling) was set up at Hochschule Geisenheim University, Germany. The integrated treatment was managed according to the code of good practice. Organic and biodynamic plots were managed according to Regulation (EC) No 834/2007 and Regulation (EC) No 889/2008 and according to ECOVIN- and Demeter-Standards, respectively. The growth and yield of the grapevines differed strongly among the different management systems, whereas fruit quality was not affected by the management system. The organic and the biodynamic treatments showed significantly lower growth and yield in comparison to the integrated treatment. The physiological performance was significantly lower in the organic and the biodynamic systems, which may account for differences in growth and cluster weight and might therefore induce lower yields of the respective treatments. Soil management and fertilization strategy could be responsible factors for these changes. Yields of the organic and the biodynamic treatments partially decreased due to higher disease incidence of downy mildew. The organic and the biodynamic plant protection strategies that exclude the use of synthetic fungicides are likely to induce higher disease incidence and might partially account for differences in the nutrient status of vines under organic and biodynamic management. Use of the biodynamic preparations had little influence on vine growth and yield. Due to the investigation of important parameters that induce changes especially in growth and yield of grapevines under organic and biodynamic management the study can potentially provide guidance for defining more effective farming systems. PMID:26447762
Optimization of Aircraft Seat Cushion Fire Blocking Layers.
1983-03-01
function of cost and weight, and the costs of labor involved in assembling a composite seat cushion. The same classes of high char yield polymers that are... [The remainder of this record is OCR residue of a seat layer design cost-summary report: for each seat design number, it tabulated layer name, code number, manufacturer, cost factors, and labor, with costs and delta costs given in thousands of dollars.]
Effects of Loading and Doping on Iron-Based CO2 Hydrogenation Catalysts
2009-08-24
dopant had on the overall catalyst's activity and product distribution. 24-08-2009 Memorandum Report Naval Research Laboratory, Code 6183 4555...approach in producing a greater yield of hydrocarbon (HC) products above methane. The use of traditional Fischer-Tropsch synthesis (FTS) cobalt ...previous work done by our group [14] it is apparent that direct hydrogenation of CO2 over a general cobalt-based FTS catalyst (namely Co-Pt/Al2O3
Gollin, Gerald; Moores, Donald
2006-06-01
Some pediatric surgeons rarely document nonoperative services, believing that the reimbursement provided for such care is negligible. We evaluated the impact of comprehensive documentation and billing for nonoperative, pediatric surgical care. All bills submitted for inpatient, nonoperative care for 1 year were reviewed. Total receipts for documented admissions, consultations, critical care, and daily care were determined. The Evaluation and Management code billed for each service was recorded, and the total and average payments attributable to each Evaluation and Management code were calculated. Fifty-six percent of services were covered by Medicaid and 26% by a commercial insurer. There were 607 billed admission history and physical exams for which reimbursement totaled 43,493 dollars. Critical care services were provided to 49 patients and yielded 8964 dollars in payments. Six hundred thirty-nine inpatient consultations were performed with a reimbursement of 42,830 dollars. Daily care services were billed 1044 times and produced 71,579 dollars in payments. Overall reimbursement for documented, nonoperative services was 166,866 dollars. This represented 16.2% of total, noncontracted income for the practice. Despite a payer mix heavily weighted toward Medicaid, comprehensive documentation and billing for nonoperative services increased total, noncontracted reimbursement by almost 20% over what it would have been had only operative services been billed. The yield from properly documented, nonoperative care can be substantial.
What Scientific Applications can Benefit from Hardware Transactional Memory?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schindewolf, M; Bihari, B; Gyllenhaal, J
2012-06-04
Achieving efficient and correct synchronization of multiple threads is a difficult and error-prone task at small scale and, as we march towards extreme scale computing, will be even more challenging when the resulting application is supposed to utilize millions of cores efficiently. Transactional Memory (TM) is a promising technique to ease the burden on the programmer, but has only recently become available on commercial hardware in the new Blue Gene/Q system, and hence the real benefit for realistic applications has not yet been studied. This paper presents the first performance results of TM embedded into OpenMP on a prototype system of BG/Q and characterizes code properties that will likely lead to benefits when augmented with TM primitives. We first study the influence of thread count, environment variables and memory layout on TM performance and identify code properties that will yield performance gains with TM. Second, we evaluate the combination of OpenMP with multiple synchronization primitives on top of MPI to determine suitable task-to-thread ratios per node. Finally, we condense our findings into a set of best practices. These are applied to a Monte Carlo Benchmark and a Smoothed Particle Hydrodynamics method. In both cases an optimized TM version, executed with 64 threads on one node, outperforms a simple TM implementation. MCB with optimized TM yields a speedup of 27.45 over baseline.
AMPX: a modular code system for generating coupled multigroup neutron-gamma libraries from ENDF/B
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greene, N.M.; Lucius, J.L.; Petrie, L.M.
1976-03-01
AMPX is a modular system for producing coupled multigroup neutron-gamma cross-section sets. Basic neutron and gamma cross-section data for AMPX are obtained from ENDF/B libraries. Most commonly used operations required to generate and collapse multigroup cross-section sets are provided in the system. AMPX is flexibly dimensioned; neutron group structures, gamma group structures, and expansion orders to represent anisotropic processes are all arbitrary and limited only by available computer core and budget. The basic processes provided will (1) generate multigroup neutron cross sections; (2) generate multigroup gamma cross sections; (3) generate gamma yields for gamma-producing neutron interactions; (4) combine neutron cross sections, gamma cross sections, and gamma yields into final "coupled sets"; (5) perform one-dimensional discrete ordinates transport or diffusion theory calculations for neutrons and gammas and, on option, collapse the cross sections to a broad-group structure, using the one-dimensional results as weighting functions; (6) plot cross sections, on option, to facilitate the "evaluation" of a particular multigroup set of data; (7) update and maintain multigroup cross-section libraries in such a manner as to make it easy not only to combine new data with previously processed data but also to do it in a single pass on the computer; and (8) output multigroup cross sections in convenient formats for other codes.
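Step (5), collapsing to a broad-group structure with the one-dimensional results as weighting functions, is the standard flux-weighted average sigma_G = sum_g(sigma_g * phi_g) / sum_g(phi_g) over the fine groups g in each broad group G. A minimal Python sketch, with made-up cross sections and weighting fluxes:

# Flux-weighted collapse of fine-group cross sections to broad groups.
def collapse(sigma, phi, broad_groups):
    """broad_groups maps each broad group to the fine-group indices it spans."""
    out = []
    for fine in broad_groups:
        num = sum(sigma[g] * phi[g] for g in fine)
        den = sum(phi[g] for g in fine)
        out.append(num / den)
    return out

sigma_fine = [12.0, 9.5, 7.1, 4.3, 2.2, 1.8]   # barns, 6 fine groups
phi_fine   = [0.8, 1.4, 2.1, 2.6, 1.9, 0.7]    # weighting flux per fine group
print(collapse(sigma_fine, phi_fine, [(0, 1, 2), (3, 4, 5)]))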
West, Devin M.; McCauley, Lindsay M.; Sorensen, Jeffrey S.; Jephson, Al R.
2016-01-01
The pneumococcal urine antigen test increases specific microbiological diagnosis over conventional culture methods in pneumonia patients. Data are limited regarding its yield and effect on antibiotic prescribing among patients with community-onset pneumonia in clinical practice. We performed a secondary analysis of 2837 emergency department patients admitted to seven Utah hospitals over 2 years with ICD-9 diagnostic codes and radiographic evidence of pneumonia. Mean age was 64.2 years, 47.2% were male, and all-cause 30-day mortality was 9.6%. Urinary antigen testing was performed in 1110 (39%) patients, yielding 134 (12%) positives. Intensive care unit patients were more likely to undergo testing and to have a positive result (15% versus 8.8% for ward patients; p<0.01). Patients with risk factors for healthcare-associated pneumonia had fewer urinary antigen tests performed, but 8.4% were positive. Physicians changed to targeted antibiotic therapy in 20 (15%) patients and de-escalated antibiotic therapy in 76 patients (57%). In 38 (28%) patients, antibiotics were not changed. Only one patient changed to targeted therapy suffered a clinical relapse. Length of stay and mortality were lower in patients receiving targeted therapy. Pneumococcal urinary antigen testing is an inexpensive, noninvasive test that favourably influenced antibiotic prescribing in a "real world", multi-hospital observational study. PMID:28053969
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, S. L.
1998-08-25
Fluid Catalytic Cracking (FCC) technology is the most important process used by the refinery industry to convert crude oil to valuable lighter products such as gasoline. Process development is generally very time consuming, especially when a small pilot unit is being scaled up to a large commercial unit, because of the lack of information to aid in the design of scaled-up units. Such information can now be obtained by analysis based on pilot-scale measurements and computer simulation that includes the controlling physics of the FCC system. A computational fluid dynamics (CFD) code, ICRKFLO, has been developed at Argonne National Laboratory (ANL) and has been successfully applied to the simulation of catalytic petroleum cracking risers. It employs hybrid hydrodynamic-chemical kinetic coupling techniques, enabling the analysis of an FCC unit with complex chemical reaction sets containing tens or hundreds of subspecies. The code has been continuously validated against pilot-scale experimental data. It is now being used to investigate scale-up effects in FCC units. Among FCC operating conditions, the feed injection conditions are found to have a strong impact on the product yields of scaled-up FCC units. The feed injection conditions appear to affect flow and heat transfer patterns, and the interaction of hydrodynamics and cracking kinetics causes the product yields to change accordingly.
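The hybrid hydrodynamic-kinetic coupling pairs the flow solution with a lumped cracking reaction set. As a much-reduced illustration of the kinetic half of that pairing, the Python sketch below integrates a 3-lump model (feed to gasoline to light gas) along a plug-flow riser; the lumps, rate constants, and residence time are invented and bear no relation to ICRKFLO's actual reaction sets.

# Illustrative 3-lump cracking kinetics integrated in plug flow.
def riser_yields(k1=0.45, k2=0.08, tau=5.0, n=1000):
    feed, gasoline, gas = 1.0, 0.0, 0.0
    dt = tau / n
    for _ in range(n):
        r1 = k1 * feed          # feed -> gasoline
        r2 = k2 * gasoline      # gasoline -> light gas
        feed     -= r1 * dt
        gasoline += (r1 - r2) * dt
        gas      += r2 * dt
    return feed, gasoline, gas

f, g, lg = riser_yields()
print(f"unconverted feed {f:.3f}, gasoline yield {g:.3f}, light gas {lg:.3f}")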
Parental Perspectives of Communication at the End of Life at a Pediatric Oncology Institution.
Snaman, Jennifer M; Torres, Carlos; Duffy, Brian; Levine, Deena R; Gibson, Deborah V; Baker, Justin N
2016-03-01
The interaction of health care providers and hospital staff with patients and families at the end of life affects the parental grief experience. Both verbal and nonverbal communication are key components of this interaction. The study objective was to explore the communication between hospital staff members and patients and families at the time of patients' health decline near the end of life. Twelve bereaved parents participated in a focus group. Semantic content analysis was used to analyze the transcript. Parents' responses to the prompt about typical ways the medical team communicated yielded 109 codes, which were grouped into 12 themes. The most common themes were "patient inclusion" and "explanation of medical plan," both used in 17% of responses. Responses to the prompt about positive and negative aspects of communication generated 208 codes, yielding 15 different themes. The most common theme about positive communication was the "strong relationship between family and staff." The theme "variations in care with a negative impact" was used most frequently in describing negative communication. This study helps to identify techniques that should be used by clinicians as they work with children with cancer and their families, particularly including patients in treatment decisions, ongoing relationship building, communicating with caring and empathy, using an interdisciplinary team for additional support, and pairing bad news with a plan of action.
Comparison of turbulence models and CFD solution options for a plain pipe
NASA Astrophysics Data System (ADS)
Canli, Eyub; Ates, Ali; Bilir, Sefik
2018-06-01
The present paper is in part a status report on ongoing PhD work concerning turbulent flow in a thick-walled pipe for the analysis of conjugate heat transfer. An ongoing effort on the CFD investigation of this problem, using cylindrical coordinates and dimensionless governing equations, is described alongside a literature review. The PhD work will be conducted using an in-house code, which first requires preliminary evaluation against commercial codes available in the field. Accordingly, ANSYS CFD was utilized to evaluate mesh structure needs and to assess the turbulence models and solution options in terms of computational cost versus the significance of the differences. The present work contains a literature survey, an arrangement of the governing equations of the PhD work, the CFD essentials of the preliminary analysis, and findings about the mesh structure and solution options. The mesh element number was varied between 5,000 and 320,000. The k-ɛ, k-ω, Spalart-Allmaras and Viscous-Laminar models were compared. The Reynolds number was varied between 1,000 and 50,000. As may be expected from the literature, k-ɛ yields more favorable results near the pipe axis and k-ω yields more convenient results near the wall. However, k-ɛ is found sufficient to capture the turbulent structures for a conjugate heat transfer problem in a thick-walled plain pipe.
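One common way to decide when a mesh in such a sweep (here 5,000 to 320,000 elements) is fine enough is Richardson extrapolation over three systematically refined grids, which also yields the observed order of accuracy. A minimal Python sketch with hypothetical centerline-velocity samples, not values from this study:

import math

r = 2.0                                            # refinement ratio between meshes
u_coarse, u_medium, u_fine = 1.182, 1.204, 1.211   # hypothetical solution samples

# Observed order of accuracy and grid-converged (extrapolated) value.
p = math.log(abs(u_medium - u_coarse) / abs(u_fine - u_medium)) / math.log(r)
u_exact = u_fine + (u_fine - u_medium) / (r ** p - 1)
print(f"observed order p = {p:.2f}, extrapolated value = {u_exact:.4f}")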
Electron emission from condensed phase material induced by fast protons.
Shinpaugh, J L; McLawhorn, R A; McLawhorn, S L; Carnes, K D; Dingfelder, M; Travia, A; Toburen, L H
2011-02-01
Monte Carlo track simulation has become an important tool in radiobiology. Monte Carlo transport codes commonly rely on elastic and inelastic electron scattering cross sections determined using theoretical methods supplemented with gas-phase data; experimental condensed phase data are often unavailable or infeasible. The largest uncertainties in the theoretical methods exist for low-energy electrons, which are important for simulating electron track ends. To test the reliability of these codes to deal with low-energy electron transport, yields of low-energy secondary electrons ejected from thin foils have been measured following passage of fast protons. Fast ions, where interaction cross sections are well known, provide the initial spectrum of low-energy electrons that subsequently undergo elastic and inelastic scattering in the material before exiting the foil surface and being detected. These data, measured as a function of the energy and angle of the emerging electrons, can provide tests of the physics of electron transport. Initial measurements from amorphous solid water frozen to a copper substrate indicated substantial disagreement with MC simulation, although questions remained because of target charging. More recent studies, using different freezing techniques, do not exhibit charging, but confirm the disagreement seen earlier between theory and experiment. Additional data are now available on the absolute differential electron yields from copper, aluminum and gold, as well as from thin films of frozen hydrocarbons. Representative data are presented.
Electron emission from condensed phase material induced by fast protons†
Shinpaugh, J. L.; McLawhorn, R. A.; McLawhorn, S. L.; Carnes, K. D.; Dingfelder, M.; Travia, A.; Toburen, L. H.
2011-01-01
Monte Carlo track simulation has become an important tool in radiobiology. Monte Carlo transport codes commonly rely on elastic and inelastic electron scattering cross sections determined using theoretical methods supplemented with gas-phase data; experimental condensed phase data are often unavailable or infeasible. The largest uncertainties in the theoretical methods exist for low-energy electrons, which are important for simulating electron track ends. To test the reliability of these codes to deal with low-energy electron transport, yields of low-energy secondary electrons ejected from thin foils have been measured following passage of fast protons. Fast ions, where interaction cross sections are well known, provide the initial spectrum of low-energy electrons that subsequently undergo elastic and inelastic scattering in the material before exiting the foil surface and being detected. These data, measured as a function of the energy and angle of the emerging electrons, can provide tests of the physics of electron transport. Initial measurements from amorphous solid water frozen to a copper substrate indicated substantial disagreement with MC simulation, although questions remained because of target charging. More recent studies, using different freezing techniques, do not exhibit charging, but confirm the disagreement seen earlier between theory and experiment. Additional data are now available on the absolute differential electron yields from copper, aluminum and gold, as well as from thin films of frozen hydrocarbons. Representative data are presented. PMID:21183539
Further screening of the rhodopsin gene in patients with autosomal dominant retinitis pigmentosa
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vaithinathan, R.; Berson, E.L.; Dryja, T.P.
Here the authors report 8 novel mutations and 8 previously reported mutations found from further analysis of the rhodopsin gene in a large set of additional patients with autosomal dominant retinitis pigmentosa. Leukocyte DNA was purified from 122 unrelated patients with autosomal dominant retinitis pigmentosa who were not included in previous analyses. The coding region and splice donor and acceptor sites of the rhodopsin gene were screened for mutations using single-strand conformation polymorphism analysis and direct genomic sequencing. They found 29 patients with variant bands that were due to mutations. Sequence analysis showed that 20 cases each had 1 of 9 previously published mutations: Pro23His, Thr58Arg, Gly89Asp, Pro171Leu, Glu181Lys, Pro347Leu, Phe45Leu, Arg135Trp, and Lys296Glu. In 9 other cases, they found 8 novel mutations. One was a 3-bp deletion (Cys264-del), and the rest were point mutations resulting in an altered amino acid: Gly51Arg (GGC → CGC), Cys110Tyr (TGC → TAC), Gly114Asp (GGC → GAC), Ala164Glu (GCG → GAG), Pro171Ser (CCA → TCA), Val345Leu (GTG → CTG), and Pro347Gln (CCG → CAG). Each of these novel mutations was found in only one family except for Gly51Arg, which was found in two. In every family tested, the mutation cosegregated with the disease. However, in pedigree D865 only one affected member was available for analysis. About two-thirds of the mutations affect amino acids in transmembrane domains, yet only one-half of opsin's residues are in these regions. One-third of the mutations alter residues in the extracellular/intradiscal space, which includes only 25% of the protein.
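The codon changes listed above can be checked mechanically against the genetic code; note that the wild-type codon for Cys110Tyr must be TGC, since TCG encodes Ser. The Python sketch below verifies that each reported change is a single-base substitution producing the stated amino acids; the codon table is restricted to the codons that actually appear here.

# Consistency check of the reported point mutations against the genetic code.
CODON = {"GGC": "Gly", "CGC": "Arg", "TGC": "Cys", "TAC": "Tyr",
         "GAC": "Asp", "GCG": "Ala", "GAG": "Glu", "CCA": "Pro",
         "TCA": "Ser", "GTG": "Val", "CTG": "Leu", "CCG": "Pro",
         "CAG": "Gln"}

mutations = [(51, "GGC", "CGC"), (110, "TGC", "TAC"), (114, "GGC", "GAC"),
             (164, "GCG", "GAG"), (171, "CCA", "TCA"), (345, "GTG", "CTG"),
             (347, "CCG", "CAG")]

for pos, wt, mut in mutations:
    # A point mutation differs from the wild-type codon at exactly one base.
    assert sum(a != b for a, b in zip(wt, mut)) == 1
    print(f"{CODON[wt]}{pos}{CODON[mut]}: {wt} -> {mut}")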
Jeong, Jong Seob; Chang, Jin Ho; Shung, K. Kirk
2009-01-01
For noninvasive treatment of prostate tissue using high intensity focused ultrasound (HIFU), this paper proposes a design of an integrated multi-functional confocal phased array (IMCPA) and a strategy to perform both imaging and therapy simultaneously with this array. IMCPA is composed of triple-row phased arrays: a 6 MHz array in the center row for imaging and two 4 MHz arrays in the outer rows for therapy. Different types of piezoelectric materials and stack configurations may be employed to maximize their respective functionalities, i.e., therapy and imaging. Fabrication complexity of IMCPA may be reduced by assembling already constructed arrays. In IMCPA, reflected therapeutic signals may corrupt the quality of imaging signals received by the center row array. This problem can be overcome by implementing a coded excitation approach and/or a notch filter when B-mode images are formed during therapy. The 13-bit Barker code, which is a binary code with unique autocorrelation properties, is preferred for implementing coded excitation, although other codes may also be used. Both Field II simulations and experimental results were used to verify that these remedial approaches make it feasible to carry out imaging and therapy simultaneously with IMCPA. The results showed that the 13-bit Barker code with 3 cycles per bit provided acceptable performances. The measured −6 dB and −20 dB range mainlobe widths were 0.52 mm and 0.91 mm, respectively, and a range sidelobe level was measured to be −48 dB regardless of whether a notch filter was used. The 13-bit Barker code with 2 cycles per bit yielded −6dB and −20dB range mainlobe widths of 0.39 mm and 0.67 mm. Its range sidelobe level was found to be −40 dB after notch filtering. These results indicate the feasibility of the proposed transducer design and system for real-time imaging during therapy. PMID:19811994
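The "unique autocorrelation properties" of the 13-bit Barker code are that its aperiodic autocorrelation peaks at 13 while every sidelobe has magnitude at most 1, i.e. a raw sidelobe level of 20*log10(1/13), about -22.3 dB, before the additional suppression gained from compression filtering. A minimal Python check:

# Autocorrelation of the 13-bit Barker code: peak 13, all sidelobes 0 or 1.
barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]

def autocorr(seq):
    n = len(seq)
    return [sum(seq[i] * seq[i + lag] for i in range(n - lag))
            for lag in range(n)]

print(autocorr(barker13))   # [13, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1]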
Jeong, Jong Seob; Chang, Jin Ho; Shung, K Kirk
2009-09-01
For noninvasive treatment of prostate tissue using high-intensity focused ultrasound this paper proposes a design of an integrated multifunctional confocal phased array (IMCPA) and a strategy to perform both imaging and therapy simultaneously with this array. IMCPA is composed of triple-row phased arrays: a 6-MHz array in the center row for imaging and two 4-MHz arrays in the outer rows for therapy. Different types of piezoelectric materials and stack configurations may be employed to maximize their respective functionalities, i.e., therapy and imaging. Fabrication complexity of IMCPA may be reduced by assembling already constructed arrays. In IMCPA, reflected therapeutic signals may corrupt the quality of imaging signals received by the center-row array. This problem can be overcome by implementing a coded excitation approach and/or a notch filter when B-mode images are formed during therapy. The 13-bit Barker code, which is a binary code with unique autocorrelation properties, is preferred for implementing coded excitation, although other codes may also be used. From both Field II simulation and experimental results, we verified whether these remedial approaches would make it feasible to simultaneously carry out imaging and therapy by IMCPA. The results showed that the 13-bit Barker code with 3 cycles per bit provided acceptable performances. The measured -6 dB and -20 dB range mainlobe widths were 0.52 mm and 0.91 mm, respectively, and a range sidelobe level was measured to be -48 dB regardless of whether a notch filter was used. The 13-bit Barker code with 2 cycles per bit yielded -6 dB and -20 dB range mainlobe widths of 0.39 mm and 0.67 mm. Its range sidelobe level was found to be -40 dB after notch filtering. These results indicate the feasibility of the proposed transducer design and system for real-time imaging during therapy.
Jayasinghe, Sanjay; Macartney, Kristine
2013-01-30
Hospital discharge records and laboratory data have shown a substantial early impact from the rotavirus vaccination program that commenced in 2007 in Australia. However, these assessments are affected by the validity and reliability of hospital discharge coding and stool testing to measure the true incidence of hospitalised disease. The aim of this study was to assess the validity of these data sources for disease estimation, both before and after vaccine introduction. All hospitalisations at a major paediatric centre in children aged <5 years from 2000 to 2009 containing acute gastroenteritis (AGE) ICD-10-AM diagnosis codes were linked to hospital laboratory stool testing data. The validity of the rotavirus-specific diagnosis code (A08.0) and the incidence of hospitalisations attributable to rotavirus, by both direct estimation and with adjustments for non-testing and miscoding, were calculated for pre- and post-vaccination periods. A laboratory record of stool testing was available for 36% of all AGE hospitalisations (n=4948); the rotavirus code had high specificity (98.4%; 95% CI, 97.5-99.1%) and positive predictive value (96.8%; 94.8-98.3%), and modest sensitivity (61.6%; 58-65.1%). Of all rotavirus test-positive hospitalisations, only a third had a rotavirus code. The estimated annual average number of rotavirus hospitalisations, following adjustment for non-testing and miscoding, was 5- and 6-fold higher than identified, respectively, from testing and coding alone. Direct and adjusted estimates yielded similar percentage reductions in annual average rotavirus hospitalisations of over 65%. Due to the limited use of stool testing and the poor sensitivity of the rotavirus-specific diagnosis code, routine hospital discharge and laboratory data substantially underestimate the true incidence of rotavirus hospitalisations and absolute vaccine impact. However, these data can still be used to monitor vaccine impact, as the effects of miscoding and under-testing appear to be comparable between pre- and post-vaccination periods. Copyright © 2012 Elsevier Ltd. All rights reserved.
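The adjustment for non-testing and miscoding can be approximated by scaling observed counts by the testing fraction and the code sensitivity reported above. A back-of-envelope Python sketch with invented annual counts, under the (strong) assumption that untested admissions are positive at the same rate as tested ones:

# Scale observed positives by the testing rate, and coded cases by the
# sensitivity of the A08.0 code. Counts are illustrative, not the study's.
age_admissions = 1000        # hypothetical AGE hospitalisations in a year
tested_fraction = 0.36       # stool testing performed in 36% of admissions
positives_among_tested = 43  # hypothetical laboratory-confirmed rotavirus
code_sensitivity = 0.616     # sensitivity of the rotavirus-specific code

adjusted_for_testing = positives_among_tested / tested_fraction
print(f"test-adjusted rotavirus admissions ~ {adjusted_for_testing:.0f}")

coded_cases = 25             # hypothetical count carrying the A08.0 code
adjusted_for_coding = coded_cases / code_sensitivity
print(f"code-adjusted rotavirus admissions ~ {adjusted_for_coding:.0f}")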
Consequences of an unstable chemical stratification on mantle dynamics
NASA Astrophysics Data System (ADS)
Plesa, Ana-Catalina; Tosi, Nicola; Breuer, Doris
2013-04-01
Early in the history of terrestrial planets, the fractional crystallization of primordial magma oceans may have led to the formation of large-scale chemical heterogeneities. These may have been preserved over the entire planetary evolution, as suggested for Mars by the isotopic analysis of the so-called SNC meteorites. The fractional crystallization of a magma ocean leads to a chemical stratification characterized by a progressive enrichment in heavy elements from the core-mantle boundary to the surface. This results in an unstable configuration that causes the overturn of the mantle and the subsequent formation of a stable chemical layering. Assuming scaling parameters appropriate for Mars, we first performed simulations of 2D thermo-chemical convection in Cartesian geometry with the numerical code YACC [1]. We investigated systems heated either solely from below or from within by systematically varying the buoyancy ratio B, which measures the relative importance of chemical to thermal buoyancy, and the mantle rheology, by considering systems with constant, strongly temperature-dependent, and plastic viscosity. We ran a large set of simulations spanning a wide parameter space in order to understand the basic physics governing the magma ocean cumulate overturn and its consequences for mantle dynamics. Moreover, we derived scaling laws that relate the time over which chemical heterogeneities can be preserved (mixing time) and the critical yield stress (the maximal yield stress that allows the lithosphere to undergo brittle failure) to the buoyancy ratio. We found that the mixing time increases exponentially with B, while the critical yield stress shows a linear dependence. We then investigated Mars' early thermo-chemical evolution using the code GAIA in a 2D cylindrical geometry [2], assuming a detailed magma ocean crystallization sequence as obtained from geochemical modeling [3]. We used an initial composition profile adapted from [3], accounted for an exothermic phase transition between the lower and upper mantle, and assumed all radiogenic heat sources to be enriched in the uppermost 50 km during the freezing phase of the magma ocean [4]. A stagnant lid forms rapidly because of the strong temperature dependence of the viscosity. This prevents the uppermost dense cumulates from sinking, even when a plastic yielding mechanism is allowed for. Below this dense stagnant lid, the mantle chemical gradient settles to a stable configuration. The convection pattern is dominated by small-scale structures, which are difficult to reconcile with the large-scale volcanic features observed over Mars' surface. Assuming that the stagnant lid breaks, a stable density gradient is obtained, with the densest material and the entire amount of heat sources lying above the core-mantle boundary. This leads to a strong overheating of the lowermost mantle, whose temperature increases to values that exceed the liquidus. Therefore a fractionated global and deep magma ocean is difficult to reconcile with observations. Different scenarios assuming, for instance, a hemispherical or shallow magma ocean will have to be considered. References [1] N. Tosi, D.A. Yuen and O. Čadek; EPSL (2010) (Yet Another Convection Code, https://code.google.com/p/yacc-convection/) [2] C. Huettig and K. Stemmer; PEPI (2008) [3] L.T. Elkins-Tanton, E.M. Parmentier and P.C. Hess; Meteoritics & Planetary Science (2003) [4] L.T. Elkins-Tanton, S.E. Zaranek, E.M. Parmentier and P.C. Hess; EPSL (2005)
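The buoyancy ratio B that organizes these simulations compares the chemical density contrast to the thermal one, B = delta_rho_chem / (rho0 * alpha * delta_T). A short numerical illustration in Python, with round, Mars-like parameter values that are not the study's inputs:

# Buoyancy ratio: chemical vs. thermal density contrast.
rho0 = 3500.0            # reference mantle density, kg/m^3
alpha = 2.5e-5           # thermal expansivity, 1/K
delta_T = 2000.0         # temperature drop across the mantle, K
delta_rho_chem = 175.0   # density contrast from fractional crystallization

B = delta_rho_chem / (rho0 * alpha * delta_T)
print(f"buoyancy ratio B = {B:.2f}")   # B = 1.00 for these values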
Optimized pH method for DNA elution from buccal cells collected in Whatman FTA cards.
Lema, Carolina; Kohl-White, Kendra; Lewis, Laurie R; Dao, Dat D
2006-01-01
DNA is the most accessible biologic material for obtaining information from the human genome because of its molecular stability and its presence in every nucleated cell. Currently, single nucleotide polymorphism genotyping and DNA methylation are the main DNA-based approaches to deriving genomic and epigenomic disease biomarkers. Upon the discontinuation of the Schleicher & Schuell IsoCode product (Dassel, Germany), which was a treated paper system to elute DNA from several biologic sources for polymerase chain reaction (PCR) analysis, a high-yielding DNA elution method was imperative. We describe here an improved procedure of the not fully validated Whatman pH-based elution protocol. Our DNA elution procedure from buccal cells collected in Whatman FTA cards (Whatman Inc., Florham Park, NJ) yielded approximately 4 microg of DNA from a 6-mm FTA card punch and was successfully applied for HLA-DQB1 genotyping. The genotypes showed complete concordance with data obtained from blood of the same subjects. The achieved high DNA yield from buccal cells suggests a potential cost-effective tool for genomic and epigenomic disease biomarkers development.
Zhai, Yunbo; Chen, Hongmei; Xu, Bibo; Xiang, Bobin; Chen, Zhong; Li, Caiting; Zeng, Guangming
2014-05-01
The influence of sewage sludge-based activated carbons (SSAC) on sewage sludge liquefaction has been investigated at 350 and 400°C. SSAC increased the yield and energy density of bio-oil at 350°C. Metallic compounds were the catalytic factor of the SSAC obtained at 550°C (SSAC-550), while carbon was the catalytic factor of the SSAC obtained at 650°C. Liquefaction with SSAC redistributed the species of heavy metals in the solid residue (SR). With the addition of SSAC, the risks of Cu, Zn and Pb decreased at 350°C, while at 400°C the risks of Cd, Cu and Zn decreased. The ecological risk index indicated that 400°C was preferable for reducing the toxicity of the SR, while the risk assessment code indicated that SR obtained at 350°C carried lower risk. Considering the bio-oil yield, liquefaction at 350°C with SSAC-550 was preferable. Copyright © 2014 Elsevier Ltd. All rights reserved.
Communication Studies of DMP and SMP Machines
NASA Technical Reports Server (NTRS)
Sohn, Andrew; Biswas, Rupak; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
Understanding the interplay between machines and problems is key to obtaining high performance on parallel machines. This paper investigates the interplay between programming paradigms and the communication capabilities of parallel machines. In particular, we explicate the communication capabilities of the IBM SP-2 distributed-memory multiprocessor and the SGI PowerCHALLENGEarray symmetric multiprocessor. Two benchmark problems, bitonic sorting and the Fast Fourier Transform, are selected for experiments. Communication-efficient algorithms are developed to exploit the overlapping capabilities of the machines. Programs are written in the Message-Passing Interface for portability, and identical codes are used for both machines. Various data sizes and message sizes are used to test the machines' communication capabilities. Experimental results indicate that the communication performance of the multiprocessors is consistent with the size of messages. The SP-2 is sensitive to message size but yields much higher communication overlap because of its communication co-processor. The PowerCHALLENGEarray is not highly sensitive to message size and yields low communication overlap. Bitonic sorting yields lower performance than FFT due to a smaller computation-to-communication ratio.
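The message-size sensitivity reported here is usually reasoned about with a latency-bandwidth model, t(m) = t0 + m/B: short messages are latency-dominated, long ones bandwidth-dominated. A minimal Python sketch with placeholder figures, not measured SP-2 or PowerCHALLENGEarray values:

# Latency-bandwidth transfer-time model and achieved bandwidth fraction.
def transfer_time(m_bytes, latency_s=40e-6, bandwidth_Bps=35e6):
    return latency_s + m_bytes / bandwidth_Bps

for m in (1 << 10, 1 << 16, 1 << 20):    # 1 KB, 64 KB, 1 MB messages
    t = transfer_time(m)
    eff = (m / t) / 35e6                  # fraction of peak bandwidth achieved
    print(f"{m:>8} B: {t * 1e6:8.1f} us, {100 * eff:5.1f}% of peak bandwidth")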