Unmixed fuel processors and methods for using the same
Kulkarni, Parag Prakash; Cui, Zhe
2010-08-24
Disclosed herein are unmixed fuel processors and methods for using the same. In one embodiment, an unmixed fuel processor comprises: an oxidation reactor comprising an oxidation portion and a gasifier, a CO2 acceptor reactor, and a regeneration reactor. The oxidation portion comprises an air inlet, an effluent outlet, and an oxygen transfer material. The gasifier comprises a solid hydrocarbon fuel inlet, a solids outlet, and a syngas outlet. The CO2 acceptor reactor comprises a water inlet, a hydrogen outlet, and a CO2 sorbent, and is configured to receive syngas from the gasifier. The regeneration reactor comprises a water inlet and a CO2 stream outlet. The regeneration reactor is configured to receive spent CO2 adsorption material from the gasification reactor and to return regenerated CO2 adsorption material to the gasification reactor, and is configured to receive oxidized oxygen transfer material from the oxidation reactor and to return reduced oxygen transfer material to the oxidation reactor.
Fuel-Flexible Gasification-Combustion Technology for Production of H2 and Sequestration-Ready CO2
DOE Office of Scientific and Technical Information (OSTI.GOV)
George Rizeq; Janice West; Raul Subia
GE Global Research is developing an innovative energy technology for coal gasification with high efficiency and near-zero pollution. This Unmixed Fuel Processor (UFP) technology simultaneously converts coal, steam and air into three separate streams: hydrogen-rich gas, sequestration-ready CO2, and high-temperature, high-pressure vitiated air for electricity production in gas turbines. This is the draft final report for the first stage of the DOE-funded Vision 21 program. The UFP technology development program encompassed lab-, bench- and pilot-scale studies to demonstrate the UFP concept. Modeling and economic assessments were also key parts of this program. The chemical and mechanical feasibility were established via lab- and bench-scale testing, and a pilot plant was designed, constructed and operated, demonstrating the major UFP features. Experimental and preliminary modeling results showed that 80% H2 purity could be achieved, and that a UFP-based energy plant is projected to meet DOE efficiency targets. Future work will include additional pilot plant testing to optimize performance and reduce environmental, operability and combined cycle integration risks. Results obtained to date have confirmed that this technology has the potential to economically meet future efficiency and environmental performance goals.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-09
... fuel produced by transmix processors. These amendments will allow locomotive and marine diesel fuel produced by transmix processors to meet a maximum 500 parts per million (ppm) sulfur standard provided that... processors while having a neutral or net positive environmental impact. EPA is also amending the fuel marker...
Three-wheel air turbocompressor for PEM fuel cell systems
Rehg, Tim; Gee, Mark; Emerson, Terence P.; Ferrall, Joe; Sokolov, Pavel
2003-08-19
A fuel cell system comprises a compressor and a fuel processor downstream of the compressor. A fuel cell stack is in communication with the fuel processor and compressor. A combustor is downstream of the fuel cell stack. First and second turbines are downstream of the fuel processor and in parallel flow communication with one another. A distribution valve is in communication with the first and second turbines. The first and second turbines are mechanically engaged to the compressor. A bypass valve is intermediate the compressor and the second turbine, with the bypass valve enabling a compressed gas from the compressor to bypass the fuel processor.
Methanol tailgas combustor control method
Hart-Predmore, David J.; Pettit, William H.
2002-01-01
A method for controlling the power, temperature, and fuel source of a combustor in a fuel cell apparatus to supply heat to a fuel processor, where the combustor has dual fuel inlet streams: a first fuel stream, and a second fuel stream of anode effluent from the fuel cell and reformate from the fuel processor. In all operating modes, an enthalpy balance is determined by regulating the amount of the first and/or second fuel streams and the quantity of the first air flow stream to support fuel processor power requirements.
Method for operating a combustor in a fuel cell system
Chalfant, Robert W.; Clingerman, Bruce J.
2002-01-01
A method of operating a combustor to heat a fuel processor in a fuel cell system, in which the fuel processor generates a hydrogen-rich stream a portion of which is consumed in a fuel cell stack and a portion of which is discharged from the fuel cell stack and supplied to the combustor, and wherein first and second streams are supplied to the combustor, the first stream being a hydrocarbon fuel stream and the second stream consisting of said hydrogen-rich stream, the method comprising the steps of monitoring the temperature of the fuel processor; regulating the quantity of the first stream to the combustor according to the temperature of the fuel processor; and comparing said quantity of said first stream to a predetermined value or range of predetermined values.
NASA Astrophysics Data System (ADS)
Yang, Mei; Jiao, Fengjun; Li, Shulian; Li, Hengqiang; Chen, Guangwen
2015-08-01
A self-sustained, complete and miniaturized methanol fuel processor has been developed based on modular integration and microreactor technology. The fuel processor comprises a methanol oxidative reformer, a methanol combustor and a two-stage CO preferential oxidation unit. Microchannel heat exchangers are employed to recover heat from the hot streams, miniaturize the system size and thus achieve high energy utilization efficiency. Through optimized thermal management and proper control of the operating parameters, the fuel processor can start up in 10 min at room temperature without external heating. A self-sustained state is achieved with an H2 production rate of 0.99 Nm3 h-1 and an extremely low CO content below 25 ppm. This amount of H2 is sufficient to supply a 1 kWe proton exchange membrane fuel cell. The corresponding thermal efficiency of the whole processor is higher than 86%. The size and weight of the assembled reactors integrated with microchannel heat exchangers are 1.4 L and 5.3 kg, respectively, demonstrating a very compact construction of the fuel processor.
Fuel processors for fuel cell APU applications
NASA Astrophysics Data System (ADS)
Aicher, T.; Lenz, B.; Gschnell, F.; Groos, U.; Federici, F.; Caprile, L.; Parodi, L.
The conversion of liquid hydrocarbons to a hydrogen-rich product gas is a central process step in fuel processors for auxiliary power units (APUs) for vehicles of all kinds. The choice of reforming process depends on the fuel and the type of fuel cell. Vehicle power trains run on liquid hydrocarbons like gasoline, kerosene, and diesel, so these will also be the fuels for the respective APU systems. The fuel cells commonly envisioned for mobile APU applications are molten carbonate fuel cells (MCFC), solid oxide fuel cells (SOFC), and proton exchange membrane fuel cells (PEMFC). Since high-temperature fuel cells, e.g. MCFCs or SOFCs, can be supplied with a feed gas that contains carbon monoxide (CO), their fuel processors do not require reactors for CO reduction and removal. For PEMFCs, on the other hand, CO concentrations in the feed gas must not exceed 50 ppm (preferably 20 ppm), which requires additional reactors downstream of the reforming reactor. This paper gives an overview of the current state of fuel processor development for APU applications and of APU system developments. Furthermore, it presents the latest developments at Fraunhofer ISE regarding fuel processors for high-temperature fuel cell APU systems on board ships and aircraft.
NASA Technical Reports Server (NTRS)
Voecks, G. E.
1985-01-01
In proposed fuel-cell system, methanol converted to hydrogen in two places. External fuel processor converts only part of methanol. Remaining methanol converted in fuel cell itself, in reaction at anode. As result, size of fuel processor reduced, system efficiency increased, and cost lowered.
Miniature Fuel Processors for Portable Fuel Cell Power Supplies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holladay, Jamie D.; Jones, Evan O.; Palo, Daniel R.
2003-06-02
Miniature and micro-scale fuel processors are discussed. The enabling technologies for these devices are the novel catalysts and the micro-technology-based designs. The novel catalyst allows methanol reforming at gas hourly space velocities of 50,000 hr-1 or higher while maintaining carbon monoxide levels at 1% or less. The micro-technology-based designs enable the devices to be extremely compact and lightweight. The miniature fuel processors can nominally provide between 25 and 50 watts equivalent of hydrogen, which is ample for soldier or personal portable power supplies. The integrated processors have a volume of less than 50 cm3, a mass of less than 150 grams, and thermal efficiencies of up to 83%. With reasonable assumptions on fuel cell efficiency, anode gas and water management, parasitic power loss, etc., the energy density was estimated at 1700 Whr/kg. The miniature processors have been demonstrated with a carbon monoxide clean-up method and a fuel cell stack. The micro-scale fuel processors have been designed to provide up to 0.3 watt equivalent of power with efficiencies over 20%. They have a volume of less than 0.25 cm3 and a mass of less than 1 gram.
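The space-velocity figure above is a ratio of feed flow to catalyst volume; a minimal sketch (all flow and bed-volume numbers below are hypothetical, chosen only to land near the reported 50,000 hr-1):

```python
# Gas hourly space velocity (GHSV): volumetric feed flow at standard
# conditions divided by catalyst bed volume. Illustrative numbers only;
# the abstract reports GHSV of 50,000 hr^-1 or higher for the catalyst.

def ghsv(feed_flow_slm, bed_volume_cm3):
    """GHSV in hr^-1 from feed flow in standard L/min and bed volume in cm^3."""
    feed_l_per_hr = feed_flow_slm * 60.0     # SL/min -> SL/hr
    bed_volume_l = bed_volume_cm3 / 1000.0   # cm^3 -> L
    return feed_l_per_hr / bed_volume_l

# e.g. a hypothetical 2.5 SLM reformer feed over a 3 cm^3 catalyst bed:
print(round(ghsv(2.5, 3.0)))  # 50000
```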
Method for operating a combustor in a fuel cell system
Clingerman, Bruce J.; Mowery, Kenneth D.
2002-01-01
In one aspect, the invention provides a method of operating a combustor to heat a fuel processor to a desired temperature in a fuel cell system, wherein the fuel processor generates hydrogen (H.sub.2) from a hydrocarbon for reaction within a fuel cell to generate electricity. More particularly, the invention provides a method and select system design features which cooperate to provide a start up mode of operation and a smooth transition from start-up of the combustor and fuel processor to a running mode.
NASA Astrophysics Data System (ADS)
Echigo, Mitsuaki; Shinke, Norihisa; Takami, Susumu; Tabata, Takeshi
Natural gas fuel processors have been developed for 500 W and 1 kW class residential polymer electrolyte fuel cell (PEFC) systems. These fuel processors contain all the elements (desulfurizers, steam reformers, CO shift converters, CO preferential oxidation (PROX) reactors, steam generators, burners and heat exchangers) in one package. For the PROX reactor, a single-stage PROX process using a novel PROX catalyst was adopted. In the 1 kW class fuel processor, a thermal efficiency of 83% at HHV was achieved at nominal output, assuming an H2 utilization rate in the cell stack of 76%. A CO concentration below 1 ppm in the product gas was achieved even under the condition [O2]/[CO] = 1.5 at the PROX reactor. The long-term durability of the fuel processor was demonstrated, with almost no deterioration in thermal efficiency or CO concentration over 10,000 h of operation, 1000 start-stop cycles, and 25,000 load-change cycles.
Self-sustained operation of a kWe-class kerosene-reforming processor for solid oxide fuel cells
NASA Astrophysics Data System (ADS)
Yoon, Sangho; Bae, Joongmyeon; Kim, Sunyoung; Yoo, Young-Sung
In this paper, fuel-processing technologies are developed for residential power generation (RPG) applications of solid oxide fuel cells (SOFCs). Kerosene is selected as the fuel because of its high hydrogen density and the established infrastructure that already exists in South Korea. A kerosene fuel processor with two reaction stages, autothermal reforming (ATR) and adsorptive desulfurization, is developed for SOFC operation. ATR is suited to the reforming of liquid hydrocarbon fuels because oxygen-aided reactions can break the aromatics in the fuel and steam can suppress carbon deposition during the reforming reaction. ATR can also be implemented as a self-sustaining reactor due to the exothermicity of the reaction. The kWe self-sustained kerosene fuel processor, including the desulfurizer, operates for about 250 h in this study. This fuel processor requires neither a heat exchanger between the ATR reactor and the desulfurizer nor electric equipment for heat supply and fuel or water vaporization, because the ATR reformate reaches a temperature suitable for H2S adsorption on the ZnO catalyst beds in the desulfurizer. Although the CH4 concentration in the reformate gas is higher due to the lower temperature of the ATR tail gas, SOFCs can directly use CH4 as a fuel with a sufficient steam feed (H2O/CH4 ≥ 1.5), in contrast to low-temperature fuel cells. The reforming efficiency of the fuel processor is about 60%, and the desulfurizer removed H2S to a level sufficient to allow operation of SOFCs.
Development of compact fuel processor for 2 kW class residential PEMFCs
NASA Astrophysics Data System (ADS)
Seo, Yu Taek; Seo, Dong Joo; Jeong, Jin Hyeok; Yoon, Wang Lai
Korea Institute of Energy Research (KIER) has been developing a novel fuel processing system to provide hydrogen-rich gas to a residential polymer electrolyte membrane fuel cell (PEMFC) cogeneration system. For an effective, compact hydrogen production system design, the unit processes of steam reforming, high- and low-temperature water gas shift, steam generation and internal heat exchange are thermally and physically integrated into a packaged hardware system. Several prototypes are under development; the prototype I fuel processor showed a thermal efficiency of 73% on an HHV basis with a methane conversion of 81%. The recently tested prototype II showed improved performance: a thermal efficiency of 76% with a methane conversion of 83%. In both prototypes, two-stage PrOx reactors reduce the CO concentration to less than 10 ppm, the prerequisite CO limit of the product gas for the PEMFC stack. After confirming the initial performance of the prototype I fuel processor, it was coupled with a PEMFC single cell to test durability, and the fuel processor operated successfully for 3 days without any failure of fuel cell voltage. The prototype II fuel processor also showed stable performance during the durability test.
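The HHV thermal efficiency quoted for such processors is the chemical energy of the product hydrogen divided by that of all methane fed (reformer plus burner). A minimal sketch using standard HHV values; the molar flows are illustrative assumptions, not measured data from the abstract:

```python
# Thermal efficiency on an HHV basis: HHV of product H2 over HHV of total
# CH4 feed. The heating values are standard thermochemical constants; the
# flows below are hypothetical, chosen to land near the prototype II figure.

HHV_H2 = 285.8   # kJ/mol, higher heating value of hydrogen
HHV_CH4 = 890.4  # kJ/mol, higher heating value of methane

def thermal_efficiency_hhv(n_h2_out, n_ch4_in):
    """Fraction of the methane feed's HHV recovered as hydrogen HHV."""
    return (n_h2_out * HHV_H2) / (n_ch4_in * HHV_CH4)

# A hypothetical 1 mol/s total CH4 feed yielding 2.37 mol/s H2:
eff = thermal_efficiency_hhv(2.37, 1.0)
print(f"{eff:.0%}")  # 76%
```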
Compact propane fuel processor for auxiliary power unit application
NASA Astrophysics Data System (ADS)
Dokupil, M.; Spitta, C.; Mathiak, J.; Beckhaus, P.; Heinzel, A.
With a focus on mobile applications, a fuel cell auxiliary power unit (APU) using liquefied petroleum gas (LPG) is currently being developed at the Centre for Fuel Cell Technology (Zentrum für BrennstoffzellenTechnik, ZBT gGmbH). The system consists of an integrated compact and lightweight fuel processor and a low-temperature PEM fuel cell for an electric power output of 300 W. This article presents the current status of development of the fuel processor, which is designed for a nominal hydrogen output of 1 kWth,H2 within a load range from 50 to 120%. A modular setup was chosen, defining a reformer/burner module and a CO-purification module. Based on the performance specifications, thermodynamic simulations, and benchmarking and selection of catalysts, the modules were developed and characterised simultaneously and then assembled into the complete fuel processor. Automated operation results in a cold startup time of about 25 min at nominal load and carbon monoxide output concentrations below 50 ppm for steady-state and dynamic operation. Fast transient response of the fuel processor to load changes, with low fluctuations of the reformate gas composition, has also been achieved. Besides the development of the main reactors, the transfer of the fuel processor to an autonomous system is of major concern. Hence, concepts for packaging have been developed, resulting in a volume of 7 l and a weight of 3 kg. Further, a selection of peripheral components has been tested and evaluated with regard to substituting the laboratory equipment.
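A nominal output of 1 kWth,H2 can be translated into a hydrogen flow using the standard lower heating value of hydrogen; the conversion itself is the only content here, everything else is unit bookkeeping:

```python
# Hydrogen flow whose LHV chemical power equals a given thermal rating.
# LHV of H2 and the normal molar volume are standard constants.

LHV_H2 = 241.8e3   # J/mol, lower heating value of hydrogen
V_MOLAR = 22.414   # L/mol at normal conditions

def h2_flow_for_power(p_watt):
    """Normal litres per minute of H2 carrying p_watt of LHV chemical power."""
    mol_per_s = p_watt / LHV_H2
    return mol_per_s * V_MOLAR * 60.0

# The 1 kWth,H2 nominal rating corresponds to roughly 5.6 Nl/min of H2:
print(round(h2_flow_for_power(1000.0), 1))  # 5.6
```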
Control apparatus and method for efficiently heating a fuel processor in a fuel cell system
Doan, Tien M.; Clingerman, Bruce J.
2003-08-05
A control apparatus and method for efficiently controlling the amount of heat generated by a fuel processor in a fuel cell system by determining a temperature error between the actual and desired fuel processor temperatures. The temperature error is converted to a combustor fuel injector command signal or a heat dump valve position command signal, depending on the type of temperature error. Logic controls are responsive to the combustor fuel injector command signal and the heat dump valve position command signal to prevent the combustor fuel injector command signal from being generated if the heat dump valve is open or, alternately, to prevent the heat dump valve position command signal from being generated if the combustor fuel injector is open.
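The interlock described above can be sketched in a few lines: a positive temperature error (processor too cold) maps to a combustor fuel command, a negative error (too hot) to a heat-dump-valve command, and each command is suppressed while the other actuator is open. All names and the proportional mapping are hypothetical illustrations, not the patent's actual control law:

```python
# Minimal sketch of the mutually exclusive command generation. The
# proportional error-to-command mapping is an assumed simplification.

def combustor_commands(t_actual, t_desired, injector_open, dump_valve_open):
    error = t_desired - t_actual
    fuel_cmd = 0.0
    valve_cmd = 0.0
    if error > 0 and not dump_valve_open:
        fuel_cmd = error          # too cold: command combustor fuel
    elif error < 0 and not injector_open:
        valve_cmd = -error        # too hot: command heat dump valve
    return fuel_cmd, valve_cmd

print(combustor_commands(600.0, 650.0, False, False))  # (50.0, 0.0)
print(combustor_commands(700.0, 650.0, False, False))  # (0.0, 50.0)
print(combustor_commands(600.0, 650.0, False, True))   # (0.0, 0.0) interlocked
```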
Ahmed, Shabbir; Papadias, Dionissios D.; Lee, Sheldon H. D.; Ahluwalia, Rajesh K.
2013-01-08
The invention provides a fuel processor comprising a linear flow structure having an upstream portion and a downstream portion; a first catalyst supported at the upstream portion; and a second catalyst supported at the downstream portion, wherein the first catalyst is in fluid communication with the second catalyst. Also provided is a method for reforming fuel, the method comprising contacting the fuel to an oxidation catalyst so as to partially oxidize the fuel and generate heat; warming incoming fuel with the heat while simultaneously warming a reforming catalyst with the heat; and reacting the partially oxidized fuel with steam using the reforming catalyst.
Design of an integrated fuel processor for residential PEMFCs applications
NASA Astrophysics Data System (ADS)
Seo, Yu Taek; Seo, Dong Joo; Jeong, Jin Hyeok; Yoon, Wang Lai
KIER has been developing a novel fuel processing system to provide hydrogen-rich gas to a residential PEMFC system. For an effective, compact hydrogen production system design, the unit processes of steam reforming and water gas shift, together with a steam generator and internal heat exchangers, are thermally and physically integrated into a single packaged hardware system. The newly designed fuel processor (prototype II) showed a thermal efficiency of 78% on an HHV basis with a methane conversion of 89%. The preferential oxidation unit, with two cascaded reactors, reduces the CO concentration to below 10 ppm without complicated temperature control hardware; this is the prerequisite CO limit for the PEMFC stack. After the initial performance of the fuel processor was confirmed, partial load operation was carried out to test its performance and reliability at various loads. The stability of the fuel processor was also demonstrated over three successive days with a stable product gas composition and thermal efficiency. The CO concentration remained below 10 ppm during the test period, confirming the stable performance of the two-stage PrOx reactors.
Method for fast start of a fuel processor
Ahluwalia, Rajesh K [Burr Ridge, IL; Ahmed, Shabbir [Naperville, IL; Lee, Sheldon H. D. [Willowbrook, IL
2008-01-29
An improved fuel processor for fuel cells is provided whereby the startup time of the processor is less than sixty seconds and can be as low as 30 seconds, if not less. A rapid startup time is achieved by either igniting or allowing a small mixture of air and fuel to react over and warm up the catalyst of an autothermal reformer (ATR). The ATR then produces combustible gases to be subsequently oxidized on and simultaneously warm up water-gas shift zone catalysts. After normal operating temperature has been achieved, the proportion of air included with the fuel is greatly diminished.
Proton exchange membrane fuel cell technology for transportation applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swathirajan, S.
1996-04-01
Proton Exchange Membrane (PEM) fuel cells are extremely promising as future power plants in the transportation sector to achieve an increase in energy efficiency and eliminate environmental pollution due to vehicles. GM is currently involved in a multiphase program with the US Department of Energy for developing a proof-of-concept hybrid vehicle based on a PEM fuel cell power plant and a methanol fuel processor. Other participants in the program are Los Alamos National Labs, Dow Chemical Co., Ballard Power Systems and DuPont Co. In the just-completed Phase 1 of the program, a 10 kW PEM fuel cell power plant was built and tested to demonstrate the feasibility of integrating a methanol fuel processor with a PEM fuel cell stack. However, the fuel cell power plant must overcome stiff technical and economic challenges before it can be commercialized for light duty vehicle applications. Progress achieved in Phase 1 on the use of monolithic catalyst reactors in the fuel processor, managing CO impurity in the fuel cell stack, low-cost electrode-membrane assemblies, and the integration of the fuel processor with a Ballard PEM fuel cell stack will be presented.
Matawle, Jeevan Lal; Pervez, Shamsh; Deb, Manas Kanti; Shrivastava, Anjali; Tiwari, Suresh
2018-02-01
USEPA's UNMIX, positive matrix factorization (PMF) and effective variance-chemical mass balance (EV-CMB) receptor models were applied to chemically speciated profiles of 125 indoor PM2.5 measurements, sampled longitudinally during 2012-2013 in low-income-group households of Central India that use solid fuels for cooking. A three-step source apportionment study was carried out to generate more confident source characterization. First, UNMIX6.0 extracted the initial number of source factors, which were then used to execute PMF5.0 and extract source-factor profiles in the second step. Finally, locally derived source profiles analogous to the PMF factors were supplied to EV-CMB8.2, along with the indoor receptor PM2.5 chemical profile, to evaluate source contribution estimates (SCEs). The results of the combined use of the three receptor models show that UNMIX and PMF are useful tools for extracting source categories from a small receptor dataset, and that EV-CMB can select those locally derived source profiles for source apportionment that are analogous to the PMF-extracted source categories. The source apportionment results also showed a threefold higher relative contribution of solid fuel burning emissions to indoor PM2.5 compared to measurements reported for normal households with LPG stoves. The previously reported influential source marker species were found to be comparable to those extracted from PMF fingerprint plots. The PMF and CMB SCE results were also qualitatively similar. The performance fit measures of all three receptor models were cross-verified and validated, and support each other, lending confidence to the source apportionment results.
A light hydrocarbon fuel processor producing high-purity hydrogen
NASA Astrophysics Data System (ADS)
Löffler, Daniel G.; Taylor, Kyle; Mason, Dylan
This paper discusses the design process and presents performance data for a dual-fuel (natural gas and LPG) fuel processor for PEM fuel cells delivering between 2 and 8 kW electric power in stationary applications. The fuel processor resulted from a series of design compromises made to address different design constraints. First, the product quality was selected; then, the unit operations needed to achieve that product quality were chosen from the pool of available technologies. Next, the specific equipment needed for each unit operation was selected. Finally, the unit operations were thermally integrated to achieve high thermal efficiency. Early in the design process, it was decided that the fuel processor would deliver high-purity hydrogen. Hydrogen can be separated from other gases by pressure-driven processes based on either selective adsorption or permeation. The pressure requirement made steam reforming (SR) the preferred reforming technology because it does not require compression of combustion air; therefore, steam reforming is more efficient in a high-pressure fuel processor than alternative technologies like autothermal reforming (ATR) or partial oxidation (POX), where the combustion occurs at the pressure of the process stream. A low-temperature pre-reformer reactor is needed upstream of a steam reformer to suppress coke formation; yet, low temperatures facilitate the formation of metal sulfides that deactivate the catalyst. For this reason, a desulfurization unit is needed upstream of the pre-reformer. Hydrogen separation was implemented using a palladium alloy membrane. Packed beds were chosen for the pre-reformer and reformer reactors primarily because of their low cost, relatively simple operation and low maintenance. Commercial, off-the-shelf balance-of-plant (BOP) components (pumps, valves, and heat exchangers) were used to integrate the unit operations. The fuel processor delivers up to 100 slm hydrogen >99.9% pure with <1 ppm CO and <3 ppm CO2.
The thermal efficiency is better than 67% operating at full load. This fuel processor has been integrated with a 5-kW fuel cell producing electricity and hot water.
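The purity spec above (>99.9% H2 with <1 ppm CO, <3 ppm CO2) can be sanity-checked by treating ppm as mole fraction times 10^6; the remaining impurity budget is whatever other species (N2, CH4, H2O, ...) the product may carry. A minimal sketch of that arithmetic:

```python
# Impurity budget implied by a purity spec: total allowed impurities minus
# the individually limited species. Treats ppm as mole fraction x 1e-6.

def impurity_budget(purity_fraction, co_ppm, co2_ppm):
    """ppm of impurities left for species other than CO and CO2."""
    total_impurity_ppm = (1.0 - purity_fraction) * 1e6
    return total_impurity_ppm - co_ppm - co2_ppm

# With 99.9% purity, CO and CO2 together use only 4 ppm of a 1000 ppm budget:
print(round(impurity_budget(0.999, 1.0, 3.0)))  # 996
```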
Metal membrane-type 25-kW methanol fuel processor for fuel-cell hybrid vehicle
NASA Astrophysics Data System (ADS)
Han, Jaesung; Lee, Seok-Min; Chang, Hyuksang
A 25-kW on-board methanol fuel processor has been developed. It consists of a methanol steam reformer, which converts methanol to a hydrogen-rich gas mixture, and two metal membrane modules, which clean up the gas mixture to high-purity hydrogen. It produces hydrogen at rates up to 25 Nm3/h, and the purity of the product hydrogen is over 99.9995% with a CO content of less than 1 ppm. In this fuel processor, the operating conditions of the reformer and the metal membrane modules are nearly the same, so operation is simple and the overall system construction is compact, eliminating extensive temperature control of the intermediate gas streams. The hydrogen recovery in the metal membrane units is maintained at 70-75% by controlling the system pressure, and the remaining 25-30% of the hydrogen is recycled to a catalytic combustion zone to supply heat for the methanol steam-reforming reaction. The thermal efficiency of the fuel processor is about 75% and the inlet air pressure is as low as 4 psi. The fuel processor is currently being integrated with a 25-kW polymer electrolyte membrane fuel cell (PEMFC) stack developed by the Hyundai Motor Company. The stack exhibits the same performance as with pure hydrogen, which shows that maximum power output as well as minimum stack degradation is possible with this fuel processor. This fuel-cell 'engine' is to be installed in a hybrid passenger vehicle for road testing.
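The recovery figure fixes a simple hydrogen balance around the membrane modules: at recovery r, product = r x (H2 reaching the membranes), and the retentate (1 - r share) is what gets recycled to the combustor. A sketch with the abstract's nominal 25 Nm3/h product and 70% recovery:

```python
# Hydrogen split across the metal membrane modules: the product stream is
# recovery x feed, and the retentate H2 is recycled to the burner.

def membrane_split(product_nm3h, recovery):
    feed = product_nm3h / recovery   # H2 entering the membrane modules
    recycle = feed - product_nm3h    # H2 recycled to the catalytic combustor
    return feed, recycle

feed, recycle = membrane_split(25.0, 0.70)  # 25 Nm3/h product, 70% recovery
print(round(feed, 1), round(recycle, 1))    # 35.7 10.7
```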
A natural-gas fuel processor for a residential fuel cell system
NASA Astrophysics Data System (ADS)
Adachi, H.; Ahmed, S.; Lee, S. H. D.; Papadias, D.; Ahluwalia, R. K.; Bendert, J. C.; Kanner, S. A.; Yamazaki, Y.
A system model was used to develop an autothermal reforming fuel processor to meet the targets of 80% efficiency (higher heating value) and start-up energy consumption of less than 500 kJ when operated as part of a 1-kWe natural-gas fueled fuel cell system for cogeneration of heat and power. The key catalytic reactors of the fuel processor - namely the autothermal reformer, a two-stage water gas shift reactor and a preferential oxidation reactor - were configured and tested in a breadboard apparatus. Experimental results demonstrated a reformate containing ∼48% hydrogen (on a dry basis and with pure methane as fuel) and less than 5 ppm CO. The effects of steam-to-carbon and part load operations were explored.
Diesel fuel to dc power: Navy & Marine Corps Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bloomfield, D.P.
1996-12-31
During the past year Analytic Power has tested fuel cell stacks and diesel fuel processors for US Navy and Marine Corps applications. The units are 10 kW demonstration power plants. The USN power plant was built to demonstrate the feasibility of diesel-fueled PEM fuel cell power plants for 250 kW and 2.5 MW shipboard power systems. We designed and tested a ten-cell, 1 kW USMC substack and fuel processor. The complete 10 kW prototype power plant, which has application to both power and hydrogen generation, is now under construction. The USN and USMC fuel cell stacks have been tested on both actual and simulated reformate. Analytic Power has accumulated operating experience with autothermal-reforming-based fuel processors operating on sulfur-bearing diesel fuel, jet fuel, propane and natural gas. We have also completed the design and fabrication of an advanced regenerative ATR for the USMC. One of the significant problems with small fuel processors is heat loss, which limits the ability to operate at the high steam-to-carbon ratios required for coke-free, high-efficiency operation. The new USMC unit specifically addresses these heat transfer issues. The advances in the military programs have been incorporated into Analytic Power's commercial units, which are now under test.
NASA Astrophysics Data System (ADS)
Palo, Daniel R.; Holladay, Jamie D.; Rozmiarek, Robert T.; Guzman-Leong, Consuelo E.; Wang, Yong; Hu, Jianli; Chin, Ya-Huei; Dagle, Robert A.; Baker, Eddie G.
A 15-We portable power system is being developed for the US Army that consists of a hydrogen-generating fuel reformer coupled to a proton-exchange membrane fuel cell. In the first phase of this project, a methanol steam reformer system was developed and demonstrated. The reformer system included a combustor, two vaporizers, and a steam reforming reactor. The device was demonstrated as a thermally independent unit over the range of 14-80 Wt output. Assuming a 14-day mission life and an ultimate 1-kg fuel processor/fuel cell assembly, a base case was chosen to illustrate the expected system performance. Operating at 13 We, the system yielded a fuel processor efficiency of 45% (LHV of H2 out/LHV of fuel in) and an estimated net efficiency of 22% (assuming a fuel cell efficiency of 48%). The resulting energy density of 720 Wh/kg is several times the energy density of the best lithium-ion batteries. Some immediate areas of improvement in thermal management have also been identified, and an integrated fuel processor is under development. The final system will be a hybrid, containing a fuel reformer, a fuel cell, and a rechargeable battery. The battery will provide power for start-up and added capacity for times of peak power demand.
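The net-efficiency estimate above is simply the product of the fuel processor efficiency and the assumed fuel cell efficiency, 45% x 48% ≈ 22%. A one-line check:

```python
# Net system efficiency = (H2 LHV out / fuel LHV in) x fuel cell efficiency,
# using the two figures stated in the abstract.

fuel_processor_eff = 0.45
fuel_cell_eff = 0.48

net_eff = fuel_processor_eff * fuel_cell_eff
print(f"{net_eff:.0%}")  # 22%
```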
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-26
...EPA is amending the requirements under EPA's diesel sulfur program related to the sulfur content of locomotive and marine (LM) diesel fuel produced by transmix processors and pipeline facilities. These amendments will reinstate the ability of locomotive and marine diesel fuel produced from transmix by transmix processors and pipeline operators to meet a maximum 500 parts per million (ppm) sulfur standard outside of the Northeast Mid-Atlantic Area and Alaska and expand this ability to within the Northeast Mid-Atlantic Area provided that: the fuel is used in older technology locomotive and marine engines that do not require 15 ppm sulfur diesel fuel, and the fuel is kept segregated from other fuel. These amendments will provide significant regulatory relief for transmix processors and pipeline operators to allow the petroleum distribution system to function efficiently while continuing to transition the market to virtually all ultra-low sulfur diesel fuel (ULSD, i.e. 15 ppm sulfur diesel fuel) and the environmental benefits it provides.
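The conditional structure of the amendment summarized above can be paraphrased as a simple predicate. The function and argument names are illustrative only, not regulatory text, and the special handling of Alaska is deliberately not modeled here.

```python
def may_use_500ppm_lm_fuel(in_northeast_midatlantic: bool,
                           older_technology_engine: bool,
                           kept_segregated: bool) -> bool:
    """Illustrative paraphrase of the amended 500 ppm LM diesel allowance."""
    if not in_northeast_midatlantic:
        # Outside the Northeast Mid-Atlantic Area (Alaska excepted; not
        # modeled here), the 500 ppm standard applies without conditions.
        return True
    # Within the Area, allowed only for older-technology locomotive/marine
    # engines that do not require 15 ppm fuel, and only if kept segregated.
    return older_technology_engine and kept_segregated
```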
Fuel Processor Development for a Soldier-Portable Fuel Cell System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palo, Daniel R.; Holladay, Jamie D.; Rozmiarek, Robert T.
2002-01-01
Battelle is currently developing a soldier-portable power system for the U.S. Army that will continuously provide 15 W (25 W peak) of base load electric power for weeks or months using a microtechnology-based fuel processor. The fuel processing train consists of a combustor, two vaporizers, and a steam-reforming reactor. This paper describes the concept and experimental progress to date.
Dynamic behavior of gasoline fuel cell electric vehicles
NASA Astrophysics Data System (ADS)
Mitchell, William; Bowers, Brian J.; Garnier, Christophe; Boudjemaa, Fabien
As we begin the 21st century, society is continuing efforts towards finding clean power sources and alternative forms of energy. In the automotive sector, reduction of pollutants and greenhouse gas emissions from the power plant is one of the main objectives of car manufacturers, and innovative technologies are under active consideration to achieve this goal. One technology that has been proposed and vigorously pursued in the past decade is the proton exchange membrane (PEM) fuel cell, an electrochemical device that reacts hydrogen with oxygen to produce water, electricity and heat. Since there is currently no extensive hydrogen infrastructure and no commercially viable hydrogen storage technology for vehicles, there is a continuing debate as to how the hydrogen for these advanced vehicles will be supplied. To circumvent these issues, power systems based on PEM fuel cells can employ an on-board fuel processor that converts conventional fuels such as gasoline into hydrogen for the fuel cell, thereby removing the fuel infrastructure and storage issues. However, for these fuel processor/fuel cell vehicles to be commercially successful, issues such as start time and transient response must be addressed. This paper discusses the role of transient response of the fuel processor power plant and how it relates to battery sizing for a gasoline fuel cell vehicle. In addition, results of fuel processor testing from a current Renault/Nuvera Fuel Cells project are presented to show the progress in transient performance.
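The link between fuel processor transient response and battery sizing can be illustrated with a first-order sketch: during a load step, a rate-limited fuel processor ramps its output while the battery covers the shortfall. The ramp rate and load-step values below are hypothetical, not results from the Renault/Nuvera project.

```python
def battery_energy_kj(step_kw: float, ramp_kw_per_s: float) -> float:
    """Energy the battery supplies while the reformer ramps to cover a load step.

    The shortfall falls linearly from step_kw to zero over the ramp time, so
    the energy is the area of a triangle: 0.5 * step * (step / ramp)."""
    ramp_time_s = step_kw / ramp_kw_per_s
    return 0.5 * step_kw * ramp_time_s

print(battery_energy_kj(20.0, 2.0))  # 20 kW step at 2 kW/s -> 100.0 kJ
```

A faster-responding fuel processor (larger ramp rate) shrinks this buffered energy quadratically, which is why transient performance drives battery size.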
FUEL-FLEXIBLE GASIFICATION-COMBUSTION TECHNOLOGY FOR PRODUCTION OF H2 AND SEQUESTRATION-READY CO2
DOE Office of Scientific and Technical Information (OSTI.GOV)
George Rizeq; Janice West; Arnaldo Frydman
It is expected that in the 21st century the Nation will continue to rely on fossil fuels for electricity, transportation, and chemicals. It will be necessary to improve both the process efficiency and environmental impact performance of fossil fuel utilization. GE Energy and Environmental Research Corporation (GE EER) has developed an innovative fuel-flexible Unmixed Fuel Processor (UFP) technology to produce H{sub 2}, power, and sequestration-ready CO{sub 2} from coal and other solid fuels. The UFP module offers the potential for reduced cost, increased process efficiency relative to conventional gasification and combustion systems, and near-zero pollutant emissions including NO{sub x}. GE EER (prime contractor) was awarded a Vision 21 program contract by U.S. DOE NETL to develop the UFP technology. Work on this Phase I program started on October 1, 2000. The project team includes GE EER, Southern Illinois University at Carbondale (SIU-C), California Energy Commission (CEC), and T. R. Miles, Technical Consultants, Inc. In the UFP technology, coal/opportunity fuels and air are simultaneously converted into separate streams of (1) pure hydrogen that can be utilized in fuel cells, (2) sequestration-ready CO{sub 2}, and (3) high temperature/pressure oxygen-depleted air to produce electricity in a gas turbine. The process produces near-zero emissions and, based on process modeling work, has an estimated process efficiency of 68%, based on electrical and H{sub 2} energy outputs relative to the higher heating value of coal, and an estimated equivalent electrical efficiency of 60%. The Phase I R&D program will determine the operating conditions that maximize separation of CO{sub 2} and pollutants from the vent gas, while simultaneously maximizing coal conversion efficiency and hydrogen production. The program integrates lab-, bench- and pilot-scale studies to demonstrate the UFP technology.
This is the tenth quarterly technical progress report for the Vision 21 UFP program supported by U.S. DOE NETL (Contract No. DE-FC26-00FT40974). This report summarizes program accomplishments for the period starting January 1, 2003 and ending March 31, 2003. The report includes an introduction summarizing the UFP technology, main program tasks, and program objectives; it also provides a summary of program activities and accomplishments covering progress in tasks including lab-scale experimental testing, pilot-scale assembly, and program management.
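The 68% process efficiency quoted above is defined in the abstract as electrical plus H{sub 2} energy output relative to the higher heating value of the coal feed. A minimal sketch of that bookkeeping follows; the stream values are illustrative numbers chosen to reproduce the quoted figure, not data from the report.

```python
def ufp_process_efficiency(electric_out: float, h2_out: float,
                           coal_hhv_in: float) -> float:
    """Electrical plus H2 energy outputs over the coal HHV input (same units)."""
    return (electric_out + h2_out) / coal_hhv_in

# Illustrative streams (MW) chosen to reproduce the quoted 68% figure:
print(ufp_process_efficiency(30.0, 38.0, 100.0))  # -> 0.68
```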
FUEL-FLEXIBLE GASIFICATION-COMBUSTION TECHNOLOGY FOR PRODUCTION OF H2 AND SEQUESTRATION-READY CO2
DOE Office of Scientific and Technical Information (OSTI.GOV)
George Rizeq; Janice West; Arnaldo Frydman
It is expected that in the 21st century the Nation will continue to rely on fossil fuels for electricity, transportation, and chemicals. It will be necessary to improve both the process efficiency and environmental impact performance of fossil fuel utilization. GE Energy and Environmental Research Corporation (GE EER) has developed an innovative fuel-flexible Unmixed Fuel Processor (UFP) technology to produce H{sub 2}, power, and sequestration-ready CO{sub 2} from coal and other solid fuels. The UFP module offers the potential for reduced cost, increased process efficiency relative to conventional gasification and combustion systems, and near-zero pollutant emissions including NO{sub x}. GE EER was awarded a Vision 21 program contract by U.S. DOE NETL to develop the UFP technology. Work on this Phase I program started on October 1, 2000. The project team includes GE EER, California Energy Commission, Southern Illinois University at Carbondale, and T. R. Miles, Technical Consultants, Inc. In the UFP technology, coal/opportunity fuels and air are simultaneously converted into separate streams of (1) pure hydrogen that can be utilized in fuel cells, (2) sequestration-ready CO{sub 2}, and (3) high temperature/pressure oxygen-depleted air to produce electricity in a gas turbine. The process produces near-zero emissions and, based on process modeling work, has an estimated process efficiency of 68%, based on electrical and H{sub 2} energy outputs relative to the higher heating value of coal, and an estimated equivalent electrical efficiency of 60%. The Phase I R&D program will determine the operating conditions that maximize separation of CO{sub 2} and pollutants from the vent gas, while simultaneously maximizing coal conversion efficiency and hydrogen production. The program integrates lab-, bench- and pilot-scale studies to demonstrate the UFP technology. This is the ninth quarterly technical progress report for the Vision 21 UFP program supported by U.S.
DOE NETL (Contract No. DE-FC26-00FT40974). This report summarizes program accomplishments for the period starting October 1, 2002 and ending December 31, 2002. The report includes an introduction summarizing the UFP technology, main program tasks, and program objectives; it also provides a summary of program activities and accomplishments covering progress in tasks including lab- and bench-scale experimental testing, pilot-scale design and assembly, and program management.
FUEL-FLEXIBLE GASIFICATION-COMBUSTION TECHNOLOGY FOR PRODUCTION OF H2 AND SEQUESTRATION-READY CO2
DOE Office of Scientific and Technical Information (OSTI.GOV)
George Rizeq; Janice West; Arnaldo Frydman
It is expected that in the 21st century the Nation will continue to rely on fossil fuels for electricity, transportation, and chemicals. It will be necessary to improve both the process efficiency and environmental impact performance of fossil fuel utilization. GE Global Research (GEGR) has developed an innovative fuel-flexible Unmixed Fuel Processor (UFP) technology to produce H{sub 2}, power, and sequestration-ready CO{sub 2} from coal and other solid fuels. The UFP module offers the potential for reduced cost, increased process efficiency relative to conventional gasification and combustion systems, and near-zero pollutant emissions including NO{sub x}. GEGR (prime contractor) was awarded a Vision 21 program contract by U.S. DOE NETL to develop the UFP technology. Work on this Phase I program started on October 1, 2000. The project team includes GEGR, Southern Illinois University at Carbondale (SIU-C), California Energy Commission (CEC), and T. R. Miles, Technical Consultants, Inc. In the UFP technology, coal/opportunity fuels and air are simultaneously converted into separate streams of (1) pure hydrogen that can be utilized in fuel cells, (2) sequestration-ready CO{sub 2}, and (3) high temperature/pressure oxygen-depleted air to produce electricity in a gas turbine. The process produces near-zero emissions and, based on process modeling with best-case scenario assumptions, has an estimated process efficiency of 68%, based on electrical and H{sub 2} energy outputs relative to the higher heating value of coal, and an estimated equivalent electrical efficiency of 60%. The Phase I R&D program will determine the operating conditions that maximize separation of CO{sub 2} and pollutants from the vent gas, while simultaneously maximizing coal conversion efficiency and hydrogen production. The program integrates lab-, bench- and pilot-scale studies to demonstrate the UFP technology.
This is the eleventh quarterly technical progress report for the Vision 21 UFP program supported by U.S. DOE NETL (Contract No. DE-FC26-00FT40974). This report summarizes program accomplishments for the period starting April 1, 2003 and ending June 30, 2003. The report includes an introduction summarizing the UFP technology, main program tasks, and program objectives; it also provides a summary of program activities and accomplishments covering progress in tasks including lab-scale experimental testing, pilot-scale assembly, and program management.
Simulation of a 250 kW diesel fuel processor/PEM fuel cell system
NASA Astrophysics Data System (ADS)
Amphlett, J. C.; Mann, R. F.; Peppley, B. A.; Roberge, P. R.; Rodrigues, A.; Salvador, J. P.
Polymer-electrolyte membrane (PEM) fuel cell systems offer a potential power source for utility and mobile applications. Practical fuel cell systems use fuel processors for the production of hydrogen-rich gas. Liquid fuels, such as diesel or other related fuels, are attractive options as feeds to a fuel processor. The generation of hydrogen gas for fuel cells, in most cases, becomes the crucial design issue with respect to weight and volume in these applications. Furthermore, these systems will require a gas clean-up system to ensure that the fuel quality meets the demands of the cell anode. The endothermic nature of the reformer will have a significant effect on the overall system efficiency. The gas clean-up system may also significantly affect the overall heat balance. To optimize the performance of this integrated system, therefore, waste heat must be used effectively. Previously, we have concentrated on catalytic methanol-steam reforming. A model of a methanol steam reformer has been previously developed and has been used as the basis for a new, higher temperature model for liquid hydrocarbon fuels. Similarly, our fuel cell evaluation program previously led to the development of a steady-state electrochemical fuel cell model (SSEM). The hydrocarbon fuel processor model and the SSEM have now been incorporated in the development of a process simulation of a 250 kW diesel-fueled reformer/fuel cell system using a process simulator. The performance of this system has been investigated for a variety of operating conditions and a preliminary assessment of thermal integration issues has been carried out. This study demonstrates the application of a process simulation model as a design analysis tool for the development of a 250 kW fuel cell system.
Coupling of a 2.5 kW steam reformer with a 1 kW el PEM fuel cell
NASA Astrophysics Data System (ADS)
Mathiak, J.; Heinzel, A.; Roes, J.; Kalk, Th.; Kraus, H.; Brandt, H.
The University of Duisburg-Essen has developed a compact multi-fuel steam reformer suitable for natural gas, propane and butane. This steam reformer was combined with a polymer electrolyte membrane fuel cell (PEM FC) and a system test of the process chain was performed. The fuel processor comprises a prereformer, a primary reformer, water-gas shift reactors, a steam generator, internal heat exchangers for optimised heat integration, an external burner for heat supply, and a preferential oxidation (PROX) step for CO purification. The fuel processor is designed to deliver a thermal hydrogen power output from 500 W to 2.5 kW. The PEM fuel cell stack provides about 1 kW electrical power. In this paper, experimental results for the individual components, the PEM fuel cell and the fuel processor, as well as for their coupling into a process chain are presented.
Increasing the electric efficiency of a fuel cell system by recirculating the anodic offgas
NASA Astrophysics Data System (ADS)
Heinzel, A.; Roes, J.; Brandt, H.
The University of Duisburg-Essen and the Center for Fuel Cell Technology (ZBT Duisburg GmbH) have developed a compact multi-fuel steam reformer suitable for natural gas, propane and butane. Fuel processor prototypes based on this concept were built in the power range from 2.5 to 12.5 kW thermal hydrogen power for different applications and different industrial partners. The fuel processor concept contains all the necessary elements: a prereformer, a primary reformer, water-gas shift reactors, a steam generator, internal heat exchangers for optimised heat integration, an external burner for heat supply, and a preferential oxidation (PrOx) step for CO purification. One of these fuel processors is designed to deliver a thermal hydrogen power output of 2.5 kW, matched to a PEM fuel cell stack providing about 1 kW electrical power, and achieves a thermal efficiency of about 75% (LHV basis after PrOx), while the CO content of the product gas is below 20 ppm. This steam reformer has been combined with a 1 kW PEM fuel cell. Recirculating the anodic offgas results in a significant efficiency increase for the fuel processor. The gross efficiency of the combined system was already clearly above 30% during the first tests. Further improvements are currently being investigated and developed at the ZBT.
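A rough sketch of how the reported efficiencies combine: the 75% reformer efficiency (LHV basis, after PrOx) is from the abstract, while the 45% stack electrical efficiency used below is an assumed, illustrative value that would place the combined chain above the reported 30%.

```python
def gross_system_efficiency(reformer_eff: float, stack_eff: float) -> float:
    """Gross electrical efficiency of the reformer + PEM stack chain,
    assuming the two stage efficiencies simply multiply."""
    return reformer_eff * stack_eff

# 0.75 reformer (reported) x 0.45 stack (assumed) ~ 0.34, above the 30% quoted.
print(gross_system_efficiency(0.75, 0.45))
```

Anodic offgas recirculation raises the effective reformer efficiency term by returning unconverted hydrogen to the burner, which is how the combined figure improves without touching the stack.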
UNMIX Methods Applied to Characterize Sources of Volatile Organic Compounds in Toronto, Ontario
Porada, Eugeniusz; Szyszkowicz, Mieczysław
2016-01-01
UNMIX, a receptor modeling routine from the U.S. Environmental Protection Agency (EPA), was used to model volatile organic compound (VOC) receptors at four urban sites in Toronto, Ontario. VOC ambient concentration data acquired in 2000–2009 for 175 VOC species at four air quality monitoring stations were analyzed. UNMIX, by performing multiple modeling attempts on varying VOC menus while rejecting results that were not reliable, allowed sources to be discriminated by their most consistent chemical characteristics. The method assessed occurrences of VOCs in sources typical of the urban environment (traffic, evaporative emissions of fuels, banks of fugitive inert gases), in industrial point sources (plastic-, polymer-, and metalworking manufactures), and in secondary sources (releases from water, sediments, and contaminated urban soil). The robust receptor modeling used here produces chemical profiles of putative VOC sources that, if combined with known environmental fates of VOCs, can be used to assign physical sources' shares of VOC emissions into the atmosphere. This in turn provides a means of assessing the impact of environmental policies on one hand, and industrial activities on the other, on VOC air pollution. PMID:29051416
Configuring a fuel cell based residential combined heat and power system
NASA Astrophysics Data System (ADS)
Ahmed, Shabbir; Papadias, Dionissios D.; Ahluwalia, Rajesh K.
2013-11-01
The design and performance of a fuel cell based residential combined heat and power (CHP) system operating on natural gas has been analyzed. The natural gas is first converted to a hydrogen-rich reformate in a steam reformer based fuel processor, and the hydrogen is then electrochemically oxidized in a low temperature polymer electrolyte fuel cell to generate electric power. The heat generated in the fuel cell and the available heat in the exhaust gas is recovered to meet residential needs for hot water and space heating. Two fuel processor configurations have been studied. One of the configurations was explored to quantify the effects of design and operating parameters, which include pressure, temperature, and steam-to-carbon ratio in the fuel processor, and fuel utilization in the fuel cell. The second configuration applied the lessons from the study of the first configuration to increase the CHP efficiency. Results from the two configurations allow a quantitative comparison of the design alternatives. The analyses showed that these systems can operate at electrical efficiencies of ∼46% and combined heat and power efficiencies of ∼90%.
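The two headline efficiencies above imply a split of the fuel energy between electricity and recovered heat. The sketch below assumes the CHP efficiency is the simple sum of the electrical and useful-heat fractions, which is the conventional bookkeeping but is an assumption, not stated in the abstract.

```python
def heat_fraction(chp_eff: float, electrical_eff: float) -> float:
    """Useful-heat fraction, assuming CHP efficiency = electrical + heat."""
    return chp_eff - electrical_eff

# ~46% electrical and ~90% CHP imply roughly 44% of the fuel LHV recovered
# as hot water / space heating.
print(round(heat_fraction(0.90, 0.46), 2))  # -> 0.44
```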
FUEL-FLEXIBLE GASIFICATION-COMBUSTION TECHNOLOGY FOR PRODUCTION OF H2 AND SEQUESTRATION-READY CO2
DOE Office of Scientific and Technical Information (OSTI.GOV)
George Rizeq; Janice West; Arnaldo Frydman
It is expected that in the 21st century the Nation will continue to rely on fossil fuels for electricity, transportation, and chemicals. It will be necessary to improve both the process efficiency and environmental impact performance of fossil fuel utilization. GE Global Research has developed an innovative fuel-flexible Unmixed Fuel Processor (UFP) technology to produce H{sub 2}, power, and sequestration-ready CO{sub 2} from coal and other solid fuels. The UFP module offers the potential for reduced cost, increased process efficiency relative to conventional gasification and combustion systems, and near-zero pollutant emissions including NO{sub x}. GE Global Research (prime contractor) was awarded a contract from U.S. DOE NETL to develop the UFP technology. Work on this Phase I program started on October 1, 2000. The project team includes GE Global Research, Southern Illinois University at Carbondale (SIU-C), California Energy Commission (CEC), and T. R. Miles, Technical Consultants, Inc. In the UFP technology, coal and air are simultaneously converted into separate streams of (1) high-purity hydrogen that can be utilized in fuel cells or turbines, (2) sequestration-ready CO{sub 2}, and (3) high temperature/pressure vitiated air to produce electricity in a gas turbine. The process produces near-zero emissions and, based on ASPEN Plus process modeling, has an estimated process efficiency 6 percentage points higher than IGCC with conventional CO{sub 2} separation. The current R&D program will determine the feasibility of the integrated UFP technology through pilot-scale testing, and will investigate operating conditions that maximize separation of CO{sub 2} and pollutants from the vent gas, while simultaneously maximizing coal conversion efficiency and hydrogen production. The program integrates experimental testing, modeling and economic studies to demonstrate the UFP technology.
This is the fourteenth quarterly technical progress report for the UFP program, which is supported by U.S. DOE NETL (Contract No. DE-FC26-00FT40974) and GE. This report summarizes program accomplishments for the period starting January 1, 2004 and ending March 31, 2004. The report includes an introduction summarizing the UFP technology, main program tasks, and program objectives; it also provides a summary of program activities and accomplishments covering progress in tasks including lab-scale experimental testing, pilot-scale shakedown and performance testing, program management and technology transfer.
FUEL-FLEXIBLE GASIFICATION-COMBUSTION TECHNOLOGY FOR PRODUCTION OF H2 AND SEQUESTRATION-READY CO2
DOE Office of Scientific and Technical Information (OSTI.GOV)
George Rizeq; Janice West; Arnaldo Frydman
It is expected that in the 21st century the Nation will continue to rely on fossil fuels for electricity, transportation, and chemicals. It will be necessary to improve both the process efficiency and environmental impact performance of fossil fuel utilization. GE Global Research (GEGR) has developed an innovative fuel-flexible Unmixed Fuel Processor (UFP) technology to produce H{sub 2}, power, and sequestration-ready CO{sub 2} from coal and other solid fuels. The UFP module offers the potential for reduced cost, increased process efficiency relative to conventional gasification and combustion systems, and near-zero pollutant emissions including NO{sub x}. GEGR (prime contractor) was awarded a contract from U.S. DOE NETL to develop the UFP technology. Work on this Phase I program started on October 1, 2000. The project team includes GEGR, Southern Illinois University at Carbondale (SIU-C), California Energy Commission (CEC), and T. R. Miles, Technical Consultants, Inc. In the UFP technology, coal and air are simultaneously converted into separate streams of (1) high-purity hydrogen that can be utilized in fuel cells or turbines, (2) sequestration-ready CO{sub 2}, and (3) high temperature/pressure vitiated air to produce electricity in a gas turbine. The process produces near-zero emissions and, based on Aspen Plus process modeling, has an estimated process efficiency 6 percentage points higher than IGCC with conventional CO{sub 2} separation. The current R&D program will determine the feasibility of the integrated UFP technology through pilot-scale testing, and will investigate operating conditions that maximize separation of CO{sub 2} and pollutants from the vent gas, while simultaneously maximizing coal conversion efficiency and hydrogen production. The program integrates experimental testing, modeling and economic studies to demonstrate the UFP technology. This is the third annual technical progress report for the UFP program supported by U.S. DOE NETL (Contract No.
DE-FC26-00FT40974). This report summarizes program accomplishments for the period starting October 1, 2002 and ending September 30, 2003. The report includes an introduction summarizing the UFP technology, main program tasks, and program objectives; it also provides a summary of program activities and accomplishments covering progress in tasks including lab-scale experimental testing, bench-scale experimental testing, process modeling, pilot-scale system design and assembly, and program management.
FUEL-FLEXIBLE GASIFICATION-COMBUSTION TECHNOLOGY FOR PRODUCTION OF H2 AND SEQUESTRATION-READY CO2
DOE Office of Scientific and Technical Information (OSTI.GOV)
George Rizeq; Janice West; Arnaldo Frydman
It is expected that in the 21st century the Nation will continue to rely on fossil fuels for electricity, transportation, and chemicals. It will be necessary to improve both the process efficiency and environmental impact performance of fossil fuel utilization. GE Global Research has developed an innovative fuel-flexible Unmixed Fuel Processor (UFP) technology to produce H{sub 2}, power, and sequestration-ready CO{sub 2} from coal and other solid fuels. The UFP module offers the potential for reduced cost, increased process efficiency relative to conventional gasification and combustion systems, and near-zero pollutant emissions including NO{sub x}. GE Global Research (prime contractor) was awarded a contract from U.S. DOE NETL to develop the UFP technology. Work on this Phase I program started on October 1, 2000. The project team includes GE Global Research, Southern Illinois University at Carbondale (SIU-C), California Energy Commission (CEC), and T. R. Miles, Technical Consultants, Inc. In the UFP technology, coal and air are simultaneously converted into separate streams of (1) high-purity hydrogen that can be utilized in fuel cells or turbines, (2) sequestration-ready CO{sub 2}, and (3) high temperature/pressure vitiated air to produce electricity in a gas turbine. The process produces near-zero emissions and, based on ASPEN Plus process modeling, has an estimated process efficiency 6 percentage points higher than IGCC with conventional CO{sub 2} separation. The current R&D program will determine the feasibility of the integrated UFP technology through pilot-scale testing, and will investigate operating conditions that maximize separation of CO{sub 2} and pollutants from the vent gas, while simultaneously maximizing coal conversion efficiency and hydrogen production. The program integrates experimental testing, modeling and economic studies to demonstrate the UFP technology.
This is the thirteenth quarterly technical progress report for the UFP program, which is supported by U.S. DOE NETL under Contract No. DE-FC26-00FT40974. This report summarizes program accomplishments for the period starting October 1, 2003 and ending December 31, 2003. The report includes an introduction summarizing the UFP technology, main program tasks, and program objectives; it also provides a summary of program activities and accomplishments covering progress in tasks including lab-scale experimental testing, pilot-scale assembly, pilot-scale demonstration, and program management and technology transfer.
FUEL-FLEXIBLE GASIFICATION-COMBUSTION TECHNOLOGY FOR PRODUCTION OF H2 AND SEQUESTRATION-READY CO2
DOE Office of Scientific and Technical Information (OSTI.GOV)
George Rizeq; Janice West; Arnaldo Frydman
It is expected that in the 21st century the Nation will continue to rely on fossil fuels for electricity, transportation, and chemicals. It will be necessary to improve both the process efficiency and environmental impact performance of fossil fuel utilization. GE Global Research has developed an innovative fuel-flexible Unmixed Fuel Processor (UFP) technology to produce H{sub 2}, power, and sequestration-ready CO{sub 2} from coal and other solid fuels. The UFP module offers the potential for reduced cost, increased process efficiency relative to conventional gasification and combustion systems, and near-zero pollutant emissions including NO{sub x}. GE Global Research (prime contractor) was awarded a contract from U.S. DOE NETL to develop the UFP technology. Work on this Phase I program started on October 1, 2000. The project team includes GE Global Research, Southern Illinois University at Carbondale (SIU-C), California Energy Commission (CEC), and T. R. Miles, Technical Consultants, Inc. In the UFP technology, coal and air are simultaneously converted into separate streams of (1) high-purity hydrogen that can be utilized in fuel cells or turbines, (2) sequestration-ready CO{sub 2}, and (3) high temperature/pressure vitiated air to produce electricity in a gas turbine. The process produces near-zero emissions and, based on ASPEN Plus process modeling, has an estimated process efficiency 6 percentage points higher than IGCC with conventional CO{sub 2} separation. The current R&D program has determined the feasibility of the integrated UFP technology through pilot-scale testing, and investigated operating conditions that maximize separation of CO{sub 2} and pollutants from the vent gas, while simultaneously maximizing coal conversion efficiency and hydrogen production. The program integrated experimental testing, modeling and economic studies to demonstrate the UFP technology.
This is the fifteenth quarterly technical progress report for the UFP program, which is supported by U.S. DOE NETL (Contract No. DE-FC26-00FT40974) and GE. This report summarizes program accomplishments for the period starting April 1, 2004 and ending June 30, 2004. The report includes an introduction summarizing the UFP technology, main program tasks, and program objectives; it also provides a summary of program activities and accomplishments covering progress in tasks including lab-scale experimental testing, pilot-scale testing, kinetic modeling, program management and technology transfer.
Comparison of receptor models for source apportionment of the PM10 in Zaragoza (Spain).
Callén, M S; de la Cruz, M T; López, J M; Navarro, M V; Mastral, A M
2009-08-01
Receptor models are useful for understanding the chemical and physical characteristics of air pollutants by identifying their sources and by estimating the contribution of each source to receptor concentrations. In this work, three receptor models based on principal component analysis with absolute principal component scores (PCA-APCS), Unmix and positive matrix factorization (PMF) were applied to study for the first time the apportionment of airborne particulate matter less than or equal to 10 μm (PM10) in Zaragoza, Spain, during a 1-year sampling campaign (2003-2004). The PM10 samples were characterized with regard to their concentrations of inorganic components (trace elements and ions) and organic components (polycyclic aromatic hydrocarbons, PAH), not only in the solid phase but also in the gas phase. A comparison of the three receptor models was carried out in order to obtain a more robust characterization of the PM10. The three models predicted that the major sources of PM10 in Zaragoza were related to natural sources (60%, 75% and 47%, respectively, for PCA-APCS, Unmix and PMF), although anthropogenic sources also contributed to PM10 (28%, 25% and 39%). With regard to the anthropogenic sources, while PCA and PMF allowed high discrimination in the identification of sources associated with different combustion processes, such as traffic and industry, fossil fuel, biomass and fuel-oil combustion, heavy traffic and evaporative emissions, the Unmix model only allowed the identification of industry and traffic emissions, evaporative emissions and heavy-duty vehicles. The three models provided good correlations between the experimental and modelled PM10 concentrations, with the closest agreement found between the PMF and PCA models.
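The reported shares for the three models can be tabulated to show how much of the PM10 mass each model leaves unapportioned. The natural and anthropogenic percentages are from the abstract; inferring the unassigned remainder by difference is an assumption, not a model output.

```python
# Reported PM10 source shares per receptor model.
shares = {
    "PCA-APCS": {"natural": 0.60, "anthropogenic": 0.28},
    "Unmix":    {"natural": 0.75, "anthropogenic": 0.25},
    "PMF":      {"natural": 0.47, "anthropogenic": 0.39},
}

for model, s in shares.items():
    # Remainder of the unit mass not attributed to either source class.
    unassigned = 1.0 - sum(s.values())
    print(f"{model}: unassigned ~ {unassigned:.0%}")
```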
Compact gasoline fuel processor for passenger vehicle APU
NASA Astrophysics Data System (ADS)
Severin, Christopher; Pischinger, Stefan; Ogrzewalla, Jürgen
Due to the increasing demand for electrical power in today's passenger vehicles, and with the requirements regarding fuel consumption and environmental sustainability tightening, a fuel cell-based auxiliary power unit (APU) becomes a promising alternative to the conventional generation of electrical energy via internal combustion engine, generator and battery. It is obvious that the on-board stored fuel has to be used for the fuel cell system, thus, gasoline or diesel has to be reformed on board. This makes the auxiliary power unit a complex integrated system of stack, air supply, fuel processor, electrics as well as heat and water management. Aside from proving the technical feasibility of such a system, the development has to address three major barriers: start-up time, costs, and size/weight of the systems. In this paper a packaging concept for an auxiliary power unit is presented. The main emphasis is placed on the fuel processor, as good packaging of this large subsystem has the strongest impact on overall size. The fuel processor system consists of an autothermal reformer in combination with water-gas shift and selective oxidation stages, based on adiabatic reactors with inter-cooling. The configuration was realized in a laboratory set-up and experimentally investigated. The results gained from this confirm a general suitability for mobile applications. A start-up time of 30 min was measured, while a potential reduction to 10 min seems feasible. An overall fuel processor efficiency of about 77% was measured. On the basis of the know-how gained by the experimental investigation of the laboratory set-up a packaging concept was developed. Using state-of-the-art catalyst and heat exchanger technology, the volumes of these components are fixed. However, the overall volume is higher mainly due to mixing zones and flow ducts, which do not contribute to the chemical or thermal function of the system.
Thus, the concept developed mainly focuses on minimizing those component volumes. To this end, the packaging utilizes rectangular catalyst bricks and integrates flow ducts into the heat exchangers. A concept is presented with a 25 l fuel processor volume, including thermal insulation, for a 3 kW el auxiliary power unit. The overall size of the system, i.e. including stack, air supply and auxiliaries, is estimated at 44 l.
Ahluwalia, Rajesh K [Burr Ridge, IL; Ahmed, Shabbir [Naperville, IL; Lee, Sheldon H. D. [Willowbrook, IL
2011-08-02
An improved fuel processor for fuel cells is provided whereby the startup time of the processor is less than sixty seconds and can be as low as 30 seconds, if not less. A rapid startup is achieved by igniting, or allowing to react, a small mixture of air and fuel over the catalyst of an autothermal reformer (ATR), thereby warming it up. The ATR then produces combustible gases that are subsequently oxidized on the water-gas shift zone catalysts, simultaneously warming them up. After normal operating temperature has been achieved, the proportion of air included with the fuel is greatly diminished.
Edge Diffusion Flame Propagation and Stabilization Studied
NASA Technical Reports Server (NTRS)
Takahashi, Fumiaki; Katta, Viswanath R.
2004-01-01
In most practical combustion systems or fires, fuel and air are initially unmixed, thus forming diffusion flames. As a result of flame-surface interactions, the diffusion flame often forms an edge, which may attach to burner walls, spread over condensed fuel surfaces, jump to another location through the fuel-air mixture formed, or extinguish by destabilization (blowoff). Flame holding in combustors is necessary to achieve design performance and safe operation of the system. Fires aboard spacecraft behave differently from those on Earth because of the absence of buoyancy in microgravity. This ongoing in-house flame-stability research at the NASA Glenn Research Center is important in spacecraft fire safety and Earth-bound combustion systems.
Efficiency of a solid polymer fuel cell operating on ethanol
NASA Astrophysics Data System (ADS)
Ioannides, Theophilos; Neophytides, Stylianos
The efficiency of a solid polymer fuel cell (SPFC) system operating on ethanol fuel has been analyzed as a function of operating parameters focusing on vehicle and stationary applications. Two types of ethanol processors — employing either steam reforming or partial oxidation (POX) steps — have been considered and their performance has been investigated by thermodynamic analysis. SPFC operation has been analyzed by an available parametric model. It has been found that dilute ethanol-water mixtures (~55% v/v EtOH) are the most suitable for stationary applications with a steam reformer (SR)-SPFC system. Regarding vehicle applications, pure ethanol (~95% v/v EtOH) appears to be the best fuel with a POX-SPFC system. Efficiencies in the case of an ideal ethanol processor can be of the order of 60% under low load conditions and 30-35% at peak power, while efficiencies with an actual processor are 80-85% of the above values.
Fuel processor and method for generating hydrogen for fuel cells
Ahmed, Shabbir [Naperville, IL; Lee, Sheldon H. D. [Willowbrook, IL; Carter, John David [Bolingbrook, IL; Krumpelt, Michael [Naperville, IL; Myers, Deborah J [Lisle, IL
2009-07-21
A method of producing a H.sub.2 rich gas stream includes supplying an O.sub.2 rich gas, steam, and fuel to an inner reforming zone of a fuel processor that includes a partial oxidation catalyst and a steam reforming catalyst or a combined partial oxidation and steam reforming catalyst. The method also includes contacting the O.sub.2 rich gas, steam, and fuel with the partial oxidation catalyst and the steam reforming catalyst or the combined partial oxidation and steam reforming catalyst in the inner reforming zone to generate a hot reformate stream. The method still further includes cooling the hot reformate stream in a cooling zone to produce a cooled reformate stream. Additionally, the method includes removing sulfur-containing compounds from the cooled reformate stream by contacting the cooled reformate stream with a sulfur removal agent. The method still further includes contacting the cooled reformate stream with a catalyst that converts water and carbon monoxide to carbon dioxide and H.sub.2 in a water-gas-shift zone to produce a final reformate stream in the fuel processor.
MAX UnMix: A web application for unmixing magnetic coercivity distributions
NASA Astrophysics Data System (ADS)
Maxbauer, Daniel P.; Feinberg, Joshua M.; Fox, David L.
2016-10-01
It is common in the fields of rock and environmental magnetism to unmix magnetic mineral components using statistical methods that decompose various types of magnetization curves (e.g., acquisition, demagnetization, or backfield). A number of programs that are frequently used by the rock magnetic community have been developed over the past decade; however, many of these programs are either outdated or have obstacles inhibiting their usability. MAX UnMix is a web application (available online at http://www.irm.umn.edu/maxunmix), built using the shiny package for R studio, that can be used for unmixing coercivity distributions derived from magnetization curves. Here, we describe in detail the statistical model underpinning the MAX UnMix web application and discuss the program's functionality. MAX UnMix is an improvement over previous unmixing programs in that it is designed to be user friendly, runs as an independent website, and is platform independent.
Pigments identification of paintings using subspace distance unmixing algorithm
NASA Astrophysics Data System (ADS)
Li, Bin; Lyu, Shuqiang; Zhang, Dafeng; Dong, Qinghao
2018-04-01
In the digital protection of cultural relics, the identification of pigment mixtures on the surfaces of paintings has been a research focus for many years. In this paper, subspace distance unmixing, a hyperspectral unmixing algorithm, is introduced to solve the problem of recognizing pigment mixtures in paintings. First, mixtures of different pigments are prepared and their reflectance spectra are measured with a spectrometer. The factors affecting the unmixing accuracy of the pigment mixtures are then discussed, and the unmixing results of two cases, with and without the rice paper and its underlay as endmembers, are compared. The experimental results show that the algorithm is able to unmix the pigments effectively, and that the unmixing accuracy improves when the spectra of the rice paper and the underlying material are taken into account.
A diesel fuel processor for fuel-cell-based auxiliary power unit applications
NASA Astrophysics Data System (ADS)
Samsun, Remzi Can; Krekel, Daniel; Pasel, Joachim; Prawitz, Matthias; Peters, Ralf; Stolten, Detlef
2017-07-01
Producing a hydrogen-rich gas from diesel fuel enables the efficient generation of electricity in a fuel-cell-based auxiliary power unit. In recent years, significant progress has been achieved in diesel reforming. One issue encountered is the stable operation of water-gas shift reactors with real reformates. A new fuel processor is developed using a commercial shift catalyst. The system is operated using optimized start-up and shut-down strategies. Experiments with diesel and kerosene fuels show slight performance drops in the shift reactor during continuous operation for 100 h. CO concentrations much lower than the target value are achieved during system operation in auxiliary power unit mode at partial loads of up to 60%. The regeneration leads to full recovery of the shift activity. Finally, a new operation strategy is developed whereby the gas hourly space velocity of the shift stages is re-designed. This strategy is validated using different diesel and kerosene fuels, showing a maximum CO concentration of 1.5% at the fuel processor outlet under extreme conditions, which can be tolerated by a high-temperature PEFC. The proposed operation strategy solves the issue of strong performance drop in the shift reactor and makes this technology available for reducing emissions in the transportation sector.
Hybrid fuel cell/diesel generation total energy system, part 2
NASA Astrophysics Data System (ADS)
Blazek, C. F.
1982-11-01
The use of fuel cells to meet the electrical and thermal requirements of the Goldstone Deep Space Communications Complex (GDSCC) was compared with the existing system. Fuel cell technology selection was based on a 1985 time frame for installation. The most cost-effective fuel feedstock for fuel cell application was identified. Fuels considered included diesel oil, natural gas, methanol and coal. These feedstocks were evaluated not only on the cost and efficiency of the fuel conversion process, but also on the complexity of integrating the fuel processor into system operation and on thermal energy availability. After a review of fuel processor technology, catalytic steam reformer technology was selected based on the ease of integration and the economics of hydrogen production. The phosphoric acid fuel cell was selected for application at the GDSCC due to its commercial readiness for near-term application. Fuel cell systems were analyzed for both natural gas and methanol feedstock. The subsequent economic analysis indicated that a natural gas fueled system was the most cost effective of the cases analyzed.
Hybrid fuel cell/diesel generation total energy system, part 2
NASA Technical Reports Server (NTRS)
Blazek, C. F.
1982-01-01
The use of fuel cells to meet the electrical and thermal requirements of the Goldstone Deep Space Communications Complex (GDSCC) was compared with the existing system. Fuel cell technology selection was based on a 1985 time frame for installation. The most cost-effective fuel feedstock for fuel cell application was identified. Fuels considered included diesel oil, natural gas, methanol and coal. These feedstocks were evaluated not only on the cost and efficiency of the fuel conversion process, but also on the complexity of integrating the fuel processor into system operation and on thermal energy availability. After a review of fuel processor technology, catalytic steam reformer technology was selected based on the ease of integration and the economics of hydrogen production. The phosphoric acid fuel cell was selected for application at the GDSCC due to its commercial readiness for near-term application. Fuel cell systems were analyzed for both natural gas and methanol feedstock. The subsequent economic analysis indicated that a natural gas fueled system was the most cost effective of the cases analyzed.
System and method for controlling a combustor assembly
York, William David; Ziminsky, Willy Steve; Johnson, Thomas Edward; Stevenson, Christian Xavier
2013-03-05
A system and method for controlling a combustor assembly are disclosed. The system includes a combustor assembly. The combustor assembly includes a combustor and a fuel nozzle assembly. The combustor includes a casing. The fuel nozzle assembly is positioned at least partially within the casing and includes a fuel nozzle. The fuel nozzle assembly further defines a head end. The system further includes a viewing device configured for capturing an image of at least a portion of the head end, and a processor communicatively coupled to the viewing device, the processor configured to compare the image to a standard image for the head end.
NASA Astrophysics Data System (ADS)
Son, In-Hyuk; Shin, Woo-Cheol; Lee, Yong-Kul; Lee, Sung-Chul; Ahn, Jin-Gu; Han, Sang-Il; Kweon, Ho-Jin; Kim, Ju-Yong; Kim, Moon-Chan; Park, Jun-Yong
A polymer electrolyte membrane fuel cell (PEMFC) system is developed to power a notebook computer. The system consists of a compact methanol-reforming system with a CO preferential oxidation unit, a 16-cell PEMFC stack, and a control unit for the management of the system with a d.c.-d.c. converter. The compact fuel-processor system (260 cm 3) generates about 1.2 L min -1 of reformate, which corresponds to 35 We, with a low CO concentration (<30 ppm, typically 0 ppm), and is thus shown to be suitable for powering notebook computers.
Automating spectral unmixing of AVIRIS data using convex geometry concepts
NASA Technical Reports Server (NTRS)
Boardman, Joseph W.
1993-01-01
Spectral mixture analysis, or unmixing, has proven to be a useful tool in the semi-quantitative interpretation of AVIRIS data. Using a linear mixing model and a set of hypothesized endmember spectra, unmixing seeks to estimate the fractional abundance patterns of the various materials occurring within the imaged area. However, the validity and accuracy of the unmixing rest heavily on the 'user-supplied' set of endmember spectra. Current methods for endmember determination are the weak link in the unmixing chain.
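The linear mixing model underlying this and the other unmixing abstracts can be illustrated with a minimal sketch: given endmember spectra, per-pixel fractional abundances are recovered by nonnegative least squares. The projected-gradient solver and all spectra below are illustrative assumptions, not taken from any of the cited programs.

```python
import numpy as np

def unmix_pixel(spectrum, endmembers, n_iter=2000, lr=0.01):
    """Estimate nonnegative fractional abundances f such that
    endmembers @ f approximates spectrum (linear mixing model).

    endmembers: (n_bands, n_endmembers) matrix of pure-material spectra.
    A simple projected-gradient sketch, not an optimized NNLS solver.
    """
    E = np.asarray(endmembers, dtype=float)
    y = np.asarray(spectrum, dtype=float)
    f = np.full(E.shape[1], 1.0 / E.shape[1])   # start from equal fractions
    for _ in range(n_iter):
        grad = E.T @ (E @ f - y)                # gradient of 0.5*||E f - y||^2
        f = np.clip(f - lr * grad, 0.0, None)   # project onto f >= 0
    return f / f.sum()                          # normalize to fractions
```

A two-endmember mixture generated as `E @ [0.7, 0.3]` is recovered to three decimal places by this loop; real AVIRIS pixels would of course add noise and model error on top.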
Fiber optic sensors for gas turbine control
NASA Technical Reports Server (NTRS)
Shu, Emily Yixie (Inventor); Petrucco, Louis Jacob (Inventor); Daum, Wolfgang (Inventor)
2005-01-01
An apparatus for detecting flashback occurrences in a premixed combustor system having at least one fuel nozzle includes at least one photodetector and at least one fiber optic element coupled between the at least one photodetector and a test region of the combustor system wherein a respective flame of the fuel nozzle is not present under normal operating conditions. A signal processor monitors a signal of the photodetector. The fiber optic element can include at least one optical fiber positioned within a protective tube. The fiber optic element can include two fiber optic elements coupled to the test region. The optical fiber and the protective tube can have lengths sufficient to situate the photodetector outside of an engine compartment. A plurality of fuel nozzles and a plurality of fiber optic elements can be used with the fiber optic elements being coupled to respective fuel nozzles and either to the photodetector or, wherein a plurality of photodetectors are used, to respective ones of the plurality of photodetectors. The signal processor can include a digital signal processor.
Fiber optic sensors for gas turbine control
NASA Technical Reports Server (NTRS)
Shu, Emily Yixie (Inventor); Brown, Dale Marius (Inventor); Petrucco, Louis Jacob (Inventor); Lovett, Jeffery Allan (Inventor); Daum, Wolfgang (Inventor); Dunki-Jacobs, Robert John (Inventor)
2003-01-01
An apparatus for detecting flashback occurrences in a premixed combustor system having at least one fuel nozzle includes at least one photodetector and at least one fiber optic element coupled between the at least one photodetector and a test region of the combustor system wherein a respective flame of the fuel nozzle is not present under normal operating conditions. A signal processor monitors a signal of the photodetector. The fiber optic element can include at least one optical fiber positioned within a protective tube. The fiber optic element can include two fiber optic elements coupled to the test region. The optical fiber and the protective tube can have lengths sufficient to situate the photodetector outside of an engine compartment. A plurality of fuel nozzles and a plurality of fiber optic elements can be used with the fiber optic elements being coupled to respective fuel nozzles and either to the photodetector or, wherein a plurality of photodetectors are used, to respective ones of the plurality of photodetectors. The signal processor can include a digital signal processor.
Fiber optic sensors for gas turbine control
NASA Technical Reports Server (NTRS)
Shu, Emily Yixie (Inventor); Brown, Dale Marius (Inventor); Petrucco, Louis Jacob (Inventor); Lovett, Jeffery Allan (Inventor); Daum, Wolfgang (Inventor); Dunki-Jacobs, Robert John (Inventor)
1999-01-01
An apparatus for detecting flashback occurrences in a premixed combustor system having at least one fuel nozzle includes at least one photodetector and at least one fiber optic element coupled between the at least one photodetector and a test region of the combustor system wherein a respective flame of the fuel nozzle is not present under normal operating conditions. A signal processor monitors a signal of the photodetector. The fiber optic element can include at least one optical fiber positioned within a protective tube. The fiber optic element can include two fiber optic elements coupled to the test region. The optical fiber and the protective tube can have lengths sufficient to situate the photodetector outside of an engine compartment. A plurality of fuel nozzles and a plurality of fiber optic elements can be used with the fiber optic elements being coupled to respective fuel nozzles and either to the photodetector or, wherein a plurality of photodetectors are used, to respective ones of the plurality of photodetectors. The signal processor can include a digital signal processor.
NASA Astrophysics Data System (ADS)
Karstedt, Jörg; Ogrzewalla, Jürgen; Severin, Christopher; Pischinger, Stefan
In this work, the concept development, system layout, component simulation and the overall DOE system optimization of a HT-PEM fuel cell APU with a net electric power output of 4.5 kW and an onboard methane fuel processor are presented. A highly integrated system layout has been developed that enables fast startup within 7.5 min, a closed system water balance and high fuel processor efficiencies of up to 85% due to the recuperation of the anode offgas burner heat. The integration of the system battery into the load management enhances the transient electric performance and the maximum electric power output of the APU system. Simulation models of the carbon monoxide influence on HT-PEM cell voltage, the concentration and temperature profiles within the autothermal reformer (ATR) and the CO conversion rates within the water-gas shift stages (WGS) have been developed. They enable the optimization of the CO concentration in the anode gas of the fuel cell in order to achieve maximum system efficiencies and an optimized dimensioning of the ATR and WGS reactors. Furthermore, a DOE optimization of the global system parameters cathode stoichiometry, anode stoichiometry, air/fuel ratio and steam/carbon ratio of the fuel processing system has been performed in order to achieve maximum system efficiencies for all system operating points under given boundary conditions.
MEMS-based fuel cells with integrated catalytic fuel processor and method thereof
Jankowski, Alan F [Livermore, CA; Morse, Jeffrey D [Martinez, CA; Upadhye, Ravindra S [Pleasanton, CA; Havstad, Mark A [Davis, CA
2011-08-09
Described herein is a means to incorporate catalytic materials into the fuel flow field structures of MEMS-based fuel cells, which enable catalytic reforming of a hydrocarbon based fuel, such as methane, methanol, or butane. Methods of fabrication are also disclosed.
Unmixing AVHRR Imagery to Assess Clearcuts and Forest Regrowth in Oregon
NASA Technical Reports Server (NTRS)
Hlavka, Christine A.; Spanner, Michael A.
1995-01-01
Advanced Very High Resolution Radiometer imagery provides frequent and low-cost coverage of the earth, but its coarse spatial resolution (approx. 1.1 km by 1.1 km) does not lend itself to standard techniques of automated categorization of land cover classes because the pixels are generally mixed; that is, the extent of the pixel includes several land use/cover classes. Unmixing procedures were developed to extract land use/cover class signatures from mixed pixels, using Landsat Thematic Mapper data as a source for the training set, and to estimate fractions of class coverage within pixels. Application of these unmixing procedures to mapping forest clearcuts and regrowth in Oregon indicated that unmixing is a promising approach for mapping major trends in land cover with AVHRR bands 1 and 2. Including thermal bands by unmixing AVHRR bands 1-4 did not lead to significant improvements in accuracy, but experiments with unmixing these four bands did indicate that use of weighted least squares techniques might lead to improvements in other applications of unmixing.
Integral Fast Reactor fuel pin processor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levinskas, D.
1993-01-01
This report discusses the pin processor which receives metal alloy pins cast from recycled Integral Fast Reactor (IFR) fuel and prepares them for assembly into new IFR fuel elements. Either full length as-cast or precut pins are fed to the machine from a magazine, cut if necessary, and measured for length, weight, diameter and deviation from straightness. Accepted pins are loaded into cladding jackets located in a magazine, while rejects and cutting scraps are separated into trays. The magazines, trays, and the individual modules that perform the different machine functions are assembled and removed using remote manipulators and master-slaves.
Integral Fast Reactor fuel pin processor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levinskas, D.
1993-03-01
This report discusses the pin processor which receives metal alloy pins cast from recycled Integral Fast Reactor (IFR) fuel and prepares them for assembly into new IFR fuel elements. Either full length as-cast or precut pins are fed to the machine from a magazine, cut if necessary, and measured for length, weight, diameter and deviation from straightness. Accepted pins are loaded into cladding jackets located in a magazine, while rejects and cutting scraps are separated into trays. The magazines, trays, and the individual modules that perform the different machine functions are assembled and removed using remote manipulators and master-slaves.
NASA Astrophysics Data System (ADS)
Varady, M. J.; McLeod, L.; Meacham, J. M.; Degertekin, F. L.; Fedorov, A. G.
2007-09-01
Portable fuel cells are an enabling technology for high efficiency and ultra-high density distributed power generation, which is essential for many terrestrial and aerospace applications. A key element of fuel cell power sources is the fuel processor, which should have the capability to efficiently reform liquid fuels and produce high purity hydrogen that is consumed by the fuel cells. To this end, we are reporting on the development of two novel MEMS hydrogen generators with improved functionality achieved through an innovative process organization and system integration approach that exploits the advantages of transport and catalysis on the micro/nano scale. One fuel processor design utilizes transient, reverse-flow operation of an autothermal MEMS microreactor with an intimately integrated, micromachined ultrasonic fuel atomizer and a Pd/Ag membrane for in situ hydrogen separation from the product stream. The other design features a simpler, more compact planar structure with the atomized fuel ejected directly onto the catalyst layer, which is coupled to an integrated hydrogen selective membrane.
75 FR 68177 - Airworthiness Directives; The Boeing Company Model 757 and 767 Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-05
... and FUEL CONFIG discrete signals from the fuel quantity processor unit, and alerts the flightcrew of a... the FUEL CONFIG discrete signal, which disables both the FUEL CONFIG and LOW FUEL messages. Such... depleted below the minimum of 2,200 pounds. The EICAS receives both the LOW FUEL and FUEL CONFIG discrete...
Messiah College Biodiesel Fuel Generation Project Final Technical Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zummo, Michael M; Munson, J; Derr, A
Many obvious and significant concerns arise when considering the concept of small-scale biodiesel production. Does the fuel produced meet the stringent requirements set by the commercial biodiesel industry? Is the process safe? How are small-scale producers collecting and transporting waste vegetable oil? How is waste from the biodiesel production process handled by small-scale producers? These concerns and many others were the focus of the research performed in the Messiah College Biodiesel Fuel Generation project over the last three years. This project was a unique research program in which undergraduate engineering students at Messiah College set out to research the feasibility of small-scale biodiesel production for application on a campus of approximately 3000 students. This Department of Energy (DOE) funded research program developed out of almost a decade of small-scale biodiesel research and development work performed by students at Messiah College. Over the course of the last three years the research team focused on four key areas related to small-scale biodiesel production: Quality Testing and Assurance, Process and Processor Research, Process and Processor Development, and Community Education. The objectives for the Messiah College Biodiesel Fuel Generation Project included the following: 1. Preparing a laboratory facility for the development and optimization of processors and processes, ASTM quality assurance, and performance testing of biodiesel fuels. 2. Developing scalable processor and process designs suitable for ASTM certifiable small-scale biodiesel production, with the goals of cost reduction and increased quality. 3. Conducting research into biodiesel process improvement and cost optimization using various biodiesel feedstocks and production ingredients.
Combustor air flow control method for fuel cell apparatus
Clingerman, Bruce J.; Mowery, Kenneth D.; Ripley, Eugene V.
2001-01-01
A method for controlling the heat output of a combustor in a fuel cell apparatus to a fuel processor, where the combustor has dual air inlet streams comprising atmospheric air and fuel cell cathode effluent containing oxygen-depleted air. In all operating modes, an enthalpy balance is maintained by regulating the air flow to the combustor to support fuel processor heat requirements. A control provides a rapid feed-forward change in the air valve orifice cross-section in response to a calculated predetermined air flow, the molar constituents of the air stream to the combustor, the pressure drop across the air valve, and a lookup table of orifice cross-sectional area versus valve steps. A feedback loop fine-tunes any error between the measured air flow to the combustor and the predetermined air flow.
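The control structure this patent abstract describes, a feed-forward valve position derived from a lookup table plus a feedback trim on the measured-flow error, can be sketched as follows. The orifice relation, table values, and gains below are illustrative assumptions, not taken from the patent.

```python
import bisect

# Hypothetical lookup table: (orifice area, valve steps) pairs.
AREA_TABLE = [(0.0, 0.0), (1.0, 10.0), (2.0, 20.0), (3.0, 30.0)]

def steps_for_area(area):
    """Linearly interpolate valve steps from the area lookup table."""
    areas = [a for a, _ in AREA_TABLE]
    i = min(bisect.bisect_left(areas, area), len(areas) - 1)
    if i == 0:
        return AREA_TABLE[0][1]
    (a0, s0), (a1, s1) = AREA_TABLE[i - 1], AREA_TABLE[i]
    return s0 + (s1 - s0) * (area - a0) / (a1 - a0)

def control_step(target_flow, measured_flow, dp, k_orifice=1.0,
                 trim=0.0, gain=0.2):
    """One control update: feed-forward area from a simple orifice
    relation (flow = k * area * sqrt(dp)), then an integral feedback
    trim on the flow error. Returns (valve steps, updated trim)."""
    ff_area = target_flow / (k_orifice * dp ** 0.5)  # feed-forward term
    trim += gain * (target_flow - measured_flow)     # feedback fine-tuning
    return steps_for_area(ff_area + trim), trim
```

With zero flow error the feedback term is inactive and the valve position comes entirely from the feed-forward table lookup, mirroring the "quick change, then fine-tune" split in the abstract.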
40 CFR 279.72 - On-specification used oil fuel.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 27 2014-07-01 2014-07-01 false On-specification used oil fuel. 279.72... (CONTINUED) STANDARDS FOR THE MANAGEMENT OF USED OIL Standards for Used Oil Fuel Marketers § 279.72 On-specification used oil fuel. (a) Analysis of used oil fuel. A generator, transporter, processor/re-refiner, or...
Solid Oxide Fuel Cells Operating on Alternative and Renewable Fuels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Xiaoxing; Quan, Wenying; Xiao, Jing
2014-09-30
This DOE project at the Pennsylvania State University (Penn State) initially involved Siemens Energy, Inc. to (1) develop new fuel processing approaches for using selected alternative and renewable fuels – anaerobic digester gas (ADG) and commercial diesel fuel (with 15 ppm sulfur) – in solid oxide fuel cell (SOFC) power generation systems; and (2) conduct integrated fuel processor – SOFC system tests to evaluate the performance of the fuel processors and overall systems. Siemens Energy Inc. was to provide an SOFC system to Penn State for testing. The Siemens work was carried out at Siemens Energy Inc. in Pittsburgh, PA. The unexpected restructuring in the Siemens organization, however, led to the elimination of the Siemens Stationary Fuel Cell Division within the company. Unfortunately, this led to the Siemens subcontract with Penn State ending on September 23rd, 2010. The SOFC system was never delivered to Penn State. With the assistance of the NETL project manager, the Penn State team has since developed a collaborative research effort with Delphi as the new subcontractor, and this work involved the testing of a stack of planar solid oxide fuel cells from Delphi.
Analysis of the energy efficiency of an integrated ethanol processor for PEM fuel cell systems
NASA Astrophysics Data System (ADS)
Francesconi, Javier A.; Mussati, Miguel C.; Mato, Roberto O.; Aguirre, Pio A.
The aim of this work is to investigate the energy integration and to determine the maximum efficiency of an ethanol processor for hydrogen production and fuel cell operation. Ethanol, which can be produced from renewable feedstocks or agriculture residues, is an attractive option as feed to a fuel processor. The fuel processor investigated is based on steam reforming, followed by high- and low-temperature shift reactors and preferential oxidation, which are coupled to a polymeric fuel cell. Applying simulation techniques and using thermodynamic models, the performance of the complete system has been evaluated for a variety of operating conditions and possible reforming reaction pathways. These models involve mass and energy balances, chemical equilibrium and feasible heat transfer conditions (ΔT_min). The main operating variables were determined for those conditions. The endothermic nature of the reformer has a significant effect on the overall system efficiency. The highest energy consumption is demanded by the reforming reactor, the evaporator and re-heater operations. To obtain an efficient integration, the heat exchanged between the reformer outgoing streams of higher thermal level (reforming and combustion gases) and the feed stream should be maximized. Another process variable that affects the process efficiency is the water-to-fuel ratio fed to the reformer. Large amounts of water involve large heat exchangers and the associated heat losses. A net electric efficiency around 35% was calculated based on the ethanol HHV. The remaining 65% is accounted for by dissipation as heat in the PEMFC cooling system (38%), energy in the flue gases (10%), and irreversibilities in the compression and expansion of gases.
In addition, it has been possible to determine the self-sufficient limit conditions, and to analyze the effect on the net efficiency of the input temperatures of the clean-up system reactors, combustion preheating, expander unit and crude ethanol as fuel.
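The energy bookkeeping in the abstract above can be checked in a few lines: the shares of the ethanol HHV must sum to 100%, so the irreversibility share follows as the balance of the figures quoted.

```python
# Energy balance for the ethanol-processor/PEMFC system, in % of ethanol HHV.
# The three quoted shares are from the abstract; the irreversibility share
# is inferred here as the remaining balance.
net_electric = 35.0      # net electric efficiency
stack_cooling = 38.0     # heat dissipated in the PEMFC cooling system
flue_gases = 10.0        # energy leaving with the flue gases
irreversibilities = 100.0 - (net_electric + stack_cooling + flue_gases)
print(f"irreversibilities: {irreversibilities:.0f}% of HHV")  # prints 17%
```

The balance of 17% is consistent with the abstract's statement that compression/expansion irreversibilities absorb what heat dissipation and flue losses do not.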
On-board diesel autothermal reforming for PEM fuel cells: Simulation and optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cozzolino, Raffaello, E-mail: raffaello.cozzolino@unicusano.it; Tribioli, Laura
2015-03-10
Alternative power sources are nowadays the only option to provide a quick response to the current regulations on automotive pollutant emissions. The hydrogen fuel cell is one promising solution, but the nature of the gas is such that the in-vehicle conversion of other fuels into hydrogen is necessary. In this paper, autothermal reforming for on-board conversion of diesel into a hydrogen-rich gas suitable for PEM fuel cells has been investigated using the simulation tool Aspen Plus. A steady-state model has been developed to analyze the fuel processor and the overall system performance. The components of the fuel processor are: the fuel reforming reactor, two water gas shift reactors, a preferential oxidation reactor and an H{sub 2} separation unit. The influence of various operating parameters, such as oxygen-to-carbon ratio, steam-to-carbon ratio, and temperature, on the process components has been analyzed in depth and results are presented.
Spectral Unmixing Analysis of Time Series Landsat 8 Images
NASA Astrophysics Data System (ADS)
Zhuo, R.; Xu, L.; Peng, J.; Chen, Y.
2018-05-01
Temporal analysis of Landsat 8 images opens up new opportunities in the unmixing procedure. Although spectral analysis of time series Landsat imagery has its own advantages, it has rarely been studied. Nevertheless, using temporal information can provide improved unmixing performance compared to independent image analyses. Moreover, different land cover types may exhibit different temporal patterns, which can aid their discrimination. Therefore, this letter presents time series K-P-Means, a new solution to the problem of unmixing time series Landsat imagery. The proposed approach obtains "purified" pixels in order to achieve optimal unmixing performance. Vertex component analysis (VCA) is used for endmember initialization. First, nonnegative least squares (NNLS) is used to estimate abundance maps from the endmembers. Then, each endmember is re-estimated as the mean of the "purified" pixels, i.e., the residuals of the mixed pixels after excluding the contributions of all nondominant endmembers. Assembling the two main steps (abundance estimation and endmember update) into an iterative optimization framework yields the complete algorithm. Experiments using both simulated and real Landsat 8 images show that the proposed "joint unmixing" approach provides more accurate endmember and abundance estimates than the "separate unmixing" approach.
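The two-step loop this abstract describes, abundance estimation followed by endmember update from "purified" pixels, can be sketched as below. This is a simplified stand-in for illustration only: VCA initialization is replaced by user-supplied starting endmembers, the NNLS solver by clipped least squares, and the normalization by the dominant abundance is an added assumption.

```python
import numpy as np

def iterative_unmix(X, E0, n_iter=10):
    """Alternate abundance estimation and endmember refinement in the
    spirit of the K-P-Means scheme sketched in the abstract.

    X:  (n_pixels, n_bands) mixed spectra.
    E0: (n_endmembers, n_bands) initial endmember estimates.
    """
    E = E0.astype(float).copy()
    A = None
    for _ in range(n_iter):
        # Abundance step: per-pixel least squares, clipped to nonnegative.
        sol, *_ = np.linalg.lstsq(E.T, X.T, rcond=None)
        A = np.clip(sol.T, 0.0, None)            # (n_pixels, n_endmembers)
        # Endmember step: "purify" each pixel by subtracting the modeled
        # contribution of all nondominant endmembers, then average.
        dominant = A.argmax(axis=1)
        for k in range(E.shape[0]):
            mask = dominant == k
            if not mask.any():
                continue                          # no pixel dominated by k
            others = A[mask].copy()
            others[:, k] = 0.0                    # drop endmember k itself
            w = np.maximum(A[mask, k:k + 1], 1e-9)
            purified = (X[mask] - others @ E) / w # residual, rescaled
            E[k] = purified.mean(axis=0)
    return E, A
```

On noise-free synthetic data generated from known endmembers, a run started at the true endmembers is a fixed point: the abundances come back exactly and the endmembers are unchanged, which is a quick sanity check of the update rule.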
The underlying philosophy of Unmix is to let the data speak for themselves. Unmix seeks to solve the general mixture problem, in which the data are assumed to be a linear combination of an unknown number of sources of unknown composition, each contributing an unknown amount to each sample...
40 CFR 279.72 - On-specification used oil fuel.
Code of Federal Regulations, 2012 CFR
2012-07-01
... of § 279.11 by performing analyses or obtaining copies of analyses or other information documenting...-specification used oil fuel. (a) Analysis of used oil fuel. A generator, transporter, processor/re-refiner, or... meets the specifications for used oil fuel under § 279.11, must keep copies of analyses of the used oil...
Fuel processing for PEM fuel cells: transport and kinetic issues of system design
NASA Astrophysics Data System (ADS)
Zalc, J. M.; Löffler, D. G.
In light of the distribution and storage issues associated with hydrogen, efficient on-board fuel processing will be a significant factor in the implementation of PEM fuel cells for automotive applications. Here, we apply basic chemical engineering principles to gain insight into the factors that limit performance in each component of a fuel processor. A system consisting of a plate reactor steam reformer, water-gas shift unit, and preferential oxidation reactor is used as a case study. It is found that for a steam reformer based on catalyst-coated foils, mass transfer from the bulk gas to the catalyst surface is the limiting process. The water-gas shift reactor is expected to be the largest component of the fuel processor and is limited by intrinsic catalyst activity, while a successful preferential oxidation unit depends on strict temperature control in order to minimize parasitic hydrogen oxidation. This stepwise approach of sequentially eliminating rate-limiting processes can be used to identify possible means of performance enhancement in a broad range of applications.
Ahmed, Shabbir; Papadias, Dionissios D.; Lee, Sheldon H.D.; Ahluwalia, Rajesh K.
2014-08-26
The invention provides a method for reforming fuel, the method comprising contacting the fuel to an oxidation catalyst so as to partially oxidize the fuel and generate heat; warming incoming fuel with the heat while simultaneously warming a reforming catalyst with the heat; and reacting the partially oxidized fuel with steam using the reforming catalyst.
Hydrogen Generation Via Fuel Reforming
NASA Astrophysics Data System (ADS)
Krebs, John F.
2003-07-01
Reforming is the conversion of a hydrocarbon based fuel to a gas mixture that contains hydrogen. The H2 that is produced by reforming can then be used to produce electricity via fuel cells. The realization of H2-based power generation, via reforming, is facilitated by the existence of the liquid fuel and natural gas distribution infrastructures. Coupling these same infrastructures with more portable reforming technology facilitates the realization of fuel cell powered vehicles. The reformer is the first component in a fuel processor. Contaminants in the H2-enriched product stream, such as carbon monoxide (CO) and hydrogen sulfide (H2S), can significantly degrade the performance of current polymer electrolyte membrane fuel cells (PEMFC's). Removal of such contaminants requires extensive processing of the H2-rich product stream prior to utilization by the fuel cell to generate electricity. The remaining components of the fuel processor remove the contaminants in the H2 product stream. For transportation applications the entire fuel processing system must be as small and lightweight as possible to achieve desirable performance requirements. Current efforts at Argonne National Laboratory are focused on catalyst development and reactor engineering of the autothermal processing train for transportation applications.
Distributed Unmixing of Hyperspectral Data with Sparsity Constraint
NASA Astrophysics Data System (ADS)
Khoshsokhan, S.; Rajabi, R.; Zayyani, H.
2017-09-01
Spectral unmixing (SU) is a data processing problem in hyperspectral remote sensing. The significant challenge in SU is how to identify endmembers and their weights accurately. For estimation of the signature and fractional abundance matrices in a blind problem, nonnegative matrix factorization (NMF) and its extensions are widely used. One constraint added to NMF is a sparsity constraint, regularized by the L1/2 norm. In this paper, a new algorithm based on distributed optimization is used for spectral unmixing. In the proposed algorithm, a network of single-node clusters is employed, with each pixel in the hyperspectral image considered a node in this network. The distributed unmixing with sparsity constraint is optimized with a diffusion LMS strategy, and the update equations for the fractional abundance and signature matrices are then obtained. Simulation results based on defined performance metrics illustrate the advantage of the proposed algorithm in spectral unmixing of hyperspectral data compared with other methods. The results show that the AAD and SAD of the proposed approach are improved by about 6 and 27 percent, respectively, relative to distributed unmixing without the sparsity constraint at SNR = 25 dB.
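The L1/2-regularized NMF that this work builds on can be sketched with standard multiplicative updates (in the style of Qian et al.'s L1/2-NMF). This toy version is centralized rather than the paper's diffusion-LMS distributed scheme, and the function name and parameter values are assumptions for illustration.

```python
import numpy as np

def l_half_nmf(V, k, lam=1e-3, n_iter=200, seed=0):
    """Unmix V (bands x pixels) as V ~ A S with nonnegative endmember
    signatures A and abundances S, adding an L1/2 sparsity penalty
    on S via a multiplicative update."""
    rng = np.random.default_rng(seed)
    bands, npix = V.shape
    A = rng.random((bands, k))          # endmember signatures
    S = rng.random((k, npix))           # fractional abundances
    eps = 1e-9
    for _ in range(n_iter):
        A *= (V @ S.T) / (A @ S @ S.T + eps)
        S = np.maximum(S, 1e-12)        # guard before S ** -0.5
        S *= (A.T @ V) / (A.T @ A @ S + 0.5 * lam * S ** -0.5 + eps)
    return A, S
```

The `0.5 * lam * S ** -0.5` term in the denominator is the gradient of the L1/2 penalty, which pushes small abundances toward zero.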
Comparison of the CENTRM resonance processor to the NITAWL resonance processor in SCALE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hollenbach, D.F.; Petrie, L.M.
1998-01-01
This report compares the NITAWL and CENTRM resonance processors in the SCALE code system. The cases examined consist of the International OECD/NEA Criticality Working Group Benchmark 20 problem. These cases represent fuel pellets partially dissolved in a borated solution. The assumptions inherent to the Nordheim Integral Treatment, used in NITAWL, are not valid for these problems. CENTRM resolves this limitation by explicitly calculating a problem-dependent point flux from point cross sections, which is then used to create group cross sections.
Mapping target signatures via partial unmixing of AVIRIS data
NASA Technical Reports Server (NTRS)
Boardman, Joseph W.; Kruse, Fred A.; Green, Robert O.
1995-01-01
A complete spectral unmixing of a complicated AVIRIS scene may not always be possible or even desired. High quality data of spectrally complex areas are very high dimensional and are consequently difficult to fully unravel. Partial unmixing provides a method of solving only that fraction of the data inversion problem that directly relates to the specific goals of the investigation. Many applications of imaging spectrometry can be cast in the form of the following question: 'Are my target signatures present in the scene, and if so, how much of each target material is present in each pixel?' This is a partial unmixing problem. The number of unmixing endmembers is one greater than the number of spectrally defined target materials. The one additional endmember can be thought of as the composite of all the other scene materials, or 'everything else'. Several workers have proposed partial unmixing schemes for imaging spectrometry data, but each has significant limitations for operational application. The low probability detection methods described by Farrand and Harsanyi and the foreground-background method of Smith et al. are both examples of such partial unmixing strategies. The new method presented here builds on these innovative analysis concepts, combining their different positive attributes while attempting to circumvent their limitations. This new method partially unmixes AVIRIS data, mapping apparent target abundances, in the presence of an arbitrary and unknown spectrally mixed background. It permits the target materials to be present in abundances that drive significant portions of the scene covariance. Furthermore, it does not require a priori knowledge of the background material spectral signatures. The challenge is to find the proper projection of the data that hides the background variance while simultaneously maximizing the variance amongst the targets.
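A simple detector in this partial-unmixing spirit is constrained energy minimization (CEM), which finds a projection giving unit response to the target while minimizing the average output energy over the scene, thereby suppressing the background without knowing its composition. CEM is a related classical technique (close to the low-probability-detection methods cited above), not the new method this abstract describes; the function name is illustrative.

```python
import numpy as np

def cem_scores(Y, t):
    """Constrained energy minimization. Y: (pixels, bands) scene,
    t: (bands,) target signature. The filter w satisfies w @ t == 1
    while minimizing the mean squared response over the whole scene,
    so the unknown mixed background is suppressed."""
    R = (Y.T @ Y) / Y.shape[0]       # sample correlation matrix
    Rinv = np.linalg.pinv(R)
    w = Rinv @ t / (t @ Rinv @ t)
    return Y @ w                     # abundance-like score per pixel
```

A pixel consisting purely of the target signature scores exactly 1, while background-dominated pixels are driven toward 0.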
NASA Astrophysics Data System (ADS)
Nehter, Pedro; Hansen, John Bøgild; Larsen, Peter Koch
Ultra-low sulphur diesel (ULSD) is the preferred fuel for mobile auxiliary power units (APU). The commercially available technologies in the kW range are combustion-engine-based gensets, achieving system efficiencies of about 20%. Solid oxide fuel cells (SOFC) promise improvements with respect to efficiency and emissions, particularly in the low power range. Fuel processing methods, i.e., catalytic partial oxidation, autothermal reforming and steam reforming, have been demonstrated to operate on diesel with various sulphur contents. The choice of fuel processing method strongly affects the SOFC system's efficiency and power density. This paper investigates the impact of fuel processing methods on the economic potential of SOFC APUs, taking variable and capital costs into account. Autonomous concepts without any external water supply are compared with anode recycle configurations. The cost of electricity is very sensitive to the choice of the O/C ratio and the temperature conditions of the fuel processor. A sensitivity analysis is applied to identify the most cost-effective concept for different economic boundary conditions. The favoured concepts are discussed with respect to the technical challenges and requirements of operating in the presence of sulphur.
MAX UnMix: Introducing a new web application for unmixing magnetic coercivity distributions
NASA Astrophysics Data System (ADS)
Feinberg, J. M.; Maxbauer, D.; Fox, D. L.
2016-12-01
Magnetic minerals are present in a wide variety of natural systems and are often indicative of the natural or anthropogenic processes that led to their deposition, formation, or transformation. Unmixing the contributions of magnetic components to bulk field-dependent magnetization curves has become increasingly common in environmental and rock magnetic studies and has enhanced our ability to fingerprint the magnetic signatures of magnetic minerals with distinct compositions, grain sizes, and origins. A variety of programs have been developed over the past two decades to allow researchers to deconvolve field-dependent magnetization curves for these purposes; however, many of these programs are either outdated or have obstacles that inhibit their usability. MAX UnMix is a new web application (available online at http://www.irm.umn.edu/maxunmix), built using the `shiny' package for RStudio, that can be used to process coercivity distributions derived from magnetization curves (acquisition, demagnetization, or backfield data) via an online user interface. Here, we use example datasets from lake sediments and paleosols to present details of the MAX UnMix model and the program's functionality. MAX UnMix is designed to be accessible and user friendly, and should serve as a useful resource for future research.
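Coercivity-spectrum unmixing of the kind MAX UnMix performs can be illustrated by fitting a sum of components to a magnetization-derivative curve. The sketch below uses plain Gaussians on a log-field axis rather than the skewed distributions MAX UnMix actually fits, and all names and numeric values are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_component_model(logB, a1, m1, s1, a2, m2, s2):
    """Coercivity spectrum modeled as two Gaussian components on a
    log10(field) axis (MAX UnMix itself fits skewed distributions)."""
    g = lambda a, m, s: a * np.exp(-0.5 * ((logB - m) / s) ** 2)
    return g(a1, m1, s1) + g(a2, m2, s2)

# Hypothetical usage: recover two components from a synthetic spectrum.
logB = np.linspace(0.0, 3.0, 200)             # roughly 1 mT to 1 T
truth = (1.0, 1.2, 0.25, 0.6, 2.1, 0.30)      # (amplitude, mean, width) x 2
spectrum = two_component_model(logB, *truth)
popt, _ = curve_fit(two_component_model, logB, spectrum,
                    p0=(0.8, 1.0, 0.3, 0.5, 2.0, 0.3))
```

Each fitted component's mean and width then characterize one magnetic mineral population's coercivity.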
Fuel Cell Power Plant Initiative. Volume 2; Preliminary Design of a Fixed-Base LFP/SOFC Power System
NASA Technical Reports Server (NTRS)
Veyo, S.E.
1997-01-01
This report documents the preliminary design for a military fixed-base power system of 3 MWe nominal capacity using Westinghouse's tubular Solid Oxide Fuel Cell [SOFC] and Haldor Topsoe's logistic fuels processor [LFP]. The LFP provides to the fuel cell a methane-rich, sulfur-free fuel stream derived from either DF-2 diesel fuel or JP-8 turbine fuel. Fuel cells are electrochemical devices that directly convert the chemical energy contained in fuels such as hydrogen, natural gas, or coal gas into electricity at high efficiency with no intermediate heat engine or dynamo. The SOFC is distinguished from other fuel cell types by its solid-state ceramic structure and its high operating temperature, nominally 1000 °C. The SOFC pioneered by Westinghouse has a tubular geometry closed at one end. A power generation stack is formed by aggregating many cells in an ordered array. The Westinghouse stack design is distinguished from other fuel cell stacks by the complete absence of high-integrity seals between cell elements, cells, and between stack and manifolds. Further, the reformer for natural gas [predominantly methane] and the stack are thermally and hydraulically integrated, with no requirement for process water. The technical viability of combining the tubular SOFC and a logistic fuels processor was demonstrated at 27 kWe scale in a test program sponsored by the Advanced Research Projects Agency [ARPA] and carried out at Southern California Edison's [SCE] Highgrove generating station near San Bernardino, California in 1994/95. The LFP was a breadboard design supplied by Haldor Topsoe, Inc. under subcontract to Westinghouse. The test program was completely successful. The LFP fueled the SOFC for 766 hours on JP-8 and 1555 hours on DF-2. In addition, the fuel cell operated for 3261 hours on pipeline natural gas. Over the 5582 hours of operation, the SOFC generated 118 MWh of electricity with no perceptible degradation in performance.
The LFP processed military specification JP-8 and DF-2 removing the sulfur and reforming these liquid fuels to a methane rich gaseous fuel. Results of this program are documented in a companion report titled 'Final Report-Solid Oxide Fuel Cell/ Logistic Fuels Processor 27 kWe Power System'.
Fuel processing in integrated micro-structured heat-exchanger reactors
NASA Astrophysics Data System (ADS)
Kolb, G.; Schürer, J.; Tiemann, D.; Wichert, M.; Zapf, R.; Hessel, V.; Löwe, H.
Micro-structured fuel processors are under development at IMM for different fuels such as methanol, ethanol, propane/butane (LPG), gasoline and diesel. The target applications are mobile, portable and small-scale stationary auxiliary power units (APU) based upon fuel cell technology. The key feature of the systems is an integrated plate heat-exchanger technology which allows for the thermal integration of several functions in a single device. Steam reforming may be coupled with catalytic combustion in separate flow paths of a heat-exchanger. Reactors and complete fuel processors are tested up to the size range of 5 kW power output of a corresponding fuel cell. On top of reactor and system prototyping and testing, catalyst coatings are under development at IMM for numerous reactions such as steam reforming of LPG, ethanol and methanol, catalytic combustion of LPG and methanol, and for CO clean-up reactions, namely water-gas shift, methanation and the preferential oxidation of carbon monoxide. These catalysts are investigated in specially developed testing reactors. In selected cases, 1000 h stability testing is performed on catalyst coatings at weight hourly space velocities which are sufficiently high to meet the demands of future fuel processing reactors.
Stability of lanthanum oxide-based H2S sorbents in realistic fuel processor/fuel cell operation
NASA Astrophysics Data System (ADS)
Valsamakis, Ioannis; Si, Rui; Flytzani-Stephanopoulos, Maria
We report that lanthana-based sulfur sorbents are an excellent choice as once-through chemical filters for the removal of trace amounts of H2S and COS from any fuel gas at temperatures matching those of solid oxide fuel cells. We have examined sorbents based on lanthana and Pr-doped lanthana with up to 30 at.% praseodymium, having high desulfurization efficiency, as measured by their ability to remove H2S from simulated reformate gas streams to below 50 ppbv with a corresponding sulfur capacity exceeding 50 mg S per g of sorbent at 800 °C. Intermittent sorbent operation with air-rich, boiler-exhaust-type gas mixtures and with frequent shutdowns and restarts is possible without formation of lanthanide oxycarbonate phases. Upon restart, desulfurization continues from where it left off at the end of the previous cycle. These findings are important for practical applications of these sorbents as sulfur polishing units for fuel gases in the presence of small or large amounts of water vapor, and with the regular shutdown/start-up operation practiced in fuel processor/fuel cell systems, both stationary and mobile, and of any size/scale.
Multi-fuel reformers for fuel cells used in transportation. Phase 1: Multi-fuel reformers
NASA Astrophysics Data System (ADS)
1994-05-01
DOE has established the goal, through the Fuel Cells in Transportation Program, of fostering the rapid development and commercialization of fuel cells as economic competitors for the internal combustion engine. Central to this goal is a safe feasible means of supplying hydrogen of the required purity to the vehicular fuel cell system. Two basic strategies are being considered: (1) on-board fuel processing whereby alternative fuels such as methanol, ethanol or natural gas stored on the vehicle undergo reformation and subsequent processing to produce hydrogen, and (2) on-board storage of pure hydrogen provided by stationary fuel processing plants. This report analyzes fuel processor technologies, types of fuel and fuel cell options for on-board reformation. As the Phase 1 of a multi-phased program to develop a prototype multi-fuel reformer system for a fuel cell powered vehicle, the objective of this program was to evaluate the feasibility of a multi-fuel reformer concept and to select a reforming technology for further development in the Phase 2 program, with the ultimate goal of integration with a DOE-designated fuel cell and vehicle configuration. The basic reformer processes examined in this study included catalytic steam reforming (SR), non-catalytic partial oxidation (POX) and catalytic partial oxidation (also known as Autothermal Reforming, or ATR). Fuels under consideration in this study included methanol, ethanol, and natural gas. A systematic evaluation of reforming technologies, fuels, and transportation fuel cell applications was conducted for the purpose of selecting a suitable multi-fuel processor for further development and demonstration in a transportation application.
Fuel-Flexible Gasification-Combustion Technology for Production of H2 and Sequestration-Ready CO2
DOE Office of Scientific and Technical Information (OSTI.GOV)
George Rizeq; Parag Kulkarni; Wei Wei
It is expected that in the 21st century the Nation will continue to rely on fossil fuels for electricity, transportation, and chemicals. It will be necessary to improve both the process efficiency and environmental impact performance of fossil fuel utilization. GE Global Research is developing an innovative fuel-flexible Unmixed Fuel Processor (UFP) technology to produce H{sub 2}, power, and sequestration-ready CO{sub 2} from coal and other solid fuels. The UFP module offers the potential for reduced cost, increased process efficiency relative to conventional gasification and combustion systems, and near-zero pollutant emissions including NO{sub x}. GE was awarded a contract from U.S. DOE NETL to develop the UFP technology. Work on the Phase I program started in October 2000, and work on the Phase II effort started in April 2005. In the UFP technology, coal and air are simultaneously converted into separate streams of (1) high-purity hydrogen that can be utilized in fuel cells or turbines, (2) sequestration-ready CO{sub 2}, and (3) high temperature/pressure vitiated air to produce electricity in a gas turbine. The process produces near-zero emissions with an estimated efficiency higher than IGCC with conventional CO{sub 2} separation. The Phase I R&D program established the feasibility of the integrated UFP technology through lab-, bench- and pilot-scale testing and investigated operating conditions that maximize separation of CO{sub 2} and pollutants from the vent gas, while simultaneously maximizing coal conversion efficiency and hydrogen production. The Phase I effort integrated experimental testing, modeling and preliminary economic studies to demonstrate the UFP technology. The Phase II effort will focus on three high-risk areas: economics, sorbent attrition and lifetime, and product gas quality for turbines. The economic analysis will include estimating the capital cost as well as the costs of hydrogen and electricity for a full-scale UFP plant.
These costs will be benchmarked with IGCC polygen costs for plants of similar size. Sorbent attrition and lifetime will be addressed via bench-scale experiments that monitor sorbent performance over time and by assessing materials interactions at operating conditions. The product gas from the third reactor (high-temperature vitiated air) will be evaluated to assess the concentration of particulates, pollutants and other impurities relative to the specifications required for gas turbine feed streams. This is the eighteenth quarterly technical progress report for the UFP program, which is supported by U.S. DOE NETL (Contract No. DE-FC26-00FT40974) and GE. This report summarizes program accomplishments for the Phase II period starting July 01, 2005 and ending September 30, 2005. The report includes an introduction summarizing the UFP technology, main program tasks, and program objectives; it also provides a summary of program activities and accomplishments covering progress in tasks including process modeling, scale-up and economic analysis.
A method of minimum volume simplex analysis constrained unmixing for hyperspectral image
NASA Astrophysics Data System (ADS)
Zou, Jinlin; Lan, Jinhui; Zeng, Yiliang; Wu, Hongtao
2017-07-01
The signal recorded by a low-resolution hyperspectral remote sensor for a given pixel, even leaving aside the effects of complex terrain, is a mixture of substances. To improve the accuracy of classification and sub-pixel object detection, hyperspectral unmixing (HU) is a frontier research area in remote sensing. Unmixing algorithms based on geometry have become popular since hyperspectral images possess abundant spectral information and the mixing model is easy to understand. However, most of these algorithms are based on the pure pixel assumption, and since the nonlinear mixing model is complex, it is hard to obtain the optimal endmembers, especially for highly mixed spectral data. To provide a simple but accurate method, we propose a minimum volume simplex analysis constrained (MVSAC) unmixing algorithm. The proposed approach combines the algebraic constraints inherent to the convex minimum-volume formulation with a soft abundance constraint. By considering the abundance fractions, we can obtain the pure endmember set and the corresponding abundance fractions, and the final unmixing result is closer to reality and more accurate. We illustrate the performance of the proposed algorithm in unmixing simulated data and real hyperspectral data, and the results indicate that the proposed method can obtain the distinct signatures correctly without redundant endmembers and yields much better performance than pure-pixel-based algorithms.
NASA Astrophysics Data System (ADS)
Xu, Xia; Shi, Zhenwei; Pan, Bin
2018-07-01
Sparse unmixing aims at recovering pure materials from hyperspectral images and estimating their abundance fractions. Sparse unmixing is actually an ℓ0 problem, which is NP-hard, so a relaxation is often used. In this paper, we attempt to deal with the ℓ0 problem directly via a multi-objective based method, which is a non-convex approach. The characteristics of hyperspectral images are integrated into the proposed method, which leads to a new spectra- and multi-objective based sparse unmixing method (SMoSU). In order to solve the ℓ0 norm optimization problem, the spectral library is encoded in a binary vector, and a bit-wise flipping strategy is used to generate new individuals in the evolution process. However, a multi-objective method usually produces a number of non-dominated solutions, while sparse unmixing requires a single solution. How to make the final decision for sparse unmixing is challenging. To handle this problem, we integrate the spectral characteristics of hyperspectral images into SMoSU. By considering the spectral correlation in hyperspectral data, we improve the Tchebycheff decomposition function in SMoSU via a new regularization term. This regularization term is able to enforce individual divergence in the evolution process of SMoSU. In this way, the diversity and convergence of the population are further balanced, which is beneficial to the concentration of individuals. In the experiments, three synthetic datasets and one real-world dataset are used to analyse the effectiveness of SMoSU, which is compared with several state-of-the-art sparse unmixing algorithms.
Controlled shutdown of a fuel cell
Clingerman, Bruce J.; Keskula, Donald H.
2002-01-01
A method is provided for the shutdown of a fuel cell system to relieve system overpressure while maintaining air compressor operation, and corresponding vent valving and control arrangement. The method and venting arrangement are employed in a fuel cell system, for instance a vehicle propulsion system, comprising, in fluid communication, an air compressor having an outlet for providing air to the system, a combustor operative to provide combustor exhaust to the fuel processor.
Deployable Fuel Cell Power Generator - Multi-Fuel Processor
2009-02-01
and the system operating pressure, while the separation efficiency depends on the evaporator design. Desulfurizer – a flow-through gas-solid or gas ...meeting the Executive Order (EO) 13423 and the Energy Policy Act of 2005 to improve energy efficiency and reduce greenhouse gas emissions 3 percent...use available fuel such as natural gas (methane) or propane. The ability to reform a multitude of fuels can accelerate the introduction of more
Zeolites Remove Sulfur From Fuels
NASA Technical Reports Server (NTRS)
Voecks, Gerald E.; Sharma, Pramod K.
1991-01-01
Zeolites remove substantial amounts of sulfur compounds from diesel fuel under relatively mild conditions - atmospheric pressure below 300 degrees C. Extracts up to 60 percent of sulfur content of high-sulfur fuel. Applicable to petroleum refineries, natural-gas processors, electric powerplants, and chemical-processing plants. Method simpler and uses considerably lower pressure than current industrial method, hydro-desulfurization. Yields cleaner emissions from combustion of petroleum fuels, and protects catalysts from poisoning by sulfur.
Waste Vegetable Oil as an Alternative Fuel for Diesel Vehicles
2009-03-01
processor has a 160 gallon capacity, a fuel dryer, and features automatic mixing of the chemicals. The chemicals needed consist of lye (sodium...to distinguish it as tax-exempt. Fuel taxes are reported to the Internal Revenue Service (IRS) when the fuel is distributed to the “Service...collected in the commercial market. The refiner will pay the tax per gallon directly to the IRS. When the fuel is sold, the end user pays the tax
Method for generating hydrogen for fuel cells
Ahmed, Shabbir; Lee, Sheldon H. D.; Carter, John David; Krumpelt, Michael
2004-03-30
A method of producing a H.sub.2 rich gas stream includes supplying an O.sub.2 rich gas, steam, and fuel to an inner reforming zone of a fuel processor that includes a partial oxidation catalyst and a steam reforming catalyst or a combined partial oxidation and steam reforming catalyst. The method also includes contacting the O.sub.2 rich gas, steam, and fuel with the partial oxidation catalyst and the steam reforming catalyst or the combined partial oxidation and steam reforming catalyst in the inner reforming zone to generate a hot reformate stream. The method still further includes cooling the hot reformate stream in a cooling zone to produce a cooled reformate stream. Additionally, the method includes removing sulfur-containing compounds from the cooled reformate stream by contacting the cooled reformate stream with a sulfur removal agent. The method still further includes contacting the cooled reformate stream with a catalyst that converts water and carbon monoxide to carbon dioxide and H.sub.2 in a water-gas-shift zone to produce a final reformate stream in the fuel processor.
NASA Astrophysics Data System (ADS)
Biset, S.; Nieto Deglioumini, L.; Basualdo, M.; Garcia, V. M.; Serra, M.
The aim of this work is to investigate a good preliminary plantwide control structure for the process of hydrogen production from bioethanol to be used in a proton exchange membrane (PEM) fuel cell, using only steady-state information. The objective is to keep the process at the optimal operating point, that is, performing energy integration to achieve maximum efficiency. Ethanol, produced from renewable feedstocks, feeds a fuel processor investigated for steam reforming, followed by high- and low-temperature shift reactors and preferential oxidation, which are coupled to a polymeric fuel cell. Applying steady-state simulation techniques and using thermodynamic models, the performance of the complete system with two different control structures has been evaluated for the most typical perturbations. A sensitivity analysis for the key process variables, together with the rigorous operability requirements of the fuel cell, is taken into account in defining an acceptable plantwide control structure. This is the first work showing an alternative control structure applied to this kind of process.
Incorporating landscape fuel treatment modeling into the Forest Vegetation Simulator
Robert C. Seli; Alan A. Ager; Nicholas L. Crookston; Mark A. Finney; Berni Bahro; James K. Agee; Charles W. McHugh
2008-01-01
A simulation system was developed to explore how fuel treatments placed in random and optimal spatial patterns affect the growth and behavior of large fires when implemented at different rates over the course of five decades. The system consists of several command line programs linked together: (1) FVS with the Parallel Processor (PPE) and Fire and Fuels (FFE)...
Electrotransport-induced unmixing and decomposition of ternary oxides
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chun, Jakyu; Yoo, Han-Ill, E-mail: hiyoo@snu.ac.kr; Martin, Manfred
A general expectation is that in a uniform oxygen activity atmosphere, cation electrotransport induces a ternary or higher oxide, e.g., AB{sub 1+ξ}O{sub 3+δ}, to kinetically unmix unless the electrochemical mobilities of, say, A{sup 2+} and B{sup 4+} cations are identically equal, and eventually to decompose into the component oxides AO and BO{sub 2} once the extent of unmixing exceeds the stability range of its nonmolecularity ξ. It has, however, earlier been reported [Yoo et al., Appl. Phys. Lett. 92, 252103 (2008)] that even a massive cation electrotransport induces BaTiO{sub 3} to neither unmix nor decompose even at a voltage far exceeding the so-called decomposition voltage U{sub d}, a measure of the standard formation free energy of the oxide (|ΔG{sub f}{sup o}| = nFU{sub d}). Here, we report that as expected, NiTiO{sub 3} unmixes at any voltage and even decomposes if the voltage applied exceeds seemingly a threshold value larger than U{sub d}. We demonstrate experimentally that the electrochemical mobilities of Ni{sup 2+} and Ti{sup 4+} should be necessarily unequal for unmixing. Also, we show theoretically that equal cation mobilities appear to be a sufficiency for BaTiO{sub 3} only for a thermodynamic reason.
Multi-objective based spectral unmixing for hyperspectral images
NASA Astrophysics Data System (ADS)
Xu, Xia; Shi, Zhenwei
2017-02-01
Sparse hyperspectral unmixing assumes that each observed pixel can be expressed as a linear combination of several pure spectra from an a priori library. Sparse unmixing is challenging, since it is usually transformed into an NP-hard l0 norm based optimization problem. Existing methods usually employ a relaxation of the original l0 norm. However, the relaxation may bring in sensitive weighting parameters and additional calculation error. In this paper, we propose a novel multi-objective based algorithm to solve the sparse unmixing problem without any relaxation. We transform sparse unmixing into a multi-objective optimization problem with two correlated objectives: minimizing the reconstruction error and controlling the endmember sparsity. To improve the efficiency of the multi-objective optimization, a population-based random flipping strategy is designed. Moreover, we theoretically prove that the proposed method is able to recover a guaranteed approximate solution from the spectral library within limited iterations. The proposed method can directly deal with the l0 norm via binary coding of the spectral signatures in the library. Experiments on both synthetic and real hyperspectral datasets demonstrate the effectiveness of the proposed method.
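A greatly simplified stand-in for the bit-flipping search over library activations might look like the following greedy hill-climber. The actual method evolves a population against two objectives, whereas this sketch treats sparsity as a hard cap and keeps a single candidate; all names are illustrative.

```python
import numpy as np

def bitflip_sparse_unmix(y, D, k_max, n_iter=400, seed=0):
    """Greedy bit-flip search over which library spectra are active.
    y: (bands,) observed pixel; D: (bands, n) spectral library.
    Returns the active-set mask and least-squares abundances on it."""
    rng = np.random.default_rng(seed)
    n = D.shape[1]
    mask = np.zeros(n, dtype=bool)

    def recon_err(m):
        if not m.any():
            return float(y @ y)
        a, *_ = np.linalg.lstsq(D[:, m], y, rcond=None)
        r = y - D[:, m] @ a
        return float(r @ r)

    best = recon_err(mask)
    for _ in range(n_iter):
        trial = mask.copy()
        j = rng.integers(n)
        trial[j] = not trial[j]          # flip one activation bit
        if trial.sum() <= k_max:         # sparsity objective as a hard cap
            e = recon_err(trial)
            if e < best:
                mask, best = trial, e
    a = np.zeros(n)
    if mask.any():
        a[mask], *_ = np.linalg.lstsq(D[:, mask], y, rcond=None)
    return mask, a
```

Encoding the active set as a binary vector is the point of contact with the paper: the l0 "norm" is simply the number of set bits, so no relaxation is needed.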
Electrical start-up for diesel fuel processing in a fuel-cell-based auxiliary power unit
NASA Astrophysics Data System (ADS)
Samsun, Remzi Can; Krupp, Carsten; Tschauder, Andreas; Peters, Ralf; Stolten, Detlef
2016-01-01
As auxiliary power units in trucks and aircraft, fuel cell systems with a diesel and kerosene reforming capacity offer the dual benefit of reduced emissions and fuel consumption. In order to be commercially viable, these systems require a quick start-up time with low energy input. In pursuit of this end, this paper reports an electrical start-up strategy for diesel fuel processing. A transient computational fluid dynamics model is developed to optimize the start-up procedure of the fuel processor in the 28 kWth power class. The temperature trend observed in the experiments is reproducible to a high degree of accuracy using a dual-cell approach in ANSYS Fluent. Starting from a basic strategy, different options are considered for accelerating system start-up. The start-up time is reduced from 22 min in the basic case to 9.5 min, at an energy consumption of 0.4 kW h. Furthermore, an electrical wire is installed in the reformer to test the steam generation during start-up. The experimental results reveal that the generation of steam at 450 °C is possible within seconds after water addition to the reformer. As a result, the fuel processor can be started in autothermal reformer mode using the electrical concept developed in this work.
Supervised nonlinear spectral unmixing using a postnonlinear mixing model for hyperspectral imagery.
Altmann, Yoann; Halimi, Abderrahim; Dobigeon, Nicolas; Tourneret, Jean-Yves
2012-06-01
This paper presents a nonlinear mixing model for hyperspectral image unmixing. The proposed model assumes that the pixel reflectances are nonlinear functions of pure spectral components contaminated by additive white Gaussian noise. These nonlinear functions are approximated by polynomials, leading to a polynomial postnonlinear mixing model. A Bayesian algorithm and optimization methods are proposed to estimate the parameters involved in the model. The performance of the unmixing strategies is evaluated by simulations conducted on synthetic and real data.
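A minimal sketch of a polynomial postnonlinear forward model and its fit, assuming the common quadratic form y = Ma + b (Ma)^2; the paper's Bayesian estimation is replaced here by a plain nonlinear least-squares fit, and the toy endmember matrix, abundances, and nonlinearity coefficient are invented.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

# Assumed toy setup: 30 bands, 3 endmembers, quadratic postnonlinearity.
M = rng.random((30, 3))
a_true = np.array([0.6, 0.3, 0.1])
b_true = 0.4
z = M @ a_true
y = z + b_true * z**2            # noise-free postnonlinear pixel

def residual(theta):
    """Residual of the postnonlinear model for parameters (a, b)."""
    a, b = theta[:3], theta[3]
    w = M @ a
    return w + b * w**2 - y

theta0 = np.array([1/3, 1/3, 1/3, 0.0])   # start from the linear model
fit = least_squares(residual, theta0)
a_hat, b_hat = fit.x[:3], fit.x[3]
print(np.round(a_hat, 3), round(float(b_hat), 3))
```

Note this sketch does not enforce the abundance positivity or sum-to-one constraints that a real unmixing algorithm would impose.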
The 40-kw field test power plant modification and development, phase 2
NASA Technical Reports Server (NTRS)
1980-01-01
Progress on the design and development of a 40-kW fuel cell system for on-site installation providing both thermal and electrical power is reported. Development of the steam reformer fuel processor, power section, inverter, control system, and thermal management and water treatment systems is described.
Unmix 6.0 Model for environmental data analyses
Unmix Model is a mathematical receptor model developed by EPA scientists that provides scientific support for the development and review of the air and water quality standards, exposure research, and environmental forensics.
NASA Astrophysics Data System (ADS)
Lin, H.; Zhang, X.; Wu, X.; Tarnas, J. D.; Mustard, J. F.
2018-04-01
Quantitative analysis of hydrated minerals from hyperspectral remote sensing data is fundamental for understanding Martian geologic processes. Because of the difficulty of selecting endmembers from hyperspectral images, sparse unmixing algorithms have been proposed for application to CRISM data on Mars. However, this becomes challenging when the endmember library grows dramatically. Here, we propose a new methodology termed Target Transformation Constrained Sparse Unmixing (TTCSU) to accurately detect hydrous minerals on Mars. A new version of the target transformation technique proposed in our recent work is used to obtain potential detections from CRISM data. Sparse unmixing constrained with these detections as prior information is applied to CRISM single-scattering albedo images, which are calculated using a Hapke radiative transfer model. This methodology increases the success rate of the automatic endmember selection of sparse unmixing and yields more accurate abundances. Well-analyzed CRISM images of Southwest Melas Chasma were used to validate the methodology in this study. The sulfate jarosite was detected in Southwest Melas Chasma; its distribution is consistent with previous work and its abundance is comparable. More validation will be done in future work.
EPA Unmix 6.0 Fundamentals & User Guide
Unmix seeks to solve the general mixture problem where the data are assumed to be a linear combination of an unknown number of sources of unknown composition, which contribute an unknown amount to each sample.
Oxide dispersion strengthened ferritic steels: a basic research joint program in France
NASA Astrophysics Data System (ADS)
Boutard, J.-L.; Badjeck, V.; Barguet, L.; Barouh, C.; Bhattacharya, A.; Colignon, Y.; Hatzoglou, C.; Loyer-Prost, M.; Rouffié, A. L.; Sallez, N.; Salmon-Legagneur, H.; Schuler, T.
2014-12-01
AREVA, CEA, CNRS, EDF and Mécachrome are funding a joint program of basic research on Oxide Dispersion Strengthened Steels (ODISSEE), in support of the development of oxide dispersion strengthened 9-14% Cr ferritic-martensitic steels for the fuel element cladding of future sodium-cooled fast neutron reactors. The selected objectives and the results obtained so far are presented concerning (i) physical-chemical characterisation of the nano-clusters as a function of the ball-milling process, metallurgical conditions and irradiation, (ii) meso-scale understanding of failure mechanisms under dynamic loading and creep, and (iii) kinetic modelling of nano-cluster nucleation and α/α‧ unmixing.
Extended Durability Testing of an External Fuel Processor for a Solid Oxide Fuel Cell (SOFC)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mark Perna; Anant Upadhyayula; Mark Scotto
2012-11-05
Durability testing was performed on an external fuel processor (EFP) for a solid oxide fuel cell (SOFC) power plant. The EFP enables the SOFC to reach high system efficiency (electrical efficiency up to 60%) using pipeline natural gas and eliminates the need for large quantities of bottled gases. LG Fuel Cell Systems Inc. (formerly known as Rolls-Royce Fuel Cell Systems (US) Inc.) (LGFCS) is developing natural gas-fired SOFC power plants for stationary power applications. These power plants will greatly benefit the public by reducing the cost of electricity while reducing gaseous emissions of carbon dioxide, sulfur oxides, and nitrogen oxides compared to conventional power plants. The EFP uses pipeline natural gas and air to provide all the gas streams required by the SOFC power plant, specifically those needed for start-up, normal operation, and shutdown. It includes a natural gas desulfurizer, a synthesis-gas generator and a start-gas generator. The research in this project demonstrated that the EFP could meet its performance and durability targets. The data generated helped assess the impact of long-term operation on system performance and system hardware. The research also showed the negative impact of ambient weather (both hot and cold conditions) on system operation and performance.
Fuzzy Logic Based Controller for a Grid-Connected Solid Oxide Fuel Cell Power Plant.
Chatterjee, Kalyan; Shankar, Ravi; Kumar, Amit
2014-10-01
This paper describes a mathematical model of a solid oxide fuel cell (SOFC) power plant integrated in a multimachine power system. The utilization factor of the fuel stack is held at steady state by tuning the fuel valve in the fuel processor at a rate proportional to the current drawn from the stack. A suitable fuzzy logic controller is used for the overall system; its objective is to control the current drawn by the power conditioning unit and meet a desired output power demand. The proposed control scheme is verified through computer simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sargent, S.A.
Apple pomace, or presscake, was evaluated for suitability as a boiler feedstock for Michigan firms processing apple juice. Based upon the physical and chemical characteristics of pomace, handling/direct-combustion systems were selected to conform with operating parameters typical of the industry. Fresh pomace flow rates of 29,030 and 88,998 kg/day (64,000 and 194,000 lb/day) were considered representative of small and large processors, respectively, and the material was assumed to be dried to 15% moisture content (wet basis) prior to storage and combustion. Boilers utilizing pile-burning, fluidized-bed-combustion, and suspension-firing technologies were sized for each flow rate, resulting in energy production of 2930 and 8790 kW (10 and 30 million Btu/h), respectively. A life-cycle cost analysis was performed giving Average Annual Costs for the three handling/combustion system combinations (based on the Uniform Capital Recovery factor). An investment loan at 16% interest with a 5-year payback period was assumed. The break-even period for annual costs was calculated from the anticipated savings through reduction of fossil-fuel costs during a 5-month processing season. Large processors, producing more than 88,998 kg pomace/day, could economically convert to a suspension-fired system substituting for fuel oil, with break-even occurring after 4 months of operation per year. Small processors, producing less than 29,030 kg/day, could not currently convert to pomace combustion systems under these economic circumstances. A doubling of electrical-utility costs and changes in interest rates from 10 to 20% per year had only slight effects on the recovery of Average Annual Costs. Increases in fossil-fuel prices and the necessity to pay for pomace disposal reduced the cost-recovery period for all systems, making some systems feasible for small processors. 39 references, 13 figures, 10 tables.
Plant That Makes Fuel Out Of Garbage and Waste Called A Success
The plant converts municipal solid waste and food-processing wastes into fuel that can be used to run a turbine to generate electricity or as a transportation fuel. Potential customers include food processors and waste haulers.
Fuel processor for fuel cell power system
Vanderborgh, Nicholas E.; Springer, Thomas E.; Huff, James R.
1987-01-01
A catalytic organic fuel processing apparatus, which can be used in a fuel cell power system, contains within a housing a catalyst chamber, a variable speed fan, and a combustion chamber. Vaporized organic fuel is circulated by the fan past the combustion chamber with which it is in indirect heat exchange relationship. The heated vaporized organic fuel enters a catalyst bed where it is converted into a desired product such as hydrogen needed to power the fuel cell. During periods of high demand, air is injected upstream of the combustion chamber and organic fuel injection means to burn with some of the organic fuel on the outside of the combustion chamber, and thus be in direct heat exchange relation with the organic fuel going into the catalyst bed.
A novel edge-preserving nonnegative matrix factorization method for spectral unmixing
NASA Astrophysics Data System (ADS)
Bao, Wenxing; Ma, Ruishi
2015-12-01
Spectral unmixing is one of the key techniques for identifying and classifying materials in hyperspectral image processing. A novel robust spectral unmixing method based on nonnegative matrix factorization (NMF) is presented in this paper. An edge-preserving function is used as the hypersurface cost function to be minimized in the nonnegative matrix factorization. To minimize this cost function, updating rules are constructed for the endmember signature matrix and the abundance fractions, and the two are updated alternately. For evaluation, both synthetic and real data are used: the synthetic data are based on endmembers from the USGS digital spectral library, and the AVIRIS Cuprite dataset is used as real data. The spectral angle distance (SAD) and abundance angle distance (AAD) are used to assess the performance of the proposed method. The experimental results show that this method obtains better results and higher accuracy for spectral unmixing than existing methods.
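For orientation, a plain Euclidean NMF unmixing loop with Lee-Seung multiplicative updates looks as follows; the edge-preserving hypersurface cost described in this abstract would replace the squared-error objective used here, and all data in the sketch are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed toy data: 40 bands, 100 pixels, 4 endmembers.
E = rng.random((40, 4))                   # true endmember signatures
A = rng.dirichlet(np.ones(4), 100).T      # true abundances (columns sum to 1)
V = E @ A                                 # mixed pixels, bands x pixels

# Plain Euclidean NMF with Lee-Seung multiplicative updates; W estimates
# the signatures, H the abundances, updated alternately as in the paper.
W = rng.random((40, 4)) + 0.1
H = rng.random((4, 100)) + 0.1
eps = 1e-9                                # guards against division by zero
for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(round(float(err), 4))
```

Multiplicative updates keep W and H nonnegative by construction, which is why they are the standard workhorse for NMF-based unmixing variants.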
NASA Astrophysics Data System (ADS)
Tippawan, Phanicha; Arpornwichanop, Amornchai
2016-02-01
The hydrogen production process is known to be important to a fuel cell system. In this study, a carbon-free hydrogen production process is proposed that uses a two-step ethanol-steam-reforming procedure, consisting of ethanol dehydrogenation and steam reforming, as the fuel processor in a solid oxide fuel cell (SOFC) system. The addition of CaO in the reformer for CO2 capture is also considered to enhance hydrogen production. The performance of the SOFC system is analyzed under thermally self-sufficient conditions in technical and economic terms. The simulation results show that the two-step reforming process can be run in the operating window without carbon formation. The addition of CaO in the steam reformer, which runs at a steam-to-ethanol ratio of 5, a temperature of 900 K and atmospheric pressure, minimizes the presence of CO2: 93% of the CO2 is removed from the steam-reforming environment. This increases the SOFC power density by 6.62%. Although the economic analysis shows that the proposed fuel processor has a higher capital cost, it reduces the active area of the SOFC stack and offers the most favorable process economics in terms of net cost savings.
Compact hydrogen production systems for solid polymer fuel cells
NASA Astrophysics Data System (ADS)
Ledjeff-Hey, K.; Formanski, V.; Kalk, Th.; Roes, J.
Generally, there are several ways to produce hydrogen gas from carbonaceous fuels like natural gas, oil or alcohols. Most of these processes are designed for large-scale industrial production and are not suitable for a compact hydrogen production system (CHYPS) in the power range of 1 kW. In order to supply solid polymer fuel cells (SPFC) with hydrogen, a compact fuel processor is required for mobile applications. The hydrogen-rich gas produced has to have a low level of harmful impurities; in particular, the carbon monoxide content has to be below 20 ppmv. Integrating the reaction step, gas purification and heat supply leads to small-scale hydrogen production systems. The steam reforming of methanol is feasible over copper catalysts in a low temperature range of 200-350°C. The combination of a small-scale methanol reformer and a metal membrane as the purification step forms a compact system producing high-purity hydrogen. SPFC hydrogen fuel gas can also be generated by thermal or catalytic cracking of liquid hydrocarbons such as propane. At a temperature of 900°C, propane decomposes into carbon and hydrogen. A fuel processor based on this simple concept produces a gas stream with a hydrogen content of more than 90 vol.%, without CO or CO2.
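The figure of more than 90 vol.% hydrogen from propane cracking follows from a simple mole balance on C3H8 -> 3 C(s) + 4 H2. The sketch below is a simplification that assumes ideal-gas behavior, complete carbon deposition, and no side products such as methane; the conversion values are illustrative.

```python
# Mole balance for thermal propane cracking, C3H8 -> 3 C(s) + 4 H2,
# at fractional conversion X. Deposited carbon leaves the gas phase,
# so the product gas contains only H2 and unconverted propane.
def h2_vol_fraction(X):
    h2 = 4 * X          # mol H2 per mol propane fed
    c3h8 = 1 - X        # unconverted propane stays in the gas phase
    return h2 / (h2 + c3h8)

for X in (0.5, 0.7, 0.9, 1.0):
    print(f"X = {X:.1f}: {100 * h2_vol_fraction(X):.1f} vol% H2")
```

Under these assumptions, exceeding 90 vol.% H2 requires a propane conversion of only about 70%, which is consistent with the abstract's claim.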
The Spectral Image Processing System (SIPS): Software for integrated analysis of AVIRIS data
NASA Technical Reports Server (NTRS)
Kruse, F. A.; Lefkoff, A. B.; Boardman, J. W.; Heidebrecht, K. B.; Shapiro, A. T.; Barloon, P. J.; Goetz, A. F. H.
1992-01-01
The Spectral Image Processing System (SIPS) is a software package developed by the Center for the Study of Earth from Space (CSES) at the University of Colorado, Boulder, in response to a perceived need to provide integrated tools for analysis of imaging spectrometer data both spectrally and spatially. SIPS was specifically designed to deal with data from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and the High Resolution Imaging Spectrometer (HIRIS), but was tested with other datasets including the Geophysical and Environmental Research Imaging Spectrometer (GERIS), GEOSCAN images, and Landsat TM. SIPS was developed using the 'Interactive Data Language' (IDL). It takes advantage of high speed disk access and fast processors running under the UNIX operating system to provide rapid analysis of entire imaging spectrometer datasets. SIPS allows analysis of single or multiple imaging spectrometer data segments at full spatial and spectral resolution. It also allows visualization and interactive analysis of image cubes derived from quantitative analysis procedures such as absorption band characterization and spectral unmixing. SIPS consists of three modules: SIPS Utilities, SIPS_View, and SIPS Analysis. SIPS version 1.1 is described below.
Linear unmixing of multidate hyperspectral imagery for crop yield estimation
USDA-ARS?s Scientific Manuscript database
In this paper, we have evaluated an unsupervised unmixing approach, vertex component analysis (VCA), for the application of crop yield estimation. The results show that abundance maps of the vegetation extracted by the approach are strongly correlated to the yield data (the correlation coefficients ...
A fast fully constrained geometric unmixing of hyperspectral images
NASA Astrophysics Data System (ADS)
Zhou, Xin; Li, Xiao-run; Cui, Jian-tao; Zhao, Liao-ying; Zheng, Jun-peng
2014-11-01
A great challenge in hyperspectral image analysis is decomposing a mixed pixel into a collection of endmembers and their corresponding abundance fractions. This paper presents an improved implementation of the Barycentric Coordinate approach to unmixing hyperspectral images, integrated with the Most-Negative Remove Projection method to meet the abundance sum-to-one constraint (ASC) and the abundance non-negativity constraint (ANC). The original barycentric coordinate approach interprets the unmixing problem as a simplex volume ratio problem, solved by calculating the determinants of two augmented matrices: one consists of all the endmembers, while the other consists of the to-be-unmixed pixel and all the endmembers except the one corresponding to the abundance being estimated. In this paper, we first modify the Barycentric Coordinate algorithm by introducing the Matrix Determinant Lemma to simplify the unmixing process, so that the calculation contains only linear matrix and vector operations; the per-pixel determinant calculation required by the original algorithm is thus avoided. At the end of this step, the estimated abundances meet the ASC. The Most-Negative Remove Projection method is then used to make the abundance fractions meet the full constraints. The algorithm is demonstrated on both synthetic and real images. It yields abundance maps similar to those obtained by FCLS, while the runtime is much lower owing to its computational simplicity.
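The simplex volume-ratio idea can be illustrated with Cramer's rule on the ASC-augmented endmember matrix. This toy sketch (endmembers and abundances are assumed values) shows only the ratio-of-determinants computation, without the Matrix Determinant Lemma speed-up or the Most-Negative Remove Projection step.

```python
import numpy as np

# Three endmembers in a 2-band space; abundances are the barycentric
# coordinates of the pixel inside the simplex the endmembers span.
E = np.array([[0.2, 0.9, 0.4],
              [0.1, 0.3, 0.8]])          # endmembers as columns (assumed)
a_true = np.array([0.5, 0.2, 0.3])
y = E @ a_true                            # noise-free mixed pixel

M = np.vstack([E, np.ones(3)])            # append the sum-to-one (ASC) row
b = np.append(y, 1.0)
detM = np.linalg.det(M)
a = np.empty(3)
for i in range(3):
    Mi = M.copy()
    Mi[:, i] = b                          # replace endmember i by the pixel
    a[i] = np.linalg.det(Mi) / detM       # ratio of two simplex volumes

print(np.round(a, 6))
```

Each abundance is the volume of the sub-simplex formed by swapping the pixel for one endmember, divided by the volume of the full endmember simplex; the ones row makes the estimates sum to one automatically, matching the paper's intermediate ASC-only result.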
Design of a Fuel Processor System for Generating Hydrogen for Automotive Applications
ERIC Educational Resources Information Center
Kolavennu, Panini K.; Telotte, John C.; Palanki, Srinivas
2006-01-01
The objective of this paper is to design a train of tubular reactors that use a methane feed to produce hydrogen of the desired purity so that it can be utilized by a fuel cell for automotive applications. Reaction engineering principles, which are typically covered at the undergraduate level, are utilized to design this reactor train. It is shown…
Fuel processor for fuel cell power system. [Conversion of methanol into hydrogen
Vanderborgh, N.E.; Springer, T.E.; Huff, J.R.
1986-01-28
A catalytic organic fuel processing apparatus, which can be used in a fuel cell power system, contains within a housing a catalyst chamber, a variable speed fan, and a combustion chamber. Vaporized organic fuel is circulated by the fan past the combustion chamber with which it is in indirect heat exchange relationship. The heated vaporized organic fuel enters a catalyst bed where it is converted into a desired product such as hydrogen needed to power the fuel cell. During periods of high demand, air is injected upstream of the combustion chamber and organic fuel injection means to burn with some of the organic fuel on the outside of the combustion chamber, and thus be in direct heat exchange relation with the organic fuel going into the catalyst bed.
USDA-ARS?s Scientific Manuscript database
This study evaluated linear spectral unmixing (LSU), mixture tuned matched filtering (MTMF) and support vector machine (SVM) techniques for detecting and mapping giant reed (Arundo donax L.), an invasive weed that presents a severe threat to agroecosystems and riparian areas throughout the southern ...
SOURCE APPORTIONMENT OF PHOENIX PM2.5 AEROSOL WITH THE UNMIX RECEPTOR MODEL
The multivariate receptor model Unmix has been used to analyze a 3-yr PM2.5 ambient aerosol data set collected in Phoenix, AZ, beginning in 1995. The analysis generated source profiles and overall percentage source contribution estimates (SCE) for five source categories: ga...
Fuels processing for transportation fuel cell systems
NASA Astrophysics Data System (ADS)
Kumar, R.; Ahmed, S.
Fuel cells primarily use hydrogen as the fuel. This hydrogen must be produced from other fuels such as natural gas or methanol. The fuel processor requirements are affected by the fuel to be converted, the type of fuel cell to be supplied, and the fuel cell application. The conventional fuel processing technology has been reexamined to determine how it must be adapted for use in demanding applications such as transportation. The two major fuel conversion processes are steam reforming and partial oxidation reforming. The former is established practice for stationary applications; the latter offers certain advantages for mobile systems and is presently in various stages of development. This paper discusses these fuel processing technologies and the more recent developments for fuel cell systems used in transportation. The need for new materials in fuels processing, particularly in the area of reforming catalysis and hydrogen purification, is discussed.
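The stoichiometric ceilings of the two conversion routes named above, assuming full water-gas shift of the product CO, make the trade-off concrete. These are ideal upper bounds per mole of methane, not practical yields.

```python
# Ideal stoichiometric H2 yields per mole of CH4 (upper bounds only):
#   steam reforming + shift:    CH4 + 2 H2O        -> CO2 + 4 H2
#   partial oxidation + shift:  CH4 + 1/2 O2 + H2O -> CO2 + 3 H2
yields = {"steam reforming": 4, "partial oxidation": 3}
for route, h2 in yields.items():
    print(f"{route}: {h2} mol H2 per mol CH4")
```

Steam reforming extracts extra hydrogen from the water itself, which is one reason it dominates stationary practice, while partial oxidation trades yield for the fast, self-sustaining heat release that suits mobile systems.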
Assessment and comparison of 100-MW coal gasification phosphoric acid fuel cell power plants
NASA Technical Reports Server (NTRS)
Lu, Cheng-Yi
1988-01-01
One of the advantages of fuel cell (FC) power plants is fuel versatility. With changes only in the fuel processor, the power plant will be able to accept a variety of fuels. This study was performed to design process diagrams, evaluate performance, and to estimate cost of 100 MW coal gasifier (CG)/phosphoric acid fuel cell (PAFC) power plant systems utilizing coal, which is the largest single potential source of alternate hydrocarbon liquids and gases in the United States, as the fuel. Results of this study will identify the most promising integrated CG/PAFC design and its near-optimal operating conditions. The comparison is based on the performance and cost of electricity which is calculated under consistent financial assumptions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sargent, S.A.; Pierson, T.R.; Steffe, J.F.
Apple juice processors generating up to 100 ton/day (90,718 kg/day) of pomace and incurring no disposal costs could not economically invest in a pile burning, fluidized-bed or suspension-fired system at present fossil fuel costs. Cost analysis is warranted for situations in which disposal costs are greater than $9.15/ton ($8.30/1000 kg) or in which fossil fuel price increases are expected in excess of 25%.
Encoding Strategy Changes and Spacing Effects in the Free Recall of Unmixed Lists
ERIC Educational Resources Information Center
Delaney, P.F.; Knowles, M.E.
2005-01-01
Memory for repeated items often improves when repetitions are separated by other items-a phenomenon called the spacing effect. In two experiments, we explored the complex interaction between study strategies, serial position, and spacing effects. When people studied several unmixed lists, they initially used mainly rote rehearsal, but some people…
Nonlinear hyperspectral unmixing based on sparse non-negative matrix factorization
NASA Astrophysics Data System (ADS)
Li, Jing; Li, Xiaorun; Zhao, Liaoying
2016-01-01
Hyperspectral unmixing aims at extracting pure material spectra, together with their corresponding proportions, from a mixed pixel. Because they model the distribution of real materials more accurately, nonlinear mixing models (non-LMMs) are usually considered to perform better than LMMs in complicated scenarios. In past years, numerous nonlinear models have been successfully applied to hyperspectral unmixing. However, most non-LMMs consider only the sum-to-one or positivity constraints, while the widespread sparsity of real material mixing is a factor that cannot be ignored: a pixel is usually composed of the spectral signatures of only a few materials from the full pure-pixel set. Thus, in this paper, a smooth sparsity constraint is incorporated into the state-of-the-art Fan nonlinear model to exploit the sparsity feature in the nonlinear model and enhance unmixing performance. This sparsity-constrained Fan model is solved with non-negative matrix factorization. The algorithm was applied to synthetic and real hyperspectral data and demonstrated its advantage over competing algorithms in the experiments.
Multiphoton spectral analysis of benzo[a]pyrene uptake and metabolism in a rat liver cell line
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barhoumi, Rola, E-mail: rmouneimne@cvm.tamu.edu; Mouneimne, Youssef; Ramos, Ernesto
2011-05-15
Dynamic analysis of the uptake and metabolism of polycyclic aromatic hydrocarbons (PAHs) and their metabolites within live cells in real time has the potential to provide novel insights into genotoxic and non-genotoxic mechanisms of cellular injury caused by PAHs. The present work, combining the use of metabolite spectra generated from metabolite standards using multiphoton spectral analysis and an 'advanced unmixing process', identifies and quantifies the uptake, partitioning, and metabolite formation of one of the most important PAHs (benzo[a]pyrene, BaP) in viable cultured rat liver cells over a period of 24 h. The application of the advanced unmixing process resulted in the simultaneous identification of 8 metabolites in live cells at any single time. The accuracy of this unmixing process was verified using specific microsomal epoxide hydrolase inhibitors, glucuronidation and sulfation inhibitors, as well as several mixtures of metabolite standards. Our findings show that two-photon microscopy imaging surpasses conventional fluorescence imaging techniques and that the unmixing process is a mathematical technique applicable to the analysis of BaP metabolites in living cells, especially for analysis of changes in the ultimate carcinogen benzo[a]pyrene-r-7,t-8-dihydrodiol-t-9,10-epoxide. Therefore, the combination of two-photon acquisition with the unmixing process should provide important insights into the cellular and molecular mechanisms by which BaP and other PAHs alter cellular homeostasis.
Generating High-Temporal and Spatial Resolution TIR Image Data
NASA Astrophysics Data System (ADS)
Herrero-Huerta, M.; Lagüela, S.; Alfieri, S. M.; Menenti, M.
2017-09-01
Remote sensing imagery for monitoring global biophysical dynamics requires thermal infrared data at high temporal and spatial resolution, because of the rapid development of crops during the growing season and the fragmentation of most agricultural landscapes. However, no single sensor meets these combined requirements. Data fusion approaches offer an alternative by exploiting observations from multiple sensors to provide data sets with better properties. A novel spatio-temporal data fusion model based on constrained algorithms, denoted the multisensor multiresolution technique (MMT), was developed and applied to generate synthetic TIR image data at both high temporal and high spatial resolution. First, an adaptive radiance model based on spectral unmixing analysis is applied: TIR radiance data at TOA (top of atmosphere) collected by MODIS daily at 1 km and by Landsat TIRS every 16 days, sampled at 30 m resolution, are used to generate synthetic daily TOA radiance images at 30 m spatial resolution. The next step consists of unmixing the 30 m (now lower-resolution) images using information about their pixel land-cover composition from co-registered images at higher spatial resolution; in our case study, the synthesized TIR data were unmixed to the Sentinel-2 MSI at 10 m resolution. The constrained unmixing preserves all the available radiometric information of the 30 m images and involves optimizing the number of land-cover classes and the size of the moving window for spatial unmixing. Results are still being evaluated, with particular attention to the quality of the data streams required to apply our approach.
Parallel ICA and its hardware implementation in hyperspectral image analysis
NASA Astrophysics Data System (ADS)
Du, Hongtao; Qi, Hairong; Peterson, Gregory D.
2004-04-01
Advances in hyperspectral imaging have dramatically boosted remote sensing applications by providing abundant information over hundreds of contiguous spectral bands. However, the high volume of information also imposes an excessive computation burden. Since most materials have specific characteristics only at certain bands, much of this information is redundant. This property of hyperspectral images has motivated many researchers to study dimensionality reduction algorithms, including Projection Pursuit (PP), Principal Component Analysis (PCA), wavelet transforms, and Independent Component Analysis (ICA), of which ICA is one of the most popular. ICA searches for a linear or nonlinear transformation that minimizes the statistical dependence between spectral bands; through this process, it can eliminate superfluous information while retaining practical information, given only the observed hyperspectral images. One hurdle in applying ICA to hyperspectral image (HSI) analysis, however, is its long computation time, especially for high-volume hyperspectral data sets. Even the most efficient method, FastICA, is very time-consuming. In this paper, we present a parallel ICA (pICA) algorithm derived from FastICA. During the unmixing process, pICA divides the estimation of the weight matrix into sub-processes that can be conducted in parallel on multiple processors. The decorrelation process is decomposed into internal decorrelation, which performs weight vector decorrelations within individual processors, and external decorrelation, which performs them between cooperating processors. To further improve the performance of pICA, we seek hardware solutions for its implementation. Until now, there have been very few hardware designs for ICA-related processes, owing to the complicated and iterative computation. This paper discusses the capacity limitations of FPGA implementations of pICA for HSI analysis.
An Application-Specific Integrated Circuit (ASIC) synthesis is designed for pICA-based dimensionality reduction in HSI analysis. The pICA design is implemented using standard-height cells and targets a TSMC 0.18 micron process. During the synthesis procedure, three ICA-related reconfigurable components are developed for reuse and retargeting. Preliminary results show that standard-height-cell-based ASIC synthesis provides an effective solution for pICA and ICA-related processes in HSI analysis.
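As a point of reference for the weight estimation that pICA parallelizes, a minimal sequential FastICA (one-unit form, tanh nonlinearity, deflationary decorrelation) on synthetic band mixtures might look like this; all data, sizes, and iteration counts are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "bands as mixtures of independent sources" demo: 3 non-Gaussian
# sources, 6 observed bands, 2000 pixels (all assumed).
S = rng.uniform(-1, 1, (3, 2000))         # independent uniform sources
A = rng.random((6, 3))
X = A @ S

# Whitening: project onto the 3 significant eigenvectors of the covariance.
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(Xc))
keep = d > 1e-10
W_white = (E[:, keep] / np.sqrt(d[keep])).T
Z = W_white @ Xc                           # whitened data, 3 x 2000

# One-unit FastICA with deflation: each new weight vector is decorrelated
# against those already found (the step pICA distributes across processors).
W = np.zeros((3, 3))
for i in range(3):
    w = rng.normal(size=3)
    w /= np.linalg.norm(w)
    for _ in range(200):
        wx = w @ Z
        w_new = (Z * np.tanh(wx)).mean(axis=1) - (1 - np.tanh(wx)**2).mean() * w
        w_new -= W[:i].T @ (W[:i] @ w_new)  # deflate against found rows
        w_new /= np.linalg.norm(w_new)
        done = abs(abs(w_new @ w) - 1) < 1e-10
        w = w_new
        if done:
            break
    W[i] = w

S_hat = W @ Z
# Each recovered component should match one true source up to sign/scale.
C = np.abs(np.corrcoef(np.vstack([S, S_hat]))[:3, 3:])
print(np.round(C.max(axis=1), 3))
```

pICA's contribution is not a different fixed-point iteration but the partitioning of these weight updates, with internal and external decorrelation replacing the sequential deflation shown here.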
M-estimation for robust sparse unmixing of hyperspectral images
NASA Astrophysics Data System (ADS)
Toomik, Maria; Lu, Shijian; Nelson, James D. B.
2016-10-01
Hyperspectral unmixing methods often use a conventional least squares based lasso which assumes that the data follows the Gaussian distribution. The normality assumption is an approximation which is generally invalid for real imagery data. We consider a robust (non-Gaussian) approach to sparse spectral unmixing of remotely sensed imagery which reduces the sensitivity of the estimator to outliers and relaxes the linearity assumption. The method consists of several appropriate penalties. We propose to use an lp norm with 0 < p < 1 in the sparse regression problem, which induces more sparsity in the results, but makes the problem non-convex. On the other hand, the problem, though non-convex, can be solved quite straightforwardly with an extensible algorithm based on iteratively reweighted least squares. To deal with the huge size of modern spectral libraries we introduce a library reduction step, similar to the multiple signal classification (MUSIC) array processing algorithm, which not only speeds up unmixing but also yields superior results. In the hyperspectral setting we extend the traditional least squares method to the robust heavy-tailed case and propose a generalised M-lasso solution. M-estimation replaces the Gaussian likelihood with a fixed function ρ(e) that restrains outliers. The M-estimate function reduces the effect of errors with large amplitudes or even assigns the outliers zero weights. Our experimental results on real hyperspectral data show that noise with large amplitudes (outliers) often exists in the data. This ability to mitigate the influence of such outliers can therefore offer greater robustness. Qualitative hyperspectral unmixing results on real hyperspectral image data corroborate the efficacy of the proposed method.
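The iteratively reweighted least squares step for an lp penalty with 0 < p < 1 can be sketched as follows. This illustrates only the non-convex sparsity part of the approach, not the robust ρ-function of the M-lasso or the MUSIC-style library reduction, and the library, pixel, and tuning constants are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed toy library: 60 bands, 30 spectra; pixel mixes columns 5 and 17.
A = rng.random((60, 30))
x_true = np.zeros(30)
x_true[[5, 17]] = [0.7, 0.3]
y = A @ x_true

# IRLS for min ||Ax - y||^2 + lam * sum_i |x_i|^p with p = 0.5:
# each iteration solves a ridge-like system whose per-coefficient weights
# p * |x_i|^(p-2) majorize the concave lp penalty (eps avoids blow-up at 0).
p, lam, eps = 0.5, 1e-4, 1e-8
x = np.full(30, 0.1)
for _ in range(100):
    w = p * (np.abs(x) + eps) ** (p - 2)
    x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ y)

support = np.flatnonzero(np.abs(x) > 1e-3)
print(support, np.round(x[support], 3))
```

Small coefficients receive ever larger weights and are driven toward zero, while large ones are barely penalized, which is how the lp norm induces more sparsity than the convex l1 relaxation.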
Unmixing the SNCs: Chemical, Isotopic, and Petrologic Components of the Martian Meteorites
NASA Technical Reports Server (NTRS)
2002-01-01
This volume contains abstracts that have been accepted for presentation at the conference on Unmixing the SNCs: Chemical, Isotopic, and Petrologic Components of Martian Meteorites, September 11-12, 2002, in Houston, Texas. Administration and publications support for this meeting were provided by the staff of the Publications and Program Services Department at the Lunar and Planetary Institute.
Spectral unmixing of hyperspectral data to map bauxite deposits
NASA Astrophysics Data System (ADS)
Shanmugam, Sanjeevi; Abhishekh, P. V.
2006-12-01
This paper presents a study of the potential of remote sensing in bauxite exploration in the Kolli hills of Tamilnadu state, southern India. An ASTER image (acquired in the VNIR and SWIR regions) has been used in conjunction with the SRTM DEM in this study. A new approach of spectral unmixing of the ASTER image data delineated areas rich in alumina. Various geological and geomorphological parameters that control bauxite formation were also derived from the ASTER image. All this information, when integrated, showed that there are 16 cappings (including the existing mines) that satisfy most of the conditions favouring bauxitization in the Kolli Hills. The study concludes that spectral unmixing of hyperspectral satellite data in the VNIR and SWIR regions may be combined with the terrain parameters to get accurate information about bauxite deposits, including their quality.
NASA Astrophysics Data System (ADS)
Farhad, Siamak; Yoo, Yeong; Hamdullahpur, Feridun
The performance of three solid oxide fuel cell (SOFC) systems, fuelled by biogas produced through the anaerobic digestion (AD) process, for heat and electricity generation in wastewater treatment plants (WWTPs) is studied. Each system has a different fuel processing method to prevent carbon deposition over the anode catalyst under biogas fuelling. Anode gas recirculation (AGR), steam reforming (SR), and partial oxidation (POX) are the methods employed in systems I-III, respectively. The planar SOFC stack used in these systems is based on anode-supported cells with a Ni-YSZ anode, YSZ electrolyte and YSZ-LSM cathode, operated at 800 °C. A computer code has been developed for the simulation of the planar SOFC at the cell, stack and system levels and applied to the performance prediction of the SOFC systems. The key operational parameters affecting the performance of the SOFC systems are identified. The effects of these parameters on the electrical and CHP efficiencies, the generated electricity and heat, the total exergy destruction, and the number of cells in the SOFC stack are studied. The results show that among the SOFC systems investigated in this study, the AGR and SR fuel processor-based systems, with electrical efficiencies of 45.1% and 43%, respectively, are the most suitable for application in WWTPs. If the entire biogas produced in a WWTP is used in the AGR or SR fuel processor-based SOFC system, the electricity and heat required to operate the WWTP can be completely self-supplied and the extra electricity generated can be sold to the electrical grid.
NASA Technical Reports Server (NTRS)
Abercromby, Kira J.; Rapp, Jason; Bedard, Donald; Seitzer, Patrick; Cardona, Tommaso; Cowardin, Heather; Barker, Ed; Lederer, Susan
2013-01-01
The Constrained Linear Least Squares model is generally more accurate than the "human-in-the-loop" approach. However, a "human-in-the-loop" analyst can remove materials that make no physical sense. The speed of the model in determining a "first cut" at the material ID makes it a viable option for spectral unmixing of debris objects.
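Constrained linear least squares unmixing is commonly posed as abundance estimation under nonnegativity and sum-to-one constraints. A minimal sketch using the classic trick of appending a heavily weighted row of ones to a nonnegative least squares (NNLS) solve (the endmember matrix and abundances here are synthetic, not debris-material signatures):

```python
import numpy as np
from scipy.optimize import nnls

def fcls(E, y, delta=10.0):
    """Fully constrained least squares: abundances a >= 0, sum(a) = 1.
    The sum-to-one constraint is enforced softly by appending a heavily
    weighted row of ones to the endmember matrix (augmentation trick)."""
    k = E.shape[1]
    E_aug = np.vstack([E, delta * np.ones((1, k))])
    y_aug = np.append(y, delta)
    a, _ = nnls(E_aug, y_aug)
    return a

rng = np.random.default_rng(1)
E = rng.uniform(0.1, 1.0, size=(40, 3))   # 3 hypothetical material spectra
a_true = np.array([0.2, 0.5, 0.3])
y = E @ a_true                            # noiseless mixed measurement
a_hat = fcls(E, y)
```

Increasing `delta` enforces the sum-to-one constraint more strictly at the cost of down-weighting the spectral fit.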
Development of new UV-I. I. Cerenkov Viewing Device
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuribara, Masayuki; Nemoto, Koshichi
1994-02-01
The Cerenkov glow images from boiling-water reactor (BWR) and pressurized-water reactor (PWR) irradiated fuel assemblies are generally used for inspections. However, it is sometimes difficult or impossible to identify the image with the conventional Cerenkov Viewing Device (CVD), because of long cooling times and/or low burnup. A new UV-I.I. (Ultra-Violet light Image Intensifier) CVD has now been developed which can detect the very weak Cerenkov glow from spent fuel assemblies. Because this new device uses the newly developed proximity-focused type UV-I.I., Cerenkov photons are used efficiently, producing better quality Cerenkov glow images. Moreover, since the image is converted to a video signal, it is easy to improve the signal-to-noise ratio (S/N) with an image processor. The new CVD was tested at BWR and PWR power plants in Japan, with fuel burnups ranging from 6,200-33,000 MWD/MTU (megawatt days per metric ton of uranium) and cooling times ranging from 370 to 6,200 d. The tests showed that the new CVD is superior to the conventional STA/CRIEPI CVD, and could detect very feeble Cerenkov glow images using an image processor.
NASA Astrophysics Data System (ADS)
Masalmah, Yahya M.; Vélez-Reyes, Miguel
2007-04-01
The authors proposed in previous papers the use of the constrained Positive Matrix Factorization (cPMF) to perform unsupervised unmixing of hyperspectral imagery. Two iterative algorithms were proposed to compute the cPMF, based on the Gauss-Seidel and penalty approaches to solving optimization problems. Results presented in previous papers have shown the potential of the proposed method to perform unsupervised unmixing of HYPERION and AVIRIS imagery. The performance of iterative methods is highly dependent on the initialization scheme. A good initialization scheme can improve the convergence speed, the likelihood of finding a global minimum, and the likelihood that spectra with physical relevance are retrieved as endmembers. In this paper, different initializations using random selection, longest-norm pixels, and standard endmember selection routines are studied and compared using simulated and real data.
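The effect of initialization on an iterative factorization can be illustrated with a plain multiplicative-update NMF, used here as a stand-in for cPMF (which additionally enforces the constraints the abstract describes); the data, the random initialization, and the "longest-norm pixels" initialization are all synthetic:

```python
import numpy as np

def nmf_mu(V, W0, H0, n_iter=200, eps=1e-9):
    """Multiplicative-update NMF (Lee-Seung, Frobenius objective).
    Updates keep factors nonnegative and never increase the error."""
    W, H = W0.copy(), H0.copy()
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

rng = np.random.default_rng(2)
k, bands, pix = 3, 30, 200
V = rng.uniform(0.1, 1, (bands, k)) @ rng.uniform(0, 1, (k, pix))  # mixed pixels

# Initialization 1: random factors
W_r = rng.uniform(0.1, 1, (bands, k))
H_r = rng.uniform(0.1, 1, (k, pix))
# Initialization 2: longest-norm pixels as starting endmembers
idx = np.argsort(np.linalg.norm(V, axis=0))[-k:]
W_n = V[:, idx].copy()
H_n = rng.uniform(0.1, 1, (k, pix))

err = lambda W, H: np.linalg.norm(V - W @ H)
W1, H1 = nmf_mu(V, W_r, H_r)
W2, H2 = nmf_mu(V, W_n, H_n)
```

Comparing `err` before and after each run (and across initializations) is exactly the kind of study the abstract describes, since the final factorization and convergence speed both depend on the starting point.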
[Source apportionment of soil heavy metals in Jiapigou goldmine based on the UNMIX model].
Ai, Jian-chao; Wang, Ning; Yang, Jing
2014-09-01
This paper determines the concentrations of 16 metal elements in soil samples collected in the Jiapigou goldmine area on the upper Songhua River. The UNMIX model, recommended by the US EPA for source apportionment, was applied in this study, and Cd, Hg, Pb and Ag concentration contour maps were generated using the Kriging interpolation method to verify the results. The main conclusions of this study are: (1) the concentrations of Cd, Hg, Pb and Ag exceeded the Jilin Province soil background values and were obviously enriched in the soil samples; (2) the UNMIX model resolved four pollution sources: source 1 represents human activities of transportation, ore mining and garbage, with a contribution of 39.1%; source 2 represents the contribution of rock weathering and biological effects, with a contribution of 13.87%; source 3 is a composite source of soil parent material and chemical fertilizer, with a contribution of 23.93%; source 4 represents iron ore mining and transportation sources, with a contribution of 22.89%; (3) the UNMIX model results are in accordance with the survey of local land-use types, human activities and the Cd, Hg and Pb content distributions.
Quadratic Blind Linear Unmixing: A Graphical User Interface for Tissue Characterization
Gutierrez-Navarro, O.; Campos-Delgado, D.U.; Arce-Santana, E. R.; Jo, Javier A.
2016-01-01
Spectral unmixing is the process of breaking down data from a sample into its basic components and their abundances. Previous work has been focused on blind unmixing of multi-spectral fluorescence lifetime imaging microscopy (m-FLIM) datasets under a linear mixture model and quadratic approximations. This method provides a fast linear decomposition and can work without a limitation in the maximum number of components or end-members. Hence this work presents an interactive software which implements our blind end-member and abundance extraction (BEAE) and quadratic blind linear unmixing (QBLU) algorithms in Matlab. The options and capabilities of our proposed software are described in detail. When the number of components is known, our software can estimate the constitutive end-members and their abundances. When no prior knowledge is available, the software can provide a completely blind solution to estimate the number of components, the end-members and their abundances. The characterization of three case studies validates the performance of the new software: ex-vivo human coronary arteries, human breast cancer cell samples, and in-vivo hamster oral mucosa. The software is freely available in a hosted webpage by one of the developing institutions, and allows the user a quick, easy-to-use and efficient tool for multi/hyper-spectral data decomposition. PMID:26589467
NASA Astrophysics Data System (ADS)
Gu, Lingjia; Ren, Ruizhi; Zhao, Kai; Li, Xiaofeng
2014-01-01
The precision of snow parameter retrieval is unsatisfactory for current practical demands. The primary reason is the problem of mixed pixels, caused by the low spatial resolution of satellite passive microwave data. A snow passive microwave unmixing method is proposed in this paper, based on land cover type data and the antenna gain function of passive microwaves. The land cover of Northeast China is partitioned into grass, farmland, bare soil, forest, and water body types. The component brightness temperatures (CBT), namely the unmixed data, with 1 km resolution are obtained using the proposed unmixing method. Snow depths determined from the CBT with three snow depth retrieval algorithms are validated against field measurements taken in forest and farmland areas of Northeast China in January 2012 and 2013. The results show that the overall retrieval precision of the snow depth is improved by 17% in farmland areas and 10% in forest areas when using the CBT in comparison with the mixed pixels. The snow cover results based on the CBT are compared with existing MODIS snow cover products. The results demonstrate that more snow cover information can be obtained, with up to 86% accuracy.
Real-time trajectory optimization on parallel processors
NASA Technical Reports Server (NTRS)
Psiaki, Mark L.
1993-01-01
A parallel algorithm has been developed for rapidly solving trajectory optimization problems. The goal of the work has been to develop an algorithm suitable for real-time, on-line optimal guidance through repeated solution of a trajectory optimization problem. The algorithm has been developed on an INTEL iPSC/860 message-passing parallel processor. It uses a zero-order-hold discretization of a continuous-time problem and solves the resulting nonlinear programming problem using a custom-designed augmented Lagrangian nonlinear programming algorithm. The algorithm achieves parallelism of function, derivative, and search direction calculations through the principle of domain decomposition applied along the time axis. It has been encoded and tested on three example problems: the Goddard problem, the acceleration-limited planar minimum-time-to-the-origin problem, and a National Aerospace Plane minimum-fuel ascent guidance problem. Execution times as fast as 118 sec of wall clock time have been achieved for a 128-stage Goddard problem solved on 32 processors. A 32-stage minimum-time problem has been solved in 151 sec on 32 processors. A 32-stage National Aerospace Plane problem required 2 hours when solved on 32 processors. A speed-up factor of 7.2 has been achieved by using 32 nodes instead of 1 node to solve a 64-stage Goddard problem.
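The zero-order-hold discretization step mentioned in the abstract can be written in a few lines using the standard augmented-matrix exponential. This is a generic sketch for linear dynamics (the paper's problems are nonlinear, where such a step would apply to local linearizations):

```python
import numpy as np
from scipy.linalg import expm

def zoh_discretize(A, B, h):
    """Zero-order-hold discretization of dx/dt = A x + B u over step h.
    The upper blocks of expm([[A, B], [0, 0]] * h) give (Ad, Bd), with
    x[k+1] = Ad x[k] + Bd u[k] for piecewise-constant u."""
    n, m = B.shape
    M = np.zeros((n + m, n + m))
    M[:n, :n] = A
    M[:n, n:] = B
    Md = expm(M * h)
    return Md[:n, :n], Md[:n, n:]

# Double integrator (e.g., one axis of a point-mass trajectory stage)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Ad, Bd = zoh_discretize(A, B, 0.1)
```

For the double integrator the result is exact: `Ad = [[1, h], [0, 1]]` and `Bd = [[h**2 / 2], [h]]`.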
NASA Astrophysics Data System (ADS)
Lascu, I.; Harrison, R. J.
2016-12-01
First-order reversal curve (FORC) diagrams are a powerful method to characterise the hysteresis properties of magnetic grain ensembles. Methods of processing, analysis and simulation of FORC diagrams have developed rapidly over the past few years, dramatically expanding their utility within rock magnetic research. Here we announce the latest release of FORCinel (Version 3.0), which integrates many of these developments into a unified, user-friendly package running within Igor Pro (www.wavemetrics.com). FORCinel v. 3.0 can be downloaded from https://wserv4.esc.cam.ac.uk/nanopaleomag/. The release will be accompanied by a series of video tutorials outlining each of the new features, including: i) improved workflow, with a unified smoothing approach; ii) increased processing speed using multiple processors; iii) control of output resolution, enabling large datasets (> 500 FORCs) to be smoothed in a matter of seconds; iv) loading, processing, analysing and averaging of multiple FORC diagrams; v) loading and processing of non-gridded data and data acquired on non-PMC systems; vi) an improved method for exploring optimal smoothing parameters; vii) interactive and undoable data pretreatments; viii) automated detection and removal of measurement outliers; ix) an improved interactive method for the generation and optimisation of colour scales; x) full integration with FORCem [1] for supervised quantitative unmixing of FORC diagrams using principal component analysis (PCA); xi) full integration with FORCulator [2] for micromagnetic simulation of FORC diagrams; xii) simulation of TRM acquisition using the kinetic Monte Carlo algorithm of Shcherbakov [3]. 1. Lascu, I., Harrison, R.J., Li, Y., Muraszko, J.R., Channell, J.E.T., Piotrowski, A.M., Hodell, D.A., 2015. Magnetic unmixing of first-order reversal curve diagrams using principal component analysis. Geochemistry, Geophys. Geosystems 16, 2900-2915. 2. Harrison, R.J., Lascu, I., 2014.
FORCulator: A micromagnetic tool for simulating first-order reversal curve diagrams. Geochemistry Geophys. Geosystems 15, 4671-4691. 3. Shcherbakov, V.P., Lamash, B.E., Sycheva, N.K., 1995. Monte-Carlo modelling of thermoremanence acquisition in interacting single-domain grains. Phys. Earth Planet. Inter. 87, 197-211.
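The PCA unmixing of FORC diagrams described in Lascu et al. (2015) exploits the fact that a binary mixing series with sum-to-one proportions collapses onto a single principal component. A toy numpy sketch with synthetic stand-in "end-member" distributions (flattened to vectors; not real FORC data):

```python
import numpy as np

# Two synthetic stand-ins for end-member FORC distributions, flattened
rng = np.random.default_rng(3)
em1 = rng.standard_normal(500)
em2 = rng.standard_normal(500)
mix = rng.uniform(0, 1, 20)                         # mixing proportions
data = np.outer(mix, em1) + np.outer(1 - mix, em2)  # 20 specimens x 500 bins

X = data - data.mean(axis=0)          # center before PCA
U, s, Vt = np.linalg.svd(X, full_matrices=False)
scores = U * s                        # PC scores for each specimen

# A sum-to-one binary mixture varies along em1 - em2 only, so the
# centered data have exactly one significant singular value
n_sig = int(np.sum(s > 1e-8 * s[0]))
```

Real FORC data add noise and possibly more end-members, so the number of retained components is chosen from the singular-value spectrum rather than assumed.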
Processing of thermionic power on an electrically propelled spacecraft
NASA Technical Reports Server (NTRS)
Macie, T. W.
1973-01-01
A study to define the power processing equipment required between a thermionic reactor and an array of mercury-ion thrusters for a nuclear electric propulsion system is reported. Observations and recommendations that resulted from this study were: (1) the preferred thermionic-fuel-element source voltages are 23 V or higher; (2) transistor characteristics exert a strong effect on power processor mass; (3) the power processor mass could be considerably reduced should magnetic materials that exhibit low losses at high frequencies, have a high Curie point, and can operate at 15 to 20 kG become available; (4) electrical component packaging on the radiator could reduce the area that is sensitive to meteoroid penetration, thereby reducing the meteoroid shielding mass requirement; (5) an experimental model of the power processor design should be built and tested to verify the efficiencies, masses, and all the automatic operational aspects of the design.
A Methodology for Distributing the Corporate Database.
ERIC Educational Resources Information Center
McFadden, Fred R.
The trend to distributed processing is being fueled by numerous forces, including advances in technology, corporate downsizing, increasing user sophistication, and acquisitions and mergers. Increasingly, the trend in corporate information systems (IS) departments is toward sharing resources over a network of multiple types of processors, operating…
Purifier-integrated methanol reformer for fuel cell vehicles
NASA Astrophysics Data System (ADS)
Han, Jaesung; Kim, Il-soo; Choi, Keun-Sup
We developed a compact, 3-kW, purifier-integrated modular reformer which becomes the building block of full-scale 30-kW or 50-kW methanol fuel processors for fuel cell vehicles. Our proprietary technologies for hydrogen purification by a composite metal membrane and catalytic combustion by a washcoated wire-mesh catalyst were combined with conventional methanol steam-reforming technology, resulting in higher conversion, excellent quality of product hydrogen, and better thermal efficiency than other systems using preferential oxidation. In this system, steam reforming, hydrogen purification, and catalytic combustion all take place in a single reactor, so the whole system is compact and easy to operate. Hydrogen from the module is of ultra-high purity (99.9999% or better), hence there is no power degradation of the PEMFC stack due to contamination by CO. Also, since only pure hydrogen is supplied to the anode of the PEMFC stack, 100% hydrogen utilization is possible in the stack. The module produces 2.3 Nm3/h of hydrogen, which is equivalent to 3 kW when the PEMFC has 43% efficiency. The thermal efficiency (HHV of product H2/HHV of MeOH in) of the module is 89% and the power density of the module is 0.77 kW/l. This work was conducted in cooperation with Hyundai Motor Company in the form of a Korean national project. Currently the module is under test with an actual fuel cell stack in order to verify its performance. A full-scale 30-kW system will subsequently be constructed by connecting these modules in series and parallel and will serve as the fuel processor for the first Korean fuel cell hybrid vehicle.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-06
...EPA published a direct final rule on October 9, 2012 to amend the definition of heating oil in 40 CFR 80.1401 in the Renewable Fuel Standard (``RFS'') program under section 211(o) of the Clean Air Act. The direct final rule also amended requirements under EPA's diesel sulfur program related to the sulfur content of locomotive and marine diesel fuel produced by transmix processors, and the fuel marker requirements for 500 ppm sulfur locomotive and marine (LM) diesel fuel to allow for solvent yellow 124 marker to transition out of the distribution system. Because EPA received adverse comments on the heating oil definition and transmix amendments, we are withdrawing those portions of the direct final rule. Because EPA did not receive adverse comments with respect to the yellow marker amendments, those amendments will become effective as indicated in the direct final rule.
Analysis of Forest Foliage Using a Multivariate Mixture Model
NASA Technical Reports Server (NTRS)
Hlavka, C. A.; Peterson, David L.; Johnson, L. F.; Ganapol, B.
1997-01-01
Wet chemical measurements and near-infrared spectra of ground leaf samples were analyzed to test a multivariate regression technique for estimating component spectra, based on a linear mixture model for absorbance. The resulting unmixed spectra for carbohydrates, lignin, and protein resemble the spectra of extracted plant starches, cellulose, lignin, and protein. The unmixed protein spectrum has prominent absorption features at wavelengths which have been associated with nitrogen bonds.
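The multivariate regression in this abstract estimates component spectra from known constituent fractions under a linear absorbance mixture model (absorbances of constituents add). A minimal sketch with synthetic data (sample counts, band counts, and spectra are all invented):

```python
import numpy as np

rng = np.random.default_rng(4)
n_samples, n_bands, k = 60, 80, 3   # leaf samples, NIR bands, constituents

S_true = rng.uniform(0, 1, (k, n_bands))   # component absorbance spectra
C = rng.uniform(0.1, 1.0, (n_samples, k))  # wet-chemistry constituent fractions
A = C @ S_true                              # linear mixture: absorbances add

# Multivariate regression: recover the component spectra from C and A
# by solving C @ S = A for S in the least squares sense
S_hat, *_ = np.linalg.lstsq(C, A, rcond=None)
```

With noisy real spectra the recovered rows are only approximations, which is why the paper compares them against spectra of extracted pure constituents.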
Unmixing Space Object’s Moderate Resolution Spectra
2013-09-01
…result of spectral unmixing. In the visible, the non-resolved spectral signature is modeled as a linear mixture of spectral reflectance signatures… In the objective function, the first term expresses the Euclidean distance (l2) between the observed data and the forward model; the second term is an l1 sparsity penalty…
Unsupervised Unmixing of Hyperspectral Images Accounting for Endmember Variability.
Halimi, Abderrahim; Dobigeon, Nicolas; Tourneret, Jean-Yves
2015-12-01
This paper presents an unsupervised Bayesian algorithm for hyperspectral image unmixing, accounting for endmember variability. The pixels are modeled by a linear combination of endmembers weighted by their corresponding abundances. However, the endmembers are assumed random to consider their variability in the image. An additive noise is also considered in the proposed model, generalizing the normal compositional model. The proposed algorithm exploits the whole image to benefit from both spectral and spatial information. It estimates both the mean and the covariance matrix of each endmember in the image. This allows the behavior of each material to be analyzed and its variability to be quantified in the scene. A spatial segmentation is also obtained based on the estimated abundances. In order to estimate the parameters associated with the proposed Bayesian model, we propose to use a Hamiltonian Monte Carlo algorithm. The performance of the resulting unmixing strategy is evaluated through simulations conducted on both synthetic and real data.
Collewet, Guylaine; Moussaoui, Saïd; Deligny, Cécile; Lucas, Tiphaine; Idier, Jérôme
2018-06-01
Multi-tissue partial volume estimation in MRI images is investigated from a viewpoint related to spectral unmixing as used in hyperspectral imaging. The main contribution of this paper is twofold. It first proposes a theoretical analysis of the statistical optimality conditions of the proportion estimation problem, which in the context of multi-contrast MRI data acquisition allows the imaging sequence parameters to be set appropriately. Second, an efficient proportion quantification algorithm is proposed, based on the minimisation of a penalised least-squares criterion incorporating a regularity constraint on the spatial distribution of the proportions. The resulting developments are discussed using empirical simulations. The practical usefulness of the spectral unmixing approach for partial volume quantification in MRI is illustrated through an application to food analysis, on the proving of a Danish pastry. Copyright © 2018 Elsevier Inc. All rights reserved.
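A penalised least-squares criterion with a spatial regularity term can be sketched for a simple two-tissue, 1-D profile case. The tissue signals, the three contrasts, and the first-difference regulariser below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100                                   # pixels along a 1-D profile
m1 = np.array([1.0, 0.2, 0.5])            # tissue-1 signal across 3 contrasts
m2 = np.array([0.3, 0.8, 0.4])            # tissue-2 signal
p_true = np.linspace(0.1, 0.9, n)         # smoothly varying proportion
Y = np.outer(p_true, m1) + np.outer(1 - p_true, m2)
Y += 0.01 * rng.standard_normal(Y.shape)  # acquisition noise

# Penalised least squares:
#   min_p  sum_i ||Y_i - m2 - p_i d||^2 + lam * ||D p||^2,  d = m1 - m2,
# which has the closed form (d'd I + lam D'D) p = r.
d = m1 - m2
r = (Y - m2) @ d                          # per-pixel data term d' (Y_i - m2)
D = np.diff(np.eye(n), axis=0)            # first-difference operator
lam = 0.5
p_hat = np.linalg.solve((d @ d) * np.eye(n) + lam * D.T @ D, r)
```

The regulariser couples neighbouring pixels, smoothing the noisy per-pixel estimates while leaving a slowly varying proportion profile essentially unbiased.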
NASA Astrophysics Data System (ADS)
Pu, Huangsheng; Zhang, Guanglei; He, Wei; Liu, Fei; Guang, Huizhi; Zhang, Yue; Bai, Jing; Luo, Jianwen
2014-09-01
It is a challenging problem to resolve and identify drug (or non-specific fluorophore) distribution throughout the whole body of small animals in vivo. In this article, an algorithm for unmixing multispectral fluorescence tomography (MFT) images based on independent component analysis (ICA) is proposed to solve this problem. ICA is used to unmix the data matrix assembled from the reconstruction results of MFT. The independent components (ICs), which represent spatial structures, and the corresponding spectrum courses (SCs), which are associated with spectral variations, can then be obtained. By combining the ICs with the SCs, the recovered MFT images can be generated and fluorophore concentrations can be calculated. Simulation studies, phantom experiments and animal experiments with different concentration contrasts and spectrum combinations are performed to test the performance of the proposed algorithm. Results demonstrate that the proposed algorithm can not only provide the spatial information of fluorophores, but also recover accurate reconstructions of the MFT images.
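ICA-based unmixing separates a data matrix into statistically independent components. The following is a deliberately minimal two-signal sketch using whitening plus a kurtosis-maximising rotation, a projection-pursuit stand-in for full ICA rather than the authors' MFT pipeline; all signals are synthetic:

```python
import numpy as np

def unmix_two_sources(X):
    """Minimal ICA for two mixed signals: whiten, then grid-search the
    rotation angle that maximises non-Gaussianity (squared kurtosis)."""
    Xc = X - X.mean(axis=0)
    # ZCA whitening via eigendecomposition of the covariance
    cov = Xc.T @ Xc / len(Xc)
    d, E = np.linalg.eigh(cov)
    Z = Xc @ E @ np.diag(d ** -0.5) @ E.T
    kurt = lambda s: np.mean(s**4) / np.mean(s**2) ** 2 - 3.0
    best, best_score = None, -1.0
    for theta in np.linspace(0, np.pi / 2, 180):
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        S = Z @ R
        score = kurt(S[:, 0]) ** 2 + kurt(S[:, 1]) ** 2
        if score > best_score:
            best, best_score = S, score
    return best

rng = np.random.default_rng(6)
t = np.linspace(0, 8 * np.pi, 2000)
s1 = np.sin(t)                            # independent source 1
s2 = rng.uniform(-1, 1, t.size)           # independent source 2
X = np.c_[s1, s2] @ np.array([[0.7, 0.3], [0.4, 0.6]])  # observed mixtures
S_hat = unmix_two_sources(X)
```

As in all ICA, the recovered components come back in arbitrary order and sign, so they are matched to sources by correlation magnitude.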
Spectral Unmixing With Multiple Dictionaries
NASA Astrophysics Data System (ADS)
Cohen, Jeremy E.; Gillis, Nicolas
2018-02-01
Spectral unmixing aims at recovering the spectral signatures of materials, called endmembers, mixed in a hyperspectral or multispectral image, along with their abundances. A typical assumption is that the image contains one pure pixel per endmember, in which case spectral unmixing reduces to identifying these pixels. Many fully automated methods have been proposed in recent years, but little work has been done to allow users to select areas where pure pixels are present, either manually or using a segmentation algorithm. Additionally, in a non-blind approach, several spectral libraries may be available rather than a single one, with a fixed number (or an upper or lower bound) of endmembers to choose from each. In this paper, we propose a multiple-dictionary constrained low-rank matrix approximation model that addresses these two problems. We propose an algorithm to compute this model, dubbed M2PALS, and its performance is discussed on both synthetic and real hyperspectral images.
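Under the pure-pixel assumption discussed above, unmixing reduces to identifying one pure pixel per endmember. One standard way to do this is the successive projection algorithm, shown here as a generic illustration on synthetic data (M2PALS itself is a different, multiple-dictionary method):

```python
import numpy as np

def spa(X, k):
    """Successive projection algorithm: greedily select k 'pure pixel'
    columns of X (bands x pixels).  At each step the max-norm residual
    column is a vertex of the abundance simplex, i.e. a pure pixel."""
    R = X.astype(float).copy()
    selected = []
    for _ in range(k):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))
        selected.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R = R - np.outer(u, u @ R)   # project out the selected direction
    return selected

rng = np.random.default_rng(7)
E = rng.uniform(0, 1, (20, 3))               # 3 hypothetical endmembers
A = rng.dirichlet(np.ones(3), size=97).T     # mixed-pixel abundances
A = np.hstack([np.eye(3), A])                # pure pixels at columns 0, 1, 2
X = E @ A                                    # noiseless image, 100 pixels
picked = sorted(spa(X, 3))
```

Restricting the search to user-selected regions, as the abstract suggests, would simply mean running the same selection over a subset of columns.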
Advanced Diesel Oil Fuel Processor Development
1986-06-01
(Figure-legend fragment: numbered component callouts 29-37 for the fuel processor test rig, including sample quencher gas-sample and cooling-water lines, desulfuriser inlet and exit lines, a heat exchanger process-gas exit feeding the desulfuriser, and shift reactor inlet and cooling-air exit.)
Long-term retention as a function of word concreteness under conditions of free recall.
Postman, L; Burns, S
1974-07-01
Acquisition and long-term retention of concrete (C) and abstract (A) words were investigated under conditions of multiple-trial free recall. Both unmixed and mixed lists were used in original learning. Retention was tested either 1 min or 1 week after attainment of the learning criterion. Acquisition was faster and retention was higher for C than for A words. These differences were more pronounced for mixed than for unmixed lists.
Jonasson, U; Jonasson, B; Saldeen, T
1999-07-26
In Sweden, the frequency of fatal poisoning by dextropropoxyphene (DXP) ingestion is consistently high. There are seven preparations containing DXP on the Swedish market; in three of them DXP is the sole analgesic ingredient, while four are combinations of analgesics. In an attempt to assess the death rate attributable to each DXP preparation on the basis of toxicological analyses, altogether 834 cases of dextropropoxyphene-related death over a 5-year period (1992-1996) in Sweden were reviewed. The ratio between the number of fatal poisonings and the prescription of defined daily doses per 1000 inhabitants during a 12-month period (DDD) was determined. The highest ratio, 27, was attributed to unmixed preparations. The ratio for DXP + paracetamol-related deaths was 6.3, and for DXP + phenazone, 6.4, while the lowest ratio, 2, was found among the DXP + chlorzoxazone cases. The unmixed preparations, representing 26% of all DXP prescriptions during the study years, were implicated in 62% of the DXP fatalities, a considerable over-representation. Unmixed preparations, with their higher content of DXP, may be more attractive to many consumers because of their narcotic (euphoric) effects rather than for any analgesic superiority. Another possibility is that unmixed preparations may erroneously have been regarded as safer than those combined with paracetamol, as poisonings with compounds containing DXP + paracetamol have been most frequently reported, probably due to their predominance on the market.
What Stroop tasks can tell us about selective attention from childhood to adulthood.
Wright, Barlow C
2017-08-01
A rich body of research concerns the causes of Stroop effects as well as applications of Stroop tasks. However, several questions remain. We included assessment of errors with children and adults (N = 316), who sat either a task wherein each block employed only trials of one type (unmixed task) or one wherein every block comprised a mix of congruent, neutral, and incongruent trials (mixed task). Children responded more slowly than adults and made more errors on each task. Contrary to some previous studies, interference (the difference between the neutral and incongruent conditions) showed no reaction time (RT) differences by group or task, although there were differences in errors. By contrast, facilitation (the difference between the neutral and congruent conditions) was greater in children than adults, and greater on the unmixed task than the mixed task. After considering a number of theoretical accounts, we settle on the inadvertent word-reading hypothesis, whereby facilitation stems from children and the unmixed task promoting inadvertent reading, particularly in the congruent condition. The stability of interference RT is explained by fixed semantic differences between the neutral and incongruent conditions, for children versus adults and for the unmixed versus mixed task. We conclude that utilizing the two tasks together may reveal more about how attention is affected in other groups. © 2016 The Authors. British Journal of Psychology published by John Wiley & Sons Ltd on behalf of the British Psychological Society.
40 CFR 279.70 - Applicability.
Code of Federal Regulations, 2010 CFR
2010-07-01
... THE MANAGEMENT OF USED OIL Standards for Used Oil Fuel Marketers § 279.70 Applicability. (a) Any... forth in § 279.11. (b) The following persons are not marketers subject to this subpart: (1) Used oil... oil to processor/re-refiners who incidentally burn used oil are not marketers subject to this Subpart...
Onboard fuel reformers for fuel cell vehicles: Equilibrium, kinetic and system modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kreutz, T.G.; Steinbugler, M.M.; Ogden, J.M.
1996-12-31
On-board reforming of liquid fuels to hydrogen for use in proton exchange membrane (PEM) fuel cell electric vehicles (FCEVs) has been the subject of numerous investigations. In many respects, liquid fuels represent a more attractive method of carrying hydrogen than compressed hydrogen itself, promising greater vehicle range, shorter refilling times, increased safety, and perhaps most importantly, utilization of the current fuel distribution infrastructure. The drawbacks of on-board reformers include their inherent complexity [for example, a POX reactor includes a fuel vaporizer, a reformer, water-gas shift reactors, a preferential oxidation (PROX) unit for CO cleanup, heat exchangers for thermal integration, sensors and controls, etc.], weight, and expense relative to compressed H{sub 2}, as well as degraded fuel cell performance due to the presence of inert gases and impurities in the reformate. Partial oxidation (POX) of automotive fuels is another alternative for hydrogen production. This paper provides an analysis of POX reformers and a fuel economy comparison of vehicles powered by on-board POX and SRM fuel processors.
Development of a Solid-Oxide Fuel Cell/Gas Turbine Hybrid System Model for Aerospace Applications
NASA Technical Reports Server (NTRS)
Freeh, Joshua E.; Pratt, Joseph W.; Brouwer, Jacob
2004-01-01
Recent interest in fuel cell-gas turbine hybrid applications for the aerospace industry has led to the need for accurate computer simulation models to aid in system design and performance evaluation. To meet this requirement, solid oxide fuel cell (SOFC) and fuel processor models have been developed and incorporated into the Numerical Propulsion Systems Simulation (NPSS) software package. The SOFC and reformer models solve systems of equations governing steady-state performance using common theoretical and semi-empirical terms. An example hybrid configuration is presented that demonstrates the new capability as well as the interaction with pre-existing gas turbine and heat exchanger models. Finally, a comparison of calculated SOFC performance with experimental data is presented to demonstrate model validity. Keywords: Solid Oxide Fuel Cell, Reformer, System Model, Aerospace, Hybrid System, NPSS
Sensitive Dual Color In Vivo Bioluminescence Imaging Using a New Red Codon Optimized Firefly Luciferase and a Green Click Beetle Luciferase
2011-04-01
...20 nm). Spectral unmixing algorithms were applied to the images, where good separation of signals was observed. Furthermore, HEK293 cells that... spectral emissions using a suitable spectral unmixing algorithm. This new D-luciferin-dependent reporter gene couplet opens up the possibility in the future...
Fuel-rich catalytic combustion: A fuel processor for high-speed propulsion
NASA Technical Reports Server (NTRS)
Brabbs, Theodore A.; Rollbuhler, R. James; Lezberg, Erwin A.
1990-01-01
Fuel-rich catalytic combustion of Jet-A fuel was studied over the equivalence ratio range 4.7 to 7.8, which yielded combustion temperatures of 1250 to 1060 K. The process was soot-free and the gaseous products were similar to those obtained in the iso-octane study. A carbon atom balance across the catalyst bed calculated for the gaseous products accounted for about 70 to 90 percent of the fuel carbon; the balance was condensed as a liquid in the cold trap. It was shown that 52 to 77 percent of the fuel carbon was C1, C2, and C3 molecules. The viability of using fuel-rich catalytic combustion as a technique for preheating a practical fuel to very high temperatures was demonstrated. Preliminary results from the scaled up version of the catalytic combustor produced a high-temperature fuel containing large amounts of hydrogen and carbon monoxide. The balance of the fuel was completely vaporized and in various stages of pyrolysis and oxidation. Visual observations indicate that there was no soot present.
Present Status and Extensions of the Monte Carlo Performance Benchmark
NASA Astrophysics Data System (ADS)
Hoogenboom, J. Eduard; Petrovic, Bojan; Martin, William R.
2014-06-01
The NEA Monte Carlo Performance benchmark started in 2011 with the aim of monitoring, over the years, the ability to perform a full-size Monte Carlo reactor core calculation with a detailed power production for each fuel pin with axial distribution. This paper gives an overview of the results contributed thus far. It shows that reaching a statistical accuracy of 1% for most of the small fuel zones requires about 100 billion neutron histories. The efficiency of parallel execution of Monte Carlo codes on a large number of processor cores shows clear limitations for computer clusters with common-type computer nodes. However, on true supercomputers the speedup of parallel calculations continues to increase up to large numbers of processor cores. More experience is needed from calculations on true supercomputers using large numbers of processors in order to predict whether the requested calculations can be done in a short time. As the specifications of the reactor geometry for this benchmark test are well suited for further investigations of full-core Monte Carlo calculations, and a need is felt for testing issues other than computational performance, proposals are presented for extending the benchmark to a suite of benchmark problems: for evaluating fission source convergence for a system with a high dominance ratio, for coupling with thermal-hydraulics calculations to evaluate the use of different temperatures and coolant densities, and for studying the correctness and effectiveness of burnup calculations. Moreover, other contemporary proposals for a full-core calculation with realistic geometry and material composition are discussed.
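The 1/√N statistical convergence that underlies the "about 100 billion histories for 1%" figure can be sketched numerically. The sample values below are illustrative, not taken from the benchmark results:

```python
import math

def histories_for_accuracy(rel_err_single, n_single, target_rel_err):
    """Scale a neutron-history count to reach a target relative error,
    using the Monte Carlo 1/sqrt(N) convergence law."""
    # rel_err ~ c / sqrt(N)  =>  c = rel_err_single * sqrt(n_single)
    c = rel_err_single * math.sqrt(n_single)
    return (c / target_rel_err) ** 2

# Illustrative: if a pin tally shows 10% relative error after 1e9
# histories, reaching 1% requires 100x more histories.
n_needed = histories_for_accuracy(0.10, 1e9, 0.01)
print(f"{n_needed:.2e}")  # 1.00e+11
```

The quadratic cost of tightening the error bar is exactly why full-core pin-wise tallies push toward supercomputer-scale runs.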
System Design of a Natural Gas PEM Fuel Cell Power Plant for Buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joe Ferrall, Tim Rehg, Vesna Stanic
2000-09-30
The following conclusions are made based on this analysis effort: (1) High-temperature PEM data are not available; (2) A stack development effort is required for Phase II; (3) System results are by definition preliminary, mostly due to the immaturity of the high-temperature stack; other components of the system are relatively well defined; (4) The Grotthuss conduction mechanism yields the preferred system characteristics; the Grotthuss conduction mechanism is also much less technically mature than the vehicular mechanism; (5) Fuel processor technology is available today and can be procured for Phase II (steam or ATR); (6) The immaturity of high-temperature membrane technology requires that a robust system design be developed in Phase II that is capable of operating over a wide temperature and pressure range - (a) Unpressurized or pressurized PEM (Grotthuss mechanism) at 140 C: highest temperature most favorable, lowest water requirement most favorable, pressurized recommended for base-loaded operation, unpressurized may be preferred for load following; (b) Pressurized PEM (vehicular mechanism) at about 100 C: pressure required for saturation, fuel cell technology currently available, stack development required. The system analysis and screening evaluation resulted in the identification of the following components for the most promising system: (1) Steam reforming fuel processor; (2) Grotthuss mechanism fuel cell stack operating at 140 C; (3) Means to deliver system waste heat to a cogeneration unit; (4) Pressurized system utilizing a turbocompressor for a base-load power application. If duty cycling is anticipated, the benefits of compression may be offset by the complexity of control. In this case (and even in the base-loaded case), the turbocompressor can be replaced with a blower for low-pressure operation.
Reactant gas composition for fuel cell potential control
Bushnell, Calvin L.; Davis, Christopher L.
1991-01-01
A fuel cell (10) system in which a nitrogen (N.sub.2) gas is used on the anode section (11) and a nitrogen/oxygen (N.sub.2 /O.sub.2) gaseous mix is used on the cathode section (12) to maintain the cathode at an acceptable voltage potential during adverse conditions occurring particularly during off-power conditions, for example, during power plant shutdown, start-up and hot holds. During power plant shutdown, the cathode section is purged with a gaseous mixture of, for example, one-half percent (0.5%) oxygen (O.sub.2) and ninety-nine and a half percent (99.5%) nitrogen (N.sub.2) supplied from an ejector (21) bleeding in air (24/28) into a high pressure stream (27) of nitrogen (N.sub.2) as the primary or majority gas. Thereafter the fuel gas in the fuel processor (31) and the anode section (11) is purged with nitrogen gas to prevent nickel (Ni) carbonyl from forming from the shift catalyst. A switched dummy electrical load (30) is used to bring the cathode potential down rapidly during the start of the purges. The 0.5%/99.5% O.sub.2 /N.sub.2 mixture maintains the cathode potential between 0.3 and 0.7 volts, and this is sufficient to maintain the cathode potential at 0.3 volts for the case of H.sub.2 diffusing to the cathode through a 2 mil thick electrolyte filled matrix and below 0.8 volts for no diffusion at open circuit conditions. The same high pressure gas source (20) is used via a "T" juncture ("T") to purge the anode section and its associated fuel processor (31).
Rotational Spectral Unmixing of Exoplanets: Degeneracies between Surface Colors and Geography
NASA Astrophysics Data System (ADS)
Fujii, Yuka; Lustig-Yaeger, Jacob; Cowan, Nicolas B.
2017-11-01
Unmixing the disk-integrated spectra of exoplanets provides hints about heterogeneous surfaces that we cannot directly resolve in the foreseeable future. It is particularly important for terrestrial planets with diverse surface compositions like Earth. Although previous work on unmixing the spectra of Earth from disk-integrated multi-band light curves appeared successful, we point out a mathematical degeneracy between the surface colors and their spatial distributions. Nevertheless, useful constraints on the spectral shape of individual surface types may be obtained from the premise that albedo is everywhere between 0 and 1. We demonstrate the degeneracy and the possible constraints using both mock data based on a toy model of Earth, as well as real observations of Earth. Despite the severe degeneracy, we are still able to recover an approximate albedo spectrum for an ocean. In general, we find that surfaces are easier to identify when they cover a large fraction of the planet and when their spectra approach zero or unity in certain bands.
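The color-geography degeneracy described above can be reproduced in a few lines: any invertible mixing of the endmember spectra and the inverse mixing of the geography reproduces the light curve exactly. The toy matrices below are invented for illustration and are not the paper's model of Earth:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy forward model: disk-integrated multi-band light curve D = A @ G,
# with A (n_bands x n_surfaces) albedo spectra in [0, 1] and
# G (n_surfaces x n_phases) area fractions whose columns sum to 1.
A = rng.uniform(0.1, 0.9, size=(5, 2))
G = rng.dirichlet([1.0, 1.0], size=8).T
D = A @ G

# Any invertible T yields an alternative factorization fitting the data
# exactly -- the degeneracy the paper points out. Only the physical
# constraints (0 <= albedo <= 1, non-negative areas) limit admissible T.
T = np.array([[0.8, 0.3], [0.2, 0.7]])   # columns sum to 1
A2, G2 = A @ T, np.linalg.inv(T) @ G
assert np.allclose(D, A2 @ G2)           # identical light curves
print(np.abs(D - A2 @ G2).max())
```

Because the columns of T sum to one, the transformed geography G2 still sums to one per phase, so the degeneracy survives the normalization constraint; only the boundedness of albedo cuts it down.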
NASA Technical Reports Server (NTRS)
Kaufman, A.
1982-01-01
The on-site system application analysis is summarized. Preparations were completed for the first test of a full-sized single cell. Emphasis of the methanol fuel processor development program shifted toward the use of commercial shell-and-tube heat exchangers. An improved method for predicting the carbon-monoxide tolerance of anode catalysts is described. Other stack support areas reported include improved ABA bipolar plate bonding technology, improved electrical measurement techniques for specification-testing of stack components, and anodic corrosion behavior of carbon materials.
Quantitative detection of settled dust over green canopy
NASA Astrophysics Data System (ADS)
Brook, Anna
2016-04-01
The main task of environmental and geoscience applications is efficient and accurate quantitative classification of Earth surfaces and spatial phenomena. In the past decade, there has been significant interest in employing hyperspectral unmixing to retrieve accurate quantitative information latent in hyperspectral imagery data. Recently, ground-truth and laboratory-measured spectral signatures, promoted by advanced algorithms, have been proposed as a new path toward solving the unmixing problem of hyperspectral imagery in a semi-supervised fashion. This paper suggests that the sensitivity of sparse unmixing techniques provides an ideal approach to extracting and identifying dust settled over green vegetation canopy using hyperspectral airborne data. Atmospheric dust transports a variety of chemicals, some of which pose a risk to the ecosystem and human health (Kaskaoutis et al., 2008). Many studies deal with the impact of dust on particulate matter (PM) and atmospheric pollution. Considering the potential impact of industrial pollutants, one of the most important considerations is that suspended PM can have both a physical and a chemical impact on plants, soils, and water bodies. Not only can particles covering surfaces cause physical distortion, but particles of diverse origin and different chemistries can also serve as chemical stressors and cause irreversible damage. Sediment dust load in an indoor environment can be spectrally assessed using reflectance spectroscopy (Chudnovsky and Ben-Dor, 2009). Small amounts of particulate pollution that may carry the signature of a forthcoming environmental hazard are of key interest when considering the effects of pollution. According to the most basic distribution dynamics, dust consists of suspended particulate matter in a fine state of subdivision that is raised and carried by wind.
In this context, it is increasingly important to first understand the distribution dynamics of pollutants, and subsequently to develop dedicated tools and measures to control and monitor pollutants in the free environment. The earliest effect of settled polluted dust particles is not always reflected in poor conditions of vegetation or soils, or in any visible damage. In most cases there is a long accumulation process that escalates from a polluted condition to a long-term environmental hazard. Although experiments conducted with pollutant analog powders under controlled conditions have tended to confirm the findings from field studies (Brook, 2014), a major criticism of all these experiments is their short duration. The resulting conclusion is that it is difficult, if not impossible, to determine the implications of long-term exposure to realistic concentrations of pollutants from such short-term studies. Hyperspectral remote sensing (HRS) has become a common tool for environmental and geoscience applications. HRS has opened new opportunities for exploring a wide range of materials and evaluating a variety of natural processes due to its detailed, specific, and extensive information on spectral and spatial distributions. Hyperspectral unmixing (HU) is the technique of inferring the category types that constitute a mixed pixel and their mixing ratios (Keshava and Mustard, 2002). In general, the task of unmixing is to decompose the reflectance spectrum of each pixel into a set of endmembers, or principal combined spectra, and their corresponding abundances (Bioucas-Dias et al., 2012). This study suggests that the sensitivity of sparse unmixing techniques provides an ideal approach to extract and identify dust settled over green vegetation canopy using hyperspectral airborne data.
Among the available techniques, this study presents results for seven linear and non-linear unmixing algorithms: 1) Non-negative Matrix Factorization (NMF), 2) L1 sparsity-constrained NMF (L1-NMF), 3) L1/2 sparsity-constrained NMF (L1/2-NMF), 4) Graph-regularized NMF (G-NMF), 5) Structured Sparse NMF (SS-NMF), 6) Alternating Least Squares (ALS), and 7) Lin's Projected Gradient (LPG). The performance is evaluated on real hyperspectral imagery data via detailed experimental assessment. The study showed that state-of-the-art solutions provide content-adapted sparse representations in certain compression tasks. The NMF algorithm estimates endmembers that are used to remove spurious information. If computationally feasible, it should include interaction terms to make the model more flexible. The optimal NMF algorithms, such as ALS and LPG, are assumed to be the simplest methods that achieve the minimum error on the test set. In summary, this work shows that sediment dust can be assessed using airborne HSI data, making it a potentially powerful tool for environmental studies. References: Keshava, N., Mustard, J. (2002). Spectral unmixing. IEEE Signal Process. Mag., 19(1), 44-57. Chudnovsky, A., & Ben-Dor, E. (2009). Reflectance spectroscopy as a tool for settled dust monitoring in office environment. International Journal of Environment and Waste Management, 4(1), 32-49. Brook, A. (2014). Quantitative detection of settled dust over green canopy using sparse unmixing of airborne hyperspectral data. IEEE WHISPERS 6th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing, Switzerland, 4-8. Bioucas-Dias, J. M., et al. (2012). Hyperspectral unmixing overview: Geometrical, statistical, and sparse regression-based approaches. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 5(2), 354-379.
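As a rough illustration of the sparsity-constrained NMF family listed above (not the authors' implementation), a minimal L1-penalized multiplicative-update NMF for blind unmixing might look like the following; the synthetic data and the penalty weight are invented for the demo:

```python
import numpy as np

def l1_nmf(Y, k, lam=0.01, iters=500, seed=0):
    """Sketch of L1 sparsity-constrained NMF for hyperspectral unmixing:
    Y (bands x pixels) ~= E @ A with E, A >= 0, where the L1 penalty lam
    appears in the denominator of the multiplicative update for A,
    biasing abundances toward sparsity."""
    rng = np.random.default_rng(seed)
    b, p = Y.shape
    E = rng.random((b, k)) + 1e-3   # endmember spectra
    A = rng.random((k, p)) + 1e-3   # abundance maps
    for _ in range(iters):
        A *= (E.T @ Y) / (E.T @ E @ A + lam + 1e-9)
        E *= (Y @ A.T) / (E @ A @ A.T + 1e-9)
    return E, A

# Synthetic mixture of 3 endmembers over 100 pixels
rng = np.random.default_rng(1)
E_true = rng.random((30, 3))
A_true = rng.dirichlet([0.5, 0.5, 0.5], size=100).T
Y = E_true @ A_true
E, A = l1_nmf(Y, 3)
rel_residual = np.linalg.norm(Y - E @ A) / np.linalg.norm(Y)
print(rel_residual)  # small reconstruction error
```

The other variants in the list (L1/2, graph-regularized, structured-sparse) differ mainly in which extra term enters these update rules.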
Spectral Unmixing Based Construction of Lunar Mineral Abundance Maps
NASA Astrophysics Data System (ADS)
Bernhardt, V.; Grumpe, A.; Wöhler, C.
2017-07-01
In this study we apply a nonlinear spectral unmixing algorithm to a nearly global lunar spectral reflectance mosaic derived from hyper-spectral image data acquired by the Moon Mineralogy Mapper (M3) instrument. Corrections for topographic effects and for thermal emission were performed. A set of 19 laboratory-based reflectance spectra of lunar samples published by the Lunar Soil Characterization Consortium (LSCC) were used as a catalog of potential endmember spectra. For a given spectrum, the multi-population population-based incremental learning (MPBIL) algorithm was used to determine the subset of endmembers actually contained in it. However, as the MPBIL algorithm is computationally expensive, it cannot be applied to all pixels of the reflectance mosaic. Hence, the reflectance mosaic was clustered into a set of 64 prototype spectra, and the MPBIL algorithm was applied to each prototype spectrum. Each pixel of the mosaic was assigned to the most similar prototype, and the set of endmembers previously determined for that prototype was used for pixel-wise nonlinear spectral unmixing using the Hapke model, implemented as linear unmixing of the single-scattering albedo spectrum. This procedure yields maps of the fractional abundances of the 19 endmembers. Based on the known modal abundances of a variety of mineral species in the LSCC samples, a conversion from endmember abundances to mineral abundances was performed. We present maps of the fractional abundances of plagioclase, pyroxene and olivine and compare our results with previously published lunar mineral abundance maps.
Spectral unmixing of multi-color tissue specific in vivo fluorescence in mice
NASA Astrophysics Data System (ADS)
Zacharakis, Giannis; Favicchio, Rosy; Garofalakis, Anikitos; Psycharakis, Stylianos; Mamalaki, Clio; Ripoll, Jorge
2007-07-01
Fluorescence Molecular Tomography (FMT) has emerged as a powerful tool for monitoring biological functions in vivo in small animals. It provides the means to determine volumetric images of fluorescent protein concentration by applying the principles of diffuse optical tomography. Using different probes tagged to different proteins or cells, different biological functions and pathways can be simultaneously imaged in the same subject. In this work we present a spectral unmixing algorithm capable of separating the signals from different probes when combined with the tomographic imaging modality. We show results of two-color imaging when the algorithm is applied to separate fluorescence activity originating from phantoms containing two different fluorophores, namely CFSE and SNARF, with well-separated emission spectra, as well as from DsRed- and GFP-fused cells in F5-b10 transgenic mice in vivo. The same algorithm can furthermore be applied to tissue-specific spectroscopy data. Spectral analysis of a variety of organs from control, DsRed, and GFP F5/B10 transgenic mice showed that fluorophore detection by optical systems is highly tissue-dependent. Spectral data collected from different organs can provide useful insight into experimental parameter optimisation (choice of filters, fluorophores, excitation wavelengths), and spectral unmixing can be applied to measure the tissue dependency, thereby taking into account localized fluorophore efficiency. In summary, tissue spectral unmixing can be used as a criterion for choosing the most appropriate tissue targets as well as fluorescent markers for specific applications.
Kannan, R; Ievlev, A V; Laanait, N; Ziatdinov, M A; Vasudevan, R K; Jesse, S; Kalinin, S V
2018-01-01
Many spectral responses in materials science, physics, and chemistry experiments can be characterized as resulting from the superposition of a number of more basic individual spectra. In this context, unmixing is defined as the problem of determining the individual spectra, given measurements of multiple spectra that are spatially resolved across samples, as well as the determination of the corresponding abundance maps indicating the local weighting of each individual spectrum. Matrix factorization is a popular linear unmixing technique that considers that the mixture model between the individual spectra and the spatial maps is linear. Here, we present a tutorial paper targeted at domain scientists to introduce linear unmixing techniques, to facilitate greater understanding of spectroscopic imaging data. We detail a matrix factorization framework that can incorporate different domain information through various parameters of the matrix factorization method. We demonstrate many domain-specific examples to explain the expressivity of the matrix factorization framework and show how the appropriate use of domain-specific constraints such as non-negativity and sum-to-one abundance result in physically meaningful spectral decompositions that are more readily interpretable. Our aim is not only to explain the off-the-shelf available tools, but to add additional constraints when ready-made algorithms are unavailable for the task. All examples use the scalable open source implementation from https://github.com/ramkikannan/nmflibrary that can run from small laptops to supercomputers, creating a user-wide platform for rapid dissemination and adoption across scientific disciplines.
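A minimal sketch of the constrained linear unmixing the tutorial describes, estimating non-negative, sum-to-one abundances for a pixel given known endmember spectra. This uses the classic augmented-row trick with SciPy's NNLS solver rather than the authors' nmflibrary, and the toy spectra are invented for the demo:

```python
import numpy as np
from scipy.optimize import nnls

def constrained_abundances(E, y, delta=1e3):
    """Estimate abundances a >= 0 with sum(a) ~= 1 for one pixel
    spectrum y, given the endmember matrix E (bands x k). The system is
    augmented with a heavily weighted row enforcing the sum-to-one
    constraint, then solved by non-negative least squares."""
    E_aug = np.vstack([E, delta * np.ones(E.shape[1])])
    y_aug = np.append(y, delta)
    a, _ = nnls(E_aug, y_aug)
    return a

# Two-endmember toy spectrum mixed 70/30
E = np.array([[0.9, 0.1], [0.5, 0.4], [0.2, 0.8]])
y = E @ np.array([0.7, 0.3])
a = constrained_abundances(E, y)
print(np.round(a, 3))  # approximately [0.7, 0.3]
```

As the tutorial emphasizes, it is exactly these non-negativity and sum-to-one constraints that make the recovered decomposition physically interpretable as fractional abundances.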
Conceptual study of on orbit production of cryogenic propellants by water electrolysis
NASA Technical Reports Server (NTRS)
Moran, Matthew E.
1991-01-01
The feasibility of producing cryogenic propellants on orbit by water electrolysis, in support of NASA's proposed Space Exploration Initiative (SEI) missions, is assessed. Using this method, water launched into low Earth orbit (LEO) would be split into gaseous hydrogen and oxygen by electrolysis in an orbiting propellant processor spacecraft. The resulting gases would then be liquefied and stored in cryogenic tanks. Supplying liquid hydrogen and oxygen fuel to space vehicles by this technique has some possible advantages over conventional methods. The potential benefits derive from the characteristics of water as a payload, and include reduced ground handling and launch risk, denser packaging, and reduced tankage and piping requirements. A conceptual design of a water processor was generated based on related previous studies and on the contemporary or near-term technologies required. Extensive development efforts would be required to adapt the various subsystems needed for the propellant processor for use in space. Based on the cumulative results, propellant production by on-orbit water electrolysis in support of SEI missions is not recommended.
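The mass balance underlying the concept follows directly from the electrolysis stoichiometry 2 H2O → 2 H2 + O2. The sketch below uses textbook molar masses and is illustrative only, not a figure from the study:

```python
# Approximate molar masses in g/mol (textbook values)
M_H2O, M_H2, M_O2 = 18.015, 2.016, 31.998

def propellant_from_water(water_kg):
    """Split a water mass into the H2 and O2 masses produced by
    complete electrolysis (2 H2O -> 2 H2 + O2)."""
    mol_h2o = water_kg / M_H2O      # kmol, since masses are in kg
    h2 = mol_h2o * M_H2             # one H2 per H2O
    o2 = (mol_h2o / 2) * M_O2       # one O2 per two H2O
    return h2, o2

h2, o2 = propellant_from_water(1000.0)  # 1 metric ton of water
print(f"H2: {h2:.1f} kg, O2: {o2:.1f} kg")  # ~111.9 kg H2, ~888.1 kg O2
```

The resulting O2:H2 mass ratio of about 8:1 is one of the packaging attractions of water as a payload: the full propellant mass rides up as a single dense, inert liquid.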
Towards implementation of cellular automata in Microbial Fuel Cells.
Tsompanas, Michail-Antisthenis I; Adamatzky, Andrew; Sirakoulis, Georgios Ch; Greenman, John; Ieropoulos, Ioannis
2017-01-01
The Microbial Fuel Cell (MFC) is a bio-electrochemical transducer converting waste products into electricity using microbial communities. Cellular Automaton (CA) is a uniform array of finite-state machines that update their states in discrete time depending on states of their closest neighbors by the same rule. Arrays of MFCs could, in principle, act as massive-parallel computing devices with local connectivity between elementary processors. We provide a theoretical design of such a parallel processor by implementing CA in MFCs. We have chosen Conway's Game of Life as the 'benchmark' CA because this is the most popular CA which also exhibits an enormously rich spectrum of patterns. Each cell of the Game of Life CA is realized using two MFCs. The MFCs are linked electrically and hydraulically. The model is verified via simulation of an electrical circuit demonstrating equivalent behaviours. The design is a first step towards future implementations of fully autonomous biological computing devices with massive parallelism. The energy independence of such devices counteracts their somewhat slow transitions-compared to silicon circuitry-between the different states during computation.
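The benchmark CA itself, Conway's Game of Life, is easy to state in code. This is an ordinary software reference implementation (not the MFC hardware realization) showing the update rule that each pair of MFCs must emulate:

```python
import numpy as np

def life_step(grid):
    """One synchronous update of Conway's Game of Life on a toroidal
    grid: a dead cell with exactly 3 live neighbors is born; a live
    cell with 2 or 3 live neighbors survives; all others die."""
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    return ((n == 3) | (grid & (n == 2))).astype(int)

# A "blinker" (three cells in a row) oscillates with period 2
g = np.zeros((5, 5), dtype=int)
g[2, 1:4] = 1
assert (life_step(life_step(g)) == g).all()
print("blinker period-2 confirmed")
```

Since each CA cell is realized by two MFCs, even this tiny 5x5 grid would require 50 linked cells, which is why the paper validates the design via equivalent-circuit simulation first.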
Estimation of tissue optical parameters with hyperspectral imaging and spectral unmixing
NASA Astrophysics Data System (ADS)
Lu, Guolan; Qin, Xulei; Wang, Dongsheng; Chen, Zhuo G.; Fei, Baowei
2015-03-01
Early detection of oral cancer and its curable precursors can improve patient survival and quality of life. Hyperspectral imaging (HSI) holds the potential for noninvasive early detection of oral cancer. The quantification of tissue chromophores by spectral unmixing of hyperspectral images could provide insights for evaluating cancer progression. In this study, non-negative matrix factorization has been applied for decomposing hyperspectral images into physiologically meaningful chromophore concentration maps. The approach has been validated by computer-simulated hyperspectral images and in vivo tumor hyperspectral images from a head and neck cancer animal model.
NASA Astrophysics Data System (ADS)
Ogden, Joan M.; Steinbugler, Margaret M.; Kreutz, Thomas G.
All fuel cells currently being developed for near term use in electric vehicles require hydrogen as a fuel. Hydrogen can be stored directly or produced onboard the vehicle by reforming methanol, or hydrocarbon fuels derived from crude oil (e.g., gasoline, diesel, or middle distillates). The vehicle design is simpler with direct hydrogen storage, but requires developing a more complex refueling infrastructure. In this paper, we present modeling results comparing three leading options for fuel storage onboard fuel cell vehicles: (a) compressed gas hydrogen storage, (b) onboard steam reforming of methanol, (c) onboard partial oxidation (POX) of hydrocarbon fuels derived from crude oil. We have developed a fuel cell vehicle model, including detailed models of onboard fuel processors. This allows us to compare the vehicle performance, fuel economy, weight, and cost for various vehicle parameters, fuel storage choices and driving cycles. The infrastructure requirements are also compared for gaseous hydrogen, methanol and gasoline, including the added costs of fuel production, storage, distribution and refueling stations. The delivered fuel cost, total lifecycle cost of transportation, and capital cost of infrastructure development are estimated for each alternative. Considering both vehicle and infrastructure issues, possible fuel strategies leading to the commercialization of fuel cell vehicles are discussed.
NASA Astrophysics Data System (ADS)
Dhoble, Abhishek S.; Pullammanappallil, Pratap C.
2014-10-01
Waste treatment and management for manned long-term exploratory missions to the Moon will be a challenge due to the long mission duration. The present study investigated appropriate digester technologies that could be used on the base. The effects of stirring, operating temperature, organic loading rate, and reactor design on the methane production rate and methane yield were studied. For the same duration of digestion, the unmixed digester produced 20-50% more methane than the mixed system. The two-stage design, which separated the soluble components from the solids and treated them separately, had more rapid kinetics than the one-stage system, producing the target methane potential in half the retention time. The two-stage system also degraded 6% more solids than the single-stage system. The two-stage design formed the basis of a prototype digester sized for a four-person crew during a one-year exploratory lunar mission.
Generation, Validation, and Application of Abundance Map Reference Data for Spectral Unmixing
NASA Astrophysics Data System (ADS)
Williams, McKay D.
Reference data ("ground truth") maps traditionally have been used to assess the accuracy of imaging spectrometer classification algorithms. However, these reference data can be prohibitively expensive to produce, often do not include sub-pixel abundance estimates necessary to assess spectral unmixing algorithms, and lack published validation reports. Our research proposes methodologies to efficiently generate, validate, and apply abundance map reference data (AMRD) to airborne remote sensing scenes. We generated scene-wide AMRD for three different remote sensing scenes using our remotely sensed reference data (RSRD) technique, which spatially aggregates unmixing results from fine scale imagery (e.g., 1-m Ground Sample Distance (GSD)) to co-located coarse scale imagery (e.g., 10-m GSD or larger). We validated the accuracy of this methodology by estimating AMRD in 51 randomly-selected 10 m x 10 m plots, using seven independent methods and observers, including field surveys by two observers, imagery analysis by two observers, and RSRD using three algorithms. Results indicated statistically-significant differences between all versions of AMRD, suggesting that all forms of reference data need to be validated. Given these significant differences between the independent versions of AMRD, we proposed that the mean of all (MOA) versions of reference data for each plot and class were most likely to represent true abundances. We then compared each version of AMRD to MOA. Best case accuracy was achieved by a version of imagery analysis, which had a mean coverage area error of 2.0%, with a standard deviation of 5.6%. One of the RSRD algorithms was nearly as accurate, achieving a mean error of 3.0%, with a standard deviation of 6.3%, showing the potential of RSRD-based AMRD generation. 
Application of validated AMRD to specific coarse-scale imagery involved three main parts: 1) spatial alignment of coarse- and fine-scale imagery, 2) aggregation of fine-scale abundances to produce coarse-scale imagery-specific AMRD, and 3) demonstration of comparisons between coarse-scale unmixing abundances and AMRD. Spatial alignment was performed using our scene-wide spectral comparison (SWSC) algorithm, which aligned imagery with accuracy approaching the distance of a single fine-scale pixel. We compared simple rectangular aggregation to coarse-sensor point spread function (PSF) aggregation, and found that the PSF approach returned lower error, but that rectangular aggregation more accurately estimated true abundances at ground level. We demonstrated various metrics for comparing unmixing results to AMRD, including mean absolute error (MAE) and linear regression (LR). We additionally introduced reference data mean-adjusted MAE (MA-MAE) and reference data confidence-interval-adjusted MAE (CIA-MAE), which account for known error in the reference data itself. MA-MAE analysis indicated that fully constrained linear unmixing of coarse-scale imagery across all three scenes returned an error of 10.83% per class and pixel, with regression analysis yielding a slope of 0.85, an intercept of 0.04, and R² = 0.81. Our reference data research has demonstrated a viable methodology to efficiently generate, validate, and apply AMRD to specific examples of airborne remote sensing imagery, thereby enabling direct quantitative assessment of spectral unmixing performance.
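The MAE and mean-adjusted MAE metrics described above can be sketched as follows. This is a minimal NumPy illustration, not the dissertation's implementation: the function names, the pixel-by-class array layout, and the per-class reference-bias input are all assumptions.

```python
import numpy as np

def mae(estimated, reference):
    """Mean absolute error between unmixing abundances and reference
    abundances (AMRD), averaged over pixels and classes."""
    return np.mean(np.abs(estimated - reference))

def ma_mae(estimated, reference, reference_bias):
    """Mean-adjusted MAE (illustrative formulation): subtract an assumed
    known per-entry bias of the reference data itself before scoring."""
    return np.mean(np.abs(estimated - (reference - reference_bias)))

# Three pixels, two classes of fractional abundances in [0, 1].
est = np.array([[0.6, 0.4], [0.2, 0.8], [0.5, 0.5]])
ref = np.array([[0.5, 0.5], [0.3, 0.7], [0.5, 0.5]])
print(round(mae(est, ref), 3))  # 0.067
```

With a zero bias map, `ma_mae` reduces to plain `mae`, which is one quick sanity check on the adjustment.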
Energy and exergy analysis of an ethanol reforming process for solid oxide fuel cell applications.
Tippawan, Phanicha; Arpornwichanop, Amornchai
2014-04-01
The fuel processor, in which hydrogen is produced from fuels, is an important unit in a fuel cell system. The aim of this study is to apply a thermodynamic concept to identify a suitable reforming process for an ethanol-fueled solid oxide fuel cell (SOFC). Three different reforming technologies, i.e., steam reforming, partial oxidation, and autothermal reforming, are considered. The first and second laws of thermodynamics are employed to determine the energy demand and to describe how efficiently energy is supplied to the reforming process. The effects of key operating parameters on the distribution of reforming products, such as H2, CO, CO2, and CH4, and on the possibility of carbon formation in the different ethanol reforming processes are examined as functions of the steam-to-ethanol ratio, the oxygen-to-ethanol ratio, and temperature at atmospheric pressure. Energy and exergy analyses are performed to identify the best ethanol reforming process for SOFC applications. Copyright © 2014 Elsevier Ltd. All rights reserved.
40 CFR 86.107-98 - Sampling and analytical system.
Code of Federal Regulations, 2010 CFR
2010-07-01
... automatic sealing opening of the boot during fueling. There shall be no loss in the gas tightness of the... system (recorder and sensor) shall have an accuracy of ±3 °F (±1.7 °C). The recorder (data processor... ambient temperature sensors, connected to provide one average output, located 3 feet above the floor at...
40 CFR 86.107-98 - Sampling and analytical system.
Code of Federal Regulations, 2011 CFR
2011-07-01
... automatic sealing opening of the boot during fueling. There shall be no loss in the gas tightness of the... system (recorder and sensor) shall have an accuracy of ±3 °F (±1.7 °C). The recorder (data processor... ambient temperature sensors, connected to provide one average output, located 3 feet above the floor at...
Comparison of mechanized systems for thinning Ponderosa pine and mixed conifer stands
Bruce R. Hartsough; Joseph F. McNeel; Thomas A. Durston; Bryce J. Stokes
1994-01-01
We studied three systems for thinning pine plantations and naturally-regenerated stands on the Stanislaus National Forest, California. All three produced small sawlogs and fuel chips. The whole tree system consisted of a feller buncher, skidder, stroke processor, loader and chipper. The cut-to-length system included a harvester, forwarder, loader and chipper. A hybrid...
Comparison of mechanized systems for thinning Ponderosa pine and mixed conifer stands
Bruce R. Hartsough; Joseph F. McNeel; Thomas A. Durston; Bryce J. Stokes
1994-01-01
Three systems for thinning pine plantations and naturally-regenerated stands were studied. All three produced small sawlogs and fuel chips. The whole-tree system consisted of a feller buncher, skidder, stroke processor, loader, and chipper. The cut-to-length system included a harvester, forwarder, loader, and chipper. A hybrid system combined a feller buncher,...
Effect of high oleic acid soybean on seed oil, protein concentration, and yield
USDA-ARS?s Scientific Manuscript database
Soybeans with high oleic acid content are desired by oil processors because of their improved oxidative stability for broader use in food, fuel and other products. However, non-GMO high-oleic soybeans have tended to have low seed yield. The objective of this study was to test non-GMO, high-oleic s...
Comparing performance of standard and iterative linear unmixing methods for hyperspectral signatures
NASA Astrophysics Data System (ADS)
Gault, Travis R.; Jansen, Melissa E.; DeCoster, Mallory E.; Jansing, E. David; Rodriguez, Benjamin M.
2016-05-01
Linear unmixing is a method of decomposing a mixed signature to determine the component materials present in a sensor's field of view, along with the abundances at which they occur. Linear unmixing assumes that energy from the materials in the field of view is mixed in a linear fashion across the spectrum of interest. Traditional unmixing methods can take advantage of adjacent pixels in the decomposition algorithm, but this is not the case for point sensors. This paper explores several iterative and non-iterative methods for linear unmixing and examines their effectiveness at identifying the individual signatures that make up simulated single-pixel mixed signatures, along with their corresponding abundances. The major hurdle addressed by the proposed method is that no neighboring-pixel information is available for the spectral signature of interest. Testing is performed using two collections of spectral signatures from the Johns Hopkins University Applied Physics Laboratory's Signatures Database software (SigDB): a hand-selected small dataset of 25 distinct signatures, and a larger dataset of approximately 1600 pure visible/near-infrared/short-wave-infrared (VIS/NIR/SWIR) spectra. Simulated spectra are created from three- and four-material mixtures randomly drawn from a dataset originating from SigDB, where the abundance of one material is swept in 10% increments from 10% to 90%, with the abundances of the other materials equally divided amongst the remainder. For the smaller dataset of 25 signatures, all combinations of three or four materials are used to create simulated spectra, from which the accuracy of the materials returned, as well as the correctness of the abundances, is compared to the inputs. The experiment is expanded to include the signatures from the larger dataset of almost 1600 signatures, evaluated using a Monte Carlo scheme with 5000 draws of three or four materials to create the simulated mixed signatures.
The spectral similarity of the inputs to the output component signatures is calculated using the spectral angle mapper. Results show that iterative methods significantly outperform the traditional methods under the given test conditions.
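The two core operations named above, linear unmixing of a single mixed pixel and spectral-angle scoring, can be sketched as follows. This is a hedged illustration, not any method from the paper: it uses a simple projected-gradient non-negative solver as a stand-in, and all names are invented for the example.

```python
import numpy as np

def nnls_unmix(endmembers, pixel, iters=500, lr=0.01):
    """Estimate non-negative abundances a with endmembers @ a ~ pixel,
    via projected gradient descent (a simple stand-in for an NNLS solver),
    then normalize to fractional abundances."""
    E = endmembers
    a = np.full(E.shape[1], 1.0 / E.shape[1])
    for _ in range(iters):
        grad = E.T @ (E @ a - pixel)       # gradient of 0.5*||Ea - p||^2
        a = np.clip(a - lr * grad, 0.0, None)
    return a / a.sum()

def spectral_angle(x, y):
    """Spectral angle mapper: angle (radians) between two spectra."""
    cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Two 4-band endmembers and a synthetic 70/30 linear mixture of them.
E = np.array([[1.0, 0.0],
              [0.8, 0.1],
              [0.2, 0.9],
              [0.0, 1.0]])
mixed = E @ np.array([0.7, 0.3])
a = nnls_unmix(E, mixed)
print(a.round(2))  # abundances near [0.7, 0.3]
```

A zero spectral angle indicates identical spectral shape, which makes the metric a natural similarity score between recovered and true component signatures.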
NASA Astrophysics Data System (ADS)
Yang, Jian; He, Yuhong
2017-02-01
Quantifying impervious surfaces in urban and suburban areas is a key step toward a sustainable urban planning and management strategy. With the availability of fine-scale remote sensing imagery, automated mapping of impervious surfaces has attracted growing attention. However, the vast majority of existing studies have selected pixel-based and object-based methods for impervious surface mapping, with few adopting sub-pixel analysis of high-spatial-resolution imagery. This research makes use of a vegetation-bright impervious-dark impervious linear spectral mixture model to characterize urban and suburban surface components. A WorldView-3 image acquired on May 9th, 2015 is analyzed for its potential in automated unmixing of meaningful surface materials for two urban subsets and one suburban subset in Toronto, ON, Canada. Given the wide distribution of shadows in urban areas, the linear spectral unmixing is implemented in non-shadowed and shadowed areas separately for the two urban subsets. The results indicate that the accuracy of impervious surface mapping in suburban areas reaches up to 86.99%, much higher than the accuracies in urban areas (80.03% and 79.67%). Despite its merits in mapping accuracy and automation, the application of our proposed vegetation-bright impervious-dark impervious model to map impervious surfaces is limited by the absence of a soil component. To further extend the operational transferability of our proposed method, especially to areas where abundant bare soils exist during urbanization or reclamation, it is still necessary to mask out bare soils by automated classification prior to the implementation of linear spectral unmixing.
NASA Astrophysics Data System (ADS)
Fedrigo, Melissa; Newnham, Glenn J.; Coops, Nicholas C.; Culvenor, Darius S.; Bolton, Douglas K.; Nitschke, Craig R.
2018-02-01
Light detection and ranging (lidar) data have been increasingly used for forest classification due to their ability to penetrate the forest canopy and provide detail about the structure of the lower strata. In this study we demonstrate forest classification approaches using airborne lidar data as inputs to random forest and linear unmixing classification algorithms. Our results demonstrated that both random forest and linear unmixing models identified a distribution of rainforest and eucalypt stands that was comparable to existing ecological vegetation class (EVC) maps based primarily on manual interpretation of high resolution aerial imagery. Rainforest stands were also identified in the region that have not previously been identified in the EVC maps. The transition between stand types was better characterised by the random forest modelling approach. In contrast, the linear unmixing model placed greater emphasis on field plots selected as endmembers, which may not have captured the variability in stand structure within a single stand type. The random forest model had the highest overall accuracy (84%) and Cohen's kappa coefficient (0.62). However, the classification accuracy was only marginally better than linear unmixing. The random forest model was applied to a region in the Central Highlands of south-eastern Australia to produce maps of stand type probability, including areas of transition (the 'ecotone') between rainforest and eucalypt forest. The resulting map provided a detailed delineation of forest classes, which specifically recognised the coalescing of stand types at the landscape scale. This represents a key step towards mapping the structural and spatial complexity of these ecosystems, which is important for both their management and conservation.
Malonza, I M; Tyndall, M W; Ndinya-Achola, J O; Maclean, I; Omar, S; MacDonald, K S; Perriens, J; Orle, K; Plummer, F A; Ronald, A R; Moses, S
1999-12-01
A randomized, double-blind, placebo-controlled clinical trial was conducted in Nairobi, Kenya, to compare single-dose ciprofloxacin with a 7-day course of erythromycin for the treatment of chancroid. In all, 208 men and 37 women presenting with genital ulcers clinically compatible with chancroid were enrolled. Ulcer etiology was determined using culture techniques for chancroid, serology for syphilis, and a multiplex polymerase chain reaction for chancroid, syphilis, and herpes simplex virus (HSV). Ulcer etiology was 31% unmixed chancroid, 23% unmixed syphilis, 16% unmixed HSV, 15% mixed etiology, and 15% unknown. For 111 participants with chancroid, cure rates were 92% with ciprofloxacin and 91% with erythromycin. For all study participants, the treatment failure rate was 15%, mostly related to ulcer etiologies of HSV infection or syphilis, and treatment failure was 3 times more frequent in human immunodeficiency virus-infected subjects than in others, mostly owing to HSV infection. Ciprofloxacin is an effective single-dose treatment for chancroid, but current recommendations for empiric therapy of genital ulcers may result in high treatment failure due to HSV infection.
Mapping tropical rainforest canopies using multi-temporal spaceborne imaging spectroscopy
NASA Astrophysics Data System (ADS)
Somers, Ben; Asner, Gregory P.
2013-10-01
The use of imaging spectroscopy for floristic mapping of forests is complicated by the spectral similarity among coexisting species. Here we evaluated an alternative spectral unmixing strategy combining a time series of EO-1 Hyperion images and an automated feature selection strategy in MESMA. Instead of using the same spectral subset to unmix each image pixel, our modified approach allowed the spectral subsets to vary on a per-pixel basis, such that each pixel is evaluated using a spectral subset tuned towards maximal separability of its specific endmember class combination or species mixture. The potential of the new approach for floristic mapping of tree species in Hawaiian rainforests was quantitatively demonstrated using both simulated and actual hyperspectral image time series. With a Cohen's kappa coefficient of 0.65, our approach provided a more accurate tree species map than MESMA (kappa = 0.54). In addition, through the selection of spectral subsets, our approach was about 90% faster than MESMA. The flexible or adaptive use of band sets in spectral unmixing as such provides an interesting avenue to address spectral similarities in complex vegetation canopies.
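The baseline MESMA idea referenced above, testing multiple candidate endmember subsets per pixel and keeping the best-fitting one, can be sketched with a brute-force search. This is illustrative only: it searches endmember subsets by reconstruction error rather than the paper's per-pixel spectral-band subsets, and the toy library and function names are invented.

```python
import itertools
import numpy as np

def best_subset_unmix(library, pixel, k=2):
    """MESMA-style search (simplified): try every k-member subset of the
    spectral library (bands x signatures), unmix the pixel against each
    with ordinary least squares, and keep the lowest-RMSE subset."""
    best = None
    for combo in itertools.combinations(range(library.shape[1]), k):
        E = library[:, combo]
        a, *_ = np.linalg.lstsq(E, pixel, rcond=None)
        rmse = np.sqrt(np.mean((E @ a - pixel) ** 2))
        if best is None or rmse < best[0]:
            best = (rmse, combo, a)
    return best

# Library of four 3-band "species" spectra; the pixel mixes species 0 and 2.
lib = np.array([[0.9, 0.1, 0.5, 0.3],
                [0.7, 0.2, 0.1, 0.8],
                [0.1, 0.9, 0.6, 0.4]])
pixel = 0.6 * lib[:, 0] + 0.4 * lib[:, 2]
rmse, combo, abund = best_subset_unmix(lib, pixel)
print(combo)  # (0, 2)
```

The exhaustive search over subsets is what makes MESMA expensive, which is one motivation for the per-pixel tuning strategies the abstract describes.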
Visual enhancement of unmixed multispectral imagery using adaptive smoothing
Lemeshewsky, G.P.; Rahman, Z.-U.; Schowengerdt, R.A.; Reichenbach, S.E.
2004-01-01
Adaptive smoothing (AS) has been previously proposed as a method to smooth uniform regions of an image, retain contrast edges, and enhance edge boundaries. The method is an implementation of the anisotropic diffusion process which results in a gray scale image. This paper discusses modifications to the AS method for application to multi-band data which results in a color segmented image. The process was used to visually enhance the three most distinct abundance fraction images produced by the Lagrange constraint neural network learning-based unmixing of Landsat 7 Enhanced Thematic Mapper Plus multispectral sensor data. A mutual information-based method was applied to select the three most distinct fraction images for subsequent visualization as a red, green, and blue composite. A reported image restoration technique (partial restoration) was applied to the multispectral data to reduce unmixing error, although evaluation of the performance of this technique was beyond the scope of this paper. The modified smoothing process resulted in a color segmented image with homogeneous regions separated by sharpened, coregistered multiband edges. There was improved class separation with the segmented image, which has importance to subsequent operations involving data classification.
NASA Astrophysics Data System (ADS)
Kal, S.; Kasko, I.; Ryssel, H.
1995-10-01
The influence of ion-beam mixing on ultra-thin cobalt silicide (CoSi2) formation was investigated by characterizing ion-beam-mixed and unmixed CoSi2 films. A Ge+ ion implantation through the Co film prior to silicidation causes interface mixing of the cobalt film with the silicon substrate and results in improved silicide-to-silicon interface roughness. Rapid thermal annealing was used to form Ge+ ion-mixed and unmixed thin CoSi2 layers from a 10 nm sputter-deposited Co film. The silicide films were characterized by secondary neutral mass spectroscopy, x-ray diffraction, transmission electron microscopy (TEM), Rutherford backscattering, and sheet resistance measurements. The experimental results indicate that the final rapid thermal annealing temperature should not exceed 800°C for thin (<50 nm) CoSi2 preparation. A comparison of the plan-view and cross-section TEM micrographs of the ion-beam-mixed and unmixed CoSi2 films reveals that Ge+ ion mixing (45 keV, 1 × 1015 cm-2) produces a homogeneous silicide with a smooth silicide-to-silicon interface.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Couturier, Laurent, E-mail: laurent.couturier55@ho
The fine microstructure obtained by unmixing of a solid solution, either by classical precipitation or by spinodal decomposition, is often characterized by small angle scattering or atom probe tomography. This article shows that a common data analysis framework can be used to analyze data obtained from these two techniques. An example of the application of this common analysis is given for characterization of the unmixing of the Fe-Cr matrix of a 15-5 PH stainless steel during long-term ageing at 350 °C and 400 °C. A direct comparison of the Cr composition fluctuation amplitudes and characteristic lengths obtained with both techniques is made, showing quantitative agreement for the fluctuation amplitudes. The origin of the remaining discrepancy in the characteristic lengths is discussed. - Highlights: •Common analysis framework for atom probe tomography and small angle scattering •Comparison of same microstructural characteristics obtained using both techniques •Good correlation of Cr composition fluctuation amplitudes from both techniques •Good correlation of Cr composition fluctuation amplitudes with the classic V parameter.
None
2018-05-01
A new Idaho National Laboratory supercomputer is helping scientists create more realistic simulations of nuclear fuel. Dubbed "Ice Storm," this 2048-processor machine allows researchers to model and predict the complex physics behind nuclear reactor behavior. And with a new visualization lab, the team can see the results of its simulations on the big screen. For more information about INL research, visit http://www.facebook.com/idahonationallaboratory.
South Carolina's timber industry-an assessment of timber product output and use, 1991
Tony G. Johnson; Edgar L. Davenport
1991-01-01
In 1991, roundwood output from South Carolina's forests totaled 508 million cubic feet, down 13 percent from 1989. Mill byproducts generated by primary processors declined at an equal rate, to 170 million cubic feet. Almost 100 percent of the residues were used, mostly for fuel and fiber products. Pulpwood remained the leading roundwood product at 250 million cubic...
NASA Astrophysics Data System (ADS)
Behrooz, Ali; Vasquez, Kristine O.; Waterman, Peter; Meganck, Jeff; Peterson, Jeffrey D.; Miller, Peter; Kempner, Joshua
2017-02-01
Intraoperative resection of tumors currently relies upon the surgeon's ability to visually locate and palpate tumor nodules. Undetected residual malignant tissue often results in the need for additional treatment or surgical intervention. The Solaris platform is a multispectral open-air fluorescence imaging system designed for translational fluorescence-guided surgery. Solaris supports video-rate imaging in four fixed fluorescence channels ranging from the visible to the near infrared, and a multispectral channel equipped with a liquid crystal tunable filter (LCTF) for multispectral image acquisition (520-620 nm). Identification of tumor margins using reagents emitting in the visible spectrum (400-650 nm), such as fluorescein isothiocyanate (FITC), presents challenges given the presence of autofluorescence from tissue and food in the gastrointestinal (GI) tract. To overcome this, Solaris acquires LCTF-based multispectral images and, by applying an automated spectral unmixing algorithm to the data, separates reagent fluorescence from tissue and food autofluorescence. The unmixing algorithm uses vertex component analysis to automatically extract the primary pure spectra, and resolves the reagent fluorescence signal using non-negative least squares. For validation, intraoperative in vivo studies were carried out in tumor-bearing rodents injected with a FITC-dextran reagent that primarily resides in malignant tissue 24 hours post-injection. In the absence of unmixing, fluorescence from tumors is not distinguishable from that of the surrounding tissue. Upon spectral unmixing, the FITC-labeled malignant regions become well defined and detectable. The results of these studies substantiate the multispectral power of Solaris in resolving FITC-based agent signal in deep tumor masses, under ambient and surgical light, and in enhancing the ability to surgically resect them.
NASA Astrophysics Data System (ADS)
Kemper, Thomas; Sommer, Stefan
2004-10-01
Field and airborne hyperspectral data were used to map residual contamination after a mining accident by applying spectral mixture modelling. The test case was the Aznalcollar mine (Southern Spain) accident, where heavy-metal-bearing sludge from a tailings pond was distributed over large areas of the Guadiamar flood plain. Although the sludge and the contaminated topsoils were removed mechanically throughout the affected area, a high abundance of pyritic material still remained on the ground. During dedicated field campaigns in two subsequent years, soil samples were collected for geochemical and spectral laboratory analysis, and spectral field measurements were carried out in parallel with data acquisition by the HyMap sensor. A Variable Multiple Endmember Spectral Mixture Analysis (VMESMA) tool providing multiple-endmember unmixing was used to estimate the quantities and distribution of the remaining tailings material. A spectrally based zonal partition of the area was introduced to allow the application of different submodels to the selected areas. Based on an iterative feedback process, the unmixing performance was improved in each stage until an optimum level was reached. The sludge abundances obtained by unmixing the hyperspectral data were confirmed by field observations and chemical measurements of samples taken in the area. The semi-quantitative sludge abundances of residual pyritic material were transformed into quantitative information for an assessment of acidification risk and the distribution of residual heavy metal contamination, based on an artificial mixture experiment. Unmixing of the second year's images allowed identification of secondary minerals of pyrite as indicators of pyrite oxidation and associated acidification.
Arctic lead detection using a waveform unmixing algorithm from CryoSat-2 data
NASA Astrophysics Data System (ADS)
Lee, S.; Im, J.
2016-12-01
Arctic areas consist of ice floes, leads, and polynyas. While leads and polynyas account for a small part of the Arctic Ocean, they play a key role in exchanging heat flux, moisture, and momentum between the atmosphere and ocean in wintertime because of their huge temperature difference. In this study, a linear waveform unmixing approach was proposed to detect lead fraction. CryoSat-2 waveforms for pure leads, sea ice, and ocean were used as end-members, based on visual interpretation of MODIS images coincident with the CryoSat-2 data. The unmixing model produced lead, sea ice, and ocean abundances, and a threshold (> 0.7) was applied to make a binary classification between lead and sea ice. The unmixing model produced better results than the existing models in the literature, which are based on simple thresholding approaches. The results were also comparable with our previous research using machine-learning-based models (i.e., decision trees and random forest). A monthly lead fraction was calculated by dividing the number of detected leads by the total number of measurements. The lead fraction around the Beaufort Sea and Fram Strait was high due to the anti-cyclonic rotation of the Beaufort Gyre and the outflow of sea ice to the Atlantic. The lead fraction maps produced in this study matched well with monthly lead fraction maps in the literature. The areas with thin sea ice identified in our previous research correspond to the high-lead-fraction areas in the present study. Furthermore, sea ice roughness from the ASCAT scatterometer was compared to a lead fraction map to examine the relationship between surface roughness and lead distribution.
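The unmix-then-threshold pipeline described above can be sketched in a few lines. This is an assumption-laden toy (invented 5-sample waveforms, ordinary least squares clipped and renormalized rather than a fully constrained solver); only the > 0.7 lead-abundance threshold comes from the abstract.

```python
import numpy as np

def unmix_waveform(endmember_waveforms, waveform):
    """Linear unmixing of a Lidar waveform against lead / sea-ice / ocean
    end-member waveforms: ordinary least squares, clipped to be
    non-negative and renormalized to sum to one (a simplification)."""
    a, *_ = np.linalg.lstsq(endmember_waveforms, waveform, rcond=None)
    a = np.clip(a, 0.0, None)
    return a / a.sum()

def classify(abundances, lead_index=0, threshold=0.7):
    """Binary decision used in the study: lead if lead abundance > 0.7."""
    return "lead" if abundances[lead_index] > threshold else "sea ice"

# Toy 5-sample end-member waveforms: lead (sharp specular peak), ice, ocean.
E = np.array([[0.0, 0.1, 0.2],
              [0.1, 0.3, 0.4],
              [1.0, 0.4, 0.5],
              [0.1, 0.3, 0.4],
              [0.0, 0.1, 0.2]])
obs = 0.9 * E[:, 0] + 0.1 * E[:, 1]   # mostly-lead return
a = unmix_waveform(E, obs)
print(classify(a))  # lead
```

Repeating the decision over all measurements and dividing lead counts by total counts gives the monthly lead fraction mentioned in the abstract.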
Pisharady, Pramod Kumar; Sotiropoulos, Stamatios N; Duarte-Carvajalino, Julio M; Sapiro, Guillermo; Lenglet, Christophe
2018-02-15
We present a sparse Bayesian unmixing algorithm, BusineX: Bayesian Unmixing for Sparse Inference-based Estimation of Fiber Crossings (X), for estimation of white matter fiber parameters from compressed (under-sampled) diffusion MRI (dMRI) data. BusineX combines compressive sensing with linear unmixing and introduces sparsity to the previously proposed multiresolution data fusion algorithm RubiX, resulting in a method for improved reconstruction, especially from data with a lower number of diffusion gradients. We formulate the estimation of fiber parameters as a sparse signal recovery problem and propose a linear unmixing framework with sparse Bayesian learning for the recovery of the sparse signals, the fiber orientations and volume fractions. The data are modeled using a parametric spherical deconvolution approach and represented using a dictionary created with the exponential decay components along different possible diffusion directions. Volume fractions of fibers along these directions define the dictionary weights. The proposed sparse inference, which is based on the dictionary representation, considers the sparsity of fiber populations and exploits the spatial redundancy in the data representation, thereby facilitating inference from under-sampled q-space. The algorithm improves parameter estimation from dMRI through data-dependent local learning of hyperparameters, at each voxel and for each possible fiber orientation, that moderate the strength of priors governing the parameter variances. Experimental results on synthetic and in-vivo data show improved accuracy with lower uncertainty in the fiber parameter estimates. BusineX resolves a higher number of second and third fiber crossings. For under-sampled data, the algorithm is also shown to produce more reliable estimates. Copyright © 2017 Elsevier Inc. All rights reserved.
Space Tug Avionics Definition Study. Volume 5: Cost and Programmatics
NASA Technical Reports Server (NTRS)
1975-01-01
The baseline avionics system features a central digital computer that integrates the functions of all the space tug subsystems by means of a redundant digital data bus. The central computer consists of dual central processor units, dual input/output processors, and a fault tolerant memory, utilizing internal redundancy and error checking. Three electronically steerable phased arrays provide downlink transmission from any tug attitude directly to ground or via TDRS. Six laser gyros and six accelerometers in a dodecahedron configuration make up the inertial measurement unit. Both a scanning laser radar and a TV system, employing strobe lamps, are required as acquisition and docking sensors. Primary dc power at a nominal 28 volts is supplied from dual lightweight, thermally integrated fuel cells which operate from propellant grade reactants out of the main tanks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
DUNCAN, D.R.
The HANSF analysis tool is an integrated model considering phenomena inside a multi-canister overpack (MCO) spent nuclear fuel container, such as fuel oxidation, convective and radiative heat transfer, and the potential for fission product release. This manual reflects HANSF version 1.3.2, a revised version of 1.3.1. HANSF 1.3.2 was written to correct minor errors and to allow modeling of condensate flow on the MCO inner surface. HANSF 1.3.2 is intended for use on personal computers such as IBM-compatible machines with Intel processors running under Lahey TI or Digital Visual FORTRAN, Version 6.0, but this does not preclude operation in other environments.
Fighter Aircraft OBIGGS (On-Board Inert Gas Generator System) Study. Volume 2
1987-06-01
Table of contents excerpt: 3.2.1.6.11.2 Pressure Air System; 3.2.1.6.11.3 Fuel Tank Vent System; 3.2.1.6.11.4 Fuel Scrubbing System; 3.2.1.6.12 Control/Interface Processor; 3.2.1.6.13.5 Flowmeters; 3.2.1.6.13.6 Motion Transducer; 3.2.1.7 Interface Requirements; 3.2.1.7.1 External Interfaces; 3.2.1.7.1.1 External Systems
Terahertz spectral unmixing based method for identifying gastric cancer
NASA Astrophysics Data System (ADS)
Cao, Yuqi; Huang, Pingjie; Li, Xian; Ge, Weiting; Hou, Dibo; Zhang, Guangxin
2018-02-01
At present, many researchers are exploring biological tissue inspection using terahertz time-domain spectroscopy (THz-TDS) techniques. In this study, based on a modified hard modeling factor analysis method, terahertz spectral unmixing was applied to investigate the relationships between the absorption spectra in THz-TDS and certain biomarkers of gastric cancer in order to systematically identify gastric cancer. A probability distribution and box plot were used to extract the distinctive peaks that indicate carcinogenesis, and the corresponding weight distributions were used to discriminate the tissue types. The results of this work indicate that terahertz techniques have the potential to detect different levels of cancer, including benign tumors and polyps.
A novel highly parallel algorithm for linearly unmixing hyperspectral images
NASA Astrophysics Data System (ADS)
Guerra, Raúl; López, Sebastián.; Callico, Gustavo M.; López, Jose F.; Sarmiento, Roberto
2014-10-01
Endmember extraction and abundance calculation represent critical steps in the process of linearly unmixing a given hyperspectral image, for two main reasons. The first is the need to compute a set of accurate endmembers in order to obtain confident abundance maps. The second is the huge number of operations involved in these time-consuming processes. This work proposes an algorithm that estimates the endmembers of a hyperspectral image under analysis and their abundances at the same time. The main advantages of this algorithm are its high degree of parallelization and the mathematical simplicity of the operations involved. The algorithm estimates the endmembers as virtual pixels. In particular, it applies the gradient descent method to iteratively refine the endmembers and the abundances, reducing the mean square error in accordance with the linear unmixing model. Some mathematical restrictions must be added so that the method converges to a unique and realistic solution; given the nature of the algorithm, these restrictions can be easily implemented. The results obtained with synthetic images demonstrate the good behavior of the proposed algorithm. Moreover, the results obtained with the well-known Cuprite dataset corroborate the benefits of our proposal.
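A toy sketch of this joint scheme follows. It is an assumption-laden illustration, not the authors' algorithm: alternating projected gradient steps on endmembers and abundances, with non-negativity clipping and sum-to-one renormalization standing in for the paper's mathematical restrictions.

```python
import numpy as np

def unmix_joint(X, p, iters=5000, lr=0.2, seed=0):
    """Jointly refine p endmembers E (bands x p) and abundances A
    (p x pixels) by projected gradient descent on ||X - E @ A||^2."""
    rng = np.random.default_rng(seed)
    bands, pixels = X.shape
    E = rng.random((bands, p))             # endmembers as virtual pixels
    A = np.full((p, pixels), 1.0 / p)
    for _ in range(iters):
        R = E @ A - X                      # residual
        E = np.clip(E - lr * (R @ A.T) / pixels, 0.0, None)
        R = E @ A - X
        A = np.clip(A - lr * (E.T @ R) / bands, 1e-9, None)
        A /= A.sum(axis=0, keepdims=True)  # abundances sum to one
    return E, A

# Synthetic scene: 4 bands, 2 true endmembers, 50 linearly mixed pixels.
rng = np.random.default_rng(1)
E_true = np.array([[1.0, 0.0], [0.8, 0.2], [0.2, 0.8], [0.0, 1.0]])
A_true = rng.dirichlet([1.0, 1.0], size=50).T
X = E_true @ A_true
E_est, A_est = unmix_joint(X, p=2)
mse = float(np.mean((E_est @ A_est - X) ** 2))
print(f"reconstruction MSE: {mse:.4f}")
```

Every update is an element-wise clip or a matrix multiply, which is what makes this kind of scheme attractive for the highly parallel implementations the abstract has in mind.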
Robust Spectral Unmixing of Sparse Multispectral Lidar Waveforms using Gamma Markov Random Fields
Altmann, Yoann; Maccarone, Aurora; McCarthy, Aongus; ...
2017-05-10
This paper presents a new Bayesian spectral unmixing algorithm to analyse remote scenes sensed via sparse multispectral Lidar measurements. To a first approximation, in the presence of a target, each Lidar waveform consists of a main peak whose position depends on the target distance and whose amplitude depends on the wavelength of the laser source considered (i.e., on the target reflectivity). In addition, these temporal responses are usually assumed to be corrupted by Poisson noise in the low-photon-count regime. When considering multiple wavelengths, it becomes possible to use spectral information to identify and quantify the main materials in the scene, in addition to estimating the Lidar-based range profiles. Due to its anomaly detection capability, the proposed hierarchical Bayesian model, coupled with an efficient Markov chain Monte Carlo algorithm, allows robust estimation of depth images together with abundance and outlier maps associated with the observed 3D scene. The proposed methodology is illustrated via experiments conducted with real multispectral Lidar data acquired in a controlled environment. The results demonstrate the possibility of unmixing spectral responses constructed from extremely sparse photon counts (fewer than 10 photons per pixel and band).
NASA Astrophysics Data System (ADS)
Mahbub, Saabah B.; Succer, Peter; Gosnell, Martin E.; Anwaer, Ayad G.; Herbert, Benjamin; Vesey, Graham; Goldys, Ewa M.
2016-03-01
Extracting biochemical information from tissue autofluorescence is a promising approach to non-invasively monitor disease treatments at a cellular level, without using any external biomarkers. Our recently developed unsupervised hyperspectral unmixing by Dependent Component Analysis (DECA) provides robust and detailed metabolic information with proper account of intrinsic cellular heterogeneity. Moreover, this method is compatible with established methods of fluorescent biomarker labelling. Recently, adipose-derived stem cell (ADSC)-based therapies have been introduced for treating different diseases in animals and humans. ADSCs have shown promise in regenerative treatments for osteoarthritis and other bone and joint disorders. One of the mechanisms of their action is their anti-inflammatory effect within osteoarthritic joints, which aids the regeneration of cartilage. These therapeutic effects are known to be driven by secretions of different cytokines from the ADSCs. We have been using hyperspectral unmixing techniques to study, in vitro, the effects of ADSC-derived cytokine-rich secretions with the cartilage chip in both human and bovine samples. The study of the metabolic effects of different cytokine treatments on different cartilage layers makes it possible to compare the merits of those treatments for repairing cartilage.
GPU implementation of the simplex identification via split augmented Lagrangian
NASA Astrophysics Data System (ADS)
Sevilla, Jorge; Nascimento, José M. P.
2015-10-01
Hyperspectral imaging can be used for object detection and for discriminating between different objects based on their spectral characteristics. One of the main problems of hyperspectral data analysis is the presence of mixed pixels, due to the low spatial resolution of such images. This means that several spectrally pure signatures (endmembers) are combined into the same mixed pixel. Linear spectral unmixing follows an unsupervised approach which aims at inferring pure spectral signatures and their material fractions at each pixel of the scene. The huge data volumes acquired by such sensors put stringent requirements on processing and unmixing methods. This paper proposes an efficient implementation of an unsupervised linear unmixing method, the simplex identification via split augmented Lagrangian (SISAL), on GPUs using CUDA. The method finds the smallest simplex by solving a sequence of nonsmooth convex subproblems, using variable splitting to obtain a constrained formulation and then applying an augmented Lagrangian technique. The parallel implementation of SISAL presented in this work exploits the GPU architecture at a low level, using shared memory and coalesced memory accesses. The results presented herein indicate that the GPU implementation can significantly accelerate the method's execution over big datasets while maintaining the method's accuracy.
Sparsely-sampled hyperspectral stimulated Raman scattering microscopy: a theoretical investigation
NASA Astrophysics Data System (ADS)
Lin, Haonan; Liao, Chien-Sheng; Wang, Pu; Huang, Kai-Chih; Bouman, Charles A.; Kong, Nan; Cheng, Ji-Xin
2017-02-01
A hyperspectral image corresponds to a data cube with two spatial dimensions and one spectral dimension. Through linear unmixing, hyperspectral images can be decomposed into spectral signatures of pure components as well as their concentration maps. Due to this distinct advantage in component identification, hyperspectral imaging is becoming a rapidly emerging platform for engineering better medicine and expediting scientific discovery. Among various hyperspectral imaging techniques, hyperspectral stimulated Raman scattering (HSRS) microscopy acquires data in a pixel-by-pixel scanning manner. Nevertheless, the current image acquisition speed of HSRS is insufficient to capture the dynamics of freely moving subjects. Instead of reducing the pixel dwell time to achieve speed-up, which would inevitably decrease the signal-to-noise ratio (SNR), we propose to reduce the total number of sampled pixels. The locations of sampled pixels are carefully engineered using a triangular-wave Lissajous trajectory. A model-based image inpainting algorithm then recovers the complete data for linear unmixing. Simulation results show that by careful selection of the trajectory, a fill rate as low as 10% is sufficient to generate accurate linear unmixing results. The proposed framework applies to any hyperspectral beam-scanning imaging platform that demands high acquisition speed.
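The sparse-sampling idea can be sketched as follows: a triangular-wave Lissajous trajectory determines which pixels the beam visits, and the mean of the resulting binary mask gives the fill rate. The grid size, frequencies, and sample count below are arbitrary illustrative choices, not the paper's parameters:

```python
import numpy as np

def lissajous_mask(n, fx, fy, samples):
    """Binary sampling mask for an n x n image from a triangular-wave
    Lissajous trajectory: the beam position follows triangle waves of
    frequency fx along x and fy along y over one unit of time."""
    def tri(t, f):
        phase = (t * f) % 1.0
        return np.where(phase < 0.5, 2 * phase, 2 - 2 * phase)  # in [0, 1]
    t = np.linspace(0.0, 1.0, samples, endpoint=False)
    x = np.minimum((tri(t, fx) * n).astype(int), n - 1)
    y = np.minimum((tri(t, fy) * n).astype(int), n - 1)
    mask = np.zeros((n, n), dtype=bool)
    mask[y, x] = True                     # mark every visited pixel
    return mask

mask = lissajous_mask(64, 31, 37, 4096)
fill_rate = mask.mean()                   # fraction of pixels actually sampled
```

Choosing near-coprime frequencies spreads the visited pixels across the field; an inpainting step (not shown) would then fill the unvisited pixels before unmixing.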
On the Use of FOSS4G in Land Cover Fraction Estimation with Unmixing Algorithms
NASA Astrophysics Data System (ADS)
Kumar, U.; Milesi, C.; Raja, K.; Ganguly, S.; Wang, W.; Zhang, G.; Nemani, R. R.
2014-12-01
The popularity and usage of FOSS4G (FOSS for Geoinformatics) has increased drastically in the last two decades, with growing benefits that facilitate spatial data analysis, image processing, graphics and map production, spatial modeling and visualization. The objective of this paper is to use FOSS4G to implement and perform a quantitative analysis of three different unmixing algorithms: Constrained Least-Squares (CLS), Unconstrained Least-Squares, and Orthogonal Subspace Projection, to estimate land cover (LC) fractions from remote sensing data. The LC fractions obtained by unmixing mixed pixels account for the mixture of more than one class per pixel, yielding more accurate LC abundance estimates. The algorithms were implemented in the C++ programming language with the OpenCV package (http://opencv.org/) and boost C++ libraries (www.boost.org) in the NASA Earth Exchange at the NASA Advanced Supercomputing Facility. GRASS GIS was used for visualization of results, and statistical analysis was carried out in R in a Linux system environment. A set of global endmembers for substrate, vegetation and dark objects was used to unmix the data with the three algorithms, and the results were compared with Singular Value Decomposition unmixed outputs available in the ENVI image processing software. First, computer-simulated data of different signal-to-noise ratios were used to evaluate the algorithms. The second set of experiments was carried out with a spectrally diverse collection of 11 Landsat-5 scenes (acquired in 2008) over an agricultural setup in Fresno, California, with ground data collected on the specific dates when the satellite passed over the site. Finally, in the third set of experiments, a pair of coincident clear-sky Landsat and WorldView-2 scenes for an urbanized area of San Francisco was used to assess the algorithms.
Validation of the results using descriptive statistics, correlation coefficient (cc), RMSE, boxplots and bivariate distribution functions indicated that, with the computer-simulated data, CLS was better than the other techniques. With the real-world data of an agricultural landscape, CLS was again superior, with a mean absolute error for all four methods close to 7.3%. For the urban setup, CLS demonstrated the highest average cc of 0.64 and the lowest average RMSE of 0.19 across all the endmembers.
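For context, a sum-to-one constrained least-squares unmixing of a single pixel has a simple closed form via a Lagrange multiplier. This sketch deliberately enforces only the sum-to-one constraint; a full constrained solver (like the CLS evaluated above) would also enforce nonnegativity, typically by an active-set or NNLS-style iteration:

```python
import numpy as np

def scls(E, y):
    """Sum-to-one constrained least squares for one pixel: minimize
    ||y - E a||^2 subject to sum(a) = 1, using a Lagrange multiplier.
    E is (bands x endmembers), y is the observed pixel spectrum."""
    G = E.T @ E
    Gi = np.linalg.inv(G)                # assumes full column rank
    ones = np.ones(E.shape[1])
    a_ls = Gi @ E.T @ y                  # unconstrained least-squares solution
    lam = (1.0 - ones @ a_ls) / (ones @ Gi @ ones)
    return a_ls + lam * Gi @ ones        # shifted to satisfy sum-to-one
```

On noise-free data whose true fractions already sum to one, the correction term vanishes and the exact fractions are recovered.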
Fuel-Flexible Gasification-Combustion Technology for Production of H2 and Sequestration-Ready CO2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parag Kulkarni; Jie Guan; Raul Subia
In the near future, the nation will continue to rely on fossil fuels for electricity, transportation, and chemicals. It is necessary to improve both the process efficiency and environmental impact of fossil fuel utilization, including greenhouse gas management. GE Global Research (GEGR) investigated an innovative fuel-flexible Unmixed Fuel Processor (UFP) technology with the potential to produce H{sub 2}, power, and sequestration-ready CO{sub 2} from coal and other solid fuels. The UFP technology offers the long-term potential for reduced cost, increased process efficiency relative to conventional gasification and combustion systems, and near-zero pollutant emissions. GE was awarded a contract from U.S. DOE NETL to investigate and develop the UFP technology. Work started on the Phase I program in October 2000 and on the Phase II effort in April 2005. In the UFP technology, coal, water and air are simultaneously converted into (1) a hydrogen-rich stream that can be utilized in fuel cells or turbines, (2) a CO{sub 2}-rich stream for sequestration, and (3) a high-temperature/pressure vitiated air stream to produce electricity in a gas turbine expander. The process produces near-zero emissions with an estimated efficiency higher than that of the Integrated Gasification Combined Cycle (IGCC) process with conventional CO{sub 2} separation. The Phase I R&D program established the chemical feasibility of the major reactions of the integrated UFP technology through lab-, bench- and pilot-scale testing. A risk analysis session was carried out at the end of the Phase I effort to identify the major risks in the UFP technology, and a plan was developed to mitigate these risks in Phase II of the program. The Phase II effort focused on three high-risk areas: economics, lifetime of solids used in the UFP process, and product gas quality for turbines (or the impact of impurities in the coal on the overall system).
The economic analysis included estimating the capital cost as well as the costs of hydrogen and electricity for a full-scale UFP plant. These costs were benchmarked against IGCC polygen plants with a similar level of CO{sub 2} capture. Based on the promising economic analysis comparison results (performed with help from Worley Parsons), GE recommended a 'Go' decision in April 2006 to continue the experimental investigation of the UFP technology to address the remaining risks, i.e., solids lifetime and the impact of impurities in the coal on the overall system. The solids attrition and lifetime risk was addressed via bench-scale experiments that monitored solids performance over time and by assessing materials interactions at operating conditions. The product gas under the third-reactor (high-temperature vitiated air) operating conditions was evaluated to assess the concentration of particulates, pollutants and other impurities relative to the specifications required for gas turbine feed streams. During this investigation, agglomeration of solids used in the UFP process was identified as a serious risk that impacts the lifetime of the solids and, in turn, the feasibility of the UFP technology. The main causes of the solids agglomeration were the combination of oxygen transfer material (OTM) reduction at temperatures of ~1000 C and interaction between the OTM and the CO{sub 2} absorbing material (CAM) at high operating temperatures (>1200 C). At the end of Phase II, in March 2008, GEGR recommended a 'No-go' decision for taking the UFP technology to the next level of development, i.e., development of a 3-5 MW prototype system, at this time. GEGR further recommended focused materials development research programs on improving the performance and lifetime of the solid materials used in UFP or chemical looping technologies. Scale-up activities would be recommended only after mitigating the risks involved with the agglomeration and overall lifetime of the solids.
This is the final report for Phase II of the DOE-funded Vision 21 program entitled 'Fuel-Flexible Gasification-Combustion Technology for Production of H{sub 2} and Sequestration-Ready CO{sub 2}' (DOE Award No. DE-FC26-00NT40974). The report focuses on the major accomplishments and lessons learned in analyzing the risks of the novel UFP technology during Phase II of the DOE program.
NASA Technical Reports Server (NTRS)
Asner, Gregory P.; Heidebrecht, Kathleen B.
2001-01-01
Remote sensing of vegetation cover and condition is critically needed to understand the impacts of land use and climate variability in arid and semi-arid regions. However, remote sensing of vegetation change in these environments is difficult for several reasons. First, individual plant canopies are typically small and do not reach the spatial scale of typical Landsat-like satellite image pixels. Second, the phenological status and subsequent dry carbon (or non-photosynthetic) fraction of plant canopies varies dramatically in both space and time throughout arid and semi-arid regions. Detection of only the 'green' part of the vegetation using a metric such as the normalized difference vegetation index (NDVI) thus yields limited information on the presence and condition of plants in these ecosystems. Monitoring of both photosynthetic vegetation (PV) and non-photosynthetic vegetation (NPV) is needed to understand a range of ecosystem characteristics including vegetation presence, cover and abundance, physiological and biogeochemical functioning, drought severity, fire fuel load, disturbance events and recovery from disturbance.
Using Land Surface Phenology to Detect Land Use Change in the Northern Great Plains
NASA Astrophysics Data System (ADS)
Nguyen, L. H.; Henebry, G. M.
2017-12-01
The Northern Great Plains of the US have been undergoing many types of land cover / land use change over the past two decades, including expansion of irrigation, conversion of grassland to cropland, biofuels production, urbanization, and fossil fuel mining. Much of the literature on these changes has relied on post-classification change detection based on a limited number of observations per year. Here we demonstrate an approach to characterize land dynamics through land surface phenology (LSP) by synergistic use of image time series at two scales. Our study areas include regions of interest (ROIs) across the Northern Great Plains located within Landsat path overlap zones to boost the number of valid observations (free of clouds or snow) each year. We first compute accumulated growing degree-days (AGDD) from MODIS 8-day composites of land surface temperature (MOD11A2 and MYD11A2). Using Landsat Collection 1 surface reflectance-derived vegetation indices (NDVI, EVI), we then fit at each pixel a downward convex quadratic model linking the vegetation index to each year's progression of AGDD. Because this quadratic model is linear in its fitted parameters, the fitted models can be linearly mixed and unmixed using a set of LSP endmembers (defined by the fitted parameter coefficients of the quadratic model) that represent "pure" land cover types with distinct seasonal patterns found within the region, such as winter wheat, spring wheat, maize, soybean, sunflower, hay/pasture/grassland, and developed/built-up, among others. Information about the land cover corresponding to each endmember is provided by the NLCD (National Land Cover Dataset) and CDL (Cropland Data Layer). We use linear unmixing to estimate the likely proportion of each LSP endmember within particular areas stratified by latitude. By tracking the proportions over the 2001-2011 period, we can quantify various types of land transitions in the Northern Great Plains.
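The per-pixel model described above can be sketched as an ordinary least-squares fit of a quadratic in AGDD. Because the model is linear in its coefficients, the coefficients of a mixed pixel are the abundance-weighted sums of the endmember coefficients, which is what makes the linear mixing/unmixing step possible. The coefficient values below are invented for illustration:

```python
import numpy as np

def fit_lsp(agdd, vi):
    """Fit the quadratic LSP model vi = a + b*agdd + c*agdd^2 by least
    squares (c < 0 gives the downward-opening seasonal curve)."""
    A = np.vstack([np.ones_like(agdd), agdd, agdd ** 2]).T
    coef, *_ = np.linalg.lstsq(A, vi, rcond=None)
    return coef  # (a, b, c)

# Because the fit is linear in (a, b, c), a pixel that is a 30/70 blend of
# two cover types has coefficients equal to the 30/70 blend of their
# separately fitted coefficients.
```

In the paper's workflow the fitted (a, b, c) triples play the role of spectra: "pure" cover types supply endmember coefficient vectors, and mixed pixels are unmixed against them.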
In-ground operation of Geothermic Fuel Cells for unconventional oil and gas recovery
NASA Astrophysics Data System (ADS)
Sullivan, Neal; Anyenya, Gladys; Haun, Buddy; Daubenspeck, Mark; Bonadies, Joseph; Kerr, Rick; Fischer, Bernhard; Wright, Adam; Jones, Gerald; Li, Robert; Wall, Mark; Forbes, Alan; Savage, Marshall
2016-01-01
This paper presents operating and performance characteristics of a nine-stack solid-oxide fuel cell combined-heat-and-power system. Integrated with a natural-gas fuel processor, air compressor, reactant-gas preheater, and diagnostics and control equipment, the system is designed for use in unconventional oil-and-gas processing. Termed a "Geothermic Fuel Cell" (GFC), the heat liberated by the fuel cell during electricity generation is harnessed to process oil shale into high-quality crude oil and natural gas. The 1.5-kWe SOFC stacks are packaged within three-stack GFC modules. Three GFC modules are mechanically and electrically coupled to a reactant-gas preheater and installed within the earth. During operation, significant heat is conducted from the Geothermic Fuel Cell to the surrounding geology. The complete system was continuously operated on hydrogen and natural-gas fuels for ~600 h. A quasi-steady operating point was established to favor heat generation (29.1 kWth) over electricity production (4.4 kWe). Thermodynamic analysis reveals a combined-heat-and-power efficiency of 55% at this condition. Heat flux to the geology averaged 3.2 kW/m across the 9-m length of the Geothermic Fuel Cell-preheater assembly. System performance is reviewed; some suggestions for improvement are proposed.
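As a quick consistency check of the reported operating point, the implied fuel energy input can be back-calculated from the stated heat and power outputs and the 55% combined-heat-and-power efficiency. The fuel input itself is not stated in the abstract; the figure below is inferred, not reported:

```python
# Reported operating point: 29.1 kWth of heat, 4.4 kWe of electricity,
# 55% combined-heat-and-power efficiency.
heat_kw = 29.1
power_kw = 4.4
chp_eff = 0.55

# CHP efficiency = (useful heat + electricity) / fuel energy input,
# so the implied fuel input is about 61 kW of fuel energy.
fuel_input_kw = (heat_kw + power_kw) / chp_eff
```

This also makes clear how strongly the operating point favors heat over electricity: roughly 87% of the useful output is thermal.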
Method and apparatus for real-time measurement of fuel gas compositions and heating values
Zelepouga, Serguei; Pratapas, John M.; Saveliev, Alexei V.; Jangale, Vilas V.
2016-03-22
An exemplary embodiment can be an apparatus for real-time, in situ measurement of gas compositions and heating values. The apparatus includes a near infrared sensor for measuring concentrations of hydrocarbons and carbon dioxide, a mid infrared sensor for measuring concentrations of carbon monoxide and a semiconductor based sensor for measuring concentrations of hydrogen gas. A data processor having a computer program for reducing the effects of cross-sensitivities of the sensors to components other than target components of the sensors is also included. Also provided are corresponding or associated methods for real-time, in situ determination of a composition and heating value of a fuel gas.
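The cross-sensitivity reduction mentioned above can be sketched, under the assumption of a linear sensor model, as inverting a cross-sensitivity matrix: each raw reading is modeled as the target species' response plus small contributions from the other species. The matrix entries and readings below are invented for illustration and do not come from the patent:

```python
import numpy as np

# Hypothetical 3x3 cross-sensitivity matrix S for the three sensors
# (near-IR: hydrocarbons/CO2, mid-IR: CO, semiconductor: H2): raw
# readings r are modeled as r = S @ c, where c holds the true
# concentrations. Off-diagonal entries are the cross-talk terms.
S = np.array([[1.00, 0.08, 0.02],
              [0.05, 1.00, 0.03],
              [0.01, 0.04, 1.00]])

raw = np.array([0.52, 0.31, 0.12])      # illustrative raw sensor readings
concentrations = np.linalg.solve(S, raw)  # cross-talk-corrected values
```

A real system would calibrate S against reference gas mixtures; the heating value would then be computed from the corrected concentrations.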
Science & Technology Review November 2003
DOE Office of Scientific and Technical Information (OSTI.GOV)
McMahon, D
2003-11-01
This issue of Science & Technology Review covers the following topics: (1) We Will Always Need Basic Science--Commentary by Tomas Diaz de la Rubia; (2) When Semiconductors Go Nano--experiments and computer simulations reveal some surprising behavior of semiconductors at the nanoscale; (3) Retinal Prosthesis Provides Hope for Restoring Sight--A microelectrode array is being developed for a retinal prosthesis; (4) Maglev on the Development Track for Urban Transportation--Inductrack, a Livermore concept to levitate train cars using permanent magnets, will be demonstrated on a 120-meter-long test track; and (5) Power Plant on a Chip Moves Closer to Reality--Laboratory-designed fuel processor gives power boost to dime-size fuel cell.
NASA Astrophysics Data System (ADS)
Martin, Gabriel; Gonzalez-Ruiz, Vicente; Plaza, Antonio; Ortiz, Juan P.; Garcia, Inmaculada
2010-07-01
Lossy hyperspectral image compression has received considerable interest in recent years due to the extremely high dimensionality of the data. However, the impact of lossy compression on spectral unmixing techniques has not been widely studied. These techniques characterize mixed pixels (resulting from insufficient spatial resolution) in terms of a suitable combination of spectrally pure substances (called endmembers) weighted by their estimated fractional abundances. This paper focuses on the impact of JPEG2000-based lossy compression of hyperspectral images on the quality of the endmembers extracted by different algorithms. The three considered algorithms are the orthogonal subspace projection (OSP), which uses only spectral information, and the automatic morphological endmember extraction (AMEE) and spatial spectral endmember extraction (SSEE), which integrate both spatial and spectral information in the search for endmembers. The impact of compression on the resulting abundance estimation, based on the endmembers derived by the different methods, is also assessed. Experiments are conducted using a hyperspectral data set collected by the NASA Jet Propulsion Laboratory over the Cuprite mining district in Nevada. The experimental results are quantitatively analyzed using reference information available from the U.S. Geological Survey, resulting in recommendations to specialists interested in applying endmember extraction and unmixing algorithms to compressed hyperspectral data.
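One common way to quantify how lossy compression degrades extracted endmembers (a plausible metric for studies like this, though the abstract does not list the paper's exact figures of merit) is the spectral angle between an endmember extracted from the original image and its counterpart from the decompressed one:

```python
import numpy as np

def spectral_angle(u, v):
    """Spectral angle (radians) between two spectra: 0 means identical
    shape regardless of scale, larger values mean greater distortion."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos, -1.0, 1.0))  # clip guards rounding error
```

Because the angle is scale-invariant, it isolates shape distortion from the brightness changes that lossy compression can also introduce.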
NASA Astrophysics Data System (ADS)
Deng, Junjun; Zhang, Yanru; Qiu, Yuqing; Zhang, Hongliang; Du, Wenjiao; Xu, Lingling; Hong, Youwei; Chen, Yanting; Chen, Jinsheng
2018-04-01
Source apportionment of fine particulate matter (PM2.5) was conducted at the Lin'an Regional Atmospheric Background Station (LA) in the Yangtze River Delta (YRD) region of China from July 2014 to April 2015 with three receptor models: principal component analysis combined with multiple linear regression (PCA-MLR), UNMIX, and Positive Matrix Factorization (PMF). The model performance, source identification and source contributions of the three models were analyzed and inter-compared. Good correlations between the reconstructed and measured concentrations of PM2.5 and its major chemical species were obtained for all models. PMF resolved almost all of the PM2.5 mass, while PCA-MLR and UNMIX explained about 80%. Five, four and seven sources were identified by PCA-MLR, UNMIX and PMF, respectively. Combustion, a secondary source, a marine source, dust and industrial activities were identified by all three receptor models. The combustion and secondary sources were the major sources, together contributing over 60% of PM2.5. The PMF model performed better at separating the different combustion sources. These findings improve the understanding of PM2.5 sources in the background region.
Accuracy assessment of linear spectral mixture model due to terrain undulation
NASA Astrophysics Data System (ADS)
Wang, Tianxing; Chen, Songlin; Ma, Ya
2008-12-01
Mixture spectra are common in remote sensing due to the limitations of spatial resolution and the heterogeneity of the land surface. Over the past 30 years, many subpixel models have been developed to investigate the information within mixed pixels. The linear spectral mixture model (LSMM) is a simpler and more general subpixel model. The LSMM, also known as spectral mixture analysis, is a widely used procedure to determine the proportions of endmembers (constituent materials) within a pixel based on the endmembers' spectral characteristics. The unmixing accuracy of the LSMM is restricted by a variety of factors, but research on the LSMM has mostly focused on assessing nonlinear effects and on the techniques used to select endmembers; unfortunately, environmental conditions of the study area that can sway the unmixing accuracy, such as atmospheric scattering and terrain undulation, have not been studied. This paper examines the accuracy uncertainty of the LSMM resulting from terrain undulation. An ASTER dataset was chosen and the C terrain correction algorithm was applied to it. On this basis, fractional abundances for different cover types were extracted using the LSMM from ASTER data both before and after C terrain illumination correction. Regression analyses and an IKONOS image were used to assess the unmixing accuracy. Results showed that terrain undulation can dramatically constrain the application of the LSMM in mountainous areas. Specifically, for vegetation abundances, an improvement in unmixing accuracy of 17.6% (R2, regressed against NDVI) and 18.6% (R2, regressed against MVI) was achieved by removing terrain undulation effects. Overall, this study indicated quantitatively that effective removal or minimization of terrain illumination effects is essential when applying the LSMM, and it provides a new case study for LSMM applications in mountainous areas.
In addition, the methods employed in this study could be used to evaluate different terrain correction algorithms in further studies.
Effects of band selection on endmember extraction for forestry applications
NASA Astrophysics Data System (ADS)
Karathanassi, Vassilia; Andreou, Charoula; Andronis, Vassilis; Kolokoussis, Polychronis
2014-10-01
In spectral unmixing theory, data reduction techniques play an important role, as hyperspectral imagery contains an immense amount of data, posing many challenging problems such as data storage, computational efficiency, and the so-called "curse of dimensionality". Feature extraction and feature selection are the two main approaches to dimensionality reduction. Feature extraction techniques reduce the dimensionality of hyperspectral data by applying transforms to the data. Feature selection techniques retain the physical meaning of the data by selecting, from the input hyperspectral dataset, a set of bands that mainly contain the information needed for spectral unmixing. Although feature selection techniques are well known for their dimensionality reduction potential, they are rarely used in the unmixing process. The majority of existing state-of-the-art dimensionality reduction methods set criteria on the spectral information derived from the whole wavelength range in order to define the optimum spectral subspace. These criteria are not associated with any particular application but with data statistics, such as correlation and entropy values. However, each application is associated with specific land cover materials, whose spectral characteristics present variations at specific wavelengths. In forestry, for example, many applications focus on tree leaves, in which specific pigments such as chlorophyll, xanthophyll, etc. determine the wavelengths where tree species, diseases, etc., can be detected. For such applications, when the unmixing process is applied, the tree species, diseases, etc., are considered the endmembers of interest. This paper focuses on investigating the effects of band selection on endmember extraction by exploiting the information in the vegetation absorbance spectral zones.
More precisely, it is explored whether endmember extraction can be optimized when specific sets of initial bands related to leaf spectral characteristics are selected. The experiments comprise the application of well-known signal subspace estimation and endmember extraction methods to hyperspectral imagery of a forest area. Evaluation of the extracted endmembers showed that more forest species can be extracted as endmembers using the selected bands.
Assessing FRET using Spectral Techniques
Leavesley, Silas J.; Britain, Andrea L.; Cichon, Lauren K.; Nikolaev, Viacheslav O.; Rich, Thomas C.
2015-01-01
Förster resonance energy transfer (FRET) techniques have proven invaluable for probing the complex nature of protein–protein interactions, protein folding, and intracellular signaling events. These techniques have traditionally been implemented with the use of one or more fluorescence band-pass filters, either as fluorescence microscopy filter cubes, or as dichroic mirrors and band-pass filters in flow cytometry. In addition, new approaches for measuring FRET, such as fluorescence lifetime and acceptor photobleaching, have been developed. Hyperspectral techniques for imaging and flow cytometry have also been shown to be promising for performing FRET measurements. In this study, we have compared traditional (filter-based) FRET approaches to three spectral-based approaches: the ratio of acceptor-to-donor peak emission, linear spectral unmixing, and linear spectral unmixing with a correction for direct acceptor excitation. All methods are estimates of FRET efficiency, except for one-filter set and three-filter set FRET indices, which are included for consistency with prior literature. In the first part of this study, spectrofluorimetric data were collected from a CFP–Epac–YFP FRET probe that has been used for intracellular cAMP measurements. All comparisons were performed using the same spectrofluorimetric datasets as input data, to provide a relevant comparison. Linear spectral unmixing resulted in measurements with the lowest coefficient of variation (0.10) as well as accurate fits using the Hill equation. FRET efficiency methods produced coefficients of variation of less than 0.20, while FRET indices produced coefficients of variation greater than 8.00. These results demonstrate that spectral FRET measurements provide improved response over standard, filter-based measurements. Using spectral approaches, single-cell measurements were conducted through hyperspectral confocal microscopy, linear unmixing, and cell segmentation with quantitative image analysis. 
Results from these studies confirmed that spectral imaging is effective for measuring subcellular, time-dependent FRET dynamics and that additional fluorescent signals can be readily separated from FRET signals, enabling multilabel studies of molecular interactions. PMID:23929684
Assessing FRET using spectral techniques.
Leavesley, Silas J; Britain, Andrea L; Cichon, Lauren K; Nikolaev, Viacheslav O; Rich, Thomas C
2013-10-01
Förster resonance energy transfer (FRET) techniques have proven invaluable for probing the complex nature of protein-protein interactions, protein folding, and intracellular signaling events. These techniques have traditionally been implemented with the use of one or more fluorescence band-pass filters, either as fluorescence microscopy filter cubes, or as dichroic mirrors and band-pass filters in flow cytometry. In addition, new approaches for measuring FRET, such as fluorescence lifetime and acceptor photobleaching, have been developed. Hyperspectral techniques for imaging and flow cytometry have also been shown to be promising for performing FRET measurements. In this study, we have compared traditional (filter-based) FRET approaches to three spectral-based approaches: the ratio of acceptor-to-donor peak emission, linear spectral unmixing, and linear spectral unmixing with a correction for direct acceptor excitation. All methods are estimates of FRET efficiency, except for one-filter set and three-filter set FRET indices, which are included for consistency with prior literature. In the first part of this study, spectrofluorimetric data were collected from a CFP-Epac-YFP FRET probe that has been used for intracellular cAMP measurements. All comparisons were performed using the same spectrofluorimetric datasets as input data, to provide a relevant comparison. Linear spectral unmixing resulted in measurements with the lowest coefficient of variation (0.10) as well as accurate fits using the Hill equation. FRET efficiency methods produced coefficients of variation of less than 0.20, while FRET indices produced coefficients of variation greater than 8.00. These results demonstrate that spectral FRET measurements provide improved response over standard, filter-based measurements. Using spectral approaches, single-cell measurements were conducted through hyperspectral confocal microscopy, linear unmixing, and cell segmentation with quantitative image analysis. 
Results from these studies confirmed that spectral imaging is effective for measuring subcellular, time-dependent FRET dynamics and that additional fluorescent signals can be readily separated from FRET signals, enabling multilabel studies of molecular interactions. © 2013 International Society for Advancement of Cytometry.
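The linear spectral unmixing approach compared above can be sketched for a single measured emission spectrum: solve for donor and acceptor weights against reference spectra, then form an apparent FRET ratio from the unmixed weights. The Gaussian reference spectra and the ratio definition are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

def unmix_fret(spectrum, donor_ref, acceptor_ref):
    """Linearly unmix a measured emission spectrum into donor and acceptor
    contributions and return acceptor/(donor + acceptor) as an apparent
    FRET ratio. Plain least squares keeps the sketch minimal; nonnegative
    least squares would be more robust for noisy data."""
    R = np.column_stack([donor_ref, acceptor_ref])
    coefs, *_ = np.linalg.lstsq(R, spectrum, rcond=None)
    d, a = coefs
    return a / (d + a)

# Illustrative Gaussian reference spectra (CFP-like donor, YFP-like acceptor).
wl = np.linspace(450.0, 650.0, 101)
donor_ref = np.exp(-0.5 * ((wl - 490.0) / 20.0) ** 2)
acceptor_ref = np.exp(-0.5 * ((wl - 530.0) / 25.0) ** 2)
```

On a noise-free spectrum built as a known blend of the two references, the unmixed ratio recovers the blend exactly; real data would need background subtraction and a direct-excitation correction, as the study notes.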
Emission spectra profiling of fluorescent proteins in living plant cells
2013-01-01
Background Fluorescence imaging at high spectral resolution allows the simultaneous recording of multiple fluorophores without switching optical filters, which is especially useful for time-lapse analysis of living cells. The collected emission spectra can be used to distinguish fluorophores by a computational analysis called linear unmixing. The availability of accurate reference spectra for different fluorophores is crucial for this type of analysis. The reference spectra used by plant cell biologists are in most cases derived from the analysis of fluorescent proteins in solution or produced in animal cells, although these spectra are influenced by both the cellular environment and the components of the optical system. For instance, plant cells contain various autofluorescent compounds, such as cell wall polymers and chlorophyll, that affect the spectral detection of some fluorophores. Therefore, it is important to acquire both reference and experimental spectra under the same biological conditions and through the same imaging systems. Results Entry clones (pENTR) of fluorescent proteins (FPs) were constructed in order to create C- or N-terminal protein fusions with the MultiSite Gateway recombination technology. The emission spectra for eight FPs, fused C-terminally to the A- or B-type cyclin-dependent kinases (CDKA;1 and CDKB1;1) and transiently expressed in epidermal cells of tobacco (Nicotiana benthamiana), were determined using the Olympus FluoView™ FV1000 Confocal Laser Scanning Microscope. These experimental spectra were then used in unmixing experiments in order to separate the emission of fluorophores with overlapping spectral properties in living plant cells. Conclusions Spectral imaging and linear unmixing have great potential for efficient multicolor detection in living plant cells. The emission spectra for eight of the most commonly used FPs were obtained in epidermal cells of tobacco leaves and used in unmixing experiments. 
The generated set of FP Gateway entry vectors represents a valuable resource for plant cell biologists. PMID:23552272
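The linear unmixing described in this abstract reduces, in its simplest form, to a non-negative least-squares fit of reference spectra to a measured spectrum. A minimal sketch, using synthetic Gaussian spectra as stand-ins for measured FP references (the peak wavelengths and noise level are assumptions, not the paper's data):

```python
import numpy as np
from scipy.optimize import nnls

wavelengths = np.linspace(480, 620, 50)            # detection channels (nm)

def reference(mu, sigma=15.0):
    """Unit-area Gaussian stand-in for a measured reference spectrum."""
    g = np.exp(-0.5 * ((wavelengths - mu) / sigma) ** 2)
    return g / g.sum()

# Two spectrally overlapping fluorophores (peak positions are assumptions)
R = np.column_stack([reference(507.0), reference(527.0)])

true_fractions = np.array([0.7, 0.3])
measured = R @ true_fractions + 0.001 * np.random.default_rng(0).normal(size=50)

fractions, residual = nnls(R, measured)            # non-negative least squares
print(fractions / fractions.sum())                 # fractional contributions
```

With well-characterized reference spectra acquired under the same imaging conditions, the recovered coefficients give each fluorophore's fractional contribution per pixel.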
Estimating forest species abundance through linear unmixing of CHRIS/PROBA imagery
NASA Astrophysics Data System (ADS)
Stagakis, Stavros; Vanikiotis, Theofilos; Sykioti, Olga
2016-09-01
The advancing technology of hyperspectral remote sensing offers the opportunity of accurate land cover characterization of complex natural environments. In this study, a linear spectral unmixing algorithm that incorporates a novel hierarchical Bayesian approach (BI-ICE) was applied to two spatially and temporally adjacent CHRIS/PROBA images over a forest in North Pindos National Park (Epirus, Greece). The aim is to investigate the potential of this algorithm to discriminate two different forest species (i.e. beech - Fagus sylvatica, pine - Pinus nigra) and produce accurate species-specific abundance maps. The unmixing results were evaluated in uniformly distributed plots across the test site using measured fractions of each species derived from very high resolution aerial orthophotos. Landsat-8 images were also used to produce a conventional discrete-type classification map of the test site. This map was used to define the exact borders of the test site and to compare the thematic information of the two mapping approaches (discrete vs. abundance mapping). The ground truth information required for training and validation of the applied mapping methodologies was collected during a field campaign across the study site. Abundance estimates reached very good overall accuracy (R2 = 0.98, RMSE = 0.06). The most significant source of error in our results was shadowing, which was very intense in some areas of the test site due to the low solar elevation during the CHRIS acquisitions. It is also demonstrated that the two mapping approaches are in accordance across pure and dense forest areas, but the conventional classification map fails to describe the natural spatial gradients of each species and the actual species mixture across the test site. Overall, the BI-ICE algorithm demonstrated an increased potential to unmix challenging objects with high spectral similarity, such as different vegetation species, under real rather than optimal acquisition conditions.
Its full potential remains to be investigated in further and more complex study sites in view of the upcoming satellite hyperspectral missions.
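Abundance mapping of the kind evaluated above typically imposes non-negativity and sum-to-one constraints on the per-pixel species fractions. A hedged sketch (synthetic endmembers, not BI-ICE, which additionally models uncertainty hierarchically) using the standard weighted-augmentation trick for the sum-to-one constraint:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
bands = 30
# Two synthetic endmember spectra standing in for beech and pine signatures
E = rng.uniform(0.05, 0.6, size=(bands, 2))

def unmix_fcls(pixel, E, delta=1e3):
    """Non-negative least squares with a heavily weighted sum-to-one row."""
    A = np.vstack([E, delta * np.ones((1, E.shape[1]))])
    b = np.append(pixel, delta)
    abundances, _ = nnls(A, b)
    return abundances

true_f = np.array([0.65, 0.35])
pixel = E @ true_f + 0.005 * rng.normal(size=bands)

f = unmix_fcls(pixel, E)
print(f, f.sum())                                  # fractions summing to ~1
```

Running this per pixel yields the species-specific abundance maps that the discrete classification cannot provide.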
Application of hierarchical Bayesian unmixing models in river sediment source apportionment
NASA Astrophysics Data System (ADS)
Blake, Will; Smith, Hugh; Navas, Ana; Bodé, Samuel; Goddard, Rupert; Kuzyk, Zou Zou; Lennard, Amy; Lobb, David; Owens, Phil; Palazon, Leticia; Petticrew, Ellen; Gaspar, Leticia; Stock, Brian; Boeckx, Pascal; Semmens, Brice
2016-04-01
Fingerprinting and unmixing concepts are used widely across environmental disciplines for forensic evaluation of pollutant sources. In aquatic and marine systems, this includes tracking the source of organic and inorganic pollutants in water and linking problem sediment to soil erosion and land use sources. It is, however, the particular complexity of ecological systems that has driven creation of the most sophisticated mixing models, primarily to (i) evaluate diet composition in complex ecological food webs, (ii) inform population structure and (iii) explore animal movement. In the context of the new hierarchical Bayesian unmixing model, MixSIAR, developed to characterise intra-population niche variation in ecological systems, we evaluate the linkage between ecological 'prey' and 'consumer' concepts and river basin sediment 'source' and sediment 'mixtures' to exemplify the value of ecological modelling tools to river basin science. Recent studies have outlined advantages presented by Bayesian unmixing approaches in handling complex source and mixture datasets while dealing appropriately with uncertainty in parameter probability distributions. MixSIAR is unique in that it allows individual fixed and random effects associated with mixture hierarchy, i.e. factors that might exert an influence on model outcome for mixture groups, to be explored within the source-receptor framework. This offers new and powerful ways of interpreting river basin apportionment data. In this contribution, key components of the model are evaluated in the context of common experimental designs for sediment fingerprinting studies, namely simple, nested and distributed catchment sampling programmes.
Illustrative examples using geochemical and compound specific stable isotope datasets are presented and used to discuss best practice with specific attention to (1) the tracer selection process, (2) incorporation of fixed effects relating to sample timeframe and sediment type in the modelling process, (3) deriving and using informative priors in sediment fingerprinting context and (4) transparency of the process and replication of model results by other users.
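As a toy illustration of the Bayesian unmixing idea (a drastic simplification of MixSIAR's hierarchical model; all tracer values are invented), a grid posterior over the mixing fraction of a two-source, two-tracer sediment mixture:

```python
import numpy as np

topsoil = np.array([12.0, 3.5])        # tracer means for source 1 (hypothetical)
channel = np.array([4.0, 9.0])         # tracer means for source 2 (hypothetical)
sigma = np.array([1.0, 1.0])           # per-tracer spread (assumed)
mixture = np.array([9.6, 5.2])         # observed river sediment mixture

f_grid = np.linspace(0.0, 1.0, 1001)   # uniform prior on the topsoil fraction f
# Predicted mixture for each candidate f: f*topsoil + (1-f)*channel
pred = np.outer(f_grid, topsoil) + np.outer(1 - f_grid, channel)
log_lik = -0.5 * (((mixture - pred) / sigma) ** 2).sum(axis=1)
post = np.exp(log_lik - log_lik.max())
post /= post.sum()                     # normalized posterior over f

f_mean = (f_grid * post).sum()
print(round(f_mean, 2))                # posterior mean topsoil fraction
```

MixSIAR generalizes this to many sources and tracers, with fixed and random effects and full MCMC sampling, but the source-to-mixture mass-balance core is the same.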
High Fidelity Simulations of Unsteady Flow through Turbopumps and Flowliners
NASA Technical Reports Server (NTRS)
Kiris, Cetin C.; Kwak, Dochan; Chan, William; Housman, Jeff
2006-01-01
High fidelity computations were carried out to analyze the orbiter LH2 feedline flowliner. Computations were performed on the Columbia platform, a 10,240-processor supercluster consisting of 20 Altix nodes with 512 processors each. Various computational models were used to characterize the unsteady flow features in the turbopump, including the orbiter Low-Pressure-Fuel-Turbopump (LPFTP) inducer, the orbiter manifold and a test article used to represent the manifold. Unsteady flow originating from the orbiter LPFTP inducer is one of the major contributors to the high frequency cyclic loading that results in high cycle fatigue damage to the gimbal flowliners just upstream of the LPFTP. The flow fields for the orbiter manifold and representative test article were computed and analyzed for similarities and differences. The incompressible Navier-Stokes flow solver INS3D, based on the artificial compressibility method, was used to compute the flow of liquid hydrogen in each test article.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alam, Maksudul M.; Sampathkumaran, Uma
The present invention relates to a modular chemiresistive sensor; in particular, a modular chemiresistive sensor for hypergolic fuel and oxidizer leak detection, carbon dioxide monitoring and detection of disease biomarkers. The sensor preferably has two gold or platinum electrodes mounted on a silicon substrate, where the electrodes are connected to a power source and are separated by a gap of 0.5 to 4.0 µm. A polymer nanowire or carbon nanotube spans the gap between the electrodes and connects the electrodes electrically. The electrodes are further connected to a circuit board having a processor and data storage, where the processor can measure current and voltage values between the electrodes and compare the current and voltage values with current and voltage values stored in the data storage and assigned to particular concentrations of a pre-determined substance such as those listed above or a variety of other substances.
NASA Technical Reports Server (NTRS)
Veyo, S.E.
1997-01-01
This report describes the successful testing of a 27 kWe Solid Oxide Fuel Cell (SOFC) generator fueled by natural gas and/or a fuel gas produced by a brassboard logistics fuel preprocessor (LFP). The test period began on May 24, 1995 and ended on February 26, 1996 with the successful completion of all program requirements and objectives. During this time period, this power system produced 118.2 MWh of electric power. No degradation of the generator's performance was measured after 5582 accumulated hours of operation on these fuels: local natural gas - 3261 hours, jet fuel reformate gas - 766 hours, and diesel fuel reformate gas - 1555 hours. This SOFC generator was thermally cycled from full operating temperature to room temperature and back to operating temperature six times, because of failures of support system components and the occasional loss of test site power, without measurable cell degradation. Numerous outages of the LFP did not interrupt the generator's operation because the fuel control system quickly switched to local natural gas when an alarm indicated that the LFP reformate fuel supply had been interrupted. The report presents the measured electrical performance of the generator on all three fuel types and notes the small differences due to fuel type. Operational difficulties due to component failures are well documented even though they did not affect the overall excellent performance of this SOFC power generator. The final two appendices describe in detail the LFP design and the operating history of the tested brassboard LFP.
Turbulent unmixing: how marine turbulence drives patchy distributions of motile phytoplankton
NASA Astrophysics Data System (ADS)
Durham, William; Climent, Eric; Barry, Michael; de Lillo, Filippo; Boffetta, Guido; Cencini, Massimo; Stocker, Roman
2013-11-01
Centimeter-scale patchiness in the distribution of phytoplankton increases the efficacy of many important ecological interactions in the marine food web. We show that turbulent fluid motion, usually synonymous with mixing, instead triggers intense small-scale patchiness in the distribution of motile phytoplankton. We use a suite of experiments, direct numerical simulations of turbulence, and analytical tools to show that turbulent shear and acceleration direct the motility of cells towards well-defined regions of the flow, increasing local cell concentrations more than tenfold. This motility-driven `unmixing' offers an explanation for why motile cells are often more patchily distributed than non-motile cells and provides a mechanistic framework for understanding how turbulence, whose strength varies profoundly in marine environments, impacts ocean productivity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calder, Stuart A; Cao, Guixin; Okamoto, Satoshi
The J_eff=1/2 state is manifested in systems with large cubic crystal field splitting and spin-orbit coupling that are comparable to the on-site Coulomb interaction, U. 5d transition metal oxides host parameters in this regime, and strong evidence for this state in Sr2IrO4, and additional iridates, has been presented. All the candidates, however, deviate from the cubic crystal field required to provide an unmixed canonical J_eff=1/2 state, impacting the development of a robust model of this novel insulating and magnetic state. We present experimental and theoretical results that not only show Ca4IrO6 hosts the state, but furthermore uniquely resides in the limit required for a canonical unmixed J_eff=1/2 state.
Saraee, Hossein Soukht; Jafarmadar, Samad; Kheyrollahi, Javad; Hosseinpour, Alireza
2018-03-01
In this study, methyl ester of Sisymbrium plant seed oil, with the chemical formula C18H34O2, is produced for the first time with the aid of ultrasonic waves and in the presence of a nanocatalyst. After measuring its characteristics and comparing them with the ASTM standard, it is tested and evaluated at different ratios with diesel fuel in a single-cylinder diesel engine. The reactions are accomplished in a flask by an ultrasonic processor unit in the presence of a CaO-MgO nanocatalyst. The engine tests were conducted based on the engine short time experiment. The results showed that with increasing biodiesel ratio in the fuel blend, the levels of CO, HC, and smoke opacity decreased compared to diesel fuel due to the improvement of the combustion process, while NOx emissions increased owing to the high pressure and temperature of the combustion chamber. The produced biodiesel fuel also caused an increase in fuel consumption and exhaust gas temperature. Overall, with regard to its effects on the engine and its being an easily cultivated native plant, it can be concluded that Sisymbrium oil biodiesel and its blends with diesel fuel can be applied as an alternative fuel.
Xie, Dengfeng; Zhang, Jinshui; Zhu, Xiufang; Pan, Yaozhong; Liu, Hongli; Yuan, Zhoumiqi; Yun, Ya
2016-02-05
Remote sensing technology plays an important role in monitoring rapid changes of the Earth's surface. However, sensors that can simultaneously provide satellite images with both high temporal and high spatial resolution have not yet been designed. This paper proposes an improved spatial and temporal adaptive reflectance fusion model (STARFM) with the help of an unmixing-based method (USTARFM) to generate the high spatial and temporal resolution data needed for the study of heterogeneous areas. The results showed that the USTARFM had higher accuracy than the STARFM method in two aspects of analysis: individual bands and heterogeneity analysis. Taking the predicted NIR band as an example, the correlation coefficients (r) for the USTARFM, STARFM and unmixing methods were 0.96, 0.95 and 0.90, respectively (p-value < 0.001); Root Mean Square Error (RMSE) values were 0.0245, 0.0300 and 0.0401, respectively; and ERGAS values were 0.5416, 0.6507 and 0.8737, respectively. The USTARFM showed consistently higher performance than STARFM when the degree of heterogeneity ranged from 2 to 10, highlighting that this method can solve the data fusion problems faced when using STARFM. Additionally, the USTARFM method could help researchers achieve better performance than STARFM at a smaller window size owing to its quantitative representation of heterogeneous land surfaces.
Electric Fuel Pump Condition Monitor System Using Electricalsignature Analysis
Haynes, Howard D. [Knoxville, TN]; Cox, Daryl F. [Knoxville, TN]; Welch, Donald E. [Oak Ridge, TN]
2005-09-13
A pump diagnostic system and method comprising current sensing probes clamped on electrical motor leads of a pump for sensing only current signals on incoming motor power, a signal processor having a means for buffering and anti-aliasing current signals into a pump motor current signal, and a computer having a means for analyzing, displaying, and reporting motor current signatures from the motor current signal to determine pump health using integrated motor and pump diagnostic parameters.
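The motor-current signature analysis the patent describes can be illustrated with a basic spectral peak search on a simulated current signal (the 60 Hz line component and the 35 Hz fault sideband are illustrative assumptions, not values from the patent):

```python
import numpy as np

fs = 2000.0                                       # sample rate (Hz)
t = np.arange(0, 4.0, 1 / fs)
current = (10.0 * np.sin(2 * np.pi * 60 * t)      # line-frequency component
           + 0.2 * np.sin(2 * np.pi * 35 * t)     # hypothetical fault sideband
           + 0.05 * np.random.default_rng(2).normal(size=t.size))

spectrum = np.abs(np.fft.rfft(current)) / t.size  # one-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Report the dominant peak away from the line frequency as a fault candidate
band = (freqs > 5) & (freqs < 55)
fault_freq = freqs[band][np.argmax(spectrum[band])]
print(fault_freq)
```

In a real condition monitor, such sideband frequencies and amplitudes would be trended over time and compared against baseline pump signatures.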
Joint Long-Range Energy Study for Greater Fairbanks Military Complex
2005-02-01
be viewed as a two-stage processor of a fuel or feedstock. The feedstock is first gasified using high-temperature plasma heating systems at...Coal-Fired Boilers with New Circulating Fluidized-Bed Boilers (CFBs). EAFB anticipates replacing two current boilers with two new boilers. This...definition to support DD Form 1391 budget level cost estimates for new coal-fired CHPPs at FWA and EAFB and for two new coal-fired CFBs at EAFB • update
Metal Oxide/Zeolite Combination Absorbs H2S
NASA Technical Reports Server (NTRS)
Voecks, Gerald E.; Sharma, Pramod K.
1989-01-01
Mixed copper and molybdenum oxides supported in the pores of zeolite were found to remove H2S from a mixture of gases rich in hydrogen and steam at temperatures from 256 to 538 °C. An absorber of H2S is needed to clean up gas streams from fuel processors that incorporate high-temperature steam reformers or hydrodesulfurizing units. Zeolites were chosen as supporting materials because of their high porosity, rigidity, alumina content, and variety of both composition and form.
Geometric Mixing, Peristalsis, and the Geometric Phase of the Stomach.
Arrieta, Jorge; Cartwright, Julyan H E; Gouillart, Emmanuelle; Piro, Nicolas; Piro, Oreste; Tuval, Idan
2015-01-01
Mixing fluid in a container at low Reynolds number--in an inertialess environment--is not a trivial task. Reciprocating motions merely lead to cycles of mixing and unmixing, so continuous rotation, as used in many technological applications, would appear to be necessary. However, there is another solution: movement of the walls in a cyclical fashion to introduce a geometric phase. We show using journal-bearing flow as a model that such geometric mixing is a general tool for using deformable boundaries that return to the same position to mix fluid at low Reynolds number. We then simulate a biological example: we show that mixing in the stomach functions because of the "belly phase": peristaltic movement of the walls in a cyclical fashion introduces a geometric phase that avoids unmixing. PMID:26154384
The PEMFC-integrated CO oxidation — a novel method of simplifying the fuel cell plant
NASA Astrophysics Data System (ADS)
Rohland, Bernd; Plzak, Vojtech
Natural gas and methanol are the most economical fuels for residential fuel cell power generators as well as for mobile PEM fuel cells. However, they have to be reformed with steam into hydrogen, which must be cleaned of CO by the shift reaction and by partial oxidation to a level of no more than 30 ppm CO. This level is set by the Pt/Ru-C anode of the PEMFC. A higher partial oxidation reaction rate for CO than that of Pt/Ru-C can be achieved in an oxidic Au-catalyst system. In the Fe2O3-Au system, a reaction rate of 2×10⁻³ mol CO/(s·g Au) at 1000 ppm CO and 5% "air bleed" at 80°C is achieved. This high rate makes it possible to construct a catalyst sheet for each cell within a PEMFC stack. Practical and theoretical current/voltage characteristics of PEMFCs with the catalyst sheet are presented at 1000 ppm CO in hydrogen with 5% "air bleed". This offers the possibility of simplifying the gas processor of the plant.
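As a rough plausibility check on the quoted rate (our own back-of-envelope arithmetic, not the authors' calculation), the Au mass needed per catalyst sheet for an assumed per-cell feed rate:

```python
# Quantities from the abstract
rate = 2e-3             # mol CO / (s * g Au) at 1000 ppm CO
co_fraction = 1000e-6   # 1000 ppm CO in the reformate feed
# Assumed per-cell feed rate (hypothetical, for scale only)
h2_flow = 1e-3          # mol/s

co_flow = co_fraction * h2_flow   # mol CO/s to be oxidized
au_needed = co_flow / rate        # grams of Au per catalyst sheet
print(au_needed)                  # -> 0.0005 g, i.e. about 0.5 mg of Au
```

Under these assumptions the required Au loading per sheet is sub-milligram, which is consistent with the abstract's claim that a per-cell catalyst sheet is practical.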
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lundquist, Tryg; Spierling, Ruth; Poole, Kyle
The objective of this project was to develop and demonstrate methods of recycling water and nutrients for algal biofuels production. Recycling was accomplished both internal to the system and, in a broader sense, through import and reuse of municipal wastewater. Such an integrated system with wastewater input had not been demonstrated previously, and the performance was unknown, particularly in terms of the influence of recycling on algal productivity and the practical extent of nutrient recovery from biomass residuals. Through long-term laboratory and pilot research, the project resulted in the following: 1. Bench-scale pretreatment of algal biomass did not sufficiently increase methane yield or nutrient solubilization during anaerobic digestion to warrant incorporation of pretreatment into the pilot plant. The trial pretreatments were high-pressure orifice homogenization, sonication, and two types of heat treatment. 2. Solubilization of biomass particulate nutrients by lab anaerobic digesters ranged from 20% to nearly 60% for N and 40-65% for P. Subsequent aerobic degradation of the anaerobically digested biomass simulated raceways receiving whole digestate and resulted in an additional 20-55% N solubilization and an additional 20% P solubilization. 3. Comparisons of laboratory and pilot digesters showed that laboratory units were reasonable proxies for pilot scale. 4. Pilot-scale anaerobic digesters were designed, installed, and operated to digest algal biomass. Nutrient re-solubilization by the digesters was monitored and whole digestate was successfully used as a fertilizer in pilot algae raceways. 5. Unheated, unmixed digesters achieved greater methane yield and nutrient solubilization than heated, mixed digesters, presumably due to the longer solids residence times in unmixed digesters. The unmixed, unheated pilot digesters yielded 0.16 L CH4/g volatile solids (VS) introduced, with 0.15 g VS/L-d organic loading and 16 °C average temperature.
A conventional heated, mixed lab digester yielded 0.22 L CH4/g VS with 0.25 g VS/L-d and 30 °C. The highest yield (0.30 L CH4/g VS) was achieved by the unmixed lab digesters operated at a constant 20 °C. All digesters were operated with a 40-d hydraulic residence time. 6. In general, 50-75% of initial particulate N and P could be solubilized during anaerobic digestion and made available for subsequent rounds of algae cultivation. 7. Bench-scale experiments demonstrated the recovery from hydrothermal liquefaction (HTL) wastewater of carbon via anaerobic digestion and of nutrients to grow algae. To satisfy the nitrogen demand of algae cultivation, HTL wastewater would be diluted 400-fold, which was found to eliminate inhibition of algae growth by HTL wastewater. 8. Anaerobic digestion methane yield was lower for algal biomass containing coagulants such as would be used to aid harvesting or dewatering. Depending on doses, starch-based coagulant decreased yield by 10-14% and aluminum chlorohydrate decreased it by 14-26%. The lowest yield was 0.28 L CH4/g volatile solids introduced to the digesters. 9. Algae harvested from raceways operated on recycled water had methane yields 13% higher than algae from raceways operated on both recycled water and nutrients provided by algae digestate. The slightly lower yield was expected due to the presence of previously digested biomass from the digestate fertilizer. 10. Defined media was replenished with nutrients and recycled repeatedly in sequential batch growth of Chlorella sorokiniana (DOE 1412). This laboratory study tested for inhibition and accumulation of inhibiting compounds (allelopathic or auto-inhibitory substances), information that would help estimate the blowdown ratio needed for an integrated system. In laboratory experiments in which water was recycled a total of five times, each successive round of reuse resulted in an average 4±3% reduction in log-phase specific growth rates.
However, linear-phase growth inhibition was only observed in the final, fifth round of reuse. 11. No decline in productivity was detected after 15 rounds of water recycling with nutrients provided by whole digestate in lab cultivation. Lab tests allowed for steady light and temperature, increasing the ability to detect inhibition. 12. In initial pilot inhibition studies, wastewater growth media was reused once while productivity was monitored. Media reuse was accomplished with triplicate sets of 33-m2 raceways operated in series. First-round gross productivity (based on effluent biomass flow) averaged 23 g/m2-d annually while second-round gross productivity averaged 19 g/m2-d annually. In terms of net productivity (based on raceway effluent biomass minus influent biomass), first-round productivity averaged 15 g/m2-d and second-round productivity averaged 13 g/m2-d during June-September operation. The higher productivity in the first-round ponds was likely due to heterotrophic/mixotrophic growth on the wastewater organic matter. 13. In a culminating pilot experiment, coagulant was used to decrease the carry-over of unsettled algae into subsequent rounds of growth. Over nearly 8 months, 93% of the media (the equivalent of 14 rounds of water reuse) was recycled without significant productivity loss compared to controls. Ponds receiving both recycled water and nutrients had net productivities of 14-24 g/m2-d during fall and mid-summer, respectively. 14. Techno-economic analysis of the proposed facility found the minimum fuel selling price to range from $7.01/gallon gasoline equivalent (GGE) without revenue other than fuel to $3.85/GGE with revenue from wastewater treatment fees and LCFS and RIN (Low Carbon Fuel Standard and Renewable Identification Number) credits. 15. Life cycle assessment indicated GHG emissions of 40.7 g CO2/MJ fuel and a net energy ratio (energy required/energy produced) of 0.37.
NASA Astrophysics Data System (ADS)
Smith, J. P.; Owens, P. N.; Gaspar, L.; Lobb, D. A.; Petticrew, E. L.
2015-12-01
An understanding of sediment redistribution processes and the main sediment sources within a watershed is needed to support watershed management strategies. The fingerprinting technique is increasingly being recognized as a method for establishing the source of the sediment transported within watersheds. However, the different behaviour of the various fingerprinting properties has been recognized as a major limitation of the technique, and the uncertainty associated with tracer selection needs to be addressed. There are also questions about which modelling approach (frequentist or Bayesian) is best suited to unmixing complex environmental mixtures, such as river sediment. This study aims to compare and evaluate the differences between fingerprinting predictions provided by a Bayesian unmixing model (MixSIAR) using different groups of tracer properties for sediment source identification. We used fallout radionuclides (e.g. 137Cs) and geochemical elements (e.g. As) as conventional fingerprinting properties, and colour parameters as emerging properties, both alone and in combination. These fingerprinting properties are being used (e.g. Koiter et al., 2013; Barthod et al., 2015) to determine the proportional contributions of fine sediment in the South Tobacco Creek Watershed, an agricultural watershed located in Manitoba, Canada. We show that the unmixing model using a combination of fallout radionuclides and geochemical tracers gave similar results to the model based on colour parameters. Furthermore, we show that a model that combines all tracers (i.e. radionuclide/geochemical and colour) gave similar results, showing that sediment sources change from predominantly topsoil in the upper reaches of the watershed to channel bank and bedrock outcrop material in the lower reaches. Barthod LRM et al. (2015). Selecting color-based tracers and classifying sediment sources in the assessment of sediment dynamics using sediment source fingerprinting. J Environ Qual.
doi:10.2134/jeq2015.01.0043. Koiter AJ et al. (2013). Investigating the role of connectivity and scale in assessing the sources of sediment in an agricultural watershed in the Canadian prairies using sediment source fingerprinting. J Soils Sediments, 13, 1676-1691.
Component Analysis of Remanent Magnetization Curves: A Revisit with a New Model Distribution
NASA Astrophysics Data System (ADS)
Zhao, X.; Suganuma, Y.; Fujii, M.
2017-12-01
Geological samples often consist of several magnetic components that have distinct origins. As the magnetic components are often indicative of their underlying geological and environmental processes, it is desirable to identify individual components to extract the associated information. This component analysis can be achieved using the so-called unmixing method, which fits a mixture model of a chosen end-member distribution to the measured remanent magnetization curve. In earlier studies, the lognormal, skew generalized Gaussian and skewed Gaussian distributions have been used as end-member distributions, with fitting performed on the gradient of the remanent magnetization curve. However, gradient curves are sensitive to measurement noise, as differentiation of the measured curve amplifies noise, which can deteriorate the component analysis. Though either smoothing or filtering can be applied to reduce the noise before differentiation, their potential to bias the component analysis has rarely been addressed. In this study, we investigated a new model function that can be applied directly to remanent magnetization curves, thereby avoiding differentiation. The new model function provides a more flexible shape than the lognormal distribution, which is a merit for modeling the coercivity distribution of complex magnetic components. We applied the unmixing method to both model and measured data, and compared the results with those obtained using other model distributions to better understand their interchangeability, applicability and limitations. The analyses on model data suggest that unmixing methods are inherently sensitive to noise, especially when the number of components exceeds two. It is therefore recommended to verify the reliability of component analysis by running multiple analyses with synthetic noise. Marine sediments and seafloor rocks were analyzed with the new model distribution.
Given the same number of components, the new model distribution provides closer fits than the lognormal distribution, as evidenced by reduced residuals. Moreover, the new unmixing protocol is automated, freeing users from the labor of providing initial parameter guesses, which also helps reduce the subjectivity of component analysis.
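The classic gradient-curve approach this abstract improves on can be sketched as a lognormal mixture fit to a synthetic coercivity distribution (all component parameters are invented for illustration; the paper's new model function is not reproduced here):

```python
import numpy as np
from scipy.optimize import curve_fit

def lognormal(B, amp, mu, sigma):
    """Lognormal coercivity component on a linear field axis."""
    return amp / (B * sigma * np.sqrt(2 * np.pi)) * np.exp(
        -((np.log(B) - mu) ** 2) / (2 * sigma ** 2))

def two_component(B, a1, m1, s1, a2, m2, s2):
    return lognormal(B, a1, m1, s1) + lognormal(B, a2, m2, s2)

B = np.logspace(0, 3, 200)                       # field steps, 1-1000 mT
true = (1.0, np.log(30.0), 0.4, 0.5, np.log(300.0), 0.3)
noisy = two_component(B, *true) + 2e-4 * np.random.default_rng(3).normal(size=B.size)

p0 = (0.8, np.log(20.0), 0.5, 0.8, np.log(200.0), 0.5)
params, _ = curve_fit(two_component, B, noisy, p0=p0, maxfev=10000)
print(np.exp(params[1]), np.exp(params[4]))      # recovered median coercivities (mT)
```

Repeating such fits with multiple realizations of synthetic noise, as the abstract recommends, is a straightforward way to probe how robust the recovered components are.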
Retrieving the hydrous minerals on Mars by sparse unmixing and the Hapke model using MRO/CRISM data
NASA Astrophysics Data System (ADS)
Lin, Honglei; Zhang, Xia
2017-05-01
The hydrous minerals on Mars preserve records of potential past aqueous activity. Quantitative information regarding mineralogical composition would enable a better understanding of the formation processes of these hydrous minerals, and provide unique insights into ancient habitable environments and the geological evolution of Mars. The Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) has the advantage of both a high spatial and spectral resolution, which makes it suitable for the quantitative analysis of minerals on Mars. However, few studies have attempted to quantitatively retrieve the mineralogical composition of hydrous minerals on Mars using visible-infrared (VISIR) hyperspectral data due to their distribution characteristics (relatively low concentrations, located primarily in Noachian terrain, and unclear or unknown background minerals) and limitations of the spectral unmixing algorithms. In this study, we developed a modified sparse unmixing (MSU) method, combining the Hapke model with sparse unmixing. The MSU method considers the nonlinear mixed effects of minerals and avoids the difficulty of determining the spectra and number of endmembers from the image. The proposed method was tested successfully using laboratory mixture spectra and an Airborne Visible Infrared Imaging Spectrometer (AVIRIS) image of the Cuprite site (Nevada, USA). Then it was applied to CRISM hyperspectral images over Gale crater. Areas of hydrous mineral distribution were first identified by spectral features of water and hydroxyl absorption. The MSU method was performed on these areas, and the abundances were retrieved. The results indicated that the hydrous minerals consisted mostly of hydrous silicates, with abundances of up to 35%, as well as hydrous sulfates, with abundances ≤10%. Several main subclasses of hydrous minerals (e.g., Fe/Mg phyllosilicate, prehnite, and kieserite) were retrieved. 
Among these, Fe/Mg-phyllosilicate was the most abundant, with abundances ranging up to almost 30%, followed by prehnite and kieserite, with abundances lower than 15%. Our results are consistent with related research and in situ analyses of data from the rover Curiosity; thus, our method has the potential to be widely used for quantitative mineralogical mapping at the global scale of the surface of Mars.
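The sparse-unmixing idea, selecting a few active endmembers out of a larger spectral library, can be sketched with a non-negative Lasso (a simplified stand-in for the paper's MSU algorithm; the Gaussian-shaped library spectra are synthetic, and the Hapke nonlinearity is omitted):

```python
import numpy as np
from sklearn.linear_model import Lasso

bands, n_lib = 60, 12
b = np.arange(bands, dtype=float)
centers = np.linspace(5, 55, n_lib)
# Synthetic library: Gaussian-shaped candidate endmember spectra
library = np.exp(-0.5 * ((b[:, None] - centers[None, :]) / 3.0) ** 2)

true_abund = np.zeros(n_lib)
true_abund[[3, 8]] = [0.6, 0.4]                  # only two members are present
pixel = library @ true_abund + 0.002 * np.random.default_rng(4).normal(size=bands)

# A non-negative Lasso promotes a sparse, physically meaningful abundance vector
model = Lasso(alpha=1e-3, positive=True, fit_intercept=False, max_iter=100000)
model.fit(library, pixel)
print(np.flatnonzero(model.coef_ > 0.05))        # detected library members
```

The L1 penalty avoids having to fix the number and identity of endmembers per pixel in advance, which is the practical difficulty the MSU method addresses for CRISM scenes.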
NASA Astrophysics Data System (ADS)
Griesbach, J.; Westphal, J. J.; Roscoe, C.; Hawes, D. R.; Carrico, J. P.
2013-09-01
The Proximity Operations Nano-Satellite Flight Demonstration (PONSFD) program is to demonstrate rendezvous proximity operations (RPO), formation flying, and docking with a pair of 3U CubeSats. The program is sponsored by NASA Ames via the Office of the Chief Technologist (OCT) in support of its Small Spacecraft Technology Program (SSTP). The goal of the mission is to demonstrate complex RPO and docking operations with a pair of low-cost 3U CubeSat satellites using passive navigation sensors. The program encompasses the entire system evolution including system design, acquisition, satellite construction, launch, mission operations, and final disposal. The satellite is scheduled for launch in Fall 2015 with a 1-year mission lifetime. This paper provides a brief mission overview but will then focus on the current design and driving trade study results for the RPO mission specific processor and relevant ground software. The current design involves multiple on-board processors, each specifically tasked with providing mission critical capabilities. These capabilities range from attitude determination and control to image processing. The RPO system processor is responsible for absolute and relative navigation, maneuver planning, attitude commanding, and abort monitoring for mission safety. A low power processor running a Linux operating system has been selected for implementation. Navigation is one of the RPO processor's key tasks. This entails processing data obtained from the on-board GPS unit as well as the on-board imaging sensors. To do this, Kalman filters will be hosted on the processor to ingest and process measurements for maintenance of position and velocity estimates with associated uncertainties. While each satellite carries a GPS unit, it will be used sparingly to conserve power. As such, absolute navigation will mainly consist of propagating past known states, and relative navigation will be considered to be of greater importance.
For relative observations, each spacecraft hosts 3 electro-optical sensors dedicated to imaging the companion satellite. The image processor will analyze the images to obtain estimates for range, bearing, and pose, with associated rates and uncertainties. These observations will be fed to the RPO processor's relative Kalman filter to perform relative navigation updates. This paper includes estimates for expected navigation accuracies for both absolute and relative position and velocity. Another key task for the RPO processor is maneuver planning. This includes automation to plan maneuvers to achieve a desired formation configuration or trajectory (including docking), as well as automation to safely react to potentially dangerous situations. This will allow each spacecraft to autonomously plan fuel-efficient maneuvers to achieve a desired trajectory as well as compute adjustment maneuvers to correct for thrusting errors. This paper discusses results from a trade study that has been conducted to examine maneuver targeting algorithms required on-board the spacecraft. Ground software will also work in conjunction with the on-board software to validate and approve maneuvers as necessary.
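The relative-navigation filtering described above can be illustrated with a toy example. The sketch below is a minimal one-dimensional constant-velocity Kalman filter tracking relative range from range-only measurements; the state model, noise values, and measurements are invented for illustration and are not the PONSFD flight software.

```python
import numpy as np

# Illustrative 1D constant-velocity Kalman filter for relative range.
# All numeric values are hypothetical, not PONSFD flight parameters.
def kalman_step(x, P, z, dt, q=1e-3, r=0.5):
    """One predict/update cycle: state x = [range, range_rate]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition
    H = np.array([[1.0, 0.0]])                 # range-only observation
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])        # process noise
    R = np.array([[r]])                        # measurement noise

    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with range measurement z
    y = z - (H @ x)                            # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

x = np.array([100.0, 0.0])     # initial guess: 100 m range, zero rate
P = np.eye(2) * 10.0
for z in [99.0, 98.1, 97.2]:   # simulated 1 Hz range measurements
    x, P = kalman_step(x, P, np.array([z]), dt=1.0)
```

After a few closing-range measurements the filter infers a negative range rate purely from the cross-covariance built up during prediction.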
Fuel Cell/Reformers Technology Development
NASA Technical Reports Server (NTRS)
2004-01-01
NASA Glenn Research Center is interested in developing solid oxide fuel cells (SOFCs) for use in aerospace applications. A solid oxide fuel cell requires a hydrogen-rich feed stream, produced by converting commercial aviation jet fuel in a fuel processing step. The grantee's primary research activities center on designing and constructing a test facility for evaluating injector concepts to provide optimum feeds to the fuel processor; collecting and analyzing literature information on fuel processing and desulfurization technologies; establishing industry and academic contacts in related areas; and providing technical support to in-house SOFC-based system studies. Fuel processing is a chemical reaction process that requires efficient delivery of reactants to reactor beds for optimum performance, i.e., high conversion efficiency, maximum hydrogen production, and reliable continuous operation. Feed delivery and vaporization quality can be improved by applying NASA's expertise in combustor injector design. A 10 kWe injector rig has been designed, procured, and constructed to provide a tool employing laser diagnostic capability to evaluate various injector concepts for fuel processing reactor feed delivery applications. This injector rig facility is now undergoing mechanical and system check-out, with actual operation anticipated in July 2004. Multiple injector concepts, including impinging jet, venturi mixing, and discrete jet, will be tested and evaluated with an actual fuel mixture compatible with reforming catalyst requirements. Research activities from September 2002 to the closing of this collaborative agreement have been in the following areas: compiling literature information on jet fuel reforming; conducting autothermal reforming catalyst screening; establishing contacts with other government agencies for collaborative research in jet fuel reforming and desulfurization; and providing the process design basis for the build-up of the injector rig facility and individual injector design.
Unmixing techniques for better segmentation of urban zones, roads, and open pit mines
NASA Astrophysics Data System (ADS)
Nikolov, Hristo; Borisova, Denitsa; Petkov, Doyno
2010-10-01
In this paper the linear unmixing method is applied to the classification of manmade objects, namely urbanized zones, roads, etc. The idea is to exploit to a larger extent the possibilities offered by multispectral imagers having mid spatial resolution, in this case the TM/ETM+ instruments. In this research, unmixing is used to find consistent regression dependencies between multispectral data and data gathered by in-situ and airborne sensors. Correct identification of the mixed pixels is a key element, since the subsequent segmentation forming the shape of the artificial feature is then determined much more reliably. This especially holds true for objects with a relatively narrow structure, for example two-lane roads, for which the spatial resolution is larger than the object itself. We have combined ground spectrometry of asphalt, Landsat images of the region of interest, and in-situ measurements of asphalt in order to delineate the narrow roads. The reflectance of paving stones made from granite is highest compared to other ones, which holds for open pits and stone pits. The potential for mapping is not limited to the mid-spatial-resolution Landsat data, but also applies if the data have higher spatial resolution (as fine as 0.5 m). In this research the spectral and directional reflection properties of asphalt and concrete surfaces, compared to those of paving stones made from different rocks, have been measured. The in-situ measurements, which play a key role, were obtained using the Thematically Oriented Multichannel Spectrometer (TOMS), designed in STIL-BAS.
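As a minimal illustration of the linear mixing model underlying this kind of analysis, the sketch below builds a mixed pixel from assumed endmember spectra (the reflectance values are invented) and recovers the abundances by unconstrained least squares.

```python
import numpy as np

# Linear mixing model: a mixed pixel equals E @ a, where the columns of E
# are endmember spectra (e.g. asphalt, vegetation, soil) and a holds their
# abundances. All reflectance values below are made up for illustration.
E = np.array([[0.10, 0.45, 0.30],   # band 1 reflectance
              [0.12, 0.50, 0.35],   # band 2
              [0.15, 0.20, 0.40],   # band 3
              [0.18, 0.60, 0.45]])  # band 4
a_true = np.array([0.6, 0.3, 0.1])  # 60% asphalt, 30% vegetation, 10% soil
pixel = E @ a_true                  # synthesize the mixed pixel

# Unconstrained least-squares inversion recovers the abundances exactly
# in this noiseless case.
a_hat, *_ = np.linalg.lstsq(E, pixel, rcond=None)
```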
Sediment unmixing using detrital geochronology
Sharman, Glenn R.; Johnstone, Samuel
2017-01-01
Sediment mixing within sediment routing systems can exert a strong influence on the preservation of provenance signals that yield insight into the influence of environmental forcings (e.g., tectonism, climate) on the earth’s surface. Here we discuss two approaches to unmixing detrital geochronologic data in an effort to characterize complex changes in the sedimentary record. First we summarize ‘top-down’ mixing, which has been successfully employed in the past to characterize the different fractions of prescribed source distributions (‘parents’) that characterize a derived sample or set of samples (‘daughters’). Second we propose the use of ‘bottom-up’ methods, previously used primarily for grain size distributions, to model parent distributions and the abundances of these parents within a set of daughters. We demonstrate the utility of both top-down and bottom-up approaches to unmixing detrital geochronologic data within a well-constrained sediment routing system in central California. Use of a variety of goodness-of-fit metrics in top-down modeling reveals the importance of considering the range of allowable mixtures over any single best-fit mixture calculation. Bottom-up modeling of 12 daughter samples from beaches and submarine canyons yields modeled parent distributions that are remarkably similar to those expected from the geologic context of the sediment-routing system. In general, mixture modeling has potential to supplement more widely applied approaches in comparing detrital geochronologic data by casting differences between samples as differing proportions of geologically meaningful end-member provenance categories.
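The 'top-down' approach described above can be sketched numerically. In the example below, synthetic Gaussian age distributions stand in for real detrital samples, and a grid search finds the mixing fraction of two parents that minimizes a sum-of-squares misfit to a daughter distribution; the metric and data are illustrative, not the authors' exact procedure.

```python
import numpy as np

# 'Top-down' mixing sketch: find the fraction f of parent A (vs parent B)
# whose mixture best fits a daughter sample. Distributions are binned
# probability densities on an age grid; all data here are synthetic.
ages = np.linspace(0, 300, 301)                    # age grid, Ma

def gaussian_pdf(mu, sigma):
    p = np.exp(-0.5 * ((ages - mu) / sigma) ** 2)
    return p / p.sum()                             # normalize to unit mass

parent_a = gaussian_pdf(100, 15)   # hypothetical arc-derived age peak
parent_b = gaussian_pdf(220, 20)   # hypothetical basement-derived peak
daughter = 0.7 * parent_a + 0.3 * parent_b         # "observed" mixture

# Grid search over mixing fractions, scoring each candidate by the
# sum-of-squares misfit between model and daughter distributions.
fractions = np.linspace(0, 1, 101)
misfit = [np.sum((f * parent_a + (1 - f) * parent_b - daughter) ** 2)
          for f in fractions]
best_f = fractions[int(np.argmin(misfit))]
```

Inspecting the full misfit curve, rather than only `best_f`, is what reveals the "range of allowable mixtures" that the abstract emphasizes.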
Sediment unmixing using detrital geochronology
NASA Astrophysics Data System (ADS)
Sharman, Glenn R.; Johnstone, Samuel A.
2017-11-01
Sediment mixing within sediment routing systems can exert a strong influence on the preservation of provenance signals that yield insight into the effect of environmental forcing (e.g., tectonism, climate) on the Earth's surface. Here, we discuss two approaches to unmixing detrital geochronologic data in an effort to characterize complex changes in the sedimentary record. First, we summarize 'top-down' mixing, which has been successfully employed in the past to characterize the different fractions of prescribed source distributions ('parents') that characterize a derived sample or set of samples ('daughters'). Second, we propose the use of 'bottom-up' methods, previously used primarily for grain size distributions, to model parent distributions and the abundances of these parents within a set of daughters. We demonstrate the utility of both top-down and bottom-up approaches to unmixing detrital geochronologic data within a well-constrained sediment routing system in central California. Use of a variety of goodness-of-fit metrics in top-down modeling reveals the importance of considering the range of allowable mixtures over any single best-fit mixture calculation. Bottom-up modeling of 12 daughter samples from beaches and submarine canyons yields modeled parent distributions that are remarkably similar to those expected from the geologic context of the sediment-routing system. In general, mixture modeling has the potential to supplement more widely applied approaches in comparing detrital geochronologic data by casting differences between samples as differing proportions of geologically meaningful end-member provenance categories.
(LMRG): Microscope Resolution, Objective Quality, Spectral Accuracy and Spectral Un-mixing
Bayles, Carol J.; Cole, Richard W.; Eason, Brady; Girard, Anne-Marie; Jinadasa, Tushare; Martin, Karen; McNamara, George; Opansky, Cynthia; Schulz, Katherine; Thibault, Marc; Brown, Claire M.
2012-01-01
The second study by the LMRG focuses on measuring confocal laser scanning microscope (CLSM) resolution, objective lens quality, spectral imaging accuracy and spectral un-mixing. Affordable test samples for each aspect of the study were designed, prepared and sent to 116 labs from 23 countries across the globe. Detailed protocols were designed for the three tests and customized for most of the major confocal instruments being used by the study participants. One protocol developed for measuring resolution and objective quality was recently published in Nature Protocols (Cole, R. W., T. Jinadasa, et al. (2011). Nature Protocols 6(12): 1929–1941). The first study involved 3D imaging of sub-resolution fluorescent microspheres to determine the microscope point spread function. Results of the resolution studies as well as point spread function quality (i.e. objective lens quality) from 140 different objective lenses will be presented. The second study of spectral accuracy looked at the reflection of the laser excitation lines into the spectral detection in order to determine the accuracy of these systems to report back the accurate laser emission wavelengths. Results will be presented from 42 different spectral confocal systems. Finally, samples with double orange beads (orange core and orange coating) were imaged spectrally and the imaging software was used to un-mix fluorescence signals from the two orange dyes. Results from 26 different confocal systems will be summarized. Time will be left to discuss possibilities for the next LMRG study.
NASA Astrophysics Data System (ADS)
Benhalouche, Fatima Zohra; Karoui, Moussa Sofiane; Deville, Yannick; Ouamri, Abdelaziz
2015-10-01
In this paper, a new Spectral-Unmixing-based approach, using Nonnegative Matrix Factorization (NMF), is proposed to locally multi-sharpen hyperspectral data by integrating a Digital Surface Model (DSM) obtained from LIDAR data. In this new approach, the nature of the local mixing model is detected by using the local variance of the object elevations. The hyper/multispectral images are explored using small zones. In each zone, the variance of the object elevations is calculated from the DSM data in this zone. This variance is compared to a threshold value and the adequate linear/linear-quadratic spectral unmixing technique is used in the considered zone to independently unmix hyperspectral and multispectral data, using an adequate linear/linear-quadratic NMF-based approach. The obtained spectral and spatial information thus respectively extracted from the hyper/multispectral images are then recombined in the considered zone, according to the selected mixing model. Experiments based on synthetic hyper/multispectral data are carried out to evaluate the performance of the proposed multi-sharpening approach and literature linear/linear-quadratic approaches used on the whole hyper/multispectral data. In these experiments, real DSM data are used to generate synthetic data containing linear and linear-quadratic mixed pixel zones. The DSM data are also used for locally detecting the nature of the mixing model in the proposed approach. Globally, the proposed approach yields good spatial and spectral fidelities for the multi-sharpened data and significantly outperforms the used literature methods.
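As background on the NMF machinery that such approaches build on, the sketch below runs the classic Lee-Seung multiplicative updates on random nonnegative data. It illustrates plain linear NMF only, not the authors' local linear/linear-quadratic method, and all values are synthetic.

```python
import numpy as np

# Minimal Lee-Seung multiplicative-update NMF, X ~= W @ H. This is a
# generic sketch of the NMF core of spectral unmixing, with synthetic
# random data, not the paper's local linear/linear-quadratic algorithm.
rng = np.random.default_rng(0)
X = rng.random((50, 40))          # 50 bands x 40 pixels, nonnegative
k = 3                             # assumed number of endmembers

W = rng.random((50, k)) + 0.1     # endmember spectra (initial guess)
H = rng.random((k, 40)) + 0.1     # abundances (initial guess)
eps = 1e-9                        # guards against division by zero

for _ in range(200):
    H *= (W.T @ X) / (W.T @ W @ H + eps)   # update abundances
    W *= (X @ H.T) / (W @ H @ H.T + eps)   # update endmember spectra

err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

The multiplicative form keeps W and H nonnegative at every step, which is exactly the physical constraint that makes NMF natural for abundances and spectra.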
Quantitative detection of settled coal dust over green canopy
NASA Astrophysics Data System (ADS)
Brook, Anna; Sahar, Nir
2017-04-01
The main task of environmental and geoscience applications is the efficient and accurate quantitative classification of earth surfaces and spatial phenomena. In the past decade, there has been significant interest in employing spectral unmixing in order to retrieve accurate quantitative information latent in in situ data. Recently, ground-truth and laboratory-measured spectral signatures promoted by advanced algorithms have been proposed as a new path toward solving the unmixing problem in a semi-supervised fashion. This study presents a practical implementation of field spectroscopy as a quantitative tool to detect settled coal dust over green canopy in a free/open environment. Coal dust is a fine powdered form of coal, which is created by the crushing, grinding, and pulverizing of coal. Owing to the inelastic nature of coal, coal dust can be created during transportation or by mechanically handling coal. Coal dust, categorized at silt-clay particle size, is of particular concern due to heavy metals (lead, mercury, nickel, tin, cadmium, antimony, arsenic, and isotopes of thorium and strontium), which are toxic even at low concentrations. This hazard poses a risk to both the environment and public health. It has been identified by medical scientists around the world as causing a range of diseases and health problems, mainly heart and respiratory diseases such as asthma and lung cancer. This is because the fine, invisible coal dust particles (less than 2.5 microns) lodge in the lungs for long periods and are not naturally expelled, so long-term exposure increases the risk of health problems. Numerous studies have reported that the data needed to study the geographic distribution of very fine coal dust (smaller than PM 2.5) and the related health impacts from coal exports are not being collected. Settled dust load in an indoor environment can be spectrally assessed using reflectance spectroscopy (Chudnovsky and Ben-Dor, 2009).
Small amounts of particulate pollution that may carry a signature of a forthcoming environmental hazard are of key interest when considering the effects of pollution. According to the most basic distribution dynamics, dust consists of suspended particulate matter in a fine state of subdivision that is raised and carried by wind. In this context, it is increasingly important first to understand the distribution dynamics of pollutants, and subsequently to develop dedicated tools and measures to control and monitor pollutants in the free environment. The earliest effect of settled polluted dust particles is not always reflected in poor conditions of vegetation or soils, or in any visible damage. In most cases, there is a quite long accumulation process that graduates from a polluted condition to a long-term environmental and health-related hazard. Although experiments conducted with pollutant analog powders under controlled conditions have tended to confirm the findings from field studies (Brook, 2014; Brook and Ben-Dor, 2016; Brook, 2016), a major criticism of all these experiments is their short duration. The resulting conclusion is that it is difficult, if not impossible, to determine the implications of long-term exposure to realistic concentrations of pollutants from such short-term studies. In general, the task of unmixing is to decompose the reflectance spectrum into a set of endmembers or principal combined spectra and their corresponding abundances (Bioucas-Dias et al., 2012). This study suggests that the sensitivity of sparse unmixing techniques provides an ideal approach to extract and identify coal dust settled over green vegetation canopy using in situ spectral data collected by a portable spectrometer. The optimal NMF algorithms, such as ALS and LPG, are assumed to be the simplest methods that achieve the minimum error. The suggested practical approach includes the following stages: 1. In situ spectral measurements, 2.
Near-real-time spectral data analysis, 3. Estimated concentration of coal dust reported as mg/sq m. Stage 2 is completed by calculating: 1. the unmixing between the green canopy and the settled dust, extracting only the coal dust fraction; 2. the conversion of the spectral feature of coal dust to concentration via a PLSR spectral model. The spectral model was a PLSR model trained and validated in the laboratory using spectra across the MIR (FTIR reflectance spectra) and NIR regions together with XRD analysis. The obtained RMSE was satisfactory for both spectral regions. Thus, it was concluded that field spectroscopy can be used for this purpose, and it can provide fully quantitative measures of settled coal dust. Nowadays this approach (both the spectrometer and the algorithm) has been accepted as a practical operational tool for environmental monitoring near the Orot Rabin power station in Hadera and will be used by the Sharon-Carmel Districts Municipal Association for Environmental Protection, Israel, as a regulatory tool. In summary, this work shows that coal dust can be assessed using in situ spectroscopy, making it a potentially powerful tool for environmental studies. References Chudnovsky, A., & Ben-Dor, E. (2009). Reflectance spectroscopy as a tool for settled dust monitoring in office environment. International Journal of Environment and Waste Management, 4(1), 32-49. Brook, A. (2014). Quantitative Detection of Settled Dust over Green Canopy using Sparse Unmixing of Airborne Hyperspectral Data. IEEE-Whispers 6th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing, 2014, Switzerland, 4-8. Brook, A. and Ben-Dor, E. (2016). Quantitative detection of settled dust over green canopy using sparse unmixing of airborne hyperspectral data. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 9(2), pp. 884-897. Brook, A. (2016). Quantitative Detection and Long-Term Monitoring of Settled Dust Using Semisupervised Learning for Spectral Data.
Water, Air, & Soil Pollution, 227(3), pp. 1-9. Bioucas-Dias, J.M., Plaza, A., Dobigeon, N., Parente, M., Du, Q., Gader, P. and Chanussot, J. (2012). Hyperspectral unmixing overview: Geometrical, statistical, and sparse regression-based approaches. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 5(2), pp. 354-379. Keshava, N. and Mustard, J. (2002). Spectral unmixing. IEEE Signal Processing Magazine, 19(1), 44-57.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daniel, G.; Rudisill, T.; Almond, P.
The Idaho National Laboratory (INL) is actively engaged in the development of electrochemical processing technology for the treatment of fast reactor fuels using irradiated fuel from the Experimental Breeder Reactor-II (EBR-II) as the primary test material. The research and development (R&D) activities generate a low enriched uranium (LEU) metal product from the electrorefining of the EBR-II fuel and the subsequent consolidation and removal of chloride salts by the cathode processor. The LEU metal ingots from past R&D activities are currently stored at INL awaiting disposition. One potential disposition pathway is the shipment of the ingots to the Savannah River Site (SRS) for dissolution in H-Canyon. Carbon steel cans containing the LEU metal would be loaded into reusable charging bundles in the H-Canyon Crane Maintenance Area and charged to the 6.4D or 6.1D dissolver. The LEU dissolution would be accomplished as the final charge in a dissolver batch (following the dissolution of multiple charges of spent nuclear fuel (SNF)). The solution would then be purified and the 235U enrichment downblended to allow use of the U in commercial reactor fuel. To support this potential disposition path, the Savannah River National Laboratory (SRNL) developed a dissolution flowsheet for the LEU using samples of the material received from INL.
A demonstration of pig lard as an industrial boiler fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, B.G.; Badger, M.; Larsen, J.
Hatfield Quality Meats is a family-owned regional meat processor and vendor with multiple facilities in Pennsylvania. The main plant and corporate offices are located in Hatfield, Pennsylvania, where they process 7,000 hogs per day. Two of Hatfield's by-products are lard and choice white grease (CWG), both of which are produced in large quantities. The lard, which is stored warm and liquid, is sold by tanker truck to veal producers, by 55-gallon drums to commercial bakeries, in 5-gallon pails to a variety of restaurants, and periodically in 1-pound tins to grocery stores. The CWG, which is a rendered product, is also sold to veal producers. A decrease in sales could leave the company with a large excess of these products and difficult disposal problems. Hatfield Quality Meats, Lehigh University, and Penn State's Energy Institute evaluated liquid lard as an industrial boiler fuel and obtained the necessary handleability and combustion data to allow for its use as a supplemental fuel in Hatfield's process; lard samples were burned in Penn State's research boiler. The boiler, which has a nominal firing rate of two million Btu/h, is a 150 psig working pressure, A-frame watertube boiler. In addition to the lard samples, No. 6 fuel oil was fired for baseline comparison. This paper discusses the comparison of lard and No. 6 fuel oil as boiler fuels. Issues discussed include fuel characterization, material handling, combustion performance, flame character and stability, and emissions.
Scalar entrainment in the mixing layer
NASA Technical Reports Server (NTRS)
Sandham, N. D.; Mungal, M. G.; Broadwell, J. E.; Reynolds, W. C.
1988-01-01
New definitions of entrainment and mixing based on the passive scalar field in the plane mixing layer are proposed. The definitions distinguish clearly between three fluid states: (1) unmixed fluid, (2) fluid engulfed in the mixing layer, trapped between two scalar contours, and (3) mixed fluid. The difference between (2) and (3) is the amount of fluid which has been engulfed during the pairing process, but has not yet mixed. Trends are identified from direct numerical simulations and extensions to high Reynolds number mixing layers are made in terms of the Broadwell-Breidenthal mixing model. In the limit of high Peclet number (Pe = ReSc) it is speculated that engulfed fluid rises in steps associated with pairings, introducing unmixed fluid into the large scale structures, where it is eventually mixed at the Kolmogorov scale. From this viewpoint, pairing is a prerequisite for mixing in the turbulent plane mixing layer.
NASA Technical Reports Server (NTRS)
Szatkowski, G. P.
1983-01-01
A computer simulation system has been developed for the Space Shuttle's advanced Centaur liquid fuel booster rocket, in order to conduct systems safety verification and flight operations training. This simulation utility is designed to analyze functional system behavior by integrating control avionics with mechanical and fluid elements, and is able to emulate any system operation, from simple relay logic to complex VLSI components, with wire-by-wire detail. A novel graphics data entry system offers a pseudo-wire wrap data base that can be easily updated. Visual subsystem operations can be selected and displayed in color on a six-monitor graphics processor. System timing and fault verification analyses are conducted by injecting component fault modes and min/max timing delays, and then observing system operation through a red line monitor.
Continental Spatio-Temporal Data Analysis with Linear Spectral Mixture Model Using FOSS
NASA Technical Reports Server (NTRS)
Kumar, Uttam; Nemani, Ramakrishna; Ganguly, Sangram; Milesi, Cristina; Raja, Kumar; Wang, Weile; Votava, Petr; Michaelis, Andrew
2015-01-01
This work demonstrates the development and implementation of a Fully Constrained Least Squares (FCLS) unmixing model developed in the C++ programming language with the OpenCV package and boost C++ libraries in the NASA Earth Exchange (NEX). Visualization of the results is supported by GRASS GIS, and statistical analysis is carried out in R in a Linux system environment. FCLS was first tested on computer-simulated data with Gaussian noise of various signal-to-noise ratios, and on Landsat data of an agricultural scenario and an urban environment, using a set of global endmembers of substrate (soils, sediments, rocks, and non-photosynthetic vegetation), vegetation that includes green photosynthetic plants, and dark objects that encompass absorptive substrate materials, clear water, deep shadows, etc. For the agricultural scenario, a spectrally diverse collection of 11 scenes of Level 1 terrain-corrected, cloud-free Landsat-5 TM data of Fresno, California, USA were unmixed and the results were validated with the corresponding ground data. To study an urbanized landscape, clear-sky Landsat-5 TM data were unmixed and validated with coincident WorldView-2 abundance maps (of 2 m spatial resolution) for an area of San Francisco, California, USA. The results were evaluated using descriptive statistics, correlation coefficient, RMSE, probability of success, boxplots and the bivariate distribution function. Finally, FCLS was used for sub-pixel land cover analysis of the monthly WELD (Web-Enabled Landsat Data) repository from 2008 to 2011 for North America. The abundance maps, in conjunction with DMSP-OLS nighttime lights data, were used to extract urban land cover features and analyze their spatio-temporal growth.
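The sum-to-one and nonnegativity constraints that distinguish FCLS from ordinary least squares can be sketched compactly. The example below uses the common trick of appending a heavily weighted row of ones to the endmember matrix and solving with nonnegative least squares (here via SciPy, rather than the C++/OpenCV implementation described above); the endmember values are invented.

```python
import numpy as np
from scipy.optimize import nnls

# FCLS sketch: append a weighted row of ones to the endmember matrix so
# the nonnegative least-squares solution approximately sums to one.
# Endmember spectra below are made up for illustration.
def fcls(E, pixel, delta=1e3):
    """E: bands x endmembers, pixel: bands. Abundances >= 0, ~sum to 1."""
    E_aug = np.vstack([delta * np.ones(E.shape[1]), E])   # sum-to-one row
    p_aug = np.concatenate([[delta], pixel])
    a, _ = nnls(E_aug, p_aug)                             # enforces a >= 0
    return a

E = np.array([[0.05, 0.40, 0.25],   # substrate, vegetation, dark
              [0.10, 0.55, 0.20],
              [0.20, 0.30, 0.10],
              [0.30, 0.65, 0.05]])
a_true = np.array([0.5, 0.4, 0.1])  # fractional abundances
a_hat = fcls(E, E @ a_true)         # recover them from the mixed pixel
```

A large `delta` makes the sum-to-one row dominate the residual, so the constraint is satisfied tightly without an explicit constrained solver.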
Rupert, Michael G.; Plummer, Niel
2009-01-01
This raster data set delineates the predicted probability of unmixed young groundwater (defined using chlorofluorocarbon-11 concentrations and tritium activities) in groundwater in the Eagle River watershed valley-fill aquifer, Eagle County, North-Central Colorado, 2006-2007. This data set was developed by a cooperative project between the U.S. Geological Survey, Eagle County, the Eagle River Water and Sanitation District, the Town of Eagle, the Town of Gypsum, and the Upper Eagle Regional Water Authority. This project was designed to evaluate potential land-development effects on groundwater and surface-water resources so that informed land-use and water management decisions can be made. This groundwater probability map and its associated probability maps were developed as follows: (1) A point data set of wells with groundwater quality and groundwater age data was overlaid with thematic layers of anthropogenic (related to human activities) and hydrogeologic data by using a geographic information system to assign each well values for depth to groundwater, distance to major streams and canals, distance to gypsum beds, precipitation, soils, and well depth. These data then were downloaded to a statistical software package for analysis by logistic regression. (2) Statistical models predicting the probability of elevated nitrate concentrations, the probability of unmixed young water (using chlorofluorocarbon-11 concentrations and tritium activities), and the probability of elevated volatile organic compound concentrations were developed using logistic regression techniques. (3) The statistical models were entered into a GIS and the probability map was constructed.
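Step 2 of the workflow above, logistic regression relating well attributes to a binary water-quality outcome, can be sketched as follows. The explanatory variables, coefficients, and data are entirely synthetic, and the fit uses plain Newton-Raphson (IRLS) rather than the statistical package used in the study.

```python
import numpy as np

# Logistic-regression sketch for probability mapping: fit a model relating
# well attributes to a binary "unmixed young water" flag, then predict a
# probability for a grid cell. All data and coefficients are synthetic.
rng = np.random.default_rng(42)
n = 200
depth_to_gw = rng.uniform(1, 50, n)      # depth to groundwater, m
well_depth = rng.uniform(5, 150, n)      # well depth, m
X = np.column_stack([np.ones(n), depth_to_gw, well_depth])

# Synthetic truth: young water is more likely where both depths are small
logit = 2.0 - 0.05 * depth_to_gw - 0.02 * well_depth
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

beta = np.zeros(3)
for _ in range(25):                      # Newton-Raphson (IRLS) iterations
    p = 1 / (1 + np.exp(-X @ beta))      # current predicted probabilities
    w = p * (1 - p)                      # IRLS weights
    grad = X.T @ (y - p)
    hess = X.T @ (X * w[:, None])
    beta += np.linalg.solve(hess, grad)

# Predicted probability for one hypothetical grid cell
prob = 1 / (1 + np.exp(-np.array([1.0, 10.0, 30.0]) @ beta))
```

In the study's GIS workflow, the same linear predictor would be evaluated per raster cell from the thematic layers to produce the probability map.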
Quantifying the Components of Impervious Surfaces
Tilley, Janet S.; Slonecker, E. Terrence
2006-01-01
This study's objectives were to (1) determine the relative contribution of individual impervious surface components by collecting digital information from high-resolution imagery, 1-meter or better; and (2) determine which of the more advanced techniques, such as spectral unmixing or the application of coefficients to land use or land cover data, was the most suitable method that could be used by State and local governments as well as Federal agencies to efficiently measure the imperviousness of any given watershed or area of interest. The components of impervious surfaces, combined from all the watersheds and time periods of objective one, were the following: buildings 29.2 percent, roads 28.3 percent, parking lots 24.6 percent; with the remaining three, driveways, sidewalks, and other, totaling 14 percent, where 'other' comprises any features not contained within the first five. Results from objective two indicate that spectral unmixing techniques will ultimately be the most efficient method of determining imperviousness, but they are not yet accurate enough: it is critical to achieve accuracy better than 10 percent of the truth, which the method did not consistently accomplish in this study. Of the three coefficient-application techniques tested, applying coefficients to land use data was not practical, while merging the other two methods, coefficients applied to land cover data, could yield end results within 5 percent or better of the truth. Until the spectral unmixing technique has been further refined, land cover coefficients should be used; they offer quick results, though not current ones, as they were developed from the 1992 National Land Characteristics Data.
[Orthogonal Vector Projection Algorithm for Spectral Unmixing].
Song, Mei-ping; Xu, Xing-wei; Chang, Chein-I; An, Ju-bai; Yao, Li
2015-12-01
Spectral unmixing is an important part of hyperspectral technologies and is essential for material quantity analysis in hyperspectral imagery. Most linear unmixing algorithms require computations of matrix multiplication and matrix inversion or matrix determinants. These are difficult to program and especially hard to realize in hardware. At the same time, the computational costs of the algorithms increase significantly as the number of endmembers grows. Here, based on the traditional Orthogonal Subspace Projection algorithm, a new method called Orthogonal Vector Projection is proposed using the orthogonality principle. It simplifies the process by avoiding matrix multiplication and inversion. It first computes the final orthogonal vector via the Gram-Schmidt process for each endmember spectrum. These orthogonal vectors are then used as projection vectors for the pixel signature. The unconstrained abundance can be obtained directly by projecting the signature onto the projection vectors and computing the ratio of the projected vector length to the orthogonal vector length. Compared to the Orthogonal Subspace Projection and Least Squares Error algorithms, this method does not need matrix inversion, which is computationally costly and hard to implement in hardware. It completes the orthogonalization process by repeated vector operations, making it easy to apply in both parallel computation and hardware. The reasonableness of the algorithm is proved by its relationship with the Orthogonal Subspace Projection and Least Squares Error algorithms. Its computational complexity is also compared with that of the other two algorithms and is the lowest of the three. Finally, experimental results on synthetic and real images are provided, giving further evidence of the effectiveness of the method.
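The projection-ratio idea described in the abstract can be sketched directly. In the toy example below (synthetic endmember spectra), each endmember is Gram-Schmidt-orthogonalized against the others, and its abundance is read off as a ratio of dot products with no matrix inversion; this is an illustration of the general idea, not the authors' exact implementation.

```python
import numpy as np

# Orthogonal-vector-projection sketch: for each endmember, remove the
# components lying along the other endmembers (Gram-Schmidt), then read
# the abundance off a dot-product ratio. Spectra below are synthetic.
def ovp_unmix(E, x):
    """E: bands x endmembers, x: mixed pixel. Unconstrained abundances."""
    bands, k = E.shape
    a = np.zeros(k)
    for i in range(k):
        # Orthonormal basis for the span of the *other* endmembers
        basis = []
        for j in (j for j in range(k) if j != i):
            v = E[:, j].copy()
            for b in basis:
                v -= (v @ b) * b
            basis.append(v / np.linalg.norm(v))
        # u is endmember i stripped of those components, so x @ u
        # isolates the contribution of endmember i alone
        u = E[:, i].copy()
        for b in basis:
            u -= (u @ b) * b
        a[i] = (x @ u) / (E[:, i] @ u)
    return a

E = np.array([[0.2, 0.6, 0.1],
              [0.4, 0.3, 0.2],
              [0.1, 0.5, 0.7],
              [0.3, 0.2, 0.4]])
a_true = np.array([0.5, 0.3, 0.2])
a_hat = ovp_unmix(E, E @ a_true)
```

Because u is orthogonal to every other endmember, the cross terms vanish and each abundance follows from two dot products, which is what makes the scheme attractive for hardware and parallel implementations.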
Continental Spatio-temporal Data Analysis with Linear Spectral Mixture Model using FOSS
NASA Astrophysics Data System (ADS)
Kumar, U.; Nemani, R. R.; Ganguly, S.; Milesi, C.; Raja, K. S.; Wang, W.; Votava, P.; Michaelis, A.
2015-12-01
This work demonstrates the development and implementation of a Fully Constrained Least Squares (FCLS) unmixing model developed in the C++ programming language with the OpenCV package and boost C++ libraries in the NASA Earth Exchange (NEX). Visualization of the results is supported by GRASS GIS, and statistical analysis is carried out in R in a Linux system environment. FCLS was first tested on computer-simulated data with Gaussian noise of various signal-to-noise ratios, and on Landsat data of an agricultural scenario and an urban environment, using a set of global endmembers of substrate (soils, sediments, rocks, and non-photosynthetic vegetation), vegetation that includes green photosynthetic plants, and dark objects that encompass absorptive substrate materials, clear water, deep shadows, etc. For the agricultural scenario, a spectrally diverse collection of 11 scenes of Level 1 terrain-corrected, cloud-free Landsat-5 TM data of Fresno, California, USA were unmixed and the results were validated with the corresponding ground data. To study an urbanized landscape, clear-sky Landsat-5 TM data were unmixed and validated with coincident WorldView-2 abundance maps (of 2 m spatial resolution) for an area of San Francisco, California, USA. The results were evaluated using descriptive statistics, correlation coefficient, RMSE, probability of success, boxplots and the bivariate distribution function. Finally, FCLS was used for sub-pixel land cover analysis of the monthly WELD (Web-Enabled Landsat Data) repository from 2008 to 2011 for North America. The abundance maps, in conjunction with DMSP-OLS nighttime lights data, were used to extract urban land cover features and analyze their spatio-temporal growth.
NASA Technical Reports Server (NTRS)
Tomsik, Thomas M.; Yen, Judy C.H.; Budge, John R.
2006-01-01
Solid oxide fuel cell systems used in the aerospace or commercial aviation environment require a compact, light-weight and highly durable catalytic fuel processor. The fuel processing method considered here is an autothermal reforming (ATR) step. The ATR converts Jet-A fuel by a reaction with steam and air, forming hydrogen (H2) and carbon monoxide (CO) to be used for production of electrical power in the fuel cell. This paper addresses the first phase of an experimental catalyst screening study, looking at the relative effectiveness of several monolith catalyst types when operating with untreated Jet-A fuel. Six monolith catalyst materials were selected for preliminary evaluation and experimental bench-scale screening in a small 0.05 kWe micro-reactor test apparatus. These tests were conducted to assess relative catalyst performance under atmospheric-pressure ATR conditions while processing Jet-A fuel at a steam-to-carbon ratio of 3.5, a value higher than anticipated to be run in an optimized system. The average reformer efficiencies for the six catalysts tested ranged from 75 to 83 percent at a constant gas-hourly space velocity of 12,000 hr⁻¹. The corresponding hydrocarbon conversion efficiency varied from 86 to 95 percent during experiments run at reaction temperatures between 750 and 830 °C. Based on the results of the short-duration 100 hr tests reported herein, two of the highest-performing catalysts were selected for further evaluation in a follow-on 1000 hr life durability study in Phase II.
NASA Technical Reports Server (NTRS)
Ramsey, Michael S.; Christensen, Philip R.
1992-01-01
Accurate interpretation of thermal infrared data depends upon the understanding and removal of complicating effects. These effects may include physical mixing of various mineralogies and particle sizes, atmospheric absorption and emission, surficial coatings, geometry effects, and differential surface temperatures. The focus is the examination of the linear spectral mixing of individual mineral or endmember spectra. Linear addition of spectra, for particles larger than the wavelength, allows for a straightforward method of deconvolving the observed spectra, predicting a volume percent of each endmember. The 'forward analysis' of linear mixing (comparing the spectra of physical mixtures to numerical mixtures) has received much attention. The reverse approach of un-mixing thermal emission spectra was examined with remotely sensed data, but no laboratory verification exists. Understanding of the effects of spectral mixing on high resolution laboratory spectra allows for the extrapolation to lower resolution, and often more complicated, remotely gathered data. Thermal Infrared Multispectral Scanner (TIMS) data for Meteor Crater, Arizona were acquired in Sep. 1987. The spectral un-mixing of these data gives a unique test of the laboratory results. Meteor Crater (1.2 km in diameter and 180 m deep) is located in north-central Arizona, west of Canyon Diablo. The arid environment, paucity of vegetation, and low relief make the region ideal for remote data acquisition. Within the horizontal sedimentary sequence that forms the upper Colorado Plateau, the oldest unit sampled by the impact crater was the Permian Coconino Sandstone. A thin bed of the Toroweap Formation, also of Permian age, conformably overlies the Coconino. Above the Toroweap lies the Permian Kaibab Limestone which, in turn, is covered by a thin veneer of the Moenkopi Formation. The Moenkopi is Triassic in age and has two distinct sub-units in the vicinity of the crater.
The lower Wupatki member is a fine-grained sandstone, while the upper Moqui member is a fissile siltstone. Ejecta from these units are preserved as inverted stratigraphy up to 2 crater radii from the rim. The mineralogical contrast between the units, relative lack of post-emplacement erosion and ejecta mixing provide a unique site to apply the un-mixing model. Selection of the aforementioned units as endmembers reveals distinct patterns in the ejecta of the crater.
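The deconvolution idea above, predicting a volume percent of each endmember from a linearly mixed emission spectrum, can be illustrated with a plain least-squares fit. This is a simplified sketch under the linear-mixing assumption (real thermal-infrared deconvolution must also handle temperature and blackbody effects), and the function name is hypothetical.

```python
import numpy as np

def unmix_emissivity(spectrum, endmembers):
    """Fit a measured spectrum (n_bands,) as a linear combination of
    endmember spectra (n_bands, p) and report volume percents."""
    coeffs, *_ = np.linalg.lstsq(endmembers, spectrum, rcond=None)
    coeffs = np.clip(coeffs, 0.0, None)   # discard small negative fits
    return 100.0 * coeffs / coeffs.sum()  # normalize to 100 volume percent
```

Given laboratory spectra for each stratigraphic unit as endmembers, each pixel spectrum would be fitted this way to map unit proportions in the ejecta.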
Scaling dimensions in spectroscopy of soil and vegetation
NASA Astrophysics Data System (ADS)
Malenovský, Zbyněk; Bartholomeus, Harm M.; Acerbi-Junior, Fausto W.; Schopfer, Jürg T.; Painter, Thomas H.; Epema, Gerrit F.; Bregt, Arnold K.
2007-05-01
The paper revises and clarifies definitions of the term scale and scaling conversions for imaging spectroscopy of soil and vegetation. We demonstrate a new four-dimensional scale concept that includes not only spatial but also the spectral, directional and temporal components. Three scaling remote sensing techniques are reviewed: (1) radiative transfer, (2) spectral (un)mixing, and (3) data fusion. Relevant case studies are given in the context of their up- and/or down-scaling abilities over the soil/vegetation surfaces and a multi-source approach is proposed for their integration. Radiative transfer (RT) models are described to show their capacity for spatial, spectral up-scaling, and directional down-scaling within a heterogeneous environment. Spectral information and spectral derivatives, like vegetation indices (e.g. TCARI/OSAVI), can be scaled and even tested by their means. Radiative transfer of an experimental Norway spruce ( Picea abies (L.) Karst.) research plot in the Czech Republic was simulated by the Discrete Anisotropic Radiative Transfer (DART) model to prove relevance of the correct object optical properties scaled up to image data at two different spatial resolutions. Interconnection of the successive modelling levels in vegetation is shown. A future development in measurement and simulation of the leaf directional spectral properties is discussed. We describe linear and/or non-linear spectral mixing techniques and unmixing methods that demonstrate spatial down-scaling. Relevance of proper selection or acquisition of the spectral endmembers using spectral libraries, field measurements, and pure pixels of the hyperspectral image is highlighted. An extensive list of advanced unmixing techniques, a particular example of unmixing a reflective optics system imaging spectrometer (ROSIS) image from Spain, and examples of other mixture applications give insight into the present status of scaling capabilities. 
Simultaneous spatial and temporal down-scaling by means of a data fusion technique is described. A demonstrative example is given for the moderate resolution imaging spectroradiometer (MODIS) and LANDSAT Thematic Mapper (TM) data from Brazil. Corresponding spectral bands of both sensors were fused via a pyramidal wavelet transform in Fourier space. New spectral and temporal information of the resultant image can be used for thematic classification or qualitative mapping. All three described scaling techniques can be integrated as the relevant methodological steps within a complex multi-source approach. We present this concept of combining numerous optical remote sensing data and methods to generate inputs for ecosystem process models.
Pedretti, Kevin
2008-11-18
A compute processor allocator architecture for allocating compute processors to run applications in a multiple processor computing apparatus is distributed among a subset of processors within the computing apparatus. Each processor of the subset includes a compute processor allocator. The compute processor allocators can share a common database of information pertinent to compute processor allocation. A communication path permits retrieval of information from the database independently of the compute processor allocators.
SOURCE APPORTIONMENT RESULTS, UNCERTAINTIES, AND MODELING TOOLS
Advanced multivariate receptor modeling tools are available from the U.S. Environmental Protection Agency (EPA) that use only speciated sample data to identify and quantify sources of air pollution. EPA has developed both EPA Unmix and EPA Positive Matrix Factorization (PMF) and ...
46 CFR 164.006-5 - Procedure for approval.
Code of Federal Regulations, 2010 CFR
2010-10-01
... the deck covering. (2) The range of thicknesses in which it is proposed to lay the deck covering... (c). (2) Sufficient bulk material (unmixed) to lay a sample one inch thick on an area of 12″×27″. If...
Multi-Fluid Interpenetration Mixing in X-ray and Directly Laser driven ICF Capsule Implosions
NASA Astrophysics Data System (ADS)
Wilson, Douglas
2003-10-01
Mix between a surrounding shell and the fuel leads to degradation in ICF capsule performance. Both indirectly (X-ray) and directly laser driven implosions provide a wealth of data to test mix models. One model, the multi-fluid interpenetration mix model of Scannapieco and Cheng (Phys. Lett. A, 299, 49, 2002), was implemented in an ICF code and applied to a wide variety of experiments (e.g. J. D. Kilkenny et al., Proc. Conf. Plasm. Phys. Contr. Nuc. Fus. Res. 3, 29 (1988), P. Amendt, R. E. Turner, O. L. Landen, Phys. Rev. Lett., 89, 165001 (2002), or Li et al., Phys. Rev. Lett., 89, 165002 (2002)). With its single adjustable parameter fixed, it replicates well the yield degradation with increasing convergence ratio for both directly and indirectly driven capsules. Often, but not always, the ion temperatures with mixing are calculated to be higher than in an unmixed implosion, agreeing with observations. Comparison with measured directly driven implosion yield rates (from the neutron temporal diagnostic, or NTD) shows mixing increases rapidly during the burn. The model also reproduces the decrease of the fuel "rho-r" with fill gas pressure, measured by observing escaping deuterons or secondary neutrons. The mix model assumes fully atomically mixed constituents, but when experiments with deuterated plastic layers and 3He fuel are modeled, less than full atomic mix is appropriate. Applying the mix model to the ablator - solid DT interface in indirectly driven ignition capsules for the NIF or LMJ suggests that the capsules will ignite, but that burn after ignition may be somewhat degraded. Situations in which the Scannapieco and Cheng model fails to agree with experiments can guide us to improvements or the development of other models. Some directly driven symmetric implosions suggest that in highly mixed situations, a higher value of the mix parameter may be needed. Others show the model underestimating the fuel burn temperature.
This work was performed by the Los Alamos National Laboratory under DOE contract number W-7405-Eng-36.
Enhancing charge transfer kinetics by nanoscale catalytic cermet interlayer.
An, Jihwan; Kim, Young-Beom; Gür, Turgut M; Prinz, Fritz B
2012-12-01
Enhancing the density of catalytic sites is crucial for improving the performance of energy conversion devices. This work demonstrates the kinetic role of 2 nm thin YSZ/Pt cermet layers on enhancing the oxygen reduction kinetics for low temperature solid oxide fuel cells. Cermet layers were deposited between the porous Pt cathode and the dense YSZ electrolyte wafer using atomic layer deposition (ALD). Not only the catalytic role of the cermet layer itself but the mixing effect in the cermet was explored. For cells with unmixed and fully mixed cermet interlayers, the maximum power density was enhanced by a factor of 1.5 and 1.8 at 400 °C, and by 2.3 and 2.7 at 450 °C, respectively, when compared to control cells with no cermet interlayer. The observed enhancement in cell performance is believed to be due to the increased triple phase boundary (TPB) density in the cermet interlayer. We also believe that the sustained kinetics for the fully mixed cermet layer sample stems from better thermal stability of Pt islands separated by the ALD YSZ matrix, which helped to maintain the high-density TPBs even at elevated temperature.
NASA Astrophysics Data System (ADS)
Lee, Kwangho; Han, Gwangwoo; Cho, Sungbaek; Bae, Joongmyeon
2018-03-01
A novel concept for diesel fuel processing utilizing H2O2 is suggested to obtain the high-purity H2 required for air-independent propulsion using polymer electrolyte membrane fuel cells for use in submarines and unmanned underwater vehicles. The core components include 1) a diesel-H2O2 autothermal reforming (ATR) reactor to produce H2-rich gas, 2) a water-gas shift (WGS) reactor to convert CO to H2, and 3) a H2 separation membrane to separate only high-purity H2. Diesel and H2O2 can easily be pressurized as they are liquids. The application of the H2 separation membrane without a compressor in the middle of the process is thus advantageous. In this paper, the characteristics of pressurized ATR and WGS reactions are investigated according to the operating conditions. In both reactors, the methanation reaction is enhanced as the pressure increases. Then, permeation experiments with a H2 separation membrane are performed while varying the temperature, pressure difference, and inlet gas composition. In particular, approximately 90% of the H2 is recovered when the steam-separated rear gas of the WGS reactor is used in the H2 separation membrane. Finally, based on the experimental results, design points are suggested for maximizing the efficiency of the diesel-H2O2 fuel processor.
High pressure autothermal reforming in low oxygen environments
NASA Astrophysics Data System (ADS)
Reese, Mark A.; Turn, Scott Q.; Cui, Hong
Recent interest in fuel cells has led to the conceptual design of an ocean floor, fuel cell-based, power generating station fueled by methane from natural gas seeps or from the controlled decomposition of methane hydrates. Because the dissolved oxygen concentration in deep ocean water is too low to provide adequate supplies to a fuel processor and fuel cell, oxygen must be stored onboard the generating station. A lab-scale catalytic autothermal reformer capable of operating at pressures of 6-50 bar was constructed and tested. The objective of the experimental program was to maximize H2 production per mole of O2 supplied (H2(out)/O2(in)). Optimization, using oxygen-to-carbon (O2/C) and steam-to-carbon (S/C) ratios as independent variables, was conducted at three pressures using bottled O2. Surface response methodology was employed using a 2² factorial design. Optimal points were validated using H2O2 as both a stored oxidizer and steam source. The optimal experimental conditions for maximizing the moles of H2(out)/O2(in) occurred at a S/C ratio of 3.00-3.35 and an O2/C ratio of 0.44-0.48. When using H2O2 as the oxidizer, the moles of H2(out)/O2(in) increased ≤14%. An equilibrium model was also used to compare experimental and theoretical results.
Monitoring intracellular oxidative events using dynamic spectral unmixing microscopy
There is increasing interest in using live-cell imaging to monitor not just individual intracellular endpoints, but to investigate the interplay between multiple molecular events as they unfold in real time within the cell. A major impediment to simultaneous acquisition of multip...
NASA Astrophysics Data System (ADS)
Wamser, Kyle
Hyperspectral imagery and the corresponding ability to conduct analysis below the pixel level have tremendous potential to aid in landcover monitoring. During large ecosystem restoration projects, being able to monitor specific aspects of the recovery over large and often inaccessible areas under constrained finances is a major challenge. The Civil Air Patrol's Airborne Real-time Cueing Hyperspectral Enhanced Reconnaissance (ARCHER) system can provide hyperspectral data in most parts of the United States at relatively low cost. Although designed specifically for use in locating downed aircraft, the imagery holds the potential to identify specific aspects of landcover at far greater fidelity than traditional multispectral means. The goals of this research were to improve the use of ARCHER hyperspectral imagery to classify sub-canopy and open-area vegetation in coniferous forests located in the Southern Rockies and to determine how much fidelity might be lost from a baseline of 1 meter spatial resolution resampled to 2 and 5 meter pixel size to simulate higher altitude collection. Based on analysis comparing linear spectral unmixing with a traditional supervised classification, the linear spectral unmixing proved to be statistically superior. More importantly, however, linear spectral unmixing provided additional sub-pixel information that was unavailable using other techniques. The second goal of determining fidelity loss based on spatial resolution was more difficult to achieve due to how the data are represented. Furthermore, the 2 and 5 meter imagery were obtained by resampling the 1 meter imagery and therefore may not be representative of the quality of actual 2 or 5 meter imagery. Ultimately, the information derived from this research may be useful in better utilizing hyperspectral imagery to conduct forest monitoring and assessment.
Andrews, John T.; Eberl, D.D.
2012-01-01
Along the margins of areas such as Greenland and Baffin Bay, sediment composition reflects a complex mixture of sources associated with the transport of sediment in sea ice, icebergs, melt-water and turbidite plumes. Similar situations arise in many contexts associated with sediment transport and with the mixing of sediments from different source areas. The question is: can contributions from discrete sediment (bedrock) sources be distinguished in a mixed sediment by using mineralogy, and, if so, how accurately? To solve this problem, four end-member source sediments were mixed in various proportions to form eleven artificial mixtures. Two of the end-member sediments are felsic, and the other two have more mafic compositions. End member and mixed sediment mineralogies were measured for the <2 mm sediment fractions by quantitative X-ray diffraction (qXRD). The proportions of source sediments in the mixtures then were calculated using an Excel macro program named SedUnMix, and the results were evaluated to determine the robustness of the algorithm. The program permits the unmixing of up to six end members, each of which can be represented by up to 5 alternative compositions, so as to better simulate variability within each source region. The results indicate that we can track the relative percentages of the four end members in the mixtures. We recommend, prior to applying the technique to down-core or to other provenance problems, that a suite of known, artificial mixtures of sediments from probable source areas be prepared, scanned, analyzed for quantitative mineralogy, and then analyzed by SedUnMix to check the sensitivity of the method for each specific unmixing problem. © 2011 Elsevier B.V.
Unmixing Magnetic Hysteresis Loops
NASA Astrophysics Data System (ADS)
Heslop, D.; Roberts, A. P.
2012-04-01
Magnetic hysteresis loops provide important information in rock and environmental magnetic studies. Natural samples often contain an assemblage of magnetic particles composed of components with different origins. Each component potentially carries important environmental information. Hysteresis loops, however, provide information concerning the bulk magnetic assemblage, which makes it difficult to isolate the specific contributions from different sources. For complex mineral assemblages an unmixing strategy with which to separate hysteresis loops into their component parts is therefore essential. Previous methods to unmix hysteresis data have aimed at separating individual loops into their constituent parts using libraries of type-curves thought to correspond to specific mineral types. We demonstrate an alternative approach, which rather than decomposing a single loop into monomineralic contributions, examines a collection of loops to determine their constituent source materials. These source materials may themselves be mineral mixtures, but they provide a genetically meaningful decomposition of a magnetic assemblage in terms of the processes that controlled its formation. We show how an empirically derived hysteresis mixing space can be created, without resorting to type-curves, based on the co-variation within a collection of measured loops. Physically realistic end-members, which respect the expected behaviour and symmetries of hysteresis loops, can then be extracted from the mixing space. These end-members allow the measured loops to be described as a combination of invariant parts that are assumed to represent the different sources in the mixing model. Particular attention is paid to model selection and estimating the complexity of the mixing model, specifically, how many end-members should be included. We demonstrate application of this approach using lake sediments from Butte Valley, northern California. 
Our method successfully separates the hysteresis loops into sources with a variety of terrigenous and authigenic origins.
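A generic way to build such an end-member mixing space from the co-variation within a collection of measured loops is a non-negative bilinear factorization. The sketch below uses plain multiplicative-update NMF (Lee-Seung) as a stand-in illustration only; the authors' actual method constructs the mixing space differently and enforces the expected behaviour and symmetries of hysteresis loops, which this toy version does not.

```python
import numpy as np

def endmember_model(D, k, iters=3000, seed=0):
    """Factor D (n_samples x n_field_steps, shifted to be non-negative)
    into mixing coefficients A (n_samples x k) and k end-member curves
    S (k x n_field_steps) using Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    n, m = D.shape
    A = rng.random((n, k))
    S = rng.random((k, m))
    eps = 1e-12                          # guards against division by zero
    for _ in range(iters):
        A *= (D @ S.T) / (A @ S @ S.T + eps)
        S *= (A.T @ D) / (A.T @ A @ S + eps)
    return A, S
```

Each measured loop (a row of D) is then described as a non-negative combination of the k end-member curves, which play the role of invariant source contributions in the mixing model.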
NASA Astrophysics Data System (ADS)
Qie, G.; Wang, G.; Wang, M.
2016-12-01
Mixed pixels and shadows due to buildings in urban areas impede accurate estimation and mapping of city vegetation carbon density. In most previous studies these factors are ignored, resulting in underestimation of city vegetation carbon density. In this study we present an integrated methodology to improve the accuracy of mapping city vegetation carbon density. Firstly, we applied a linear shadow removal analysis (LSRA) to remotely sensed Landsat 8 images to reduce the shadow effects on carbon estimation. Secondly, we integrated a linear spectral unmixing analysis (LSUA) with a linear stepwise regression (LSR), a logistic model-based stepwise regression (LMSR) and k-Nearest Neighbors (kNN), and applied and compared the integrated models on shadow-removed images to map vegetation carbon density. This methodology was examined in Shenzhen City of Southeast China. A data set from a total of 175 sample plots measured in 2013 and 2014 was used to train the models. The independent variables statistically significantly contributing to improving the fit of the models to the data and reducing the sum of squared errors were selected from a total of 608 variables derived from different image band combinations and transformations. The vegetation fraction from LSUA was then added into the models as an important independent variable. The estimates obtained were evaluated using a cross-validation method. Our results showed that higher accuracies were obtained from the integrated models compared with the ones using traditional methods that ignore the effects of mixed pixels and shadows. This study indicates that the integrated method has great potential for improving the accuracy of urban vegetation carbon density estimation. Key words: Urban vegetation carbon, shadow, spectral unmixing, spatial modeling, Landsat 8 images
NASA Astrophysics Data System (ADS)
Salvatore, M. R.; Goudge, T. A.; Bramble, M. S.; Edwards, C. S.; Bandfield, J. L.; Amador, E. S.; Mustard, J. F.; Christensen, P. R.
2018-02-01
We investigated the area to the northwest of the Isidis impact basin (hereafter referred to as "NW Isidis") using thermal infrared emission datasets to characterize and quantify bulk surface mineralogy throughout this region. This area is home to Jezero crater and the watershed associated with its two deltaic deposits, in addition to NE Syrtis and the strong and diverse visible/near-infrared spectral signatures observed in well-exposed stratigraphic sections. The spectral signatures throughout this region show a diversity of primary and secondary surface mineralogies, including olivine, pyroxene, smectite clays, sulfates, and carbonates. While previous thermal infrared investigations have sought to characterize individual mineral groups within this region, none have systematically assessed bulk surface mineralogy and related these observations to visible/near-infrared studies. We utilize an iterative spectral unmixing method to statistically evaluate our linear thermal infrared spectral unmixing models to derive surface mineralogy. All relevant primary and secondary phases identified in visible/near-infrared studies are included in the unmixing models and their modeled spectral contributions are discussed in detail. While the stratigraphy and compositional diversity observed in visible/near-infrared spectra are much better exposed and more diverse than in most other regions of Mars, our thermal infrared analyses suggest the dominance of basaltic compositions with less observed variability in the amount and diversity of alteration phases. These results help to constrain the mineralogical context of these previously reported visible/near-infrared spectral identifications. The results are also discussed in the context of future in situ investigations, as the NW Isidis region has long been promoted as a region of paleoenvironmental interest on Mars.
Hyperspectral fluorescence imaging with multi wavelength LED excitation
NASA Astrophysics Data System (ADS)
Luthman, A. Siri; Dumitru, Sebastian; Quirós-Gonzalez, Isabel; Bohndiek, Sarah E.
2016-04-01
Hyperspectral imaging (HSI) can combine morphological and molecular information, yielding potential for real-time and high throughput multiplexed fluorescent contrast agent imaging. Multiplexed readout from targets, such as cell surface receptors overexpressed in cancer cells, could improve both sensitivity and specificity of tumor identification. There remains, however, a need for compact and cost effective implementations of the technology. We have implemented a low-cost wide-field multiplexed fluorescence imaging system, which combines LED excitation at 590, 655 and 740 nm with a compact commercial solid state HSI system operating in the range 600 - 1000 nm. A key challenge for using reflectance-based HSI is the separation of contrast agent fluorescence from the reflectance of the excitation light. Here, we illustrate how it is possible to address this challenge in software, using two offline reflectance removal methods, prior to least-squares spectral unmixing. We made a quantitative comparison of the methods using data acquired from dilutions of contrast agents prepared in well-plates. We then established the capability of our HSI system for non-invasive in vivo fluorescence imaging in small animals using the optimal reflectance removal method. The HSI presented here enables quantitative unmixing of at least four fluorescent contrast agents (Alexa Fluor 610, 647, 700 and 750) simultaneously in living mice. A successful unmixing of the four fluorescent contrast agents was possible both using the pure contrast agents and with mixtures. The system could in principle also be applied to imaging of ex vivo tissue or intraoperative imaging in a clinical setting. These data suggest a promising approach for developing clinical applications of HSI based on multiplexed fluorescence contrast agent imaging.
Code of Federal Regulations, 2011 CFR
2011-04-01
... Definitions. For purposes of this subpart: (a) Biomass means any organic material not derived from fossil.... (c) Cogeneration facility means equipment used to produce electric energy and forms of useful thermal... all forms supplied from external sources; (k) Natural gas means either natural gas unmixed, or any...
Li, Tongyang; Wang, Shaoping; Zio, Enrico; Shi, Jian; Hong, Wei
2018-03-15
Leakage is the most important failure mode in aircraft hydraulic systems, caused by wear and tear between the friction pairs of components. The accurate detection of abrasive debris can reveal the wear condition and predict a system's lifespan. The radial magnetic field (RMF)-based debris detection method provides an online solution for monitoring the wear condition intuitively, potentially enabling a more accurate diagnosis and prognosis of an aviation hydraulic system's ongoing failures. To address the serious mixing of pipe abrasive debris, this paper focuses on the separation of superimposed abrasive debris in an RMF abrasive sensor based on the degenerate unmixing estimation technique. By accurately separating and calculating the morphology and amount of the abrasive debris, the RMF-based abrasive sensor can provide the system's wear trend and size estimates of the wear particles. A well-designed experiment was conducted and the result shows that the proposed method can effectively separate the mixed debris and give an accurate count of the debris based on RMF abrasive sensor detection.
Band selection using forward feature selection algorithm for citrus Huanglongbing disease detection
USDA-ARS?s Scientific Manuscript database
This study attempted to classify spectrally similar data – obtained from aerial images of healthy citrus plants and the citrus greening disease (Huanglongbing) infected plants - using small differences without un-mixing the endmember components and therefore without the need for endmember library. H...
NASA Astrophysics Data System (ADS)
Varatharajan, I.; D'Amore, M.; Maturilli, A.; Helbert, J.; Hiesinger, H.
2018-04-01
Machine learning approach to spectral unmixing of emissivity spectra of Mercury is carried out using endmember spectral library measured at simulated daytime surface conditions of Mercury. Study supports MERTIS payload onboard ESA/JAXA BepiColombo.
NASA Astrophysics Data System (ADS)
Sourav Rout, Smruti; Wörner, Gerhard
2017-04-01
Time-scales extracted from the detailed analysis of chemically zoned minerals provide insights into crystal ages, magma storage and compositional evolution, including mixing and unmixing events. This allows a better understanding of the pre-eruptive history of large and potentially dangerous magma chambers. We present a comprehensive study of chemical diffusion across zoning and exsolution patterns of alkali feldspars in carbonatite-bearing cognate syenites from the 6.3 km3 (D.R.E.) phonolitic Laacher See Tephra (LST) eruption 12.9 ka ago. The Laacher See volcano is located in the Quaternary East Eifel volcanic field of the Paleozoic Rhenish Massif in Western Germany and has produced a compositionally variable sequence in a single eruption from a magma chamber that was zoned from mafic phonolite at the base to highly evolved, actively degassing phonolite magma at the top. Diffusion chronometry is applied to major and trace element compositions obtained on alkali feldspars from carbonate-bearing syenitic cumulates. The methods used were laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) in combination with energy-dispersive and wavelength-dispersive electron microprobe analyses (EDS & WDS-EMPA). The grey-scale values extracted from multiple accumulations of back-scattered electron images represent the K/Na ratio owing to the extremely low concentrations of Ba and Sr (<30 ppm). The numerical grey-scale profiles and the quantitative compositional profiles are analyzed using three different fitting models in MATLAB®, Mathematica® and Origin® to estimate related time-scales with minimized error for a temperature range of 750 °C to 800 °C (on the basis of existing experimental data on phase transition and phase separation). A distinctive uphill diffusion analysis is used specifically for the phase separation in the case of exsolution features (comprising albite- and orthoclase-rich phases) in sanidines.
The error values aggregate the error propagated through the calculations and the uncertainty in the temperature estimates. Trace element compositional data for distinct feldspar compositions that are assumed to have grown before and after silicate-carbonate unmixing are used to estimate partition coefficients between carbonate and silicate melt. The resulting values correlate well with available experimental data from the literature. We will present a genetic model, based on the compositional data on feldspar zonation, for the process and timing of silicate-carbonate unmixing prior to eruption of the host phonolite magma.
Revisiting the "Unmixing Experiment" through Argumentation
ERIC Educational Resources Information Center
Çoban, Gul Ünal; Büber, Ayse; Saglam, Merve Kocagül
2017-01-01
This paper focuses on a series of activities for students at middle school to college level, designed to instill a sound understanding of fluids and the properties of fluids. The first activities investigate diffusion and molecular size and these are followed by tasks exploring viscosity and the factors effecting viscosity. Following this, there…
Context Dependent Spectral Unmixing
2014-08-01
the target sizes). The targets were made of 100% cotton fabric and were emplaced so that there would be representatives of each color type completely...method for simplex-based endmember extraction algorithm," IEEE Transactions on Geoscience and Remote Sensing, vol. 44, no. 10, pp. 2804–2819, 2006. [68
Efficiency of static core turn-off in a system-on-a-chip with variation
Cher, Chen-Yong; Coteus, Paul W; Gara, Alan; Kursun, Eren; Paulsen, David P; Schuelke, Brian A; Sheets, II, John E; Tian, Shurong
2013-10-29
A processor-implemented method for improving efficiency of a static core turn-off in a multi-core processor with variation, the method comprising: conducting via a simulation a turn-off analysis of the multi-core processor at the multi-core processor's design stage, wherein the turn-off analysis of the multi-core processor at the multi-core processor's design stage includes a first output corresponding to a first multi-core processor core to turn off; conducting a turn-off analysis of the multi-core processor at the multi-core processor's testing stage, wherein the turn-off analysis of the multi-core processor at the multi-core processor's testing stage includes a second output corresponding to a second multi-core processor core to turn off; comparing the first output and the second output to determine if the first output is referring to the same core to turn off as the second output; outputting a third output corresponding to the first multi-core processor core if the first output and the second output are both referring to the same core to turn off.
NASA Astrophysics Data System (ADS)
Mikheeva, Anna; Moiseev, Pavel
2017-04-01
In mountain territories climate change affects forest productivity and growth, which results in the tree line advancing and forest density increasing. These changes pose new challenges for forest managers, whose responsibilities include forest resources inventory, monitoring and protection of ecosystems, and assessment of forest vulnerability. These activities require a range of sources of information, including exact areas of forested land, forest densities and species abundances. Picea obovata, the dominant tree species in the South-Ural State Natural Reserve, Russia, has regenerated, propagated and increased its relative cover during the past 70 years. A remarkable shift of the upper limit of Picea obovata, up to 60-80 m upslope, was registered by repeat photography, especially on gentle slopes. The stands of Picea obovata are monitored by Reserve inspectors on test plots to ensure that forests maintain or improve their productivity; these studies also include projective cover measurements. However, it is impossible to cover the entire territory of the Reserve with detailed field observations. Remote sensing data from Terra ASTER imagery provide valuable information for large territories (a scene covers an area of 60 x 60 km) and can be used for quantitative mapping of forest and non-forest vegetation at regional scale (spatial resolution is 15-30 m for visible and infrared bands). A case study of estimating Picea obovata abundance was conducted for forest and forest-tundra sites of the Zigalga Range, using 9-band ASTER multispectral imagery of 23.08.2007, field data and a spectral unmixing algorithm. This type of algorithm derives an object and its abundance from a mixed pixel of multispectral imagery, which can be further converted to the object's projective cover. Atmospheric correction was applied to the imagery prior to spectral unmixing, and then pure spectra of Picea obovata were extracted from the image at 10 points and averaged.
These points are located in the Zigalga Range and were visited in summer 2016. We used the Mixture-Tuned Matched Filtering (MTMF) algorithm, a non-linear subpixel classification technique that separates a spectral mixture containing unknown objects and derives only the known ones. The results of the spectral unmixing classification were abundance maps of Picea obovata. The values were statistically filtered (only abundances with a high probability of presence and a low probability of absence were retained) and then constrained to the interval [0; 1]. Verification of the maps was carried out at sites in the Iremel Mountains on the same ASTER image, where the projective cover of Picea obovata was measured in the field at 147 points. The correlation coefficient between the spectral unmixing abundances and the field-measured abundances was 0.7; this moderate value is due to the low sensitivity of the algorithm to abundances below 0.25. The proposed method provides a tool for defining the Picea obovata boundaries more accurately than per-pixel automatic classification and for locating new spruce islands in the mixed tree-line environment. The abundances can be obtained for large areas with minimal field work, which makes this approach cost-effective in providing timely information to nature reserve managers for adapting forest management actions to climate change.
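The subpixel idea underlying this and the other unmixing entries can be illustrated with a minimal fully constrained linear-unmixing sketch (not the MTMF algorithm itself, which is more involved); the endmember spectra and mixture below are invented for illustration:

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(pixel, endmembers, delta=100.0):
    """Fully constrained linear unmixing of one pixel.

    pixel:      (bands,) observed spectrum
    endmembers: (bands, n_endmembers) pure spectra as columns
    The sum-to-one constraint is enforced softly by appending a
    heavily weighted row of ones (a standard FCLS trick);
    nonnegativity comes from the NNLS solver itself."""
    bands, n = endmembers.shape
    A = np.vstack([endmembers, delta * np.ones(n)])
    b = np.append(pixel, delta)
    abundances, _ = nnls(A, b)
    return abundances

# Two synthetic endmembers (3 bands) and a 60/40 mixture of them
em = np.array([[0.1, 0.9],
               [0.2, 0.8],
               [0.9, 0.1]])
true = np.array([0.6, 0.4])
mixed = em @ true
est = unmix_pixel(mixed, em)
```

Because the synthetic pixel is an exact nonnegative mixture, the estimate recovers the true abundances; real imagery adds noise and endmember variability on top of this.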
Greenhouse gas emissions and land use change from Jatropha curcas-based jet fuel in Brazil.
Bailis, Robert E; Baka, Jennifer E
2010-11-15
This analysis presents a comparison of life-cycle GHG emissions from synthetic paraffinic kerosene (SPK) produced as jet fuel substitute from jatropha curcas feedstock cultivated in Brazil against a reference scenario of conventional jet fuel. Life cycle inventory data are derived from surveys of actual Jatropha growers and processors. Results indicate that a baseline scenario, which assumes a medium yield of 4 tons of dry fruit per hectare under drip irrigation with existing logistical conditions using energy-based coproduct allocation methodology, and assumes a 20-year plantation lifetime with no direct land use change (dLUC), results in the emissions of 40 kg CO₂e per GJ of fuel produced, a 55% reduction relative to conventional jet fuel. However, dLUC based on observations of land-use transitions leads to widely varying changes in carbon stocks ranging from losses in excess of 50 tons of carbon per hectare when Jatropha is planted in native cerrado woodlands to gains of 10-15 tons of carbon per hectare when Jatropha is planted in former agro-pastoral land. Thus, aggregate emissions vary from a low of 13 kg CO₂e per GJ when Jatropha is planted in former agro-pastoral lands, an 85% decrease from the reference scenario, to 141 kg CO₂e per GJ when Jatropha is planted in cerrado woodlands, a 60% increase over the reference scenario. Additional sensitivities are also explored, including changes in yield, exclusion of irrigation, shortened supply chains, and alternative allocation methodologies.
Calibrating thermal behavior of electronics
Chainer, Timothy J.; Parida, Pritish R.; Schultz, Mark D.
2017-07-11
A method includes determining a relationship between indirect thermal data for a processor and a measured temperature associated with the processor, during a calibration process, obtaining the indirect thermal data for the processor during actual operation of the processor, and determining an actual significant temperature associated with the processor during the actual operation using the indirect thermal data for the processor during actual operation of the processor and the relationship.
Calibrating thermal behavior of electronics
Chainer, Timothy J.; Parida, Pritish R.; Schultz, Mark D.
2016-05-31
A method includes determining a relationship between indirect thermal data for a processor and a measured temperature associated with the processor, during a calibration process, obtaining the indirect thermal data for the processor during actual operation of the processor, and determining an actual significant temperature associated with the processor during the actual operation using the indirect thermal data for the processor during actual operation of the processor and the relationship.
Calibrating thermal behavior of electronics
Chainer, Timothy J.; Parida, Pritish R.; Schultz, Mark D.
2017-01-03
A method includes determining a relationship between indirect thermal data for a processor and a measured temperature associated with the processor, during a calibration process, obtaining the indirect thermal data for the processor during actual operation of the processor, and determining an actual significant temperature associated with the processor during the actual operation using the indirect thermal data for the processor during actual operation of the processor and the relationship.
Analyses on Cost Reduction and CO2 Mitigation by Penetration of Fuel Cells to Residential Houses
NASA Astrophysics Data System (ADS)
Aki, Hirohisa; Yamamoto, Shigeo; Kondoh, Junji; Murata, Akinobu; Ishii, Itaru; Maeda, Tetsuhiko
This paper presents analyses of the penetration of polymer electrolyte fuel cells (PEFC) into a group of 10 residential houses and its effects on CO2 emission mitigation and consumers' cost reduction over the next 30 years. The price is expected to fall as the penetration, which is expected to begin in the near future, progresses; an experience curve is assumed to express the decrease in price. Installation of energy interchange systems, which exchange electricity, gas and hydrogen between a house that has a fuel cell and contiguous houses, is assumed to utilize both electricity and heat more efficiently and to avoid start-stop operation of the fuel processor (reformer) as much as possible. A multi-objective model that considers CO2 mitigation and consumers' cost reduction is constructed and provides a Pareto optimum solution. A solution that simultaneously realizes both CO2 mitigation and consumers' cost reduction appears in the Pareto optimum set. Strategies to reduce CO2 emissions and consumers' cost are suggested from the results of the analyses. The analyses also revealed that the energy interchange systems are especially effective in the early stage of the penetration.
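The multi-objective trade-off described above can be sketched with a toy Pareto filter; the (cost, CO2) pairs below are illustrative placeholders, not values from the study:

```python
def pareto_front(points):
    """Return the non-dominated subset of (cost, co2) pairs.
    A point dominates another if it is no worse in both objectives
    and strictly better in at least one; minimizing both here."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# Hypothetical (consumer cost, CO2 emission) candidates
candidates = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
front = pareto_front(candidates)
```

Points like (3, 4) drop out because (2, 3) is better on both objectives; the survivors form the trade-off curve from which a planner picks a compromise.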
Confined combustion of TNT explosion products in air
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chandler, J; Ferguson, R E; Forbes, J
1998-08-31
Effects of turbulent combustion induced by the explosion of a 0.8 kg cylindrical charge of TNT in a 17 m³ chamber filled with air are investigated. The detonation wave in the charge transforms the solid explosive (C7H5N3O6) into gaseous products, rich (~20% each) in carbon dust and carbon monoxide. The detonation pressure (~210 kbar) thereby engendered causes the products to expand rapidly, driving a blast wave into the surrounding air. The interface between the products and air, being essentially unstable as a consequence of the strong acceleration to which it is subjected within the blast wave, evolves into a turbulent mixing layer, a process enhanced by shock reflections from the walls. Under such circumstances rapid combustion takes place in which the expanded detonation products play the role of fuel. Its dynamic effect is manifested by the experimental measurement of a ~3 bar pressure increase in the chamber, in contrast to the ~1 bar attained by a corresponding TNT explosion in nitrogen. The experiments were modeled as turbulent combustion in an unmixed system at infinite Reynolds, Peclet and Damköhler numbers. The CFD solution was obtained by a high-order Godunov scheme using AMR (Adaptive Mesh Refinement) to trace the turbulent mixing on the computational grid in as much detail as possible. The evolution of the mass fraction of fuel consumed by combustion thus determined exhibited the properties of an exponential decay following a sharp initiation. The results reveal all the dynamic features of the exothermic process of combustion controlled by fluid-mechanic transport in a highly turbulent field, in contrast to those elucidated by the conventional reaction-diffusion model.
Methods and systems for providing reconfigurable and recoverable computing resources
NASA Technical Reports Server (NTRS)
Stange, Kent (Inventor); Hess, Richard (Inventor); Kelley, Gerald B (Inventor); Rogers, Randy (Inventor)
2010-01-01
A method for optimizing the use of digital computing resources to achieve reliability and availability of the computing resources is disclosed. The method comprises providing one or more processors with a recovery mechanism, the one or more processors executing one or more applications. A determination is made whether the one or more processors needs to be reconfigured. A rapid recovery is employed to reconfigure the one or more processors when needed. A computing system that provides reconfigurable and recoverable computing resources is also disclosed. The system comprises one or more processors with a recovery mechanism, with the one or more processors configured to execute a first application, and an additional processor configured to execute a second application different than the first application. The additional processor is reconfigurable with rapid recovery such that the additional processor can execute the first application when one of the one or more processors fails.
Pile mixing increases greenhouse gas emissions during composting of dairy manure
USDA-ARS?s Scientific Manuscript database
The effect of pile mixing on greenhouse gas (GHG) emissions from stored dairy manure was determined using large flux chambers designed to completely cover pilot-scale manure piles. GHG emissions from piles that were mixed four times during the 80 day trial were about 20% higher than unmixed piles. ...
SOURCE APPORTIONMENT OF PM2.5 AT AN URBAN IMPROVE SITE IN SEATTLE, WA
The multivariate receptor models Positive Matrix Factorization (PMF) and Unmix were used along with EPA's Chemical Mass Balance model to deduce the sources of PM2.5 at a centrally located urban site in Seattle, Washington. A total of 289 filter samples were obtained with an IM...
Testing the Nonce Borrowing Hypothesis: Counter-Evidence from English-Origin Verbs in Welsh
ERIC Educational Resources Information Center
Stammers, Jonathan R.; Deuchar, Margaret
2012-01-01
According to the nonce borrowing hypothesis (NBH), "[n]once borrowings pattern exactly like their native counterparts in the (unmixed) recipient language" (Poplack & Meechan, 1998a, p. 137). Nonce borrowings (Sankoff, Poplack & Vanniarajan, 1990, p. 74) are "lone other-language items" which differ from established borrowings in terms of frequency…
SOURCE APPORTIONMENT OF SEATTLE PM 2.5: A COMPARISON OF IMPROVE AND ENHANCED STN DATA SETS
Seattle, WA, STN and IMPROVE data sets with STN temperature resolved carbon peaks were analyzed with both the PMF and Unmix receptor models. In addition, the IMPROVE trace element data was combined with the major STN species to examine the role of IMPROVE metals. To compare the ...
Postfire soil burn severity mapping with hyperspectral image unmixing
Peter R. Robichaud; Sarah A. Lewis; Denise Y. M. Laes; Andrew T. Hudak; Raymond F. Kokaly; Joseph A. Zamudio
2007-01-01
Burn severity is mapped after wildfires to evaluate immediate and long-term fire effects on the landscape. Remotely sensed hyperspectral imagery has the potential to provide important information about fine-scale ground cover components that are indicative of burn severity after large wildland fires. Airborne hyperspectral imagery and ground data were collected after...
Estimating the formation age distribution of continental crust by unmixing zircon ages
NASA Astrophysics Data System (ADS)
Korenaga, Jun
2018-01-01
Continental crust provides first-order control on Earth's surface environment, enabling the presence of stable dry landmasses surrounded by deep oceans. The evolution of continental crust is important for atmospheric evolution, because continental crust is an essential component of the deep carbon cycle and is likely to have played a critical role in the oxygenation of the atmosphere. Geochemical information stored in the mineral zircon, known for its resilience to diagenesis and metamorphism, has been central to ongoing debates on the genesis and evolution of continental crust. However, correction for crustal reworking, which is the most critical step when estimating original formation ages, has been incorrectly formulated, undermining the significance of previous estimates. Here I suggest a simple yet promising approach to reworking correction using the global compilation of zircon data. The present-day distribution of crustal formation age estimated by the new "unmixing" method serves as a lower bound on true crustal growth, and large deviations from growth models based on mantle depletion imply an important role for crustal recycling throughout Earth history.
Method for hyperspectral imagery exploitation and pixel spectral unmixing
NASA Technical Reports Server (NTRS)
Lin, Ching-Fang (Inventor)
2003-01-01
An efficient hybrid approach to exploit hyperspectral imagery and unmix spectral pixels. This hybrid approach uses a genetic algorithm to solve the abundance vector for the first pixel of a hyperspectral image cube. This abundance vector is used as the initial state in a robust filter to derive the abundance estimate for the next pixel. Using a Kalman filter, the abundance estimate for a pixel can be obtained in a single iteration, which is much faster than the genetic algorithm. The output of the robust filter is fed to the genetic algorithm again to derive an accurate abundance estimate for the current pixel. Using the robust filter solution as the starting point speeds up the evolution of the genetic algorithm. After obtaining the accurate abundance estimate, the procedure moves to the next pixel, using the output of the genetic algorithm as the previous state estimate to derive that pixel's abundance estimate with the robust filter, and again uses the genetic algorithm to refine the estimate efficiently from the robust filter solution. This iteration continues until all pixels in the hyperspectral image cube have been processed.
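A rough software sketch of the warm-start idea, with a plain projected-gradient refinement standing in for both the robust filter and the genetic-algorithm stage (the paper's actual machinery is more elaborate; all data here are synthetic):

```python
import numpy as np

def refine_abundance(pixel, E, a0, iters=500, lr=0.3):
    """Refine an abundance vector from a warm start a0 by descending
    the least-squares cost ||E a - pixel||^2, then clipping to
    nonnegative values and renormalizing to sum to one (a cheap
    heuristic projection onto the simplex, not an exact one)."""
    a = a0.copy()
    for _ in range(iters):
        grad = E.T @ (E @ a - pixel)
        a = np.clip(a - lr * grad, 0, None)
        s = a.sum()
        a = a / s if s > 0 else np.full_like(a, 1 / len(a))
    return a

def unmix_cube(cube, E):
    """Sequentially unmix pixels, warm-starting each pixel's estimate
    from the previous pixel's solution, as in the hybrid scheme."""
    n = E.shape[1]
    a = np.full(n, 1 / n)            # uninformed start for first pixel
    out = []
    for px in cube:
        a = refine_abundance(px, E, a)
        out.append(a)
    return np.array(out)

E = np.array([[0.1, 0.9],
              [0.2, 0.8],
              [0.9, 0.1]])           # 3 bands, 2 endmembers
cube = np.array([E @ [0.6, 0.4], E @ [0.2, 0.8]])
A = unmix_cube(cube, E)
```

Adjacent pixels in real scenes have similar abundances, so the warm start typically lands close to the optimum and few refinement steps are needed, which is the source of the speed-up the abstract describes.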
Multispectral analysis tools can increase utility of RGB color images in histology
NASA Astrophysics Data System (ADS)
Fereidouni, Farzad; Griffin, Croix; Todd, Austin; Levenson, Richard
2018-04-01
Multispectral imaging (MSI) is increasingly finding application in the study and characterization of biological specimens. However, the methods typically used come with challenges on both the acquisition and the analysis front. MSI can be slow and photon-inefficient, leading to long imaging times and possible phototoxicity and photobleaching. The resulting datasets can be large and complex, prompting the development of a number of mathematical approaches for segmentation and signal unmixing. We show that under certain circumstances, just three spectral channels provided by standard color cameras, coupled with multispectral analysis tools, including a more recent spectral phasor approach, can efficiently provide useful insights. These findings are supported with a mathematical model relating spectral bandwidth and spectral channel number to achievable spectral accuracy. The utility of 3-band RGB and MSI analysis tools are demonstrated on images acquired using brightfield and fluorescence techniques, as well as a novel microscopy approach employing UV-surface excitation. Supervised linear unmixing, automated non-negative matrix factorization and phasor analysis tools all provide useful results, with phasors generating particularly helpful spectral display plots for sample exploration.
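The phasor analysis mentioned above reduces each pixel to two Fourier coordinates; a minimal sketch for three channels follows (the normalization and first-harmonic choice are common phasor conventions, not necessarily the authors' exact implementation):

```python
import numpy as np

def rgb_phasor(img):
    """Map each RGB pixel to spectral-phasor coordinates (G, S).

    img: (..., 3) array of nonnegative channel intensities.
    Uses the first Fourier harmonic over the N=3 channels, so each
    pure 'spectrum' lands at a distinct point on the phasor plot and
    linear mixtures fall on the chords between those points."""
    n = np.arange(3)
    total = img.sum(-1, keepdims=True)
    total = np.where(total == 0, 1, total)   # avoid divide-by-zero
    g = (img * np.cos(2 * np.pi * n / 3)).sum(-1, keepdims=True) / total
    s = (img * np.sin(2 * np.pi * n / 3)).sum(-1, keepdims=True) / total
    return np.concatenate([g, s], axis=-1)

pure_red = rgb_phasor(np.array([1.0, 0.0, 0.0]))   # vertex of the plot
mix = rgb_phasor(np.array([1.0, 1.0, 0.0]))        # midpoint of a chord
```

A 50/50 red-green pixel maps exactly halfway along the chord between the red and green vertices, which is why phasor plots make unmixing by geometry possible.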
Endmember extraction from hyperspectral image based on discrete firefly algorithm (EE-DFA)
NASA Astrophysics Data System (ADS)
Zhang, Chengye; Qin, Qiming; Zhang, Tianyuan; Sun, Yuanheng; Chen, Chao
2017-04-01
This study proposed a novel method to extract endmembers from hyperspectral images based on a discrete firefly algorithm (EE-DFA). Endmembers are the input to many spectral unmixing algorithms. Hence, in this paper, endmember extraction from a hyperspectral image is regarded as a combinatorial optimization problem whose objective is the best spectral unmixing result, solved by the discrete firefly algorithm. Two series of experiments were conducted on synthetic hyperspectral datasets with different SNR and on the AVIRIS Cuprite dataset, respectively. The experimental results were compared with the endmembers extracted by four popular methods: the sequential maximum angle convex cone (SMACC), N-FINDR, Vertex Component Analysis (VCA), and Minimum Volume Constrained Nonnegative Matrix Factorization (MVC-NMF). In addition, the effect of the parameters of the proposed method was tested on both the synthetic datasets and the AVIRIS Cuprite dataset, and a recommended parameter setting was proposed. The results demonstrate that the proposed EE-DFA method performs better than the existing popular methods, and that EE-DFA is robust under different SNR conditions.
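Endmember extraction as an optimization over pixel subsets can be illustrated with the much simpler deterministic Successive Projection Algorithm, used here as a stand-in for the firefly search (the data are synthetic; SPA is a different, greedy technique, not EE-DFA):

```python
import numpy as np

def spa_endmembers(X, k):
    """Successive Projection Algorithm: greedily pick k pixels whose
    spectra are maximally independent, a common baseline for
    endmember extraction.

    X: (pixels, bands) spectra. Returns indices of selected pixels."""
    R = X.astype(float).copy()
    idx = []
    for _ in range(k):
        norms = (R ** 2).sum(axis=1)
        j = int(np.argmax(norms))        # most 'extreme' residual
        idx.append(j)
        u = R[j] / np.linalg.norm(R[j])
        R = R - np.outer(R @ u, u)       # project out chosen direction
    return idx

# Pure pixels at rows 0, 3, 5; the rest are convex mixtures of them
X = np.array([[1.0, 0.0, 0.0],
              [0.5, 0.5, 0.0],
              [0.3, 0.3, 0.4],
              [0.0, 1.0, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.0, 1.0]])
picked = spa_endmembers(X, 3)
```

Because mixed pixels always lie inside the simplex spanned by the pure ones, each greedy step lands on a pure pixel; metaheuristics like the firefly algorithm search the same subset space more exhaustively.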
Li, Tongyang; Wang, Shaoping; Zio, Enrico; Shi, Jian; Hong, Wei
2018-01-01
Leakage is the most important failure mode in aircraft hydraulic systems, caused by wear between the friction pairs of components. Accurate detection of abrasive debris can reveal the wear condition and help predict a system's lifespan. The radial magnetic field (RMF)-based debris detection method provides an online solution for monitoring the wear condition intuitively, potentially enabling a more accurate diagnosis and prognosis of an aviation hydraulic system's ongoing failures. To address the serious mixing of pipe abrasive debris, this paper focuses on separating the superimposed abrasive debris signals of an RMF abrasive sensor based on the degenerate unmixing estimation technique. By accurately separating and calculating the morphology and amount of the abrasive debris, the RMF-based abrasive sensor can provide the system with the wear trend and size estimates of the wear particles. A well-designed experiment was conducted, and the results show that the proposed method can effectively separate the mixed debris and give an accurate count of the debris based on RMF abrasive sensor detection. PMID:29543733
Shallow sea-floor reflectance and water depth derived by unmixing multispectral imagery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bierwirth, P.N.; Lee, T.J.; Burne, R.V.
1993-03-01
A major problem for mapping shallow-water zones by the analysis of remotely sensed data is that contrast effects due to water depth obscure and distort the spectral nature of the substrate. This paper outlines a new method which unmixes the exponential influence of depth in each pixel by employing a mathematical constraint. This leaves a multispectral residual which represents relative substrate reflectance. Inputs to the process are the raw multispectral data and water attenuation coefficients derived by the co-analysis of known bathymetry and remotely sensed data. Outputs are substrate-reflectance images corresponding to the input bands and a greyscale depth image. The method has been applied in the analysis of Landsat TM data at Hamelin Pool in Shark Bay, Western Australia. Algorithm-derived substrate reflectance images for Landsat TM bands 1, 2, and 3, combined in color, represent the optimum enhancement for mapping or classifying substrate types. As a result, this color image successfully delineated features which were obscured in the raw data, such as the distributions of sea-grasses, microbial mats, and sandy areas. 19 refs.
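The depth-unmixing idea can be sketched with a log-linear toy model; the constraint used here (a fixed summed log-reflectance across bands) is an illustrative assumption loosely following the abstract, not the paper's exact formulation:

```python
import numpy as np

def unmix_depth(L, k, logR_sum=0.0):
    """Separate water depth from substrate reflectance for one pixel
    under the model L_i = R_i * exp(-2 * k_i * z), i.e.
    ln L_i = ln R_i - 2 k_i z. Summing over bands and imposing the
    assumed constraint sum_i ln R_i = logR_sum lets z be solved
    directly, after which each R_i follows by back-substitution.

    L: (bands,) radiances, k: (bands,) attenuation coefficients.
    Returns (z, R): scalar depth and per-band substrate reflectance."""
    L = np.asarray(L, float)
    k = np.asarray(k, float)
    z = (logR_sum - np.log(L).sum()) / (2 * k.sum())
    R = L * np.exp(2 * k * z)
    return z, R
```

The round trip works because the depth term enters every band through the same z: one scalar constraint on the substrate is enough to pin it down, leaving the per-band residuals as relative reflectance.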
CHAMP: a locally adaptive unmixing-based hyperspectral anomaly detection algorithm
NASA Astrophysics Data System (ADS)
Crist, Eric P.; Thelen, Brian J.; Carrara, David A.
1998-10-01
Anomaly detection offers a means by which to identify potentially important objects in a scene without prior knowledge of their spectral signatures. As such, this approach is less sensitive to variations in target class composition, atmospheric and illumination conditions, and sensor gain settings than a spectral matched filter or similar algorithm would be. The best existing anomaly detectors generally fall into one of two categories: those based on local Gaussian statistics, and those based on linear mixing models. Unmixing-based approaches better represent the real distribution of data in a scene, but are typically derived and applied on a global or scene-wide basis. Locally adaptive approaches allow detection of more subtle anomalies by accommodating the spatial non-homogeneity of background classes in a typical scene, but provide a poorer representation of the true underlying background distribution. The CHAMP algorithm combines the best attributes of both approaches, applying a linear-mixing-model approach in a spatially adaptive manner. The algorithm itself, and test results on simulated and actual hyperspectral image data, are presented in this paper.
High efficiency organic photovoltaic cells employing hybridized mixed-planar heterojunctions
Xue, Jiangeng; Uchida, Soichi; Rand, Barry P.; Forrest, Stephen
2015-08-18
A device is provided, having a first electrode, a second electrode, and a photoactive region disposed between the first electrode and the second electrode. The photoactive region includes a first photoactive organic layer that is a mixture of an organic acceptor material and an organic donor material, wherein the first photoactive organic layer has a thickness not greater than 0.8 characteristic charge transport lengths; a second photoactive organic layer in direct contact with the first organic layer, wherein the second photoactive organic layer is an unmixed layer of the organic acceptor material of the first photoactive organic layer, and the second photoactive organic layer has a thickness not less than about 0.1 optical absorption lengths; and a third photoactive organic layer disposed between the first electrode and the second electrode and in direct contact with the first photoactive organic layer. The third photoactive organic layer is an unmixed layer of the organic donor material of the first photoactive organic layer and has a thickness not less than about 0.1 optical absorption lengths.
NASA Astrophysics Data System (ADS)
Yang, M.; Wang, J.; Zhang, Q.
2017-07-01
Vegetation coverage is one of the most important indicators of ecological environment change, and is also an effective index for the assessment of land degradation and desertification. Dry-hot valley regions have sparse surface vegetation, and the spectral information about the vegetation in such regions is usually weakly represented in remote sensing, so there are considerable limitations in applying the commonly used vegetation index method to calculate vegetation coverage in these regions. Therefore, in this paper, the Alternating Angle Minimum (AAM) algorithm, a deterministic model, is adopted for endmember selection and pixel unmixing of a MODIS image in order to extract the vegetation coverage, and an accuracy test is carried out using a Landsat TM image from the same period. The results show that, in dry-hot valley regions with sparse vegetation, the AAM model has high unmixing accuracy and the extracted vegetation coverage is close to the actual situation, so applying the AAM model to the extraction of vegetation coverage in dry-hot valley regions is promising.
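A schematic of alternating-update unmixing, using the classic multiplicative NMF updates as a stand-in for the AAM algorithm (which the abstract does not specify in detail; the data are synthetic):

```python
import numpy as np

def alternating_unmix(X, n_end, iters=500, seed=0):
    """Factor X ≈ A @ E with nonnegative abundances A and endmember
    spectra E by alternating multiplicative updates (Lee-Seung NMF).
    Each half-step improves one factor while the other is held fixed,
    the same alternate-and-refine structure as AAM-style schemes.

    X: (pixels, bands). Returns (A, E)."""
    rng = np.random.default_rng(seed)
    p, b = X.shape
    A = rng.random((p, n_end)) + 0.1
    E = rng.random((n_end, b)) + 0.1
    eps = 1e-9                        # guards against division by zero
    for _ in range(iters):
        A *= (X @ E.T) / (A @ E @ E.T + eps)
        E *= (A.T @ X) / (A.T @ A @ E + eps)
    return A, E
```

On exactly low-rank nonnegative data the reconstruction error drops to near zero; real MODIS pixels add noise, so the fit is only approximate and the factors need interpretation against field data, as the study does.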
Evaluation of gasification and novel thermal processes for the treatment of municipal solid waste
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niessen, W.R.; Marks, C.H.; Sommerlad, R.E.
1996-08-01
This report identifies seven developers whose gasification technologies can be used to treat the organic constituents of municipal solid waste: Energy Products of Idaho; TPS Termiska Processor AB; Proler International Corporation; Thermoselect Inc.; Battelle; Pedco Incorporated; and ThermoChem, Incorporated. Their processes recover heat directly, produce a fuel product, or produce a feedstock for chemical processes. The technologies are on the brink of commercial availability. This report evaluates, for each technology, several kinds of issues. Technical considerations were material balance, energy balance, plant thermal efficiency, and effect of feedstock contaminants. Environmental considerations were the regulatory context, and such things as composition, mass rate, and treatability of pollutants. Business issues were related to likelihood of commercialization. Finally, cost and economic issues such as capital and operating costs, and the refuse-derived fuel preparation and energy conversion costs, were considered. The final section of the report reviews and summarizes the information gathered during the study.
Goals of thermionic program for space power
NASA Technical Reports Server (NTRS)
English, R. E.
1981-01-01
The thermionic and Brayton reactor concepts were compared for application to space power. For a turbine inlet temperature of 1500 K the Brayton powerplant weighed 5 to 40% less than the thermionic concept. The out-of-core concept separates the thermionic converters from their reactor. Technical risks are diminished by: (1) moving the insulator out of the reactor; (2) allowing a higher thermal flux for the thermionic converters than is required of the reactor fuel; and (3) eliminating the threat that fuel swelling poses to the lifetime of the thermionic converters. Overall performance can be improved by including power processing in the system optimization and by directing design and technology effort toward more efficient, higher-temperature power processors. The thermionic reactors will be larger than those for competitive systems with higher conversion efficiency and lower reactor operating temperatures. It is concluded that although the effect of reactor size on shield weight will be modest for unmanned spacecraft, the penalty in shield weight will be large for manned or man-tended spacecraft.
Rectangular Array Of Digital Processors For Planning Paths
NASA Technical Reports Server (NTRS)
Kemeny, Sabrina E.; Fossum, Eric R.; Nixon, Robert H.
1993-01-01
Prototype 24 x 25 rectangular array of asynchronous parallel digital processors rapidly finds best path across two-dimensional field, which could be patch of terrain traversed by robotic or military vehicle. Implemented as single-chip very-large-scale integrated circuit. Excepting processors on edges, each processor communicates with four nearest neighbors along paths representing travel to north, south, east, and west. Each processor contains delay generator in form of 8-bit ripple counter, preset to 1 of 256 possible values. Operation begins with choice of processor representing starting point. It transmits signals to nearest-neighbor processors, which retransmit to their own neighbors, and process repeats until signals have propagated across entire field.
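A software analogue of the array's wavefront propagation can be written with Dijkstra's algorithm over per-cell delays (the grid and delay values below are illustrative, not the chip's actual configuration):

```python
import heapq

def shortest_path_cost(delays, start, goal):
    """Software analogue of the processor-array path planner: each
    cell has a traversal delay, 'signals' spread to the 4 nearest
    neighbours, and the first wavefront to reach a cell fixes its
    best cumulative delay, which is exactly Dijkstra's algorithm."""
    rows, cols = len(delays), len(delays[0])
    dist = {start: delays[start[0]][start[1]]}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue                      # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + delays[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return None

# Low-delay corridor through a field of slow (delay 9) cells
grid = [[1, 1, 9],
        [9, 1, 9],
        [9, 1, 1]]
cost = shortest_path_cost(grid, (0, 0), (2, 2))
```

The hardware gets this behaviour for free: the ripple-counter delays make the physical signal wavefront arrive first along the cheapest route, with no explicit priority queue.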
Molybdenum dioxide-based anode for solid oxide fuel cell applications
NASA Astrophysics Data System (ADS)
Kwon, Byeong Wan; Ellefson, Caleb; Breit, Joe; Kim, Jinsoo; Grant Norton, M.; Ha, Su
2013-12-01
The present paper describes the fabrication and performance of a molybdenum dioxide (MoO2)-based anode for liquid hydrocarbon/oxygenated hydrocarbon-fueled solid oxide fuel cells (SOFCs). These fuel cells first internally reform the complex liquid fuel into carbon fragments and hydrogen, which are then electrochemically oxidized to produce electrical energy without external fuel processors. The MoO2-based anode was fabricated onto an yttria-stabilized zirconia (YSZ) electrolyte via combined electrostatic spray deposition (ESD) and direct painting methods. The cell performance was measured by directly feeding liquid fuels such as n-dodecane (i.e., a model diesel/kerosene fuel) or biodiesel (i.e., a future biomass-based liquid fuel) to the MoO2-based anode at 850 °C. The maximum initial power densities obtained from our MoO2-based SOFC were 34 mW cm-2 and 45 mW cm-2 using n-dodecane and biodiesel, respectively. The initial power density of the MoO2-based SOFC was improved to 2500 mW cm-2 by optimizing the porosity of the MoO2-based anode. To test the long-term stability of the MoO2-based anode SOFC against coking, n-dodecane was continuously fed into the cell for 24 h at the open circuit voltage (OCV). During long-term testing, voltage-current density (V-I) plots were periodically obtained and showed no significant changes over the operation time. Microstructural examination of the tested cells indicated that the MoO2-based anode displayed negligible coke formation, which explains its stability. In contrast, SOFCs with conventional nickel (Ni)-based anodes under the same operating conditions showed a significant amount of coke formation on the metal surface, which led to a rapid drop in cell performance. Hence, the present work demonstrates that MoO2-based anodes exhibit outstanding tolerance to coke formation.
This result opens up the opportunity for more efficiently generating electrical energy from both existing transportation and next generation biomass-derived liquid fuels using liquid hydrocarbon/oxygenated hydrocarbon-fueled SOFCs.
Buffered coscheduling for parallel programming and enhanced fault tolerance
Petrini, Fabrizio [Los Alamos, NM; Feng, Wu-chun [Los Alamos, NM
2006-01-31
A computer implemented method schedules processor jobs on a network of parallel machine processors or distributed system processors. Control information communications generated by each process performed by each processor during a defined time interval are accumulated in buffers, where adjacent time intervals are separated by strobe intervals for a global exchange of control information. A global exchange of the control information communications at the end of each defined time interval is performed during an intervening strobe interval so that each processor is informed by all of the other processors of the number of incoming jobs to be received by each processor in a subsequent time interval. The buffered coscheduling method of this invention also enhances the fault tolerance of a network of parallel machine processors or distributed system processors.
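A toy software model of the buffered coscheduling idea can make the interval/strobe rhythm concrete. The function names and the `(src, dst)` message shape below are hypothetical; the patent describes networks of real machines, not this simplification. During each interval every processor buffers its outgoing control messages; at the strobe, a global exchange tells each processor how many jobs arrive in the next interval.

```python
def coschedule(pending, n_procs, intervals):
    """Toy model of buffered coscheduling.  `pending` maps an interval
    index to a list of (src, dst) control messages (a hypothetical
    shape chosen for illustration).  Returns, per interval, the
    incoming-job counts each processor learns at the strobe."""
    schedule = []
    for k in range(intervals):
        # Accumulate outgoing messages in per-processor buffers
        # during the interval (no communication yet).
        buffers = [[] for _ in range(n_procs)]
        for src, dst in pending.get(k, []):
            buffers[src].append(dst)
        # Strobe interval: global exchange of control information,
        # so every processor knows its incoming count for interval k+1.
        incoming = [0] * n_procs
        for buf in buffers:
            for dst in buf:
                incoming[dst] += 1
        schedule.append(incoming)
    return schedule
```

Because the exchange happens only at strobes, the data traffic itself can be scheduled with almost no per-message software overhead, which is the point of the method.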
Performance analysis of vortex based mixers for confined flows
NASA Astrophysics Data System (ADS)
Buschhagen, Timo
Hybrid rockets are still sparsely employed within major space and defense projects due to their relatively poor combustion efficiency and low fuel grain regression rate. Although hybrid rockets can claim advantages in safety, environmental and performance aspects over established solid and liquid propellant systems, the boundary layer combustion process and the diffusion-based mixing within a hybrid rocket grain port leave the core flow unmixed and limit the system performance. One principle used to enhance the mixing of gaseous flows is to induce streamwise vorticity. The counter-rotating vortex pair (CVP) mixer utilizes this principle and introduces two vortices into a confined flow, generating a stirring motion that transports near-wall media towards the core and vice versa. Recent studies investigated the velocity field introduced by this type of swirler. The current work evaluates the mixing performance of the CVP concept, using an experimental setup that simulates an axial primary pipe flow with a radially entering secondary flow. The primary flow is altered by the CVP swirler unit, so the setup emulates a hybrid rocket motor with a cylindrical single-port grain. In order to evaluate the mixing performance, the secondary flow concentration at the pipe assembly exit is measured using a pressure-sensitive-paint based procedure.
Stojić, A; Stojić, S Stanišić; Šoštarić, A; Ilić, L; Mijić, Z; Rajšić, S
2015-09-01
In this study, the concentrations of volatile organic compounds were measured by the use of proton transfer reaction mass spectrometry, together with NOx, NO, NO2, SO2, CO and PM10 and meteorological parameters in an urban area of Belgrade during winter 2014. The multivariate receptor model US EPA Unmix was applied to the obtained dataset, resolving six source profiles, which can be attributed to traffic-related emissions, gasoline evaporation/oil refineries, petrochemical industry/biogenic emissions, aged plumes, solid-fuel burning and local laboratories. Besides the vehicle exhaust, accounting for 27.6 % of the total mixing ratios, industrial emissions, which are present in three out of six resolved profiles, exert a significant impact on air quality in the urban area. The major contribution of regional and long-range transport was determined for source profiles associated with petrochemical industry/biogenic emissions (40 %) and gasoline evaporation/oil refineries (29 %) using trajectory sector analysis. The concentration-weighted trajectory model was applied with the aim of resolving the spatial distribution of potential distant sources, and the results indicated that emission sources from neighbouring countries, as well as from Slovakia, Greece, Poland and Scandinavian countries, significantly contribute to the observed concentrations.
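Receptor models such as Unmix and PMF share the additive-source assumption: the measured species matrix factors into non-negative source contributions times non-negative source profiles. The sketch below illustrates only that shared model, using plain multiplicative-update non-negative matrix factorization rather than Unmix's eigenvector-based algorithm; the matrices are invented toy data.

```python
import random

def nmf(X, k, iters=500, seed=0):
    """Minimal non-negative matrix factorization, X ~ W @ H, by
    Lee-Seung multiplicative updates.  W holds per-sample source
    contributions, H holds source (species) profiles.  This is an
    illustration of the additive-source model only, not the Unmix
    or PMF algorithm itself."""
    rng = random.Random(seed)
    n, m = len(X), len(X[0])
    W = [[rng.random() for _ in range(k)] for _ in range(n)]
    H = [[rng.random() for _ in range(m)] for _ in range(k)]

    def matmul(A, B):
        return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
                for row in A]

    def T(A):
        return [list(r) for r in zip(*A)]

    for _ in range(iters):
        WH = matmul(W, H)
        num, den = matmul(T(W), X), matmul(T(W), WH)   # H <- H * (W'X)/(W'WH)
        H = [[H[i][j] * num[i][j] / (den[i][j] + 1e-12) for j in range(m)]
             for i in range(k)]
        WH = matmul(W, H)
        num, den = matmul(X, T(H)), matmul(WH, T(H))   # W <- W * (XH')/(WHH')
        W = [[W[i][j] * num[i][j] / (den[i][j] + 1e-12) for j in range(k)]
             for i in range(n)]
    return W, H
```

The multiplicative form keeps every entry non-negative by construction, which is why the factors remain interpretable as physical contributions and profiles.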
NASA Technical Reports Server (NTRS)
Seale, R. H.
1979-01-01
The prediction of the SRB and ET impact areas requires six separate processors. The SRB impact prediction processor computes the impact areas and related trajectory data for each SRB element. Output from this processor is stored on a secure file accessible by the SRB impact plot processor, which generates the required plots. Similarly, the ET RTLS impact prediction processor and the ET RTLS impact plot processor generate the ET impact footprints for return-to-launch-site (RTLS) profiles. The ET nominal/AOA/ATO impact prediction processor and the ET nominal/AOA/ATO impact plot processor generate the ET impact footprints for non-RTLS profiles. The SRB and ET impact processors compute the size and shape of the impact footprints by tabular lookup in a stored footprint dispersion data base. The location of each footprint is determined by simulating a reference trajectory and computing the reference impact point location. To ensure consistency among all flight design system (FDS) users, much of the input required by these processors will be obtained from the FDS master data base.
Design of stationary PEFC system configurations to meet heat and power demands
NASA Astrophysics Data System (ADS)
Wallmark, Cecilia; Alvfors, Per
This paper presents heat and power efficiencies of a modeled PEFC system and the methods used to create the system configuration. The paper also includes an example of a simulated fuel cell system supplying a building in Sweden with heat and power. The main method used to create an applicable fuel cell system configuration is pinch technology. This technology is used to evaluate and design a heat exchanger network for a PEFC system working under stationary conditions, in order to find a solution with high heat utilization. The heat exchanger network connecting the reformer, the burner, gas cleaning, hot-water storage and the PEFC stack affects the heat transferred to the hot-water storage and thereby the heating of the building. The fuel, natural gas, is reformed to a hydrogen-rich gas within a slightly pressurized system. The fuel processor investigated uses steam reforming followed by high- and low-temperature shift reactors and preferential oxidation. The system is connected to the electrical grid for backup and peak demands and to a hot-water storage to meet the varying heat demand of the building. The procedure for designing the fuel cell system installation as a co-generation system is described, and the system is simulated for a specific building in Sweden over 1 year. The results show that the fuel cell system in combination with a burner and hot-water storage could supply the building with the required heat without exceeding any of the given limitations. The designed co-generation system will provide the building with most of its power requirements and would further generate income through the sale of electricity to the power grid.
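The pinch-technology step the paper relies on can be illustrated with the classic problem-table (heat-cascade) algorithm: shift stream temperatures by half of the minimum approach temperature, cascade the interval heat surpluses and deficits downward, and read off the minimum utilities and the pinch. The sketch below uses a standard four-stream textbook example, not the paper's PEFC stream data.

```python
def problem_table(hot, cold, dt_min):
    """Problem-table sketch of pinch analysis.  Streams are
    (supply_T, target_T, CP) tuples; CP is heat-capacity flowrate.
    Returns (min hot utility, min cold utility, shifted pinch T).
    Stream values in the test are illustrative textbook data, not
    taken from the paper."""
    half = dt_min / 2.0
    shifted = [(s - half, t - half, cp) for s, t, cp in hot]      # hot shifted down
    shifted += [(s + half, t + half, -cp) for s, t, cp in cold]   # cold shifted up
    bounds = sorted({T for s, t, _ in shifted for T in (s, t)}, reverse=True)
    cascade, heat = [0.0], 0.0
    for hi, lo in zip(bounds, bounds[1:]):
        # Net CP of all streams spanning this temperature interval.
        net_cp = sum(cp for s, t, cp in shifted
                     if min(s, t) <= lo and max(s, t) >= hi)
        heat += net_cp * (hi - lo)          # surplus (+) or deficit (-)
        cascade.append(heat)
    q_hot = max(0.0, -min(cascade))         # minimum hot utility
    q_cold = cascade[-1] + q_hot            # minimum cold utility
    pinch = bounds[cascade.index(min(cascade))]   # shifted pinch temperature
    return q_hot, q_cold, pinch
```

For the classic four-stream example in the test, the cascade gives a minimum hot utility of 7.5, a minimum cold utility of 10, and a shifted pinch of 145, i.e. 150 on the hot side and 140 on the cold side at ΔTmin = 10.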
Sahu, Manoranjan; Hu, Shaohua; Ryan, Patrick H; Le Masters, Grace; Grinshpun, Sergey A; Chow, Judith C; Biswas, Pratim
2011-06-01
Exposure to traffic-related pollution during childhood has been associated with asthma exacerbation and asthma incidence. The objective of the Cincinnati Childhood Allergy and Air Pollution Study (CCAAPS) is to determine if the development of allergic and respiratory disease is associated with exposure to diesel engine exhaust particles. A detailed receptor-model analysis was undertaken by applying positive matrix factorization (PMF) and UNMIX receptor models to two PM₂.₅ data sets: one consisting of two carbon fractions and the other of eight temperature-resolved carbon fractions. Based on the source profiles resolved from the analyses, markers of traffic-related air pollution were estimated: the elemental carbon attributed to traffic (ECAT) and the elemental carbon attributed to diesel vehicle emission (ECAD). Application of UNMIX to the two data sets generated four source factors: combustion-related sulfate, traffic, metal processing and soil/crustal. The PMF application generated six source factors from analyzing two carbon fractions and seven factors from the temperature-resolved eight carbon fractions. The source factors (with source contribution estimates by mass concentration in parentheses) are: combustion sulfate (46.8%), vegetative burning (15.8%), secondary sulfate (12.9%), diesel vehicle emission (10.9%), metal processing (7.5%), gasoline vehicle emission (5.6%) and soil/crustal (0.7%). Diesel and gasoline vehicle emission sources were separated using eight temperature-resolved organic and elemental carbon fractions. Application of PMF to both data sets also differentiated the sulfate-rich source from the vegetative burning source, which are combined in a single factor by UNMIX modeling. Calculated ECAT and ECAD values at different locations indicated that traffic source impacts depend on factors such as traffic volume, meteorological parameters, and the mode of vehicle operation, apart from the proximity of the sites to highways.
The difference in ECAT and ECAD, however, was less than one standard deviation. Thus, a cost-benefit consideration should inform the choice between the eight- and two-carbon-fraction approaches.
NASA Astrophysics Data System (ADS)
Jawin, E. R.; Head, J. W., III; Cannon, K.
2017-12-01
The Aristarchus pyroclastic deposit in central Oceanus Procellarum is understood to have formed in a gas-rich explosive volcanic eruption, and has been observed to contain abundant volcanic glass. However, the interpreted color (and therefore composition) of the glass has been debated. In addition, previous analyses of the pyroclastic deposit have been performed using lower resolution data than are currently available. In this work, a nonlinear spectral unmixing model was applied to Moon Mineralogy Mapper (M3) data of the Aristarchus plateau to investigate the detailed mineralogic and crystalline nature of the Aristarchus pyroclastic deposit by using spectra of laboratory endmembers including a suite of volcanic glasses returned from the Apollo 15 and 17 missions (green, orange, black beads), as well as synthetic lunar glasses (orange, green, red, yellow). Preliminary results of the M3 unmixing model suggest that spectra of the pyroclastic deposit can be modeled by a mixture composed predominantly of a featureless endmember approximating space weathering and a smaller component of glass. The modeled spectra were most accurate with a synthetic orange glass endmember, relative to the other glasses analyzed in this work. The results confirm that there is a detectable component of glass in the Aristarchus pyroclastic deposit which may be similar to the high-Ti orange glass seen in other regional pyroclastic deposits, with only minimal contributions of other crystalline minerals. The presence of volcanic glass in the pyroclastic deposit, with the low abundance of crystalline material, would support the model that the Aristarchus pyroclastic deposit formed in a long-duration, hawaiian-style fire fountain eruption. 
The absence of any significant detection of devitrified black beads in the spectral modeling results (such beads were observed at the Apollo 17 landing site in the Taurus-Littrow pyroclastic deposit) suggests that the optical density of the eruptive plume remained low throughout the eruption.
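For two endmembers, linear unmixing against a featureless background reduces to a one-parameter least-squares fit. The sketch below is only an illustration of that idea, with invented spectra; the study itself applies a nonlinear model with several laboratory glass endmembers to M3 data. It recovers the fraction f in mixed = f*glass + (1-f)*background.

```python
def unmix_fraction(mixed, endmember, background):
    """One-parameter linear unmixing: model the observed spectrum as
    f*endmember + (1-f)*background and solve for f in closed form by
    least squares.  Spectra are plain lists of reflectance values
    (illustrative data only)."""
    # Minimize sum((mixed - b) - f*(g - b))^2 over f:
    num = sum((m - b) * (g - b)
              for m, g, b in zip(mixed, endmember, background))
    den = sum((g - b) ** 2 for g, b in zip(endmember, background))
    return num / den
```

In the noise-free case the estimate is exact, because the residual spectrum (mixed minus background) is by construction f times the endmember-minus-background spectrum.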
Estimating urban vegetation fraction across 25 cities in pan-Pacific using Landsat time series data
NASA Astrophysics Data System (ADS)
Lu, Yuhao; Coops, Nicholas C.; Hermosilla, Txomin
2017-04-01
Urbanization globally is consistently reshaping the natural landscape to accommodate the growing human population. Urban vegetation plays a key role in moderating environmental impacts caused by urbanization and is critically important for local economic, social and cultural development. The differing patterns of human population growth and the varying urban structures and development stages result in highly varied spatial and temporal vegetation patterns, particularly in the pan-Pacific region, which has some of the fastest urbanization rates globally. Yet spatially explicit temporal information on the amount and change of urban vegetation is rarely documented, particularly in less developed nations. Remote sensing offers an exceptional data source and a unique perspective for mapping urban vegetation and its change due to its consistency and ubiquitous nature. In this research, we assess the vegetation fractions of 25 cities across 12 pan-Pacific countries using annual gap-free Landsat surface reflectance products acquired from 1984 to 2012, using sub-pixel spectral unmixing approaches. Vegetation change trends were then analyzed using Mann-Kendall statistics and Theil-Sen slope estimators. Unmixing results successfully mapped urban vegetation for pixels located in urban parks, forested mountainous regions, as well as agricultural land (correlation coefficients ranging from 0.66 to 0.77). The greatest vegetation loss from 1984 to 2012 was found in Shanghai, Tianjin, and Dalian in China. In contrast, cities including Vancouver (Canada) and Seattle (USA) showed stable vegetation trends through time. Using temporal trend analysis, our results suggest that it is possible to reduce noise and outliers caused by phenological changes, particularly in cropland, using dense new Landsat time series approaches.
We conclude that simple yet effective approaches of unmixing Landsat time series data for assessing spatial and temporal changes of urban vegetation at regional scales can provide critical information for urban planners and anthropogenic studies globally.
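The Mann-Kendall statistic and Theil-Sen slope estimator used for the trend analysis above are simple to state. A plain-Python sketch (evenly spaced yearly samples assumed, no significance test included):

```python
def mann_kendall_s(y):
    """Mann-Kendall S statistic: count of concordant minus discordant
    pairs; positive for an increasing trend, negative for decreasing."""
    return sum((y[j] > y[i]) - (y[j] < y[i])
               for i in range(len(y)) for j in range(i + 1, len(y)))

def theil_sen(y):
    """Theil-Sen estimator: the median of all pairwise slopes, with
    the sample index as the x coordinate.  Robust to outliers such as
    phenology-driven spikes in a vegetation-fraction time series."""
    slopes = sorted((y[j] - y[i]) / (j - i)
                    for i in range(len(y)) for j in range(i + 1, len(y)))
    n, mid = len(slopes), len(slopes) // 2
    return slopes[mid] if n % 2 else (slopes[mid - 1] + slopes[mid]) / 2
```

Because the median of pairwise slopes ignores a minority of aberrant points, Theil-Sen gives stable per-pixel trends even when individual years are noisy, which is exactly the property the paper exploits.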
Chen, L-W Antony; Watson, John G; Chow, Judith C; DuBois, Dave W; Herschberger, Lisa
2011-11-01
Chemical mass balance (CMB) and trajectory receptor models were applied to speciated particulate matter with aerodynamic diameter ≤2.5 μm (PM 2.5 ) measurements from Speciation Trends Network (STN; part of the Chemical Speciation Network [CSN]) and Interagency Monitoring of Protected Visual Environments (IMPROVE) monitoring network across the state of Minnesota as part of the Minnesota PM 2.5 Source Apportionment Study (MPSAS). CMB equations were solved by the Unmix, positive matrix factorization (PMF), and effective variance (EV) methods, giving collective source contribution and uncertainty estimates. Geological source profiles developed from local dust materials were either incorporated into the EV-CMB model or used to verify factors derived from Unmix and PMF. Common sources include soil dust, calcium (Ca)-rich dust, diesel and gasoline vehicle exhausts, biomass burning, secondary sulfate, and secondary nitrate. Secondary sulfate and nitrate aerosols dominate PM 2.5 mass (50-69%). Mobile sources outweigh area sources at urban sites, and vice versa at rural sites due to traffic emissions. Gasoline and diesel contributions can be separated using data from the STN, despite significant uncertainties. Major differences between MPSAS and earlier studies on similar environments appear to be the type and magnitude of stationary sources, but these sources are generally minor (<7%) in this and other studies. Ensemble back-trajectory analysis shows that the lower Midwestern states are the predominant source region for secondary ammoniated sulfate in Minnesota. It also suggests substantial contributions of biomass burning and soil dust from out-of-state on occasions, although a quantitative separation of local and regional contributions was not achieved in the current study. Supplemental materials are available for this article. 
Go to the publisher's online edition of the Journal of the Air & Waste Management Association for a summary of input data, Unmix and PMF factor profiles, and additional maps.
Zhou, Liqing; Lu, Jia; Chen, Guopeng; Dong, Li; Yao, Yujia
2017-01-01
Background/Study Context: Socioemotional selectivity theory (SST) states that the positivity effect is a result of older adults' emotion regulation and that older adults derive more emotional satisfaction from prioritizing positive information processing. The authors explored whether the positivity effect appeared when the negative aging stereotype was activated in older adults and also whether the effect differed between mixed and unmixed valence conditions. Sixty younger (18-23 years of age) and 60 older (60-87 years of age) adults were randomly assigned to a control group and a priming group, in which the negative aging stereotype was activated. All the participants were asked to select 15 words that best described the elderly from a mixed-word list (positive and negative words were mixed together) and from an unmixed-word list (positive and negative words were separated). Older adults in the control group selected more positive words, whereas among younger adults, selection did not differ by valence in either the mixed- or unmixed-word list conditions. There were no differences between the positive and negative word choices of the younger and older adults in the priming group. We calculated the differences between the numbers of positive and negative words, and the differences in the older adults' word choices were larger than those among the younger adults; the differences were also larger in the control group than in the priming group. The positivity effect worked by choosing positive stimuli rather than avoiding negative stimuli. The role of emotion regulation in older adults was limited, and when the positivity effect faced the effect of the negative aging stereotype, the negative stereotype effect was dominant. Future research should explore the changes in the positivity effect in the face of a positive aging stereotype and what roles other factors (e.g., activation level of the stereotype, arousal level of affective words) might play.
Aldega, L.; Eberl, D.D.
2005-01-01
Illite crystals in siliciclastic sediments are heterogeneous assemblages of detrital material coming from various source rocks and, at paleotemperatures >70 °C, of superimposed diagenetic modification in the parent sediment. We distinguished the relative proportions of 2M1 detrital illite and possible diagenetic 1Md + 1M illite by a combined analysis of crystal-size distribution and illite polytype quantification. We found that the proportions of 1Md + 1M and 2M1 illite could be determined from crystallite thickness measurements (BWA method, using the MudMaster program) by unmixing measured crystallite thickness distributions using theoretical and calculated log-normal and/or asymptotic distributions. The end-member components that we used to unmix the measured distributions were three asymptotic-shaped distributions (assumed to be the diagenetic component of the mixture, the 1Md + 1M polytypes) calculated using the Galoper program (Phase A was simulated using 500 crystals per cycle of nucleation and growth, Phase B = 333/cycle, and Phase C = 250/cycle), and one theoretical log-normal distribution (Phase D, assumed to approximate the detrital 2M1 component of the mixture). In addition, quantitative polytype analysis was carried out using the RockJock software for comparison. The two techniques gave comparable results (r2 = 0.93), which indicates that the unmixing method permits one to calculate the proportion of illite polytypes and, therefore, the proportion of 2M1 detrital illite, from crystallite thickness measurements. The overall illite crystallite thicknesses in the samples were found to be a function of the relative proportions of thick 2M1 and thin 1Md + 1M illite. The percentage of illite layers in I-S mixed layers correlates with the mean crystallite thickness of the 1Md + 1M polytypes, indicating that these polytypes, rather than the 2M1 polytype, participate in I-S mixed layering.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuntanoo, K., E-mail: thip-kk@hotmail.com; Promkotra, S., E-mail: sarunya@kku.ac.th; Kaewkannetra, P., E-mail: paknar@kku.ac.th
A biopolymer, polyhydroxybutyrate-co-hydroxyvalerate (PHBV), is blended with a bio-based material, natural rubber latex, to improve its microstructure. Various ratios of PHBV to natural rubber latex are examined to improve the mechanical properties. In general, PHBV is hard, brittle and of low flexibility, while natural rubber (NR) is a highly elastic material. PHBV solutions are prepared at concentrations of 1%, 2% and 3% (w/v). The PHBV solutions and natural rubber latex are cast into blended films at three ratios of 4:6, 5:5 and 6:4, respectively. The films are characterized by scanning electron microscopy (SEM), universal testing machine (UTM) measurements and differential scanning calorimetry (DSC). SEM photomicrographs of the blended films and the PHBV control show void distributions in the ranges of 12-14% and 19-21%, respectively. For the mechanical properties of the blended films, the average elastic moduli at 1%, 2% and 3% (w/v) PHBV are 773, 956 and 1,007 kPa, respectively. The tensile strengths of the blends increase with increasing PHBV concentration, a trend similar to that of the elastic modulus. The crystallization and melting behavior of unmixed PHBV and of the blends are determined by DSC. The melting transition temperatures (T{sub m}) of the unmixed PHBV show two melting peaks, at 154°C and 173°C, while the melting peaks of the blends shift to the ranges of 152-156°C and 168-171°C, respectively. Regarding morphology, the void distribution of the blends decreases to about half that of the unmixed PHBV. The mechanical-property and thermal-analysis results indicate that blending makes PHBV more resilient over a wider range of temperatures than the pure polymer.
Thermo-mechanical properties of carbon nanotubes and applications in thermal management
NASA Astrophysics Data System (ADS)
Nguyen, Manh Hong; Thang Bui, Hung; Trinh Pham, Van; Phan, Ngoc Hong; Nguyen, Tuan Hong; Chuc Nguyen, Van; Quang Le, Dinh; Khoi Phan, Hong; Phan, Ngoc Minh
2016-06-01
Thanks to their very high thermal conductivity, high Young's modulus and unique tensile strength, carbon nanotubes (CNTs) have become one of the most suitable nano additives for heat conductive materials. In this work, we present results obtained for the synthesis of CNT-based heat conductive materials: thermal greases, nanoliquids and lubricating oils. These synthesized heat conductive materials were applied to thermal management for high power electronic devices (CPUs, LEDs) and internal combustion engines. The simulation and experimental results on thermal greases for an Intel Pentium IV processor showed that the thermal conductivity of the grease increased by a factor of 1.4 and the saturation temperature of the CPU decreased by 5 °C when using thermal grease containing 2 wt% CNTs. Nanoliquids containing CNTs in distilled water/ethylene glycol were successfully applied to heat dissipation for an Intel Core i5 processor and a 450 W floodlight LED. The experimental results showed that the saturation temperature of the Intel Core i5 processor and the 450 W floodlight LED decreased by about 6 °C and 3.5 °C, respectively, when using nanoliquids containing 1 g l-1 of CNTs. The CNTs were also effectively utilized as additive materials in lubricating oils to improve the thermal conductivity, heat dissipation efficiency and performance efficiency of engines. The experimental results show that the thermal conductivity of the lubricating oil increased by 12.5%, fuel consumption fell by 15%, and the longevity of the lubricating oil increased up to 20 000 km when using 0.1 vol% CNTs in the lubricating oil. All the above results confirm the tremendous application potential of CNT-based heat conductive materials in thermal management for high power electronic devices, internal combustion engines and other high power apparatus.
Coding, testing and documentation of processors for the flight design system
NASA Technical Reports Server (NTRS)
1980-01-01
The general functional design and implementation of processors for a space flight design system are briefly described. Discussions of a basetime initialization processor; conic, analytical, and precision coasting flight processors; and an orbit lifetime processor are included. The functions of several utility routines are also discussed.
The computational structural mechanics testbed generic structural-element processor manual
NASA Technical Reports Server (NTRS)
Stanley, Gary M.; Nour-Omid, Shahram
1990-01-01
The usage and development of structural finite element processors based on the CSM Testbed's Generic Element Processor (GEP) template is documented. By convention, such processors have names of the form ESi, where i is an integer. This manual is therefore intended for both Testbed users who wish to invoke ES processors during the course of a structural analysis, and Testbed developers who wish to construct new element processors (or modify existing ones).
NASA Technical Reports Server (NTRS)
Fijany, Amir (Inventor); Bejczy, Antal K. (Inventor)
1994-01-01
In a computer having a large number of single-instruction multiple data (SIMD) processors, each of the SIMD processors has two sets of three individual processor elements controlled by a master control unit and interconnected among a plurality of register file units where data is stored. The register files input and output data in synchronism with a minor cycle clock under control of two slave control units controlling the register file units connected to respective ones of the two sets of processor elements. Depending upon which ones of the register file units are enabled to store or transmit data during a particular minor clock cycle, the processor elements within an SIMD processor are connected in rings or in pipeline arrays, and may exchange data with the internal bus or with neighboring SIMD processors through interface units controlled by respective ones of the two slave control units.
Karasick, Michael S.; Strip, David R.
1996-01-01
A parallel computing system is described that comprises a plurality of uniquely labeled, parallel processors, each processor capable of modelling a three-dimensional object that includes a plurality of vertices, faces and edges. The system comprises a front-end processor for issuing a modelling command to the parallel processors, relating to a three-dimensional object. Each parallel processor, in response to the command and through the use of its own unique label, creates a directed-edge (d-edge) data structure that uniquely relates an edge of the three-dimensional object to one face of the object. Each d-edge data structure at least includes vertex descriptions of the edge and a description of the one face. As a result, each processor, in response to the modelling command, operates upon a small component of the model and generates results, in parallel with all other processors, without the need for processor-to-processor intercommunication.
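The directed-edge idea (each d-edge ties one edge of the solid to exactly one face) can be sketched as a small data structure. The Python illustration below is sequential and hypothetical; the patent's point is that each uniquely labeled processor builds its d-edges in parallel without processor-to-processor intercommunication. Here the polyhedron is given as per-face vertex loops.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DEdge:
    """Directed edge relating one edge of a solid to exactly one face."""
    tail: int   # vertex id at the edge's origin
    head: int   # vertex id at the edge's end
    face: int   # the single face this d-edge belongs to

def d_edges(faces):
    """Build all d-edges of a polyhedron from per-face vertex loops.
    Each face loop is processed independently, mirroring how each
    parallel processor could handle its own component of the model."""
    out = []
    for fid, loop in enumerate(faces):
        # Walk the face boundary, closing the loop back to the start.
        for a, b in zip(loop, loop[1:] + loop[:1]):
            out.append(DEdge(a, b, fid))
    return out
```

With consistently oriented faces, every geometric edge yields exactly two d-edges with opposite orientations (one per adjacent face); for a tetrahedron with four triangular faces that is 12 d-edges in total.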
Switch for serial or parallel communication networks
Crosette, D.B.
1994-07-19
A communication switch apparatus and a method for use in a geographically extensive serial, parallel or hybrid communication network linking a multi-processor or parallel processing system has a very low software processing overhead in order to accommodate random bursts of high density data. Associated with each processor is a communication switch. A data source and a data destination, a sensor suite or robot for example, may also be associated with a switch. The configuration of the switches in the network is coordinated through a master processor node and depends on the operational phase of the multi-processor network: data acquisition, data processing, and data exchange. The master processor node passes information on the state to be assumed by each switch to the processor node associated with the switch. The processor node then operates a series of multi-state switches internal to each communication switch. The communication switch does not parse and interpret communication protocol and message routing information. During a data acquisition phase, the communication switch couples sensors producing data to the processor node associated with the switch, to a downlink destination on the communications network, or to both. It also may couple an uplink data source to its processor node. During the data exchange phase, the switch couples its processor node or an uplink data source to a downlink destination (which may include a processor node or a robot), or couples an uplink source to its processor node and its processor node to a downlink destination.
Unmixing the Mixing Cost: Contributions from Dimensional Relevance and Stimulus-Response Suppression
ERIC Educational Resources Information Center
Mari-Beffa, Paloma; Cooper, Stephen; Houghton, George
2012-01-01
When participants repeat the same task in a context in which the task may also switch (a mixed block), performance deteriorates compared to when there is only one task repeating (a pure block). Three experiments were designed to assess how perceptual and motor transitions influenced this mixing cost. Experiment 1 provided three pure block…
A cross-comparison of field, spectral, and lidar estimates of forest canopy cover
Alistair M. S. Smith; Michael J. Falkowski; Andrew T. Hudak; Jeffrey S. Evans; Andrew P. Robinson; Caiti M. Steele
2010-01-01
A common challenge when comparing forest canopy cover and similar metrics across different ecosystems is that there are many field- and landscape-level measurement methods. This research conducts a cross-comparison and evaluation of forest canopy cover metrics produced using unmixing of reflective spectral satellite data, light detection and ranging (lidar) data, and...
Biomass and health based forest cover delineation using spectral un-mixing
Mohan Tiruveedhula; Joseph Fan; Ravi R. Sadasivuni; Surya S. Durbha; David L. Evans
2009-01-01
Remote sensing is a well-suited source of information on various forest characteristics such as forest cover type, leaf area, biomass, and health. The use of appropriate layers helps to quantify the variables of interest. For example, normalized difference vegetation index (NDVI) and greenness help explain variability in biomass as well as health of forests....
7 CFR 201.60 - Purity percentages.
Code of Federal Regulations, 2014 CFR
2014-01-01
... (2) mixtures in which the particle-weight ratio is 1:1 to 1.49:1, inclusive. Tolerances for... Component of a Purity Analysis for (1) Unmixed Seed or (2) Mixed Seed in Which the Particle Weight Ratio Is... particle-weight ratio is 1.5:1 to 20:1 and beyond: The symbols used in the formula are as follows: T...
7 CFR 201.60 - Purity percentages.
Code of Federal Regulations, 2011 CFR
2011-01-01
... (2) mixtures in which the particle-weight ratio is 1:1 to 1.49:1, inclusive. Tolerances for... Component of a Purity Analysis for (1) Unmixed Seed or (2) Mixed Seed in Which the Particle Weight Ratio Is... particle-weight ratio is 1.5:1 to 20:1 and beyond: The symbols used in the formula are as follows: T...
7 CFR 201.60 - Purity percentages.
Code of Federal Regulations, 2012 CFR
2012-01-01
... (2) mixtures in which the particle-weight ratio is 1:1 to 1.49:1, inclusive. Tolerances for... Component of a Purity Analysis for (1) Unmixed Seed or (2) Mixed Seed in Which the Particle Weight Ratio Is... particle-weight ratio is 1.5:1 to 20:1 and beyond: The symbols used in the formula are as follows: T...
7 CFR 201.60 - Purity percentages.
Code of Federal Regulations, 2013 CFR
2013-01-01
... (2) mixtures in which the particle-weight ratio is 1:1 to 1.49:1, inclusive. Tolerances for... Component of a Purity Analysis for (1) Unmixed Seed or (2) Mixed Seed in Which the Particle Weight Ratio Is... particle-weight ratio is 1.5:1 to 20:1 and beyond: The symbols used in the formula are as follows: T...
Conditions for space invariance in optical data processors used with coherent or noncoherent light.
Arsenault, H R
1972-10-01
The conditions for space invariance in coherent and noncoherent optical processors are considered. All linear optical processors are shown to belong to one of two types. The conditions for space invariance are more stringent for noncoherent processors than for coherent processors, so that a system that is linear in coherent light may be nonlinear in noncoherent light. However, any processor that is linear in noncoherent light is also linear in the coherent limit.
Broadcasting collective operation contributions throughout a parallel computer
Faraj, Ahmad [Rochester, MN
2012-02-21
Methods, systems, and products are disclosed for broadcasting collective operation contributions throughout a parallel computer. The parallel computer includes a plurality of compute nodes connected together through a data communications network. Each compute node has a plurality of processors for use in collective parallel operations on the parallel computer. Broadcasting collective operation contributions throughout a parallel computer according to embodiments of the present invention includes: transmitting, by each processor on each compute node, that processor's collective operation contribution to the other processors on that compute node using intra-node communications; and transmitting on a designated network link, by each processor on each compute node according to a serial processor transmission sequence, that processor's collective operation contribution to the other processors on the other compute nodes using inter-node communications.
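A minimal sketch (not the patented implementation) of the two-phase broadcast described above: contributions first spread intra-node, then travel inter-node one processor at a time over the designated link, after which every processor holds every contribution.

```python
def broadcast_contributions(nodes):
    """nodes[i][j] = contribution of processor j on compute node i.
    Returns, per processor, the complete set of contributions."""
    # Phase 1: intra-node -- every processor on a node learns its peers' values.
    local_views = [sorted(procs) for procs in nodes]
    # Phase 2: inter-node -- processors transmit one at a time on the shared
    # link (the serial processor transmission sequence); after the full
    # sequence every node has every contribution.
    link_traffic = [c for view in local_views for c in view]
    final_view = sorted(link_traffic)
    return [[list(final_view) for _ in procs] for procs in nodes]
```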
LANDSAT-D flight segment operations manual. Appendix B: OBC software operations
NASA Technical Reports Server (NTRS)
Talipsky, R.
1981-01-01
The LANDSAT 4 satellite contains two NASA standard spacecraft computers and 65,536 words of memory. Onboard computer software is divided into flight executive and applications processors. Both applications processors and the flight executive use one or more of 67 system tables to obtain variables, constants, and software flags. Output from the software for monitoring operation is via 49 OBC telemetry reports subcommutated in the spacecraft telemetry. Information is provided about the flight software as it is used to control the various spacecraft operations and interpret operational OBC telemetry. Processor function descriptions, processor operation, software constraints, processor system tables, processor telemetry, and processor flow charts are presented.
NASA Astrophysics Data System (ADS)
Pruhs, Kirk
A particularly important emergent technology is heterogeneous processors (or cores), which many computer architects believe will be the dominant architectural design in the future. The main advantage of a heterogeneous architecture, relative to an architecture of identical processors, is that it allows for the inclusion of processors whose design is specialized for particular types of jobs, and for jobs to be assigned to a processor best suited for that job. Most notably, it is envisioned that these heterogeneous architectures will consist of a small number of high-power high-performance processors for critical jobs, and a larger number of lower-power lower-performance processors for less critical jobs. Naturally, the lower-power processors would be more energy efficient in terms of the computation performed per unit of energy expended, and would generate less heat per unit of computation. For a given area and power budget, heterogeneous designs can give significantly better performance for standard workloads. Moreover, even processors that were designed to be homogeneous are increasingly likely to be heterogeneous at run time: the dominant underlying cause is the increasing variability in the fabrication process as the feature size is scaled down (although run time faults will also play a role). Since manufacturing yields would be unacceptably low if every processor/core were required to be perfect, and since there would be significant performance loss from derating the entire chip to the functioning of the least functional processor (which is what would be required in order to attain processor homogeneity), some processor heterogeneity seems inevitable in chips with many processors/cores.
Multi-Core Processor Memory Contention Benchmark Analysis Case Study
NASA Technical Reports Server (NTRS)
Simon, Tyler; McGalliard, James
2009-01-01
Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.
Simulink/PARS Integration Support
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vacaliuc, B.; Nakhaee, N.
2013-12-18
The state of the art for signal processor hardware has far out-paced the development tools for placing applications on that hardware. In addition, signal processors are available in a variety of architectures, each uniquely capable of handling specific types of signal processing efficiently. With these processors becoming smaller and demanding less power, it has become possible to group multiple processors, a heterogeneous set of processors, into single systems. Different portions of the desired problem set can be assigned to different processor types as appropriate. As software development tools do not keep pace with these processors, especially when multiple processors of different types are used, a method is needed to enable software code portability among multiple processors and multiple types of processors along with their respective software environments. Sundance DSP, Inc. has developed a software toolkit called “PARS”, whose objective is to provide a framework that uses suites of tools provided by different vendors, along with modeling tools and a real time operating system, to build an application that spans different processor types. The software language used to express the behavior of the system is a very high level modeling language, “Simulink”, a MathWorks product. ORNL has used this toolkit to effectively implement several deliverables. This CRADA describes this collaboration between ORNL and Sundance DSP, Inc.
NASA Astrophysics Data System (ADS)
Esepkina, N. A.; Lavrov, A. P.; Anan'ev, M. N.; Blagodarnyi, V. S.; Ivanov, S. I.; Mansyrev, M. I.; Molodyakov, S. A.
1995-10-01
Two new types of optoelectronic radio-signal processors were investigated. Charge-coupled device (CCD) photodetectors are used in these processors under continuous scanning conditions, i.e. in a time delay and storage mode. One of these processors is based on a CCD photodetector array with a reference-signal amplitude transparency and the other is an adaptive acousto-optical signal processor with linear frequency modulation. The processor with the transparency performs multichannel discrete-analogue convolution of an input signal with a corresponding kernel of the transformation determined by the transparency. If a light source is an array of light-emitting diodes of special (stripe) geometry, the optical stages of the processor can be made from optical fibre components and the whole processor then becomes a rigid 'sandwich' (a compact hybrid optoelectronic microcircuit). A report is given also of a study of a prototype processor with optical fibre components for the reception of signals from a system with antenna aperture synthesis, which forms a radio image of the Earth.
Karasick, M.S.; Strip, D.R.
1996-01-30
A parallel computing system is described that comprises a plurality of uniquely labeled, parallel processors, each processor capable of modeling a three-dimensional object that includes a plurality of vertices, faces and edges. The system comprises a front-end processor for issuing a modeling command to the parallel processors, relating to a three-dimensional object. Each parallel processor, in response to the command and through the use of its own unique label, creates a directed-edge (d-edge) data structure that uniquely relates an edge of the three-dimensional object to one face of the object. Each d-edge data structure at least includes vertex descriptions of the edge and a description of the one face. As a result, each processor, in response to the modeling command, operates upon a small component of the model and generates results, in parallel with all other processors, without the need for processor-to-processor intercommunication. 8 figs.
Shared performance monitor in a multiprocessor system
Chiu, George; Gara, Alan G.; Salapura, Valentina
2012-07-24
A performance monitoring unit (PMU) and method for monitoring performance of events occurring in a multiprocessor system. The multiprocessor system comprises a plurality of processor units, each processor unit generating signals representing occurrences of events in that unit, and a single shared counter resource for performance monitoring. The performance monitor unit is shared by all processor cores in the multiprocessor system. The PMU comprises: a plurality of performance counters, each for counting signals representing occurrences of events from one or more of the plurality of processor units in the multiprocessor system; and a plurality of input devices for receiving the event signals from one or more of the plurality of processor units, the plurality of input devices programmable to select event signals for receipt by one or more of the plurality of performance counters for counting, wherein the PMU is shared between multiple processing units, or within a group of processors in the multiprocessing system. The PMU is further programmed to monitor event signals issued from non-processor devices.
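The shared-counter idea above can be sketched as programmable input selectors that route event signals from many cores into a small pool of counters. The class and event names below are invented for illustration; the patent specifies hardware, not software.

```python
class SharedPMU:
    """Toy model of a shared performance monitoring unit."""

    def __init__(self, n_counters):
        self.counters = [0] * n_counters
        self.select = {}  # counter index -> set of (core, event) it counts

    def program(self, counter, core, event):
        """Program an input selector to route (core, event) to a counter."""
        self.select.setdefault(counter, set()).add((core, event))

    def signal(self, core, event):
        """Deliver an event signal; only programmed counters increment."""
        for counter, sources in self.select.items():
            if (core, event) in sources:
                self.counters[counter] += 1
```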
Analysis on electronic control unit of continuously variable transmission
NASA Astrophysics Data System (ADS)
Cao, Shuanggui
A continuously variable transmission (CVT) system can ensure that the engine works along the line of best fuel economy, improving fuel economy, saving fuel and reducing harmful gas emissions. At the same time, a continuously variable transmission makes changes in vehicle speed smoother and improves ride comfort. Although CVT technology has developed greatly, it still has many shortcomings: the CVT systems of ordinary vehicles suffer from low efficiency, poor starting performance, low transmitted power, less-than-ideal control and high cost, among other issues. Therefore, many scholars have begun to study new types of continuously variable transmission. A transmission controlled by electronic systems can achieve automatic control of power transmission and give full play to the characteristics of the engine to achieve optimal control of the powertrain, so that the vehicle always travels near its best condition. The electronic control unit is composed of a core processor, input and output circuit modules and other auxiliary circuit modules. The input module collects and processes the many signals sent by sensors, such as the throttle angle, brake signal, engine speed signal, speed signals of the transmission input and output shafts, manual shift signals, mode selection signals, gear position signal and speed ratio signal, so as to provide the corresponding processing for the controller core.
Vehicle safety telemetry for automated highways
NASA Technical Reports Server (NTRS)
Hansen, G. R.
1977-01-01
The emphasis in current, automatic vehicle testing and diagnosis is primarily centered on the proper operation of the engine. Lateral and longitudinal guidance technologies, including speed control and headway sensing for collision avoidance, are reviewed. The principal guidance technique remains the buried wire. Speed control and headway sensing, even though they share the same basic elements in braking and fuel systems, are proceeding independently. The applications of on-board electronic and microprocessor techniques were investigated; each application (emission control, spark advance, or anti-slip braking) is being treated as an independent problem. A unified bus system of distributed processors is proposed for accomplishing the various functions and testing required for vehicles equipped to use automated highways.
Implementation of kernels on the Maestro processor
NASA Astrophysics Data System (ADS)
Suh, Jinwoo; Kang, D. I. D.; Crago, S. P.
Currently, most microprocessors use multiple cores to increase performance while limiting power usage. Some processors use not just a few cores, but tens of cores or even 100 cores. One such many-core microprocessor is the Maestro processor, which is based on Tilera's TILE64 processor. The Maestro chip is a 49-core, general-purpose, radiation-hardened processor designed for space applications. The Maestro processor, unlike the TILE64, has a floating point unit (FPU) in each core for improved floating point performance. The Maestro processor runs at 342 MHz clock frequency. On the Maestro processor, we implemented several widely used kernels: matrix multiplication, vector add, FIR filter, and FFT. We measured and analyzed the performance of these kernels. The achieved performance was up to 5.7 GFLOPS, and the speedup compared to single tile was up to 49 using 49 tiles.
Ordering of guarded and unguarded stores for no-sync I/O
Gara, Alan; Ohmacht, Martin
2013-06-25
A parallel computing system processes at least one store instruction. A first processor core issues a store instruction. A first queue, associated with the first processor core, stores the store instruction. A second queue, associated with a first local cache memory device of the first processor core, stores the store instruction. The first processor core updates first data in the first local cache memory device according to the store instruction. A third queue, associated with at least one shared cache memory device, stores the store instruction. The first processor core invalidates second data, associated with the store instruction, in the at least one shared cache memory. The first processor core invalidates third data, associated with the store instruction, in other local cache memory devices of other processor cores. The first processor core flushes only the first queue.
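The queue-and-invalidation flow above can be loosely modeled in a few lines; the data structures here are invented for illustration and stand in for hardware queues and cache arrays.

```python
def issue_store(addr, value, core, l1_caches, shared_cache, queues):
    """Model one store: enqueue in three queues, update the issuing core's
    L1, invalidate the line everywhere else, then flush only the first queue."""
    for q in ("core", "l1", "shared"):
        queues[q].append((addr, value))
    l1_caches[core][addr] = value      # update own local cache
    shared_cache.pop(addr, None)       # invalidate shared-cache copy
    for i, cache in enumerate(l1_caches):
        if i != core:
            cache.pop(addr, None)      # invalidate other cores' L1 copies
    queues["core"].clear()             # flush only the core-side queue
```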
Electrochemical sensing using voltage-current time differential
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woo, Leta Yar-Li; Glass, Robert Scott; Fitzpatrick, Joseph Jay
2017-02-28
A device for signal processing. The device includes a signal generator, a signal detector, and a processor. The signal generator generates an original waveform. The signal detector detects an affected waveform. The processor is coupled to the signal detector. The processor receives the affected waveform from the signal detector. The processor also compares at least one portion of the affected waveform with the original waveform. The processor also determines a difference between the affected waveform and the original waveform. The processor also determines a value corresponding to a unique portion of the determined difference between the original and affected waveforms. The processor also outputs the determined value.
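The compare-and-reduce step can be sketched numerically. The reduction used here (peak magnitude of a chosen window of the pointwise difference) is an assumption for illustration; the patent does not specify this particular metric.

```python
def waveform_metric(original, affected, window=slice(None)):
    """Subtract the original waveform from the affected one and reduce a
    chosen portion of the difference to a single value (peak magnitude)."""
    diff = [a - o for a, o in zip(affected, original)]
    portion = diff[window]
    return max(abs(d) for d in portion)
```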
Accuracy requirements of optical linear algebra processors in adaptive optics imaging systems
NASA Technical Reports Server (NTRS)
Downie, John D.; Goodman, Joseph W.
1989-01-01
The accuracy requirements of optical processors in adaptive optics systems are determined by estimating the required accuracy in a general optical linear algebra processor (OLAP) that results in a smaller average residual aberration than that achieved with a conventional electronic digital processor with some specific computation speed. Special attention is given to an error analysis of a general OLAP with regard to the residual aberration that is created in an adaptive mirror system by the inaccuracies of the processor, and to the effect of computational speed of an electronic processor on the correction. Results are presented on the ability of an OLAP to compete with a digital processor in various situations.
Analysis on burnup step effect for evaluating reactor criticality and fuel breeding ratio
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saputra, Geby; Purnama, Aditya Rizki; Permana, Sidik
The criticality condition of a reactor is one of the important factors in evaluating reactor operation, and the nuclear fuel breeding ratio is another factor that indicates nuclear fuel sustainability. This study analyzes the effect of burnup step and cycle operation step on the evaluated criticality condition of the reactor as well as on the nuclear fuel breeding performance, or breeding ratio (BR). The burnup step is varied on a day basis from 10 days up to 800 days, and the cycle operation from 1 cycle up to 8 cycles. In addition, calculation efficiency for different computer processors used to run the analysis (time efficiency of the calculation) has also been investigated. The optimization method for reactor design analysis, which used a large fast breeder reactor type as the reference case, was performed by adopting the established reactor design code JOINT-FR. The results show that the criticality condition becomes higher, and the breeding ratio lower, for smaller burnup steps (days). Some nuclides contribute to better criticality at smaller burnup steps owing to individual nuclide half-lives. Calculation time for different burnup steps correlates with the time required for more detailed step calculations, although the time consumed is not directly proportional to how finely the burnup time step is divided.
Modeling heterogeneous processor scheduling for real time systems
NASA Technical Reports Server (NTRS)
Leathrum, J. F.; Mielke, R. R.; Stoughton, J. W.
1994-01-01
A new model is presented to describe dataflow algorithms implemented in a multiprocessing system. Called the resource/data flow graph (RDFG), the model explicitly represents cyclo-static processor schedules as circuits of processor arcs which reflect the order that processors execute graph nodes. The model also allows the guarantee of meeting hard real-time deadlines. When unfolded, the model identifies statically the processor schedule. The model therefore is useful for determining the throughput and latency of systems with heterogeneous processors. The applicability of the model is demonstrated using a space surveillance algorithm.
Parallel processor for real-time structural control
NASA Astrophysics Data System (ADS)
Tise, Bert L.
1993-07-01
A parallel processor that is optimized for real-time linear control has been developed. This modular system consists of A/D modules, D/A modules, and floating-point processor modules. The scalable processor uses up to 1,000 Motorola DSP96002 floating-point processors for a peak computational rate of 60 GFLOPS. Sampling rates up to 625 kHz are supported by this analog-in to analog-out controller. The high processing rate and parallel architecture make this processor suitable for computing state-space equations and other multiply/accumulate-intensive digital filters. Processor features include 14-bit conversion devices, low input-to-output latency, 240 Mbyte/s synchronous backplane bus, low-skew clock distribution circuit, VME connection to host computer, parallelizing code generator, and look-up tables for actuator linearization. This processor was designed primarily for experiments in structural control. The A/D modules sample sensors mounted on the structure and the floating-point processor modules compute the outputs using the programmed control equations. The outputs are sent through the D/A module to the power amps used to drive the structure's actuators. The host computer is a Sun workstation. An OpenWindows-based control panel is provided to facilitate data transfer to and from the processor, as well as to control the operating mode of the processor. A diagnostic mode is provided to allow stimulation of the structure and acquisition of the structural response via sensor inputs.
Testing and operating a multiprocessor chip with processor redundancy
Bellofatto, Ralph E; Douskey, Steven M; Haring, Rudolf A; McManus, Moyra K; Ohmacht, Martin; Schmunkamp, Dietmar; Sugavanam, Krishnan; Weatherford, Bryan J
2014-10-21
A system and method for improving the yield rate of a multiprocessor semiconductor chip that includes primary processor cores and one or more redundant processor cores. A first tester conducts a first test on one or more processor cores, and encodes results of the first test in an on-chip non-volatile memory. A second tester conducts a second test on the processor cores, and encodes results of the second test in an external non-volatile storage device. An override bit of a multiplexer is set if a processor core fails the second test. In response to the override bit, the multiplexer selects a physical-to-logical mapping of processor IDs according to one of: the encoded results in the memory device or the encoded results in the external storage device. On-chip logic configures the processor cores according to the selected physical-to-logical mapping.
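The override-bit selection above amounts to choosing which encoded test results drive the physical-to-logical core mapping. The sketch below is a software analogy with invented names; the patent describes fuses, multiplexers, and on-chip logic, not code.

```python
def map_cores(fuse_results, n_logical, external_results=None, override_bit=0):
    """fuse_results: per-physical-core pass/fail encoded by the first tester
    (on-chip non-volatile memory); external_results: results from the second
    tester (external storage). The override bit selects which set is used,
    and logical IDs map onto the first passing physical cores."""
    results = external_results if override_bit else fuse_results
    passing = [i for i, ok in enumerate(results) if ok]
    assert len(passing) >= n_logical, "not enough good cores"
    return passing[:n_logical]
```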
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reed, D.A.; Grunwald, D.C.
The spectrum of parallel processor designs can be divided into three sections according to the number and complexity of the processors. At one end there are simple, bit-serial processors. Any one of these processors is of little value, but when it is coupled with many others, the aggregate computing power can be large. This approach to parallel processing can be likened to a colony of termites devouring a log. The most notable examples of this approach are the NASA/Goodyear Massively Parallel Processor, which has 16K one-bit processors, and the Thinking Machines Connection Machine, which has 64K one-bit processors. At the other end of the spectrum, a small number of processors, each built using the fastest available technology and the most sophisticated architecture, are combined. An example of this approach is the Cray X-MP. This type of parallel processing is akin to four woodmen attacking the log with chainsaws.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woo, Leta Yar-Li; Glass, Robert Scott; Fitzpatrick, Joseph Jay
2018-01-02
A device for signal processing. The device includes a signal generator, a signal detector, and a processor. The signal generator generates an original waveform. The signal detector detects an affected waveform. The processor is coupled to the signal detector. The processor receives the affected waveform from the signal detector. The processor also compares at least one portion of the affected waveform with the original waveform. The processor also determines a difference between the affected waveform and the original waveform. The processor also determines a value corresponding to a unique portion of the determined difference between the original and affected waveforms. The processor also outputs the determined value.
Puri, S; Singh, A; Yashik
2010-01-01
Globalisation has given birth to medical tourism. Health and medical tourism are the fastest growing segments not only in developed nations but in developing countries too. India has become a hot destination, as Indian medical standards match up to the highly prescribed international standards at a very low cost. However, it is not an unmixed blessing; along with its advantages, it has many unintended side effects.
Joseph R. Samaniuk; C. Tim Scott; Thatcher W. Root; Daniel J. Klingenberg
2011-01-01
Enzymatic hydrolysis of lignocellulosic biomass in a high shear environment was examined. The conversion of cellulose to glucose in samples mixed in a torque rheometer producing shear flows similar to those found in twin screw extruders was greater than that of unmixed samples. In addition, there is a synergistic effect of mixing and enzymatic hydrolysis; mixing...
High spatial resolution spectral unmixing for mapping ash species across a complex urban environment
Jennifer Pontius; Ryan P. Hanavan; Richard A. Hallett; Bruce D. Cook; Lawrence A. Corp
2017-01-01
Ash (Fraxinus L.) species are currently threatened by the emerald ash borer (EAB; Agrilus planipennis Fairmaire) across a growing area in the eastern US. Accurate mapping of ash species is required to monitor the host resource, predict EAB spread and better understand the short- and long-term effects of EAB on the ash resource...
NASA Technical Reports Server (NTRS)
Hodgdon, R. B.; Waite, W. A.; Alexander, S. S.
1984-01-01
Two polymer ion exchange membranes were synthesized to fulfill the needs of both electrical resistivity and anolyte/catholyte separation for utility load leveling utilizing the DOE/NASA mixed electrolyte REDOX battery. Both membranes were shown to meet mixed electrolyte utility load leveling criteria. Several modifications of an anion exchange membrane failed to meet utility load leveling REDOX battery criteria using the unmixed electrolyte REDOX cell.
Van de Voorde, Tim; Vlaeminck, Jeroen; Canters, Frank
2008-01-01
Urban growth and its related environmental problems call for sustainable urban management policies to safeguard the quality of urban environments. Vegetation plays an important part in this as it provides ecological, social, health and economic benefits to a city's inhabitants. Remotely sensed data are of great value to monitor urban green and despite the clear advantages of contemporary high resolution images, the benefits of medium resolution data should not be discarded. The objective of this research was to estimate fractional vegetation cover from a Landsat ETM+ image with sub-pixel classification, and to compare accuracies obtained with multiple stepwise regression analysis, linear spectral unmixing and multi-layer perceptrons (MLP) at the level of meaningful urban spatial entities. Despite the small, but nevertheless statistically significant differences at pixel level between the alternative approaches, the spatial pattern of vegetation cover and estimation errors is clearly distinctive at neighbourhood level. At this spatially aggregated level, a simple regression model appears to attain sufficient accuracy. For mapping at a spatially more detailed level, the MLP seems to be the most appropriate choice. Brightness normalisation only appeared to affect the linear models, especially the linear spectral unmixing. PMID:27879914
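Of the three estimation approaches compared above, linear spectral unmixing is the easiest to illustrate. The toy below solves the one-pixel, two-endmember case band by band and averages the per-band fractions; this closed form is a deliberate simplification of the study's sub-pixel classification, and the reflectance values are invented.

```python
def vegetation_fraction(pixel, veg, non_veg):
    """Linear unmixing for one pixel: solve pixel = f*veg + (1-f)*non_veg
    for the vegetation fraction f in each band, then average the fractions."""
    fracs = []
    for p, v, n in zip(pixel, veg, non_veg):
        if v != n:  # band carries no information if endmembers coincide
            fracs.append((p - n) / (v - n))
    return sum(fracs) / len(fracs)
```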
DOE Office of Scientific and Technical Information (OSTI.GOV)
Altmann, Yoann; Maccarone, Aurora; McCarthy, Aongus
This paper presents a new Bayesian spectral un-mixing algorithm to analyse remote scenes sensed via sparse multispectral Lidar measurements. To a first approximation, in the presence of a target, each Lidar waveform consists of a main peak, whose position depends on the target distance and whose amplitude depends on the wavelength of the laser source considered (i.e., on the target reflectivity). These temporal responses are usually assumed to be corrupted by Poisson noise in the low photon count regime. When considering multiple wavelengths, it becomes possible to use spectral information to identify and quantify the main materials in the scene, in addition to estimating the Lidar-based range profiles. Due to its anomaly detection capability, the proposed hierarchical Bayesian model, coupled with an efficient Markov chain Monte Carlo algorithm, allows robust estimation of depth images together with abundance and outlier maps associated with the observed 3D scene. The proposed methodology is illustrated via experiments conducted with real multispectral Lidar data acquired in a controlled environment. The results demonstrate the possibility of unmixing spectral responses constructed from extremely sparse photon counts (fewer than 10 photons per pixel and band).
NASA Astrophysics Data System (ADS)
Benhalouche, Fatima Zohra; Karoui, Moussa Sofiane; Deville, Yannick; Ouamri, Abdelaziz
2017-04-01
This paper proposes three multisharpening approaches to enhance the spatial resolution of urban hyperspectral remote sensing images. These approaches, related to linear-quadratic spectral unmixing techniques, use a linear-quadratic nonnegative matrix factorization (NMF) multiplicative algorithm. These methods begin by unmixing the observable high-spectral/low-spatial resolution hyperspectral and high-spatial/low-spectral resolution multispectral images. The obtained high-spectral/high-spatial resolution features are then recombined, according to the linear-quadratic mixing model, to obtain an unobservable multisharpened high-spectral/high-spatial resolution hyperspectral image. In the first designed approach, hyperspectral and multispectral variables are independently optimized, once they have been coherently initialized. These variables are alternately updated in the second designed approach. In the third approach, the considered hyperspectral and multispectral variables are jointly updated. Experiments, using synthetic and real data, are conducted to assess the efficiency, in spatial and spectral domains, of the designed approaches and of linear NMF-based approaches from the literature. Experimental results show that the designed methods globally yield very satisfactory spectral and spatial fidelities for the multisharpened hyperspectral data. They also prove that these methods significantly outperform the used literature approaches.
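The NMF multiplicative algorithm at the core of these multisharpening approaches is easiest to see in its generic form. The step below is the standard Lee-Seung update for the abundance factor in V ≈ WH, not the paper's linear-quadratic variant; it shows why multiplicative updates preserve nonnegativity: each entry is scaled by a nonnegative ratio.

```python
def matmul(A, B):
    """Plain-Python matrix product (small matrices only)."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def nmf_update_H(V, W, H, eps=1e-9):
    """One multiplicative update: H <- H * (W^T V) / (W^T W H)."""
    num = matmul(transpose(W), V)              # W^T V
    den = matmul(matmul(transpose(W), W), H)   # W^T W H
    return [[H[i][j] * num[i][j] / (den[i][j] + eps)
             for j in range(len(H[0]))] for i in range(len(H))]
```

At an exact factorization the ratio is 1 and the update is a fixed point, which is the sanity check below.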
Hybrid Electro-Optic Processor
1991-07-01
This report describes the design of a hybrid electro-optic processor to perform adaptive interference cancellation in radar systems. The processor is...modulator is reported. Included in this report is a discussion of the design, partial fabrication in the laboratory, and partial testing of the hybrid electro-optic processor. A follow-on effort is planned to complete the construction and testing of the processor. The work described in this report is the
JPRS Report, Science & Technology, Europe.
1991-04-30
processor in collaboration with Intel. The processor, christened Touchstone, will be used as the core of a parallel computer with 2,000 processors. One of...ELECTRONIQUE HEBDO in French 24 Jan 91 pp 14-15 [Article by Claire Remy: "Everything Set for Neural Signal Processors"] first paragraph is ELECTRONIQUE...paving the way for neural signal processors in so doing. The principal advantage of this specific circuit over a neuromimetic software program is
Processor register error correction management
Bose, Pradip; Cher, Chen-Yong; Gupta, Meeta S.
2016-12-27
Processor register protection management is disclosed. In embodiments, a method of processor register protection management can include determining a sensitive logical register for executable code generated by a compiler, generating an error-correction table identifying the sensitive logical register, and storing the error-correction table in a memory accessible by a processor. The processor can be configured to generate a duplicate register of the sensitive logical register identified by the error-correction table.
The CSM testbed matrix processors internal logic and dataflow descriptions
NASA Technical Reports Server (NTRS)
Regelbrugge, Marc E.; Wright, Mary A.
1988-01-01
This report constitutes the final report for subtask 1 of Task 5 of NASA Contract NAS1-18444, Computational Structural Mechanics (CSM) Research. This report contains a detailed description of the coded workings of selected CSM Testbed matrix processors (i.e., TOPO, K, INV, SSOL) and of the arithmetic utility processor AUS. These processors and the current sparse matrix data structures are studied and documented. Items examined include: details of the data structures, interdependence of data structures, data-blocking logic in the data structures, processor data flow and architecture, and processor algorithmic logic flow.
Parallel processor for real-time structural control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tise, B.L.
1992-01-01
A parallel processor that is optimized for real-time linear control has been developed. This modular system consists of A/D modules, D/A modules, and floating-point processor modules. The scalable processor uses up to 1,000 Motorola DSP96002 floating-point processors for a peak computational rate of 60 GFLOPS. Sampling rates up to 625 kHz are supported by this analog-in to analog-out controller. The high processing rate and parallel architecture make this processor suitable for computing state-space equations and other multiply/accumulate-intensive digital filters. Processor features include 14-bit conversion devices, low input-output latency, 240 Mbyte/s synchronous backplane bus, low-skew clock distribution circuit, VME connection to host computer, parallelizing code generator, and look-up-tables for actuator linearization. This processor was designed primarily for experiments in structural control. The A/D modules sample sensors mounted on the structure and the floating-point processor modules compute the outputs using the programmed control equations. The outputs are sent through the D/A module to the power amps used to drive the structure's actuators. The host computer is a Sun workstation. An Open Windows-based control panel is provided to facilitate data transfer to and from the processor, as well as to control the operating mode of the processor. A diagnostic mode is provided to allow stimulation of the structure and acquisition of the structural response via sensor inputs.
NASA Astrophysics Data System (ADS)
Hammac, W. A.; Pan, W.; Koenig, R. T.; McCracken, V.
2012-12-01
The Environmental Protection Agency (EPA) has mandated through the second renewable fuel standard (RFS2) that biodiesel meet a minimum threshold requirement (50% reduction) for greenhouse gas (GHG) emission reduction compared to fossil diesel. This designation is determined by life cycle assessment (LCA) and carries with it potential for monetary incentives for biodiesel feedstock growers (Biomass Crop Assistance Program) and biodiesel processors (Renewable Identification Numbers). A national LCA was carried out for canola (Brassica napus) biodiesel feedstock by the EPA, and it did meet the minimum threshold requirement. However, the EPA's national LCA does not provide insight into regional variation in GHG mitigation. The authors propose that, for the full GHG reduction potential of biofuels to be realized, LCA results must have regional specificity and should inform incentives for growers and processors on a regional basis. The objectives of this work were to determine (1) variation in biofuel feedstock production-related GHG emissions between three agroecological zones (AEZs) in eastern Washington State, (2) the impact of nitrogen use efficiency (NUE) on GHG mitigation potential for each AEZ, and (3) the impact of incentives on adoption of oilseed production. Results for objective (1) revealed wide variability in GHG estimates both across and within AEZs, based on variation in farming practices and environment. It is expected that results for objective (2) will show further GHG mitigation potential due to minimizing N use, and therefore fertilizer transport and soil-related GHG emissions, while potentially increasing biodiesel production per hectare. Regionally based incentives may allow more timely achievement of goals for bio-based fuels production. Additionally, incentives may further increase GHG offsetting by promoting implementation of nitrogen-conserving best management practices.
This research highlights the need for regional assessment/incentive based strategies for maximizing GHG mitigation potential of biofuel feedstocks.
7 CFR 1435.310 - Sharing processors' allocations with producers.
Code of Federal Regulations, 2011 CFR
2011-01-01
... CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.310 Sharing processors' allocations with producers. (a) Every sugar beet and sugarcane processor must provide CCC a certification that: (1) The processor...
7 CFR 1435.310 - Sharing processors' allocations with producers.
Code of Federal Regulations, 2010 CFR
2010-01-01
... CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.310 Sharing processors' allocations with producers. (a) Every sugar beet and sugarcane processor must provide CCC a certification that: (1) The processor...
7 CFR 1435.310 - Sharing processors' allocations with producers.
Code of Federal Regulations, 2012 CFR
2012-01-01
... CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.310 Sharing processors' allocations with producers. (a) Every sugar beet and sugarcane processor must provide CCC a certification that: (1) The processor...
7 CFR 1435.310 - Sharing processors' allocations with producers.
Code of Federal Regulations, 2014 CFR
2014-01-01
... CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.310 Sharing processors' allocations with producers. (a) Every sugar beet and sugarcane processor must provide CCC a certification that: (1) The processor...
7 CFR 1435.310 - Sharing processors' allocations with producers.
Code of Federal Regulations, 2013 CFR
2013-01-01
... CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.310 Sharing processors' allocations with producers. (a) Every sugar beet and sugarcane processor must provide CCC a certification that: (1) The processor...
Code of Federal Regulations, 2010 CFR
2010-07-01
...) When a test rule or subsequent Federal Register notice pertaining to a test rule expressly obligates processors as well as manufacturers to assume direct testing and data reimbursement responsibilities. (2... processors voluntarily agree to reimburse manufacturers for a portion of test costs. Only those processors...
Atac, R.; Fischler, M.S.; Husby, D.E.
1991-01-15
A bus switching apparatus and method for multiple processor computer systems comprises a plurality of bus switches interconnected by branch buses. Each processor or other module of the system is connected to a spigot of a bus switch. Each bus switch also serves as part of a backplane of a modular crate hardware package. A processor initiates communication with another processor by identifying that other processor. The bus switch to which the initiating processor is connected identifies and secures, if possible, a path to that other processor, either directly or via one or more other bus switches which operate similarly. If a particular desired path through a given bus switch is not available to be used, an alternate path is considered, identified and secured. 11 figures.
Chatterjee, Siddhartha [Yorktown Heights, NY; Gunnels, John A [Brewster, NY
2011-11-08
A method and structure of distributing elements of an array of data in a computer memory to a specific processor of a multi-dimensional mesh of parallel processors includes designating a distribution of elements of at least a portion of the array to be executed by specific processors in the multi-dimensional mesh of parallel processors. The pattern of the designating includes a cyclical repetitive pattern of the parallel processor mesh, as modified to have a skew in at least one dimension so that both a row of data in the array and a column of data in the array map to respective contiguous groupings of the processors such that a dimension of the contiguous groupings is greater than one.
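The skewed cyclic distribution can be sketched with a simple ownership function (an illustrative mapping in the spirit of the patent, not its exact scheme): the column coordinate is skewed by the row index, so a column of the array sweeps both dimensions of the processor mesh instead of a single mesh column.

```python
def owner(i, j, pr, pc):
    """Skewed cyclic map: array element (i, j) -> processor coordinates on a
    pr x pc mesh. The column index is offset by the row index (the skew), so
    both a row and a column of the array spread over groups of processors
    whose dimension is greater than one."""
    return (i % pr, (i + j) % pc)

# On a 4x4 mesh: row 0 of the array cycles over mesh row 0...
row_owners = {owner(0, j, 4, 4) for j in range(8)}
# ...while column 0, thanks to the skew, lands on the mesh diagonal,
# touching all four mesh rows AND all four mesh columns.
col_owners = {owner(i, 0, 4, 4) for i in range(8)}
```

Without the skew (`j % pc` alone), column 0 would map entirely to mesh column 0; the skew is what makes both row and column accesses spread load across the mesh.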
Atac, Robert; Fischler, Mark S.; Husby, Donald E.
1991-01-01
A bus switching apparatus and method for multiple processor computer systems comprises a plurality of bus switches interconnected by branch buses. Each processor or other module of the system is connected to a spigot of a bus switch. Each bus switch also serves as part of a backplane of a modular crate hardware package. A processor initiates communication with another processor by identifying that other processor. The bus switch to which the initiating processor is connected identifies and secures, if possible, a path to that other processor, either directly or via one or more other bus switches which operate similarly. If a particular desired path through a given bus switch is not available to be used, an alternate path is considered, identified and secured.
Disaggregating tree and grass phenology in tropical savannas
NASA Astrophysics Data System (ADS)
Zhou, Qiang
Savannas are mixed tree-grass systems and as one of the world's largest biomes represent an important component of the Earth system affecting water and energy balances, carbon sequestration and biodiversity as well as supporting large human populations. Savanna vegetation structure and its distribution, however, may change because of major anthropogenic disturbances from climate change, wildfire, agriculture, and livestock production. The overstory and understory may have different water use strategies, different nutrient requirements and have different responses to fire and climate variation. The accurate measurement of the spatial distribution and structure of the overstory and understory are essential for understanding the savanna ecosystem. This project developed a workflow for separating the dynamics of the overstory and understory fractional cover in savannas at the continental scale (Australia, South America, and Africa). Previous studies have successfully separated the phenology of Australian savanna vegetation into persistent and seasonal greenness using time series decomposition, and into fractions of photosynthetic vegetation (PV), non-photosynthetic vegetation (NPV) and bare soil (BS) using linear unmixing. This study combined these methods to separate the understory and overstory signal in both the green and senescent phenological stages using remotely sensed imagery from the MODIS (MODerate resolution Imaging Spectroradiometer) sensor. The methods and parameters were adjusted based on the vegetation variation. The workflow was first tested at the Australian site. Here the PV estimates for overstory and understory showed best performance, however NPV estimates exhibited spatial variation in validation relationships. At the South American site (Cerrado), an additional method based on frequency unmixing was developed to separate green vegetation components with similar phenology. 
When the decomposition and frequency methods were compared, the frequency method was better for extracting green tree phenology, but the original decomposition method was better for retrieving understory grass phenology. Both methods, however, were less accurate in the Cerrado than in Australia due to intermingling and intergrading of grass and small woody components. Since African savanna trees are predominantly deciduous, the frequency method was combined with the linear unmixing of fractional cover to attempt to separate the relatively similar phenology of deciduous trees and seasonal grasses. The results for Africa revealed limitations associated with both methods. There was spatial and seasonal variation in the spectral indices used to unmix fractional cover, resulting in poor validation for NPV in particular. The frequency analysis revealed significant phase variation indicative of different phenology, but these phases could not be clearly ascribed to separate grass and tree components. Overall, the findings indicate that site-specific variation in vegetation structure and composition, together with MODIS pixel resolution, meant the simple vegetation index approach used was not robust across the different savanna biomes. The approach showed generally better performance for estimating the PV fraction and separating green phenology, but there were major inconsistencies, errors and biases in estimation of NPV and BS outside of the Australian savanna environment.
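The per-pixel linear unmixing step into photosynthetic vegetation (PV), non-photosynthetic vegetation (NPV) and bare soil (BS) can be sketched as a sum-to-one constrained least-squares solve. The three-band endmember spectra below are invented for illustration, not taken from the study.

```python
import numpy as np

def unmix_fractions(pixel, endmembers):
    """Solve pixel ~= endmembers @ f with sum(f) = 1, by appending a heavily
    weighted sum-to-one row to the least-squares system, then clipping to
    nonnegative and renormalizing."""
    E = np.vstack([endmembers, 100.0 * np.ones(endmembers.shape[1])])
    y = np.append(pixel, 100.0)
    f, *_ = np.linalg.lstsq(E, y, rcond=None)
    f = np.clip(f, 0, None)
    return f / f.sum()

# Hypothetical 3-band reflectance endmembers (columns: PV, NPV, BS)
E = np.array([[0.05, 0.30, 0.20],
              [0.45, 0.35, 0.25],
              [0.10, 0.40, 0.45]])
truth = np.array([0.5, 0.3, 0.2])      # invented fractional cover
pixel = E @ truth                      # noiseless mixed pixel
fractions = unmix_fractions(pixel, E)
```

Real MODIS unmixing adds noise, index-based bands, and per-biome endmember variability, which is exactly where the study reports the approach breaking down.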
Variable word length encoder reduces TV bandwidth requirements
NASA Technical Reports Server (NTRS)
Sivertson, W. E., Jr.
1965-01-01
Adaptive variable resolution encoding technique provides an adaptive compression pseudo-random noise signal processor for reducing television bandwidth requirements. Complementary processors are required in both the transmitting and receiving systems. The pretransmission processor is analog-to-digital, while the postreception processor is digital-to-analog.
Accelerating molecular dynamic simulation on the cell processor and Playstation 3.
Luttmann, Edgar; Ensign, Daniel L; Vaidyanathan, Vishal; Houston, Mike; Rimon, Noam; Øland, Jeppe; Jayachandran, Guha; Friedrichs, Mark; Pande, Vijay S
2009-01-30
Implementation of molecular dynamics (MD) calculations on novel architectures will vastly increase their power to calculate the physical properties of complex systems. Herein, we detail algorithmic advances developed to accelerate MD simulations on the Cell processor, a commodity processor found in the PlayStation 3 (PS3). In particular, we discuss issues regarding memory access versus computation and the types of calculations which are best suited for streaming processors such as the Cell, focusing on implicit solvation models. We conclude with a comparison of improved performance on the PS3's Cell processor over more traditional processors. (c) 2008 Wiley Periodicals, Inc.
Leung, Vitus J [Albuquerque, NM; Phillips, Cynthia A [Albuquerque, NM; Bender, Michael A [East Northport, NY; Bunde, David P [Urbana, IL
2009-07-21
In a multiple processor computing apparatus, directional routing restrictions and a logical channel construct permit fault tolerant, deadlock-free routing. Processor allocation can be performed by creating a linear ordering of the processors based on routing rules used for routing communications between the processors. The linear ordering can assume a loop configuration, and bin-packing is applied to this loop configuration. The interconnection of the processors can be conceptualized as a generally rectangular 3-dimensional grid, and the MC allocation algorithm is applied with respect to the 3-dimensional grid.
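The allocation idea can be sketched in miniature (an illustrative version, not the patented MC algorithm): linearize the 3-D processor grid into an ordering consistent with the routing rules, treat it as a loop, and allocate each job a contiguous run of free processors.

```python
def snake_order(nx, ny, nz):
    """Linearize a 3-D grid of processors so consecutive indices are adjacent
    (a snake/boustrophedon curve), standing in for the routing-based ordering."""
    order = []
    for z in range(nz):
        for y in range(ny):
            xs = range(nx) if (y + z * ny) % 2 == 0 else range(nx - 1, -1, -1)
            for x in xs:
                order.append((x, y, z))
    return order

def first_fit(order, free, request):
    """Allocate `request` processors as one contiguous run of the circular
    linear ordering (first fit), mirroring bin-packing on the loop."""
    n = len(order)
    for start in range(n):
        window = [order[(start + k) % n] for k in range(request)]
        if all(p in free for p in window):
            for p in window:
                free.remove(p)
            return window
    return None

procs = snake_order(2, 2, 2)   # 8 processors in a 2x2x2 grid
free = set(procs)
job = first_fit(procs, free, 3)  # allocate a 3-processor job
```

Allocating along the routing-consistent ordering keeps each job's traffic on a contiguous segment, which is what makes the routing deadlock-free allocation-friendly.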
Communications systems and methods for subsea processors
Gutierrez, Jose; Pereira, Luis
2016-04-26
A subsea processor may be located near the seabed of a drilling site and used to coordinate operations of underwater drilling components. The subsea processor may be enclosed in a single interchangeable unit that fits a receptor on an underwater drilling component, such as a blow-out preventer (BOP). The subsea processor may issue commands to control the BOP and receive measurements from sensors located throughout the BOP. A shared communications bus may interconnect the subsea processor and underwater components and the subsea processor and a surface or onshore network. The shared communications bus may be operated according to a time division multiple access (TDMA) scheme.
An Efficient Functional Test Generation Method For Processors Using Genetic Algorithms
NASA Astrophysics Data System (ADS)
Hudec, Ján; Gramatová, Elena
2015-07-01
The paper presents a new functional test generation method for processor testing based on genetic algorithms and evolutionary strategies. The tests are generated over an instruction set architecture and a processor description. Such functional tests belong to software-oriented testing. Quality of the tests is evaluated by code coverage of the processor description using simulation. The presented test generation method uses VHDL models of processors and the professional simulator ModelSim. The rules, parameters and fitness functions were defined for the various genetic algorithms used in automatic test generation. Functionality and effectiveness were evaluated using the RISC-type processor DP32.
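The GA-over-instruction-sequences idea can be sketched as follows. This is a hypothetical miniature: the `coverage` function below is a stand-in for the code coverage a simulator such as ModelSim would report, and the six-opcode ISA is invented.

```python
import random

ISA = ["ADD", "SUB", "LD", "ST", "BR", "NOP"]

def coverage(program):
    """Stand-in fitness for simulator-reported coverage: fraction of distinct
    opcodes plus fraction of distinct opcode pairs exercised."""
    ops = set(program)
    pairs = set(zip(program, program[1:]))
    return len(ops) / len(ISA) + len(pairs) / (len(ISA) ** 2)

def evolve(pop_size=30, length=20, generations=40, seed=3):
    """Simple GA: truncation selection, one-point crossover, point mutation."""
    rng = random.Random(seed)
    pop = [[rng.choice(ISA) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=coverage, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, length)
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:  # point mutation
                child[rng.randrange(length)] = rng.choice(ISA)
            children.append(child)
        pop = survivors + children
    return max(pop, key=coverage)

best = evolve()  # highest-coverage instruction sequence found
```

In the actual method the fitness evaluation is the expensive step (an HDL simulation per candidate test), which is why the GA's sample efficiency matters.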
Experimental testing of the noise-canceling processor.
Collins, Michael D; Baer, Ralph N; Simpson, Harry J
2011-09-01
Signal-processing techniques for localizing an acoustic source buried in noise are tested in a tank experiment. Noise is generated using a discrete source, a bubble generator, and a sprinkler. The experiment has essential elements of a realistic scenario in matched-field processing, including complex source and noise time series in a waveguide with water, sediment, and multipath propagation. The noise-canceling processor is found to outperform the Bartlett processor and provide the correct source range for signal-to-noise ratios below -10 dB. The multivalued Bartlett processor is found to outperform the Bartlett processor but not the noise-canceling processor. © 2011 Acoustical Society of America
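For context, the Bartlett processor referenced here correlates the measured array snapshot with modeled replica fields over candidate source positions. A minimal sketch (the phase-ramp replicas and noise level below are invented; a real matched-field replica comes from a propagation model of the waveguide):

```python
import numpy as np

def bartlett(data, replicas):
    """Bartlett matched-field processor: normalized correlation of the
    measured snapshot with each candidate replica field."""
    out = []
    for w in replicas:
        w = w / np.linalg.norm(w)
        out.append(np.abs(np.vdot(w, data)) ** 2)
    return np.array(out)

# Toy setup: 16 sensors, 20 candidate ranges modeled as phase ramps;
# the "measurement" is the replica at range index 7 plus noise.
rng = np.random.default_rng(4)
n_sensors, n_ranges = 16, 20
replicas = [np.exp(1j * 2 * np.pi * r * np.arange(n_sensors) / n_ranges)
            for r in range(n_ranges)]
data = replicas[7] + 0.1 * rng.standard_normal(n_sensors)
surface = bartlett(data, replicas)   # ambiguity surface; peak at the source
```

The noise-canceling processor studied in the paper modifies this correlation to suppress structured noise, which is what lets it localize below -10 dB SNR where the plain Bartlett peak is buried.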
A High Performance VLSI Computer Architecture For Computer Graphics
NASA Astrophysics Data System (ADS)
Chin, Chi-Yuan; Lin, Wen-Tai
1988-10-01
A VLSI computer architecture, consisting of multiple processors, is presented in this paper to satisfy the demands of modern computer graphics, e.g. high resolution, realistic animation, and real-time display. All processors share a global memory which is partitioned into multiple banks. Through a crossbar network, data from one memory bank can be broadcast to many processors. Processors are physically interconnected through a hyper-crossbar network (a crossbar-like network). By programming the network, the topology of communication links among processors can be reconfigured to satisfy the specific dataflows of different applications. Each processor consists of a controller, arithmetic operators, local memory, a local crossbar network, and I/O ports to communicate with other processors, memory banks, and a system controller. Operations in each processor are characterized into two modes, i.e. object domain and space domain, to fully utilize the data-independency characteristics of graphics processing. Special graphics features such as 3D-to-2D conversion, shadow generation, texturing, and reflection can be easily handled. With current high density interconnection (MI) technology, it is feasible to implement a 64-processor system to achieve 2.5 billion operations per second, a performance needed in most advanced graphics applications.
Rapid prototyping and evaluation of programmable SIMD SDR processors in LISA
NASA Astrophysics Data System (ADS)
Chen, Ting; Liu, Hengzhu; Zhang, Botao; Liu, Dongpei
2013-03-01
With the development of international wireless communication standards, there is an increasing computational requirement for baseband signal processors. Time-to-market pressure makes it impossible to completely redesign new processors for the evolving standards. Due to their high flexibility and low power, software defined radio (SDR) digital signal processors have been proposed as a promising technology to replace traditional ASIC and FPGA designs. In addition, large numbers of parallel data are processed in computation-intensive functions, which fosters the development of single instruction multiple data (SIMD) architecture in SDR platforms. So a new way must be found to prototype SDR processors efficiently. In this paper we present a bit- and cycle-accurate model of programmable SIMD SDR processors in the machine description language LISA. LISA is a language for instruction set architecture description which enables rapid modeling at the architectural level. To evaluate the proposed processor, three common baseband functions, FFT, FIR digital filtering and matrix multiplication, have been mapped on the SDR platform. Analytical results showed that the SDR processor achieved a maximum performance boost of 47.1% relative to the comparison processor.
NASA Astrophysics Data System (ADS)
Weber, Walter H.; Mair, H. Douglas; Jansen, Dion
2003-03-01
A suite of basic signal processors has been developed. These basic building blocks can be cascaded together to form more complex processors without the need for programming. The data structures between each of the processors are handled automatically. This allows a processor built for one purpose to be applied to any type of data such as images, waveform arrays and single values. The processors are part of Winspect Data Acquisition software. The new processors are fast enough to work on A-scan signals live while scanning. Their primary use is to extract features, reduce noise or to calculate material properties. The cascaded processors work equally well on live A-scan displays, live gated data or as a post-processing engine on saved data. Researchers are able to call their own MATLAB or C-code from anywhere within the processor structure. A built-in formula node processor that uses a simple algebraic editor may make external user programs unnecessary. This paper also discusses the problems associated with ad hoc software development and how graphical programming languages can tie up researchers writing software rather than designing experiments.
Array processor architecture connection network
NASA Technical Reports Server (NTRS)
Barnes, George H. (Inventor); Lundstrom, Stephen F. (Inventor); Shafer, Philip E. (Inventor)
1982-01-01
A connection network is disclosed for use between a parallel array of processors and a parallel array of memory modules for establishing non-conflicting data communications paths between requested memory modules and requesting processors. The connection network includes a plurality of switching elements interposed between the processor array and the memory modules array in an Omega networking architecture. Each switching element includes a first and a second processor side port, a first and a second memory module side port, and control logic circuitry for providing data connections between the first and second processor ports and the first and second memory module ports. The control logic circuitry includes strobe logic for examining data arriving at the first and the second processor ports to indicate when the data arriving is requesting data from a requesting processor to a requested memory module. Further, connection circuitry is associated with the strobe logic for examining requesting data arriving at the first and the second processor ports for providing a data connection therefrom to the first and the second memory module ports in response thereto when the data connection so provided does not conflict with a pre-established data connection currently in use.
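The Omega-network path setup described here can be sketched with classic destination-tag routing (a textbook illustration of Omega routing, not code from the patent): at each stage the address is perfect-shuffled, and the switching element's output (upper or lower) is chosen by the next bit of the destination, MSB first.

```python
def omega_route(src, dst, n_bits):
    """Destination-tag routing through an Omega network of n_bits stages
    connecting 2**n_bits processors to 2**n_bits memory modules. Each stage
    perfect-shuffles the address (rotate left) and replaces the low bit with
    the next destination bit, MSB first."""
    mask = (1 << n_bits) - 1
    addr = src
    path = [addr]
    for k in range(n_bits - 1, -1, -1):
        shuffled = ((addr << 1) | (addr >> (n_bits - 1))) & mask  # rotate left
        addr = (shuffled & ~1) | ((dst >> k) & 1)                 # pick output port
        path.append(addr)
    return path

path = omega_route(src=2, dst=5, n_bits=3)  # per-stage line numbers, 2 -> 5
```

After `n_bits` stages the path always terminates at the requested module; conflicts arise only when two simultaneous requests need the same switch output, which is what the patent's strobe/connection logic arbitrates.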
21 CFR 892.1900 - Automatic radiographic film processor.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Automatic radiographic film processor. 892.1900... (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1900 Automatic radiographic film processor. (a) Identification. An automatic radiographic film processor is a device intended to be used to...
21 CFR 892.1900 - Automatic radiographic film processor.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Automatic radiographic film processor. 892.1900... (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1900 Automatic radiographic film processor. (a) Identification. An automatic radiographic film processor is a device intended to be used to...
21 CFR 892.1900 - Automatic radiographic film processor.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Automatic radiographic film processor. 892.1900... (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1900 Automatic radiographic film processor. (a) Identification. An automatic radiographic film processor is a device intended to be used to...
21 CFR 892.1900 - Automatic radiographic film processor.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Automatic radiographic film processor. 892.1900... (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1900 Automatic radiographic film processor. (a) Identification. An automatic radiographic film processor is a device intended to be used to...
7 CFR 1160.108 - Fluid milk processor.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 9 2013-01-01 2013-01-01 false Fluid milk processor. 1160.108 Section 1160.108... AGREEMENTS AND ORDERS; MILK), DEPARTMENT OF AGRICULTURE FLUID MILK PROMOTION PROGRAM Fluid Milk Promotion Order Definitions § 1160.108 Fluid milk processor. (a) Fluid milk processor means any person who...
7 CFR 1160.108 - Fluid milk processor.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 9 2012-01-01 2012-01-01 false Fluid milk processor. 1160.108 Section 1160.108... Agreements and Orders; Milk), DEPARTMENT OF AGRICULTURE FLUID MILK PROMOTION PROGRAM Fluid Milk Promotion Order Definitions § 1160.108 Fluid milk processor. (a) Fluid milk processor means any person who...
7 CFR 1160.108 - Fluid milk processor.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 9 2014-01-01 2013-01-01 true Fluid milk processor. 1160.108 Section 1160.108... AGREEMENTS AND ORDERS; MILK), DEPARTMENT OF AGRICULTURE FLUID MILK PROMOTION PROGRAM Fluid Milk Promotion Order Definitions § 1160.108 Fluid milk processor. (a) Fluid milk processor means any person who...
21 CFR 892.1900 - Automatic radiographic film processor.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Automatic radiographic film processor. 892.1900... (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1900 Automatic radiographic film processor. (a) Identification. An automatic radiographic film processor is a device intended to be used to...
7 CFR 1160.108 - Fluid milk processor.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 9 2010-01-01 2009-01-01 true Fluid milk processor. 1160.108 Section 1160.108... Agreements and Orders; Milk), DEPARTMENT OF AGRICULTURE FLUID MILK PROMOTION PROGRAM Fluid Milk Promotion Order Definitions § 1160.108 Fluid milk processor. (a) Fluid milk processor means any person who...
7 CFR 1160.108 - Fluid milk processor.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 9 2011-01-01 2011-01-01 false Fluid milk processor. 1160.108 Section 1160.108... Agreements and Orders; Milk), DEPARTMENT OF AGRICULTURE FLUID MILK PROMOTION PROGRAM Fluid Milk Promotion Order Definitions § 1160.108 Fluid milk processor. (a) Fluid milk processor means any person who...
Shared performance monitor in a multiprocessor system
Chiu, George; Gara, Alan G; Salapura, Valentina
2014-12-02
A performance monitoring unit (PMU) and method for monitoring performance of events occurring in a multiprocessor system. The multiprocessor system comprises a plurality of processor devices, each processor device for generating signals representing occurrences of events in the processor device, and a single shared counter resource for performance monitoring. The performance monitor unit is shared by all processor cores in the multiprocessor system. The PMU is further programmed to monitor event signals issued from non-processor devices.
Noncoherent parallel optical processor for discrete two-dimensional linear transformations.
Glaser, I
1980-10-01
We describe a parallel optical processor, based on a lenslet array, that provides general linear two-dimensional transformations using noncoherent light. Such a processor could become useful in image- and signal-processing applications in which the throughput requirements cannot be adequately satisfied by state-of-the-art digital processors. Experimental results that illustrate the feasibility of the processor by demonstrating its use in parallel optical computation of the two-dimensional Walsh-Hadamard transformation are presented.
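The Walsh-Hadamard transformation computed optically in this work is a standard linear transform; for reference, its digital counterpart is a butterfly recursion over additions and subtractions only (which is precisely why a noncoherent lenslet processor can realize it):

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform; input length must be a power of two.
    Uses only add/subtract butterflies, no multiplications."""
    x = np.array(x, dtype=float)
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x

out = fwht([1, 0, 1, 0, 0, 1, 1, 0])
```

The transform is self-inverse up to scale (H squared equals N times the identity), so applying `fwht` twice returns the input multiplied by its length.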
Processors for wavelet analysis and synthesis: NIFS and TI-C80 MVP
NASA Astrophysics Data System (ADS)
Brooks, Geoffrey W.
1996-03-01
Two processors are considered for image quadrature mirror filtering (QMF). The neuromorphic infrared focal-plane sensor (NIFS) is an existing prototype analog processor offering high speed spatio-temporal Gaussian filtering, which could be used for the QMF low-pass function, and difference-of-Gaussian filtering, which could be used for the QMF high-pass function. Although not designed specifically for wavelet analysis, the biologically-inspired system accomplishes the most computationally intensive part of QMF processing. The Texas Instruments (TI) TMS320C80 Multimedia Video Processor (MVP) is a 32-bit RISC master processor with four advanced digital signal processors (DSPs) on a single chip. Algorithm partitioning, memory management and other issues are considered for optimal performance. This paper presents these considerations with simulated results leading to processor implementation of high-speed QMF analysis and synthesis.
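A one-level QMF split pairs a low-pass and a high-pass filter, each followed by downsampling by two, with a synthesis stage that reconstructs the signal exactly. A minimal sketch using the simplest QMF pair, the Haar filters (the paper's processors implement Gaussian-based filters, not Haar; this only illustrates the analysis/synthesis structure):

```python
import numpy as np

def qmf_analysis(x):
    """One-level QMF split with Haar filters: low-pass (scaled pairwise sum)
    and high-pass (scaled pairwise difference), each downsampled by two."""
    x = np.asarray(x, dtype=float)
    low = (x[0::2] + x[1::2]) / np.sqrt(2)
    high = (x[0::2] - x[1::2]) / np.sqrt(2)
    return low, high

def qmf_synthesis(low, high):
    """Perfect reconstruction of the original signal from the two subbands."""
    x = np.empty(2 * len(low))
    x[0::2] = (low + high) / np.sqrt(2)
    x[1::2] = (low - high) / np.sqrt(2)
    return x

sig = np.array([4.0, 2.0, 1.0, 3.0, 5.0, 5.0, 2.0, 0.0])
low, high = qmf_analysis(sig)
rec = qmf_synthesis(low, high)   # equals sig: perfect reconstruction
```

Cascading the split on the low-pass output yields the multi-level wavelet decomposition the two processors are being evaluated for.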
77 FR 124 - Biological Processors of Alabama; Decatur, Morgan County, AL; Notice of Settlement
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-03
... ENVIRONMENTAL PROTECTION AGENCY [FRL-9612-9] Biological Processors of Alabama; Decatur, Morgan... reimbursement of past response costs concerning the Biological Processors of Alabama Superfund Site located in... Ms. Paula V. Painter. Submit your comments by Site name Biological Processors of Alabama Superfund...
Puri, S; Singh, A; Yashik
2010-01-01
Globalisation has given birth to medical tourism. Health and medical tourism are the fastest-growing segments not only in developed nations but in developing countries too. India has become a hot destination, as Indian medical standards match the highly prescribed international standards at a very low cost. However, it is not an unmixed blessing; along with its advantages, it has many unintended side effects. PMID:23113017
A. M. S. Smith; L. B. Lentile; A. T. Hudak; P. Morgan
2007-01-01
The Differenced Normalized Burn Ratio (deltaNBR) is widely used to map post-fire effects in North America from multispectral satellite imagery, but has not been rigorously validated across the great diversity in vegetation types. The importance of these maps to fire rehabilitation crews highlights the need for continued assessment of alternative remote sensing...
Unmixing the Materials and Mechanics Contributions in Non-resolved Object Signatures
2008-09-01
A factorization technique is used to extract the temporal variation of material abundances from hyperspectral or multi-spectral time-resolved signatures, and a Fourier analysis of the temporal variation of material abundance provides ... approximately one hundred wavelengths in the visible spectrum. The frame rate for the instrument was not large enough to collect time-resolved data. However ...
Evaluation of algorithm methods for fluorescence spectra of cancerous and normal human tissues
NASA Astrophysics Data System (ADS)
Pu, Yang; Wang, Wubao; Alfano, Robert R.
2016-03-01
This paper focuses on algorithms that unravel fluorescence spectra by unmixing methods to distinguish cancerous from normal human tissues using measured fluorescence spectroscopy. The biochemical or morphologic changes that cause fluorescence spectral variations appear earlier than histological changes; fluorescence spectroscopy therefore holds great promise as a clinical tool for in vivo diagnosis of early-stage carcinomas and other diseases. The method can further identify tissue biomarkers by decomposing the spectral contributions of different fluorescent molecules of interest. In this work, we investigate the performance of blind source unmixing methods (backward model) and spectral fitting approaches (forward model) in decomposing the contributions of key fluorescent molecules from the tissue mixture background at a selected excitation wavelength. Pairs of adenocarcinoma and normal tissues, confirmed by a pathologist, were excited at a selective wavelength of 340 nm. The emission spectra of resected fresh tissue were used to evaluate the relative changes of collagen, reduced nicotinamide adenine dinucleotide (NADH), and flavin by various spectral unmixing methods. Two categories of algorithms are introduced and evaluated: forward methods and blind source separation [such as Principal Component Analysis (PCA), Independent Component Analysis (ICA), and Nonnegative Matrix Factorization (NMF)]. The purpose of the spectral analysis is to discard redundant information that conceals the difference between the two tissue types while retaining their diagnostic significance. The predictions of the different methods were compared to the gold standard of histopathology. The results indicate that key fluorophores within tissue, e.g. tryptophan, collagen, NADH, and flavin, show differences in relative content between different types of human cancerous and normal tissues. 
The sensitivity, specificity, and receiver operating characteristic (ROC) are finally employed as criteria to evaluate the efficacy of these methods in cancer detection. The underlying physical and biological basis for these optical approaches is discussed with examples. This ex vivo preliminary trial demonstrates that the criteria derived from the different methods can distinguish carcinoma from normal tissue with good sensitivity and specificity; among them, ICA appears to be the superior method in prediction accuracy.
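Of the blind-source-separation methods the abstract lists (PCA, ICA, NMF), NMF is the most compact to sketch. The following is an illustrative Lee-Seung multiplicative-update implementation, assuming the measured nonnegative spectra are stacked as columns of a matrix V; it is a generic sketch, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def nmf(V, r, iters=1000):
    """Lee-Seung multiplicative updates minimizing ||V - W H||_F for V >= 0.

    Columns of W play the role of recovered component spectra;
    rows of H are their abundances in each measurement.
    """
    n, m = V.shape
    W = rng.random((n, r)) + 0.1
    H = rng.random((r, m)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)   # update abundances
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)   # update component spectra
    return W, H
```

On data that truly are a nonnegative mixture of r components, the reconstruction error drops close to zero, which is the property exploited when separating fluorophore contributions.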
NASA Astrophysics Data System (ADS)
Leverington, D. W.
2008-12-01
The use of remote-sensing techniques in the discrimination of rock and soil classes in northern regions can help support a diverse range of activities including environmental characterization, mineral exploration, and the study of Quaternary paleoenvironments. Images of low spectral resolution can commonly be used in the mapping of lithological classes possessing distinct spectral characteristics, but hyperspectral databases offer greater potential for discrimination of materials distinguished by more subtle reflectance properties. Orbiting sensors offer an especially flexible and cost-effective means for acquisition of data to workers unable to conduct airborne surveys. In an effort to better constrain the utility of hyperspectral datasets in northern research, this study undertook to investigate the effectiveness of EO-1 Hyperion data in the discrimination and mapping of surface classes at a study area on Melville Island, Nunavut. Bedrock units in the immediate study area consist of late-Paleozoic clastic and carbonate sequences of the Sverdrup Basin. Weathered and frost-shattered felsenmeer, predominantly taking the form of boulder- to pebble-sized clasts that have accumulated in place and that mantle parent bedrock units, is the most common surface material in the study area. Hyperion data were converted from at-sensor radiance to reflectance, and were then linearly unmixed on the basis of end-member spectra measured from field samples. Hyperion unmixing results effectively portray the general fractional cover of six end members, although the fraction images of several materials contain background values that in some areas overestimate surface exposure. The best separated end members include the snow, green vegetation, and red-weathering sandstone classes, whereas the classes most negatively affected by elevated fraction values include the mudstone, limestone, and 'other' sandstone classes. 
Local overestimates of fractional cover are likely related to the shared lithological and weathering characteristics of several clastic and carbonate units, and may also be related to the lower radiometric precision characteristic of Hyperion data. Despite these issues, the databases generated in this study successfully provide useful complementary information to that provided by maps of local bedrock geology.
Multiple core computer processor with globally-accessible local memories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shalf, John; Donofrio, David; Oliker, Leonid
A multi-core computer processor including a plurality of processor cores interconnected in a Network-on-Chip (NoC) architecture, a plurality of caches, each of the plurality of caches being associated with one and only one of the plurality of processor cores, and a plurality of memories, each of the plurality of memories being associated with a different set of at least one of the plurality of processor cores and each of the plurality of memories being configured to be visible in a global memory address space such that the plurality of memories are visible to two or more of the plurality of processor cores.
Scalable load balancing for massively parallel distributed Monte Carlo particle transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Brien, M. J.; Brantley, P. S.; Joy, K. I.
2013-07-01
In order to run computer simulations efficiently on massively parallel computers with hundreds of thousands or millions of processors, care must be taken that the calculation is load balanced across the processors. Examining the workload of every processor leads to an unscalable algorithm, with run time at least as large as O(N), where N is the number of processors. We present a scalable load balancing algorithm, with run time O(log(N)), that involves iterated processor-pair-wise balancing steps, ultimately leading to a globally balanced workload. We demonstrate scalability of the algorithm up to 2 million processors on the Sequoia supercomputer at Lawrence Livermore National Laboratory.
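The iterated processor-pair-wise balancing idea can be illustrated with a dimension-exchange sketch on a hypercube of processors: after log2(N) sweeps of pairwise averaging, every processor holds the global mean workload. This toy model uses floats and ignores the integer work quanta and particle-communication costs the actual algorithm must handle.

```python
def balance(loads):
    """Iterated processor-pair-wise balancing (hypercube dimension exchange).

    loads: per-processor work counts; length must be a power of two.
    Each sweep pairs processor i with partner i XOR d and averages their
    loads; after log2(N) sweeps the workload is globally balanced.
    """
    n = len(loads)
    loads = [float(w) for w in loads]
    d = 1
    while d < n:
        for i in range(n):
            j = i ^ d              # partner along this hypercube dimension
            if i < j:
                avg = (loads[i] + loads[j]) / 2
                loads[i] = loads[j] = avg
        d <<= 1
    return loads
```

Only log2(N) sweeps are needed, matching the O(log(N)) run time claimed for the algorithm.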
Parallel processor-based raster graphics system architecture
Littlefield, Richard J.
1990-01-01
An apparatus for generating raster graphics images from the graphics command stream includes a plurality of graphics processors connected in parallel, each adapted to receive any part of the graphics command stream for processing the command stream part into pixel data. The apparatus also includes a frame buffer for mapping the pixel data to pixel locations and an interconnection network for interconnecting the graphics processors to the frame buffer. Through the interconnection network, each graphics processor may access any part of the frame buffer concurrently with another graphics processor accessing any other part of the frame buffer. The plurality of graphics processors can thereby transmit concurrently pixel data to pixel locations in the frame buffer.
NASA Astrophysics Data System (ADS)
Dave, Gaurav P.; Sureshkumar, N.; Blessy Trencia Lincy, S. S.
2017-11-01
The current trend in processor manufacturing focuses on multi-core architectures rather than increasing the clock speed for performance improvement. Graphics processors have become commodity hardware for providing fast co-processing in computer systems. Developments in IoT, social-networking web applications, and big data have created huge demand for data-processing activities, and such throughput-intensive applications inherently contain data-level parallelism, which is well suited to SIMD-architecture-based GPUs. This paper reviews the architectural aspects of multi/many-core processors and graphics processors. Different case studies are taken to compare the performance of throughput computing applications using shared-memory programming in OpenMP and CUDA API based programming.
Bagnato, Giuseppe; Iulianelli, Adolfo; Sanna, Aimaro; Basile, Angelo
2017-03-23
Glycerol represents an emerging renewable bio-derived feedstock, which could be used as a source for producing hydrogen through the steam reforming reaction. In this review, the state of the art of glycerol production processes is reviewed, with particular focus on glycerol reforming reactions and on the main catalysts under development. Furthermore, the use of membrane catalytic reactors instead of conventional reactors for steam reforming is discussed. Finally, the review describes the utilization of Pd-based membrane reactor technology, pointing out the ability of these alternative fuel processors to simultaneously extract high-purity hydrogen and enhance the overall performance of the reaction system in terms of glycerol conversion and hydrogen yield.
Eigensolution of finite element problems in a completely connected parallel architecture
NASA Technical Reports Server (NTRS)
Akl, F.; Morel, M.
1989-01-01
A parallel algorithm is presented for the solution of the generalized eigenproblem in linear elastic finite element analysis. The algorithm is based on a completely connected parallel architecture in which each processor is allowed to communicate with all other processors. The algorithm is successfully implemented on a tightly coupled MIMD parallel processor. A finite element model is divided into m domains each of which is assumed to process n elements. Each domain is then assigned to a processor or to a logical processor (task) if the number of domains exceeds the number of physical processors. The effect of the number of domains, the number of degrees-of-freedom located along the global fronts, and the dimension of the subspace on the performance of the algorithm is investigated. For a 64-element rectangular plate, speed-ups of 1.86, 3.13, 3.18, and 3.61 are achieved on two, four, six, and eight processors, respectively.
Extended performance electric propulsion power processor design study. Volume 2: Technical summary
NASA Technical Reports Server (NTRS)
Biess, J. J.; Inouye, L. Y.; Schoenfeld, A. D.
1977-01-01
Electric propulsion power processor technology has progressed during the past decade to the point that it is considered ready for application. Several power processor design concepts were evaluated and compared. Emphasis was placed on a 30 cm ion thruster power processor with a beam supply power rating of 2.2 kW to 10 kW for the main propulsion power stage. Extensions in power processor performance were defined and designed in sufficient detail to determine efficiency, component weight, part count, reliability and thermal control. A detailed design was performed on a microprocessor as the thyristor power processor controller. A reliability analysis was performed to evaluate the effect of the control electronics redesign. Preliminary electrical design, mechanical design and thermal analysis were performed on a 6 kW power transformer for the beam supply. Bi-Mod mechanical, structural and thermal control configurations were evaluated for the power processor, and preliminary estimates of mechanical weight were determined.
Wald, Ingo; Ize, Santiago
2015-07-28
Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
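The two phases described in this patent abstract, first mapping each object to the grid portions that bound it and then populating each portion, can be sketched sequentially for a 1-D grid. The function names and the 1-D simplification are illustrative, not from the patent.

```python
def populate(grid_bounds, objects, n):
    """Two-phase parallel grid population, simulated sequentially.

    grid_bounds: (lo, hi) extent of a 1-D grid split into n equal portions.
    objects: list of (lo, hi) intervals.
    Phase 1: each "processor" p takes a distinct set of objects and finds
    which grid portions at least partially bound each object.
    Phase 2: each processor owns one portion and populates it from those
    results (gathered here into one structure for simplicity).
    """
    lo, hi = grid_bounds
    w = (hi - lo) / n
    hits = [[] for _ in range(n)]          # portion index -> bounded objects
    for p in range(n):
        for obj in objects[p::n]:          # processor p's distinct object set
            first = max(0, int((obj[0] - lo) / w))
            last = min(n - 1, int((obj[1] - lo) / w))
            for portion in range(first, last + 1):
                hits[portion].append(obj)
    return hits
```

An object spanning a portion boundary is correctly recorded in both portions, which is why phase 1 must precede the per-portion population.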
Sequence information signal processor
Peterson, John C.; Chow, Edward T.; Waterman, Michael S.; Hunkapillar, Timothy J.
1999-01-01
An electronic circuit is used to compare two sequences, such as genetic sequences, to determine which alignment of the sequences produces the greatest similarity. The circuit includes a linear array of series-connected processors, each of which stores a single element from one of the sequences and compares that element with each successive element in the other sequence. For each comparison, the processor generates a scoring parameter that indicates which segment ending at those two elements produces the greatest degree of similarity between the sequences. The processor uses the scoring parameter to generate a similar scoring parameter for a comparison between the stored element and the next successive element from the other sequence. The processor also delivers the scoring parameter to the next processor in the array for use in generating a similar scoring parameter for another pair of elements. The electronic circuit determines which processor and alignment of the sequences produce the scoring parameter with the highest value.
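The recurrence each processor in the linear array evaluates is a standard local-alignment (Smith-Waterman-style) scoring recurrence; column i of the score table corresponds to the processor holding element i of the stored sequence. A sequential sketch with illustrative score parameters (the hardware's actual scoring scheme is not specified here):

```python
def local_alignment_score(a, b, match=2, mismatch=-1, gap=-1):
    """Best local-alignment score over all segment pairs of a and b.

    The max-with-zero in the recurrence lets a high-scoring segment
    start anywhere, which is what "segment ending at those two
    elements" refers to in the abstract.
    """
    best = 0
    prev = [0] * (len(b) + 1)
    for ca in a:
        cur = [0]
        for j, cb in enumerate(b, 1):
            s = max(0,
                    prev[j - 1] + (match if ca == cb else mismatch),
                    prev[j] + gap,
                    cur[j - 1] + gap)
            cur.append(s)
            best = max(best, s)
        prev = cur
    return best
```

Identical sequences score the full match total, unrelated sequences score zero, and a shared substring scores on its own regardless of flanking mismatches.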
Conditional load and store in a shared memory
Blumrich, Matthias A; Ohmacht, Martin
2015-02-03
A method, system and computer program product for implementing load-reserve and store-conditional instructions in a multi-processor computing system. The computing system includes a multitude of processor units and a shared memory cache, and each of the processor units has access to the memory cache. In one embodiment, the method comprises providing the memory cache with a series of reservation registers, and storing in these registers addresses reserved in the memory cache for the processor units as a result of issuing load-reserve requests. In this embodiment, when one of the processor units makes a request to store data in the memory cache using a store-conditional request, the reservation registers are checked to determine if an address in the memory cache is reserved for that processor unit. If an address in the memory cache is reserved for that processor, the data are stored at this address.
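The reservation-register protocol described above can be modeled in a few lines. This toy simulation (class and method names are ours, not from the patent) shows why a store-conditional fails after another processor stores to the reserved address:

```python
class SharedCache:
    """Toy model of load-reserve / store-conditional with reservation registers."""

    def __init__(self):
        self.mem = {}
        self.reservation = {}      # processor id -> reserved address

    def load_reserve(self, cpu, addr):
        """Record a reservation for this processor and return the value."""
        self.reservation[cpu] = addr
        return self.mem.get(addr, 0)

    def store_conditional(self, cpu, addr, value):
        """Store only if this processor still holds a reservation on addr."""
        if self.reservation.get(cpu) != addr:
            return False           # reservation lost: the store fails
        self.mem[addr] = value
        # A successful store invalidates every reservation on this address.
        for other, a in list(self.reservation.items()):
            if a == addr:
                del self.reservation[other]
        return True
```

The failure path is the essential behavior: it is what lets software build atomic read-modify-write sequences out of the two instructions.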
Code of Federal Regulations, 2011 CFR
2011-04-01
... information processors: form of application and amendments. 242.609 Section 242.609 Commodity and Securities....609 Registration of securities information processors: form of application and amendments. (a) An application for the registration of a securities information processor shall be filed on Form SIP (§ 249.1001...
Code of Federal Regulations, 2010 CFR
2010-04-01
... information processors: form of application and amendments. 242.609 Section 242.609 Commodity and Securities....609 Registration of securities information processors: form of application and amendments. (a) An application for the registration of a securities information processor shall be filed on Form SIP (§ 249.1001...
Optical Associative Processors for Visual Perception
NASA Astrophysics Data System (ADS)
Casasent, David; Telfer, Brian
1988-05-01
We consider various associative processor modifications required to allow these systems to be used for visual perception, scene analysis, and object recognition. For these applications, decisions on the class of the objects present in the input image are required, and thus heteroassociative memories are necessary (rather than the autoassociative memories that have been given the most attention). We analyze the performance of both associative processors and note that there is considerable difference between heteroassociative and autoassociative memories. We describe associative processors suitable for realizing functions such as: distortion invariance (using linear discriminant function memory synthesis techniques), noise and image processing performance (using autoassociative memories in cascade with a heteroassociative processor and with a finite number of autoassociative memory iterations employed), shift invariance (achieved through the use of associative processors operating on feature space data), and the analysis of multiple objects in high noise (achieved using associative processing of the output from symbolic correlators). We detail and provide initial demonstrations of the use of associative processors operating on iconic, feature space and symbolic data, as well as adaptive associative processors.
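A minimal numerical sketch of a heteroassociative memory, the kind the authors argue classification requires: with (near-)orthonormal key vectors, an outer-product matrix maps each key to its class label. This linear-algebra toy omits the optical realization entirely and assumes idealized keys.

```python
import numpy as np

def train(keys, labels):
    """Outer-product heteroassociative memory: M = sum_k label_k key_k^T.

    Assumes keys are (near-)orthonormal so that M @ key_k ~ label_k.
    """
    return sum(np.outer(l, k) for k, l in zip(keys, labels))

def recall(M, key):
    """Recall a label by one matrix-vector product plus thresholding."""
    return (M @ key > 0.5).astype(int)
```

Unlike an autoassociative memory, the stored output need not have the same dimensionality as the key, which is what makes class decisions possible.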
Enabling Future Robotic Missions with Multicore Processors
NASA Technical Reports Server (NTRS)
Powell, Wesley A.; Johnson, Michael A.; Wilmot, Jonathan; Some, Raphael; Gostelow, Kim P.; Reeves, Glenn; Doyle, Richard J.
2011-01-01
Recent commercial developments in multicore processors (e.g. Tilera, Clearspeed, HyperX) have provided an option for high performance embedded computing that rivals the performance attainable with FPGA-based reconfigurable computing architectures. Furthermore, these processors offer more straightforward and streamlined application development by allowing the use of conventional programming languages and software tools in lieu of hardware design languages such as VHDL and Verilog. With these advantages, multicore processors can significantly enhance the capabilities of future robotic space missions. This paper will discuss these benefits, along with onboard processing applications where multicore processing can offer advantages over existing or competing approaches. This paper will also discuss the key architectural features of current commercial multicore processors. In comparison to the current art, the features and advancements necessary for spaceflight multicore processors will be identified. These include power reduction, radiation hardening, inherent fault tolerance, and support for common spacecraft bus interfaces. Lastly, this paper will explore how multicore processors might evolve with advances in electronics technology and how avionics architectures might evolve once multicore processors are inserted into NASA robotic spacecraft.
Hot Chips and Hot Interconnects for High End Computing Systems
NASA Technical Reports Server (NTRS)
Saini, Subhash
2005-01-01
I will discuss several processors: 1. The Cray proprietary processor used in the Cray X1; 2. The IBM Power 3 and Power 4 used in IBM SP 3 and IBM SP 4 systems; 3. The Intel Itanium and Xeon, used in the SGI Altix systems and clusters respectively; 4. IBM System-on-a-Chip used in IBM BlueGene/L; 5. HP Alpha EV68 processor used in the DOE ASCI Q cluster; 6. SPARC64 V processor, which is used in the Fujitsu PRIMEPOWER HPC2500; 7. An NEC proprietary processor, which is used in NEC SX-6/7; 8. Power 4+ processor, which is used in Hitachi SR11000; 9. NEC proprietary processor, which is used in the Earth Simulator. The IBM POWER5 and Red Storm computing systems will also be discussed. The architectures of these processors will first be presented, followed by interconnection networks and a description of high-end computer systems based on these processors and networks. The performance of various hardware/programming model combinations will then be compared, based on the latest NAS Parallel Benchmark results (MPI, OpenMP/HPF, and hybrid MPI + OpenMP). The tutorial will conclude with a discussion of general trends in the field of high performance computing (quantum computing, DNA computing, cellular engineering, and neural networks).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahn, Y.K.; Chen, H.T.; Helm, R.W.
1980-01-01
A biomass allocation model has been developed to show the most profitable combination of biomass feedstocks, thermochemical conversion processes, and fuel products to serve the seasonal conditions in a regional market. This optimization model provides a tool for quickly calculating the most profitable biomass missions from a large number of potential biomass missions. Other components of the system serve as a convenient storage and retrieval mechanism for biomass marketing and thermochemical conversion processing data. The system can be accessed through the use of a computer terminal, or it could be adapted to a portable microprocessor. A User's Manual for the system has been included in Appendix A of the report. The validity of any biomass allocation solution provided by the allocation model is dependent on the accuracy of the data base. The initial data base was constructed from values obtained from the literature and, consequently, as more current thermochemical conversion processing and manufacturing costs and efficiencies become available, the data base should be revised. Biomass-derived fuels included in the data base are the following: medium-Btu gas, low-Btu gas, substitute natural gas, ammonia, methanol, electricity, gasoline, and fuel oil. The market sectors served by the fuels include: residential, electric utility, chemical (industrial), and transportation. Regional/seasonal costs, availabilities, and heating values for 61 woody and non-woody biomass species are included. The study included four regions in the United States which were selected because there was both an availability of biomass and a commercial demand for the derived fuels: Region I: NY, WV, PA; Region II: GA, AL, MS; Region III: IN, IL, IA; and Region IV: OR, WA.
NASA Astrophysics Data System (ADS)
Campanari, Stefano; Manzolini, Giampaolo; Garcia de la Iglesia, Fernando
This work presents a study of the energy and environmental balances for electric vehicles using batteries or fuel cells, through the methodology of well-to-wheel (WTW) analysis, applied to ECE-EUDC driving cycle simulations. Well-to-wheel balances are carried out considering different scenarios for the primary energy supply. The fuel cell electric vehicles (FCEV) are based on polymer electrolyte membrane (PEM) technology, and the possibility is discussed of feeding the fuel cell with (i) hydrogen directly stored onboard and generated separately by water electrolysis (using renewable energy sources) or by conversion processes using coal or natural gas as the primary energy source (through gasification or reforming), or (ii) hydrogen generated onboard with a fuel processor fed by natural gas, ethanol, methanol or gasoline. The battery electric vehicles (BEV) are based on Li-ion batteries charged with electricity generated by central power stations, either based on renewable energy, coal, or natural gas, or reflecting the average EU power generation feedstock. A further alternative is considered: the integration of a small battery into the FCEV, exploiting a hybrid solution that allows recovering energy during decelerations and substantially improves the system energy efficiency. After a preliminary WTW analysis carried out under nominal operating conditions, the work discusses the simulation of the vehicles' energy consumption when following the standardized ECE-EUDC driving cycle. The analysis is carried out considering different hypotheses about the vehicle driving range, the maximum speed requirements, and the possibility of sustaining more aggressive driving cycles. The analysis leads to interesting conclusions: the best results are achieved by BEVs only for very limited driving range requirements, while the fuel cell solutions yield the best performance for more extended driving ranges, where the battery weight becomes too high. 
Results are finally compared to those of conventional internal combustion engine vehicles, showing the potential advantages of the different solutions considered in the paper and indicating the possibility to reach the target of zero-emission vehicles (ZEV).
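The WTW methodology chains the efficiencies of each step from primary energy to the wheel. The sketch below multiplies out two hypothetical pathways; the stage names and all numbers are placeholder assumptions for illustration only, not values from this study.

```python
# Hypothetical illustrative stage efficiencies -- NOT values from the paper.
def well_to_wheel(stages):
    """Chain efficiency of a well-to-wheel pathway: the product of its stage efficiencies."""
    eta = 1.0
    for e in stages.values():
        eta *= e
    return eta

fcev = {"NG reforming": 0.75, "H2 compression": 0.90,
        "PEM stack": 0.50, "drivetrain": 0.90}
bev = {"NG power plant": 0.55, "grid + charging": 0.85,
       "battery round trip": 0.90, "drivetrain": 0.90}
```

Because the chain is a simple product, a weak stage anywhere in the pathway dominates the overall balance, which is why the primary-energy scenario matters so much in WTW comparisons.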
High-performance ultra-low power VLSI analog processor for data compression
NASA Technical Reports Server (NTRS)
Tawel, Raoul (Inventor)
1996-01-01
An apparatus for data compression employing a parallel analog processor. The apparatus includes an array of processor cells with N columns and M rows wherein the processor cells have an input device, memory device, and processor device. The input device is used for inputting a series of input vectors. Each input vector is simultaneously input into each column of the array of processor cells in a pre-determined sequential order. An input vector is made up of M components, ones of which are input into ones of M processor cells making up a column of the array. The memory device is used for providing ones of M components of a codebook vector to ones of the processor cells making up a column of the array. A different codebook vector is provided to each of the N columns of the array. The processor device is used for simultaneously comparing the components of each input vector to corresponding components of each codebook vector, and for outputting a signal representative of the closeness between the compared vector components. A combination device is used to combine the signal output from each processor cell in each column of the array and to output a combined signal. A closeness determination device is then used for determining which codebook vector is closest to an input vector from the combined signals, and for outputting a codebook vector index indicating which of the N codebook vectors was the closest to each input vector input into the array.
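The codebook search that the analog array performs in parallel, comparing an input vector against all N codebook vectors and reporting the index of the closest, reduces to a nearest-neighbor computation, sketched sequentially here (squared Euclidean distance is an assumption; the patent only requires a closeness signal):

```python
def nearest_codebook(x, codebook):
    """Index of the codebook vector closest to input vector x.

    The analog array compares all N codebook columns simultaneously;
    here the same comparison runs as a sequential loop.
    """
    best, best_d = 0, float("inf")
    for idx, c in enumerate(codebook):
        d = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        if d < best_d:
            best, best_d = idx, d
    return best
```

Emitting the winning index rather than the vector itself is the compression: each M-component input is replaced by a single codebook index.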
NASA Astrophysics Data System (ADS)
Nguyen, Gia Luong Huu
Fuel cells can produce electricity with high efficiency, low pollutant emissions, and low noise. Fuel cell systems have been demonstrated as reliable power generators with power outputs from a few watts to a few megawatts. With proper equipment, fuel cell systems can also produce heating and cooling, thus increasing their overall efficiency. To increase acceptance by electrical utilities and building owners, fuel cell systems must operate more dynamically and integrate well with renewable energy resources. This research studies the dynamic performance of fuel cells and the integration of fuel cells with other equipment at three levels: (i) the fuel cell stack operating on hydrogen and reformate gases, (ii) the fuel cell system consisting of a fuel reformer, a fuel cell stack, and a heat recovery unit, and (iii) the hybrid energy system consisting of photovoltaic panels, a fuel cell system, and energy storage. In the first part, this research studied the steady-state and dynamic performance of a high temperature PEM fuel cell stack. Collaborators at Aalborg University (Aalborg, Denmark) conducted experiments on a high temperature PEM fuel cell short stack at steady state and during transients. Along with the experimental activities, this research developed a first-principles dynamic model of a fuel cell stack. The dynamic model was compared to the experimental results when operating on different reformate concentrations. Finally, the dynamic performance of the fuel cell stack for a rapid increase and rapid decrease in power was evaluated. The dynamic model well predicted the performance of the well-performing cells in the experimental fuel cell stack. The second part of the research studied the dynamic response of a high temperature PEM fuel cell system consisting of a fuel reformer, a fuel cell stack, and a heat recovery unit with high thermal integration. 
After verifying the model performance with the obtained experimental data, the research studied the control of airflow to regulate the temperature of reactors within the fuel processor. The dynamic model provided a platform to test the dynamic response for different control gains. With sufficient sensing and appropriate control, a rapid response to maintain the temperature of the reactor despite an increase in power was possible. The third part of the research studied the use of a fuel cell in conjunction with photovoltaic panels, and energy storage to provide electricity for buildings. This research developed an optimization framework to determine the size of each device in the hybrid energy system to satisfy the electrical demands of buildings and yield the lowest cost. The advantage of having the fuel cell with photovoltaic and energy storage was the ability to operate the fuel cell at baseload at night, thus reducing the need for large battery systems to shift the solar power produced in the day to the night. In addition, the dispatchability of the fuel cell provided an extra degree of freedom necessary for unforeseen disturbances. An operation framework based on model predictive control showed that the method is suitable for optimizing the dispatch of the hybrid energy system.
On the relationship between parallel computation and graph embedding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gupta, A.K.
1989-01-01
The problem of efficiently simulating an algorithm designed for an n-processor parallel machine G on an m-processor parallel machine H with n > m arises when parallel algorithms designed for an ideal size machine are simulated on existing machines which are of a fixed size. The author studies this problem when every processor of H takes over the function of a number of processors in G, and he phrases the simulation problem as a graph embedding problem. New embeddings presented address relevant issues arising from the parallel computation environment. The main focus centers around embedding complete binary trees into smaller-sized binary trees, butterflies, and hypercubes. He also considers simultaneous embeddings of r source machines into a single hypercube. Constant factors play a crucial role in his embeddings since they are not only important in practice but also lead to interesting theoretical problems. All of his embeddings minimize dilation and load, which are the conventional cost measures in graph embeddings and determine the maximum amount of time required to simulate one step of G on H. His embeddings also optimize a new cost measure called (α,β)-utilization, which characterizes how evenly the processors of H are used by the processors of G. Ideally, the utilization should be balanced (i.e., every processor of H simulates at most n/m processors of G), and the (α,β)-utilization measures how far off from a balanced utilization the embedding is. He presents embeddings for the situation when some processors of G have different capabilities (e.g. memory or I/O) than others and the processors with different capabilities are to be distributed uniformly among the processors of H. Placing such conditions on an embedding results in an increase in some of the cost measures.
NASA Astrophysics Data System (ADS)
Guo, Baoshan; Lei, Cheng; Ito, Takuro; Yaxiaer, Yalikun; Kobayashi, Hirofumi; Jiang, Yiyue; Tanaka, Yo; Ozeki, Yasuyuki; Goda, Keisuke
2017-02-01
The development of reliable, sustainable, and economical sources of alternative fuels is an important, but challenging goal for the world. As an alternative to liquid fossil fuels, microalgal biofuel is expected to play a key role in reducing the detrimental effects of global warming since microalgae absorb atmospheric CO2 via photosynthesis. Unfortunately, conventional analytical methods only provide population-averaged lipid contents and fail to characterize a diverse population of microalgal cells with single-cell resolution in a noninvasive and interference-free manner. Here we demonstrate high-throughput label-free single-cell screening of lipid-producing microalgal cells with optofluidic time-stretch quantitative phase microscopy. In particular, we use Euglena gracilis - an attractive microalgal species that produces wax esters (suitable for biodiesel and aviation fuel after refinement) within lipid droplets. Our optofluidic time-stretch quantitative phase microscope is based on an integration of a hydrodynamic-focusing microfluidic chip, an optical time-stretch phase-contrast microscope, and a digital image processor equipped with machine learning. As a result, it provides both the opacity and phase contents of every single cell at a high throughput of 10,000 cells/s. We characterize heterogeneous populations of E. gracilis cells under two different culture conditions to evaluate their lipid production efficiency. Our method holds promise as an effective analytical tool for microalgae-based biofuel production.
NASA Technical Reports Server (NTRS)
Chatfield, Robert B.; Andreae, Meinrat O.
2016-01-01
Previous studies of emission factors from biomass burning are prone to large errors since they ignore the interplay of mixing and varying pre-fire background CO2 levels. Such complications severely affected our studies of 446 forest fire plume samples measured in the Western US by the science teams of NASA's SEAC4RS and ARCTAS airborne missions. Consequently we propose a Mixed Effects Regression Emission Technique (MERET) to check techniques like the Normalized Emission Ratio Method (NERM), where use of sequential observations cannot disentangle emissions and mixing. We also evaluate a simpler "consensus" technique. All techniques relate emissions to fuel burned using C(sub burn) = delta C(sub tot) added to the fire plume, where C(sub tot) approximately equals CO2 + CO. Mixed-effects regression can estimate pre-fire background values of C(sub tot) (indexed by observation j) simultaneously with emission factors indexed by individual species i, (delta x(sub i)/C(sub burn))(sub i,j). MERET and "consensus" require more than two emissions indicators. Our studies excluded samples where exogenous CO or CH4 might have been fed into a fire plume, mimicking emission. We sought to let the data on 13 gases and particulate properties suggest clusters of variables and plume types, using non-negative matrix factorization (NMF). While samples were mixtures, the NMF unmixing suggested purer burn types. Particulate properties (b(sub scat), b(sub abs), SSA, AAE) and gas-phase emissions were interrelated. Finally, we sought a simple categorization useful for modeling ozone production in plumes. Two kinds of fires produced high ozone: those with large fuel nitrogen as evidenced by remnant CH3CN in the plumes, and also those from very intense large burns. Fire types with optimal ratios of delta NOy/delta HCHO associate with the highest additional ozone per unit C(sub burn). Perhaps these plumes exhibit limited NOx binding to reactive organics.
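The background-plus-emission structure that MERET exploits can be illustrated with an ordinary least-squares toy (a hypothetical sketch, not the authors' mixed-effects implementation): per-plume backgrounds b_j and a shared emission factor e are estimated simultaneously from samples of the form x = b_j + e * C_burn.

```python
import numpy as np

rng = np.random.default_rng(0)
n_plumes, n_samp = 3, 20
b_true = np.array([400.0, 405.0, 398.0])   # per-plume backgrounds (ppm), made up
e_true = 0.08                              # emission factor: ppm species per ppm C burned
C = rng.uniform(5, 50, size=(n_plumes, n_samp))            # C_burn enhancements
x = b_true[:, None] + e_true * C + rng.normal(0.0, 0.05, (n_plumes, n_samp))

# Design matrix: one indicator column per plume background + one C_burn column,
# so backgrounds and the emission factor are estimated in a single regression.
A = np.zeros((n_plumes * n_samp, n_plumes + 1))
y = np.zeros(n_plumes * n_samp)
for j in range(n_plumes):
    sl = slice(j * n_samp, (j + 1) * n_samp)
    A[sl, j] = 1.0
    A[sl, -1] = C[j]
    y[sl] = x[j]

coef, *_ = np.linalg.lstsq(A, y, rcond=None)
b_hat, e_hat = coef[:n_plumes], coef[-1]
```

The point of the joint fit, as in the abstract, is that sequential-ratio methods like NERM cannot separate the varying backgrounds b_j from the emission signal, whereas the regression recovers both at once.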
NASA Technical Reports Server (NTRS)
Chatfield, Robert B.; Andreae, Meinrat O.
2015-01-01
Previous studies of emission factors from biomass burning are prone to large errors since they ignore the interplay of mixing and varying pre-fire background CO2 levels. Such complications severely affected our studies of 446 forest fire plume samples measured in the Western US by the science teams of NASA's SEAC4RS and ARCTAS airborne missions. Consequently we propose a Mixed Effects Regression Emission Technique (MERET) to check techniques like the Normalized Emission Ratio Method (NERM), where use of sequential observations cannot disentangle emissions and mixing. We also evaluate a simpler "consensus" technique. All techniques relate emissions to fuel burned using C(sub burn) = delta C(sub tot) added to the fire plume, where C(sub tot) approximately equals CO2 + CO. Mixed-effects regression can estimate pre-fire background values of C(sub tot) (indexed by observation j) simultaneously with emission factors indexed by individual species i, (delta x(sub i)/C(sub burn))(sub i,j). MERET and "consensus" require more than two emissions indicators. Our studies excluded samples where exogenous CO or CH4 might have been fed into a fire plume, mimicking emission. We sought to let the data on 13 gases and particulate properties suggest clusters of variables and plume types, using non-negative matrix factorization (NMF). While samples were mixtures, the NMF unmixing suggested purer burn types. Particulate properties (b(sub scat), b(sub abs), SSA, AAE) and gas-phase emissions were interrelated. Finally, we sought a simple categorization useful for modeling ozone production in plumes. Two kinds of fires produced high ozone: those with large fuel nitrogen as evidenced by remnant CH3CN in the plumes, and also those from very intense large burns. Fire types with optimal ratios of delta NOy/delta HCHO associate with the highest additional ozone per unit C(sub burn). Perhaps these plumes exhibit limited NOx binding to reactive organics.
Code of Federal Regulations, 2011 CFR
2011-04-01
... registration as a securities information processor or to amend such an application or registration. 249.1001..., SECURITIES EXCHANGE ACT OF 1934 Form for Registration of, and Reporting by Securities Information Processors § 249.1001 Form SIP, for application for registration as a securities information processor or to amend...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-13
... Fisheries Act (AFA) trawl catcher/processor sector (otherwise known as the Amendment 80 sector... catcher/processors. Hook-and-line catcher/processors are allocated 48.7 percent of the annual BSAI Pacific... harvest of Pacific cod by hook-and-line catcher/processors, although this is one of the major groundfish...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-10
... the Securities Information Processors (``SIPs'' or ``Processors'') responsible for consolidation of... Plan. \\9\\ 17 CFR 242.603(b). The Plan refers to this entity as the Processor. \\10\\ See Section I(T) of... Euronext, to Elizabeth M. Murphy, Secretary, Commission, dated May 24, 2012. The Processors would also...
Simulating Synchronous Processors
1988-06-01
MIT/LCS/TM-359, Simulating Synchronous Processors, Jennifer Lundelius Welch, MIT Laboratory for Computer Science. In this paper we show how a distributed system with synchronous processors and asynchronous message delays can...
Middle School Pupil Writing and the Word Processor.
ERIC Educational Resources Information Center
Ediger, Marlow
Pupils in middle schools should have ample opportunities to write with the use of word processors. Legible writing in longhand will always be necessary in selected situations but, nevertheless, much drudgery is taken care of when using a word processor. Word processors tend to be very user friendly in that few mechanical skills are needed by the…
Code of Federal Regulations, 2010 CFR
2010-04-01
... registration as a securities information processor or to amend such an application or registration. 249.1001..., SECURITIES EXCHANGE ACT OF 1934 Form for Registration of, and Reporting by Securities Information Processors § 249.1001 Form SIP, for application for registration as a securities information processor or to amend...
Analog Processor To Solve Optimization Problems
NASA Technical Reports Server (NTRS)
Duong, Tuan A.; Eberhardt, Silvio P.; Thakoor, Anil P.
1993-01-01
Proposed analog processor solves "traveling-salesman" problem, considered paradigm of global-optimization problems involving routing or allocation of resources. Includes electronic neural network and auxiliary circuitry based partly on concepts described in "Neural-Network Processor Would Allocate Resources" (NPO-17781) and "Neural Network Solves 'Traveling-Salesman' Problem" (NPO-17807). Processor based on highly parallel computing solves problem in significantly less time.
Finite elements and the method of conjugate gradients on a concurrent processor
NASA Technical Reports Server (NTRS)
Lyzenga, G. A.; Raefsky, A.; Hager, G. H.
1985-01-01
An algorithm for the iterative solution of finite element problems on a concurrent processor is presented. The method of conjugate gradients is used to solve the system of matrix equations, which is distributed among the processors of a MIMD computer according to an element-based spatial decomposition. This algorithm is implemented in a two-dimensional elastostatics program on the Caltech Hypercube concurrent processor. The results of tests on up to 32 processors show nearly linear concurrent speedup, with efficiencies over 90 percent for sufficiently large problems.
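The serial core of the method is standard conjugate gradients; in the concurrent version described above, the matrix-vector product and the dot products are the steps distributed across processors by the element-based spatial decomposition. A minimal serial sketch (illustrative only, not the Caltech Hypercube code):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Solve A x = b for a symmetric positive-definite matrix A."""
    n = len(b)
    max_iter = max_iter or n
    x = np.zeros(n)
    r = b - A @ x              # residual
    p = r.copy()               # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p             # in the MIMD version, this product is the distributed step
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Example: a small well-conditioned SPD system
rng = np.random.default_rng(3)
M = rng.normal(size=(10, 10))
A = M @ M.T + 10 * np.eye(10)
b = rng.normal(size=10)
x = conjugate_gradient(A, b)
```

Because each iteration needs only a matrix-vector product and two dot products, the method maps naturally onto an element-based decomposition, which is why the paper reports near-linear speedup on up to 32 processors.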
Sobol, Wlad T
2002-01-01
A simple kinetic model that describes the time evolution of the chemical concentration of an arbitrary compound within the tank of an automatic film processor is presented. It provides insights into the kinetics of chemistry concentration inside the processor's tank; the results facilitate the tasks of processor tuning and quality control (QC). The model has successfully been used in several troubleshooting sessions of low-volume mammography processors for which maintaining consistent QC tracking was difficult due to fluctuations of bromide levels in the developer tank.
Multithreading in vector processors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evangelinos, Constantinos; Kim, Changhoan; Nair, Ravi
In one embodiment, a system includes a processor having a vector processing mode and a multithreading mode. The processor is configured to operate on one thread per cycle in the multithreading mode. The processor includes a program counter register having a plurality of program counters, and the program counter register is vectorized. Each program counter in the program counter register represents a distinct corresponding thread of a plurality of threads. The processor is configured to execute the plurality of threads by activating the plurality of program counters in a round robin cycle.
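The claimed scheduling behavior can be sketched in a few lines (an illustrative software model, not the patented hardware design): one program counter per thread in a "vectorized" register, with threads activated in a round-robin cycle and one thread issuing per cycle.

```python
# Toy model of a vectorized program-counter register: one PC per thread,
# threads activated round-robin, one thread issuing per cycle.
class RoundRobinCore:
    def __init__(self, programs):
        self.programs = programs               # one instruction list per thread
        self.pcs = [0] * len(programs)         # the "vectorized" PC register
        self.active = 0                        # which PC is activated this cycle

    def step(self):
        t = self.active
        self.active = (self.active + 1) % len(self.pcs)   # round-robin rotation
        if self.pcs[t] < len(self.programs[t]):
            instr = self.programs[t][self.pcs[t]]
            self.pcs[t] += 1
            return t, instr
        return t, None                         # thread t has run out of instructions
```

Stepping a two-thread core interleaves the threads one instruction per cycle, which is the one-thread-per-cycle multithreading mode the abstract describes.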
Finite elements and the method of conjugate gradients on a concurrent processor
NASA Technical Reports Server (NTRS)
Lyzenga, G. A.; Raefsky, A.; Hager, B. H.
1984-01-01
An algorithm for the iterative solution of finite element problems on a concurrent processor is presented. The method of conjugate gradients is used to solve the system of matrix equations, which is distributed among the processors of a MIMD computer according to an element-based spatial decomposition. This algorithm is implemented in a two-dimensional elastostatics program on the Caltech Hypercube concurrent processor. The results of tests on up to 32 processors show nearly linear concurrent speedup, with efficiencies over 90% for sufficiently large problems.
A fully reconfigurable photonic integrated signal processor
NASA Astrophysics Data System (ADS)
Liu, Weilin; Li, Ming; Guzzon, Robert S.; Norberg, Erik J.; Parker, John S.; Lu, Mingzhi; Coldren, Larry A.; Yao, Jianping
2016-03-01
Photonic signal processing has been considered a solution to overcome the inherent electronic speed limitations. Over the past few years, an impressive range of photonic integrated signal processors have been proposed, but they usually offer limited reconfigurability, a feature highly needed for the implementation of large-scale general-purpose photonic signal processors. Here, we report and experimentally demonstrate a fully reconfigurable photonic integrated signal processor based on an InP-InGaAsP material system. The proposed photonic signal processor is capable of performing reconfigurable signal processing functions including temporal integration, temporal differentiation and Hilbert transformation. The reconfigurability is achieved by controlling the injection currents to the active components of the signal processor. Our demonstration suggests great potential for chip-scale fully programmable all-optical signal processing.
Neurovision processor for designing intelligent sensors
NASA Astrophysics Data System (ADS)
Gupta, Madan M.; Knopf, George K.
1992-03-01
A programmable multi-task neuro-vision processor, called the Positive-Negative (PN) neural processor, is proposed as a plausible hardware mechanism for constructing robust multi-task vision sensors. The computational operations performed by the PN neural processor are loosely based on the neural activity fields exhibited by certain nervous tissue layers situated in the brain. The neuro-vision processor can be programmed to generate diverse dynamic behavior that may be used for spatio-temporal stabilization (STS), short-term visual memory (STVM), spatio-temporal filtering (STF) and pulse frequency modulation (PFM). A multi-functional vision sensor that performs a variety of information processing operations on time-varying two-dimensional sensory images can be constructed from a parallel and hierarchical structure of numerous individually programmed PN neural processors.
Programming for 1.6 Million cores: Early experiences with IBM's BG/Q SMP architecture
NASA Astrophysics Data System (ADS)
Glosli, James
2013-03-01
With the stall in clock-cycle improvements a decade ago, the drive for computational performance has continued along a path of increasing core counts per processor. The multi-core evolution has been expressed both in symmetric multiprocessor (SMP) architectures and in CPU/GPU architectures. Debates rage in the high performance computing (HPC) community over which architecture best serves HPC. In this talk I will not attempt to resolve that debate but perhaps fuel it. I will discuss the experience of exploiting Sequoia, a 98,304-node IBM Blue Gene/Q SMP at Lawrence Livermore National Laboratory. The advantages and challenges of leveraging the computational power of BG/Q will be detailed through the discussion of two applications. The first application is a Molecular Dynamics code called ddcMD. This is a code developed over the last decade at LLNL and ported to BG/Q. The second application is a cardiac modeling code called Cardioid. This is a code that was recently designed and developed at LLNL to exploit the fine scale parallelism of BG/Q's SMP architecture. Through the lenses of these efforts I'll illustrate the need to rethink how we express and implement our computational approaches. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
When emotionality trumps reason: a study of individual processing style and juror bias.
Gunnell, Justin J; Ceci, Stephen J
2010-01-01
"Cognitive Experiential Self Theory" (CEST) postulates that information-processing proceeds through two pathways, a rational one and an experiential one. The former is characterized by an emphasis on analysis, fact, and logical argument, whereas the latter is characterized by emotional and personal experience. We examined whether individuals influenced by the experiential system (E-processors) are more susceptible to extralegal biases (e.g. defendant attractiveness) than those influenced by the rational system (R-processors). Participants reviewed a criminal trial transcript and defendant profile and determined verdict, sentencing, and extralegal susceptibility. Although E-processors and R-processors convicted attractive defendants at similar rates, E-processors were more likely to convict less attractive defendants. Whereas R-processors did not sentence attractive and less attractive defendants differently, E-processors gave more lenient sentences to attractive defendants and harsher sentences to less attractive defendants. E-processors were also more likely to report that extralegal factors would change their verdicts. Further, the degree to which emotionality trumped rationality within an individual, as measured by a novel scoring method, linearly correlated with harsher sentences and extralegal influence. In sum, the results support an "unattractive harshness" effect during guilt determination, an attraction leniency effect during sentencing and increased susceptibility to extralegal factors within E-processors. Copyright © 2010 John Wiley & Sons, Ltd.
Soft-core processor study for node-based architectures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Houten, Jonathan Roger; Jarosz, Jason P.; Welch, Benjamin James
2008-09-01
Node-based architecture (NBA) designs for future satellite projects hold the promise of decreasing system development time and costs, size, weight, and power and positioning the laboratory to address other emerging mission opportunities quickly. Reconfigurable Field Programmable Gate Array (FPGA) based modules will comprise the core of several of the NBA nodes. Microprocessing capabilities will be necessary with varying degrees of mission-specific performance requirements on these nodes. To enable the flexibility of these reconfigurable nodes, it is advantageous to incorporate the microprocessor into the FPGA itself, either as a hard-core processor built into the FPGA or as a soft-core processor built out of FPGA elements. This document describes the evaluation of three reconfigurable FPGA based processors for use in future NBA systems--two soft cores (MicroBlaze and non-fault-tolerant LEON) and one hard core (PowerPC 405). Two standard performance benchmark applications were developed for each processor. The first, Dhrystone, is a fixed-point operation metric. The second, Whetstone, is a floating-point operation metric. Several trials were run at varying code locations, loop counts, processor speeds, and cache configurations. FPGA resource utilization was recorded for each configuration. Cache configurations impacted the results greatly; for optimal processor efficiency it is necessary to enable caches on the processors. Processor caches carry a penalty; cache error mitigation is necessary when operating in a radiation environment.
Development of small scale cluster computer for numerical analysis
NASA Astrophysics Data System (ADS)
Zulkifli, N. H. N.; Sapit, A.; Mohammed, A. N.
2017-09-01
In this study, two units of personal computer were successfully networked together to form a small scale cluster. Each of the processors involved is a multicore processor with four cores, giving the cluster eight processor cores in total. The cluster runs an Ubuntu 14.04 LINUX environment with an MPI implementation (MPICH2). Two main tests were conducted on the cluster: a communication test and a performance test. The communication test was done to make sure that the computers are able to pass the required information without any problem, using a simple MPI Hello program written in C. Additionally, a performance test was done to show that the cluster's calculation performance is much better than that of a single CPU computer. In this performance test, the same code was run four times, using a single node, 2 processors, 4 processors, and 8 processors. The results show that with additional processors, the time required to solve the problem decreases; the calculation time roughly halves when the number of processors is doubled. To conclude, we successfully developed a small scale cluster computer using common hardware which is capable of higher computing power than a single CPU processor, and this can be beneficial for research that requires high computing power, especially numerical analysis such as finite element analysis, computational fluid dynamics, and computational physics analysis.
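The decompose-compute-reduce pattern such MPI tests rely on can be sketched serially (a hypothetical Python illustration of the pattern; the actual cluster tests used C with MPICH2 and real processes):

```python
# Serial sketch of the MPI pattern: the work range is block-decomposed across
# "ranks", each rank computes a partial result, and a reduction combines them
# (the role of MPI_Reduce in a real MPI program).
def decompose(n, size):
    """Split n items into `size` near-equal contiguous blocks, one per rank."""
    base, extra = divmod(n, size)
    blocks, start = [], 0
    for rank in range(size):
        count = base + (1 if rank < extra else 0)
        blocks.append(range(start, start + count))
        start += count
    return blocks

def parallel_style_sum(n, size):
    # Each "rank" computes its partial sum of squares over its own block.
    partials = [sum(i * i for i in block) for block in decompose(n, size)]
    return sum(partials)   # the "reduce" step
```

Because the blocks are independent, each rank's partial sum can run concurrently, which is why doubling the processor count roughly halves the wall-clock time reported above.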
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-10
...; catcher/ processor--40 percent; and motherships--10 percent. Under Sec. 679.20(a)(5)(iii)(B)(2)(i) and (ii... sector, 40 percent to the catcher/processor sector, and 10 percent to the mothership sector. In the.../processor sector will be available for harvest by AFA catcher vessels with catcher/ processor sector...
Processor architecture for airborne SAR systems
NASA Technical Reports Server (NTRS)
Glass, C. M.
1983-01-01
Digital processors for spaceborne imaging radars and application of the technology developed for airborne SAR systems are considered. Transferring algorithms and implementation techniques from airborne to spaceborne SAR processors offers obvious advantages. The following topics are discussed: (1) a quantification of the differences in processing algorithms for airborne and spaceborne SARs; and (2) an overview of three processors for airborne SAR systems.
Moham P. Tiruveedhula; Joseph Fan; Ravi R. Sadasivuni; Surya S. Durbha; David L. Evans
2010-01-01
The accumulation of small diameter trees (SDTs) is becoming a nationwide concern. Forest management practices such as fire suppression and selective cutting of high grade timber have contributed to an overabundance of SDTs in many areas. Alternative value-added utilization of SDTs (for composite wood products and biofuels) has prompted the need to estimate their...
Spectral Unmixing Applied to Desert Soils for the Detection of Sub-Pixel Disturbances
2012-09-01
...and Glazner, 1997). Rocks underlying Panum Crater consist of the granitic and metamorphic batholith associated with the Sierra Nevada. On top of this... technology can be used to detect and characterize surface disturbance both literally (visually) and non-literally (analytically). Non-literal approaches...
Mineral and Lithology Mapping of Drill Core Pulps Using Visible and Infrared Spectrometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, G. R., E-mail: G.Taylor@unsw.edu.au
2000-12-15
A novel approach for using field spectrometry for determining both the mineralogy and the lithology of drill core pulps (powders) is developed and evaluated. The methodology is developed using material from a single drillhole through a mineralized sequence of rocks from central New South Wales. Mineral library spectra are used in linear unmixing routines to determine the mineral abundances in drill core pulps that represent between 1 m and 3 m of core. Comparison with X-Ray Diffraction (XRD) analyses shows that for most major constituents, spectrometry provides an estimate of quantitative mineralogy that is as reliable as that provided by XRD. Confusion between the absorption features of calcite and those of chlorite causes the calcite contents determined by spectrometry to be unreliable. Convex geometry is used to recognize the spectra of those samples that are extreme and are representative of unique lithologies. Linear unmixing is used to determine the abundance of these lithologies in each drillhole sample and these abundances are used to interpret the geology of the drillhole. The interpreted geology agrees well with conventional drillhole logs of the visible geology and photographs of the split core. The methods developed provide a quick and cost-effective way of determining the lithology and alteration mineralogy of drill core pulps.
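The library-based linear unmixing step can be sketched as a nonnegative least-squares fit of library spectra to a measured pulp spectrum (a toy illustration with made-up Gaussian absorption features, not the paper's spectral library or unmixing routine):

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical mineral "library": Gaussian absorption features on a wavelength grid
wl = np.linspace(400, 2500, 200)

def spectrum(center, depth, width):
    return 1.0 - depth * np.exp(-((wl - center) / width) ** 2)

library = np.column_stack([
    spectrum(2200, 0.4, 60),    # stand-in for a clay-like absorption feature
    spectrum(2330, 0.3, 40),    # stand-in for a carbonate-like feature
    spectrum(900, 0.2, 120),    # stand-in for an iron-oxide-like feature
])

abund_true = np.array([0.6, 0.25, 0.15])
pulp = library @ abund_true                 # synthetic measured pulp spectrum
abund_hat, resid = nnls(library, pulp)      # linear unmixing with nonnegativity
```

The nonnegativity constraint is what makes the fitted coefficients interpretable as mineral abundances; the calcite/chlorite confusion noted in the abstract corresponds to two library columns with overlapping absorption features.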
Huang, Kuixian; Luo, Xingzhang
2018-01-01
The purpose of this study is to recognize the contamination characteristics of trace metals in soils and apportion their potential sources in Northern China to provide a scientific basis for soil environment management and pollution control. The data set of metals for 12 elements in surface soil samples was collected. The enrichment factor and geoaccumulation index were used to identify the general geochemical characteristics of trace metals in soils. The UNMIX and positive matrix factorization (PMF) models were comparatively applied to apportion their potential sources. Furthermore, geostatistical tools were used to study the spatial distribution of pollution characteristics and to identify the affected regions of the sources derived from the apportionment models. The soils were contaminated by Cd, Hg, Pb and Zn to varying degrees. Industrial activities, agricultural activities and natural sources were identified as the potential sources determining the contents of trace metals in soils, with contributions of 24.8%–24.9%, 33.3%–37.2% and 38.0%–41.8%, respectively. The slightly different results obtained from UNMIX and PMF might be caused by the estimations of uncertainty and different algorithms within the models. PMID:29474412
The Yearly Variation in Fall-Winter Arctic Winter Vortex Descent
NASA Technical Reports Server (NTRS)
Schoeberl, Mark R.; Newman, Paul A.
1999-01-01
Using the change in HALOE methane profiles from early September to late March, we have estimated the minimum amount of diabatic descent within the polar vortex which takes place during Arctic winter. The year-to-year variations are a result of year-to-year variations in stratospheric wave activity, which (1) modifies the temperature of the vortex and thus the cooling rate; and (2) reduces the apparent descent by mixing high amounts of methane into the vortex. The peak descent amounts from HALOE methane vary from 10 km to 14 km near the arrival altitude of 25 km. Using a diabatic trajectory calculation, we compare forward and backward trajectories over the course of the winter using UKMO assimilated stratospheric data. The forward calculation agrees fairly well with the observed descent. The backward calculation appears to be unable to produce the observed amount of descent, but this is only an apparent effect due to the density decrease in parcels with altitude. Finally, we show the results for unmixed descent experiments, where the parcels are fixed in latitude and longitude and allowed to descend based on the local cooling rate. Unmixed descent is found to always exceed mixed descent, because when normal parcel motion is included, the path-average cooling is always less than the cooling at a fixed polar point.
NASA Technical Reports Server (NTRS)
Lederer, Susan
2017-01-01
NASA's ODPO has recently collected data of unresolved objects at GEO with the 3.8m UKIRT infrared telescope on Mauna Kea and the 1.3m MCAT visible telescope on Ascension Island. Analyses of SWIR data of rocket bodies and HS-376 solar-panel covered buses demonstrate the uniqueness of spectral signatures. Data of 3 classes of rocket bodies show similarities amongst a given class, but distinct differences from one class to another, suggesting that infrared reflectance spectra could effectively be used toward characterizing and constraining potential parent bodies of uncorrelated targets (UCTs). The Optical Measurements Center (OMC) at NASA JSC is designed to collect photometric signatures in the laboratory that can be used for comparison with telescopic data. NASA also has a spectral database of spacecraft materials for use with spectral unmixing models. Spectral unmixing of the HS-376 bus data demonstrates how absorption features and slopes can be used to constrain material characteristics of debris. Broadband photometry likewise can be compared with MCAT data of non-resolved debris images. Similar studies have been applied to IDCSP satellites to demonstrate how color-color photometry can be compared with lab data to constrain bulk materials signatures of spacecraft and debris.
A Gaussian Mixture Model Representation of Endmember Variability in Hyperspectral Unmixing
NASA Astrophysics Data System (ADS)
Zhou, Yuan; Rangarajan, Anand; Gader, Paul D.
2018-05-01
Hyperspectral unmixing while considering endmember variability is usually performed by the normal compositional model (NCM), where the endmembers for each pixel are assumed to be sampled from unimodal Gaussian distributions. However, in real applications, the distribution of a material is often not Gaussian. In this paper, we use Gaussian mixture models (GMM) to represent the endmember variability. We show, given the GMM starting premise, that the distribution of the mixed pixel (under the linear mixing model) is also a GMM (and this is shown from two perspectives). The first perspective originates from the random variable transformation and gives a conditional density function of the pixels given the abundances and GMM parameters. With proper smoothness and sparsity prior constraints on the abundances, the conditional density function leads to a standard maximum a posteriori (MAP) problem which can be solved using generalized expectation maximization. The second perspective originates from marginalizing over the endmembers in the GMM, which provides us with a foundation to solve for the endmembers at each pixel. Hence, our model can not only estimate the abundances and distribution parameters, but also the distinct endmember set for each pixel. We tested the proposed GMM on several synthetic and real datasets, and showed its potential by comparing it to current popular methods.
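The NCM premise that the paper generalizes, namely that a linear mixture of independent Gaussian endmembers is itself Gaussian with mean Σ a_k μ_k and variance Σ a_k² σ_k², can be checked numerically (a single-band Monte Carlo sketch with made-up parameters, not the paper's multiband GMM model):

```python
import numpy as np

rng = np.random.default_rng(1)
a = np.array([0.6, 0.4])     # abundances (sum to one)
mu = np.array([0.2, 0.8])    # endmember means, single spectral band
sig = np.array([0.05, 0.1])  # endmember standard deviations

# Monte Carlo: mixed pixel y = a1*e1 + a2*e2 with Gaussian endmembers
e = rng.normal(mu, sig, size=(200_000, 2))
y = e @ a

# Closed-form moments predicted by the linear mixing model
pred_mean = a @ mu
pred_var = (a ** 2) @ (sig ** 2)
```

The same transformation argument is what the paper extends to GMM endmembers: each combination of Gaussian components mixes into a Gaussian, so the mixed-pixel density becomes a GMM over component combinations.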
NASA Technical Reports Server (NTRS)
Kumar, Uttam; Nemani, Ramakrishna R.; Ganguly, Sangram; Kalia, Subodh; Michaelis, Andrew
2017-01-01
In this work, we use a Fully Constrained Least Squares Subpixel Learning Algorithm to unmix global WELD (Web Enabled Landsat Data) to obtain fractions or abundances of substrate (S), vegetation (V) and dark objects (D) classes. Because of the sheer nature of data and compute needs, we leveraged the NASA Earth Exchange (NEX) high performance computing architecture to optimize and scale our algorithm for large-scale processing. Subsequently, the S-V-D abundance maps were characterized into 4 classes namely, forest, farmland, water and urban areas (with NPP-VIIRS-national polar orbiting partnership visible infrared imaging radiometer suite nighttime lights data) over California, USA using Random Forest classifier. Validation of these land cover maps with NLCD (National Land Cover Database) 2011 products and NAFD (North American Forest Dynamics) static forest cover maps showed that an overall classification accuracy of over 91 percent was achieved, which is a 6 percent improvement in unmixing based classification relative to per-pixel-based classification. As such, abundance maps continue to offer a useful alternative to high-spatial resolution data derived classification maps for forest inventory analysis, multi-class mapping for eco-climatic models and applications, fast multi-temporal trend analysis and for societal and policy-relevant applications needed at the watershed scale.
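A common way to impose the fully constrained conditions (nonnegative abundances summing to one) is to augment a nonnegative least-squares system with a heavily weighted sum-to-one row; the sketch below uses that standard trick with synthetic endmembers (illustrative only; the authors' NEX-scale subpixel learning implementation is not shown in the abstract):

```python
import numpy as np
from scipy.optimize import nnls

def fcls(E, y, delta=1e3):
    """Fully constrained least squares: abundances >= 0 and summing to ~1,
    via the standard sum-to-one row-augmentation trick with weight delta."""
    bands, k = E.shape
    E_aug = np.vstack([E, delta * np.ones((1, k))])
    y_aug = np.append(y, delta)
    a, _ = nnls(E_aug, y_aug)
    return a

# Hypothetical 3-endmember problem: substrate, vegetation, dark-object spectra
rng = np.random.default_rng(7)
E = rng.uniform(0, 1, size=(30, 3))     # 30 synthetic bands, 3 endmembers
a_true = np.array([0.5, 0.3, 0.2])      # S-V-D fractions for one pixel
y = E @ a_true                          # noise-free mixed pixel
a_hat = fcls(E, y)
```

Applied per pixel across a Landsat mosaic, the recovered S-V-D fractions form the abundance maps that the abstract then classifies with a Random Forest.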
NASA Astrophysics Data System (ADS)
Ganguly, S.; Kumar, U.; Nemani, R. R.; Kalia, S.; Michaelis, A.
2017-12-01
In this work, we use a Fully Constrained Least Squares Subpixel Learning Algorithm to unmix global WELD (Web Enabled Landsat Data) to obtain fractions or abundances of substrate (S), vegetation (V) and dark objects (D) classes. Because of the sheer nature of data and compute needs, we leveraged the NASA Earth Exchange (NEX) high performance computing architecture to optimize and scale our algorithm for large-scale processing. Subsequently, the S-V-D abundance maps were characterized into 4 classes namely, forest, farmland, water and urban areas (with NPP-VIIRS - national polar orbiting partnership visible infrared imaging radiometer suite nighttime lights data) over California, USA using Random Forest classifier. Validation of these land cover maps with NLCD (National Land Cover Database) 2011 products and NAFD (North American Forest Dynamics) static forest cover maps showed that an overall classification accuracy of over 91% was achieved, which is a 6% improvement in unmixing based classification relative to per-pixel based classification. As such, abundance maps continue to offer a useful alternative to high-spatial resolution data derived classification maps for forest inventory analysis, multi-class mapping for eco-climatic models and applications, fast multi-temporal trend analysis and for societal and policy-relevant applications needed at the watershed scale.
NASA Astrophysics Data System (ADS)
Ganguly, S.; Kumar, U.; Nemani, R. R.; Kalia, S.; Michaelis, A.
2016-12-01
In this work, we use a Fully Constrained Least Squares Subpixel Learning Algorithm to unmix global WELD (Web Enabled Landsat Data) to obtain fractions or abundances of substrate (S), vegetation (V), and dark objects (D) classes. Because of the sheer volume of data and the associated compute needs, we leveraged the NASA Earth Exchange (NEX) high-performance computing architecture to optimize and scale our algorithm for large-scale processing. Subsequently, the S-V-D abundance maps were characterized into 4 classes, namely forest, farmland, water, and urban areas (with NPP-VIIRS - National Polar-orbiting Partnership Visible Infrared Imaging Radiometer Suite - nighttime lights data) over California, USA using a Random Forest classifier. Validation of these land cover maps with NLCD (National Land Cover Database) 2011 products and NAFD (North American Forest Dynamics) static forest cover maps showed that an overall classification accuracy of over 91% was achieved, a 6% improvement of unmixing-based classification relative to per-pixel-based classification. As such, abundance maps continue to offer a useful alternative to classification maps derived from high-spatial-resolution data for forest inventory analysis, multi-class mapping for eco-climatic models and applications, fast multi-temporal trend analysis, and societal and policy-relevant applications needed at the watershed scale.
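The fully constrained unmixing step described in the two records above can be sketched in a few lines. This is a generic textbook formulation (non-negative least squares with the sum-to-one constraint folded in via a heavily weighted augmentation row), not the authors' NEX implementation, and the endmember spectra here are invented:

```python
# Minimal sketch of fully constrained least squares (FCLS) unmixing for one
# pixel, with hypothetical endmember spectra for substrate (S), vegetation (V)
# and dark objects (D).
import numpy as np
from scipy.optimize import nnls

def fcls_unmix(endmembers, pixel, delta=1e3):
    """endmembers: (bands, k) matrix; pixel: (bands,) spectrum.
    Returns non-negative abundances that (approximately) sum to one."""
    bands, k = endmembers.shape
    # Append a heavily weighted row of ones to softly enforce sum(x) == 1.
    A = np.vstack([endmembers, delta * np.ones((1, k))])
    b = np.concatenate([pixel, [delta]])
    abundances, _ = nnls(A, b)
    return abundances

# Toy 4-band example with made-up S, V, D endmembers.
E = np.array([[0.30, 0.05, 0.02],
              [0.35, 0.08, 0.02],
              [0.40, 0.45, 0.03],
              [0.45, 0.30, 0.03]])
true_abund = np.array([0.6, 0.3, 0.1])
pixel = E @ true_abund
est = fcls_unmix(E, pixel)
print(np.round(est, 3))  # close to [0.6, 0.3, 0.1]
```

In practice this per-pixel solve is applied independently to every pixel, which is what makes the problem embarrassingly parallel and a natural fit for an HPC platform such as NEX.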
Unmixing of spectral components affecting AVIRIS imagery of Tampa Bay
NASA Astrophysics Data System (ADS)
Carder, Kendall L.; Lee, Z. P.; Chen, Robert F.; Davis, Curtiss O.
1993-09-01
According to Kirk's as well as Morel and Gentili's Monte Carlo simulations, the popular simple expression, R approximately equals 0.33 bb/a, relating subsurface irradiance reflectance (R) to the ratio of the backscattering coefficient (bb) to the absorption coefficient (a), is not valid for bb/a > 0.25. This means that it may no longer be valid for values of remote-sensing reflectance (the above-surface ratio of water-leaving radiance to downwelling irradiance) where Rrs > 0.01. Since no simple Rrs expression had been developed for very turbid waters, we developed one based in part on Monte Carlo simulations and empirical adjustments to an Rrs model, and applied it to rather turbid coastal waters near Tampa Bay to evaluate its utility for unmixing the optical components affecting the water-leaving radiance. With the high spectral (10 nm) and spatial (20 m) resolution of Airborne Visible-InfraRed Imaging Spectrometer (AVIRIS) data, the water depth and bottom type were deduced using the model for shallow waters. This research demonstrates the necessity of further research to improve interpretations of scenes with highly variable turbid waters, and it emphasizes the utility of high spectral-resolution data such as AVIRIS for better understanding complicated coastal environments such as the west Florida shelf.
Spectral unmixing of urban land cover using a generic library approach
NASA Astrophysics Data System (ADS)
Degerickx, Jeroen; Iordache, Marian-Daniel; Okujeni, Akpona; Hermy, Martin; van der Linden, Sebastian; Somers, Ben
2016-10-01
Remote sensing based land cover classification in urban areas generally requires the use of subpixel classification algorithms to take into account the high spatial heterogeneity. These spectral unmixing techniques often rely on spectral libraries, i.e. collections of pure material spectra (endmembers, EM), which ideally cover the large EM variability typically present in urban scenes. Despite the advent of several (semi-)automated EM detection algorithms, the collection of such image-specific libraries remains a tedious and time-consuming task. As an alternative, we suggest the use of a generic urban EM library, containing material spectra under varying conditions, acquired from different locations and sensors. This approach requires an efficient EM selection technique, capable of selecting only those spectra relevant for a specific image. In this paper, we evaluate and compare the potential of different existing library pruning algorithms (Iterative Endmember Selection and MUSIC) using simulated hyperspectral (APEX) data of the Brussels metropolitan area. In addition, we develop a new hybrid EM selection method which is shown to be highly efficient in dealing with both image-specific and generic libraries, subsequently yielding more robust land cover classification results compared to existing methods. Future research will include further optimization of the proposed algorithm and additional tests on both simulated and real hyperspectral data.
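One simple way to prune a library down to the spectra relevant for a given image, in the spirit of (though not identical to) the Iterative Endmember Selection approach mentioned above, is greedy forward selection on reconstruction error. All spectra below are synthetic:

```python
# Hypothetical sketch: repeatedly add the library spectrum that most reduces
# the least-squares reconstruction error of the image pixels.
import numpy as np

def greedy_prune(library, pixels, n_select):
    """library: (bands, m) candidate endmembers; pixels: (bands, n) image.
    Returns indices of the selected library columns."""
    selected, remaining = [], list(range(library.shape[1]))
    for _ in range(n_select):
        best_j, best_err = None, np.inf
        for j in remaining:
            E = library[:, selected + [j]]
            coef, *_ = np.linalg.lstsq(E, pixels, rcond=None)
            err = np.linalg.norm(pixels - E @ coef)
            if err < best_err:
                best_j, best_err = j, err
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

# Toy library: two pure spectra (columns 0 and 1), a distractor (column 2),
# and a 50/50 mixture of columns 0 and 1 (column 3).
lib = np.array([[1.0, 0.0, 0.0, 0.5],
                [0.0, 1.0, 0.0, 0.5],
                [0.0, 0.0, 1.0, 0.0]])
pixels = np.array([[0.7, 0.2],
                   [0.3, 0.8],
                   [0.0, 0.0]])  # mixtures of columns 0 and 1 only
sel = greedy_prune(lib, pixels, 2)
E = lib[:, sel]
coef, *_ = np.linalg.lstsq(E, pixels, rcond=None)
print(sel, float(np.linalg.norm(pixels - E @ coef)))
```

Note that the greedy pick is driven purely by fit, so it may select a mixed spectrum rather than the physically pure one; the pruning algorithms the paper evaluates add criteria to guard against exactly this.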
Yes! An object-oriented compiler compiler (YOOCC)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avotins, J.; Mingins, C.; Schmidt, H.
1995-12-31
Grammar-based processor generation is one of the most widely studied areas in language processor construction. However, there have been very few approaches to date that reconcile object-oriented principles, processor generation, and an object-oriented language. Pertinent here also is that currently, developing a processor using the Eiffel Parse libraries requires far too much time to be expended on tasks that can be automated. For these reasons, we have developed YOOCC (Yes! an Object-Oriented Compiler Compiler), which produces a processor framework from a grammar using an enhanced version of the Eiffel Parse libraries, incorporating the ideas hypothesized by Meyer, and Grape and Walden, as well as many others. Various essential changes have been made to the Eiffel Parse libraries. Examples are presented to illustrate the development of a processor using YOOCC, and it is concluded that the Eiffel Parse libraries are now not only an intelligent, but also a productive option for processor construction.
Effect of poor control of film processors on mammographic image quality.
Kimme-Smith, C; Sun, H; Bassett, L W; Gold, R H
1992-11-01
With the increasingly stringent standards of image quality in mammography, film processor quality control is especially important. Current methods are not sufficient for ensuring good processing. The authors used a sensitometer and densitometer system to evaluate the performance of 22 processors at 16 mammographic facilities. Standard sensitometric values of two films were established, and processor performance was assessed for variations from these standards. Developer chemistry of each processor was analyzed and correlated with its sensitometric values. Ten processors were retested, and nine were found to be out of calibration. The developer components of hydroquinone, sulfites, bromide, and alkalinity varied the most, and low concentrations of hydroquinone were associated with lower average gradients at two facilities. Use of the sensitometer and densitometer system helps identify out-of-calibration processors, but further study is needed to correlate sensitometric values with developer component values. The authors believe that present quality control would be improved if sensitometric or other tests could be used to identify developer components that are out of calibration.
Automatic film processors' quality control test in Greek military hospitals.
Lymberis, C; Efstathopoulos, E P; Manetou, A; Poudridis, G
1993-04-01
The two major military radiology installations (Athens, Greece), using a total of 15 automatic film processors, were assessed using the 21-step-wedge method. The results of quality control in all these processors are presented. The parameters measured under actual working conditions were base and fog, contrast, and speed. Base and fog as well as speed displayed large variations, with average values generally higher than acceptable, whilst contrast displayed greater stability. Developer temperature was measured daily during the test and was found to be outside the film manufacturers' recommended limits in nine of the 15 processors. In only one processor did film passing time vary on a day-to-day basis, and this was due to maloperation. The developer pH test was not part of the daily monitoring, being performed every 5 days for each film processor; pH was found to be in the range 9-12, and 10 of the 15 processors presented pH values outside the limits specified by the film manufacturers.
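The step-wedge parameters named in these two quality-control studies (base and fog, speed, contrast) can be computed from the 21 step densities. The density levels used below for the speed and contrast indices are common sensitometric QC conventions, not the exact protocol of either study, and the strip data are synthetic:

```python
# Sketch of 21-step sensitometric evaluation: base+fog, a speed index, and an
# average-gradient contrast index, all derived from the step optical densities.
import numpy as np

def wedge_metrics(od):
    """od: 21 optical densities, step 1 (least exposed) .. step 21 (most)."""
    steps = np.arange(1, 22)
    base_fog = od[0]                       # density of the unexposed step
    # Speed index: interpolated step where OD reaches base+fog + 1.0.
    speed = np.interp(base_fog + 1.0, od, steps)
    # Contrast index: average gradient between base+fog+0.25 and base+fog+2.0.
    lo = np.interp(base_fog + 0.25, od, steps)
    hi = np.interp(base_fog + 2.0, od, steps)
    contrast = (2.0 - 0.25) / (hi - lo)    # density gained per step
    return base_fog, speed, contrast

# Synthetic, monotonically increasing sensitometric strip (logistic curve).
od = 0.18 + 3.2 / (1.0 + np.exp(-(np.arange(1, 22) - 11) / 2.5))
bf, sp, ct = wedge_metrics(od)
print(round(bf, 2), round(sp, 1), round(ct, 2))
```

Tracking these three numbers day to day against baseline values is what flags an out-of-calibration processor before image quality visibly degrades.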
A high-accuracy optical linear algebra processor for finite element applications
NASA Technical Reports Server (NTRS)
Casasent, D.; Taylor, B. K.
1984-01-01
Optical linear processors are computationally efficient computers for solving matrix-matrix and matrix-vector oriented problems. Optical system errors limit their dynamic range to 30-40 dB, which limits their accuracy to 9-12 bits. Large problems, such as the finite element problem in structural mechanics (with tens or hundreds of thousands of variables), which can exploit the speed of optical processors, require the 32-bit accuracy obtainable from digital machines. To obtain this required 32-bit accuracy with an optical processor, the data can be digitally encoded, thereby reducing the dynamic range requirements of the optical system (i.e., decreasing the effect of optical errors on the data) while providing increased accuracy. This report describes a new digitally encoded optical linear algebra processor architecture for solving finite element and banded matrix-vector problems. A linear static plate bending case study is described which quantifies the processor requirements. Multiplication by digital convolution is explained, and the digitally encoded optical processor architecture is advanced.
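The "multiplication by digital convolution" the report explains can be illustrated in a few lines: convolving the digit sequences of two numbers yields the digit sequence of their product before carry propagation, which is exactly the mixed-radix result an optical convolver produces and a digital post-processor then normalizes:

```python
# Multiply two integers by convolving their digit sequences, then propagating
# carries over the mixed-radix result.
import numpy as np

def multiply_by_convolution(x, y, base=10):
    xd = [int(d) for d in str(x)][::-1]        # least-significant digit first
    yd = [int(d) for d in str(y)][::-1]
    mixed = np.convolve(xd, yd)                # digit products, no carries yet
    digits, carry = [], 0
    for m in mixed:                            # propagate carries
        carry, digit = divmod(int(m) + carry, base)
        digits.append(digit)
    while carry:
        carry, digit = divmod(carry, base)
        digits.append(digit)
    return int("".join(str(d) for d in digits[::-1]))

print(multiply_by_convolution(1234, 5678))  # → 7006652
```

The appeal for optics is that the convolution stage needs only low-precision analog products of single digits, so the optical system's limited dynamic range suffices while full digital accuracy is recovered in the carry step.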
Optimal processor assignment for pipeline computations
NASA Technical Reports Server (NTRS)
Nicol, David M.; Simha, Rahul; Choudhury, Alok N.; Narahari, Bhagirath
1991-01-01
The availability of large scale multitasked parallel architectures introduces the following processor assignment problem for pipelined computations. Given a set of tasks and their precedence constraints, along with their experimentally determined individual response times for different processor sizes, find an assignment of processors to tasks. Two objectives are of interest: minimal response time given a throughput requirement, and maximal throughput given a response time requirement. These assignment problems differ considerably from the classical mapping problem, in which several tasks share a processor; instead, it is assumed that a large number of processors are to be assigned to a relatively small number of tasks. Efficient assignment algorithms were developed for different classes of task structures. For a p-processor system and a series-parallel precedence graph with n constituent tasks, an O(np^2) algorithm is provided that finds the optimal assignment for the response time optimization problem; the assignment optimizing throughput under a response time constraint is found in O(np^2 log p) time. Special cases of linear, independent, and tree graphs are also considered.
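The flavor of dynamic program behind such an O(np^2) bound can be sketched for the simplest special case, a linear chain of tasks: given measured response times t[i][q] for task i on q processors, allocate P processors so the total (series) response time is minimal. The table layout and data below are illustrative, not the paper's algorithm:

```python
# DP over (task prefix, processors used): best[i][p] is the minimal total
# response time of the first i tasks using exactly p processors.
def assign_processors(t, P):
    """t[i][q] = response time of task i on q processors (index 0 unused).
    Returns (minimal total response time, processors allocated per task)."""
    n = len(t)
    INF = float("inf")
    best = [[INF] * (P + 1) for _ in range(n + 1)]
    choice = [[0] * (P + 1) for _ in range(n + 1)]
    best[0][0] = 0.0
    for i in range(1, n + 1):
        for p in range(1, P + 1):
            for q in range(1, p + 1):          # give q processors to task i
                cand = best[i - 1][p - q] + t[i - 1][q]
                if cand < best[i][p]:
                    best[i][p] = cand
                    choice[i][p] = q
    alloc, p = [], P                            # recover the assignment
    for i in range(n, 0, -1):
        alloc.append(choice[i][p])
        p -= choice[i][p]
    return best[n][P], alloc[::-1]

# Three tasks whose (made-up) times improve with more processors.
t = [[0, 9.0, 5.0, 4.0, 3.5],
     [0, 6.0, 3.5, 2.5, 2.2],
     [0, 4.0, 2.2, 1.6, 1.4]]
best_time, alloc = assign_processors(t, 4)
print(best_time, alloc)  # → 15.0 [2, 1, 1]
```

The three nested loops give O(nP^2) work, matching the shape of the complexity quoted in the abstract for the chain case.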
NASA Technical Reports Server (NTRS)
Barnes, George H. (Inventor); Lundstrom, Stephen F. (Inventor); Shafer, Philip E. (Inventor)
1983-01-01
A high speed parallel array data processing architecture fashioned under a computational envelope approach includes a data base memory for secondary storage of programs and data, and a plurality of memory modules interconnected to a plurality of processing modules by a connection network of the Omega gender. Programs and data are fed from the data base memory to the plurality of memory modules, from which the programs are fed through the connection network to the array of processors (one copy of each program for each processor). Execution of the programs occurs with the processors operating normally quite independently of each other in a multiprocessing fashion. For data dependent operations and other suitable operations, all processors are instructed to finish one given task or program branch before all are instructed to proceed in parallel processing fashion on the next instruction. Even when functioning in the parallel processing mode, however, the processors are not lock-stepped but execute their own copy of the program individually unless or until another overall processor array synchronization instruction is issued.
Extended performance electric propulsion power processor design study. Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
Biess, J. J.; Inouye, L. Y.; Schoenfeld, A. D.
1977-01-01
Several power processor design concepts were evaluated and compared. Emphasis was placed on a 30-cm ion thruster power processor with a beam supply rating of 2.2 kW to 10 kW. Extensions in power processor performance were defined and were designed in sufficient detail to determine efficiency, component weight, part count, reliability, and thermal control. Preliminary electrical design, mechanical design, and thermal analysis were performed on a 6-kW power transformer for the beam supply. Bi-Mod mechanical, structural, and thermal control configurations were evaluated for the power processor, and preliminary estimates of mechanical weight were determined. A program development plan was formulated that outlines the work breakdown structure for the development, qualification, and fabrication of the power processor flight hardware.
APRON: A Cellular Processor Array Simulation and Hardware Design Tool
NASA Astrophysics Data System (ADS)
Barr, David R. W.; Dudek, Piotr
2009-12-01
We present a software environment for the efficient simulation of cellular processor arrays (CPAs). This software (APRON) is used to explore algorithms that are designed for massively parallel fine-grained processor arrays, topographic multilayer neural networks, vision chips with SIMD processor arrays, and related architectures. The software uses a highly optimised core combined with a flexible compiler to provide the user with tools for the design of new processor array hardware architectures and the emulation of existing devices. We present performance benchmarks for the software processor array implemented on standard commodity microprocessors. APRON can be configured to use additional processing hardware if necessary and can be used as a complete graphical user interface and development environment for new or existing CPA systems, allowing more users to develop algorithms for CPA systems.
Efficient Interconnection Schemes for VLSI and Parallel Computation
1989-08-01
Definition: Let R be a routing network. A set S of wires in R is a (directed) cut if it partitions the network into two sets of processors A and B such that every path from a processor in A to a processor in B contains a wire in S. The capacity cap(S) is the number of wires in the cut. For a set of messages M, define the load load(M, S) of M on a cut S to be the number of messages in M from a processor in A to a processor in B. The load factor
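The definitions in this excerpt translate directly into code. Assuming the network is modeled as a graph of wires and a message is an (origin, destination) pair, the load factor of a cut is simply its load divided by its capacity:

```python
# Compute capacity, load, and load factor of a cut per the excerpt's
# definitions: capacity counts wires crossing the cut; load counts messages
# from a processor in A to a processor in B.
def cut_stats(edges, side_a, messages):
    """edges: iterable of (u, v) wires; side_a: set of processors on one
    side of the cut; messages: iterable of (src, dst) pairs."""
    cut = [(u, v) for (u, v) in edges if (u in side_a) != (v in side_a)]
    capacity = len(cut)
    load = sum(1 for (s, d) in messages if s in side_a and d not in side_a)
    return capacity, load, load / capacity

# 4-processor ring 0-1-2-3-0, cut separating {0, 1} from {2, 3}.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
msgs = [(0, 2), (0, 3), (1, 2), (1, 0)]       # three messages cross the cut
cap, load, factor = cut_stats(edges, {0, 1}, msgs)
print(cap, load, factor)  # → 2 3 1.5
```

A load factor above 1, as here, means the cut is a bottleneck: the messages cannot all be routed across it in a single step regardless of scheduling.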
Hypercluster - Parallel processing for computational mechanics
NASA Technical Reports Server (NTRS)
Blech, Richard A.
1988-01-01
An account is given of the development status, performance capabilities and implications for further development of NASA-Lewis' testbed 'hypercluster' parallel computer network, in which multiple processors communicate through a shared memory. Processors have local as well as shared memory; the hypercluster is expanded in the same manner as the hypercube, with processor clusters replacing the normal single processor node. The NASA-Lewis machine has three nodes with a vector personality and one node with a scalar personality. Each of the vector nodes uses four board-level vector processors, while the scalar node uses four general-purpose microcomputer boards.
2015-06-13
The Berkeley Out-of-Order Machine (BOOM): An Industry-Competitive, Synthesizable, Parameterized RISC-V Processor
Christopher Celio, David Patterson, and Krste Asanović
University of California, Berkeley, California 94720
BOOM is a synthesizable, parameterized, superscalar out-of-order RISC-V core designed to serve as the prototypical baseline processor.
A Medical Language Processor for Two Indo-European Languages
Nhan, Ngo Thanh; Sager, Naomi; Lyman, Margaret; Tick, Leo J.; Borst, François; Su, Yun
1989-01-01
The syntax and semantics of clinical narrative across Indo-European languages are quite similar, making it possible to envision a single medical language processor that can be adapted for different European languages. The Linguistic String Project of New York University is continuing the development of its Medical Language Processor in this direction. The paper describes how the processor operates on English and French.
Performance Modeling of the ADA Rendezvous
1991-10-01
In the queueing network of Figure 2, SERVERTASK can complete only one rendezvous at a time. Thus, the rate that the rendezvous requests are processed at the ... In Network 1, SERVERTASK competes with the traffic tasks of the Server Processor. Each time SERVERTASK gains access to the processor, SERVERTASK completes ... Figure 10. A conceptualization of the algorithm. The SERVERTASK software server of Network 2 ...
A Parallel Algorithm for Contact in a Finite Element Hydrocode
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pierce, Timothy G.
A parallel algorithm is developed for contact/impact of multiple three dimensional bodies undergoing large deformation. As time progresses, the relative positions of contact between the multiple bodies change as collision and sliding occur. The parallel algorithm is capable of tracking these changes and enforcing an impenetrability constraint and momentum transfer across the surfaces in contact. Portions of the various surfaces of the bodies are assigned to the processors of a distributed-memory parallel machine in an arbitrary fashion, known as the primary decomposition. A secondary, dynamic decomposition is utilized to bring opposing sections of the contacting surfaces together on the same processors, so that opposing forces may be balanced and the resultant deformation of the bodies calculated. The secondary decomposition is accomplished and updated using only local communication with a limited subset of neighbor processors. Each processor represents both a domain of the primary decomposition and a domain of the secondary, or contact, decomposition. Thus each processor has four sets of neighbor processors: (a) those processors which represent regions adjacent to it in the primary decomposition, (b) those processors which represent regions adjacent to it in the contact decomposition, (c) those processors which send it the data from which it constructs its contact domain, and (d) those processors to which it sends its primary domain data, from which they construct their contact domains. The latter three of these neighbor sets change dynamically as the simulation progresses. By constraining all communication to these sets of neighbors, all global communication, with its attendant nonscalable performance, is avoided. A set of tests is provided to measure the degree of scalability achieved by this algorithm on up to 1024 processors. Issues related to the operating system of the test platform which lead to some degradation of the results are analyzed. This algorithm has been implemented as the contact capability of the ALE3D multiphysics code, and is currently in production use.
FPGA wavelet processor design using language for instruction-set architectures (LISA)
NASA Astrophysics Data System (ADS)
Meyer-Bäse, Uwe; Vera, Alonzo; Rao, Suhasini; Lenk, Karl; Pattichis, Marios
2007-04-01
The design of a microprocessor is a long, tedious, and error-prone task consisting of typically four design phases: architecture exploration, software design (assembler, linker, loader, profiler), architecture implementation (RTL generation for FPGA or cell-based ASIC), and verification. The Language for Instruction-Set Architectures (LISA) makes it possible to model a microprocessor not only from the instruction set but also from an architecture description, including pipelining behavior, which gives a design and development tool consistency over all levels of the design. To explore the capability of the LISA processor design platform, a.k.a. CoWare Processor Designer, we present in this paper three microprocessor designs that implement an 8/8 wavelet transform processor of the kind typically used in today's FBI fingerprint compression scheme. We have designed a 3-stage pipelined 16-bit RISC processor (NanoBlaze). Although RISC μPs are usually considered "fast" processors due to design concepts like constant instruction word size, deep pipelines, and many general purpose registers, it turns out that DSP operations consume substantial processing time in a RISC processor. In a second step we used design principles from programmable digital signal processors (PDSPs) to improve the throughput of the DWT processor. A multiply-accumulate operation along with indirect addressing were the key to achieving higher throughput. A further improvement is possible with today's FPGA technology. Today's FPGAs offer a large number of embedded array multipliers, and it is now feasible to design a "true" vector processor (TVP). A multiplication of two vectors can be done in just one clock cycle with our TVP, and a complete scalar product in two clock cycles. Code profiling and Xilinx FPGA ISE synthesis results are provided that demonstrate the essential improvement that a TVP has compared with traditional RISC or PDSP designs.
NASA Astrophysics Data System (ADS)
OMEGA Science Team; Combe, J.-Ph.; Le Mouélic, S.; Sotin, C.; Gendrin, A.; Mustard, J. F.; Le Deit, L.; Launeau, P.; Bibring, J.-P.; Gondet, B.; Langevin, Y.; Pinet, P.; OMEGA Science Team
2008-05-01
The mineralogical composition of the Martian surface is investigated by a Multiple-Endmember Linear Spectral Unmixing Model (MELSUM) of the Observatoire pour la Minéralogie, l'Eau, les Glaces et l'Activité (OMEGA) imaging spectrometer onboard Mars Express. OMEGA has fully covered the surface of the red planet at medium to low resolution (2-4 km per pixel). Several areas have been imaged at a resolution up to 300 m per pixel. One difficulty in the data processing is to extract the mineralogical composition, since rocks are mixtures of several components. MELSUM is an algorithm that selects the best linear combination of spectra among the families of minerals available in a reference library. The best fit of the observed spectrum on each pixel is calculated by the same unmixing equation used in the classical Spectral Mixture Analysis (SMA). This study shows the importance of the choice of the input library, which contains in our case 24 laboratory spectra (endmembers) of minerals that cover the diversity of the mineral families that may be found on the Martian surface. The analysis is restricted to the 1.0-2.5 μm wavelength range. Grain size variations and atmospheric scattering by aerosols induce changes in overall albedo level and continuum slopes. Synthetic flat and pure slope spectra have therefore been included in the input mineral spectral endmembers library in order to take these effects into account. The selection process for the endmembers is a systematic exploration of whole set of combinations of four components plus the straight line spectra. When negative coefficients occur, the results are discarded. This strategy is successfully tested on the terrestrial Cuprite site (Nevada, USA), for which extensive ground observations exist. It is then applied to different areas on Mars including Syrtis Major, Aram Chaos and Olympia Undae near the North Polar Cap. 
MELSUM on Syrtis Major reveals a region dominated by mafic minerals, with the oldest crustal regions composed of a mixture of low-calcium pyroxenes (LCPs, orthopyroxenes (OPx)) and high-calcium pyroxenes (HCPs, clinopyroxenes (CPx)). The Syrtis volcanic edifice appears depleted in LCP (OPx) and enriched in HCP (CPx), which is consistent with materials produced by a lower degree of partial fusion at an age younger than the surrounding crust. Strong olivine signatures are found between the two calderas Nili Patera and Meroe Patera and in Nili Fossae. A strong signature of iron oxides is found within Aram Chaos, with a spatial distribution also consistent with Thermal Emission Spectrometer (TES) data. Gypsum is unambiguously detected in the northern polar region, in agreement with the study of Langevin et al. [2005. Sulfates in the north polar region of Mars detected by OMEGA/Mars Express. Science 307(5715), 1584-1586]. Our results show that linear spectral unmixing provides good first-order results in a variety of mineralogical contexts, and can therefore confidently be used on a wider scale to analyze the complete archive of OMEGA data.
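The selection loop described in this record, trying every combination of four endmembers plus flat and sloped continuum spectra, solving the linear mixing equation, and discarding solutions with negative coefficients, can be sketched as follows. The library here is random stand-in data, not OMEGA spectra:

```python
# MELSUM-style per-pixel search: exhaustively test 4-endmember combinations
# (plus flat and slope continuum terms), reject fits with negative mineral
# coefficients, and keep the combination with the lowest RMS residual.
import itertools
import numpy as np

def melsum_pixel(spectrum, library, wavelengths):
    bands = len(spectrum)
    flat = np.ones(bands)
    slope = (wavelengths - wavelengths.min()) / np.ptp(wavelengths)
    best = None
    for combo in itertools.combinations(range(library.shape[1]), 4):
        A = np.column_stack([library[:, combo], flat, slope])
        coef, *_ = np.linalg.lstsq(A, spectrum, rcond=None)
        if np.any(coef[:4] < 0):          # discard negative mineral fractions
            continue
        rms = float(np.sqrt(np.mean((spectrum - A @ coef) ** 2)))
        if best is None or rms < best[0]:
            best = (rms, combo, coef)
    return best

rng = np.random.default_rng(1)
wl = np.linspace(1.0, 2.5, 30)            # microns, as in the OMEGA range used
lib = rng.random((30, 8))                 # 8 made-up "mineral" spectra
slope_true = (wl - wl.min()) / np.ptp(wl)
spectrum = lib[:, [1, 4, 6, 7]] @ np.array([0.4, 0.3, 0.2, 0.1]) + 0.05 * slope_true
rms, combo, coef = melsum_pixel(spectrum, lib, wl)
print(combo, rms < 1e-8)
```

With 24 real endmembers the combinatorial search is much larger (24 choose 4 = 10626 fits per pixel), which is why restricting the library to plausible mineral families matters so much in practice.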
Automobile Crash Sensor Signal Processor
DOT National Transportation Integrated Search
1973-11-01
The crash sensor signal processor described interfaces between an automobile-installed doppler radar and an air bag activating solenoid or equivalent electromechanical device. The processor utilizes both digital and analog techniques to produce an ou...
NASA Technical Reports Server (NTRS)
Srinivasan, J.; Farrington, A.; Gray, A.
2001-01-01
They present an overview of long-life reconfigurable processor technologies and of a specific architecture for implementing a software reconfigurable (software-defined) network processor for space applications.
Evaluating local indirect addressing in SIMD processors
NASA Technical Reports Server (NTRS)
Middleton, David; Tomboulian, Sherryl
1989-01-01
In the design of parallel computers, there exists a tradeoff between the number and power of individual processors. The single instruction stream, multiple data stream (SIMD) model of parallel computers lies at one extreme of the resulting spectrum. The available hardware resources are devoted to creating the largest possible number of processors, and consequently each individual processor must use the fewest possible resources. Disagreement exists as to whether SIMD processors should be able to generate addresses individually into their local data memory, or all processors should access the same address. The tradeoff is examined between the increased capability and the reduced number of processors that occurs in this single instruction stream, multiple, locally addressed, data (SIMLAD) model. Factors are assembled that affect this design choice, and the SIMLAD model is compared with the bare SIMD and the MIMD models.
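The distinction the abstract draws can be made concrete with a toy simulation: under pure SIMD every processing element reads the same broadcast local-memory address, while under the SIMLAD model each element supplies its own address (a per-PE gather). The array shapes and values below are purely illustrative:

```python
# Toy model of per-PE local memory: rows are processing elements (PEs),
# columns are words of each PE's local memory.
import numpy as np

local_mem = np.array([[10, 11, 12, 13],
                      [20, 21, 22, 23],
                      [30, 31, 32, 33]])   # 3 PEs, 4 words each

# SIMD: one broadcast address, identical for all PEs.
addr = 2
simd_read = local_mem[:, addr]             # every PE reads its word 2

# SIMLAD: each PE computes its own address into its own memory (a gather).
addrs = np.array([3, 0, 1])
simlad_read = local_mem[np.arange(3), addrs]

print(simd_read, simlad_read)  # → [12 22 32] [13 20 31]
```

Table lookups and pointer-chasing style algorithms need the gather form; the design question the paper weighs is whether that capability is worth the extra per-PE hardware it costs.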
WATERLOOP V2/64: A highly parallel machine for numerical computation
NASA Astrophysics Data System (ADS)
Ostlund, Neil S.
1985-07-01
Current technological trends suggest that the high performance scientific machines of the future are very likely to consist of a large number (greater than 1024) of processors connected and communicating with each other in some as yet undetermined manner. Such an assembly of processors should behave as a single machine in obtaining numerical solutions to scientific problems. However, the appropriate way of organizing both the hardware and software of such an assembly of processors is an unsolved and active area of research. It is particularly important to minimize the organizational overhead of interprocessor communication, global synchronization, and contention for shared resources if the performance of a large number (n) of processors is to be anything like the desirable n times the performance of a single processor. In many situations, adding a processor actually decreases the performance of the overall system, since the extra organizational overhead is larger than the extra processing power added. The systolic loop architecture is a new multiple processor architecture which attempts a solution to the problem of how to organize a large number of asynchronous processors into an effective computational system while minimizing the organizational overhead. This paper gives a brief overview of the basic systolic loop architecture, systolic loop algorithms for numerical computation, and a 64-processor implementation of the architecture, WATERLOOP V2/64, that is being used as a testbed for exploring the hardware, software, and algorithmic aspects of the architecture.
Multiprocessing on supercomputers for computational aerodynamics
NASA Technical Reports Server (NTRS)
Yarrow, Maurice; Mehta, Unmeel B.
1990-01-01
Very little use is made of multiple processors available on current supercomputers (computers with a theoretical peak performance capability equal to 100 MFLOPS or more) in computational aerodynamics to significantly improve turnaround time. The productivity of a computer user is directly related to this turnaround time. In a time-sharing environment, the improvement in this speed is achieved when multiple processors are used efficiently to execute an algorithm. The concept of multiple instructions and multiple data (MIMD) through multi-tasking is applied via a strategy which requires relatively minor modifications to an existing code for a single processor. Essentially, this approach maps the available memory to multiple processors, exploiting the C-FORTRAN-Unix interface. The existing single processor code is mapped without the need for developing a new algorithm. The procedure for building a code utilizing this approach is automated with the Unix stream editor. As a demonstration of this approach, a Multiple Processor Multiple Grid (MPMG) code is developed. It is capable of using nine processors, and can be easily extended to a larger number of processors. This code solves the three-dimensional, Reynolds-averaged, thin-layer and slender-layer Navier-Stokes equations with an implicit, approximately factored and diagonalized method. The solver is applied to a generic oblique-wing aircraft problem on a four-processor Cray-2 computer. A tricubic interpolation scheme is developed to increase the accuracy of coupling of overlapped grids. For the oblique-wing aircraft problem, a speedup of two in elapsed (turnaround) time is observed in a saturated time-sharing environment.
Database for LDV Signal Processor Performance Analysis
NASA Technical Reports Server (NTRS)
Baker, Glenn D.; Murphy, R. Jay; Meyers, James F.
1989-01-01
A comparative and quantitative analysis of various laser velocimeter signal processors is difficult because standards for characterizing signal bursts have not been established. This leaves the researcher to select a signal processor based only on manufacturers' claims without the benefit of direct comparison. The present paper proposes the use of a database of digitized signal bursts obtained from a laser velocimeter under various configurations as a method for directly comparing signal processors.
The Use of a Microcomputer Based Array Processor for Real Time Laser Velocimeter Data Processing
NASA Technical Reports Server (NTRS)
Meyers, James F.
1990-01-01
The application of an array processor to laser velocimeter data processing is presented. The hardware is described along with the method of parallel programming required by the array processor. A portion of the data processing program is described in detail. The increase in computational speed of a microcomputer equipped with an array processor is illustrated by comparative testing with a minicomputer.
Contextual classification on a CDC Flexible Processor system. [for photomapped remote sensing data
NASA Technical Reports Server (NTRS)
Smith, B. W.; Siegel, H. J.; Swain, P. H.
1981-01-01
A potential hardware organization for the Flexible Processor Array is presented. An algorithm that implements a contextual classifier for remote sensing data analysis is given, along with uniprocessor classification algorithms. The Flexible Processor algorithm is provided, as are simulated timings for contextual classifiers run on the Flexible Processor Array and another system. The timings are analyzed for context neighborhoods of sizes three and nine.
Effect of processor temperature on film dosimetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Srivastava, Shiv P.; Das, Indra J., E-mail: idas@iupui.edu
2012-07-01
Optical density (OD) of a radiographic film plays an important role in radiation dosimetry, which depends on various parameters, including beam energy, depth, field size, film batch, dose, dose rate, air-film interface, postexposure processing time, and temperature of the processor. Most of these parameters have been studied for Kodak XV and extended dose range (EDR) films used in radiation oncology. There is very limited information on processor temperature, which is investigated in this study. Multiple XV and EDR films were exposed in the reference condition (d_max, 10 × 10 cm², 100 cm) to a given dose. An automatic film processor (X-Omat 5000) was used for processing films. The temperature of the processor was adjusted manually with increasing temperature. At each temperature, a set of films was processed to evaluate OD at a given dose. For both films, OD is a linear function of processor temperature in the range of 29.4-40.6 °C (85-105 °F) for various dose ranges. The changes in processor temperature are directly related to the dose by a quadratic function. A simple linear equation is provided for the changes in OD vs. processor temperature, which could be used for correcting dose in radiation dosimetry when film is used.
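The kind of correction the reported linear OD-temperature relation enables can be sketched as below. The slope and reference temperature are invented placeholders, not the paper's fitted coefficients; in practice both would be fitted from films processed across the studied temperature range.

```python
def corrected_od(od_measured, temp_c, temp_ref=35.0, slope=0.01):
    """Remove an assumed linear processor-temperature trend from a measured OD.

    slope (OD change per degree C) and temp_ref are hypothetical values
    chosen for illustration only; they stand in for a calibration fitted
    over the 29.4-40.6 degree C range reported in the study.
    """
    return od_measured - slope * (temp_c - temp_ref)
```

With these placeholder values, a film processed 5 °C above the reference temperature would have its measured OD reduced by 0.05 before dose conversion.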
Cargo Movement Operations System (CMOS). Requirements Traceability Matrix Increment II
1990-05-17
NO [ ] COMMENT DISPOSITION: ACCEPT [ ] REJECT [ ] COMMENT STATUS: OPEN [ ] CLOSED [ ]
Cmnt No. / Page No. / Paragraph Number / Comment:
1. C-i, SS0-3: Change "workstation" to "processor".
2. C-2, SS0009 and SS0016: Change "workstation" to "processor".
3. C-6, SS0032 and SS0035: Change "workstation" to "processor".
4. C-9, SS0063: Add comma after "e.g."
5. C-i, SS0082: Change "workstation" to "processor".
6. C-17, SS0131 and SS0132: Change "workstation" to "processor".
7. C-28, SS0242: Change "workstation"
A high performance linear equation solver on the VPP500 parallel supercomputer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakanishi, Makoto; Ina, Hiroshi; Miura, Kenichi
1994-12-31
This paper describes the implementation of two high-performance linear equation solvers developed for the Fujitsu VPP500, a distributed-memory parallel supercomputer system. The solvers take advantage of the key architectural features of the VPP500: (1) scalability to an arbitrary number of processors, up to 222; (2) flexible data transfer among processors provided by a crossbar interconnection network; (3) vector processing capability on each processor; and (4) overlapped computation and transfer. The general linear equation solver, based on the blocked LU decomposition method, achieves 120.0 GFLOPS with 100 processors on the LINPACK Highly Parallel Computing benchmark.
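For reference, the sequential elimination at the core of an LU-based solver can be sketched as follows. This is only the textbook kernel: the VPP500 implementation additionally blocks the matrix into panels, pivots, vectorizes the panel updates, and distributes panels across processors, none of which this sketch models.

```python
def lu_decompose(a):
    """In-place Doolittle LU factorization of a square matrix (no pivoting).

    After the call, the strict lower triangle of `a` holds L (unit diagonal
    implied) and the upper triangle holds U. A production solver would block
    this loop nest and pivot for stability.
    """
    n = len(a)
    for k in range(n):
        for i in range(k + 1, n):
            a[i][k] /= a[k][k]              # multiplier, stored in L
            for j in range(k + 1, n):
                a[i][j] -= a[i][k] * a[k][j]  # rank-1 update of trailing block
    return a
```

Blocking restructures the same arithmetic into matrix-matrix panel updates, which is what lets the VPP500's vector pipelines and crossbar network run near peak.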
Baseband processor development for the Advanced Communications Satellite Program
NASA Technical Reports Server (NTRS)
Moat, D.; Sabourin, D.; Stilwell, J.; Mccallister, R.; Borota, M.
1982-01-01
An onboard-baseband-processor concept for a satellite-switched time-division-multiple-access (SS-TDMA) communication system was developed for NASA Lewis Research Center. The baseband processor routes and controls traffic on an individual message basis while providing significant advantages in improved link margins and system flexibility. Key technology developments required to prove the flight readiness of the baseband-processor design are being verified in a baseband-processor proof-of-concept model. These technology developments include serial MSK modems, a Clos-type baseband routing switch, a single-chip CMOS maximum-likelihood convolutional decoder, and custom LSI implementation of high-speed, low-power ECL building blocks.
The software system development for the TAMU real-time fan beam scatterometer data processors
NASA Technical Reports Server (NTRS)
Clark, B. V.; Jean, B. R.
1980-01-01
A software package was designed and written to process, in real time, any one quadrature channel pair of radar scatterometer signals from the NASA L- or C-band radar scatterometer systems. The software was successfully tested in the C-band processor breadboard hardware using recorded radar and NERDAS (NASA Earth Resources Data Annotation System) signals as the input data sources. The processor development program and the overall processor theory of operation and design are described. The real-time processor software system is documented, and the results of the laboratory software tests and recommendations for the efficient application of the data processing capabilities are presented.
A digital retina-like low-level vision processor.
Mertoguno, S; Bourbakis, N G
2003-01-01
This correspondence presents the basic design and the simulation of a low-level multilayer vision processor that emulates, to some degree, the functional behavior of a human retina. This retina-like multilayer processor is the lower part of an autonomous self-organized vision system, called Kydon, that could be used by visually impaired people with a damaged visual cerebral cortex. The Kydon vision system, however, is not presented in this paper. The retina-like processor consists of four major layers, each of which is an array processor based on hexagonal, autonomous processing elements that perform a certain set of low-level vision tasks, such as smoothing and light adaptation, edge detection, segmentation, line recognition, and region-graph generation. At each layer, the array processor is a 2D array of k × m identical, hexagonal, autonomous cells that simultaneously execute certain low-level vision tasks. The hardware design, the transistor-level simulation of the processing elements (PEs) of the retina-like processor, and its simulated functionality with illustrative examples are provided in this paper.
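A toy, square-grid analogue of the first two layer tasks (smoothing, then edge detection) is sketched below. The actual processor uses hexagonal autonomous cells executing in parallel at the transistor level; this sequential sketch only illustrates what those tasks compute, with an invented threshold.

```python
def smooth(img):
    """3x3 mean filter over the interior of a 2D grid of floats."""
    n, m = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            out[i][j] = sum(img[i + di][j + dj]
                            for di in (-1, 0, 1)
                            for dj in (-1, 0, 1)) / 9.0
    return out

def edges(img, threshold=0.5):
    """Mark pixels whose backward horizontal or vertical difference
    exceeds threshold (threshold is an arbitrary illustrative value)."""
    n, m = len(img), len(img[0])
    return [[1 if i and j and (abs(img[i][j] - img[i - 1][j]) > threshold
                               or abs(img[i][j] - img[i][j - 1]) > threshold)
             else 0
             for j in range(m)]
            for i in range(n)]
```

In the retina-like processor, every cell would evaluate its neighborhood simultaneously, so a whole layer's pass costs one parallel step rather than the nested loops shown here.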
Simulation of a master-slave event set processor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Comfort, J.C.
1984-03-01
Event set manipulation may consume a considerable amount of the computation time spent in performing a discrete-event simulation. One way of minimizing this time is to allow event set processing to proceed in parallel with the remainder of the simulation computation. The paper describes a multiprocessor simulation computer in which all non-event-set processing is performed by the principal processor (called the host). Event set processing is coordinated by a front-end processor (the master) and actually performed by several other functionally identical processors (the slaves). A trace-driven simulation program modeling this system was constructed and run with trace output taken from two different simulation programs. Output from this simulation suggests that a significant reduction in run time may be realized by this approach. Sensitivity analysis was performed on the significant system parameters (number of slave processors, relative processor speeds, and interprocessor communication times). A comparison between actual and simulated run times for a one-processor system was used to assist in validating the simulation.
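The event-set operations being offloaded, inserting a future event and removing the earliest one, can be sketched sequentially with a binary heap. The paper's contribution is servicing these operations on the master/slave processors concurrently with the host's other work, which this single-threaded sketch does not model.

```python
import heapq

class EventSet:
    """Time-ordered pending-event set for a discrete-event simulation."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker so equal-time events pop in FIFO order

    def schedule(self, time, event):
        """Insert an event to occur at the given simulated time."""
        heapq.heappush(self._heap, (time, self._seq, event))
        self._seq += 1

    def next_event(self):
        """Remove and return (time, event) with the smallest timestamp."""
        time, _, event = heapq.heappop(self._heap)
        return time, event
```

In the architecture described above, `schedule` and `next_event` would be requests sent to the master, which farms the underlying search and insertion work out to the slaves while the host continues with non-event-set computation.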
DFT algorithms for bit-serial GaAs array processor architectures
NASA Technical Reports Server (NTRS)
Mcmillan, Gary B.
1988-01-01
Systems and Processes Engineering Corporation (SPEC) has developed an innovative array processor architecture for computing Fourier transforms and other commonly used signal processing algorithms. This architecture is designed to extract the highest possible array performance from state-of-the-art GaAs technology. SPEC's architectural design includes a high performance RISC processor implemented in GaAs, along with a Floating Point Coprocessor and a unique Array Communications Coprocessor, also implemented in GaAs technology. Together, these data processors represent the latest in technology, both from an architectural and implementation viewpoint. SPEC has examined numerous algorithms and parallel processing architectures to determine the optimum array processor architecture. SPEC has developed an array processor architecture with integral communications ability to provide maximum node connectivity. The Array Communications Coprocessor embeds communications operations directly in the core of the processor architecture. A Floating Point Coprocessor architecture has been defined that utilizes Bit-Serial arithmetic units, operating at very high frequency, to perform floating point operations. These Bit-Serial devices reduce the device integration level and complexity to a level compatible with state-of-the-art GaAs device technology.
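As a point of reference for the transform the architecture targets, a naive scalar DFT is shown below. It says nothing about SPEC's bit-serial GaAs implementation; it only makes explicit what is being computed by the array.

```python
import cmath

def dft(x):
    """Naive O(n^2) discrete Fourier transform of a sequence of numbers.

    X[k] = sum_t x[t] * exp(-2*pi*i*k*t/n). An array processor decomposes
    this sum (typically as an FFT) across nodes; here it is evaluated
    directly for clarity.
    """
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]
```

The communications coprocessor described above matters precisely because a parallel decomposition of this sum requires each node to exchange partial results with many others between stages.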
Mechanically verified hardware implementing an 8-bit parallel IO Byzantine agreement processor
NASA Technical Reports Server (NTRS)
Moore, J. Strother
1992-01-01
Consider a network of four processors that use the Oral Messages (Byzantine Generals) algorithm of Pease, Shostak, and Lamport to achieve agreement in the presence of faults. Bevier and Young have published a functional description of a single processor that, when interconnected appropriately with three identical others, implements this network under the assumption that the four processors step in synchrony. By formalizing the original work of Pease et al., Bevier and Young mechanically proved that such a network achieves fault tolerance. We develop, formalize, and discuss a hardware design that has been mechanically proven to implement their processor. In particular, we formally define mapping functions from the abstract state space of the Bevier-Young processor to a concrete state space of a hardware module and state a theorem expressing the claim that the hardware correctly implements the processor. We briefly discuss the Brock-Hunt Formal Hardware Description Language, which permits designs both to be proved correct with the Boyer-Moore theorem prover and to be expressed in a commercially supported hardware description language for additional electrical analysis and layout. We briefly describe our implementation.
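A compact functional model of the Oral Messages algorithm OM(m) that such a network implements can be sketched as follows. The fault model here is a deliberate simplification for illustration: a faulty sender always substitutes one fixed value, whereas real Byzantine faults may behave arbitrarily, and none of the mechanically verified hardware detail is captured.

```python
from collections import Counter

def om(m, commander, value, lieutenants, liars):
    """Decide values for each lieutenant under Oral Messages OM(m).

    `liars` maps a faulty sender to the fixed false value it transmits;
    loyal senders relay faithfully. Returns {lieutenant: decided value}.
    """
    # Step 1: the commander sends its value to every lieutenant.
    received = {p: liars.get(commander, value) for p in lieutenants}
    if m == 0:
        return received
    # Step 2: each lieutenant relays what it received via OM(m-1),
    # then takes a majority over its own copy and the relayed copies.
    decided = {}
    for p in lieutenants:
        votes = [received[p]]
        for q in lieutenants:
            if q == p:
                continue
            sub = om(m - 1, q, received[q],
                     [r for r in lieutenants if r != q], liars)
            votes.append(sub[p])
        decided[p] = Counter(votes).most_common(1)[0][0]
    return decided
```

With four processors and one traitor (the case treated above), OM(1) lets the loyal lieutenants agree on the commander's order, and they still agree with each other even when the commander itself is the traitor.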