Why Do We Need a Vapor Power Cycle?
We know that the Carnot cycle is the most efficient cycle operating between two specified temperature limits. However, the Carnot cycle is not a suitable model for steam power cycles because:
- The turbine has to handle steam with low quality, which causes erosion and wear of the turbine blades.
- It is impractical to design a compressor that handles a two-phase mixture.
- It is difficult to control the condensation process precisely enough to end up with the desired quality at state 4.
Other issues include: isentropic compression to extremely high pressure and isothermal heat transfer at variable pressures. Thus, the Carnot cycle cannot be approximated in actual devices and is not a realistic model for vapor power cycles.
Ideal Rankine Cycle
The Rankine cycle is the ideal cycle for vapor power plants; it includes the following four reversible processes:
1-2: Isentropic compression: Water enters the pump at state 1 as a saturated liquid and is compressed isentropically to the operating pressure of the boiler.
2-3: Constant-pressure heat addition: Saturated water enters the boiler and leaves it as superheated vapor at state 3.
3-4: Isentropic expansion: Superheated vapor expands isentropically in the turbine and produces work.
4-1: Constant-pressure heat rejection: High-quality steam is condensed in the condenser.
Energy Analysis for the Cycle
All four components of the Rankine cycle are steady-state steady-flow devices. The potential and kinetic energy effects can be neglected. The first law per unit mass of steam can be written as:
Pump (q = 0):       w_pump,in = h2 − h1
Boiler (w = 0):     q_in = h3 − h2
Turbine (q = 0):    w_turbine,out = h3 − h4
Condenser (w = 0):  q_out = h4 − h1
The thermal efficiency of the cycle is determined from:
η_th = w_net / q_in = 1 − q_out / q_in
w_net = q_in − q_out = w_turbine,out − w_pump,in
If the working fluid is treated as incompressible, the pump work input becomes:
w_pump,in = h2 − h1 = v(P2 − P1)
where h1 = hf@P1 and v = v1 = vf@P1.
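The energy analysis above can be sketched numerically. The enthalpy values below are illustrative, roughly corresponding to a 3 MPa boiler / 75 kPa condenser cycle; in practice they must be read from steam tables.

```python
# Sketch of the Rankine-cycle energy analysis, per unit mass of steam.
# Enthalpies in kJ/kg, pressures in kPa, specific volume in m^3/kg.

def rankine(h1, h3, h4, v1, P1, P2):
    """Return (w_net, q_in, eta_th) for the ideal Rankine cycle."""
    w_pump = v1 * (P2 - P1)      # incompressible-liquid pump work
    h2 = h1 + w_pump             # state after isentropic compression
    q_in = h3 - h2               # boiler heat addition
    w_turbine = h3 - h4          # isentropic turbine work
    w_net = w_turbine - w_pump
    return w_net, q_in, w_net / q_in

# Illustrative states (assumed, not from this text):
# h1 = hf at 75 kPa, h3 = superheated vapor at 3 MPa, h4 = turbine exit.
w_net, q_in, eta = rankine(h1=384.4, h3=3116.1, h4=2403.0,
                           v1=0.001037, P1=75.0, P2=3000.0)
print(f"w_net = {w_net:.1f} kJ/kg, eta_th = {eta:.3f}")
```

The pump work (about 3 kJ/kg) is tiny compared with the turbine work (about 713 kJ/kg), which is exactly why a pump handling liquid is so much cheaper than the compressor a Carnot cycle would need.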
Deviation of Actual Vapor Power Cycle from Ideal Cycle
As a result of irreversibilities in various components, such as fluid friction and heat loss to the surroundings, the actual cycle deviates from the ideal Rankine cycle. The deviations of actual pumps and turbines from isentropic ones can be accounted for by isentropic efficiencies, defined as:
η_pump = w_s / w_a = (h2s − h1) / (h2a − h1)
η_turbine = w_a / w_s = (h3 − h4a) / (h3 − h4s)
Increasing the Efficiency of Rankine Cycle
Recall that the thermal efficiency improves as the average temperature of heat addition rises and the average temperature of heat rejection falls:
η_th ≈ 1 − T_L,avg / T_H,avg
That is, to increase the efficiency one should increase the average temperature at which heat is transferred to the working fluid in the boiler, and/or decrease the average temperature at which heat is rejected from the working fluid in the condenser.
1. Decreasing the Condenser Pressure (Lower T_L)
Lowering the condenser pressure increases the area enclosed by the cycle on a T-s diagram, which indicates that the net work increases; thus the thermal efficiency of the cycle increases.
The condenser pressure cannot be lowered below the saturation pressure corresponding to the temperature of the cooling medium. We are generally limited by the temperature of the available thermal reservoir, such as a lake or river. Allowing a temperature difference of 10°C for effective heat transfer in the condenser, a lake at 15°C gives a minimum condensation temperature of 15°C + 10°C = 25°C.
The steam saturation pressure (the condenser pressure) is then P_sat ≈ 3.2 kPa.
2. Superheating the Steam to High Temperatures (Increase T_H)
Superheating the steam increases the net work output and the efficiency of the cycle. It also decreases the moisture content of the steam at the turbine exit. The temperature to which steam can be superheated is limited by metallurgical considerations (~620°C).
3. Increasing the Boiler Pressure (Increase T_H)
Increasing the operating pressure of the boiler leads to an increase in the temperature at which heat is transferred to the steam and thus raises the efficiency of the cycle. Note that for a fixed turbine inlet temperature, the cycle shifts to the left and the moisture content of the steam at the turbine exit increases. This undesirable side effect can be corrected by reheating the steam.
The Ideal Reheat Rankine Cycle
To take advantage of the increased efficiencies at higher boiler pressure without facing the excessive moisture at the final stages of the turbine, reheating is used. In the ideal reheating cycle, the expansion process takes place in two stages, i.e., the high-pressure and low-pressure turbines.
The total heat input and total turbine work output for a reheat cycle become:
q_in = q_primary + q_reheat = (h3 − h2) + (h5 − h4)
w_turbine,out = w_HP,turbine + w_LP,turbine = (h3 − h4) + (h5 − h6)
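The reheat bookkeeping above can be checked with a short sketch. The enthalpies are hypothetical placeholders (not taken from steam tables), and the state numbers follow the equations in the text.

```python
# Sketch of the ideal reheat Rankine cycle energy balance (kJ/kg).

def reheat_cycle(h2, h3, h4, h5, h6, w_pump):
    """Return thermal efficiency of a single-reheat cycle."""
    q_in = (h3 - h2) + (h5 - h4)     # primary + reheat heat addition
    w_turb = (h3 - h4) + (h5 - h6)   # HP-turbine + LP-turbine work
    w_net = w_turb - w_pump
    return w_net / q_in

# Illustrative values only (assumed):
eta = reheat_cycle(h2=260.0, h3=3375.0, h4=2900.0,
                   h5=3450.0, h6=2335.0, w_pump=15.0)
print(f"eta_th = {eta:.3f}")
```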
The incorporation of the single reheat in a modern power plant improves the cycle efficiency by 4 to 5 percent by increasing the average temperature at which heat is transferred to the steam.
The Ideal Regenerative Rankine Cycle
The regeneration process in steam power plants is accomplished by extracting (or bleeding) steam from the turbine at various stages and feeding it into a heat exchanger where the feedwater is heated. These heat exchangers are called regenerators or feedwater heaters (FWH). A FWH also helps remove the air that leaks in at the condenser (deaerating the feedwater).
There are two types of FWH’s, open and closed.
Open (Direct‐Contact) Feedwater Heaters
An open FWH is basically a mixing chamber where the steam extracted from the turbine mixes with the feedwater exiting the pump. Ideally, the mixture leaves the heater as a saturated liquid at the heater pressure.
Using Fig. 8, the heat and work interactions of a regenerative Rankine cycle with one FWH can be expressed per unit mass of steam flowing through the boiler as:
q_in = h5 − h4
q_out = (1 − y)(h7 − h1)
w_turbine,out = (h5 − h6) + (1 − y)(h6 − h7)
w_pump,in = (1 − y) w_pump,I + w_pump,II
y = ṁ6 / ṁ5 (fraction of steam extracted)
w_pump,I = v1(P2 − P1)
w_pump,II = v3(P4 − P3)
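The equations above can be sketched for one open FWH. The extraction fraction y follows from an energy balance on the heater, y·h6 + (1 − y)·h2 = h3, which is standard but not written out in the text; all enthalpies below are hypothetical, not steam-table values.

```python
# Sketch of a regenerative Rankine cycle with one open FWH (kJ/kg).

def regen_cycle(h1, h2, h3, h4, h5, h6, h7):
    """Return (y, eta_th) for one open feedwater heater."""
    y = (h3 - h2) / (h6 - h2)        # FWH energy balance
    q_in = h5 - h4                   # boiler heat addition
    q_out = (1 - y) * (h7 - h1)      # condenser heat rejection
    return y, 1 - q_out / q_in

# Illustrative enthalpies (assumed):
y, eta = regen_cycle(h1=192.0, h2=193.0, h3=605.0, h4=610.0,
                     h5=3583.0, h6=2860.0, h7=2115.0)
print(f"y = {y:.3f}, eta_th = {eta:.3f}")
```

Only the fraction (1 − y) of the steam reaches the condenser, which is why q_out, and hence the rejected heat, drops relative to the simple cycle.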
The thermal efficiency of the Rankine cycle increases as a result of regeneration, since the FWH raises the average temperature of the water before it enters the boiler. Many large power plants have as many as eight FWHs.
Closed Feedwater Heaters
In a closed FWH, heat is transferred from the extracted steam to the feedwater without any mixing taking place. Thus the two streams can be at different pressures, since they do not mix. In an ideal closed FWH, the feedwater is heated to the exit temperature of the extracted steam, which ideally leaves the heater as a saturated liquid at the extraction pressure.
Cogeneration
Many systems and industries require energy input in the form of heat, called process heat. Some industries, such as chemical and pulp-and-paper plants, rely heavily on process heat, which is typically supplied by steam at 5 to 7 atm and 150 to 200°C. These plants also require large amounts of electric power. Therefore, it makes economic and engineering sense to use the work potential already present in the steam entering the condenser as process heat. This is called cogeneration.
In the cogeneration cycle shown in the above figure, at times of high demand for process heat, all the steam is routed to the process-heating unit and none to the condenser.
Combined Gas‐Vapor Power Cycle
Gas-turbine cycles typically operate at considerably higher temperatures than steam cycles. The maximum fluid temperature at the turbine inlet is about 620°C for modern steam power plants, but over 1425°C for gas-turbine power plants, and over 1500°C at the burner exit of turbojet engines. It makes engineering sense to take advantage of the desirable high-temperature characteristics of the gas-turbine cycle and to use its high-temperature exhaust gases as the energy source for a bottoming steam power cycle. This is called a combined cycle. Combined cycles can achieve high thermal efficiencies; some recent plants reach η of about 60%.
M. Bahrami ENSC 461 (S 11) Vapor Power Cycles
Introduction to Sample Characterization
Sample Characterization, when used in materials science, refers to the broad and general process by which a material’s structure and properties are probed and measured. It is a fundamental process in the field of materials science, without which no scientific understanding of engineering materials could be ascertained. The scope of the term often differs; some definitions limit the term’s use to techniques which study the microscopic structure and properties of materials, while others use the term to refer to any materials analysis process including macroscopic techniques such as mechanical testing, thermal analysis and density calculation. The scale of the structures observed in materials characterization ranges from angstroms, such as in the imaging of individual atoms and chemical bonds, up to centimeters, such as in the imaging of coarse grain structures in metals.
An optical micrograph showing the micron-scale dendritic microstructure of a bronze alloy.
While many characterization techniques have been practiced for centuries, such as basic optical microscopy, new techniques and methodologies are constantly emerging. In particular the advent of the electron microscope and Secondary ion mass spectrometry in the 20th century has revolutionized the field, allowing the imaging and analysis of structures and compositions on much smaller scales than was previously possible, leading to a huge increase in the level of understanding as to why different materials show different properties and behaviors. More recently, atomic force microscopy has further increased the maximum possible resolution for analysis of certain samples in the last 30 years. Common techniques and instruments used in sample characterization are explained below:
Microscopy is a category of characterization techniques which probe and map the surface and sub-surface structure of a material. These techniques can use photons, electrons, ions or physical cantilever probes to gather data about a sample’s structure on a range of length scales. Some common examples of microscopy instruments include:
- Optical Microscope
- Scanning Electron Microscope (SEM)
- Transmission Electron Microscope (TEM)
- Field Ion Microscope (FIM)
- Scanning Tunneling Microscope (STM)
- Scanning probe microscopy (SPM)
- Atomic Force Microscope (AFM)
- X-ray diffraction topography (XRT)
The most common type of microscope (and the first invented) is the optical microscope. This is an optical instrument containing one or more lenses that produce an enlarged image of a sample placed in the focal plane. Optical microscopes have refractive glass lenses (occasionally plastic or quartz) to focus light into the eye or another light detector. Mirror-based optical microscopes operate in the same manner. Typical magnification of a light microscope, assuming visible-range light, is up to 1250x, with a theoretical resolution limit of around 0.250 micrometres (250 nanometres). This caps the practical magnification at roughly 1500x. Specialized techniques (e.g., scanning confocal microscopy, Vertico SMI) may exceed this magnification, but the resolution remains diffraction limited. The use of shorter wavelengths of light, such as ultraviolet, is one way to improve the spatial resolution of the optical microscope, as are devices such as the near-field scanning optical microscope.
A 40x magnification image of cells in a medical smear test taken through an optical microscope using a wet mount technique
Sarfus, a recent optical technique, increases the sensitivity of a standard optical microscope to the point where it becomes possible to directly visualize nanometric films (down to 0.3 nanometre) and isolated nano-objects (down to 2 nm diameter). The technique is based on the use of non-reflecting substrates for cross-polarized reflected light microscopy.
Ultraviolet light enables the resolution of finer microscopic features and also allows imaging of samples that are transparent to the eye. Near-infrared light can be used to visualize circuitry embedded in bonded silicon devices, since silicon is transparent in this wavelength region.
In fluorescence microscopy, many wavelengths of light, ranging from the ultraviolet to the visible can be used to cause samples to fluoresce to allow viewing by eye or with the use of specifically sensitive cameras.
Phase contrast microscopy is an optical microscopy illumination technique in which small phase shifts in the light passing through a transparent specimen are converted into amplitude or contrast changes in the image. The use of phase contrast does not require staining to view the slide. This microscope technique made it possible to study the cell cycle in live cells.
The traditional optical microscope has more recently evolved into the digital microscope. In addition to, or instead of, directly viewing the object through the eyepieces, a type of sensor similar to those used in a digital camera is used to obtain an image, which is then displayed on a computer monitor. These sensors may use CMOS or charge-coupled device (CCD) technology, depending on the application.
Digital microscopy with very low light levels to avoid damage to vulnerable biological samples is available using sensitive photon-counting digital cameras. It has been demonstrated that a light source providing pairs of entangled photons may minimize the risk of damage to the most light-sensitive samples. In this application of ghost imaging to photon-sparse microscopy, the sample is illuminated with infrared photons, each of which is spatially correlated with an entangled partner in the visible band for efficient imaging by a photon-counting camera.
Scanning Electron Microscope
A scanning electron microscope (SEM) is a type of electron microscope that produces images of a sample by scanning the surface with a focused beam of electrons. The electrons interact with atoms in the sample, producing various signals that contain information about the sample’s surface topography and composition. The electron beam is scanned in a raster scan pattern, and the beam’s position is combined with the detected signal to produce an image. SEM can achieve resolution better than 1 nanometer. Specimens can be observed in high vacuum in conventional SEM, or in low vacuum or wet conditions in variable pressure or environmental SEM, and at a wide range of cryogenic or elevated temperatures with specialized instruments.
These pollen grains taken on an SEM show the characteristic depth of field of SEM micrographs.
The most common SEM mode is detection of secondary electrons emitted by atoms excited by the electron beam. The number of secondary electrons that can be detected depends, among other things, on specimen topography. By scanning the sample and collecting the secondary electrons that are emitted using a special detector, an image displaying the topography of the surface is created.
Energy Dispersive spectroscopy
EDS is a spectroscopy technique providing information on the chemical composition of a material. The electrons passing through the sample can scatter elastically, without energy loss, or inelastically, with energy loss. Elastic scattering normally provides the image information in the transmission electron microscope, while inelastic scattering provides information about the chemical composition or electronic structure of the material.
A graph plots the concentrations of chemical elements detected on a carbon support film on a Cu grid.
In EDS spectroscopy, the inelastic scattering of a primary electron excites an electron out of an atomic shell. The excited atom relaxes when an electron from a higher energy level fills the empty shell position. During this process an X-ray is emitted, carrying the energy difference between the two electron shells involved.
For each atomic species of the periodic table these energy differences are unique and therefore the atom species illuminated by the electron beam can be identified. The intensity of the X-ray signal is proportional to the concentration of the elements and the ratio between the X-ray peaks in the spectrum can be used to determine the composition of the material.
In scanning transmission electron microscopy, a small electron beam is scanned across the sample, generating an image by detecting the variation of elastically scattered electrons at each point (bright-field and dark-field imaging). In EDS mapping, the X-ray emission at each point is acquired simultaneously, providing a chemical map that delivers structural and chemical information in a single scan.
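The peak-ratio quantification described above can be sketched with the Cliff-Lorimer thin-film relation, C_A/C_B = k_AB·(I_A/I_B). The k-factor below is a made-up illustrative value; real k-factors come from calibration standards.

```python
# Sketch of two-element EDS quantification via the Cliff-Lorimer ratio.

def eds_composition(i_a, i_b, k_ab):
    """Return (C_A, C_B) weight fractions from X-ray peak intensities."""
    ratio = k_ab * (i_a / i_b)   # concentration ratio C_A / C_B
    c_a = ratio / (1 + ratio)    # normalize so that C_A + C_B = 1
    return c_a, 1 - c_a

# Hypothetical peak intensities (counts) and k-factor:
c_cu, c_al = eds_composition(i_a=1500.0, i_b=2400.0, k_ab=1.2)
print(f"C_Cu = {c_cu:.3f}, C_Al = {c_al:.3f}")
```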
Introduction to Renewable energy
Since the industrial revolution, conventional energy sources such as oil, coal, and natural gas have proven to be highly effective drivers of economic progress, but emissions from these sources have damaged our environment, contributing to global warming and consequent climate change. The Intergovernmental Panel on Climate Change (IPCC) has raised the issue of containing emissions from fossil fuels and other sources through available scientific and technological options, to mitigate the climate change driven by gaseous emissions and rising CO2 levels from an oil- and coal-powered global economy. Carbon sequestration measures and clean, green renewable energy have been persistently suggested. Renewable energy sources such as biomass, wind, solar, hydropower, and geothermal can provide sustainable energy services. Switching to renewable-based energy systems is being increasingly considered by countries worldwide. With refinements in technology, solar and wind power systems have become affordable, and with policy interventions, markets are rapidly evolving in favor of renewable energy systems.
Renewable energy supply is dominated by traditional biomass, mostly fuel wood used for cooking and heating, especially in developing countries in Africa, Asia, and Latin America. Large hydropower also makes a major contribution, while solar energy, wind energy, modern bio-energy, geothermal energy, and small hydropower are being increasingly tapped. This situation calls for the implementation of aggressive long-term renewable energy programmes and for creating awareness of the benefits of renewable energy in urban and rural settings for domestic and commercial purposes.
What is Renewable Energy?
Renewable energy flows involve natural phenomena such as sunlight, wind, tides, plant growth, and geothermal heat, as the International Energy Agency explains:
Renewable energy is derived from natural processes that are replenished constantly. In its various forms, it derives directly from the sun, or from heat generated deep within the earth. Included in the definition is electricity and heat generated from solar, wind, ocean, hydropower, biomass, geothermal resources, and biofuels and hydrogen derived from renewable resources.
Renewable energy resources and significant opportunities for energy efficiency exist over wide geographical areas, in contrast to other energy sources, which are concentrated in a limited number of countries. Rapid deployment of renewable energy and energy efficiency, and technological diversification of energy sources, would result in significant energy security and economic benefits.
Renewable energy often displaces conventional fuels in four areas: electricity generation, hot water/space heating, transportation, and rural (off-grid) energy services.
Airflows can be used to run wind turbines. Modern utility-scale wind turbines range from around 600 kW to 5 MW of rated power, although turbines with rated output of 1.5–3 MW have become the most common for commercial use. The largest generator capacity of a single installed onshore wind turbine reached 7.5 MW in 2015. The power available from the wind is a function of the cube of the wind speed, so as wind speed increases, power output increases up to the maximum output for the particular turbine. Areas where winds are stronger and more constant, such as offshore and high-altitude sites, are preferred locations for wind farms. Typical capacity factors of wind turbines vary between 16 and 57 percent annually, and tend to be higher at particularly favorable offshore sites.
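The cube-law relationship above can be sketched as P = ½·ρ·A·v³·Cp, where Cp is the turbine's power coefficient (bounded by the Betz limit of about 0.593); the rotor size, speeds, and Cp below are illustrative assumptions.

```python
import math

def wind_power(rotor_diameter_m, wind_speed_ms, cp=0.4, rho=1.225):
    """Estimated turbine output in watts: P = 0.5 * rho * A * v^3 * Cp."""
    area = math.pi * (rotor_diameter_m / 2) ** 2   # swept rotor area, m^2
    return 0.5 * rho * area * wind_speed_ms ** 3 * cp

# Doubling the wind speed multiplies available power by 2^3 = 8:
p1 = wind_power(90.0, 6.0)
p2 = wind_power(90.0, 12.0)
print(p2 / p1)   # -> 8.0
```

This is why siting matters so much: a modestly windier location yields disproportionately more energy.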
The largest wind farm project in Africa
Wind-generated electricity met nearly 4% of global electricity demand in 2015, with nearly 63 GW of new wind power capacity installed. Wind energy was the leading source of new capacity in Europe, the US and Canada, and the second largest in China. In Denmark, wind energy met more than 40% of its electricity demand while Ireland, Portugal and Spain each met nearly 20%.
In 2015 hydropower generated 16.6% of the world’s total electricity and 70% of all renewable electricity. Since water is about 800 times denser than air, even a slow flowing stream of water, or moderate sea swell, can yield considerable amounts of energy. There are many forms of water energy:
Historically, hydroelectric power came from constructing large hydroelectric dams and reservoirs, which remain popular in developing countries. The largest are the Three Gorges Dam (2003) in China and the Itaipu Dam (1984) built by Brazil and Paraguay.
Three Gorges Dam China
Small hydro systems are hydroelectric power installations that typically produce up to 50 MW of power. They are often used on small rivers or as a low impact development on larger rivers. China is the largest producer of hydroelectricity in the world and has more than 45,000 small hydro installations.
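A rough output estimate for installations in the small-hydro range discussed above follows P = η·ρ·g·Q·H; the head, flow, and efficiency below are illustrative assumptions.

```python
def hydro_power_w(flow_m3s, head_m, eta=0.9, rho=1000.0, g=9.81):
    """Hydropower output in watts: P = eta * rho * g * Q * H."""
    return eta * rho * g * flow_m3s * head_m

# e.g. a 20 m head with a 10 m^3/s flow, 90% turbine-generator efficiency:
p = hydro_power_w(10.0, 20.0)
print(f"{p / 1e6:.2f} MW")
```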
Wave power, which captures the energy of ocean surface waves, and tidal power, converting the energy of tides, are two forms of hydropower with future potential; however, they are not yet widely employed commercially. A demonstration project operated by the Ocean Renewable Power Company on the coast of Maine, and connected to the grid, harnesses tidal power from the Bay of Fundy, location of world’s highest tidal flow.
Solar energy, radiant light and heat from the sun, is harnessed using a range of ever-evolving technologies such as solar heating, photovoltaics, concentrated solar power (CSP), concentrator photovoltaics (CPV), solar architecture and artificial photosynthesis. Solar technologies are broadly characterized as either passive solar or active solar depending on the way they capture, convert and distribute solar energy. Passive solar techniques include orienting a building to the Sun, selecting materials with favorable thermal mass or light dispersing properties, and designing spaces that naturally circulate air. Active solar technologies encompass solar thermal energy, using solar collectors for heating, and solar power, converting sunlight into electricity either directly using photovoltaics (PV), or indirectly using concentrated solar power (CSP).
A photovoltaic system converts light into electrical direct current (DC) by taking advantage of the photoelectric effect. Solar PV has turned into a multi-billion, fast-growing industry, continues to improve its cost-effectiveness, and has the most potential of any renewable technologies together with CSP. Concentrated solar power (CSP) systems use lenses or mirrors and tracking systems to focus a large area of sunlight into a small beam. Commercial concentrated solar power plants were first developed in the 1980s. CSP-Stirling has by far the highest efficiency among all solar energy technologies.
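A back-of-envelope yield estimate for the photovoltaic systems described above multiplies irradiance, panel area, module efficiency, and a performance ratio covering real-world losses; every number below is an illustrative assumption.

```python
def pv_annual_kwh(area_m2, efficiency,
                  annual_irradiance_kwh_m2=1500.0, performance_ratio=0.75):
    """Estimated annual PV energy yield in kWh."""
    return area_m2 * efficiency * annual_irradiance_kwh_m2 * performance_ratio

# 20 m^2 of 18%-efficient panels at a moderately sunny site:
e = pv_annual_kwh(area_m2=20.0, efficiency=0.18)
print(f"{e:.0f} kWh/year")
```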
In 2011, the International Energy Agency said that "the development of affordable, inexhaustible and clean solar energy technologies will have huge longer-term benefits. It will increase countries' energy security through reliance on an indigenous, inexhaustible and mostly import-independent resource, enhance sustainability, reduce pollution, lower the costs of mitigating climate change, and keep fossil fuel prices lower than otherwise."
High-temperature geothermal energy is thermal energy generated and stored in the Earth. Thermal energy is the energy that determines the temperature of matter. Earth's geothermal energy originates from the original formation of the planet and from radioactive decay of minerals. The geothermal gradient, which is the difference in temperature between the core of the earth and its surface, drives a continuous conduction of thermal energy in the form of heat from the core to the surface. The adjective geothermal originates from the Greek roots geo, meaning earth, and thermos, meaning heat.
The heat that is used for geothermal energy can be from deep within the Earth, all the way down to Earth’s core – 4,000 miles (6,400 km) down. At the core, temperatures may reach over 9,000 °F (5,000 °C). Heat conducts from the core to surrounding rock. Extremely high temperature and pressure cause some rock to melt, which is commonly known as magma. Magma convects upward since it is lighter than the solid rock. This magma then heats rock and water in the crust, sometimes up to 700 °F (371 °C).
Low-temperature geothermal refers to the use of the outer crust of the earth as a thermal battery to provide renewable thermal energy for heating and cooling buildings, and for other refrigeration and industrial uses. In this form of geothermal, a geothermal heat pump and a ground-coupled heat exchanger are used together to move heat energy into the earth (for cooling) and out of the earth (for heating) on a seasonal basis. Low-temperature geothermal (generally referred to as "GHP") is an increasingly important renewable technology because it reduces total annual energy loads associated with heating and cooling, and also flattens the electric demand curve, eliminating the extreme summer and winter peak electric supply requirements.
Biomass is biological material derived from living, or recently living organisms. It most often refers to plants or plant-derived materials which are specifically called lignocellulosic biomass. As an energy source, biomass can either be used directly via combustion to produce heat, or indirectly after converting it to various forms of biofuel. Conversion of biomass to biofuel can be achieved by different methods which are broadly classified into: thermal, chemical, and biochemical methods. Wood remains the largest biomass energy source today; examples include forest residues – such as dead trees, branches and tree stumps –, yard clippings, wood chips and even municipal solid waste. In the second sense, biomass includes plant or animal matter that can be converted into fibers or other industrial chemicals, including biofuels. Industrial biomass can be grown from numerous types of plants, including miscanthus, switchgrass, hemp, corn, poplar, willow, sorghum, sugarcane, bamboo, and a variety of tree species, ranging from eucalyptus to oil palm (palm oil).
Biomass can be converted to other usable forms of energy, like methane gas, or to transportation fuels, like ethanol and biodiesel. Rotting garbage, and agricultural and human waste, all release methane gas, also called landfill gas or biogas. Crops such as corn and sugarcane can be fermented to produce the transportation fuel ethanol. Biodiesel, another transportation fuel, can be produced from left-over food products like vegetable oils and animal fats. Biomass-to-liquids (BTL) fuels and cellulosic ethanol are still under research.
Biofuels include a wide range of fuels which are derived from biomass. The term covers solid, liquid, and gaseous fuels. Liquid biofuels include bioalcohols, such as bioethanol, and oils, such as biodiesel. Gaseous biofuels include biogas, landfill gas and synthetic gas. Bioethanol is an alcohol made by fermenting the sugar components of plant materials and it is made mostly from sugar and starch crops. These include maize, sugarcane and, more recently, sweet sorghum. The latter crop is particularly suitable for growing in dryland conditions, and is being investigated by International Crops Research Institute for the Semi-Arid Tropics for its potential to provide fuel, along with food and animal feed, in arid parts of Asia and Africa.
Some environmental agencies have argued that biofuels do not fully address global warming concerns. Biodiesel is made from vegetable oils, animal fats or recycled greases. It can be used as a fuel for vehicles in its pure form, or more commonly as a diesel additive to reduce levels of particulates, carbon monoxide, and hydrocarbons from diesel-powered vehicles. Biodiesel is produced from oils or fats using transesterification and is the most common biofuel in Europe.
Biomass, biogas and biofuels are burned to produce heat and power, and in doing so harm the environment. Pollutants such as sulphurous oxides (SOx), nitrogen oxides (NOx), and particulate matter (PM) are produced by the combustion of biomass.
Energy storage is a collection of methods used to store electrical energy on an electrical power grid, or off it. Electrical energy is stored during times when production (especially from intermittent power plants such as renewable electricity sources such as wind power, tidal power, and solar power) exceeds consumption, and returned to the grid when production falls below consumption. Pumped-storage hydroelectricity is used for more than 90% of all grid power storage.
Introduction to Stress
In this article we will learn about the types of stress, i.e. normal stress (tensile and compressive) and shear stress. We will discuss the definition and formula of each stress in detail.
Definition of Stress
Stress is defined as the internal resistance set up by a body when it is deformed. It is measured in N/m^2, a unit called the pascal (Pa). A larger unit of stress is the megapascal (MPa).
1 Pa = 1 N/m^2
1 MPa = 10^6 N/m^2 = 1 N/mm^2
The term stress is used to express the loading in terms of force applied to a certain cross-sectional area of an object. From the perspective of loading, stress is the applied force or system of forces that tends to deform a body. From the perspective of what is happening within a material, stress is the internal distribution of forces within a body that balance and react to the loads applied to it. The stress distribution may or may not be uniform, depending on the nature of the loading condition. For example, a bar loaded in pure tension will essentially have a uniform tensile stress distribution. However, a bar loaded in bending will have a stress distribution that changes with distance from the neutral axis.
Simplifying assumptions are often used to represent stress as a vector quantity for many engineering calculations and for material property determination. The word “vector” typically refers to a quantity that has a “magnitude” and a “direction”. For example, the stress in an axially loaded bar is simply equal to the applied force divided by the bar’s cross-sectional area.
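The axially loaded bar example above reduces to a one-line calculation, stress = force / cross-sectional area; the load and bar dimensions below are illustrative.

```python
def axial_stress_pa(force_n, area_m2):
    """Average normal stress in an axially loaded bar, in pascals."""
    return force_n / area_m2

# A 50 kN pull on a 25 mm x 25 mm square bar:
sigma = axial_stress_pa(50e3, 0.025 * 0.025)
print(f"{sigma / 1e6:.0f} MPa")   # -> 80 MPa
```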
Significant stress may exist even when deformation is negligible or non-existent (a common assumption when modeling the flow of water). Stress may exist in the absence of external forces; such built-in stress is important, for example, in pre-stressed concrete and tempered glass. Stress may also be imposed on a material without the application of net forces, for example by changes in temperature or chemical composition, or by external electromagnetic fields (as in piezoelectric and magnetostrictive materials).
Types of Stress
A stress acting on a body may be a normal stress or a shear stress.
Normal stress is a stress that acts perpendicular to the area. It is given by
σ = P / A
where P is the load applied normal to the cross-section and A is the cross-sectional area.
Normal stress is further subdivided into two types: tensile stress and compressive stress.
- The stress induced in a body when it is subjected to two equal and opposite pulls, as shown in the figure given below, is called tensile stress.
- Due to the tensile stress there is an increase in the length of the body and a decrease in its cross-sectional area.
- Tensile stress is a type of normal stress, so it acts at 90 degrees to the area.
- The strain induced by tensile stress is called tensile strain. It is equal to the ratio of the increase in length to the original length.
- The stress induced in a body when it is subjected to two equal and opposite pushes, as shown in the figure given below, is called compressive stress.
- Due to the compressive stress there is a decrease in the length and an increase in the cross-sectional area of the body.
- Compressive stress is also a type of normal stress, so it too acts at 90 degrees to the area.
- The strain induced by compressive stress is called compressive strain. It is equal to the ratio of the decrease in length to the original length.
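The two strain definitions above (change in length over original length) can be sketched as follows; the bar lengths used here are illustrative assumptions.

```python
# Normal strain = change in length / original length (dimensionless).
# Positive values indicate tensile strain, negative values compressive strain.

def normal_strain(original_length: float, final_length: float) -> float:
    """Return the engineering normal strain for the given lengths."""
    return (final_length - original_length) / original_length

print(normal_strain(2.000, 2.004))  # tensile strain of about 0.002
print(normal_strain(2.000, 1.996))  # compressive strain of about -0.002
```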
Another simple type of stress occurs when a uniformly thick layer of elastic material like glue or rubber is firmly attached to two stiff bodies that are pulled in opposite directions by forces parallel to the layer, or when a section of a soft metal bar is being cut by the jaws of a scissors-like tool. Let F be the magnitude of those forces and M be the mid-plane of that layer. Just as in the normal stress case, the part of the layer on one side of M must pull the other part with the same force F. Assuming that the direction of the forces is known, the stress across M can be expressed by the single number τ (tau) = F/A, where A is the area of the layer.
However, unlike normal stress, this simple shear stress is directed parallel to the cross-section considered, rather than perpendicular to it. For any plane S that is perpendicular to the layer, the net internal force across S, and hence the stress, will be zero.
As in the case of an axially loaded bar, in practice the shear stress may not be uniformly distributed over the layer; so, as before, the ratio F/A will only be an average (“nominal”, “engineering”) stress. However, that average is often sufficient for practical purposes. Shear stress is observed also when a cylindrical bar such as a shaft is subjected to opposite torques at its ends. In that case, the shear stress on each cross-section is parallel to the cross-section, but oriented tangentially relative to the axis, and increases with distance from the axis. Significant shear stress occurs in the middle plate (the “web”) of I-beams under bending loads, due to the web constraining the end plates (“flanges”).
- Shear stress is induced in a body when it is subjected to two equal and opposite forces acting tangentially to the area.
- The strain produced due to the shear stress is called shear strain.
- The shear stress is denoted by the Greek letter τ (tau).
- It is defined as the ratio of the shear resistance to the shear area, τ = F/A.
- Shear stress is responsible for the change in the shape of the body; it does not affect its volume.
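As a small illustration of the τ = F/A relation, consider a circular pin loaded in single shear. The function name, pin diameter, and load below are hypothetical.

```python
import math

# Average shear stress on a pin in single shear: tau = F / (pi * d^2 / 4).
# As with normal stress, a force in N over an area in mm^2 gives MPa directly.

def pin_shear_stress(force_n: float, diameter_mm: float) -> float:
    """Return the average shear stress in MPa on a circular pin."""
    area_mm2 = math.pi * diameter_mm ** 2 / 4.0
    return force_n / area_mm2

# A 12 kN tangential load carried by a 20 mm diameter pin:
print(pin_shear_stress(12_000.0, 20.0))  # roughly 38.2 MPa
```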
The equations of fluid mechanics, the Navier-Stokes equations, are solvable analytically for only a limited number of flows under certain assumptions. The known solutions are extremely useful in helping to understand fluid flow, but rarely can they be used directly in engineering analysis or design. It is therefore necessary to use other approaches to solve practical problems.
In experimental analysis, the problem is that many flows require several dimensionless parameters for their specification and it may be impossible to set up an experiment which correctly scales the actual flow. Examples are flows around aircraft or ships. In order to achieve the same Reynolds number with smaller models, fluid velocity has to be increased. For aircraft, this may give too high a Mach number if the same fluid (air) is used; one tries to find a fluid which allows matching of both parameters. For ships, the issue is to match both the Reynolds and Froude numbers, which is nearly impossible.
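The scaling difficulty described above can be made concrete: keeping Re = VL/ν fixed for a smaller model in the same fluid forces the test velocity up in inverse proportion to the length scale. The numbers below (a 10 m component tested as a 1 m model in air) are illustrative assumptions.

```python
# Reynolds number Re = V * L / nu for a characteristic speed V, length L,
# and kinematic viscosity nu.

def reynolds(velocity_ms: float, length_m: float, nu_m2s: float) -> float:
    return velocity_ms * length_m / nu_m2s

NU_AIR = 1.5e-5  # kinematic viscosity of air, m^2/s (approximate)

full_scale = reynolds(100.0, 10.0, NU_AIR)   # full-scale component at 100 m/s
model      = reynolds(1000.0, 1.0, NU_AIR)   # a 1/10 model needs 10x the speed
print(full_scale == model)  # True, but 1000 m/s in air is far past Mach 1
```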
Today, due to the increased computational power of advanced computers, interest in numerical techniques has grown dramatically. Solution of the equations of fluid mechanics on computers has become so important that it now occupies the attention of perhaps a third of all researchers in fluid mechanics, and the proportion is still increasing. This field is known as computational fluid dynamics (CFD); it contains many subspecialties. We shall discuss only a small subset of methods for solving the equations describing fluid flow and related phenomena.
What is CFD?
Flows and related phenomena can be described by partial differential equations, which cannot be solved analytically except in special cases. To obtain an approximate solution numerically, we have to use a discretization method which approximates the differential equations by a system of algebraic equations, which can then be solved on a computer. The approximations are applied to small domains in space and/or time so the numerical solution provides results at discrete locations in space and time. Much as the accuracy of experimental data depends on the quality of the tools used, the accuracy of numerical solutions is dependent on the quality of discretizations used. Contained within the broad field of computational fluid dynamics are activities that cover the range from the automation of well-established engineering design methods to the use of detailed solutions of the Navier-Stokes equations as substitutes for experimental research into the nature of complex flows. At one end, one can purchase design packages for pipe systems that solve problems in a few seconds or minutes on personal computers or workstations. On the other, there are codes that may require hundreds of hours on the largest super-computers.
1. Finite Difference Method:
This is the oldest method for numerical solution of PDE’s, believed to have been introduced by Euler in the 18th century. It is also the easiest method to use for simple geometries. The starting point is the conservation equation in differential form. The solution domain is covered by a grid. At each grid point, the differential equation is approximated by replacing the partial derivatives by approximations in terms of the nodal values of the functions. The result is one algebraic equation per grid node, in which the variable value at that and a certain number of neighbor nodes appear as unknowns.
In principle, the FD method can be applied to any grid type. However, in all applications of the FD method known to the authors, it has been applied to structured grids. The grid lines serve as local coordinate lines. Taylor series expansion or polynomial fitting is used to obtain approximations to the first and second derivatives of the variables with respect to the coordinates. When necessary, these methods are also used to obtain variable values at locations other than grid nodes (interpolation).
On structured grids, the FD method is very simple and effective. It is especially easy to obtain higher-order schemes on regular grids. The disadvantage of FD methods is that the conservation is not enforced unless special care is taken. Also, the restriction to simple geometries is a significant disadvantage in complex flows.
Basic Steps in Finite Difference Method:
- In order to solve a given PDE by numerical methods, the partial differentials of the dependent variable in PDEs must be approximated by finite difference relations (algebraic equations).
- Solution of a PDE in a two-dimensional rectangular domain with its initial and/or boundary conditions.
- Division of domain in uniform/non-uniform mesh.
- Formulation of algebraic equations for each grid point/control volume (matrix system Ax = b).
- Methods for finite difference approximations.
- Taylor series expansions.
- Finite Difference by Polynomials.
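The steps above can be sketched for a model problem: u''(x) = -2 on [0, 1] with u(0) = u(1) = 0, whose exact solution is u = x(1 - x). The grid, the central-difference replacement of u'', and the model problem itself are illustrative choices, not from the text.

```python
# Finite difference solution of u''(x) = -2 with u(0) = u(1) = 0.
# The central difference (u[i-1] - 2u[i] + u[i+1]) / h^2 replaces u'' at each
# interior node, giving one algebraic equation per node (a tridiagonal system).

def solve_poisson_1d(n):
    """Return interior grid points and the FD solution on n interior nodes."""
    h = 1.0 / (n + 1)
    x = [(i + 1) * h for i in range(n)]
    # Tridiagonal system: u[i-1] - 2 u[i] + u[i+1] = -2 h^2 (boundary values are 0).
    sub = [1.0] * n      # sub-diagonal
    diag = [-2.0] * n    # main diagonal
    sup = [1.0] * n      # super-diagonal
    rhs = [-2.0 * h * h] * n
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, n):
        m = sub[i] / diag[i - 1]
        diag[i] -= m * sup[i - 1]
        rhs[i] -= m * rhs[i - 1]
    u = [0.0] * n
    u[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (rhs[i] - sup[i] * u[i + 1]) / diag[i]
    return x, u

x, u = solve_poisson_1d(9)
err = max(abs(ui - xi * (1.0 - xi)) for xi, ui in zip(x, u))
print(err)  # essentially zero: the scheme is exact for this quadratic solution
```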
2. Finite Volume Method:
The FV method uses the integral form of the conservation equations as its starting point. The solution domain is subdivided into a finite number of contiguous control volumes (CVs), and the conservation equations are applied to each CV. At the centroid of each CV lies a computational node at which the variable values are to be calculated. Interpolation is used to express variable values at the CV surface in terms of the nodal (CV-center) values. Surface and volume integrals are approximated using suitable quadrature formulae. As a result, one obtains an algebraic equation for each CV, in which a number of neighbor nodal values appear.
The FV method can accommodate any type of grid, so it is suitable for complex geometries. The grid defines only the control volume boundaries and need not be related to a coordinate system. The method is conservative by construction, so long as surface integrals (which represent convective and diffusive fluxes) are the same for the CVs sharing the boundary.
The FV approach is perhaps the simplest to understand and to program. All terms that need to be approximated have physical meaning, which is why the method is popular with engineers. The disadvantage of FV methods compared to FD schemes is that methods of order higher than second are more difficult to develop in 3D. This is because the FV approach requires three levels of approximation: interpolation, differentiation, and integration.
Basic Steps in Finite Volume Method:
- Divide the continuous domain into a number of discrete subdomains (control volumes) by a grid. The grid defines the boundaries of a control volume, while the computational node lies at the center of each control volume.
- For each sub-domain, derive governing algebraic equations from the governing differential equations.
- Obtain a system of algebraic equations from above.
- Solve the above system of algebraic equations to obtain values of the dependent variables at identified discrete points (computational nodes).
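A minimal sketch of these steps for steady 1-D heat conduction with fixed end temperatures: each control volume balances the diffusive fluxes through its two faces, and a Gauss-Seidel sweep solves the resulting algebraic system. The rod data are illustrative assumptions.

```python
# Finite volume solution of d/dx(k dT/dx) = 0 on a rod of unit length split
# into n equal control volumes, with computational nodes at the CV centres.
# Each equation balances the diffusive flux through the two CV faces; a
# boundary CV sees only a half-cell distance to the wall (hence the factor 2).

def solve_conduction_fv(n, t_left, t_right, sweeps=5000):
    """Return CV-centre temperatures (uniform conductivity and spacing)."""
    t = [0.0] * n
    for _ in range(sweeps):  # Gauss-Seidel iteration on the flux balances
        for i in range(n):
            t_w = t[i - 1] if i > 0 else t_left       # west neighbour or wall
            t_e = t[i + 1] if i < n - 1 else t_right  # east neighbour or wall
            a_w = 1.0 if i > 0 else 2.0
            a_e = 1.0 if i < n - 1 else 2.0
            t[i] = (a_w * t_w + a_e * t_e) / (a_w + a_e)
    return t

# Approaches the exact linear profile 90, 70, 50, 30, 10 at the CV centres:
print(solve_conduction_fv(5, 100.0, 0.0))
```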
3. Finite Element Method:
The FE method is similar to the FV method in many ways. The domain is broken into a set of discrete volumes or finite elements that are generally unstructured; in 2D, they are usually triangles or quadrilaterals, while in 3D, tetrahedra or hexahedra are most often used. The distinguishing feature of FE methods is that the equations are multiplied by a weight function before they are integrated over the entire domain. In the simplest FE methods, the solution is approximated by a linear shape function within each element in a way that guarantees continuity of the solution across element boundaries. Such a function can be constructed from its values at the corners of the elements. The weight function is usually of the same form.
This approximation is then substituted into the weighted integral of the conservation law and the equations to be solved are derived by requiring the derivative of the integral with respect to each nodal value to be zero; this corresponds to selecting the best solution within the set of allowed functions (the one with minimum residual). The result is a set of non-linear algebraic equations.
An important advantage of finite element methods is the ability to deal with arbitrary geometries; there is an extensive literature devoted to the construction of grids for finite element methods. The grids are easily refined; each element is simply subdivided. Finite element methods are relatively easy to analyze mathematically and can be shown to have optimality properties for certain types of equations. The principal drawback, which is shared by any method that uses unstructured grids, is that the matrices of the linearized equations are not as well structured as those for regular grids making it more difficult to find efficient solution methods.
Basic Steps in Finite Element Method:
The analysis of a structure by the Finite Element Method can be divided into several distinctive steps. These steps are to a large extent similar to the steps defined for the matrix method. Here we give a theoretical approach to the method, and its different steps.
- Discretization is the process of dividing the problem into several small elements, connected at nodes. All elements and nodes must be numbered so that we can set up a matrix of connectivity. The picture to the right shows discretization of a transverse frame into beam elements and discretization of a plane stress problem into quadrilateral elements. It is important to remember that the order in which the nodes and elements are numbered greatly affects the computing time. This is because we get a symmetric, banded stiffness matrix whose bandwidth depends on the difference in the node numbers for each element, and this bandwidth is directly connected with the number of calculations the computer has to do. Computer FEM programs have internal numbering that reduces this bandwidth to a minimum by renumbering nodes internally if the numbering is not optimal.
- The element analysis has two key components: expressing the displacements within the elements, and maintaining equilibrium of the elements. In addition, stress-strain relationships are needed to maintain compatibility. The final result is the element stiffness relationship: S = kv. For beam elements this relationship was obtained using the exact relationships between forces and moments and the corresponding displacements. These results could therefore be interpreted as being obtained from the governing differential equation and boundary conditions of the beam elements. For a plane stress problem, however, it is not possible to use an exact solution. The displacements within the elements are expressed in terms of shape functions scaled by the node displacements. Hence, by assuming expressions for the shape functions, the displacements at an arbitrary point within the element are determined by the nodal point displacements. The section of the structure that the element represents is kept in place by the stresses along its edges. In finite element analysis it is convenient to work with nodal point forces, so the edge stresses may in the general case be replaced by equivalent nodal point forces by requiring the element to be in integrated equilibrium, using work or energy considerations. This technique is often referred to as "lumping" the edge forces to nodal forces.
This requirement results in a relationship between the nodal point displacements and forces given as:
S = kv + S0, where:
S – generalized nodal point forces
k – element stiffness matrix
v – nodal point displacements
S0 – nodal point forces from external loads
Computer programs usually offer many types of elements to choose from; the most usual are shown below.
- In the system analysis, a relationship between the loads and the nodal point displacements is established by demanding equilibrium for all nodal points in the structure: Kr = R – R0.
The stiffness matrix K is established by directly adding the contributions from the element stiffness matrices. Similarly, the load vector R is obtained from the known nodal forces.
- Boundary conditions are introduced by setting nodal displacements to known values or by adding spring stiffnesses.
- The global displacements are found by solving the linear set of equations stated above:
r = K^-1(R – R0)
- The stresses are determined from the strains by Hooke's law. The strains are derived from the displacement functions within the element, and the element displacements v are obtained from the global displacements r. The stresses may be expressed generally as:
σ = D B v, where v = a r
D – Hooke's law in matrix form
B – strain-displacement matrix, derived from the displacement functions u(x, y, z)
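The system-analysis steps above (assemble K from element matrices, apply boundary conditions, solve for r) can be sketched for the simplest possible case, a 1-D bar of two-node elements. The element data and load are illustrative assumptions, and R0 is taken as zero.

```python
# Assembly and solution sketch for a 1-D bar fixed at node 0 with an axial tip
# load. Each two-node element contributes k_e = (EA/L) * [[1, -1], [-1, 1]] to
# the global stiffness matrix K; the fixed node is removed and K r = R is
# solved by naive Gaussian elimination (adequate for this tiny system).

def assemble_and_solve(ea, lengths, tip_load):
    n = len(lengths) + 1                     # one node more than elements
    K = [[0.0] * n for _ in range(n)]
    for e, L in enumerate(lengths):          # add element stiffness contributions
        k = ea / L
        K[e][e] += k
        K[e][e + 1] -= k
        K[e + 1][e] -= k
        K[e + 1][e + 1] += k
    R = [0.0] * n
    R[-1] = tip_load                         # point load at the free end
    # Boundary condition: displacement of node 0 is zero, so drop row/column 0.
    A = [row[1:] for row in K[1:]]
    b = R[1:]
    m = n - 1
    for i in range(m):                       # forward elimination
        for j in range(i + 1, m):
            f = A[j][i] / A[i][i]
            for c in range(i, m):
                A[j][c] -= f * A[i][c]
            b[j] -= f * b[i]
    r = [0.0] * m
    for i in range(m - 1, -1, -1):           # back substitution
        r[i] = (b[i] - sum(A[i][c] * r[c] for c in range(i + 1, m))) / A[i][i]
    return [0.0] + r                         # prepend the fixed node

# Two 1 m elements, EA = 1000 N, 10 N tip load: each element stretches FL/EA.
print(assemble_and_solve(1000.0, [1.0, 1.0], 10.0))  # about [0.0, 0.01, 0.02]
```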
Output interpretation programs, called post-processors, help the user sort out the output and display it in graphical form.