Finite Element Analysis FEA Terms and Definitions (A to Z) Part-2

(A to Z) of Finite Element Analysis




Same as complex eigenvalues.
Same as complex eigenvectors.
The frequency at which the damped system vibrates naturally when only an initial disturbance is applied.
Any mechanism that dissipates energy in a vibrating system.

The damping factor is the ratio of the actual damping to the critical damping. It is often specified as a percentage. If the damping factor is less than one then the system can undergo free vibrations. The free vibrations will decay to zero with time. If the damping factor is greater than one then the decay is exponential and no vibrations occur. For most structures the damping factor is very small.
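The damping factor also sets the frequency at which the damped system vibrates freely and the rate at which the free vibration decays. A minimal Python sketch (the numerical values here are purely illustrative):

```python
import math

def damped_free_vibration(zeta, wn, t):
    """Damped natural frequency and decay envelope of an underdamped system.

    zeta : damping factor (actual damping / critical damping), must be < 1
    wn   : undamped natural frequency in rad/s
    t    : time in seconds
    """
    assert 0.0 <= zeta < 1.0, "free vibration requires a damping factor below one"
    wd = wn * math.sqrt(1.0 - zeta**2)      # damped natural frequency
    envelope = math.exp(-zeta * wn * t)     # free vibration decays to zero
    return wd, envelope

# A lightly damped structure (2% of critical) vibrates at almost wn
wd, env = damped_free_vibration(0.02, 10.0, 1.0)
```

For the very small damping factors typical of most structures, wd is nearly indistinguishable from wn.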
Elements that are defined as one shape in the basis space but they are a simpler shape in the real space. A quadrilateral can degenerate into a triangle. A brick element can degenerate into a wedge, a pyramid or a tetrahedron. Degenerate elements should be avoided in practice.
The number of equations of equilibrium for the system. In dynamics, the number of displacement quantities which must be considered in order to represent the effects of all of the significant inertia forces. Degrees of freedom define the ability of a given node to move in any direction in space.
There are six types of DOF for any given node:
3 possible translations (one each in the X, Y and Z directions) and 3 possible rotations (one rotation about each of the X, Y and Z axes).
DOF are defined and restricted by the elements and constraints associated with each node.
The Jacobian matrix is used to relate derivatives in the basis space to the real space. The determinant of the Jacobian, det(J), is a measure of the distortion of the element when mapping from the basis to the real space.
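For a 4-node bilinear quadrilateral, det(J) can be evaluated at any basis-space point; a sketch assuming the standard counter-clockwise node ordering and bilinear shape functions:

```python
import numpy as np

def jacobian_det(xy, xi, eta):
    """det(J) for a 4-node bilinear quadrilateral at basis-space point (xi, eta).

    xy : 4x2 array of node coordinates in the real space, ordered
         counter-clockwise starting at (-1, -1) in the basis space.
    """
    # Derivatives of the bilinear shape functions w.r.t. xi and eta
    dN_dxi  = 0.25 * np.array([-(1 - eta),  (1 - eta), (1 + eta), -(1 + eta)])
    dN_deta = 0.25 * np.array([-(1 - xi), -(1 + xi),  (1 + xi),   (1 - xi)])
    J = np.array([dN_dxi, dN_deta]) @ xy   # 2x2 Jacobian matrix
    return np.linalg.det(J)

# An undistorted 2x2 square maps from the 2x2 basis square with det(J) = 1
square = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 2.0], [0.0, 2.0]])
```

A distorted element gives a det(J) that varies over the element; a zero or negative value signals an unacceptable mapping.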
The applied loading is a known function of time.
A measure of stress where the hydrostatic stress has been subtracted from the actual stress. Material failures that are flow failures (plasticity and creep) fail independently of the hydrostatic stress. The failure is a function of the deviatoric stress.
When a matrix is factorized into a triangular form the ratio of a diagonal term in the factorized matrix to the corresponding term in the original matrix decreases in size as one moves down the diagonal. If the ratio goes to zero the matrix is singular and if it is negative the matrix is not positive definite. The diagonal decay can be used as an approximate estimate of the condition number of the matrix.
The eigenvectors of a system can be used to define a coordinate transformation such that, in these generalized coordinates the coefficient matrices (typically mass and stiffness) are diagonal.
If there is a stress concentration in a structure the high stress will reduce rapidly with distance from the peak value. The distance over which it drops to some small value is called the die-away length. A fine mesh is required over this die-away length for accurate stress results.
The name for various techniques for numerically integrating equations of motion. These are either implicit or explicit methods and include central difference, Crank-Nicholson, Runge-Kutta, Newmark beta and Wilson theta.
The cosines of the angles a vector makes with the global x,y,z axes.
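Direction cosines are simply the components of the unit vector along the given vector, as this small sketch shows:

```python
import numpy as np

def direction_cosines(v):
    """Cosines of the angles a vector makes with the global x, y, z axes."""
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

# A vector along (1, 1, 1) makes equal angles with all three axes,
# so each direction cosine is 1/sqrt(3)
c = direction_cosines([1.0, 1.0, 1.0])
```

The squares of the direction cosines always sum to one.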
The model is defined in terms of an ordinary differential equation and the system has a finite number of degrees of freedom.
The process of dividing geometry into smaller pieces (finite elements) to prepare for analysis, i.e. Meshing.
A form of discrete parameter model where the displacements of the system are the basic unknowns.
The distance, translational and rotational, that a node travels from its initial position to its post-analysis position. The total displacement is represented by components in each of the 3 translational directions and the 3 rotational directions.
Plots showing the deformed shape of the structure. For linear small deflection problems the displacements are usually multiplied by a magnifying factor before plotting the deformed shape.
The nodal displacements written as a column vector.
If two connecting elements have different shape functions along the connection line they are said to be incompatible. This should be avoided since convergence to the correct solution cannot be guaranteed.
Elements are defined as simple shapes in the basis space: quadrilaterals are squares, triangles are isosceles triangles. If they are not this shape in the real space they are said to be distorted. Too much distortion can lead to errors in the solution.
An equivalent stress measure for friction materials (typically sand). The effect of hydrostatic stress is included in the equivalent stress.
An analysis that includes the effect of the variables changing with time as well as space.
The factor relating the steady state displacement response of a system to a sinusoidal force input. It is the same as the receptance.
A modeling process where consideration as to time effects in addition to spatial effects are included. A dynamic model can be the same as a static model or it can differ significantly depending upon the nature of the problem.
The time dependent response of a dynamic system in terms of its displacement, velocity or acceleration at any given point of the system.
If the structure is vibrating steadily at a frequency ω then the dynamic stiffness is (K + iωC − ω²M). It is the inverse of the dynamic flexibility matrix.
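For a single degree of freedom the dynamic stiffness and its inverse, the receptance, can be sketched directly (mass, damping and stiffness values here are illustrative):

```python
# Single degree of freedom: mass m, viscous damping c, elastic stiffness k
m, c, k = 1.0, 0.5, 100.0

def dynamic_stiffness(w):
    """K + i*w*C - w**2*M evaluated at circular frequency w (rad/s)."""
    return k + 1j * w * c - w**2 * m

def receptance(w):
    """Dynamic flexibility: steady-state displacement per unit sinusoidal force."""
    return 1.0 / dynamic_stiffness(w)

# At w = 0 the receptance reduces to the static flexibility 1/k
```

At resonance (w² = k/m) the real part vanishes and only the damping term limits the response.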
Stresses that vary with time and space.
Special forms of substructuring used within a dynamic analysis. Dynamic substructuring is always approximate and causes some loss of accuracy in the dynamic solution.



Problems that require calculation of eigenvalues and eigenvectors for their solution. Typically solving free vibration problems or finding buckling loads.
The roots of the characteristic equation of the system. If a system has n equations of motion then it has n eigenvalues. The square roots of the eigenvalues are the resonant frequencies. These are the frequencies that the structure will vibrate at if given some initial disturbance with no other forcing. There are other problems that require the solution of the eigenvalue problem; the buckling loads of a structure are eigenvalues. Latent roots and characteristic values are synonyms for eigenvalues.
The displacement shape that corresponds to the eigenvalues. If the structure is excited at a resonant frequency then the shape that it adopts is the mode shape corresponding to the eigenvalue. Latent vectors and normal modes are the same as eigenvectors.
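The free-vibration eigenvalue problem can be solved numerically for a small discrete system; a sketch using an illustrative two-DOF spring-mass chain (scipy is assumed to be available):

```python
import numpy as np
from scipy.linalg import eigh   # generalized symmetric eigensolver

# Two-DOF spring-mass chain (illustrative): K*phi = lambda*M*phi
K = np.array([[2.0, -1.0],
              [-1.0, 1.0]])     # stiffness matrix
M = np.eye(2)                   # unit masses

lam, phi = eigh(K, M)           # eigenvalues in ascending order
freqs = np.sqrt(lam)            # resonant frequencies in rad/s
```

The columns of phi are the eigenvectors (mode shapes) corresponding to each eigenvalue.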
If a structure is sitting on a flexible foundation the supports are treated as a continuous elastic foundation. The elastic foundation can have a significant effect upon the structural response.
If the relationship between loads and displacements is linear then the problem is elastic. For a multi-degree of freedom system the forces and displacements are related by the elastic stiffness matrix.
Electromagnetic and electrostatic problems are forms of electric field problems.

In the finite element method the geometry is divided up into elements, much like basic building blocks. Each element has nodes associated with it. The behavior of the element is defined in terms of the freedoms at the nodes.
Individual element matrices have to be assembled into the complete stiffness matrix. This is basically a process of summing the element matrices. This summation has to be of the correct form. For the stiffness method the summation is based upon the fact that element displacements at common nodes must be the same.
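Because element displacements at common nodes must be equal, assembly amounts to adding each element matrix into the global matrix at the element's degree-of-freedom positions. A sketch using two illustrative axial spring elements:

```python
import numpy as np

def assemble(n_dof, elements):
    """Sum element stiffness matrices into the global stiffness matrix.

    elements: list of (dof_indices, k_e) pairs. Displacements at shared
    nodes are the same, so element terms simply add at common entries.
    """
    K = np.zeros((n_dof, n_dof))
    for dofs, k_e in elements:
        K[np.ix_(dofs, dofs)] += k_e
    return K

def spring(stiffness):
    """2x2 stiffness matrix of an axial spring element."""
    return stiffness * np.array([[1.0, -1.0], [-1.0, 1.0]])

# Two springs in series (stiffness 10 and 20) sharing the middle node
K_global = assemble(3, [([0, 1], spring(10.0)), ([1, 2], spring(20.0))])
```

The shared node's diagonal entry receives contributions from both elements.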
Stresses and strains within elements are usually defined at the Gauss points (ideally at the Barlow points) and the node points. The most accurate estimates are at the reduced Gauss points (more specifically the Barlow points). Stresses and strains are usually calculated here and extrapolated to the node points.
Methods for defining equations of equilibrium and compatibility through consideration of possible variations of the energies of the system. The general form is Hamilton's principle and sub-sets of this are the principle of virtual work including the principle of virtual displacements (PVD) and the principle of virtual forces (PVF).
Each eigenvector (mode shape or normal mode) can be multiplied by an arbitrary constant and still satisfy the eigenvalue equation. Various methods of scaling the eigenvector are used:
Engineering normalization – The vector is scaled so that the largest absolute value of any term in the eigenvector is unity. This is useful for inspecting printed tables of eigenvectors.
Mathematical normalization – The vector is scaled so that the diagonal modal mass matrix is the unit matrix. The diagonal modal stiffness matrix is the system eigenvalues. This is useful for response calculations.
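Both scalings can be sketched directly (the mass matrix and eigenvector values here are illustrative):

```python
import numpy as np

def engineering_normalize(phi):
    """Scale so the largest absolute term in the eigenvector is unity."""
    return phi / np.max(np.abs(phi))

def mass_normalize(phi, M):
    """Scale so that phi^T M phi = 1, making the modal mass matrix the unit matrix."""
    return phi / np.sqrt(phi @ M @ phi)

M = np.diag([2.0, 3.0])          # illustrative mass matrix
phi = np.array([1.0, -0.5])      # unscaled eigenvector
```

After mass normalization the modal stiffness for each mode equals its eigenvalue.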
Internal forces and external forces must balance. At the infinitesimal level the stresses and the body forces must balance. The equations of equilibrium define these force balance conditions.
Most of the current finite elements used for structural analysis are defined by assuming displacement variations over the element. An alternative approach assumes the stress variation over the element. This leads to equilibrium finite elements.
Equivalent material properties are defined where real material properties are smeared over the volume of the element. Typically, for composite materials the discrete fiber and matrix material properties are smeared to give average equivalent material properties.
A three dimensional solid has six stress components. If material properties have been found experimentally by a uniaxial stress test then the real stress system is related to this by combining the six stress components to a single equivalent stress. There are various forms of equivalent stress for different situations. Common ones are Tresca, von Mises, Mohr-Coulomb and Drucker-Prager.

A random process where any one-sample record has the same characteristics as any other record.
For non-linear large deflection problems the equations can be defined in various ways. If the material is flowing through a fixed grid the equations are defined in Eulerian coordinates. Here the volume of the element is constant but the mass in the element can change. If the grid moves with the body then the equations are defined in Lagrangian coordinates. Here the mass in the element is fixed but the volume changes.
Solutions that satisfy the differential equations and the associated boundary conditions exactly. There are very few such solutions and they are for relatively simple geometries and loadings.
These are methods for integrating equations of motion. Explicit methods can deal with highly non-linear systems but need small steps. Implicit methods can deal with mildly nonlinear problems but with large steps.
The process of estimating a value of a variable from a tabulated set of values. For interpolation values inside the table are estimated. For extrapolation values outside the table are estimated. Interpolation is generally accurate and extrapolation is only accurate for values slightly outside the table. It becomes very inaccurate for other cases.
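The difference in reliability between interpolation and extrapolation can be seen on a small tabulated function:

```python
import numpy as np

# Tabulated values of y = x**2 at integer points
x_tab = np.array([0.0, 1.0, 2.0, 3.0])
y_tab = x_tab**2

# Interpolating inside the table: linear estimate of 1.5**2 (true value 2.25)
y_interp = np.interp(1.5, x_tab, y_tab)   # gives 2.5, a modest error

# np.interp does not extrapolate: outside the table it clamps to the end
# values, which is one way of avoiding the large errors of extrapolation
```

A finer table spacing would reduce the interpolation error further.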


If a curved line or surface is modeled by straight lines or flat surfaces then the modeling is said to produce a faceted geometry.
A method for calculating Fourier transforms that is computationally very efficient.
Problems that can be defined by a set of partial differential equations are field problems. Any such problem can be solved approximately by the finite element method.
A numerical method for solving partial differential equations by expressing them in a difference form rather than an integral form. Finite difference methods are very similar to finite element methods and in some cases are identical.
The process of setting up a model for analysis, typically involving graphical generation of the model geometry, meshing it into finite elements, defining material properties, and applying loads and boundary conditions.
A technique related to the finite element method. The equations are integrated approximately using the weighted residual method, but a different form of weighting function is used from that in the finite element method. For the finite element method the Galerkin form of the weighted residual method is used.


All degrees of freedom are restrained for this condition. The nodes on the fixed boundary cannot move in translation or rotation.
The conventional form of the finite element treats the displacements as unknowns, which leads to a stiffness matrix form. Alternative methods treating the stresses (internal forces) as unknowns lead to force methods with an associated flexibility matrix. The inverse of the stiffness matrix is the flexibility matrix.
The dynamic motion results from a time varying forcing function.
The dynamic forces that are applied to the system.
Functions that repeat themselves in a regular manner can be expanded in terms of a Fourier series.
A method for finding the frequency content of a time varying signal. If the signal is periodic it gives the same result as the Fourier series.
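The frequency content of a sampled signal can be found with an FFT; a minimal sketch on an illustrative 5 Hz sinusoid:

```python
import numpy as np

# Sample a 5 Hz sinusoid for one second at 64 samples/s
fs = 64
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 5 * t)

spectrum = np.abs(np.fft.rfft(signal))    # magnitude of each frequency line
freqs = np.fft.rfftfreq(fs, d=1 / fs)     # frequency of each line in Hz
peak = freqs[np.argmax(spectrum)]         # dominant frequency content: 5.0 Hz
```

Because the sinusoid is exactly periodic over the sample window, all of its energy lands in a single frequency line.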
The Fourier transform and its inverse which, together, allow the complete system to be transformed freely in either direction between the time domain and the frequency domain.
If a structure is idealized as a series of interconnected line elements then this forms a framework analysis model. If the connections between the line elements are pins then it is a pin-jointed framework analysis. If the joints are rigid then the lines must be beam elements.
The dynamic motion which results from specified initial conditions. The forcing function is zero.
The structure's forcing function and the consequent response are defined in terms of their frequency content. The inverse Fourier transform of the frequency domain gives the corresponding quantity in the time domain.
A form of solving the finite element equations using Gauss elimination that is very efficient for the finite element form of equations.





Finite Element Analysis FEA Terms and Definitions (A to Z) Part-1

(A to Z) of Finite Element Analysis


The second time derivative of the displacement (the first time derivative of the velocity).
An adaptive finite element solver iteratively performs finite element analysis, determines the areas of the mesh where the solution is not sufficiently accurate and refines the mesh in those areas until the solution obtains the prescribed degree of accuracy. Adaptive Meshing involves automatically improving the mesh where necessary to meet specified convergence criteria.
The eigenvalue problem when written in the form (K − λM)φ = 0, that is, stiffness times mode shape minus eigenvalue times mass times mode shape equals zero. It is the form that arises naturally from a discrete parameter model in free vibration.
The ratio of the longest to shortest side lengths on an element.
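The aspect ratio of a polygonal element follows directly from its side lengths; a small sketch on an illustrative rectangle:

```python
import numpy as np

def aspect_ratio(nodes):
    """Ratio of the longest to shortest side length of a polygonal element."""
    nodes = np.asarray(nodes, dtype=float)
    sides = np.linalg.norm(np.roll(nodes, -1, axis=0) - nodes, axis=1)
    return sides.max() / sides.min()

# A 4x1 rectangular element has aspect ratio 4; values near 1 are preferred
quad = [[0.0, 0.0], [4.0, 0.0], [4.0, 1.0], [0.0, 1.0]]
```

High aspect ratios are a common warning sign of a poorly shaped mesh.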
Geometric: Two or more parts mated together.
FEA: The process of assembling the element matrices together to form the global matrix.
Typically element stiffness matrices are assembled to form the complete stiffness matrix of the structure.
The process of generating a mesh of elements over the volume that is being analyzed. There are two forms of automatic mesh generation:
Free meshing – Where the mesh has no structure to it. Free meshing generally uses triangular and tetrahedral elements.
Mapped meshing – Where large regions, if not all, of the volume are covered with regular meshes. This can use any form of element. Free meshing can be used to fill any shape; mapped meshing can only be used on some shapes without elements being excessively distorted.
If a shape can be defined by rotating a cross-section about a line (e.g. a cone) then it is said to be axisymmetric. This can be used to simplify the analysis of the system. Such models are sometimes called two and a half dimensional since a 2D cross-section represents a 3D body.



The set of Gauss integration points that give the best estimates of the stress for an element. For triangles and tetrahedra these are the full Gauss integration points. For quadrilateral and brick elements they are the reduced Gauss points.
When an element is being constructed it is derived from a simple regular shape in non-dimensional coordinates. The coordinates used to define the simple shape form the basis space. In its basis space a general quadrilateral is a 2×2 square and a general triangle is an isosceles triangle with unit side lengths.

A line element that has both translational and rotational degrees of freedom. It represents both membrane and bending actions.
Bending behavior is where the strains vary linearly from the centerline of a beam or center surface of a plate or shell. There is zero strain on the centerline for pure bending. Plane sections are assumed to remain plane. If the stresses are constant normal to the centerline then this is called membrane behavior.
A compressive and/or tensile stress resulting from the application of a nonaxial force to a structural member.
Mechanical loadings within the interior of the volume, typically inertia loadings in a stiffness analysis.
A method of solving differential equations by taking exact solutions to the field equations loaded by a point source and then finding the strengths of sources distributed around the boundary of the body required to satisfy the boundary conditions on the body.
Element shape functions that are zero along the edges of the element. They are non-zero within the interior of the element.
The situation where the elastic stiffness of the structure is cancelled by the effects of compressive stress within the structure. If the effect of this causes the structure to suddenly displace a large amount in a direction normal to the load direction then it is classical bifurcation buckling. If there is a sudden large movement in the direction of the loading it is snap-through buckling.



A method for numerically integrating second order dynamic equations of motion. It is widely used as a technique for solving non-linear dynamic problems.
Same as the eigenvalue.
Same as the eigenvector.
A method of solving a set of simultaneous equations that is especially well suited to the finite element method. It is sometimes called a skyline solution. The profile of the matrix can be optimized by using a renumbering scheme.
The system parameter relating force to velocity.
An nx1 matrix written as a vertical string of numbers. It is the transpose of a row vector.

Compatibility is satisfied if a field variable, typically the structural displacement, which is continuous before loading is continuous after loading. For linear problems the equations of compatibility must be satisfied. Non-linearity in, or non-satisfaction of, the compatibility equations leads to cracks and gaps in the structure. For finite element solutions compatibility of displacement is maintained within the element and across element boundaries for the most reliable forms of solution.
Compatibility of strain is satisfied if strains that are continuous before loading are continuous after.
When the functions interpolating the field variable (typically the displacements) form a complete nth order polynomial in all directions.
The eigenvectors of a damped system. For proportionally damped systems they are the same as the undamped eigenvectors. For non-proportionally damped systems with damping in all modes less than critical they are complex numbers and occur as complex conjugate pairs.
The eigenvalues of any damped system. If the damping is less than critical they will occur as complex conjugate pairs even for proportionally damped systems. The real part of the complex eigenvalue is a measure of the damping in the mode and should always be negative. The imaginary part is a measure of the resonant frequency.
A material that is made up of discrete components, typically a carbon-epoxy composite material or a glass-fiber material. Layered material and foam materials are also forms of composite materials.
A computer-based numerical study of turbulent fluid flow using approximate methods such as the finite element method, the finite difference method, the boundary element method, the finite volume method, and so on.
The reduction of the size of a problem by eliminating (condensing out) some degrees of freedom. For static condensation the elimination process is based upon static considerations alone. In more general condensation it can include other effects, typically model condensation includes both static and dynamic effects.
The ratio of the highest eigenvalue to the lowest eigenvalue of a matrix. The exponent of this number gives a measure of the number of digits required in the computation to maintain numerical accuracy. The higher the condition number the more chance of numerical error and the slower the rate of convergence for iterative solutions.
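The condition number and the digits of accuracy it costs can be computed directly; a sketch on an illustrative symmetric matrix:

```python
import numpy as np

# Condition number as the ratio of the highest to lowest eigenvalue
# of a symmetric stiffness-like matrix (values purely illustrative)
K = np.array([[1000.0, 0.0],
              [0.0,    0.1]])

lam = np.linalg.eigvalsh(K)
cond = lam.max() / lam.min()     # 10000
digits_lost = np.log10(cond)     # roughly 4 digits of precision at risk
```

A stiff spring connected to a very soft one produces exactly this kind of ill-conditioning in practice.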
Any scheme for numerically integrating dynamic equations of motion in a step-by- step form is conditionally stable if there is a maximum time step value that can be used. It is unconditionally stable (but not necessarily accurate) if any length of time step can be used.

A transformation of the coordinate system of the problem that preserves the symmetry of the system matrices.
A method for solving simultaneous equations iteratively. It is closely related to the Lanczos method for finding the first few eigenvalues and eigenvectors of a set of equations.
The displacements and forces act at the same point and in the same direction so that the sum of their products give a work quantity. If consistent displacements and forces are used the resulting stiffness and mass matrices are symmetric.
For structural analysis an element must be able to reproduce a state of constant stress and strain under a suitable loading to ensure that it will converge to the correct solution. This is tested for using the patch test.
The equations defining the material behavior for an infinitesimal volume of material. For structures these are the stress-strain laws and include Hooke's law for elasticity and the Prandtl-Reuss equations for plasticity.
If one group of variables can be defined in terms of another group then the relationship between the two are constraint equations. Typically the displacements on the face of an element can be constrained to remain plane but the plane itself can move.
Known values of, or relationships between, the displacements in the coordinate system.
A contact problem occurs when two bodies that are originally apart can come together, or two bodies that are originally connected can separate.
The system mass is distributed between the degrees of freedom. The mass matrix is not diagonal.
The model is defined in terms of partial differential equations rather than in finite degree of freedom matrix form.
A graphical representation of the variation of a field variable over a surface, such as stress, displacement, or temperature. A contour line is a line of constant value for the variable. A contour band is an area of a single color for values of the variable within two limit values.
For a structural finite element to converge as the mesh is refined it must be able to represent a state of constant stress and strain free rigid body movements exactly. There are equivalent requirements for other problem types.

The integral relating the dynamic displacement response of the structure at any time t to the forces applied before this time.
The set of displacements used to define the degrees of freedom of the system.
A force and a displacement are said to correspond if they act at the same point and in the same direction. Forces and translational displacements can correspond as can moments and rotations. Corresponding forces and displacements can be multiplied together to give a work quantity. Using corresponding forces and displacements will always lead to a symmetric stiffness matrix.
An element that includes special functions to model the stress field at the tip of a crack. This is commonly achieved by using quadratic elements with mid side nodes at the quarter chord points.
The process by which a crack can propagate through a structure. It is commonly assumed that a crack initiates when a critical value of stress or strain is reached and it propagates if it can release more than a critical amount of energy by the crack opening.
A method for numerically integrating first order dynamic equations of motion. It is widely used as a technique for solving thermal transient problems.
This is a material property defining the minimum energy that a propagating crack must release in order for it to propagate. Three critical energies, or modes of crack propagation, have been identified. Mode 1 is the two surfaces of the crack moving apart. Mode 2 is where the two surfaces slide from front to back. Mode 3 is where the two surfaces slide sideways.
The dividing line between underdamped and overdamped systems, where the damping value in the equation of motion equals the critical damping.
A generalization of axisymmetry. The structure is composed of a series of identical sectors that are arranged circumferentially to form a ring. A turbine disc with blades attached is a typical example.




Cooling Load Calculations and Principles in HVAC – Part 3

Design Information

To calculate the space cooling load, detailed building information, location, site and weather data, internal design information and operating schedules are required. Information regarding the outdoor design conditions and desired indoor conditions are the starting point for the load calculation and is discussed below.

Outdoor Design Weather Conditions

ASHRAE Handbook 1993 Fundamentals (Chapter 26) lists tables of climate conditions for the US, Canada and other international locations. In these tables:

The information provided in tables 1a, 2a and 3a is for heating design conditions that include:

  1. Dry bulb temperatures corresponding to 99.6% and 99% annual cumulative frequency of occurrence,
  2. Wind speeds corresponding to 1%, 2.5% and 5% annual cumulative frequency of occurrence,
  3. Wind direction most frequently occurring with 99.6% and 0.4% dry-bulb temperatures and
  4. Average of annual extreme maximum and minimum dry-bulb temperatures and standard deviations.

The information provided in tables 1b, 2b and 3b is for cooling and humidity control conditions that include:

  1. Dry bulb temperature corresponding to 0.4%, 1.0% and 2.0% annual cumulative frequency of occurrence and the mean coincident wet-bulb temperature (warm). These conditions appear in sets of dry bulb (DB) temperature and the mean coincident wet bulb (MWB) temperature since both values are needed to determine the sensible and latent (dehumidification) loads in the cooling mode.
  2. Wet-bulb temperature corresponding to 0.4%, 1.0% and 2.0% annual cumulative frequency of occurrence and the mean coincident dry-bulb temperature
  3. Dew-point temperature corresponding to 0.4%, 1.0% and 2.0% annual cumulative frequency of occurrence and the mean coincident dry-bulb temperature and humidity ratio (calculated for the dew-point temperature at the standard atmospheric pressure at the elevation of the station).
  4. Mean daily range (DR) of the dry bulb temperature, which is the mean of the temperature difference between daily maximum and minimum temperatures for the warmest month (highest average dry-bulb temperature). These are used to correct CLTD values.

In choosing the HVAC outdoor design conditions, it is neither economical nor practical to design equipment either for the annual hottest temperature or the annual minimum temperature, since the peak or the lowest temperatures may occur only for a few hours over the span of several years. Economically speaking, short-duration peaks above the system capacity might be tolerated at significant reductions in first cost; this is a simple risk-benefit decision for each building design.

Therefore, as a practice, the ‘design temperature and humidity’ conditions are based on frequency of occurrence. The summer design conditions have been presented for annual percentile values of 0.4, 1 and 2%, and winter month conditions are based on annual percentiles of 99.6 and 99%. The term “design condition” refers to the percentage of time in a year (8760 hours) for which the stated values of dry-bulb, dew-point and wet-bulb temperature are exceeded. The 0.4%, 1.0%, 2.0% and 5.0% values are exceeded on average for 35, 88, 175 and 438 hours per year, respectively.
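The hour counts quoted above follow directly from the percentiles; a quick arithmetic check:

```python
# Annual hours corresponding to each design-condition percentile
HOURS_PER_YEAR = 8760

def exceedance_hours(percentile):
    """Average hours per year for which the design value is exceeded."""
    return HOURS_PER_YEAR * percentile / 100.0

hours = {p: round(exceedance_hours(p)) for p in (0.4, 1.0, 2.0, 5.0)}
# {0.4: 35, 1.0: 88, 2.0: 175, 5.0: 438}, matching the values in the text
```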

The 99% and 99.6% cold values are defined in the same way but are viewed as the values for which the corresponding weather element is less than the design condition for 88 and 35 hours per year, respectively. The 99.6% value means that the outdoor temperature is equal to or lower than the design value for only 0.4% of the time.

Design conditions are used to calculate the maximum heat gain and maximum heat loss of the building. For comfort cooling, use of the 2.5% occurrence is recommended, and for heating, the 99% values. The 2.5% design condition means that the outside summer temperature and coincident air moisture content will be exceeded for only 2.5% of the hours from June to September, i.e. 73 out of the 2928 hours of these summer months.

Note, in energy use calculations, hour-by-hour outdoor climate data of a design day should be adopted instead of summer and winter design values.

Indoor Design Conditions and Thermal Comfort

The indoor design conditions are directly related to human comfort. Current comfort standards, ASHRAE Standard 55-1992 and ISO Standard 7730, specify a “comfort zone,” representing the optimal range and combinations of thermal factors (air temperature, radiant temperature, air velocity, humidity) and personal factors (clothing and activity level) with which at least 80% of the building occupants are expected to express satisfaction. The environmental factors that affect the thermal comfort of the occupants in an air-conditioned space are mainly:

  1. Metabolic rate, expressed in met (1 met = 18.46 Btu/hr.ft²), which determines the amount of heat that must be released from the human body; it depends mainly on the intensity of the physical activity.
  2. Indoor air temperature (Tr) and mean radiant temperature (Trad), both in °F. Tr affects both the sensible heat exchange and evaporative losses, and Trad affects only sensible heat exchange.
  3. Relative humidity of the indoor air in %, which is the primary factor that influences evaporative heat loss.
  4. Air velocity of the indoor air in fpm, which affects the heat transfer coefficients and therefore the sensible heat exchange and evaporative loss.
  5. Clothing insulation in clo (1 clo = 0.88 h.ft².°F/Btu), affects the sensible heat loss. Clothing insulation for occupants is typically 0.6 clo in summer and 0.8 to 1.2 clo in winter.

For comfort air-conditioning systems, according to ANSI/ASHRAE Standard 55-1992 and ASHRAE/IES Standard 90.1-1989, the following indoor design temperatures and air velocities apply for conditioned spaces where the occupant’s activity level is 1.2 met, indoor space relative humidity is 50% (in summer only), and Tr = Trad:


If a suit jacket is worn by occupants during summer, the summer indoor design temperature should be dropped to 74 to 75°F.

The recommended indoor relative humidity, in %, is:


The Psychrometric chapter of the Fundamentals Handbook (Chapter 6, 2001) provides more details on this aspect. The load calculations are usually based on 75°F dry bulb temperature and 50% relative humidity.

Indoor Air Quality and Outdoor Air Requirements

According to the National Institute for Occupational Safety and Health (NIOSH), 1989, the leading cause of indoor air quality complaints in buildings is inadequate outdoor ventilation air. There are three basic means of improving indoor air quality: (1) eliminate or reduce the source of air pollution, (2) enhance the efficiency of air filtration, and (3) increase the ventilation (outdoor) air intake.

Abridged outdoor air requirements listed in ANSI/ASHRAE Standard 62-1989 are as follows:


These ventilation requirements are based on the analysis of dilution of CO2 as the representative human bio-effluent. As per ASHRAE Standard 62-1999, comfort criteria with respect to human bio-effluents are likely to be satisfied if the indoor carbon dioxide concentration remains no more than 700 ppm above the outdoor air carbon dioxide concentration.

Refer to ANSI/ASHRAE Standard 62-1999 for details.
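The 700 ppm criterion follows from a steady-state dilution balance, Vo = G / (Ci − Co). A hedged sketch, assuming a CO2 generation rate of about 0.011 cfm per person at roughly 1.2 met activity (an illustrative figure, not quoted from the standard):

```python
# Steady-state CO2 dilution: outdoor air Vo = G / (Ci - Co)
# ASSUMPTION: G = 0.011 cfm of CO2 per person is an illustrative
# value for ~1.2 met activity, not taken from the standard.

def outdoor_air_cfm_per_person(g_co2_cfm=0.011, delta_ppm=700):
    """Outdoor air (cfm/person) needed to hold the indoor-outdoor CO2
    difference at delta_ppm, assuming steady state and complete mixing."""
    return g_co2_cfm / (delta_ppm * 1e-6)

v = outdoor_air_cfm_per_person()   # about 15.7 cfm per person
```

This is consistent in magnitude with the roughly 15 to 20 cfm/person rates tabulated for many occupancies in Standard 62.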

Building Pressurization

The outdoor air requirements are sometimes governed by the building pressurization needs. Most air-conditioning systems are designed to maintain a slightly higher pressure than the surroundings, a positive pressure, to prevent or reduce infiltration and untreated air entering the space directly. For laboratories, restrooms, or workshops where toxic, hazardous, or objectionable gases or contaminants are produced, a slightly lower pressure than the surroundings, a negative pressure, should be maintained to prevent or reduce the diffusion of these contaminants to the surrounding area.

For comfort air-conditioning systems, the recommended pressure differential between the indoor and outdoor air is 0.02 to 0.05 in-WG. WG indicates the pressure at the bottom of a top-opened water column of specific inches of height; 1 in-WG = 0.03612 psig.
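Using the conversion given above (1 in-WG = 0.03612 psi), the recommended pressurization band can be expressed in psi:

```python
# Pressure-unit conversion from the definition above: 1 in-WG = 0.03612 psi.

def inwg_to_psi(inwg):
    """Convert inches of water gauge to psi."""
    return inwg * 0.03612

# Recommended comfort-system pressurization band in psi:
low, high = inwg_to_psi(0.02), inwg_to_psi(0.05)   # ~0.0007 to ~0.0018 psi
```

The small magnitudes show why building pressurization is expressed in inches of water rather than psi.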


Building Characteristics

To calculate space heat gain, the following information on the building envelope is required:

  1. Architectural plans, sections and elevations – for estimating building dimensions/area/volume
  2. Building orientation (N, S, E, W, NE, SE, SW, NW, etc.), location, etc.
  3. External/Internal shading, ground reflectance etc.
  4. Materials of construction for external walls, roofs, windows, doors, internal walls, partitions, ceiling, insulating materials and thicknesses, external wall and roof colors – select and/or compute U-values for walls, roof, windows, doors, partitions, etc. Check if the structure is insulated and/or exposed to high wind.
  5. Amount of glass, type and shading on windows

Operating Schedules

Obtain the schedule of occupants, lighting, equipment, appliances, and processes that contribute to the internal loads and determine whether air conditioning equipment will be operated continuously or intermittently (such as, shut down during off periods, night set-back, and weekend shutdown). Gather the following information:

  • Lighting requirements, types of lighting fixtures
  • Appliance requirements such as computers, printers, fax machines, water coolers, refrigerators, microwaves, miscellaneous electrical panels, cables, etc.
  • Heat released by the HVAC equipment.
  • Number of occupants, time of building occupancy and type of building occupancy

Design cooling load takes into account all the loads experienced by a building under a specific set of assumed conditions. The assumptions behind design cooling load are as follows:

  1. Weather conditions are selected from a long-term statistical database. The conditions will not necessarily represent any actual year, but are representative of the location of the building. ASHRAE has tabulated such data.
  2. The solar loads on the building are assumed to be those that would occur on a clear day in the month chosen for the calculations.
  3. The building occupancy is assumed to be at full design capacity.
  4. The ventilation rates are either assumed on the basis of air changes or based on the maximum expected occupancy.
  5. All building equipment and appliances are considered to be operating at a reasonably representative capacity.
  6. Lights and appliances are assumed to be operating as expected for a typical day of design occupancy.
  7. Latent as well as sensible loads are considered.
  8. Heat flow is analyzed assuming dynamic conditions, which means that heat storage in building envelope and interior materials is considered.
  9. The latent heat gain is assumed to become cooling load instantly, whereas the sensible heat gain is partially delayed depending on the characteristics of the conditioned space. According to the ASHRAE regulations, the sensible heat gain from people is assumed to be 30% convective (an instantaneous cooling load) and 70% radiant (the delayed portion).
  10. Peak load calculations evaluate the maximum load to size and select the refrigeration equipment. The energy analysis program compares the total energy use in a certain period with various alternatives in order to determine the optimum one.
  11. Space (zone) cooling load is used to calculate the supply volume flow rate and to determine the size of the air system, ducts, terminals, and diffusers. The coil load is used to determine the size of the cooling coil and the refrigeration system. Space cooling load is a component of the cooling coil load.
  12. The heat transfer due to ventilation is not a load on the building but a load on the system.

Thermal Zoning

Thermal zoning is a method of designing and controlling the HVAC system so that occupied areas can be maintained at a different temperature than unoccupied areas using independent setback thermostats. A zone is defined as a space or group of spaces in a building having similar heating and cooling requirements throughout its occupied area so that comfort conditions may be controlled by a single thermostat.

When doing the cooling load calculations, always divide the building into zones, and estimate both the building peak load and the individual zone loads. The building peak load is used for sizing the refrigeration capacity, and the individual zone loads are used to estimate the airflow rates (air-handling unit capacity).
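For sizing the zone airflow rates mentioned above, the standard-air sensible heat relation CFM = Q / (1.08 × ΔT) is commonly used. A minimal sketch; the 24,000 Btu/h load and 75/55°F temperatures are made-up example values:

```python
# Zone supply airflow from the sensible load: CFM = Q / (1.08 * dT)
# The 1.08 factor = 0.075 lb/ft^3 air density x 0.24 Btu/lb.degF
# specific heat x 60 min/h (standard air).

def supply_cfm(q_sensible_btuh, room_f, supply_f):
    """Supply airflow (cfm) required to remove a sensible load at the
    given room-to-supply air temperature difference."""
    return q_sensible_btuh / (1.08 * (room_f - supply_f))

# ASSUMPTION: 24,000 Btu/h sensible zone load, 75 F room air,
# 55 F supply air -- illustrative numbers only.
cfm = supply_cfm(24000, 75, 55)    # about 1111 cfm
```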

In practice the corner rooms and the perimetric spaces of the building have variations in load as compared to the interior core areas. The following facts may be noted:

  • The buildings are usually divided into two major zones.
    • Exterior Zone: The area inward from the outside wall (usually 12 to 18 feet, if rooms do not line the outside wall). The exterior zone is directly affected by outdoor conditions during summer and winter.
    • Interior Zone: The area contained by the exterior zone. The interior zone is only slightly affected by outdoor conditions and usually has a uniform cooling load.
  • Single-zone models shall be limited to open floor plans with perimeter walls not exceeding 40 feet in length.
  • For large building footprints, assume a minimum of five zones per floor: one zone for each exposure (north, south, east & west) and an interior zone.


Previous Blogs


Introduction to Finite Element Method/Finite Element Analysis (FEM/FEA)

The Finite Element Method (FEM) is a numerical technique used to perform Finite Element Analysis (FEA) of any given physical phenomenon.


The laws of physics for space- and time-dependent problems are usually expressed in terms of partial differential equations (PDEs). For the vast majority of geometries and problems, these PDEs cannot be solved with analytical methods. Instead, an approximation of the equations can be constructed, typically based upon different types of discretizations. These discretization methods approximate the PDEs with numerical model equations, which can be solved using numerical methods. The solution to the numerical model equations is, in turn, an approximation of the real solution to the PDEs. The finite element method (FEM) is used to compute such approximations.

Basic Concepts

The finite element method (FEM), or finite element analysis (FEA), is based on the idea of building a complicated object from simple blocks, or of dividing a complicated object into small and manageable pieces. Applications of this simple idea can be found everywhere in everyday life as well as in engineering.

· Lego (kids’ play)
· Buildings
· Approximation of the area of a circle:
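The circle-area illustration can be sketched in a few lines of Python: treating an inscribed regular polygon as N triangular "elements", the approximate area converges to πR² as the "mesh" is refined:

```python
# Approximate the area of a circle of radius R by N triangular
# "elements" (an inscribed regular N-gon). As N grows, the total
# area converges to pi * R^2 -- the basic idea behind FEM.
import math

def polygon_area(n, r=1.0):
    """Area of an inscribed regular n-gon: n identical triangles,
    each with apex angle 2*pi/n at the circle's center."""
    return 0.5 * n * r * r * math.sin(2.0 * math.pi / n)

for n in (6, 24, 96):
    print(n, polygon_area(n))   # approaches math.pi as n increases
```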

Why Finite Element Method?

· Design analysis: hand calculations, experiments, and computer simulations.
· FEM/FEA is the most widely applied computer simulation method in engineering.
· Closely integrated with CAD/CAM applications.

Applications of FEM in Engineering

· Mechanical/Aerospace/Civil/Automobile Engineering
· Structure analysis (static/dynamic, linear/nonlinear)
· Thermal/fluid flows
· Electromagnetics
· Geomechanics
· Biomechanics

FEM in Structural Analysis

· Divide structure into pieces (elements with nodes)
· Describe the behavior of the physical quantities on each element
· Connect (assemble) the elements at the nodes to form an approximate system of equations for the whole structure
· Solve the system of equations involving unknown quantities at the nodes (e.g., displacements)
· Calculate desired quantities (e.g., strains and stresses) at selected elements

Computer Implementations

· Preprocessing (build FE model, loads and constraints)
· FEA solver (assemble and solve the system of equations)
· Postprocessing (sort and display the results)

Available Commercial FEM Software Packages

· ANSYS (General purpose, PC and workstations)
· SDRC/I-DEAS (Complete CAD/CAM/CAE package)
· NASTRAN (General purpose FEA on mainframes)
· ABAQUS (Nonlinear and dynamic analyses)
· COSMOS (General purpose FEA)
· ALGOR (PC and workstations)
· PATRAN (Pre/Post Processor)
· HyperMesh (Pre/Post Processor)
· Dyna-3D (Crash/impact analysis)

For a step-by-step walkthrough of the FEM process with the help of an example, click on the link below

Discretization Approaches used in Computational Fluid Dynamics


Types of Finite Elements

We only consider linear problems in this introduction.

Consider the equilibrium of forces for a spring element of stiffness k connecting nodes i and j. At node i, we have fi = k(ui - uj); at node j, fj = k(uj - ui). In matrix form,

k [  1  -1 ] { ui }   { fi }
  [ -1   1 ] { uj } = { fj }

or k u = f, where

k = (element) stiffness matrix
u = (element nodal) displacement vector
f = (element nodal) force vector

Note that k is symmetric. Is k singular or non-singular? That is, can we solve the equation? No: k is singular, because a rigid-body translation (ui = uj) produces no force. A boundary condition must remove the rigid-body motion before the equation can be solved.

K is the stiffness matrix (structure matrix) for the spring system.
An alternative way of assembling the whole stiffness matrix: "enlarging" the stiffness matrices for elements 1 and 2 to the full system size (nodes 1, 2, 3), we have

k1 [  1  -1  0 ]        k2 [ 0   0   0 ]
   [ -1   1  0 ]   and     [ 0   1  -1 ]
   [  0   0  0 ]           [ 0  -1   1 ]

Adding the two matrix equations (superposition), we have

[  k1     -k1      0  ] { u1 }   { F1 }
[ -k1   k1+k2    -k2  ] { u2 } = { F2 }
[  0     -k2      k2  ] { u3 }   { F3 }

This is the same equation we derived by using the force equilibrium concept.
Boundary and load conditions:
Assuming u1 = 0 and F2 = F3 = P, the reduced system of equations is

[ k1+k2   -k2 ] { u2 }   { P }
[  -k2     k2 ] { u3 } = { P }

Solving the equations, we obtain the displacements

u2 = 2P/k1,   u3 = 2P/k1 + P/k2

and the reaction force

F1 = -2P
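The two-spring example can be verified with a short direct-stiffness sketch in pure Python; the numeric values k1 = 100, k2 = 200 and P = 500 are made up for illustration:

```python
# Direct stiffness method for the two-spring chain (nodes 1-2-3,
# springs k1 and k2, boundary condition u1 = 0, loads F2 = F3 = P).

def solve_two_springs(k1, k2, P):
    # Assemble the global stiffness matrix K from the element
    # matrices k_e = k * [[1, -1], [-1, 1]]:
    K = [[k1, -k1, 0.0],
         [-k1, k1 + k2, -k2],
         [0.0, -k2, k2]]
    # Apply u1 = 0 by deleting row/column 1, then solve the reduced
    # 2x2 system [[k1+k2, -k2], [-k2, k2]] {u2, u3} = {P, P}:
    a, b = K[1][1], K[1][2]
    c, d = K[2][1], K[2][2]
    det = a * d - b * c           # = k1 * k2
    u2 = (d * P - b * P) / det
    u3 = (-c * P + a * P) / det
    # Recover the reaction at the fixed node from the deleted equation:
    F1 = K[0][0] * 0.0 + K[0][1] * u2 + K[0][2] * u3
    return u2, u3, F1

u2, u3, F1 = solve_two_springs(k1=100.0, k2=200.0, P=500.0)
# u2 = 10.0, u3 = 12.5, F1 = -1000.0 (i.e. -2P, balancing F2 + F3)
```

The reaction F1 = -2P confirms overall force balance, one of the result checks listed below.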

Checking the Results
· Deformed shape of the structure
· Balance of the external forces
· Order of magnitudes of the numbers
Notes About the Spring Elements
· Suitable for stiffness analysis
· Not suitable for stress analysis of the spring itself
· Can have spring elements with stiffness in the lateral direction, spring elements for torsion, etc.




Lecture Notes: Introduction to Finite Element Method by Yijun Liu, University of Cincinnati

Plate with a hole analysis: FEA Basics




Why Do Projects Fail?


Organizations perform two kinds of work: operational work and projects. Due to the repetitive nature of operational work, it is easier to systematize processes. However, because projects have finite start and end dates, are unique in nature, and involve mixed team players, they are more difficult to systematize and to develop sound methodologies and processes for.

Project Management Institute, Inc. (PMI)

There are many causes of project failure, and every failed project will have its own set of issues. Sometimes it is a single trigger event that leads to failure, but more often than not, it is a complex entwined set of problems that combine and cumulatively result in failure. Generally these issues fall into two categories: things the team did (but did poorly) and things the team failed to do.




According to a survey carried out by the International Project Leadership Academy, the following list documents 101 of the most common mistakes that lead to, or contribute to, the failure of projects:

Goal and vision

  1. Failure to understand the why behind the what results in a project delivering something that fails to meet the real needs of the organization (i.e. failure to ask or answer the question “what are we really trying to achieve?”)
  2. Failure to document the “why” into a succinct and clear vision that can be used to communicate the project’s goal to the organization and as a focal point for planning
  3. Project objectives are misaligned with the overall business goals and strategy of the organization as a whole (e.g. Sponsor has their own private agenda that is not aligned with the organization’s stated goals)
  4. Project defines its vision and goals, but the document is put on a shelf and never used as a guide for subsequent decision making
  5. Lack of coordination between multiple projects spread throughout the organization results in different projects being misaligned or potentially in conflict with each other.



Leadership and governance

  1. Failure to establish a governance structure appropriate to the needs of the project (classic mistake award winner)
  2. Appointing a Sponsor who fails to take ownership of the project seriously or who feels that the Project Manager is the only person responsible for making the project a success
  3. Appointing a Sponsor who lacks the experience, seniority, time or training to perform the role effectively
  4. Failure to establish effective leadership in one or more of the three leadership domains i.e. business, technical and organizational
  5. The Project Manager lacks the interpersonal or organizational skills to bring people together and make things happen
  6. Failure to find the right level of project oversight (e.g. either the Project Manager micromanages the project causing the team to become de-motivated or they fail to track things sufficiently closely allowing the project to run out of control).


Stakeholder engagement issues

  1. Failure to identify or engage the stakeholders (classic mistake award winner)
  2. Failing to view the project through the eyes of the stakeholders results in a failure to appreciate how the project will impact the stakeholders or how they will react to the project
  3. Imposing a solution or decision on stakeholders and failing to get their buy-in
  4. Allowing one stakeholder group to dominate the project while ignoring the needs of other less vocal groups
  5. Failure to include appropriate “change management” type activities into the scope of the project to ensure stakeholders are able to transition from old ways of working to the new ways introduced by the project
  6. Failure to establish effective communications between individuals, groups or organizations involved in the project (classic mistake award winner).



Team issues

  1. Lack of clear roles and responsibilities result in confusion, errors and omissions
  2. There are insufficient team members to complete the work that has been committed to
  3. Projects are done “off the side of the desk” (i.e. team members are expected to perform full time operational jobs while also meeting project milestones)
  4. The team lacks the Subject Matter Expertise needed to complete the project successfully
  5. Selecting the first available person to fill a role rather than waiting for the person who is best qualified
  6. Failure to provide team with appropriate training in either the technology in use, the processes the team will be using or the business domain in which the system will function
  7. Lack of feedback processes allows discontent in the team to simmer under the surface
  8. The Project Manager’s failure to address poor team dynamics or obvious non-performance of an individual team member results in the rest of the team becoming disengaged
  9. Practices that undermine team motivation
  10. Pushing a team that is already exhausted into doing even more overtime
  11. Adding more resources to an already late project causes additional strain on the leadership team, resulting in even lower team performance (Brooks' law).


Requirements Issues

  1. Lack of formality in the scope definition process results in vagueness and different people having different understandings of what is in and what is out of scope
  2. Vague or open ended requirements (such as requirements that end with “etc”)
  3. Failure to address excessive scope volatility or uncontrolled scope creep (classic mistake award winner)
  4. Failure to fully understand the operational context in which the product being produced needs to function once the project is over (classic mistake award winner)
  5. Requirements are defined by an intermediary without directly consulting or involving those who will eventually use the product being produced (see also lack of stakeholder engagement above)
  6. Individual requirements are never vetted against the project’s overall objectives to ensure each requirement supports the project’s objective and has a reasonable Return on Investment (ROI)
  7. The project requirements are written based on the assumption that everything will work as planned. Requirements to handle potential problems or more challenging situations that might occur are never considered
  8. Failure to broker agreement between stakeholders with differing perspectives or requirements.



Estimation

  1. Those who will actually perform the work are excluded from the estimating process
  2. Estimates are arbitrarily cut in order to secure a contract or make a project more attractive
  3. Allowing a manager, sales agent or customer to bully the team into making unrealistic commitments
  4. Estimates are provided without a corresponding statement of scope
  5. Estimation is done based on insufficient information or analysis (rapid off-the-cuff estimates become firm commitments)
  6. Commitments are made to firm estimates, rather than using a range of values that encapsulate the unknowns in the estimate
  7. The assumptions used for estimating are never documented, discussed or validated
  8. Big ticket items are estimated, but because they are less visible, the smaller scale activities (the peanut list) are omitted
  9. Estimation is done without referring back to a repository of performance data culled from prior projects
  10. Failure to build in contingency to handle unknowns
  11. Assuming a new tool, process or system being used by the team will deliver instant productivity improvements.



Planning

  1. Failure to plan – diving into the performance and execution of work without first slowing down to think
  2. The underestimation of complexity (classic mistake award winner)
  3. Working under constant and excessive schedule pressure
  4. Assuming effort estimates can be directly equated to elapsed task durations without any buffers or room for non-productive time
  5. Failure to manage management or customer expectations
  6. Planning is seen as the Project Manager’s responsibility rather than a team activity
  7. Failure to break a large scale master plan into more manageable pieces that can be delivered incrementally
  8. The team commits itself to a schedule without first getting corresponding commitments from the other groups and stakeholders who also have to commit to the schedule (aka schedule suicide)
  9. Unclear roles and responsibilities lead to confusion and gaps
  10. Some team members are allowed to become overloaded resulting in degraded performance in critical areas of the project while others are underutilized
  11. Requirements are never prioritized resulting in team focusing energies on lower priority items instead of high priority work
  12. Failure to include appropriate culture change activities as part of the project plan (classic mistake award winner)
  13. Failure to provide sufficient user training when deploying the product produced by the project into its operational environment (classic mistake award winner)
  14. Failure to build training or ramp up time into the plan
  15. Change requests are handled informally without assessing their implications or agreeing to changes in schedule and budget.


Risk management

  1. Failure to think ahead and to foresee and address potential problems (Classic mistake award winner)
  2. Risk management is seen as an independent activity rather than an integral part of the planning process
  3. Risks, problems, and issues become confused; as a result, the team isn't really doing risk management.



Architecture and design

  1. Allowing a pet idea to become the chosen solution without considering if other solutions might better meet the project’s overall goal
  2. The team starts developing individual components without first thinking through an overall architecture or how the different components will be integrated. That lack of architecture then results in duplication of effort, gaps, unexpected integration costs and other inefficiencies
  3. Failure to take into account non-functional requirements when designing a product, system or process (especially performance requirements) results in a deliverable that is operationally unusable
  4. Poor architecture results in a system that is difficult to debug and maintain
  5. Being seduced into using leading edge technology where it is not needed or inappropriate
  6. Developer “gold plating” (developers implement the Rolls Royce version of a product when a Chevy was all that was needed)
  7. Trying to solve all problems with a specific tool simply because it is well understood rather than because it is well suited to the job in hand
  8. New tools are used by the project team without providing the team with adequate training or arranging for appropriate vendor support. 



Configuration and information management

  1. Failure to maintain control over document or component versions results in confusion over which is current, compatibility problems and other issues that disrupt progress
  2. Failure to put in place appropriate tools for organizing and managing information results in a loss of key information and/or a loss of control.


Quality

  1. Quality requirements are never discussed, thereby allowing different people to have different expectations of what is being produced and the standards to be achieved
  2. Failure to plan into the project appropriate reviews, tests or checkpoints at which quality can be verified
  3. Reviews of documents and design papers focus on spelling and grammar rather than on substantive issues
  4. Quality is viewed simply in terms of testing rather than as a culture of working
  5. The team developing the project’s deliverables sees quality as the responsibility of the Quality Assurance group rather than a shared responsibility (the so called “throw it over the wall” mentality)
  6. Testing focuses on the simple test cases while ignoring the more complex situations such as error and recovery handling when things go wrong
  7. Integration and testing of the individual components created in the project is left until all development activities are complete, rather than doing ongoing incremental integration and verification to find and fix problems early
  8. Testing is done in a test environment that is configured differently from the target production or operational environment in which the project's deliverables will be used.

Project tracking and management

  1. Believing that although the team is behind schedule, they will catch up later
  2. The project plan is published but there is insufficient follow up or tracking to allow issues to be surfaced and addressed early. Those failures result in delays and other knock-on problems
  3. Bad news is glossed over when presenting to customers, managers and stakeholders (aka “Green Shifting”)
  4. Dismissing information that might show that the project is running into difficulties (i.e. falling prey to the “confirmation bias”)
  5. Schedule and budget become the driving force, as a result corners are cut and quality is compromised (pressure to mark a task as complete results in quality problems remaining undetected or being ignored)
  6. Project is tracked based on large work items rather than smaller increments
  7. Failure to monitor sub-contractor or vendor performance on a regular basis
  8. Believing that a task reported by a team member as 90% done really is 90% done (note: often the last 10% takes as long in calendar time as the first 90%)
  9. Believing that because a person was told something once (weeks or months ago), they will remember what they were asked to do and when they were supposed to do it (failure to put in place a system that ensures people are reminded of upcoming activities and commitments).

Decision making problems

  1. Key decisions (strategic, structural or architectural type decisions) are made by people who lack the subject matter expertise to be making the decision
  2. When making critical decisions expert advice is either ignored or simply never solicited
  3. Lack of “situational awareness” results in ineffective decisions being made
  4. Failure to bring closure to a critical decision results in wheel-spin and inaction over extended periods of time
  5. The team avoids the difficult decisions because some stakeholders may be unhappy with the outcome
  6. Group decisions are made at the lowest common denominator rather than facilitating group decision making towards the best possible answer
  7. Key decisions are made without identifying or considering alternatives (aka “First Option Adoption”)
  8. Decision fragments are left unanswered (parts of the who, why, when, where and how components of a decision are made, but others are never finalized) resulting in confusion
  9. Failure to establish clear ownership of decisions or the process by which key decisions will be made results in indecision and confusion.



