Introduction to Robotics
Robotics is a relatively young field of modern technology that crosses traditional engineering boundaries. Understanding the complexity of robots and their applications requires knowledge of electrical engineering, mechanical engineering, systems and industrial engineering, computer science, economics, and mathematics. New disciplines of engineering, such as manufacturing engineering, applications engineering, and knowledge engineering have emerged to deal with the complexity of the field of robotics and factory automation.
The term robot was first introduced into our vocabulary by the Czech playwright Karel Capek in his 1920 play Rossum’s Universal Robots, the word robota being the Czech word for work. Since then the term has been applied to a great variety of mechanical devices, such as teleoperators, underwater vehicles, autonomous land rovers, etc. Virtually anything that operates with some degree of autonomy, usually under computer control, has at some point been called a robot. In this text the term robot will mean a computer controlled industrial manipulator of the type shown in Figure 1.1.
This type of robot is essentially a mechanical arm operating under computer control. Such devices, though far from the robots of science fiction, are nevertheless extremely complex electro-mechanical systems whose analytical description requires advanced methods, presenting many challenging and interesting research problems.
An official definition of such a robot comes from the Robot Institute of America (RIA): A robot is a reprogrammable multifunctional manipulator designed to move material, parts, tools, or specialized devices through variable programmed motions for the performance of a variety of tasks.
The key element in the above definition is the reprogrammability of robots. It is the computer brain that gives the robot its utility and adaptability. The so-called robotics revolution is, in fact, part of the larger computer revolution.
Even this restricted version of a robot has several features that make it attractive in an industrial environment. Among the advantages often cited in favor of the introduction of robots are decreased labor costs, increased precision and productivity, increased flexibility compared with specialized machines, and more humane working conditions as dull, repetitive, or hazardous jobs are performed by robots.
The robot, as we have defined it, was born out of the marriage of two earlier technologies: teleoperators and numerically controlled milling machines. Teleoperators, or master-slave devices, were developed during the second world war to handle radioactive materials. Computer numerical control (CNC) was developed because of the high precision required in the machining of certain items, such as components of high performance aircraft. The first robots essentially combined the mechanical linkages of the teleoperator with the autonomy and programmability of CNC machines.
The first successful applications of robot manipulators generally involved some sort of material transfer, such as injection molding or stamping, where the robot merely attends a press to unload and either transfer or stack the finished parts. These first robots could be programmed to execute a sequence of movements, such as moving to a location A, closing a gripper, moving to a location B, etc., but had no external sensor capability. More complex applications, such as welding, grinding, deburring, and assembly require not only more complex motion but also some form of external sensing such as vision, tactile, or force-sensing, due to the increased interaction of the robot with its environment.
It should be pointed out that the important applications of robots are by no means limited to those industrial jobs where the robot is directly replacing a human worker. There are many other applications of robotics in areas where the use of humans is impractical or undesirable. Among these are undersea and planetary exploration, satellite retrieval and repair, the defusing of explosive devices, and work in radioactive environments. Finally, prostheses, such as artificial limbs, are themselves robotic devices requiring methods of analysis and design similar to those of industrial manipulators.
Classification of Robotic Manipulators
Robot manipulators can be classified by several criteria: their power source (the way in which the joints are actuated), their geometry or kinematic structure, their intended application area, and their method of control. Such classification is useful primarily in order to determine which robot is right for a given task. For example, a hydraulic robot would not be suitable for food handling or clean room applications. We explain this in more detail below.
Power Source. Typically, robots are either electrically, hydraulically, or pneumatically powered. Hydraulic actuators are unrivaled in their speed of response and torque producing capability. Therefore, hydraulic robots are used primarily for lifting heavy loads. The drawbacks of hydraulic robots are that they tend to leak hydraulic fluid, require much more peripheral equipment (such as pumps, which require more maintenance), and they are noisy. Robots driven by DC- or AC-servo motors are increasingly popular since they are cheaper, cleaner and quieter. Pneumatic robots are inexpensive and simple but cannot be controlled precisely. As a result, pneumatic robots are limited in their range of applications and popularity.
Application Area. Robots are often classified by application into assembly and non-assembly robots. Assembly robots tend to be small, electrically driven and either revolute or SCARA (described below) in design. The main non-assembly application areas to date have been in welding, spray painting, material handling, and machine loading and unloading.
Method of Control. Robots are classified by control method into servo and non-servo robots. The earliest robots were non-servo robots. These robots are essentially open-loop devices whose movement is limited to predetermined mechanical stops, and they are useful primarily for materials transfer. In fact, according to the definition given previously, fixed stop robots hardly qualify as robots. Servo robots use closed-loop computer control to determine their motion and are thus capable of being truly multifunctional, reprogrammable devices.
Servo controlled robots are further classified according to the method that the controller uses to guide the end-effector. The simplest type of robot in this class is the point-to-point robot. A point-to-point robot can be taught a discrete set of points but there is no control on the path of the end-effector in between taught points. Such robots are usually taught a series of points with a teach pendant. The points are then stored and played back. Point-to-point robots are severely limited in their range of applications. In continuous path robots, on the other hand, the entire path of the end-effector can be controlled. For example, the robot end-effector can be taught to follow a straight line between two points or even to follow a contour such as a welding seam. In addition, the velocity and/or acceleration of the end-effector can often be controlled. These are the most advanced robots and require the most sophisticated computer controllers and software development.
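The distinction between the two control classes can be sketched in a few lines of code. The Python fragment below uses made-up taught points and a simple straight-line interpolation rule (real controllers interpolate in joint or Cartesian space with velocity profiles); it is only meant to contrast replaying discrete taught points with generating a controlled path between them:

```python
import numpy as np

def point_to_point(taught_points):
    """Point-to-point mode: only the taught points themselves are commanded;
    the path between them is left uncontrolled."""
    return [np.asarray(p, float) for p in taught_points]

def continuous_path(taught_points, steps_per_segment=4):
    """Continuous-path mode (straight-line version): interpolate intermediate
    set-points so the end-effector tracks a line between taught points."""
    path = []
    for a, b in zip(taught_points[:-1], taught_points[1:]):
        a, b = np.asarray(a, float), np.asarray(b, float)
        for s in np.linspace(0.0, 1.0, steps_per_segment, endpoint=False):
            path.append((1.0 - s) * a + s * b)
    path.append(np.asarray(taught_points[-1], float))
    return path

A, B = (0.0, 0.0), (4.0, 2.0)        # two taught points (made up)
line = continuous_path([A, B])
print(len(line))                      # 5 set-points: 4 interpolated + final
print(line[2])                        # [2. 1.]: the set-point at s = 0.5
```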
Geometry. Most industrial manipulators at the present time have six or fewer degrees-of-freedom. These manipulators are usually classified kinematically on the basis of the first three joints of the arm, with the wrist being described separately. The majority of these manipulators fall into one of five geometric types: articulated (RRR), spherical (RRP), SCARA (RRP), cylindrical (RPP), or Cartesian (PPP).
Each of these five manipulator arms is a serial link robot. A sixth distinct class of manipulators consists of the so-called parallel robots. In a parallel manipulator the links are arranged in a closed rather than open kinematic chain.
A robot manipulator should be viewed as more than just a series of mechanical linkages. The mechanical arm is just one component in an overall Robotic System, illustrated in Figure 1.3, which consists of the arm, external power source, end-of-arm tooling, external and internal sensors, computer interface, and control computer.
Even the programmed software should be considered as an integral part of the overall system, since the manner in which the robot is programmed and controlled can have a major impact on its performance and subsequent range of applications.
Accuracy and Repeatability
The accuracy of a manipulator is a measure of how close the manipulator can come to a given point within its workspace. Repeatability is a measure of how close a manipulator can return to a previously taught point. The primary method of sensing positioning errors in most cases is with position encoders located at the joints, either on the shaft of the motor that actuates the joint or on the joint itself. There is typically no direct measurement of the end-effector position and orientation. One must rely on the assumed geometry of the manipulator and its rigidity to infer (i.e., to calculate) the end-effector position from the measured joint positions. Accuracy is affected therefore by computational errors, machining accuracy in the construction of the manipulator, flexibility effects such as the bending of the links under gravitational and other loads, gear backlash, and a host of other static and dynamic effects. It is primarily for this reason that robots are designed with extremely high rigidity. Without high rigidity, accuracy can only be improved by some sort of direct sensing of the end-effector position, such as with vision.
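A small example makes the inference step concrete. The Python sketch below assumes a hypothetical two-link planar arm with known link lengths; the position computed from the joint encoders is only as accurate as the assumed geometry, which is exactly why effects such as link bending degrade accuracy:

```python
import math

def planar_2link_fk(theta1, theta2, l1=0.5, l2=0.4):
    """Forward kinematics of a hypothetical two-link planar arm: infer the
    tip position from measured joint angles, assuming perfectly rigid links
    of known lengths l1 and l2 (illustrative values, in metres)."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Same encoder readings, but the outer link is effectively 1% shorter
# (e.g. bending under load): the inferred position is now wrong.
x_nom, y_nom = planar_2link_fk(0.3, 0.6)
x_true, y_true = planar_2link_fk(0.3, 0.6, l2=0.4 * 0.99)
err = math.hypot(x_true - x_nom, y_true - y_nom)
print(err)    # 0.004 m of accuracy error from a 4 mm geometry error
```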
Once a point is taught to the manipulator, however, say with a teach pendant, the above effects are taken into account and the proper encoder values necessary to return to the given point are stored by the controlling computer. Repeatability therefore is affected primarily by the controller resolution. Controller resolution means the smallest increment of motion that the controller can sense. The resolution is computed as the total distance traveled by the tip divided by 2^n, where n is the number of bits of encoder accuracy. In this context, linear axes, that is, prismatic joints, typically have higher resolution than revolute joints, since the straight-line distance traversed by the tip of a linear axis between two points is less than the corresponding arc length traced by the tip of a rotational link.
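As a quick numerical illustration (the 12-bit encoder and one-metre travel are assumed values, not from the text):

```python
import math

def controller_resolution(tip_travel, encoder_bits):
    """Smallest increment of tip motion: total travel divided by 2**n."""
    return tip_travel / 2 ** encoder_bits

n = 12                                                 # assumed 12-bit encoder
linear = controller_resolution(1.0, n)                 # 1 m prismatic stroke
rotary = controller_resolution(2 * math.pi * 1.0, n)   # 1 m link, full turn
print(linear, rotary)   # the rotary tip increment is 2*pi times coarser
```

For the same encoder, the revolute joint's tip sweeps an arc 2π times longer than the prismatic stroke, so its tip resolution is correspondingly coarser.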
In addition, rotational axes usually result in a large amount of kinematic and dynamic coupling among the links with a resultant accumulation of errors and a more difficult control problem. One may wonder then what the advantages of revolute joints are in manipulator design.
The answer lies primarily in the increased dexterity and compactness of revolute joint designs. For example, Figure 1.4 shows that for the same range of motion, a rotational link can be made much smaller than a link with linear motion. Thus, manipulators made from revolute joints occupy a smaller working volume than manipulators with linear axes. This increases the ability of the manipulator to work in the same space with other robots, machines, and people. At the same time revolute joint manipulators are better able to maneuver around obstacles and have a wider range of possible applications.
Wrists and End-Effectors
The joints in the kinematic chain between the arm and end effector are referred to as the wrist. The wrist joints are nearly always all revolute. It is increasingly common to design manipulators with spherical wrists, by which we mean wrists whose three joint axes intersect at a common point. The spherical wrist is represented symbolically in Figure 1.5.
The spherical wrist greatly simplifies the kinematic analysis, effectively allowing one to decouple the positioning and orientation of the end effector. Typically, therefore, the manipulator will possess three degrees-of-freedom for position, which are produced by three or more joints in the arm. The number of degrees-of-freedom for orientation will then depend on the degrees-of-freedom of the wrist. It is common to find wrists having one, two, or three degrees-of-freedom depending on the application. For example, the SCARA robot shown in Figure 1.14 has four degrees-of-freedom: three for the arm, and one for the wrist, which has only a rotation about the final z-axis.
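The decoupling idea can be sketched numerically. Under the usual convention that the tool point lies a distance d6 along the approach axis from the wrist centre (d6 = 0.1 m is an assumed tool offset, not from the text), the arm joints must place the wrist centre at the point computed below, after which the wrist joints supply the orientation:

```python
import numpy as np

def wrist_center(o_d, R_d, d6=0.1):
    """Given a desired tool position o_d and orientation R_d, return the point
    the arm joints must reach: the wrist centre, a distance d6 behind the tool
    point along the approach axis (third column of R_d)."""
    return o_d - d6 * R_d[:, 2]

o_d = np.array([0.4, 0.2, 0.5])   # desired tool position (made up)
R_d = np.eye(3)                   # tool frame aligned with the base frame
p_c = wrist_center(o_d, R_d)
print(p_c)                        # [0.4 0.2 0.4]
```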
It has been said that a robot is only as good as its hand or end-effector. The arm and wrist assemblies of a robot are used primarily for positioning the end-effector and any tool it may carry. It is the end-effector or tool that actually performs the work. The simplest type of end-effector is the gripper, which is usually capable of only two actions, opening and closing. While this is adequate for materials transfer, some parts handling, or gripping simple tools, it is not adequate for other tasks such as welding, assembly, grinding, etc. A great deal of research is therefore devoted to the design of special purpose end-effectors as well as to tools that can be rapidly changed as the task dictates. There is also much research on the development of anthropomorphic hands. Such hands have been developed both for prosthetic use and for use in manufacturing.
In theory numerical results may be obtained that are indistinguishable from the ‘exact’ solution of the transport equation when the number of computational cells is infinitely large, irrespective of the differencing method used. However, in practical calculations we can only use a finite – sometimes quite small – number of cells, and our numerical results will only be physically realistic when the discretisation scheme has certain fundamental properties.
- The finite size of the control volumes introduces numerical errors.
- To analyze these errors, numerical discretisation schemes are tested for the following three properties: conservativeness, boundedness, and transportiveness.
Integration of the convection–diffusion equation over a finite number of control volumes yields a set of discretised conservation equations involving fluxes of the transported property φ through control volume faces. To ensure conservation of φ for the whole solution domain, the flux of φ leaving a control volume across a certain face must be equal to the flux of φ entering the adjacent control volume through the same face. To achieve this the flux through a common face must be represented in a consistent manner – by one and the same expression – in adjacent control volumes.
- To ensure conservation of φ for the whole solution domain, the flux of φ leaving a control volume across a certain face must be equal to the flux of φ entering the adjacent control volume through the same face.
- For example, consider the one-dimensional steady state diffusion problem without source terms shown below:
The fluxes across the domain boundaries are denoted by qA and qB. Let us consider four control volumes and apply central differencing to calculate the diffusive flux across the cell faces. The expression for the flux leaving the element around node 2 across its west face is Γw2(φ2 − φ1)/δx and the flux entering across its east face is Γe2(φ3 − φ2)/δx. An overall flux balance may be obtained by summing the net flux through each control volume, taking into account the boundary fluxes for the control volumes around nodes 1 and 4:
Since Γe1 = Γw2, Γe2 = Γw3 and Γe3 = Γw4, the fluxes across control volume faces are expressed in a consistent manner and cancel out in pairs when summed over the entire domain. Only the two boundary fluxes qA and qB remain in the overall balance, so the above equation expresses overall conservation of the property φ. Flux consistency ensures conservation of φ over the entire domain for the central difference formulation of the diffusion flux.
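The pairwise cancellation can be verified numerically. The Python sketch below uses assumed nodal values and boundary fluxes for the four-control-volume example; because each interior face flux appears once with a plus sign and once with a minus sign, the overall balance reduces to the boundary fluxes alone:

```python
# Four control volumes on a 1D domain, constant diffusion coefficient,
# central differencing. Each interior face flux is written once and shared
# by the two cells meeting at that face, so the fluxes cancel in pairs.
gamma, dx = 1.0, 0.25
phi = [100.0, 80.0, 65.0, 55.0]     # assumed nodal values at nodes 1..4
qA, qB = 90.0, 30.0                 # assumed boundary fluxes (entering at A,
                                    # leaving at B, both in the +x direction)

# Diffusive flux in the +x direction across the three interior faces
face = [-gamma * (phi[i + 1] - phi[i]) / dx for i in range(3)]

# Net inflow for each control volume
balance = [qA - face[0],
           face[0] - face[1],
           face[1] - face[2],
           face[2] - qB]
total = sum(balance)
print(total)    # 60.0 = qA - qB: only the boundary fluxes survive
```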
Inconsistent flux interpolation formulae give rise to unsuitable schemes that do not satisfy overall conservation. For example, let us consider the situation where a quadratic interpolation formula, based on values at points 1, 2 and 3, is used for control volume 2, and a quadratic profile, based on values at points 2, 3 and 4, is used for control volume 3.
- Flux consistency ensures conservation of φ over the entire domain for the central difference formulation of the diffusion flux.
- Inconsistent flux interpolation formulae, such as mismatched quadratic profiles, are prone to violating overall conservation.
As shown in Figure below, the resulting quadratic profiles can be quite different.
Consequently, the flux values calculated at the east face of control volume 2 and the west face of control volume 3 may be unequal if the gradients of the two curves are different at the cell face. If this is the case the two fluxes do not cancel out when summed and overall conservation is not satisfied. The example should not suggest to the reader that quadratic interpolation is entirely bad. Further on we will meet a quadratic discretisation practice, the so-called QUICK scheme, that is consistent.
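The inconsistency is easy to demonstrate numerically. In the Python sketch below (nodal values are assumed for illustration), the two quadratic profiles are fitted through nodes 1, 2, 3 and nodes 2, 3, 4 and then evaluated at the face shared by control volumes 2 and 3; because the two profiles disagree at that face, fluxes built from them would not cancel in pairs:

```python
import numpy as np

# Cell centres on a uniform grid; the face shared by control volumes 2 and 3
# lies at x = 2.0. Nodal values are assumed for illustration.
x = np.array([0.5, 1.5, 2.5, 3.5])
phi = np.array([100.0, 80.0, 65.0, 58.0])

q2 = np.polyfit(x[:3], phi[:3], 2)    # quadratic through nodes 1, 2, 3
q3 = np.polyfit(x[1:], phi[1:], 2)    # quadratic through nodes 2, 3, 4

x_face = 2.0
v2 = np.polyval(q2, x_face)           # face value seen by control volume 2
v3 = np.polyval(q3, x_face)           # face value seen by control volume 3
print(v2, v3)    # 71.875 vs 71.5: the profiles disagree at the shared face
```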
The discretised equations at each nodal point represent a set of algebraic equations that needs to be solved. Normally iterative numerical techniques are used to solve large equation sets. These methods start the solution process from a guessed distribution of the variable φ and perform successive updates until a converged solution is obtained. Scarborough (1958) has shown that a sufficient condition for a convergent iterative method can be expressed in terms of the values of the coefficients of the discretised equations:

Σ |a_nb| / |a′P| ≤ 1 at all nodes, and < 1 at one node at least
- For iterative solvers, the matrix must be diagonally dominant.
- All coefficients of the discretized equations should have the same sign (usually all positive).
Here a′P is the net coefficient of the central node P (i.e. aP − SP), and the summation in the numerator is taken over all the neighbouring nodes (nb). If the differencing scheme produces coefficients that satisfy the above criterion, the resulting matrix of coefficients is diagonally dominant. To achieve diagonal dominance we need large values of the net coefficient (aP − SP), so the linearisation practice of source terms should ensure that SP is always negative. If this is the case, −SP is always positive and adds to aP.
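The criterion is straightforward to check for a given coefficient matrix. A minimal Python sketch (the example matrix is illustrative, not from the text):

```python
import numpy as np

def satisfies_scarborough(A):
    """Scarborough criterion for a coefficient matrix of discretised equations:
    the summed off-diagonal (neighbour) magnitudes divided by the diagonal
    (net central) magnitude must be <= 1 at every node and < 1 at least one."""
    A = np.asarray(A, float)
    diag = np.abs(np.diag(A))
    off = np.abs(A).sum(axis=1) - diag
    ratios = off / diag
    return bool(np.all(ratios <= 1.0) and np.any(ratios < 1.0))

# Coefficients of a 1D central-difference diffusion problem with fixed
# boundary values (an illustrative, diagonally dominant matrix):
A_good = np.array([[ 3.0, -1.0,  0.0],
                   [-1.0,  2.0, -1.0],
                   [ 0.0, -1.0,  3.0]])
ok = satisfies_scarborough(A_good)
print(ok)    # True
```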
Diagonal dominance is a desirable feature for satisfying the ‘boundedness’ criterion. This states that in the absence of sources the internal nodal values of the property φ should be bounded by its boundary values. Hence in a steady state conduction problem without sources and with boundary temperatures of 500°C and 200°C, all interior values of T should be less than 500°C and greater than 200°C. Another essential requirement for boundedness is that all coefficients of the discretised equations should have the same sign (usually all positive). Physically this implies that an increase in the variable φ at one node should result in an increase in φ at neighbouring nodes. If the discretisation scheme does not satisfy the boundedness requirements it is possible that the solution does not converge at all, or, if it does, that it contains ‘wiggles’.
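Boundedness can be checked on the conduction example just described. The sketch below solves the sourceless 1D problem with boundary temperatures of 500°C and 200°C by simple Jacobi iteration; since each central-difference update averages the two neighbours, every interior value remains between the boundary values:

```python
import numpy as np

# Sourceless 1D steady conduction, boundary temperatures 500 and 200 °C.
# Central differencing makes each interior node the average of its two
# neighbours, so no interior value can leave the range [200, 500].
T = np.full(6, 350.0)        # initial guess at 6 nodes
T[0], T[-1] = 500.0, 200.0   # fixed boundary temperatures
for _ in range(2000):        # Jacobi-style sweeps to convergence
    T[1:-1] = 0.5 * (T[:-2] + T[2:])
bounded = bool(np.all((T >= 200.0) & (T <= 500.0)))
print(T)         # converges to the linear profile 500, 440, ..., 200
print(bounded)   # True
```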
The transportiveness property of a fluid flow (Roache, 1976) can be illustrated by considering the effect at a point P due to two constant sources of φ at nearby points W and E on either side, as shown in the figure below. We define the non-dimensional cell Peclet number as a measure of the relative strengths of convection and diffusion:

Pe = F/D = ρu/(Γ/δx) = ρuδx/Γ

where F = ρu is the convective mass flux per unit area, D = Γ/δx is the diffusion conductance, and δx is the cell width.
- Relative strength of convection and diffusion is measured by Peclet number.
The lines in the figure below indicate the general shape of contours of constant φ (say φ = 1) due to both sources for different values of Pe. The value of φ at any point can be thought of as the sum of contributions due to the two sources.
- Consider the effect at a point P due to two constant sources of φ at nearby points W and E on either side
Let us consider two extreme cases to identify the extent of the influence at node P due to the sources at W and E:
- no convection and pure diffusion (Pe → 0)
- no diffusion and pure convection (Pe → ∞)
In the case of pure diffusion the fluid is stagnant (Pe → 0) and the contours of constant φ will be concentric circles centred around W and E, since the diffusion process tends to spread φ equally in all directions. Figure (a) shows that both φ = 1 contours pass through P, indicating that conditions at this point are influenced by both sources at W and E.

As Pe increases the contours change shape from circular to elliptical and are shifted in the direction of the flow, as shown in Figure (b). Influencing becomes increasingly biased towards the upstream direction at large values of Pe, so, in the present case where the flow is in the positive x-direction, conditions at P will be mainly influenced by the upstream source at W.

In the case of pure convection (Pe → ∞) the elliptical contours are completely stretched out in the flow direction. All of property φ emanating from the sources at W and E is immediately transported downstream. Thus, conditions at P are now unaffected by the downstream source at E and completely dictated by the upstream source at W. Since there is no diffusion, φP is equal to φW. If the flow were in the negative x-direction, φP would equal φE. It is very important that the relationship between the directionality of influencing, the flow direction, and the magnitude of the Peclet number, known as the transportiveness, is borne out in the discretisation scheme.
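The two limits can also be read off the exact solution of the steady 1D convection–diffusion equation between two boundary values (a standard result for constant velocity and diffusivity; here Pe is a global Peclet number based on the domain length, an assumption of this sketch):

```python
import math

def phi_exact(x_over_L, Pe, phi0=1.0, phiL=0.0):
    """Exact steady 1D convection-diffusion profile between boundary values
    phi0 (upstream, x = 0) and phiL (downstream, x = L), for constant
    velocity and diffusivity; Pe = rho*u*L/Gamma."""
    if Pe == 0.0:    # pure diffusion: straight line between the boundaries
        return phi0 + (phiL - phi0) * x_over_L
    return phi0 + (phiL - phi0) * (math.exp(Pe * x_over_L) - 1.0) / (math.exp(Pe) - 1.0)

mid = 0.5
p_diffusion = phi_exact(mid, 0.0)     # both boundaries influence P equally
p_convection = phi_exact(mid, 50.0)   # P is dictated by the upstream value
print(p_diffusion, p_convection)      # 0.5 and ~1.0
```

At Pe = 0 the midpoint feels both boundaries equally; at large Pe it is almost entirely set by the upstream value, which is the transportiveness behaviour a good scheme must reproduce.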
- An Introduction to Computational Fluid Dynamics: The Finite Volume Method, Second Edition, H. K. Versteeg and W. Malalasekera.
- Computational Fluid Dynamics-I, Lecture-9 by Dr. Tariq Talha, College of EME, NUST, Pakistan.
Many of the products in microsystem technology are based on silicon, and most of the processing techniques used in the fabrication of microsystems are borrowed from the microelectronics industry. There are several important reasons why silicon is a desirable material in MST: (1) the microdevices in MST often include electronic circuits, so both the circuit and the microdevice can be fabricated in combination on the same substrate; (2) in addition to its desirable electronic properties, silicon also possesses useful mechanical properties, such as high strength and elasticity, good hardness, and relatively low density; (3) the technologies for processing silicon are well-established, owing to their widespread use in microelectronics; and (4) use of single-crystal silicon permits the production of physical features to very close tolerances.
Microsystem technology often requires silicon to be fabricated along with other materials in order to obtain a particular microdevice. For example, microactuators often consist of several components made of different materials. Accordingly, microfabrication techniques consist of more than just silicon processing.
Microfabrication is actually a collection of technologies which are utilized in making microdevices. Some of them have very old origins, not connected to manufacturing, like lithography or etching. Polishing was borrowed from optics manufacturing, and many of the vacuum techniques come from 19th century physics research. Electroplating is also a 19th-century technique adapted to produce micrometer scale structures, as are various stamping and embossing techniques.
To fabricate a microdevice, many processes must be performed, one after the other, often repeatedly. These processes typically include depositing a film, patterning the film with the desired micro features, and removing (or etching) portions of the film. Thin film metrology is typically used during each of these individual process steps to ensure the film structure has the desired characteristics in terms of thickness (t), refractive index (n) and extinction coefficient (k) for suitable device behavior. For example, in memory chip fabrication some 30 lithography steps, 10 oxidation steps, 20 etching steps, and 10 doping steps are performed, among many others. The complexity of a microfabrication process can be described by its mask count: the number of different pattern layers that constitute the final device. Modern microprocessors are made with about 30 masks, while a few masks suffice for a microfluidic device or a laser diode. Microfabrication resembles multiple exposure photography, with many patterns aligned to each other to create the final structure.
Microfabricated devices are not generally freestanding devices but are usually formed over or in a thicker support substrate. For electronic applications, semiconducting substrates such as silicon wafers can be used. For optical devices or flat panel displays, transparent substrates such as glass or quartz are common. The substrate enables easy handling of the microdevice through the many fabrication steps. Often many individual devices are made together on one substrate and then singulated into separate devices toward the end of fabrication.
Deposition or Growth
Microfabricated devices are typically constructed using one or more thin films. The purpose of these thin films depends on the type of device. Electronic devices may have thin films which are conductors (metals), insulators (dielectrics) or semiconductors. Optical devices may have films which are reflective, transparent, light guiding or scattering. Films may also serve chemical or mechanical purposes, as in MEMS applications. Examples of deposition techniques include:
- Thermal oxidation
- Chemical vapor deposition (CVD)
- Physical vapor deposition (PVD)
- Evaporative deposition
- Electron beam PVD
It is often desirable to pattern a film into distinct features or to form openings in some of the layers. These features are on the micrometer or nanometer scale and the patterning technology is what defines microfabrication. The patterning technique typically uses a ‘mask’ to define portions of the film which will be removed. Examples of patterning techniques include:
- Photolithography
- Shadow masking
- Soft lithography
Etching is the removal of some portion of the thin film or substrate. The substrate is exposed to an etchant (such as an acid or plasma) which chemically or physically attacks the film until it is removed. Etching techniques include:
- Dry etching (plasma etching), such as reactive-ion etching (RIE) or deep reactive-ion etching (DRIE)
- Wet etching (chemical etching)
Microforming is a microfabrication process of microsystem or microelectromechanical system (MEMS) “parts or structures with at least two dimensions in the submillimeter range.” It includes techniques such as microextrusion, microstamping, and microcutting. These and other microforming processes have been envisioned and researched since at least 1990, leading to the development of industrial and experimental-grade manufacturing tools. However, as Fu and Chan pointed out in a 2013 state-of-the-art technology review, several issues must still be resolved before the technology can be implemented more widely, including deformation load and defects, forming system stability, mechanical properties, and other size-related effects on the crystallite (grain) structure and boundaries:
In microforming, the ratio of the total surface area of grain boundaries to the material volume decreases with the decrease of specimen size and the increase of grain size. This leads to the decrease of grain boundary strengthening effect. Surface grains have lesser constraints compared to internal grains. The change of flow stress with part geometry size is partly attributed to the change of volume fraction of surface grains. In addition, the anisotropic properties of each grain become significant with the decrease of workpiece size, which results in the inhomogeneous deformation, irregular formed geometry and the variation of deformation load. There is a critical need to establish the systematic knowledge of microforming to support the design of part, process, and tooling with the consideration of size effects.
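The surface-grain size effect quoted above can be illustrated with a crude geometric estimate. The model below is purely illustrative (not from Fu and Chan): it treats the outermost one-grain-thick shell of a round specimen as "surface grains" and shows their volume fraction growing as the specimen shrinks at fixed grain size:

```python
def surface_grain_fraction(D, d):
    """Crude geometric estimate: volume fraction of 'surface grains' in a
    round bar of diameter D with grain size d (same units), counting the
    outermost one-grain-thick shell. Illustrative model only."""
    if D <= 2.0 * d:
        return 1.0               # specimen is only a grain or two across
    return 1.0 - ((D - 2.0 * d) / D) ** 2

d = 0.05                          # grain size, mm (assumed)
fracs = [surface_grain_fraction(D, d) for D in (10.0, 2.0, 0.5)]
print(fracs)   # ~[0.02, 0.098, 0.36]: shrinking the part raises the share
```

As the part size approaches the grain size, surface grains, which are less constrained than interior grains, dominate the cross-section, which is one way to picture why flow stress and deformation behaviour change at micro scale.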
A wide variety of other processes for cleaning, planarizing, or modifying the chemical properties of microfabricated devices can also be performed. Some examples include:
- Doping by either thermal diffusion or ion implantation
- Chemical-mechanical planarization (CMP)
- Wafer cleaning, also known as “surface preparation”
- Wire bonding
Cleanliness in wafer fabrication
Microfabrication is carried out in cleanrooms, where air has been filtered of particle contamination and temperature, humidity, vibrations and electrical disturbances are under stringent control. Smoke, dust, bacteria and cells are micrometers in size, and their presence will destroy the functionality of a microfabricated device.
Cleanrooms provide passive cleanliness but the wafers are also actively cleaned before every critical step. RCA-1 clean in ammonia-peroxide solution removes organic contamination and particles; RCA-2 cleaning in hydrogen chloride-peroxide mixture removes metallic impurities. Sulfuric acid-peroxide mixture (a.k.a. Piranha) removes organics. Hydrogen fluoride removes native oxide from the silicon surface. These are all wet cleaning steps in solutions. Dry cleaning methods include oxygen and argon plasma treatments to remove unwanted surface layers, or hydrogen bake at elevated temperature to remove native oxide before epitaxy. Pre-gate cleaning is the most critical cleaning step in CMOS fabrication: it ensures that the ca. 2 nm thick oxide of a MOS transistor can be grown in an orderly fashion. Oxidation and all high-temperature steps are very sensitive to contamination, and cleaning steps must precede high temperature steps.
Wafer Fabrication Process Steps
Surface preparation is just a different viewpoint on the same steps described above: it is about leaving the wafer surface in a controlled and well-known state before you start processing. Wafers are contaminated by previous process steps (e.g. metals bombarded from chamber walls by energetic ions during ion implantation), or they may have gathered polymers from wafer boxes, and the contamination can vary with wait time.
Wafer cleaning and surface preparation work a little bit like the machines in a bowling alley: first they remove all unwanted bits and pieces, and then they reconstruct the desired pattern so that the game can go on.
Fields of Use
Microfabricated devices include:
- Fabrication of integrated circuits (“microchips”)
- Microelectromechanical systems (MEMS) and MOEMS
- Microfluidic devices (e.g., inkjet print heads)
- Solar cells
- Flat panel displays
- Sensors (microsensors: biosensors, nanosensors)
- Power MEMS, fuel cells, energy harvesters/scavengers
For years, dark matter has been behaving badly. The term was first invoked nearly 80 years ago by the astronomer Fritz Zwicky, who realized that some unseen gravitational force was needed to stop individual galaxies from escaping giant galaxy clusters. Later, Vera Rubin and Kent Ford used unseen dark matter to explain why galaxies themselves don’t fly apart.
Yet even though we use the term “dark matter” to describe these two situations, it’s not clear that the same kind of stuff is at work. The simplest and most popular model holds that dark matter is made of weakly interacting particles that move about slowly under the force of gravity. This so-called “cold” dark matter accurately describes large-scale structures like galaxy clusters. However, it doesn’t do a great job at predicting the rotation curves of individual galaxies. Dark matter seems to act differently at this scale.
In the latest effort to resolve this conundrum, two physicists have proposed that dark matter is capable of changing phases at different size scales. Justin Khoury, a physicist at the University of Pennsylvania, and his former postdoc Lasha Berezhiani, who is now at Princeton University, say that in the cold, dense environment of the galactic halo, dark matter condenses into a superfluid—an exotic quantum state of matter that has zero viscosity. If dark matter forms a superfluid at the galactic scale, it could give rise to a new force that would account for the observations that don’t fit the cold dark matter model. Yet at the scale of galaxy clusters, the special conditions required for a superfluid state to form don’t exist; here, dark matter behaves like conventional cold dark matter.
“It’s a neat idea,” said Tim Tait, a particle physicist at the University of California, Irvine. “You get to have two different kinds of dark matter described by one thing.” And that neat idea may soon be testable. Although other physicists have toyed with similar ideas, Khoury and Berezhiani are nearing the point where they can extract testable predictions that would allow astronomers to explore whether our galaxy is swimming in a superfluid sea.
Here on Earth, superfluids aren’t exactly commonplace. But physicists have been cooking them up in their labs since 1938. Cool down particles to sufficiently low temperatures and their quantum nature will start to emerge. Their matter waves will spread out and overlap with one another, eventually coordinating themselves to behave as if they were one big “superatom.” They will become coherent, much like the light particles in a laser all have the same energy and vibrate as one. These days even undergraduates create so-called Bose-Einstein condensates (BECs) in the lab, many of which can be classified as superfluids.
Superfluids don’t exist in the everyday world—it’s too warm for the necessary quantum effects to hold sway. Because of that, “probably ten years ago, people would have balked at this idea and just said ‘this is impossible,’” said Tait. But recently, more physicists have warmed to the possibility of superfluid phases forming naturally in the extreme conditions of space. Superfluids may exist inside neutron stars, and some researchers have speculated that space-time itself may be a superfluid. So why shouldn’t dark matter have a superfluid phase, too?
To make a superfluid out of a collection of particles, you need to do two things: Pack the particles together at very high densities and cool them down to extremely low temperatures. In the lab, physicists (or undergraduates) confine the particles in an electromagnetic trap, then zap them with lasers to remove the kinetic energy and lower the temperature to just above absolute zero.
Read more: https://www.wired.com/story/this-dark-matter-theory-could-solve-a-celestial-conundrum/
What is Shape Memory Alloy
Shape Memory Alloys (SMAs) are novel materials that have the ability to return to a predetermined shape when heated. When a Shape Memory Alloy is cold, or below its transformation temperature, it has a very low yield strength and can be deformed quite easily into any new shape, which it will retain. However, when the material is heated above its transformation temperature it undergoes a change in crystal structure, which causes it to return to its original shape.
Shape Memory Effect
At a low temperature, an SMA can be seemingly “plastically” deformed, but this “plastic” strain can be recovered by increasing the temperature. This is called the Shape Memory Effect (SME). At a high temperature, a large deformation can be recovered simply by releasing the applied force. This behavior is known as Superelasticity (SE).
Fig 1. (a) Shape Memory Effect and (b) Superelasticity
Definition of a Shape Memory Alloy
Shape Memory Alloys (SMAs) are a unique class of metal alloys that can recover apparent permanent strains when they are heated above a certain temperature.
The SMAs have two stable phases: the high-temperature phase, called austenite, and the low-temperature phase, called martensite. In addition, the martensite can be in one of two forms, twinned and detwinned, as shown in Figure 2. A phase transformation which occurs between these two phases upon heating/cooling is the basis for the unique properties of the SMAs. The key effects of SMAs associated with the phase transformation are pseudoelasticity and the shape memory effect.
Figure 2. Different phases of an SMA.
Upon cooling in the absence of applied load the material transforms from austenite into twinned (self-accommodated) martensite. As a result of this phase transformation no observable macroscopic shape change occurs. Upon heating the material in the martensitic phase, a reverse phase transformation takes place and as a result the material transforms to austenite. The above process is shown in Figure 3.
Figure 3. Temperature-induced phase transformation of an SMA without mechanical loading.
Four characteristic temperatures are defined in Figure 3: the martensitic start temperature (Ms), at which the material starts transforming from austenite to martensite; the martensitic finish temperature (Mf), at which the transformation is complete and the material is fully in the martensitic phase; the austenite start temperature (As), at which the reverse transformation (martensite to austenite) initiates; and the austenite finish temperature (Af), at which the reverse phase transformation is completed and the material is fully in the austenitic phase.
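The cooling branch of this transformation can be sketched numerically. The snippet below assumes, purely for illustration, that the martensite volume fraction varies linearly with temperature between Ms and Mf; real alloys follow smoother, hysteretic kinetics, and the temperatures used here are hypothetical.

```python
# Minimal sketch: martensite fraction on cooling, assuming a linear
# variation between Ms and Mf (real SMA kinetics are hysteretic and
# nonlinear; Ms and Mf below are hypothetical values in degrees C).

def martensite_fraction_on_cooling(T, Ms=60.0, Mf=40.0):
    """Return the martensite volume fraction while cooling from austenite."""
    if T >= Ms:
        return 0.0                    # fully austenite above Ms
    if T <= Mf:
        return 1.0                    # fully martensite below Mf
    return (Ms - T) / (Ms - Mf)       # linear interpolation in between

print(martensite_fraction_on_cooling(70))   # 0.0 (austenite)
print(martensite_fraction_on_cooling(50))   # 0.5 (mid-transformation)
print(martensite_fraction_on_cooling(30))   # 1.0 (martensite)
```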
Thermally-Induced Transformation with Applied Mechanical Load
If a mechanical load is applied to the material in the state of twinned martensite (at low temperature), it is possible to detwin the martensite. Upon release of the load, the material remains deformed. Subsequent heating of the material to a temperature above Af will result in the reverse phase transformation (martensite to austenite) and will lead to complete shape recovery, as shown in Figure 4. This process is the manifestation of the Shape Memory Effect (SME).
Figure 4. Shape Memory Effect of an SMA.
It is also possible to induce a martensitic transformation which would lead directly to detwinned martensite. If load is applied in the austenitic phase and the material is cooled, the phase transformation will result in detwinned martensite. Thus, very large strains (on the order of 5-8%) will be observed.
Reheating the material will result in complete shape recovery. The above-described loading path is shown in Figure 5. The transformation temperatures in this case strongly depend on the magnitude of the applied load. Higher values of the applied load will lead to higher values of the transformation temperatures. Usually a linear relationship between the applied load and the transformation temperatures is assumed, as shown in Figure 5.
Figure 5. Temperature-induced phase transformation with applied load.
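The linear dependence of the transformation temperatures on the applied load can be expressed as a simple shift rule. The sketch below assumes a single stress-influence coefficient C and hypothetical zero-stress temperatures; both are illustrative choices, not data for any particular alloy.

```python
# Sketch of the assumed linear stress dependence of the transformation
# temperatures (a Clausius-Clapeyron-type relation). The coefficient C
# and the zero-stress temperatures are illustrative, not measured values.

def shifted_temperature(T0, stress_mpa, C=7.0):
    """Transformation temperature (C) under an applied stress.

    T0         : zero-stress transformation temperature (C)
    stress_mpa : applied stress (MPa)
    C          : stress-influence coefficient (MPa per degree C)
    """
    return T0 + stress_mpa / C

Ms0, Af0 = 40.0, 70.0                    # hypothetical zero-stress temperatures
print(shifted_temperature(Ms0, 140.0))   # Ms shifts from 40 C to 60 C at 140 MPa
print(shifted_temperature(Af0, 140.0))   # Af shifts from 70 C to 90 C at 140 MPa
```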
It is also possible to induce a phase transformation by applying a purely mechanical load. The result of this load application is fully detwinned martensite, and very large strains are observed. If the temperature of the material is above Af, a complete shape recovery is observed upon unloading, so the material behavior resembles elasticity. The above-described effect is therefore known as the Pseudoelastic Effect. A loading path demonstrating the pseudoelastic effect is schematically shown in Figure 6, while the resulting stress-strain diagram is shown in Figure 7.
Figure 6. Pseudoelastic loading path.
Figure 7. Pseudoelastic stress-strain diagram.
Various thermal actuators then came into existence as parts of electric appliances and automotive engineering: flaps in air conditioners, which change the direction of airflow depending upon the temperature of the air; coffeemakers; rice cookers; drain systems for steam heaters in trains; outer-vent control systems to avoid fuel evaporation in automobiles; and devices to open parallel hydraulic channels in automatic transmissions.
Among these, the application of SMAs to air-conditioner flaps by Matsushita Electric Co. was the most successful, replacing the ordinary sensor/integrated-circuit/relay/motor system with a simple combination of an SMA spring and a bias spring. These simple devices have been sold in large numbers.
Let us see how a thermal actuator works, using as an example the recently developed thermostatic mixing valve shown in Figure 8. In the application of SMAs to a thermal actuator, there are two basic components: a temperature-sensitive SMA spring and a temperature-insensitive bias spring, both of which are set in series (Fig. 8(a)) and thus resist each other. Usually the SMA spring is harder than the bias spring in the parent phase and softer than the bias spring in the martensitic state. Thus, when the temperature is too high, the SMA spring is stronger than the bias one, and the opening for hot water becomes smaller than that for cold water.
Figure 8. Application of the SMAs
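The opposed-spring arrangement can be approximated with a toy force-balance model. All numeric values below (stiffnesses, stroke) are illustrative assumptions; the point is only that the spool position swings between hot and cold equilibria as the SMA spring stiffness changes with phase.

```python
# Toy force-balance model of the SMA spring / bias spring pair in a thermal
# actuator. Both springs are compressed against a spool; the spool settles
# where the two spring forces are equal. All stiffness and stroke values
# are illustrative assumptions, not data for a real valve.

def spool_position(k_sma, k_bias, stroke=10.0):
    """Equilibrium spool displacement (mm) measured from the bias-spring end.

    For two opposed linear springs sharing a total compression 'stroke',
    force balance k_sma * (stroke - x) = k_bias * x gives the result below.
    """
    return k_sma * stroke / (k_sma + k_bias)

K_AUSTENITE = 3.0   # N/mm, SMA spring is stiff when hot (assumed value)
K_MARTENSITE = 0.5  # N/mm, SMA spring is soft when cold (assumed value)
K_BIAS = 1.0        # N/mm, temperature-insensitive bias spring (assumed)

print(spool_position(K_AUSTENITE, K_BIAS))   # 7.5 mm: hot, SMA spring wins
print(spool_position(K_MARTENSITE, K_BIAS))  # ~3.3 mm: cold, bias spring wins
```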
Although there are many SMAs, such as Ti-Ni, Cu-Al-Ni, Cu-Zn-Al, Au-Cd, Mn-Cu, Ni-Mn-Ga, and Fe-based alloys, most of the practical SMAs are Ti-Ni-based alloys, since other SMAs are usually not ductile (or not ductile enough) or are of low strength and exhibit grain-boundary fracture. Ti-Ni-based alloys are superior to other SMAs in many respects. They exhibit 50-60% elongation and tensile strength as high as 1000 MPa. To our knowledge, they possess the best mechanical properties among intermetallics and can be used as structural materials as well. They also have a very high resistance to corrosion and abrasion.
What is Pressure vessel?
A pressure vessel is a container designed to hold gases or liquids at a pressure substantially different from the ambient pressure.
The pressure differential is dangerous, and fatal accidents have occurred in the history of pressure vessel development and operation. Consequently, pressure vessel design, manufacture, and operation are regulated by engineering authorities backed by legislation. For these reasons, the definition of a pressure vessel varies from country to country.
Horizontal pressure vessel (steel)
Design involves parameters such as maximum safe operating pressure and temperature, safety factor, corrosion allowance and minimum design temperature (for brittle fracture). Construction is tested using nondestructive testing, such as ultrasonic testing, radiography, and pressure tests. Hydrostatic tests use water, but pneumatic tests use air or another gas. Hydrostatic testing is preferred, because it is a safer method, as much less energy is released if a fracture occurs during the test (water does not rapidly increase its volume when rapid depressurization occurs, unlike gases like air, which fail explosively).
In most countries, vessels over a certain size and pressure (15 PSI) must be built to a formal code. In the United States that code is the ASME Boiler and Pressure Vessel Code (BPVC). These vessels also require an authorized inspector to sign off on every new vessel constructed. Each vessel has a nameplate with pertinent information about the vessel, such as maximum allowable working pressure, maximum temperature, minimum design metal temperature, what company manufactured it, the date, its registration number (through the National Board), and ASME’s official stamp for pressure vessels (U-stamp). The nameplate makes the vessel traceable and officially an ASME Code vessel.
Pressure vessel features
Shape of a pressure vessel
Pressure vessels can theoretically be almost any shape, but shapes made of sections of spheres, cylinders, and cones are usually employed. A common design is a cylinder with end caps called heads. Head shapes are frequently either hemispherical or dished (torispherical). More complicated shapes have historically been much harder to analyze for safe operation and are usually far more difficult to construct.
Cylindrical pressure vessel
Fire Extinguisher with rounded rectangle pressure vessel
an aerosol spray can
Spherical gas container
Theoretically, a spherical pressure vessel has approximately twice the strength of a cylindrical pressure vessel with the same wall thickness, and is the ideal shape to hold internal pressure. However, a spherical shape is difficult to manufacture, and therefore more expensive, so most pressure vessels are cylindrical with 2:1 semi-elliptical heads or end caps on each end. Smaller pressure vessels are assembled from a pipe and two covers. For cylindrical vessels with a diameter up to 600 mm (NPS of 24 in), it is possible to use seamless pipe for the shell, thus avoiding many inspection and testing issues, mainly the nondestructive examination of radiography for the long seam if required.
A disadvantage of these vessels is that greater diameters are more expensive, so that for example the most economic shape of a 1,000 litres (35 cu ft), 250 bars (3,600 psi) pressure vessel might be a diameter of 91.44 centimeters (36 in) and a length of 1.7018 meters (67 in) including the 2:1 semi-elliptical domed end caps.
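As a sanity check on the quoted dimensions, the enclosed volume can be estimated by treating each 2:1 semi-elliptical head as half an oblate spheroid (head depth = radius/2). Interpreting the 67 in figure as the overall length, heads included, is an assumption.

```python
# Rough volume check for the example vessel above. Assumptions: the 67 in
# length is overall (including both heads), the 36 in diameter is taken as
# the shell diameter, and each 2:1 semi-elliptical head is modeled as half
# an oblate spheroid with volume (2/3) * pi * r^2 * (r/2) = pi * r^3 / 3.
import math

d = 91.44            # diameter, cm (36 in)
total_len = 170.18   # overall length, cm (67 in)
r = d / 2
head_depth = r / 2                     # 2:1 head: depth = radius / 2
cyl_len = total_len - 2 * head_depth   # straight-shell length

v_cyl = math.pi * r**2 * cyl_len
v_heads = 2 * (math.pi * r**3 / 3)
litres = (v_cyl + v_heads) / 1000.0
print(round(litres))   # close to the nominal 1,000 litres
```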
Many pressure vessels are made of steel. To manufacture a cylindrical or spherical pressure vessel, rolled and possibly forged parts would have to be welded together. Some mechanical properties of steel, achieved by rolling or forging, could be adversely affected by welding, unless special precautions are taken. In addition to adequate mechanical strength, current standards dictate the use of steel with a high impact resistance, especially for vessels used in low temperatures. In applications where carbon steel would suffer corrosion, special corrosion resistant material should also be used.
Casing of the Altair rocket stage, essentially a fiberglass composite over-wrapped pressure vessel
Some pressure vessels are made of composite materials, such as filament wound composite using carbon fiber held in place with a polymer. Due to the very high tensile strength of carbon fiber these vessels can be very light, but are much more difficult to manufacture. The composite material may be wound around a metal liner, forming a composite over-wrapped pressure vessel.
Other very common materials include polymers such as PET in carbonated beverage containers and copper in plumbing.
Pressure vessels may be lined with various metals, ceramics, or polymers to prevent leaking and protect the structure of the vessel from the contained medium. This liner may also carry a significant portion of the pressure load.
Pressure Vessels may also be constructed from concrete (PCV) or other materials which are weak in tension. Cabling, wrapped around the vessel or within the wall or the vessel itself, provides the necessary tension to resist the internal pressure. A “leak-proof steel thin membrane” lines the internal wall of the vessel. Such vessels can be assembled from modular pieces and so have “no inherent size limitations”. There is also a high order of redundancy thanks to the large number of individual cables resisting the internal pressure.
Leak before burst
Leak before burst describes a pressure vessel designed such that a crack in the vessel will grow through the wall, allowing the contained fluid to escape and reducing the pressure, prior to growing so large as to cause fracture at the operating pressure.
Many pressure vessel standards, including the ASME Boiler and Pressure Vessel Code and the AIAA metallic pressure vessel standard, either require pressure vessel designs to be leak before burst, or require pressure vessels to meet more stringent requirements for fatigue and fracture if they are not shown to be leak before burst.
As the pressure vessel is designed to a pressure, there is typically a safety valve or relief valve to ensure that this pressure is not exceeded in operation.
Pressure vessel closures
Pressure vessel closures are pressure retaining structures designed to provide quick access to pipelines, pressure vessels, pig traps, filters and filtration systems. Typically, pressure vessel closures allow maintenance personnel quick access to the interior of the vessel.
No matter what shape it takes, the minimum mass of a pressure vessel scales with the pressure and volume it contains and is inversely proportional to the strength to weight ratio of the construction material (minimum mass decreases as strength increases).
Scaling of stress in walls of vessel
Pressure vessels are held together against the gas pressure due to tensile forces within the walls of the container. The normal (tensile) stress in the walls of the container is proportional to the pressure and radius of the vessel and inversely proportional to the thickness of the walls. Therefore, pressure vessels are designed to have a thickness proportional to the radius of tank and the pressure of the tank and inversely proportional to the maximum allowed normal stress of the particular material used in the walls of the container.
Because (for a given pressure) the thickness of the walls scales with the radius of the tank, the mass of a tank (which scales as the length times radius times thickness of the wall for a cylindrical tank) scales with the volume of the gas held (which scales as length times radius squared). The exact formula varies with the tank shape but depends on the density, ρ, and maximum allowable stress σ of the material in addition to the pressure P and volume V of the vessel. (See below for the exact equations for the stress in the walls.)
For a sphere, the minimum mass of a pressure vessel is

M = (3/2) P V ρ / σ, where:
M is mass, (kg)
P is the pressure difference from ambient (the gauge pressure), (Pa)
V is volume, (m^3)
ρ is the density of the pressure vessel material, (kg/m^3)
σ is the maximum working stress that material can tolerate. (Pa)
Other shapes besides a sphere have constants larger than 3/2 (infinite cylinders take 2), although some tanks, such as non-spherical wound composite tanks, can approach this.
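The spherical minimum-mass formula above is straightforward to evaluate. The material numbers below are rough, generic steel values chosen only for illustration, not a recommendation for any real design.

```python
# Evaluating the spherical minimum-mass scaling M = (3/2) * P * V * rho / sigma.
# The material properties below are rough, generic steel values (assumptions).

P = 25.0e6       # gauge pressure, Pa (250 bar)
V = 1.0          # volume, m^3 (1,000 litres)
rho = 7850.0     # density of steel, kg/m^3 (typical handbook value)
sigma = 250.0e6  # maximum working stress, Pa (a modest steel allowable)

M = 1.5 * P * V * rho / sigma
print(M)   # 1177.5 kg: ideal shell mass, before safety factors and fittings
```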
Cylindrical vessel with hemispherical ends
This is sometimes called a “bullet” for its shape, although in geometric terms it is a capsule.
For a cylinder with hemispherical ends,

M = 2π R² (R + W) P ρ / σ, where:
R is the radius (m)
W is the middle cylinder width only, and the overall width is W + 2R (m)
Cylindrical vessel with semi-elliptical ends
In a vessel with an aspect ratio of middle cylinder width to radius of 2:1,

M = 6π R³ P ρ / σ
In looking at the first equation, the factor PV, in SI units, is in units of pressure energy. For a stored gas, PV is proportional to the mass of gas at a given temperature, thus

M = (3/2) n R T ρ / σ (for a sphere)

where n is the number of moles of gas, R is the gas constant, and T is the absolute temperature.
The other factors are constant for a given vessel shape and material. So we can see that there is no theoretical “efficiency of scale”, in terms of the ratio of pressure vessel mass to pressure energy, or of pressure vessel mass to stored gas mass. For storing gases, “tankage efficiency” is independent of pressure, at least for the same temperature.
So, for example, a typical design for a minimum mass tank to hold helium (as a pressurant gas) on a rocket would use a spherical chamber for a minimum shape constant, carbon fiber for best possible ρ/σ, and very cold helium for best possible M/pV.
Stress in thin-walled pressure vessels
Stress in a thin-walled pressure vessel in the shape of a sphere is

σθ = σlong = p r / (2t)

where σθ is hoop stress, or stress in the circumferential direction, σlong is stress in the longitudinal direction, p is internal gauge pressure, r is the inner radius of the sphere, and t is the thickness of the sphere wall. A vessel can be considered “thin-walled” if the diameter is at least 10 times (sometimes cited as 20 times) greater than the wall thickness.
Stress in a thin-walled pressure vessel in the shape of a cylinder is

σθ = p r / t
σlong = p r / (2t)

where:
- σΘ is hoop stress, or stress in the circumferential direction
- σlong is stress in the longitudinal direction
- p is internal gauge pressure
- r is the inner radius of the cylinder
- t is thickness of the cylinder wall.
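Both thin-wall formulas can be wrapped in a few lines of code. The example pressure and geometry are arbitrary illustrations (r/t = 50, comfortably in the thin-walled regime).

```python
# Thin-wall membrane stress sketch using the formulas above. The example
# numbers (a 1 m diameter cylinder, 10 mm wall, 2 MPa gauge pressure) are
# arbitrary illustrations, not a design case.

def sphere_stress(p, r, t):
    """Hoop = longitudinal stress in a thin-walled sphere: p * r / (2 * t)."""
    return p * r / (2 * t)

def cylinder_stresses(p, r, t):
    """Return (hoop, longitudinal) stresses for a thin-walled cylinder."""
    return p * r / t, p * r / (2 * t)

p, r, t = 2.0e6, 0.5, 0.010            # Pa, m, m  (r/t = 50, thin-walled)
hoop, longitudinal = cylinder_stresses(p, r, t)
print(hoop / 1e6, longitudinal / 1e6)  # 100 and 50 MPa: hoop is twice longitudinal
print(sphere_stress(p, r, t) / 1e6)    # 50 MPa: same wall, half the peak stress
```

The factor-of-two difference between the cylinder's hoop stress and the sphere's stress is the quantitative basis for the earlier statement that a sphere has roughly twice the strength of a cylinder of the same wall thickness.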
Almost all pressure vessel design standards contain variations of these two formulas with additional empirical terms to account for wall thickness tolerances, quality control of welds and in-service corrosion allowances.
For example, the ASME Boiler and Pressure Vessel Code (BPVC) (UG-27) formulas are:

Spherical shells: σθ = σlong = p (r + 0.2t) / (2 t E)

Cylindrical shells: σθ = p (r + 0.6t) / (t E), σlong = p (r − 0.4t) / (2 t E)
where E is the joint efficiency, and all other variables are as stated above.
The factor of safety is often included in these formulas as well, in the case of the ASME BPVC this term is included in the material stress value when solving for pressure or thickness.
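A minimal sketch of the UG-27-style cylindrical-shell formulas with the joint efficiency E folded in follows; the numbers are illustrative and this is not a substitute for a full code calculation.

```python
# Sketch of UG-27-style cylindrical-shell stress formulas including the
# joint efficiency E. Example inputs (2 MPa, 0.5 m radius, 10 mm wall,
# E = 0.85 for a spot-examined weld) are illustrative assumptions only.

def asme_cyl_hoop(p, r, t, E=1.0):
    """Circumferential stress for a cylindrical shell: p(r + 0.6t) / (t E)."""
    return p * (r + 0.6 * t) / (t * E)

def asme_cyl_long(p, r, t, E=1.0):
    """Longitudinal stress for a cylindrical shell: p(r - 0.4t) / (2 t E)."""
    return p * (r - 0.4 * t) / (2 * t * E)

p, r, t, E = 2.0e6, 0.5, 0.010, 0.85
print(asme_cyl_hoop(p, r, t, E) / 1e6)  # slightly above the simple p*r/t estimate
print(asme_cyl_long(p, r, t, E) / 1e6)
```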
Winding angle of carbon fibre vessels
Wound infinite cylindrical shapes optimally take a winding angle of 54.7 degrees, as this gives the required 2:1 ratio of strength in the circumferential direction relative to the longitudinal direction.
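The 54.7-degree figure follows from requiring tan²θ = 2, so that the fibre carries hoop and longitudinal stress in the 2:1 ratio noted above:

```python
# The "magic angle" for filament winding: requiring the fibre orientation to
# carry hoop and longitudinal stress in their 2:1 ratio gives tan^2(angle) = 2.
import math

magic_angle = math.degrees(math.atan(math.sqrt(2)))
print(round(magic_angle, 1))   # 54.7
```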
Pressure vessels are used in a variety of applications in both industry and the private sector. They appear in these sectors as industrial compressed air receivers and domestic hot water storage tanks.
Other examples of pressure vessels are diving cylinders, recompression chambers, distillation towers, pressure reactors, autoclaves, and many other vessels in mining operations, oil refineries and petrochemical plants, nuclear reactor vessels, submarine and space ship habitats, pneumatic reservoirs, hydraulic reservoirs under pressure, rail vehicle airbrake reservoirs, road vehicle airbrake reservoirs, and storage vessels for liquified gases such as ammonia, chlorine, and LPG (propane, butane).
A unique application of a pressure vessel is the passenger cabin of an airliner: the outer skin carries both the aircraft maneuvering loads and the cabin pressurization loads.