Organizations perform two kinds of work: operational work and projects. Due to the repetitive nature of operational work, it is easier to systematize its processes. However, because projects have finite start and end dates, are unique in nature, and involve a mix of team members, they are more difficult to systematize and to develop sound methodologies and processes for.
Project Management Institute, Inc. (PMI)
There are many causes of project failure, and every failed project has its own set of issues. Sometimes a single trigger event leads to failure, but more often than not it is a complex, entwined set of problems that combine and cumulatively result in failure. Generally these issues fall into two categories: things the team did do (but did poorly) and things the team failed to do.
According to a survey carried out by the International Project Leadership Academy the following list documents 101 of the most common mistakes that lead to, or contribute to, the failure of projects:
Goal and vision
- Failure to understand the why behind the what results in a project delivering something that fails to meet the real needs of the organization (i.e. failure to ask or answer the question “what are we really trying to achieve?”)
- Failure to document the “why” into a succinct and clear vision that can be used to communicate the project’s goal to the organization and as a focal point for planning
- Project objectives are misaligned with the overall business goals and strategy of the organization as a whole (e.g. Sponsor has their own private agenda that is not aligned with the organization’s stated goals)
- Project defines its vision and goals, but the document is put on a shelf and never used as a guide for subsequent decision making
- Lack of coordination between multiple projects spread throughout the organization results in different projects being misaligned or potentially in conflict with each other.
Leadership and governance
- Failure to establish a governance structure appropriate to the needs of the project (classic mistake award winner)
- Appointing a Sponsor who fails to take ownership of the project seriously or who feels that the Project Manager is the only person responsible for making the project a success
- Appointing a Sponsor who lacks the experience, seniority, time or training to perform the role effectively
- Failure to establish effective leadership in one or more of the three leadership domains i.e. business, technical and organizational
- The Project Manager lacks the interpersonal or organizational skills to bring people together and make things happen
- Failure to find the right level of project oversight (e.g. either the Project Manager micromanages the project causing the team to become de-motivated or they fail to track things sufficiently closely allowing the project to run out of control).
Stakeholder engagement issues
- Failure to identify or engage the stakeholders (classic mistake award winner)
- Failing to view the project through the eyes of the stakeholders results in a failure to appreciate how the project will impact the stakeholders or how they will react to the project
- Imposing a solution or decision on stakeholders and failing to get their buy-in
- Allowing one stakeholder group to dominate the project while ignoring the needs of other less vocal groups
- Failure to include appropriate “change management” type activities into the scope of the project to ensure stakeholders are able to transition from old ways of working to the new ways introduced by the project
- Failure to establish effective communications between individuals, groups or organizations involved in the project (classic mistake award winner).
Team issues
- Lack of clear roles and responsibilities results in confusion, errors and omissions
- There are insufficient team members to complete the work that has been committed to
- Projects are done “off the side of the desk” (i.e. team members are expected to perform full time operational jobs while also meeting project milestones)
- The team lacks the Subject Matter Expertise needed to complete the project successfully
- Selecting the first available person to fill a role rather than waiting for the person who is best qualified
- Failure to provide the team with appropriate training in the technology in use, the processes the team will be using or the business domain in which the system will function
- Lack of feedback processes allows discontent in the team to simmer under the surface
- The Project Manager’s failure to address poor team dynamics or obvious non-performance of an individual team member results in the rest of the team becoming disengaged
- Practices that undermine team motivation, such as pushing a team that is already exhausted into doing even more overtime
- Adding more resources to an already late project causes additional strain on the leadership team, resulting in even lower team performance (Brooks's law).
Requirements issues
- Lack of formality in the scope definition process results in vagueness and different people having different understandings of what is in and what is out of scope
- Vague or open ended requirements (such as requirements that end with “etc”)
- Failure to address excessive scope volatility or uncontrolled scope creep (classic mistake award winner)
- Failure to fully understand the operational context in which the product being produced needs to function once the project is over (classic mistake award winner)
- Requirements are defined by an intermediary without directly consulting or involving those who will eventually use the product being produced (see also lack of stakeholder engagement above)
- Individual requirements are never vetted against the project’s overall objectives to ensure each requirement supports the project’s objective and has a reasonable Return on Investment (ROI)
- The project requirements are written based on the assumption that everything will work as planned. Requirements to handle potential problems or more challenging situations that might occur are never considered
- Failure to broker agreement between stakeholders with differing perspectives or requirements.
Estimation
- Those who will actually perform the work are excluded from the estimating process
- Estimates are arbitrarily cut in order to secure a contract or make a project more attractive
- Allowing a manager, sales agent or customer to bully the team into making unrealistic commitments
- Estimates are provided without a corresponding statement of scope
- Estimation is done based on insufficient information or analysis (rapid off-the-cuff estimates become firm commitments)
- Commitments are made to firm estimates, rather than using a range of values that encapsulate the unknowns in the estimate
- The assumptions used for estimating are never documented, discussed or validated
- Big ticket items are estimated, but because they are less visible, the smaller scale activities (the peanut list) are omitted
- Estimation is done without referring back to a repository of performance data culled from prior projects
- Failure to build in contingency to handle unknowns
- Assuming a new tool, process or system being used by the team will deliver instant productivity improvements.
Planning
- Failure to plan – diving into the performance and execution of work without first slowing down to think
- The underestimation of complexity (classic mistake award winner)
- Working under constant and excessive schedule pressure
- Assuming effort estimates can be directly equated to elapsed task durations without any buffers or room for non-productive time
- Failure to manage management or customer expectations
- Planning is seen as the Project Manager’s responsibility rather than a team activity
- Failure to break a large scale master plan into more manageable pieces that can be delivered incrementally
- The team commits itself to a schedule without first getting corresponding commitments from the other groups and stakeholders who also have to commit to the schedule (aka schedule suicide)
- Unclear roles and responsibilities lead to confusion and gaps
- Some team members are allowed to become overloaded resulting in degraded performance in critical areas of the project while others are underutilized
- Requirements are never prioritized, resulting in the team focusing its energies on lower priority items instead of high priority work
- Failure to include appropriate culture change activities as part of the project plan (classic mistake award winner)
- Failure to provide sufficient user training when deploying the product produced by the project into its operational environment (classic mistake award winner)
- Failure to build training or ramp up time into the plan
- Change requests are handled informally without assessing their implications or agreeing to changes in schedule and budget.
Risk management
- Failure to think ahead and to foresee and address potential problems (classic mistake award winner)
- Risk management is seen as an independent activity rather than an integral part of the planning process
- Risks, problems and issues become confused with one another, with the result that the team isn't really doing risk management.
Architecture and design
- Allowing a pet idea to become the chosen solution without considering if other solutions might better meet the project’s overall goal
- The team starts developing individual components without first thinking through an overall architecture or how the different components will be integrated. That lack of architecture then results in duplication of effort, gaps, unexpected integration costs and other inefficiencies
- Failure to take into account non-functional requirements when designing a product, system or process (especially performance requirements) results in a deliverable that is operationally unusable
- Poor architecture results in a system that is difficult to debug and maintain
- Being seduced into using leading edge technology where it is not needed or inappropriate
- Developer “gold plating” (developers implement the Rolls Royce version of a product when a Chevy was all that was needed)
- Trying to solve all problems with a specific tool simply because it is well understood rather than because it is well suited to the job in hand
- New tools are used by the project team without providing the team with adequate training or arranging for appropriate vendor support.
Configuration and information management
- Failure to maintain control over document or component versions results in confusion over which is current, compatibility problems and other issues that disrupt progress
- Failure to put in place appropriate tools for organizing and managing information results in a loss of key information and/or a loss of control.
Quality
- Quality requirements are never discussed, thereby allowing different people to have different expectations of what is being produced and the standards to be achieved
- Failure to plan into the project appropriate reviews, tests or checkpoints at which quality can be verified
- Reviews of documents and design papers focus on spelling and grammar rather than on substantive issues
- Quality is viewed simply in terms of testing rather than as a culture of working
- The team developing the project’s deliverables sees quality as the responsibility of the Quality Assurance group rather than a shared responsibility (the so called “throw it over the wall” mentality)
- Testing focuses on the simple test cases while ignoring the more complex situations, such as error and recovery handling when things go wrong
- Integration and testing of the individual components created in the project is left until all development activities are complete, rather than doing ongoing incremental integration and verification to find and fix problems early
- Testing in a test environment that is configured differently from the target production, or operational environment in which the project’s deliverables will be used.
Project tracking and management
- Believing that although the team is behind schedule, they will catch up later
- The project plan is published but there is insufficient follow up or tracking to allow issues to be surfaced and addressed early. Those failures result in delays and other knock-on problems
- Bad news is glossed over when presenting to customers, managers and stakeholders (aka “Green Shifting”)
- Dismissing information that might show that the project is running into difficulties (i.e. falling prey to the “confirmation bias”)
- Schedule and budget become the driving force; as a result, corners are cut and quality is compromised (pressure to mark a task as complete results in quality problems remaining undetected or being ignored)
- Project is tracked based on large work items rather than smaller increments
- Failure to monitor sub-contractor or vendor performance on a regular basis
- Believing that a task reported by a team member as 90% done really is 90% done (note: the last 10% often takes as long in calendar time as the first 90%)
- Believing that because a person was told something once (weeks or months ago), they will remember what they were asked to do and when they were supposed to do it (failure to put in place a system that ensures people are reminded of upcoming activities and commitments).
Decision making problems
- Key decisions (strategic, structural or architectural type decisions) are made by people who lack the subject matter expertise to be making the decision
- When making critical decisions expert advice is either ignored or simply never solicited
- Lack of “situational awareness” results in ineffective decisions being made
- Failure to bring closure to a critical decision results in wheel-spin and inaction over extended periods of time
- The team avoids the difficult decisions because some stakeholders may be unhappy with the outcome
- Group decisions are made at the lowest common denominator rather than facilitating group decision making towards the best possible answer
- Key decisions are made without identifying or considering alternatives (aka “First Option Adoption”)
- Decision fragments are left unanswered (parts of the who, why, when, where and how components of a decision are made, but others are never finalized) resulting in confusion
- Failure to establish clear ownership of decisions or the process by which key decisions will be made results in indecision and confusion.
Stephen R. Covey has based his foundation for success on the character ethic: things like integrity, humility, fidelity, temperance, courage, justice, patience, industry, simplicity, modesty, and the Golden Rule. The personality ethic (personality growth, communication skill training, and education in influence strategies and positive thinking) is secondary to the character ethic. What we are communicates far more eloquently than what we say or do.
A paradigm is the way we perceive, understand and interpret the world around us; it is our way of looking at people and things. To be effective we need to make a paradigm shift. Most scientific breakthroughs are the result of paradigm shifts, such as Copernicus viewing the sun rather than the earth as the center of the universe. Paradigm shifts are quantum changes, whether slow and deliberate or instantaneous.
A habit is the intersection of knowledge, skill, and desire. Knowledge is the what to do and the why; skill is the how to do; and desire is the motivation or want to do. For something to become a habit you need all three. The seven habits are a highly integrated approach that moves from dependency (you take care of me) to independence (I take care of myself) to interdependence (we can do something better together). The first three habits deal with independence, the essence of character growth. Habits 4, 5, and 6 deal with interdependence, teamwork, cooperation, and communication. Habit 7 is the habit of renewal.
The 7 habits are in harmony with a natural law that Covey calls the “P/PC Balance,” where P stands for production of desired results and PC stands for production capacity, the ability or asset that produces them. For example, if you fail to maintain a lawn mower (PC), it will wear out and not be able to mow the lawn (P). You need a balance between the time spent mowing the lawn (desired result) and maintaining the lawn mower (asset). Assets can be physical, such as the lawn mower; financial, such as the balance between principal (PC) and interest (P); or human, such as the balance between training (PC) and meeting the schedule (P). You need this balance to be effective; otherwise, you will have neither a lawn mower nor a mowed lawn.
Habit 1: Be Proactive
Being proactive means taking responsibility for your life, the ability to choose the response to a situation. Proactive behavior is a product of conscious choice based on values, rather than reactive behavior, which is based on feelings. Reactive people let circumstances, conditions, or their environment tell them how to respond. Proactive people let carefully thought-about, selected, and internalized values tell them how to respond. It’s not what happens to us but our response that differentiates the two behaviors. No one can make you miserable unless you choose to let them. The language we use is a real indicator of our behavior.
Habit 2: Begin with the End in Mind
The most fundamental application of this habit is to begin each day with an image, picture, or paradigm of the end of your life as your frame of reference. Each part of your life can be examined in terms of what really matters to you, a vision of your life as a whole.
All things are created twice; there is a mental or first creation and a physical or second creation to all things. To build a house you first create a blueprint and then you construct the actual house. You create a speech on paper before you give it. If you want a successful organization, you begin with a plan that will produce the appropriate end; thus leadership is the first creation and management the second. Leadership is doing the right things; management is doing things right.
In order to begin with the end in mind, develop a personal philosophy or creed. Start by considering the example items below:
- Never compromise with honesty.
- Remember the people involved.
- Maintain a positive attitude.
- Exercise daily.
- Keep a sense of humor.
- Do not fear mistakes.
- Facilitate the success of subordinates.
- Seek divine help.
- Read a leadership book monthly.
By centering our lives on correct principles, we create a solid foundation for the development of the life-support factors of security, guidance, wisdom, and power. Principles are fundamental truths. They are tightly interwoven threads running with exactness, consistency, beauty, and strength through the fabric of life.
Habit 3: Put first things first
Habit 1 says, “You are the creator. You are in charge.” Habit 2 is the first creation and is based on imagination, leadership based on values. Habit 3 is practicing self-management and requires Habit 1 and Habit 2 as prerequisites. It is the day by day, moment-by-moment management of your time.
The time management matrix pairs urgency with importance: urgent means something demands immediate attention, and important has to do with results that contribute to your mission, goals, and values. Quadrant 1 holds activities that are both urgent and important; Quadrant 2, those that are important but not urgent; Quadrant 3, urgent but not important; and Quadrant 4, neither. Effective, proactive people spend most of their time in Quadrant 2, thereby reducing the time spent in Quadrant 1. Four activities are necessary to be effective. First, write down your key roles for the week (such as research manager, United Way chairperson, and parent). Second, list your objectives for each role, drawing on many Quadrant 2 activities. These objectives should be tied to your personal goals or philosophy from Habit 2. Third, schedule time to complete the objectives. Fourth, adapt the weekly schedule to your daily activities.
Habit 4: Think Win-Win
Win-Win is a frame of mind and heart that constantly seeks mutual benefit in all human interactions. Both sides come out ahead; in fact, the end result is usually a better way. If Win-Win is not possible, then the alternative is no deal. It takes great courage as well as consideration to create mutual benefits, especially if the other party is thinking Win-Lose.
Win-Win embraces five interdependent dimensions of life: character, relationships, agreements, systems, and processes. Character involves the traits of integrity; maturity, which is a balance between being considerate of others and having the courage to express feelings; and the abundance mentality, which means that there is plenty out there for everyone. Relationships mean that the two parties trust each other and are deeply committed to Win-Win. Agreements require the five elements of desired results, guidelines, resources, accountability, and consequences. Win-Win agreements can only survive in a system that supports them; you can't talk Win-Win and reward Win-Lose. To reach Win-Win, a four-step process is needed: (1) see the problem from the other viewpoint; (2) identify the key issues and concerns; (3) determine acceptable results; and (4) seek possible new options to achieve those results.
Habit 5: Seek first to understand, then to be understood
Seeking first to understand involves a paradigm shift, since we usually try to be understood first. Listening is the key to effective communication. It focuses on learning how the other person sees the world, how they feel. The essence of empathic listening is not that you agree with someone; it's that you fully, deeply understand that person, emotionally as well as intellectually. Next to physical survival, the greatest need of a human being is psychological survival: to be understood, to be affirmed, to be validated, to be appreciated.
The second part of the habit is to be understood. Covey uses three sequentially arranged Greek words, ethos, pathos, and logos. Ethos is your personal credibility or character; pathos is the empathy you have with the other person’s communication; and logos is the logic or reasoning part of your presentation.
Habit 6: Synergy
Synergy means that the whole is greater than the sum of its parts. Together, we can accomplish more than any of us can accomplish alone. This is best exemplified by the musical group The Beatles, who as a group created more music than each individual created after the group broke up. The first five habits build toward Habit 6. It focuses the concept of Win-Win and the skills of empathic communication on tough challenges, bringing about new alternatives that did not exist before. Synergy occurs when people abandon their humdrum presentations and Win-Lose mentality and open themselves up to creative cooperation. When there is genuine understanding, people reach solutions that are better than they could have achieved acting alone.
Habit 7: Sharpen the Saw (Renewal)
Habit 7 is taking time to Sharpen the Saw so it will cut faster. It is personal PC: preserving and enhancing the greatest asset you have, which is you. It is renewing the four dimensions of your nature: physical, spiritual, mental, and social/emotional. All four dimensions must be renewed regularly, in wise and balanced ways. Renewing the physical dimension means following good nutrition, getting rest and relaxation, and exercising regularly. The spiritual dimension is your commitment to your value system; renewal comes from prayer, meditation, and spiritual reading. The mental dimension is continuing to develop your intellect through reading, seminars, and writing. These three dimensions require that time be set aside; they are Quadrant 2 activities. The social and emotional dimensions of our lives are tied together because our emotional life is primarily, but not exclusively, developed out of and manifested in our relationships with others. While this dimension does not require setting aside separate time, it does require exercise.
An engineering device or process can be studied either experimentally (testing and taking measurements) or analytically (by analysis or calculations). The experimental approach has the advantage that we deal with the actual physical system, and the desired quantity is determined by measurement, within the limits of experimental error. However, this approach is expensive, time consuming, and often impractical. Besides, the system we are analyzing may not even exist. For example, the entire heating and plumbing systems of a building must usually be sized before the building is actually built on the basis of the specifications given. The analytical approach (including the numerical approach) has the advantage that it is fast and inexpensive, but the results obtained are subject to the accuracy of the assumptions, approximations, and idealizations made in the analysis. In engineering studies, often a good compromise is reached by reducing the choices to just a few by analysis, and then verifying the findings experimentally.
Modeling in Engineering
The descriptions of most scientific problems involve equations that relate the changes in some key variables to each other. Usually, the smaller the increment chosen in the changing variables, the more general and accurate the description. In the limiting case of infinitesimal or differential changes in variables, we obtain differential equations that provide precise mathematical formulations for physical principles and laws by representing the rates of change as derivatives. Therefore, differential equations are used to investigate a wide variety of problems in science and engineering (Fig. 1–16). However, many problems encountered in practice can be solved without resorting to differential equations and the complications associated with them.
The study of physical phenomena involves two important steps.
In the first step, all the variables that affect the phenomena are identified, reasonable assumptions and approximations are made, and the interdependence of these variables is studied. The relevant physical laws and principles are invoked, and the problem is formulated mathematically. The equation itself is very instructive as it shows the degree of dependence of some variables on others, and the relative importance of various terms.
In the second step, the problem is solved using an appropriate approach, and the results are interpreted.
Many processes that seem to occur in nature randomly and without any order are, in fact, being governed by some visible or not-so-visible physical laws. Whether we notice them or not, these laws are there, governing consistently and predictably what seem to be ordinary events.
Preparing very accurate but complex models is usually not so difficult. But such models are not much use to an analyst if they are very difficult and time consuming to solve. At the minimum, the model should reflect the essential features of the physical problem it represents. There are many significant real world problems that can be analyzed with a simple model. But it should always be kept in mind that the results obtained from an analysis are at best as accurate as the assumptions made in simplifying the problem. Therefore, the solution obtained should not be applied to situations for which the original assumptions do not hold.
A solution that is not quite consistent with the observed nature of the problem indicates that the mathematical model used is too crude. In that case, a more realistic model should be prepared by eliminating one or more of the questionable assumptions. This will result in a more complex problem that, of course, is more difficult to solve. Thus any solution to a problem should be interpreted within the context of its formulation.
Problem Solving Technique
The first step in learning any science is to grasp the fundamentals and to gain a sound knowledge of them. The next step is to master the fundamentals by putting this knowledge to the test. This is done by solving significant real-world problems. Solving such problems, especially complicated ones, requires a systematic approach. By using a step-by-step approach, an engineer can reduce the solution of a complicated problem into the solution of a series of simple problems (Fig. 1–17). When solving a problem, we recommend that you use the following steps zealously as applicable. This will help you avoid some of the common pitfalls associated with problem solving.
Step 1: Problem Statement
In your own words, briefly state the problem, the key information given, and the quantities to be found. This is to make sure that you understand the problem and the objectives before you attempt to solve the problem.
Step 2: Schematic
Draw a realistic sketch of the physical system involved, and list the relevant information on the figure. The sketch does not have to be something elaborate, but it should resemble the actual system and show the key features. Indicate any energy and mass interactions with the surroundings. Listing the given information on the sketch helps one to see the entire problem at once. Also, check for properties that remain constant during a process (such as temperature during an isothermal process), and indicate them on the sketch.
Step 3: Assumptions and Approximations
State any appropriate assumptions and approximations made to simplify the problem to make it possible to obtain a solution. Justify the questionable assumptions. Assume reasonable values for missing quantities that are necessary. For example, in the absence of specific data for atmospheric pressure, it can be taken to be 1 atm. However, it should be noted in the analysis that the atmospheric pressure decreases with increasing elevation. For example, it drops to 0.83 atm in Denver (elevation 1610 m).
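The elevation effect mentioned above can be sketched numerically. The model below is the International Standard Atmosphere troposphere formula, which is our assumption; the text only supplies the single Denver data point, and the helper name is ours.

```python
# Estimate atmospheric pressure (in atm) at a given elevation using the
# International Standard Atmosphere troposphere model (an assumption;
# the text only quotes the Denver value).

def pressure_atm(elevation_m: float) -> float:
    """Approximate pressure in atm at `elevation_m` meters above sea level."""
    return (1.0 - 2.25577e-5 * elevation_m) ** 5.25588

# Denver, elevation 1610 m: the model gives roughly 0.82 atm,
# consistent with the 0.83 atm quoted in the text.
print(round(pressure_atm(1610.0), 2))
```

The point of the sketch is the sanity check itself: a stated assumption ("take 1 atm") can be bounded quickly before deciding whether it matters.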
Step 4: Physical Laws
Apply all the relevant basic physical laws and principles (such as the conservation of mass), and reduce them to their simplest form by utilizing the assumptions made. However, the region to which a physical law is applied must be clearly identified first. For example, the heating or cooling of a canned drink is usually analyzed by applying the conservation of energy principle to the entire can.
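As an illustration of this step, the cooling of a canned drink can be modeled by applying an energy balance to the whole can, which leads to Newton's law of cooling. The cooling constant and temperatures below are illustrative assumptions, not data from the text, and the Euler integration is just one simple way to solve the resulting equation.

```python
# Energy balance on the whole can gives Newton's law of cooling,
# dT/dt = -k (T - T_env), integrated here with simple Euler steps.
# k, T0 and T_env are illustrative values, not from the text.

def cool(T0: float, T_env: float, k: float, dt: float, t_end: float) -> float:
    """Temperature of the drink after t_end seconds of cooling."""
    T = T0
    t = 0.0
    while t < t_end:
        T += -k * (T - T_env) * dt  # Euler update of dT/dt = -k (T - T_env)
        t += dt
    return T

# A 25 degC drink in a 4 degC refrigerator with k = 0.001 1/s, after one hour:
print(round(cool(25.0, 4.0, 0.001, 1.0, 3600.0), 1))  # about 4.6 degC
```

Note how the simplifying assumption from Step 3 (uniform can temperature) is what reduces the physical law to a single ordinary differential equation.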
Step 5: Properties
Determine the unknown properties at known states necessary to solve the problem from property relations or tables. List the properties separately, and indicate their source, if applicable.
Step 6: Calculations
Substitute the known quantities into the simplified relations and perform the calculations to determine the unknowns. Pay particular attention to the units and unit cancellations, and remember that a dimensional quantity without a unit is meaningless. Also, don’t give a false implication of high precision by copying all the digits from the screen of the calculator—round the results to an appropriate number of significant digits.
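Rounding to an appropriate number of significant digits can be automated. The helper below is a small sketch; the function name is ours, not from the text.

```python
import math

def round_sig(x: float, sig: int = 3) -> float:
    """Round x to `sig` significant digits."""
    if x == 0:
        return 0.0
    return round(x, sig - 1 - math.floor(math.log10(abs(x))))

print(round_sig(123.456789))     # 123.0  (3 significant digits)
print(round_sig(0.00123456, 2))  # 0.0012
```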
Step 7: Reasoning, Verification, and Discussion
Check to make sure that the results obtained are reasonable and intuitive, and verify the validity of the questionable assumptions. Repeat the calculations that resulted in unreasonable values. For example, insulating a water heater that uses $80 worth of natural gas a year cannot result in savings of $200 a year.
Also, point out the significance of the results, and discuss their implications. State the conclusions that can be drawn from the results, and any recommendations that can be made from them. Emphasize the limitations under which the results are applicable, and caution against any possible misunderstandings and against using the results in situations where the underlying assumptions do not apply. For example, if you determined that wrapping a water heater with a $20 insulation jacket will reduce the energy cost by $30 a year, indicate that the insulation will pay for itself from the energy it saves in less than a year. However, also indicate that the analysis does not consider labor costs, and that this will be the case only if you install the insulation yourself.
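The insulation example above reduces to a one-line payback calculation; the sketch below just makes that arithmetic explicit (the function name is ours).

```python
def payback_years(cost: float, annual_savings: float) -> float:
    """Simple payback period in years, ignoring labor costs and discounting."""
    return cost / annual_savings

# A $20 insulation jacket saving $30 a year pays for itself
# in about 0.67 years, i.e. roughly eight months.
print(round(payback_years(20.0, 30.0), 2))
```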
Keep in mind that the solutions you present to your instructors, and any engineering analysis presented to others, are a form of communication. Therefore neatness, organization, completeness, and visual appearance are of utmost importance for maximum effectiveness. Neatness also serves as a great checking tool, since it is very easy to spot errors and inconsistencies in neat work. Carelessness and skipping steps to save time often end up costing more time and causing unnecessary anxiety.
Fundamentals of Thermal-Fluid Sciences by Yunus A. Çengel and Robert H. Turner.
Arduino is a single-board microcontroller platform designed to make interactive objects and environments more accessible. The hardware is an open-source board designed around an 8-bit Atmel AVR microcontroller or a 32-bit Atmel ARM processor. Current models provide a USB interface, 6 analog input pins, and 14 digital I/O pins, and allow the user to attach various extension boards.
The Arduino Uno is a microcontroller board based on the ATmega328. It has 14 digital input/output pins (of which 6 can be used as PWM outputs), 6 analog inputs, a 16 MHz ceramic resonator, an ICSP header, a USB connection, a power jack, and a reset button. It contains everything needed to support the microcontroller; to get started, simply connect it to a computer with a USB cable, or power it with an AC-to-DC adapter or a battery. The Uno differs from all preceding boards in that it does not use the FTDI USB-to-serial driver chip. Instead, it features an ATmega16U2 (ATmega8U2 up to version R2) programmed as a USB-to-serial converter.
There are many types of Arduino boards, including many third-party compatible versions. The most common official versions are the Arduino Uno R3 and the Arduino Nano V3. Both run a 16 MHz Atmel ATmega328P 8-bit microcontroller with 32 KB of flash memory, 14 digital I/O pins, and six analog inputs; 32 KB may not sound like much (it will not run Windows), but it is ample for embedded work. Arduino projects can be stand-alone, or they can communicate with software running on a computer (e.g. Flash, Processing, Max/MSP). The board is clocked by a 16 MHz ceramic resonator, has a USB connection for power and communication, and can easily be extended with micro SD/SD card storage for bigger tasks.
Features of the Arduino Uno Board
- An easy USB interface: the chip on the board plugs straight into your USB port and registers on your computer as a virtual serial port. The benefit of this setup is that serial communication is an extremely simple, time-tested protocol, and USB makes connecting to modern computers convenient.
- An easy-to-find microcontroller brain, the ATmega328 chip, with plenty of hardware features such as timers, external and internal interrupts, PWM pins, and multiple sleep modes.
- An open-source design. The advantage of being open source is that a large community of people is using and troubleshooting it, which makes it easy to find help when debugging projects.
- A 16 MHz clock, which is fast enough for most applications without overtaxing the microcontroller.
- Convenient power management with built-in voltage regulation. The board can be powered directly from a USB port without any external supply, or you can connect an external source of up to 12 V, which it regulates to both 5 V and 3.3 V.
- 14 digital pins and 6 analog pins. These pins let you connect external hardware to your Arduino Uno and are the key to extending its computing capability into the real world. Simply plug your electronic devices and sensors into the sockets that correspond to each of these pins and you are good to go.
- An ICSP connector for bypassing the USB port and interfacing with the Arduino directly as a serial device. This port is needed to re-bootload your chip if it becomes corrupted and can no longer talk to your computer.
- 32 KB of flash memory for storing your code.
- An on-board LED attached to digital pin 13, which makes debugging code quick and easy.
- Finally, a button to reset the program on the chip.
Arduino was created in 2005 by two engineers, David Cuartielles and Massimo Banzi, with the goal of helping students learn to program microcontrollers, improve their electronics skills, and apply both in the real world.
The Arduino Uno microcontroller can sense the environment by receiving input from a variety of sensors and can affect its surroundings by controlling lights, motors, and other actuators. The microcontroller is programmed using the Arduino programming language (based on Wiring) and the Arduino development environment (based on Processing).
ATmega168/328-Arduino Pin Mapping
- The Arduino integrated development environment (IDE) is a cross-platform application written in Java, derived from the IDEs for the Processing programming language and the Wiring project.
- The Arduino Uno board can be programmed with the Arduino software.
- Select “Arduino Uno” from the Tools > Board menu (according to the microcontroller on your board).
- The ATmega328 on the Arduino Uno comes preburned with a bootloader that allows you to upload new code to it without the use of an external hardware programmer. It communicates using the original STK500 protocol.
- You can also bypass the bootloader and program the microcontroller through the ICSP (In-Circuit Serial Programming) header.
- The ATmega16U2 (or 8U2 in the rev1 and rev2 boards) firmware source code is available.
The ATmega16U2/8U2 is loaded with a DFU bootloader, which can be activated by:
- On Rev1 boards: connecting the solder jumper on the back of the board (near the map of Italy) and then resetting the 8U2.
- On Rev2 or later boards: a resistor pulls the 8U2/16U2 HWB line to ground, making it easier to put the chip into DFU mode.
You can then use Atmel’s FLIP software (Windows) or the DFU programmer (Mac OS X and Linux) to load a new firmware. Or you can use the ISP header with an external programmer (overwriting the DFU bootloader).
Real-Time Applications of Arduino Uno Board
This project uses the Arduino Uno board to build a Bluetooth-based home automation system that is remotely controlled and operated from an Android smartphone. Houses are becoming smarter through such advanced technologies: modern homes are gradually shifting from conventional switches to centralized control systems with remotely controlled switches.
Arduino Based Home Automation
To achieve this, a Bluetooth module is interfaced with the Arduino Uno board at the receiver end, while at the transmitter end a graphical user interface (GUI) application on the phone sends ON/OFF commands to the receiver where the loads are connected. By touching the designated location on the GUI, the lamps used as loads in this project can be turned ON or OFF remotely. The loads are operated by the Arduino Uno board through triacs and opto-isolators.
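The receiver's control loop is conceptually simple: read a command string from the Bluetooth serial link and switch the matching load. The real firmware would be an Arduino C++ sketch driving output pins; the Python sketch below only models the command handling, and the `LOAD1:ON` command format is an assumption, not taken from the original project:

```python
def handle_command(command, load_states):
    """Parse a 'NAME:ON' / 'NAME:OFF' command and update the load states.

    load_states is a dict mapping load names to booleans (True = on).
    Returns True if the command was recognised and applied.
    """
    try:
        name, action = command.strip().upper().split(":")
    except ValueError:
        return False  # malformed command, ignore it
    if name not in load_states or action not in ("ON", "OFF"):
        return False
    load_states[name] = (action == "ON")
    return True

loads = {"LOAD1": False, "LOAD2": False}
handle_command("load1:on", loads)  # a GUI touch turns lamp 1 on
print(loads)  # {'LOAD1': True, 'LOAD2': False}
```

On the actual board, applying a state change would mean driving the digital pin wired to the corresponding opto-isolator and triac.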
Arduino based Auto Intensity Control of Street Lights
Because the intensity of High Intensity Discharge (HID) lamps cannot be controlled, power saving is not possible in street lights that use them, even though traffic density on the roads decreases from the peak hours of the night toward early morning.
Arduino Based Auto Intensity Control
Thus, this system overcomes the problem by gradually reducing the intensity of LED street lights through control of the voltage applied to the lamps. The system uses an Arduino board to produce PWM pulses and is programmed so that it decreases the applied voltage step by step through the late night and shuts the lamps down completely in the morning.
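The dimming schedule just described amounts to a mapping from time of day to PWM duty cycle. The Python sketch below models only that logic; the hour thresholds and duty values are illustrative assumptions, and on the actual board the returned value would be fed to a PWM output (Arduino's `analogWrite` takes 0 to 255):

```python
def street_light_duty(hour):
    """Return a PWM duty value (0-255, Arduino analogWrite scale) for an hour (0-23).

    Full brightness during the evening peak, stepped down through the
    night, off during daylight. Thresholds are illustrative only.
    """
    if 18 <= hour <= 22:          # evening peak traffic: full intensity
        return 255
    if hour == 23 or hour == 0:   # late night: reduced intensity
        return 180
    if 1 <= hour <= 4:            # early morning: dim
        return 100
    return 0                      # daytime: lamps off

print(street_light_duty(20), street_light_duty(2), street_light_duty(10))  # 255 100 0
```

The step function could equally be replaced by a smooth ramp; the point is that a single lookup decides the duty cycle the firmware applies each hour.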
Thus, the Arduino development board can sense the environment by receiving input from different sensors and affect its surroundings by controlling motors, lights, and other actuators. The microcontroller on the board is programmed using the Arduino programming language. Thank you for your attention to this article; if you have any doubts about Arduino projects, please ask in the comments below.
Arduino Uno Board with Real-Time Application Projects
What Is Haptic / Tactile Feedback?
Haptic / tactile feedback (or haptics) is the use of advanced vibration patterns and waveforms to convey information to a user or operator. The word ‘haptics’ is derived from the Greek for ‘I touch’.
Many different applications can benefit from haptic feedback
Many products are designed to communicate with their users. Historically this has been done with audible and visual alerts such as LEDs, beeps, and bells. Haptic feedback, and its simpler relative ‘vibration alerting’, are in increasing demand to augment or replace these older alert methods.
Haptics uses a vibrating component (sometimes called an actuator), such as a vibration motor or a linear resonant actuator, which is driven by an electronic circuit. It is common for a microcontroller to decide when to vibrate and with which pattern, and for a dedicated haptic driver chip to control the actuator. Of course there is a range of variations on the engineering side; you can read about them in the Essentials for Haptics section of the Adding and Improving Haptics page. But first we continue with our introduction to haptic technology.
Where Can I Find Haptic Feedback?
Whilst the old LEDs and other notification methods are still effective in many applications, there are many others where product functionality can be improved by replacing or combining our sense of hearing and sight with our sense of touch.
Adding haptic feedback has two major benefits for manufacturers. First, it can improve user experience. Even everyday products are now being built with touch displays and interfaces. They’re cheaper to construct than control panels with buttons or switches, and designers can make context specific user interfaces simply by changing the graphical layout on the screen.
Second, haptics can also improve the performance of operators. Using vibrations to transmit information through the control system allows the user to concentrate on the task at hand. This can range from simple input confirmations to safety alarms or even positioning information. Medical applications that use haptic confirmation when data has been entered have been shown to help reduce patient misdosing in hospitals.
Touchscreens implement haptic feedback
A simple example of these benefits can be found in tablet PCs. When typing on a touchscreen virtual keyboard, a short ‘button press’ effect lets the user know the keystroke has been recognised, and differentiates it from a ‘long press’ effect. This haptic feedback enables the user to type more quickly with fewer mistakes, whilst making the process less frustrating and more like typing on a real keyboard.
The Difference Between Haptic Feedback And Vibration Alerting
This is often an area of confusion, especially because there is no strict guidance as to what separates the two. In truth, they are both very similar. Here at Precision Microdrives, we define the difference between haptics and vibration alerting by the complexity of the vibration pattern.
Although they both use vibrations to communicate with the user, the key difference is that haptic feedback devices often use a variety of advanced waveforms to convey information to the user. Vibration alerting products are less complicated and are generally designed to produce a strong enough vibration to alert the user of an event.
Haptics can improve existing feedback
Imagine a reversing car’s parking sensor where upon coming within 50 cm of an object the steering wheel begins to vibrate. This is an example of vibration alerting as it notifies the user of an event with a simple vibration pattern. In reality, the driver would prefer to know how far they are from the object. With haptic feedback we can transmit this information to the driver by varying the vibration strength or frequency over a range of distances. This also removes the high pitched beeping found on current sensors, and ensures the driver can use the parking sensor if they are in a loud environment or hard of hearing.
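One possible distance-to-strength mapping for the parking-sensor example can be sketched in a few lines. The 50 cm threshold comes from the example above; the linear ramp and the 8-bit amplitude scale are assumptions for illustration, not a description of any particular haptic driver:

```python
def vibration_amplitude(distance_cm, max_range_cm=50.0):
    """Map obstacle distance to a drive amplitude (0-255).

    Beyond max_range_cm there is no vibration; the amplitude ramps up
    linearly as the obstacle gets closer, reaching full strength at 0 cm.
    """
    if distance_cm >= max_range_cm:
        return 0
    distance_cm = max(distance_cm, 0.0)
    return round(255 * (1 - distance_cm / max_range_cm))

print(vibration_amplitude(60))  # 0: out of range, no vibration
print(vibration_amplitude(25))  # 128: halfway, roughly half strength
print(vibration_amplitude(0))   # 255: touching, full strength
```

Varying the vibration frequency over distance instead of (or alongside) the amplitude would follow the same pattern, just with a different output scale.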
Here’s a useful infographic to help highlight the key differences between the two technologies:
Haptic Feedback vs Vibration Alerting
How Can I Experience Haptic Feedback?
Experiencing basic haptic feedback is fairly easy. As we mentioned above, it is present in a variety of real world applications and products. Most will easily understand simple haptics from using their smartphone, although this is just scratching the surface.
If you’re interested in experimenting with advanced haptic feedback, but don’t want to build the detailed circuitry, then our Haptic Feedback Evaluation Kit is the perfect solution. A list of all of its features can be found here, and you can order it online through our product catalogue.
By simulating both a handheld product and a mounted capacitive touch surface in one kit, you can evaluate haptic feedback and easily share your findings with colleagues and managers. It is an easy first step to improving your product with haptics.
Alternatively if you are ready to (or more interested in) building your own system, you can read our “Adding and Improving Haptic Feedback” page for circuit and design details on what is required for an effective haptic feedback solution. For further reading, see how we created our evaluation kit here.
Haptic Feedback at the Fingertips
From advanced sensors to artificial intelligence, vehicles of all types are quickly becoming home to the latest electronics technology.
The transportation space has seen a burst of technology—not in one particular area, but rather across the board from improvements in electrical power systems to extremely sophisticated telematics to self-driving cars. Cars today have more electronics than ever before. Much more is coming, though, as features such as advanced driver-assistance systems (ADAS) become standard features instead of expensive options.
These changes are being made possible by improvements in sensors, processors and memory, software, and even human interfaces that need to be integrated in real time (Fig. 1). Here are some of the latest technologies and how their relationship with other technologies makes them even more important in automotive environments.
1. Multiple, overlapping sensors are needed to provide information for systems to build situational awareness in order to implement safety-critical ADAS support.
Smartphones have turned tiny digital cameras into commodity items in a way that other applications—digital cameras, for instance—could not. Automotive applications continue to benefit from the availability of cameras that can stream 4K video. High-definition cameras are being used for obstacle and object recognition for forward-looking ADAS applications in conjunction with artificial-intelligence (AI) machine-learning (ML) software. Here the higher resolution is important, and it’s useful for backup cameras, too.
Multiple cameras are also being used to provide a bird’s-eye view around the car. Renesas’ R-Car development kit knits together video streams from four cameras into a 360-degree view (Fig. 2). This is very useful when parking or navigating in tight quarters. More advanced ADAS systems highlight areas of potential oncoming collisions.
2. Renesas’ R-Car SoC is able to generate a 360-degree, bird’s-eye view around a vehicle by knitting together video streams from four cameras.
Two other range sensors that have shown significant improvements lately are LiDAR and phased-array radar. The general technology is not new, but major advances in miniaturization and cost reductions will affect when and where these systems are being utilized.
For example, Innoviz (Fig. 3), LeddarTech, Quanergy, and Velodyne are just a few companies delivering 3D, solid-state LiDAR systems. These systems, which are applicable in other areas like robotics (see “Bumping into Cobots”), are getting so small that multiple units will be hidden around a car.
3. Innoviz is just one of many vendors delivering 3D LiDAR technology. The InnovizOne has a 200-m range with better than 2-cm depth accuracy. It maintains a 100- by 25-degree field of view with 0.1- by 0.1-degree spatial resolution. The device delivers 25 frames/s with a 3D resolution rate of over 6 Mpixels/s.
Phased-array radar overcomes many of the limitations of LiDAR, allowing it to operate in rain and snow that can otherwise fool optical systems. Radar can be used to complement LiDAR and image systems. A number of companies are working to deliver technology in this area. For example, Texas Instruments’ (TI) single-chip millimeter-wave sensor, mmWave, handles 76- to 81-GHz sensor arrays for sensor and ADAS applications (see “Low-Cost Single Chip Drives Radar Array”).
All of these technologies have applications in other areas from manufacturing to security and even 3D scanning and printing.
AI and ML are garnering the limelight these days because they bring efficient image recognition to ADAS that’s critical for safe self-driving or augmented driving experiences. The underlying technology is based on deep neural networks (DNNs) and convolutional neural networks (CNNs) (see “What’s the Difference Between Machine Learning Techniques?”).
Neural networks will not replace conventional software applications, even in automotive environments, but they solve hard problems. Combined with new hardware, they can also do it in real time, which is needed in safety-critical applications such as self-driving cars. Multicore processors help in this case, but GPUs work even better (Fig. 4). Custom hardware bests them all (see “CPUs, GPUs, and Now AI Chips”), and even specialized digital-signal processors (DSPs) can handle machine-learning chores (see “DSP Takes on Deep Neural Networks”).
4. The Drive PX2 from Nvidia is just the latest of a series of multicore CPU/GPU solutions targeted at automotive applications.
The parallel-processing nature of these solutions plays well to the multicore and transistor count growth in designs, even as upper-level clock frequencies have peaked. The more tailored solutions also have lower power requirements compared to more conventional processor solutions.
Advances in in-vehicle infotainment (IVI) systems are changing what drivers and passengers are able to visualize, as well as how they can link their smart devices and cloud-based applications to their car. Cellular-based Wi-Fi hot spots in a car are available from all vehicle manufacturers. The plethora of options requires a more robust and open approach. On that front, the GENIVI Alliance (see “Automotive Technology Platform Developed for Linux-Based Systems”) fosters open standards that are operating-system agnostic.
The Linux Foundation’s Automotive Grade Linux (AGL) is one example of an IVI system that has received wide vendor support. AGL will be used in Toyota’s 2018 Camry (Fig. 5) as well as future Toyota vehicles (see “Toyota Including Automotive Grade Linux Platform in 2018 Camry”).
5. Toyota’s 2018 Camry will be running Automotive Grade Linux (AGL) for its in-vehicle infotainment (IVI) system.
The number of applications and tasks running on automotive systems can be staggering when one considers the amount of information being produced from the large collection of sensors, to the data processed and generated by AI systems, to streaming video moving over in-vehicle networks. Managing data distribution in safety-critical areas can benefit from standards like the Object Management Group’s (OMG) Data Distribution Service (DDS) that can provide secure, real-time, publish-subscribe managed data exchange throughout the system (see “Should DDS be the Base Communication Framework for Self-Driving Cars?”). This approach scales better than many point-to-point solutions typically found in designs that require fewer connections between applications.
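DDS itself is a full standard with QoS policies and discovery, but the core topic-based publish-subscribe pattern it manages can be sketched in a few lines. The Python toy below is not the DDS API (the class and method names are invented for illustration); it shows why pub/sub scales better than point-to-point wiring: publishers and subscribers only know the topic name, never each other:

```python
class TopicBus:
    """Toy topic-based publish-subscribe bus (illustrative, not DDS)."""

    def __init__(self):
        self._subscribers = {}  # topic name -> list of callbacks

    def subscribe(self, topic, callback):
        """Register a callback to receive every message on a topic."""
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        """Deliver a message to all subscribers of the topic.

        The publisher needs no knowledge of who (if anyone) is listening.
        """
        for callback in self._subscribers.get(topic, []):
            callback(message)

bus = TopicBus()
received = []
bus.subscribe("lidar/points", received.append)  # e.g. the sensor-fusion module
bus.subscribe("lidar/points", received.append)  # e.g. a data logger
bus.publish("lidar/points", {"range_m": 42.0})
print(len(received))  # 2: one publish reached both subscribers
```

Adding a tenth subscriber costs one `subscribe` call rather than ten new point-to-point connections, which is the scaling advantage the paragraph above describes; real DDS adds security, real-time QoS, and network-wide discovery on top of this pattern.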