Friday, February 5, 2016

Training Personnel Limits Forklift Accidents

When forklifts were first invented in the early Twentieth Century, they used basic hydraulic lifts to carry loads that would be too difficult for a worker to move by hand. The technology advanced and became widespread after World War II, when battery-powered forklifts carried heavier loads to heights of over 50 feet with the use of standardized pallets. This large-scale usage of forklifts brought greater variation in workloads and more personnel, which led to greater workplace hazards and an increased interest in keeping workers safe from them. Despite today's strict safety guidelines, forklift-related injuries and fatalities still occur every year. They can be limited through worker training, legislation, and adherence to standardized practices.

According to OSHA, forklifts cause 85 fatalities in the United States every year. Fatalities, however, account for only a small share of forklift accidents: 34,900 workers are seriously injured annually, and another 61,800 are the victims of incidents classified as non-serious. The Industrial Truck Association estimates that there are about 855,900 forklifts actively used in the United States, meaning that roughly 11 percent of all forklifts nationwide are involved in some kind of accident each year.
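
To see where that figure comes from, here is a quick back-of-the-envelope check — a minimal Python sketch using only the numbers cited above:

```python
# Back-of-the-envelope check of the accident figures cited above.
fatalities = 85             # OSHA: annual forklift-related deaths
serious_injuries = 34_900   # annual serious injuries
non_serious = 61_800        # annual non-serious incidents
active_forklifts = 855_900  # Industrial Truck Association estimate

total_incidents = fatalities + serious_injuries + non_serious
rate = total_incidents / active_forklifts
print(f"{total_incidents} incidents / {active_forklifts} forklifts "
      f"= {rate:.1%} of forklifts involved in an accident each year")
# -> 96785 incidents / 855900 forklifts = 11.3% ...
```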


Causes of forklift injuries and fatalities include falls from forklifts, workers being crushed by forklifts, workers being struck by forklifts, and forklift overturns. Of these, overturns are the most common, accounting for 42 percent of all forklift-related deaths. Overturns occur due to an imbalance in the forklift's center of gravity. Without a load, the center of gravity sits in the middle of the forklift, but it moves toward the front of the machine when a load is carried. To compensate for this shift, the forklift carries a heavy counterweight over its rear wheels or, in some designs, raises the operator along with the forks. The shifted center of gravity makes it difficult to turn or stop quickly, and if the forklift is operated like an automobile, it can tip over. This can toss the operator out of the seat just before he or she is crushed by the overloaded, tipping machine.
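
The physics can be pictured as a simple moment balance about the front axle, which acts as the tipping fulcrum. The sketch below is purely illustrative Python; the weights and distances are invented example values, not figures from any standard or manufacturer:

```python
# Illustrative moment balance about the front axle (the tipping fulcrum).
# All weights and distances are hypothetical example values.
truck_weight = 4000      # kg, acting at the truck's own center of gravity
truck_cg_to_axle = 1.2   # m, truck CG behind the front axle
load_cg_to_axle = 0.6    # m, load CG ahead of the front axle

def tips_forward(load_weight: float) -> bool:
    """True if the load's tipping moment exceeds the counterweight's restoring moment."""
    load_moment = load_weight * load_cg_to_axle
    restoring_moment = truck_weight * truck_cg_to_axle
    return load_moment > restoring_moment

for load in (2000, 6000, 9000):
    print(load, "kg:", "TIPS FORWARD" if tips_forward(load) else "stable")
# 2000 kg: stable; 6000 kg: stable; 9000 kg: TIPS FORWARD
```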

All of these primary causes for accidents and fatalities can be easily reduced through proper training of personnel. OSHA advises this but also acknowledges that it would be impossible to eliminate all accidents. It is also essential that workers abide to the skills and knowledge taught during their training, which is currently an issue. For example, during most overturns, the worker can remain safe by simply staying in the forklift seat as the machine falls onto its side. Despite this, many deaths occur from the workers not wearing seatbelts while operating forklifts and falling out to their deaths when a turnover occurs.

There is also a variety of standards that ensure the safety of workers and the efficiency of forklifts. These include:




Thursday, February 4, 2016

ANSI Z136.6 - Safe Use of Lasers Outdoors

Under Title 18, Chapter 2 of the United States Code, as amended by the FAA Modernization and Reform Act of 2012, it is a federal offense to knowingly aim the beam of a laser pointer at an aircraft in the United States. This prohibition reduces the hazards associated with using recreational lasers in outdoor environments, but it does not eliminate all risks related to laser use. Lasers have a variety of practical uses outdoors and need to be handled in a manner that allows them to operate properly without harming the users or anyone in the laser's path. ANSI Z136.6-2015 - Safe Use of Lasers Outdoors specifies guidelines for designers, users, and operators of lasers or laser systems for outdoor use, and it has many applications, including laser light shows, lasers used for outdoor scientific research, and military lasers.

One of the key concerns with using lasers (Light Amplification by Stimulated Emission of Radiation) outdoors is that their beams can travel freely, unblocked by the obstructions that contain them indoors. Diffraction causes a laser beam to expand the farther it travels, posing hazards to a greater number of people. The standard also addresses atmospheric effects unique to outdoor use, such as temperature variations causing scintillation, which produces fluctuations in the energy distribution of the beam, and aerosols causing attenuation, which reduces the power the beam delivers.
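
To get a feel for why beam expansion matters over outdoor distances, a simple divergence model treats the beam diameter as growing roughly linearly with range, d(z) ≈ d0 + 2·z·tan(θ/2), so the average irradiance falls off with the square of the diameter (atmospheric attenuation would reduce it further). The Python sketch below uses hypothetical beam parameters, not values from the standard:

```python
import math

# Hypothetical handheld laser: 5 mW output, 1.5 mm aperture,
# 1 mrad full-angle divergence. Example values only.
power_w = 0.005
d0_m = 0.0015
divergence_rad = 0.001

def beam_diameter(z_m: float) -> float:
    """Approximate beam diameter (m) after traveling z meters."""
    return d0_m + 2 * z_m * math.tan(divergence_rad / 2)

def irradiance(z_m: float) -> float:
    """Average irradiance (W/m^2) over the beam cross-section at range z."""
    r = beam_diameter(z_m) / 2
    return power_w / (math.pi * r * r)

for z in (1, 100, 1000):
    print(f"{z:>5} m: diameter {beam_diameter(z)*100:.2f} cm, "
          f"irradiance {irradiance(z):.3g} W/m^2")
# The beam that is millimeters wide at arm's length is about a meter
# wide at 1 km, with its irradiance reduced by a factor of ~100,000.
```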


Outdoor Light Show


Outdoor lasers are often used for intentional illumination, such as in outdoor light shows. This can present hazards to people's eyes and vision, so it is essential that operators keep the beams confined to a designated area. While stray laser beams are especially harmful at night, they also pose dangers during the day. ANSI Z136.6-2015 summarizes the transient effects of intense light on visual function and the factors that control these effects. The standard also advises that lasers should not be used to intentionally interfere with vision.

Lasers used for intentional illumination are also concerning because of their potential to interfere with the operators of vehicles and heavy machinery. For example, upward-facing lasers used in outdoor scientific research can interfere with airplanes and even spacecraft. As a precaution, regional FAA offices should be contacted prior to any laser operations that could interfere with air traffic.

ANSI Z136.6-2015 serves as a companion to ANSI Z136.1-2014: Safe Use of Lasers, which defines the different laser safety classes and covers guidelines for creating a safe environment for using lasers indoors. The two are available together in the ANSI Z136.1 AND Z136.6 Combination Set.

Preventing Mosquito-Borne Diseases

On February 1, 2016, the World Health Organization declared the Zika virus outbreak in Latin America a global health emergency requiring a unified response, due to the roughly 4,000 reported cases of microcephaly, a birth defect in which the brain is underdeveloped, linked to the virus in newborns. There is no known cure for the Zika virus, which is primarily transmitted by the Aedes mosquito. People in countries with reported cases of Zika can protect themselves the same way they would from any other mosquito-borne virus: by wearing clothing that leaves little exposed skin, using insect repellent, and avoiding prolonged periods outdoors.

Mosquito mouthparts contain needle-like mandibles and maxillae that allow them to feed on nectar and plant juices, and these structures have also adapted to puncture animal skin and draw blood. Males drink mainly nectar, while females often feed on blood, since they need its protein and iron to produce eggs. Beyond the irritation of the allergic reaction at the bite site, mosquitoes are harmful because feeding on infected blood can turn them into vectors for pathogens, which they can then pass on to people and animals through subsequent blood feeding, spreading disease.


Aedes Mosquito

Different factors determine the spread of mosquito-borne diseases. In the symbiotic relationship between vector and pathogen, the pathogen, whether it is the Zika virus, West Nile virus, or the malaria parasite, rarely harms the mosquito. The average lifespan of a mosquito is also very short, somewhere between four days and one month. Because of this, mosquitoes have adapted to produce massive numbers of offspring, which emerge from the hundreds of eggs that females lay in still water. This prolific reproduction can greatly benefit certain pathogens, since an infected female can pass the infection to her offspring, which can then spread it to the variety of animals on which female mosquitoes feed.

In an environment where a mosquito species is abundant, it can be selective about whom it feeds on, seeking out specific levels of carbon dioxide and other chemicals given off by the body and present in sweat on the skin. This guides mosquitoes to the target with the most suitable blood. When the population has dwindled, however, mosquitoes will feed on almost any animal, putting many humans and other animals at risk.

While precautions against mosquito bites are almost entirely in the hands of the individual, there should be assurance that the methods of mitigating mosquito bites are reliable. Current mosquito repellents are generally formulated with DEET, picaridin, or permethrin as the primary active ingredient. Of these, DEET is generally more effective than picaridin, and permethrin is long-lasting but intended only for use on clothing.


Applying bug spray


ASTM E939-94(2012) - Standard Test Method of Field Testing Topical Applications of Compounds as Repellents for Medically Important and Pest Arthropods (Including Insects, Ticks, and Mites): Mosquitoes provides testing guidelines for determining the resiliency of candidate mosquito-repelling compounds that have already undergone some laboratory testing. The standard addresses the stage of testing performed directly on skin: a measured amount of the candidate material is applied to a forearm or leg, and that region is repeatedly exposed to mosquitoes to determine the length of time that the treatment provides complete protection. This research ensures the reliability of a product and can help to discover new repellents.
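
As a rough illustration of the kind of measurement involved, the Python sketch below estimates complete protection time as the start of the first exposure period in which a bite is observed. The data and logic are invented for demonstration and greatly simplify the standard's actual procedure:

```python
# Illustrative only: simplified estimate of complete protection time (CPT).
# Each tuple is (minutes after application, bites observed during exposure);
# the data are made up for demonstration.
exposures = [(30, 0), (60, 0), (90, 0), (120, 0), (150, 1), (180, 3)]

def complete_protection_time(exposures):
    """Return minutes until the first exposure with a confirmed bite."""
    for minutes, bites in exposures:
        if bites > 0:
            return minutes
    return None  # no bites observed within the test period

cpt = complete_protection_time(exposures)
print(f"Complete protection time: {cpt} minutes")  # -> 150 minutes
```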

Additional materials help to prevent exposure to mosquitoes. For example, screening enclosures used to keep mosquitoes out of homes and patios prevent the insects from even getting close to people, reducing the need for repellent sprays and covering clothing. ASTM D3656/D3656M-13 - Standard Specification for Insect Screening and Louver Cloth Woven from Vinyl-Coated Glass Yarns provides guidelines to ensure that these screens reliably keep mosquitoes, and the viruses they carry, away from people.

For more information on the recent outbreak, visit the WHO Zika virus webpage.

Tuesday, February 2, 2016

3D Printing Human Organs

3D printing and additive manufacturing use a variety of materials to print three-dimensional objects for many different manufacturing purposes, such as replacing a single broken part in an otherwise functional system. The 3D printing of human organs and other spatial cellular patterns and biological tissues, also called bioprinting, has the potential to replace a broken or unusable part of the human body.

The end product of 3D-bioprinted cellular material differs from a cluster of cells cultured in a petri dish, since it actually recreates the multiple layers of cellular tissue. Recreating those layers, however, remains a great challenge for the more complex organs of the human body. So far, the only organs and bodily structures that have been successfully 3D bioprinted are those with relatively simple three-dimensional structures: flat structures such as skin, tubular structures such as urine tubes (urethras) and blood vessels, and hollow structures like the bladder.


3D Print Heart


The first patent for a 3D bioprinter was granted in 2006 for the "ink-jet printing of viable cells," a process in which at least 25 percent of the printed cells remain viable after 24 hours of incubation. Traditional inkjet printers heavily inspired early 3D bioprinters, many of which held the cells used for printing within the walls of inkjet cartridges. Today, bioprinters that follow this same model can print human skin from an ink cartridge directly onto a wound.

The company Organovo created the first commercially produced bioprinter, the NovoGen MMX Bioprinter. Organovo's bioprinter uses "bio-ink," or blocks of biological cells generated from cells taken from the target tissue, to reproduce that tissue's key architectural and compositional elements. The printer distributes the bio-ink through a traditional layer-by-layer additive manufacturing approach, with an additional robotic print head that prints bio-inert hydrogel components to serve as structural supports.
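
Conceptually, the dual-head, layer-by-layer process can be pictured as the toy loop below. This is a purely illustrative Python sketch of the deposition order, with invented layer maps; it is not Organovo's control software:

```python
# Toy sketch of dual-head, layer-by-layer deposition: one head extrudes
# bio-ink, the other lays down bio-inert hydrogel as temporary support.
# The layer maps are invented for illustration: 'B' = bio-ink, 'S' = support.
layers = [
    ["SBBS",
     "SBBS"],
    ["SBBS",
     "SBBS"],
]

def deposit(material, x, y, z):
    # Stand-in for a printer command; here we just log the move.
    print(f"deposit {material:8s} at ({x}, {y}, layer {z})")

def print_tissue(layers):
    # Build the structure one layer at a time, bottom to top.
    for z, layer in enumerate(layers):
        for y, row in enumerate(layer):
            for x, cell in enumerate(row):
                if cell == "B":
                    deposit("bio-ink", x, y, z)
                elif cell == "S":
                    deposit("hydrogel", x, y, z)

print_tissue(layers)
```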

This multilayered tissue derived from complex organs is currently used to test the impact of drugs and other pharmaceuticals without harming humans or animals. In addition, much research is being conducted into how these tissues could eventually serve as a therapy for patients with diseased or damaged tissue. Organovo currently focuses its printing work on the liver and its cells; since 2013, the organization has been able to create liver tissue up to 20 cell layers thick that can last longer than 40 days for testing.




The term bioprinting is not limited to the printing of biological material. It also applies to the 3D printing and additive manufacturing of medical devices, such as prosthetics or titanium and ceramic surgical implants. Replacement appendages designed and manufactured through 3D printing have been successfully fitted to the bodies of living beings, such as a prosthetic ear for a four-year-old boy and a replacement leg for a dog that lost its own in an accident.

Bioprinted materials share an issue common to all surgical implants and other foreign materials: the body can reject them. This is especially worrisome for replacement cells and organs, since they may be necessary for keeping the patient alive. Creating bioprinted materials that exactly match the tissues and organs from which they derive will be essential to making transplantation possible.

Other issues will arise when bioprinting eventually leaves its experimental stage. For example, it will need to be decided whether the technology will be used simply for drug testing and education, or whether complete printed organs will actually be transplanted into patients. Transplanting manufactured organs could save the lives of the many people who today depend on donor organs, including the 122,000 people on the United States organ transplant waiting list. However, ethical issues related to printing human tissue are predicted to surface very soon, such as the possibility of enhancing printed organs with "nonhuman" capabilities.

Friday, January 29, 2016

Ergonomic Hazards of Touch Screens

Touch screens have become a widely used input method for computers, cell phones, tablets, and video game controllers. While this type of gesture-based interface can be an easy and straightforward way of interacting with computers, the arm and hand movements needed for touch screen input are not always ergonomically sound and can lead to health problems for those who use them.

The selection of physical input devices for the ergonomics of any kind of human-system interaction, not just touch screens, is addressed in ISO 9241-420:2011 - Ergonomics of human-system interaction - Part 420: Selection of physical input devices. This standard defines the different input devices, which detect changes in user behavior, that can be incorporated into workstations, along with clear criteria for determining the appropriate input devices based on their usability for the worker's specific tasks.

In addition, ISO 9241-400:2007 - Ergonomics of human-system interaction - Part 400: Principles and requirements for physical input devices gives guidelines for the use and implementation of those input devices. These specifications for hardware identify ergonomic needs for touch screens and related equipment, such as styluses and light pens.


Touch Screen Computer Workstation


The major ergonomic concerns with computer workstations are strain from continuous hand movement, pain from improper sitting positions, and eye strain due to the user's distance from the workstation. Strain on the fingers, arms, and shoulders is particularly alarming for touch screens, since these screens often require movement of the arm and shoulder during input, something that is rare with keyboards and computer mice.

Luckily, many ergonomic issues with touch screens do not apply to every device that has one. Phones and tablets, for example, are generally held in the hand, giving the user control over the device's position and angle. By freely moving tablets and smartphones during use, users avoid placing significant strain on their arms and shoulders. Additionally, these devices are intended to be used for much shorter periods than a workstation computer.

Despite this, users can very easily use these devices for long stretches, and most touch screens can ultimately contribute to carpal tunnel syndrome and repetitive strain injuries. These problems are much more likely to arise from a touch screen laptop or desktop, in which the user's finger takes the place of the mouse in moving the cursor. Because the screen rests parallel to the user's body, he or she is forced to perform interface gestures through flexion and extension of the arm, hand, and fingers.


"Gorilla Arm" Shoulder Pain
Continuous computer touch screen use can cause "gorilla arm"

The shoulder pain that comes from continuously extending the arm to interact with a touch screen has been nicknamed "gorilla arm," and it was responsible for killing the first wave of touch computing in the 1980s. Additional problems with horizontal touch screens include the difficulty of thick fingers interacting with scroll bars and menus designed for much smaller arrow cursors, and the grease and moisture that users' fingers leave on the screen.

Touch screens are often very user-friendly interfaces, since users can feel as if they are directly interacting with their computer's software. However, it is not ideal for a computer workstation to consist primarily of a horizontal touch screen, whether on a desktop, laptop, or horizontally fixed tablet. Instead, in accordance with ISO 9241-420:2011, a computer workstation should offer a variety of input devices, and any touch screen should be used only infrequently. In addition, manufacturers should follow ISO 9241-400:2007 so that their products conform to ergonomic recommendations.

Nuclear Power - The Most Misunderstood Source of Energy

Nuclear power might very well be the most misunderstood source of energy used to generate electricity. Fear generated by the incidents at Chernobyl, Three Mile Island, and Fukushima, paired with the frightening fact that the concept of using nuclear fission to create electricity originated in the Manhattan Project, has given many people the impression that nuclear power is not a pathway to a more sustainable form of electricity generation, but a channel that can only lead to destruction. While there is danger inherent in the use of nuclear power, there is some level of danger in almost every industry, and it is managed through legislation and standardized practices.


Nuclear Power Plant Safety


In the 1970s, nuclear energy was growing rapidly and was projected to account for over 50 percent of the electrical energy needed in the United States by the Twenty-First Century. Unfortunately, the Three Mile Island nuclear disaster put a significant damper on this in 1979, and it was followed by the even more extreme Chernobyl disaster seven years later. Despite this, nuclear power plants have not been wiped off the face of the Earth, and the United States remains the world's largest producer of nuclear power, which accounts for 19 percent of the nation's total electrical output.

The usage of nuclear energy to generate electricity is a legacy of the Manhattan Project, as is the entire United States Department of Energy. However, the accidents that caused the deaths of workers did not arise from some inherently destructive nature of nuclear technology; they came from the mismanagement of workers and the failure of faulty equipment in the plants.

The events at Three Mile Island in 1979 began with a mechanical or electrical failure that prevented the main feedwater pumps from sending water to the steam generators to remove heat from the nuclear core, causing a shutdown of the turbine generator and later the reactor. The plant's staff was unable to recognize the issue as it happened, leading to the disaster. In the next decade, the site of the Chernobyl nuclear reactors became an irradiated wasteland following a poorly conceived and executed test of a reactor at low power, in which hot fuel particles reacted with water and caused a steam explosion. Much more recently, a major earthquake and the resulting tsunami disabled the power supply and cooling of three Japanese nuclear reactors that were not equipped to handle such natural disasters, causing the Fukushima accident.


Three Mile Island Nuclear Disaster
Three Mile Island, Dauphin County, Pennsylvania


The United States government quickly passed legislation following the nuclear disaster at Three Mile Island, and the Institute of Nuclear Power Operations was established to promote excellence in training, plant management, and operations, often through detailed examinations and evaluations of power plants. This approach of solving problems through the proper training of personnel directly addressed the inability of the Three Mile Island operators to understand the reactor's condition. In addition, the Nuclear Regulatory Commission (NRC), established by the United States in 1975, formulates policies and regulations governing nuclear reactor and materials safety. Today, the work of these two organizations, along with the World Association of Nuclear Operators (WANO), maintains a high level of safety and stability in the 104 nuclear reactors in the United States.

Standards also help to maintain safety for both the workers and the equipment at nuclear power plants. Different guidelines provide specifications for the variety of materials and systems that keep nuclear reactors functioning, with the aim of preventing future accidents. ANSI has undertaken a joint initiative with the National Institute of Standards and Technology (NIST), called the Nuclear Energy Standards Coordination Collaborative (NESCC), to identify and respond to the current needs of the nuclear industry.

Examples of nuclear energy standards include:







However, while strict regulations and guidelines for managing nuclear power plants help make power generation safe for personnel and for the public using the electricity, there is still the issue of managing nuclear waste. Different radioactive waste components take varying amounts of time to decay: strontium-90 and cesium-137 have half-lives of about thirty years, while plutonium-239 has a half-life of 24,000 years. This makes proper storage of the material a necessity, and a challenging one, since the plutonium waste removed from power plants today will still exist in the year 50,000 AD.
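
The persistence of these isotopes follows directly from exponential decay: the fraction remaining after time t is (1/2)^(t / half-life). A quick Python sketch using the half-lives cited above:

```python
# Fraction of a radioactive isotope remaining after t years,
# using the half-lives cited above.
def fraction_remaining(t_years: float, half_life_years: float) -> float:
    return 0.5 ** (t_years / half_life_years)

isotopes = {"strontium-90": 30, "cesium-137": 30, "plutonium-239": 24_000}

for name, half_life in isotopes.items():
    f = fraction_remaining(300, half_life)
    print(f"{name}: {f:.2%} left after 300 years")
# Strontium-90 and cesium-137 are mostly gone after a few centuries
# (300 years = 10 half-lives, ~0.1% left), while plutonium-239 has
# barely decayed at all (~99.1% left).
```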


Nuclear Waste Barrels
Radioactive waste, marked in accordance with ISO 21482:2007 - Ionizing Radiation Warning - Supplementary Symbol


Standards provide guidelines for the proper testing and management of radioactive waste resulting from nuclear power generation, as well as for the decommissioning of nuclear facilities and the waste that results. Some of these standards include:







Nuclear plant accidents have been disastrous, but they are in no way representative of the entire industry. If nuclear power plants are properly managed and operated, they can provide immense benefits to society. While nuclear power is technically not a "clean" source of energy, it emits greenhouse gases only during the construction and demolition of the power plant. According to NASA's Goddard Institute for Space Studies, nuclear power avoids 76,000 deaths annually from toxic air pollution. Additionally, at present consumption levels, the planet's accessible uranium resources could keep nuclear reactors running for more than two hundred years.

Salt Isn't Exactly What it Used to Be

Salt, despite being an essential component of our cooking and even a necessity in our diet, is not used primarily for human consumption. Salt manufactured for food processing accounts for only 4 percent of total production in the United States. A much larger use of salt is the deicing of roads, sidewalks, and walkways with rock salt, which in recent years has accounted for 43 percent of the country's total salt production. ASTM D632-12 - Standard Specification for Sodium Chloride covers guidelines for determining the suitability of a sample of sodium chloride for use as road salt to melt ice and snow.

Throughout history, salt has held a status quite different from today's. The idea of pouring salt on the ground would have seemed horrendous to the average ancient Roman, for whom sodium chloride was a rare necessity acquired through trade routes from faraway lands. It was reputedly so important to people's well-being that the Latin word for salt (sal) was connected to Salus, the Roman goddess of health. In fact, the crystalline powder was so essential to a soldier's wages (it is even rumored that a fraction of a soldier's pay came in salt) that the payment became known as salarium argentum. That term was adapted centuries later into the word salary, which thus originally meant a soldier's compensation for the purchase of salt.


Sodium Chloride Rock Salt
Rock Salt


Centuries later, in the Dark Ages that followed the collapse of the Roman Empire, salt remained a strong commodity, even taking the place of money. During this time, Moorish merchants traded salt ounce-for-ounce with gold, and in modern-day Ethiopia, blocks of rock salt, called amolés, became an official currency, remaining so until the Nineteenth Century. In other parts of the world, taxes on salt were considered unjustifiable and may even have helped inspire the poor to revolt at the start of the French Revolution.

Because of industrialization over the past century, salt no longer equals its weight in gold; it is a commodity that most people can easily acquire. Found naturally in the oceans, salt can be extracted through several different processes, and it is one of the most abundant minerals on Earth. The primary method for producing salt for consumption is solar evaporation, in which saltwater ponds are maintained in warm climates so that the water evaporates, leaving behind only the salt.

In addition, as a result of climatic shifts throughout the geologic history of the Earth, large amounts of salt lie in underground deposits left behind by oceans that have long since dried up. These deposits are the source of rock salt, which is extracted either through traditional deep-shaft mining or through solution mining, in which injected water dissolves the salt and the resulting brine is pumped out and heated to evaporate the water.


Road Salting
Sidewalk being salted with rock salt


ASTM D632-12 specifies the chemical and physical requirements for rock salt used for deicing roads and other purposes. Today, salt isn't exactly what it used to be, but its use on pavement and other surfaces keeps cities and towns running in a way the Romans would never have imagined.