Thursday, November 28, 2019

Cars Unique Types

Introduction

Over the years, various types of cars have been developed. Manufacturing companies have designed and produced different types of cars to meet various consumer needs. However, there are three unique types of cars that have been marketed to meet specific consumer needs.

Economy cars
Characteristics: price and functionality
Unique characteristic: low price
Example: Toyota Corolla

Sports cars
Characteristics: price and functionality
Unique characteristic: enhanced handling and speed
Example: Porsche

Luxury cars
Characteristics: price and functionality
Unique characteristics: enhanced comfort and safety, status symbol
Example: Mercedes-Benz S-Class

Unique Cars

There are numerous classes into which cars may be grouped; however, complete classification is hard to pin down, as a car may fit into a range of classes or not entirely meet the requirements of any class. Cars can be classified according to their size, performance, price, shape and mechanical specifications. There are generally three unique types of cars common to modern consumers: economy, sports and luxury cars.

The most common type of car in the market is the economy car. The economy car is designed and marketed in such a manner that the consumer can acquire it at a price much lower than the average cost of a new car. Economy cars vary with respect to profitability, size, performance and production numbers. During early production, most cars produced were expensive and could only be afforded by the rich. However, production companies realized that they could boost their profits by manufacturing affordable cars for the general population. The Model T produced by Ford in 1908 became the first economy car to be sold in the world. The first major characteristic of economy cars is that they have low prices. Economy cars are also usually small, and the features of the car usually depend on the year of manufacture. They usually have the compulsory safety features such as safety belts, but they may lack convenience features such as GPS systems and air conditioning. An example of an economy car is the Japanese Toyota Corolla, which has sold more units than any other car in the world.

The second type of unique car is the sports car. Sports cars are vehicles designed to have better performance and power than normal cars. Many sports cars are designed for two passengers, have two doors and have sleek bodies. Originally, sports cars had small bodies; however, contemporary sports cars vary in size, with many manufacturers increasing their seating room in order to enhance practicality. These types of cars are designed to excel at maneuverability, acceleration, braking and top speed. The main distinguishing feature of sports cars is that their handling characteristics have been greatly improved. The driver is usually able to keep the car in control even under very difficult conditions. While a powerful engine is not a prerequisite, most sports cars do contain powerful engines. Sports cars are relatively expensive; much more expensive than economy cars and other typical cars common in the market.
The Porsche Company is an example of a manufacturing company that has been linked with the production of unique sports cars designed to meet specific tastes and needs. Compared with other cars in the market, sports cars are intended to emulate sporting performance.

The final type of unique car common in the market is the luxury or comfort car. These cars are designed to boost ease and comfort. They are geared for luxury and contain features that aim to achieve this goal. Luxury cars usually contain innovative equipment, greater performance and features designed to convey prestige or brand image. The principal characteristic of luxury cars is that they are designed for comfort far above typical cars. They usually contain features such as leather seats, custom dashboards and anti-lock brakes. Modern luxury cars also offer better handling and performance, but this is secondary to comfort and safety. Luxury cars are highly expensive and are usually targeted at wealthy buyers and collectors. The style of construction and technological features of luxury cars are such that they convey high class. Compared with the other cars common in the market, luxury cars are intended for comfort and to convey the prestige and status of the owner. Mercedes-Benz is a company that has been linked with the production of luxury cars, including the Mercedes-Benz S-Class model.

Cars have evolved over the years to become an essential part of human life. There have been many types of vehicles designed over the years; however, three types stand out in the contemporary market. Economy, sports and luxury cars are vehicles designed for particular groups of consumers and vary in terms of price, functionality and embedded features. Consumer purchasing power, tastes and preferences are essential elements in deciding the type of car one will buy, and it is for these reasons that most manufacturers seek to design and produce these three types of vehicles.

Monday, November 25, 2019

How to Go Viral With Brittany Thompson Of Virtual Resort Manager

"Going viral" became a marketing buzz phrase in the 1990s and describes a piece of marketing content that resonates with an audience and spreads uncontrollably. For example, Hotmail had the idea to add "P.S. I Love You" at the end of every email users sent. The result: big success, signing up 12 million users in just 18 months. But how do you keep such momentum going? What business results and revenue growth does this kind of phenomenon drive?

In this episode, Brittany Thompson, social marketing and media manager at Virtual Resort Manager (VRM), talks about going viral for clients and how that shapes VRM's marketing approach. Brittany knows how it feels (shocking, unbelievable, amazing, and exhilarating) to go from a few thousand to millions of fans and followers practically overnight!

AMP072: Behind The Scenes Of Going Viral With Brittany Thompson Of Virtual Resort Manager

Topics Discussed in this Episode:
Keep current on what's happening and conduct research to determine what ideas are good for your industry
Build a team by recognizing strengths and weaknesses
Improve exposure and engagement with clients by auto-scheduling posts
Increase your company's bandwidth
Market your company and your clients at the same time
Going viral is attainable when you know your audience's wants and needs
Keep the momentum going when successful by filtering content to meet clients' needs
Share your secrets, and learn from others
Determine what makes the cut by filtering content and looking at analytics
Emotionally resonate with your audience by knowing your brand better than anyone else

Resources:
Virtual Resort Manager, to automatically schedule posts, and its podcasts
Write and send a review to receive a care package with sweet swag

If you liked today's show, please subscribe on iTunes to The Actionable Content Marketing Podcast! The podcast is also available on SoundCloud, Stitcher, and Google Play.

Quotes by Brittany:
"Anybody who is trying to learn to do marketing: you have to niche yourself down into a specific industry... if you pick something, be the best at it."
"You need to focus on what you're good at and allow other people to be good at what they're good at."
"We've had multiple standout successes. One of the most notable successes as a team was that we actually had a post go viral with over 16 million views on it. It blew me away."

Thursday, November 21, 2019

Gardner's Intelligences Assignment Example | Topics and Well Written Essays - 250 words

The best way to depict Gardner's Theory of Intelligence is as a wheel rather than in a linear manner. His theory of intelligence is broken up into the following categories: spatial, linguistic, logical-mathematical, bodily-kinesthetic, musical, interpersonal, intrapersonal, naturalistic, and existential. Spatial ability involves being able to visualize images, such as puzzles, in the mind. Linguistic intelligence is the intelligence of language: everything dealing with words, reading, writing, etc. comes naturally to people with high linguistic intelligence. Logical-mathematical intelligence deals with mathematical and computational abilities. Bodily-kinesthetic intelligence can be compared to athletic ability; people with good bodily-kinesthetic intelligence usually have good coordination as well as good reflexes. Musical intelligence corresponds to musical ability such as singing, recognizing tones, playing an instrument, etc. Interpersonal intelligence refers to the ability of people to relate to one another, and intrapersonal intelligence relates to the self-reflective behavior that we have within ourselves. Lastly, existential intelligence can be related to spirituality. Since there are so many aspects of intelligence, some of these are interconnected, and it is possible to be intelligent in multiple areas. This theory also explains how people who are considered to have lower IQ scores can be gifted in other areas.

Wednesday, November 20, 2019

INTERNATIONAL BUSINESS Essay Example | Topics and Well Written Essays - 2250 words - 1

Other mergers lead to successful results. There are various debates about mega-mergers, which involve mergers of businesses worth more than $20 billion. Some people are sceptical about mega-mergers, whilst others are very positive about them. This paper examines the dominant arguments and debates about mega-mergers in the world of business today. It would involve a critique of the different arguments for and against the practice of mega-mergers in the current dispensation. The report would contrast various debates and the ideas relating to them.

"Most deals in 2013 will probably be fairly small, designed to strengthen or fill a gap in the buyer's existing operations. These are known as 'plug and play'. Transformational megamergers grew rarer in 2012, with only four deals topping $20 billion. That was the same as in 2011, and fewer than in each of the three previous years." (The Economist 09/02/13)

AT Kearney argue in the seminal article 'Merger Endgames' that global-level mega-mergers are inevitable as part of the cycle of consolidation and concentration in globalising industries, where firms seek to gain leverage and accelerate their presence. In contrast, Ghemawat and Ghadar (2000) take the position that business leaders need to look away from mergers and be more innovative in their approach to international business. As seen from the cases in seminars, cross-border mega-mergers can be very successful or unsuccessful. Research consistently shows that the majority either fall short of their initial aspirations, lead to a reduction in total shareholder value post-merger, or even demerge and divest in extreme cases. Despite this, there has been a merger wave on an unprecedented scale up to 2007, and it isn't as if the emergence of global industries and corporations is at an end. You are required to critically evaluate the arguments of the pro-merger and anti-merger schools and take a conclusive position on whether global

Monday, November 18, 2019

Assignment 4 Essay Example | Topics and Well Written Essays - 750 words

Similar was the case when MRP II (Manufacturing Resource Planning) came: it also worked on certain functional areas of an organization, and other areas could not yield benefit from it. From 1975 till 1990, all the key players (Baan Corporation, Oracle Corporation, SAP, PeopleSoft) which now provide ERP solutions laid their sound foundations in the industry, providing business solutions at various levels, each focusing on its core competency area. Though the actual development of ERP started from the 1990s onwards, people still argue that ERP existed earlier in the form of the earliest Inventory Control Systems, MRP and MRP II, only with additional facilities for integrating organizational activities and cross-departmental communication. ERP in the 1990s focused more on the integration of business activities across functional departments and on introducing other business functions, including CRM, SCM, etc. Now the key ERP developers are working towards web-enabled ERP systems, making them much more user friendly and allowing external access to authorized users. As time passes, ERP is now moving towards ERP II, which will further improve and enhance its competency and efficiency.

Q: Briefly describe two main players (SAP and Oracle) in the ERP market and explain what components are common in the two players' ERP products.

Ans: SAP was a joint venture of five former IBM employees who, in the mid-70s, set out with a vision of developing software which would integrate business functions and processes while setting certain standards in the market. As of 2009, SAP is the largest enterprise software company in the world, best known for its ERP and business solutions. On the other hand, some 35 years ago, two computer programmers, later joined by a third, started working on an already existing prototype which no one else was willing to put effort into. Even then they knew that, using this prototype, they could revolutionize business computing. Oracle, best known for its flagship product, the Oracle Database, became the second-largest enterprise software provider in the world after acquiring PeopleSoft in 2004, and by 2007 Oracle had the largest software revenue. Both SAP and Oracle provide business solutions, hence they are working along the same lines. Modules for business functions like CRM and SCM exist in both their ERPs. Both are customizable according to the needs, environment and culture of the organization, and both are pursuing SaaS (software as a service). The strategy of both has now changed to taking more time over implementation, i.e. satisfying the customer completely, but at the same time the cost escalates as well. SAP and Oracle both have their similarities and weaknesses, but in the end it depends upon the structure of the organization, and the result can vary for each individual organization.

Q: What will ERP fix in a company?

Ans: Although the implementation of ERP in an organization is costly, in the long run it bears more advantages than the initial investment. Mainly, organizations choose ERP to integrate and align business functions and intra-organizational as well as inter-organizational communication. ERP eliminates the risk and threat of manipulation of financial data by introducing data integrity throughout the organization. As all

Friday, November 15, 2019

Predicting Effects of Environmental Contaminants

1.1. Debunking some chemical myths...

In October 2008, the Royal Society of Chemistry announced that it was offering £1 million to the first member of the public who could bring them a 100% chemical-free material. This attempt to reclaim the word 'chemical' from the advertising and marketing industries that use it as a synonym for poison was a reaction to a decision of the Advertising Standards Authority to defend an advert perpetuating the myth that natural products are chemical free (Edwards 2008). Indeed, no material, regardless of its origin, is chemical free. A related common misconception is that chemicals made by nature are intrinsically good and, conversely, those manufactured by man are bad (Ottoboni 1991). There are many examples of toxic compounds produced by algae or other micro-organisms, venomous animals and plants, or even examples of environmental harm resulting from the presence of relatively benign natural compounds either in unexpected places or in unexpected quantities. It is therefore of prime importance to define what is meant by 'chemical' when referring to chemical hazards in this chapter and the rest of this book. The correct term to describe a chemical compound an organism may be exposed to, whether of natural or synthetic origin, is xenobiotic, i.e. a substance foreign to an organism (the term has also been used for transplants). A xenobiotic can be defined as a chemical which is found in an organism but which is not normally produced or expected to be present in it. It can also cover substances which are present in much higher concentrations than is usual.

A grasp of some of the fundamental principles of the scientific disciplines that underlie the characterisation of effects associated with exposure to a xenobiotic is required in order to understand the potential consequences of the presence of pollutants in the environment and to critically appraise the scientific evidence. This chapter will attempt to briefly summarise some important concepts of basic toxicology and environmental epidemiology relevant in this context.

1.2. Concepts of Fundamental Toxicology

Toxicology is the science of poisons. A poison is commonly defined as 'any substance that can cause an adverse effect as a result of a physicochemical interaction with living tissue' (Duffus 2006). The use of poisons is as old as the human race, as a method of hunting or warfare as well as murder, suicide or execution. The evolution of this scientific discipline cannot be separated from the evolution of pharmacology, or the science of cures. Theophrastus Phillippus Aureolus Bombastus von Hohenheim, more commonly known as Paracelsus (1493-1541), a physician contemporary of Copernicus, Martin Luther and da Vinci, is widely considered the father of toxicology. He challenged the ancient concepts of medicine based on the balance of the four humours (blood, phlegm, yellow and black bile) associated with the four elements, and believed illness occurred when an organ failed and poisons accumulated. This use of chemistry and chemical analogies was particularly offensive to the medical establishment of his time. He is famously credited with the observation that the dose makes the poison, a principle that still underlies present-day toxicology. In other words, all substances are potential poisons, since all can cause injury or death following excessive exposure.
Conversely, this statement implies that all chemicals can be used safely if handled with appropriate precautions and exposure is kept below a defined limit, at which risk is considered tolerable (Duffus 2006). The concepts of both tolerable risk and adverse effect illustrate the value judgements embedded in an otherwise scientific discipline relying on observable, measurable empirical evidence. What is considered abnormal or undesirable is dictated by society rather than science. Any change from the normal state is not necessarily an adverse effect, even if statistically significant. An effect may be considered harmful if it causes damage, irreversible change or increased susceptibility to other stresses, including infectious disease. The stage of development or state of health of the organism may also have an influence on the degree of harm.

1.2.1. Routes of exposure

Toxicity will vary depending on the route of exposure. There are three routes via which exposure to environmental contaminants may occur:
Ingestion
Inhalation
Skin absorption
Direct injection may also be used in environmental toxicity testing. Toxic and pharmaceutical agents generally produce the most rapid response and greatest effect when given intravenously, directly into the bloodstream. A descending order of effectiveness for environmental exposure routes would be inhalation, ingestion and skin absorption.

Oral toxicity is most relevant for substances that might be ingested with food or drinks. Whilst it could be argued that this is generally under an individual's control, there are complex issues regarding information both about the occurrence of substances in food or water and about the current state of knowledge of associated harmful effects. Gases, vapours and dusts or other airborne particles are inhaled involuntarily (with the infamous exception of smoking). The inhalation of solid particles depends upon their size and shape. In general, the smaller the particle, the further into the respiratory tract it can go. A large proportion of airborne particles breathed through the mouth or cleared by the cilia of the lungs can enter the gut. Dermal exposure generally requires direct and prolonged contact with the skin. The skin acts as a very effective barrier against many external toxicants, but because of its great surface area (1.5-2 m2), some of the many diverse substances it comes into contact with may still elicit topical or systemic effects (Williams and Roberts 2000). While dermal exposure is often most relevant in occupational settings, it may nonetheless be pertinent in relation to bathing waters (ingestion is also an important route of exposure in this context). Voluntary dermal exposure related to the use of cosmetics raises the same questions regarding the adequate communication of current knowledge about potential effects as those related to food.

1.2.2. Duration of exposure

The toxic response will also depend on the duration and frequency of exposure. A single dose of a chemical may have severe effects, whilst the same total dose given at several intervals may have little if any effect. An example would be to compare the effects of drinking four beers in one evening to those of drinking four beers over four days. Exposure duration is generally divided into four broad categories: acute, sub-acute, sub-chronic and chronic. Acute exposure to a chemical usually refers to a single exposure event or repeated exposures over a duration of less than 24 hours.
Sub-acute exposure to a chemical refers to repeated exposures for 1 month or less, sub-chronic exposure to continuous or repeated exposures for 1 to 3 months or approximately 10% of an experimental species' lifetime, and chronic exposure to continuous or repeated exposures for more than 3 months, usually 6 months to 2 years in rodents (Eaton and Klaassen 2001). Chronic exposure studies are designed to assess the cumulative toxicity of chemicals with potential lifetime exposure in humans. In real exposure situations, it is generally very difficult to ascertain with any certainty the frequency and duration of exposure, but the same terms are used. For acute effects, the time component of the dose is not important, as a high dose is responsible for these effects. However, while acute exposure to agents that are rapidly absorbed is likely to induce immediate toxic effects, this does not rule out the possibility of delayed effects that are not necessarily similar to those associated with chronic exposure, e.g. the latency between exposure to a carcinogenic substance and the onset of certain cancers. It is worth mentioning here that the effect of exposure to a toxic agent may be entirely dependent on the timing of exposure; in other words, long-term effects resulting from exposure to a toxic agent during a critically sensitive stage of development may differ widely from those seen if an adult organism is exposed to the same substance. Acute effects are almost always the result of accidents. Otherwise, they may result from criminal poisoning or self-poisoning (suicide). Conversely, whilst chronic exposure to a toxic agent is generally associated with long-term low-level chronic effects, this does not preclude the possibility of some immediate (acute) effects after each administration. These concepts are closely related to the mechanisms of metabolic degradation and excretion of ingested substances and are best illustrated by Figure 1.1.

Figure 1.1. Line A: chemical with very slow elimination. Line B: chemical with a rate of elimination equal to the frequency of dosing. Line C: rate of elimination faster than the dosing frequency. The blue-shaded area represents the concentration at the target site necessary to elicit a toxic response.
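The dosing patterns sketched in Figure 1.1 follow directly from first-order elimination kinetics. The sketch below is a minimal illustration and not part of the original text: it uses a simple one-compartment model with invented doses, dosing intervals and half-lives (the function name and all parameter values are assumptions) to reproduce the qualitative behaviour of lines A, B and C.

```python
import math

def concentration_profile(dose, interval_h, half_life_h, n_doses, step_h=1.0):
    """Simulate body burden under repeated dosing with first-order elimination.

    dose        -- amount absorbed per administration (arbitrary units)
    interval_h  -- time between doses, in hours
    half_life_h -- elimination half-life, in hours
    n_doses     -- number of administrations
    Returns a list of (time_h, amount) points.
    """
    k = math.log(2) / half_life_h  # first-order elimination rate constant
    t_end = interval_h * n_doses
    points = []
    t = 0.0
    while t <= t_end:
        # Superposition: sum the decayed remnants of every dose given so far
        amount = sum(dose * math.exp(-k * (t - i * interval_h))
                     for i in range(n_doses) if i * interval_h <= t)
        points.append((t, amount))
        t += step_h
    return points

# Dosing every 24 h; compare three hypothetical elimination half-lives
slow = concentration_profile(dose=10, interval_h=24, half_life_h=120, n_doses=10)  # like line A: accumulates
matched = concentration_profile(dose=10, interval_h=24, half_life_h=24, n_doses=10)  # like line B: plateaus
fast = concentration_profile(dose=10, interval_h=24, half_life_h=6, n_doses=10)  # like line C: clears between doses

print("peak body burden, slow elimination:", max(a for _, a in slow))
print("peak body burden, fast elimination:", max(a for _, a in fast))
```

Plotting the three profiles would show accumulation towards the toxic range when elimination is much slower than the dosing interval (line A), a steady-state plateau when the two are matched (line B), and near-complete clearance between doses when elimination is fast (line C).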
1.2.3. Mechanisms of toxicity

The interaction of a foreign compound with a biological system is two-fold: there is the effect of the organism on the compound (toxicokinetics) and the effect of the compound on the organism (toxicodynamics). Toxicokinetics relates to the delivery of the compound to its site of action, including absorption (transfer from the site of administration into the general circulation), distribution (via the general circulation into and out of the tissues), and elimination (from the general circulation by metabolism or excretion). The target tissue refers to the tissue where a toxicant exerts its effect, which is not necessarily where the concentration of the toxic substance is highest. Many halogenated compounds such as polychlorinated biphenyls (PCBs) or flame retardants such as polybrominated diphenyl ethers (PBDEs) are known to bioaccumulate in body fat stores. Whether such sequestration processes are actually protective to the individual organism, i.e. by lowering the concentration of the toxicant at the site of action, is not clear (O'Flaherty 2000). In an ecological context, however, such bioaccumulation may serve as an indirect route of exposure for organisms at higher trophic levels, thereby potentially contributing to biomagnification through the food chain.

Absorption of any compound that has not been injected directly intravenously will entail transfer across membrane barriers before it reaches the systemic circulation, and the efficiency of absorption processes is highly dependent on the route of exposure. It is also important to note that distribution and elimination, although often considered separately, take place simultaneously. Elimination itself comprises two kinds of processes, excretion and biotransformation, which also take place simultaneously. Elimination and distribution are not independent of each other, as effective elimination of a compound will prevent its distribution in peripheral tissues, whilst, conversely, wide distribution of a compound will impede its excretion (O'Flaherty 2000). Kinetic models attempt to predict the concentration of a toxicant at the target site from the administered dose. While the ultimate toxicant, i.e. the chemical species that induces structural or functional alterations resulting in toxicity, is often the compound administered (the parent compound), it can also be a metabolite of the parent compound generated by biotransformation processes, i.e. toxication rather than detoxication (Timbrell 2000; Gregus and Klaassen 2001). The liver and kidneys are the most important excretory organs for non-volatile substances, whilst the lungs are active in the excretion of volatile compounds and gases. Other routes of excretion include the skin, hair, sweat, nails and milk. Milk may be a major route of excretion for lipophilic chemicals due to its high fat content (O'Flaherty 2000).

Toxicodynamics is the study of the toxic response at the site of action, including the reactions with and binding to cell constituents, and the biochemical and physiological consequences of these actions. Such consequences may therefore be manifested and observed at the molecular or cellular level, at the target organ or on the whole organism. Therefore, although toxic responses have a biochemical basis, the study of toxic response is generally subdivided either according to the organ on which toxicity is observed, including hepatotoxicity (liver), nephrotoxicity (kidney), neurotoxicity (nervous system) and pulmonotoxicity (lung), or according to the type of toxic response, including teratogenicity (abnormalities of physiological development), immunotoxicity (immune system impairment), mutagenicity (damage to genetic material) and carcinogenicity (cancer causation or promotion). The choice of the toxicity endpoint to observe in experimental toxicity testing is therefore of critical importance. In recent years, rapid advances in biochemical sciences and technology have resulted in the development of bioassay techniques that can contribute invaluable information regarding toxicity mechanisms at the cellular and molecular level. However, the extrapolation of such information to predict effects in an intact organism for the purpose of risk assessment is still in its infancy (Gundert-Remy et al. 2005).

1.2.4. Dose-response relationships

The theory of dose-response relationships is based on the assumptions that the activity of a substance is not an inherent quality but depends on the dose an organism is exposed to, i.e. all substances are inactive below a certain threshold and active over that threshold, and that dose-response relationships are monotonic, i.e. the response rises with the dose.
Toxicity may be detected either as an all-or-nothing phenomenon, such as the death of the organism, or as a graded response, such as the hypertrophy of a specific organ. The dose-response relationship involves correlating the severity of the response with exposure (the dose). Dose-response relationships for all-or-nothing (quantal) responses are typically S-shaped, and this reflects the fact that the sensitivity of individuals in a population generally exhibits a normal or Gaussian distribution. Biological variation in susceptibility, with fewer individuals being either hypersusceptible or resistant at both ends of the curve and the majority responding between these two extremes, gives rise to a bell-shaped normal frequency distribution. When plotted as a cumulative frequency distribution, a sigmoid dose-response curve is observed (Figure 1.2). Studying dose response, and developing dose-response models, is central to determining safe and hazardous levels.

The simplest measure of toxicity is lethality, and determination of the median lethal dose, the LD50, is usually the first toxicological test performed with new substances. The LD50 is the dose at which a substance is expected to cause the death of half of the experimental animals, and it is derived statistically from dose-response curves (Eaton and Klaassen 2001). LD50 values are the standard for comparison of acute toxicity between chemical compounds and between species. Some values are given in Table 1.1. It is important to note that the higher the LD50, the less toxic the compound. Similarly, the EC50, the median effective dose, is the quantity of the chemical that is estimated to have an effect in 50% of the organisms. However, median doses alone are not very informative, as they do not convey any information on the shape of the dose-response curve. This is best illustrated by Figure 1.3. While toxicant A appears (always) more toxic than toxicant B on the basis of its lower LD50, toxicant B will start affecting organisms at lower doses (lower threshold), while the steeper slope of the dose-response curve for toxicant A means that once individuals become overexposed (exceed the threshold dose), the increase in response occurs over much smaller increments in dose.

Low dose responses

The classical paradigm for extrapolating dose-response relationships at low doses is based on the concept of a threshold for non-carcinogens, whereas it assumes that there is no threshold for carcinogenic responses, for which a linear relationship is hypothesised (Figures 1.4 and 1.5). The NOAEL (No Observed Adverse Effect Level) is the exposure level at which there is no statistically or biologically significant increase in the frequency or severity of adverse effects between the exposed population and its appropriate control. The NOAEL for the most sensitive test species and the most sensitive indicator of toxicity is usually employed for regulatory purposes. The LOAEL (Lowest Observed Adverse Effect Level) is the lowest exposure level at which there is a statistically or biologically significant increase in the frequency or severity of adverse effects between the exposed population and its appropriate control. The main criticism of the NOAEL and LOAEL is that they are dependent on study design, i.e. the dose groups selected and the number of individuals in each group.
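To make the quantal dose-response curve and the reading of median doses concrete, here is a small illustrative sketch (not taken from the source text) that fits a two-parameter log-logistic model to invented quantal data and reads off the LD50 and an ECx. It assumes NumPy and SciPy are available; all doses and response fractions are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical quantal data: dose (mg/kg) and fraction of animals responding
doses = np.array([1, 2, 5, 10, 20, 50, 100], dtype=float)
response = np.array([0.0, 0.05, 0.15, 0.45, 0.75, 0.95, 1.0])

def log_logistic(dose, ld50, slope):
    """Two-parameter log-logistic (sigmoid) dose-response model; 0.5 at dose = LD50."""
    return 1.0 / (1.0 + (ld50 / dose) ** slope)

# Fit the curve; p0 is a rough initial guess for (LD50, slope)
(ld50, slope), _ = curve_fit(log_logistic, doses, response, p0=(10.0, 1.0))
print(f"Estimated LD50 ~ {ld50:.1f} mg/kg, slope ~ {slope:.2f}")

# The same fitted curve gives any ECx, e.g. the dose affecting 10% of the population
ec10 = ld50 * (0.10 / 0.90) ** (1.0 / slope)
print(f"Estimated EC10 ~ {ec10:.1f} mg/kg")
```

The fitted slope captures exactly the contrast drawn between toxicants A and B above: a steeper slope means the population moves from no response to full response over a narrower range of doses.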
Statistical methods of deriving the concentration that produces a specific effect (ECx), or a benchmark dose (BMD), the statistical lower confidence limit on the dose that produces a defined response (the benchmark response or BMR), are increasingly preferred. Understanding the risk that environmental contaminants pose to human health requires the extrapolation of limited data from animal experimental studies to the low doses typically encountered in the environment. Such extrapolation of dose-response relationships at low doses is the source of much controversy. Recent advances in the statistical analysis of very large populations exposed to ambient concentrations of environmental pollutants have, however, not observed thresholds for cancer or non-cancer outcomes (White et al. 2009). The actions of chemical agents are triggered by complex molecular and cellular events that may lead to cancer and non-cancer outcomes in an organism. These processes may be linear or non-linear at an individual level. A thorough understanding of critical steps in a toxic process may help refine current assumptions about thresholds (Boobis et al. 2009). The dose-response curve, however, describes the response or variation in sensitivity of a population. Biological and statistical attributes such as population variability, additivity to pre-existing conditions or diseases induced at background exposure will tend to smooth and linearise the dose-response relationship, obscuring individual thresholds.

Hormesis

Dose-response relationships for substances that are essential for normal physiological function and survival are actually U-shaped. At very low doses, adverse effects are observed due to a deficiency. As the dose of such an essential nutrient is increased, the adverse effect is no longer detected and the organism can function normally in a state of homeostasis. Abnormally high doses, however, can give rise to a toxic response. This response may be qualitatively different, and the toxic endpoint measured at very low and very high doses is not necessarily the same. There is evidence that non-essential substances may also impart an effect at very low doses (Figure 1.6). Some authors have argued that hormesis ought to be the default assumption in the risk assessment of toxic substances (Calabrese and Baldwin 2003). Whether such low dose effects should be considered stimulatory or beneficial is controversial. Further, the potential implications of the concept of hormesis for the risk management of combinations of the wide variety of environmental contaminants present at low doses, to which individuals of variable sensitivity may be exposed, are at best unclear.

1.2.5. Chemical interactions

In regulatory hazard assessment, chemical hazards are typically considered on a compound-by-compound basis, the possibility of chemical interactions being accounted for by the use of safety or uncertainty factors. Mixture effects still represent a challenge for the risk management of chemicals in the environment, as the presence of one chemical may alter the response to another chemical. The simplest interaction is additivity: the effect of two or more chemicals acting together is equivalent to the sum of the effects of each chemical in the mixture when acting independently. Synergism is more complex and describes a situation where the presence of both chemicals causes an effect that is greater than the sum of their effects when acting alone.
In potentiation, a substance that does not produce specific toxicity on its own increases the toxicity of another substance when both are present. Antagonism is the principle upon which antidotes are based, whereby a chemical can reduce the harm caused by a toxicant (James et al. 2000; Duffus 2006). Mathematical illustrations and examples of known chemical interactions are given in Table 1.2.

Table 1.2. Mathematical representations of chemical interactions (reproduced from James et al., 2000)
Additive: 2 + 3 = 5 (e.g. organophosphate pesticides)
Synergistic: 2 + 3 = 20 (e.g. cigarette smoking + asbestos)
Potentiation: 2 + 0 = 10 (e.g. alcohol + carbon tetrachloride)
Antagonism: 6 + 6 = 8, 5 + (-5) = 0, or 10 + 0 = 2 (e.g. toluene + benzene; caffeine + alcohol; dimercaprol + mercury)

There are four main ways in which chemicals may interact (James et al. 2000):
1. Functional: both chemicals have an effect on the same physiological function.
2. Chemical: a chemical reaction between the two compounds affects the toxicity of one or both compounds.
3. Dispositional: the absorption, metabolism, distribution or excretion of one substance is increased or decreased by the presence of the other.
4. Receptor-mediated: when two chemicals have differing affinity and activity for the same receptor, competition for the receptor will modify the overall effect.

1.2.6. Relevance of animal models

A further complication in the extrapolation of the results of toxicological experimental studies to humans, or indeed other untested species, is related to the anatomical, physiological and biochemical differences between species. This paradoxically requires some previous knowledge of the mechanism of toxicity of a chemical and of the comparative physiology of the different test species. When adverse effects are detected in screening tests, they should be interpreted with the relevance of the chosen animal model in mind. For the derivation of safe levels, safety or uncertainty factors are again usually applied to account for the uncertainty surrounding inter-species differences (James et al. 2000; Sullivan 2006).

1.2.7. A few words about doses

When discussing dose-response, it is also important to understand which dose is being referred to and to differentiate between concentrations measured in environmental media and the concentration that will elicit an adverse effect at the target organ or tissue. The exposure dose in a toxicological testing setting is generally known or can be readily derived or measured from concentrations in media and average consumption (of food or water, for example) (Figure 1.7). Whilst toxicokinetics helps to develop an understanding of the relationship between the internal dose and a known exposure dose, relating concentrations in environmental media to the actual exposure dose, often via multiple pathways, is in the realm of exposure assessment.

1.2.8. Other hazard characterisation criteria

Before continuing further, it is important to clarify the difference between hazard and risk. Hazard is defined as the potential to produce harm; it is therefore an inherent, qualitative attribute of a given chemical substance. Risk, on the other hand, is a quantitative measure of the magnitude of the hazard and the probability of it being realised. Hazard assessment is therefore the first step of risk assessment, followed by exposure assessment and finally risk characterisation. Toxicity is not the sole criterion evaluated for hazard characterisation purposes.
Some chemicals have been found in the tissues of animals in the Arctic, for example, where these substances of concern have never been used or produced. The realisation that some pollutants were able to travel long distances across national borders because of their persistence, and to bioaccumulate through the food web, led to the consideration of these inherent properties of organic compounds alongside their toxicity for the purpose of hazard characterisation. Persistence is the result of resistance to environmental degradation mechanisms such as hydrolysis, photodegradation and biodegradation. Hydrolysis only occurs in the presence of water, photodegradation only in the presence of UV light, and biodegradation is primarily carried out by micro-organisms. Degradation is related to water solubility, itself inversely related to lipid solubility; therefore persistence tends to be correlated with lipid solubility (Francis 1994). The persistence of inorganic substances has proven more difficult to define, as they cannot be degraded to carbon dioxide and water. Chemicals may accumulate in environmental compartments and constitute environmental sinks that could be re-mobilised and lead to effects. Further, whilst a substance may accumulate in one species without adverse effects, it may be toxic to that species' predator(s). Bioconcentration refers to the accumulation of a chemical from its surrounding environment rather than specifically through food uptake. Conversely, biomagnification refers to uptake from food without consideration of uptake through the body surface. Bioaccumulation integrates both paths, surrounding medium and food. Ecological magnification refers to an increase in concentration through the food web from lower to higher trophic levels. Again, accumulation of organic compounds generally involves transfer from a hydrophilic to a hydrophobic phase and correlates well with the n-octanol/water partition coefficient (Herrchen 2006). The persistence and bioaccumulation of a substance are evaluated by standardised OECD tests. Criteria for the identification of persistent, bioaccumulative and toxic substances (PBT), and very persistent and very bioaccumulative substances (vPvB), as defined in Annex XIII of the European Regulation on the Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) (European Union 2006), are given in Table 1.3. To be classified as a PBT or vPvB substance, a given compound must fulfil all criteria.

Table 1.3. REACH criteria for identifying PBT and vPvB chemicals
Persistence (PBT), either: half-life > 60 days in marine water; > 40 days in fresh or estuarine water; > 180 days in marine sediment; > 120 days in fresh or estuarine sediment; or > 120 days in soil.
Persistence (vPvB), either: half-life > 60 days in marine, fresh or estuarine water; > 180 days in marine, fresh or estuarine sediment; or > 180 days in soil.
Bioaccumulation (PBT): bioconcentration factor (BCF) > 2000.
Bioaccumulation (vPvB): bioconcentration factor (BCF) > 5000.
Toxicity (PBT), either: chronic no-observed effect concentration (NOEC) < 0.01 mg/l; the substance is classified as carcinogenic (category 1 or 2), mutagenic (category 1 or 2), or toxic for reproduction (category 1, 2 or 3); or there is other evidence of endocrine disrupting effects.
Toxicity (vPvB): no toxicity criterion.
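To illustrate how the Annex XIII criteria combine (a substance must satisfy a persistence, a bioaccumulation and a toxicity criterion to be identified as PBT), here is a minimal screening sketch. It covers only a subset of the half-life criteria in Table 1.3, and the function name, input fields and example values are hypothetical; this is not a regulatory tool.

```python
def is_pbt(half_life_marine_water_d, half_life_fresh_water_d, half_life_soil_d,
           bcf, noec_mg_l, cmr_classified=False, endocrine_evidence=False):
    """Rough PBT screen following the structure of Table 1.3 (illustrative only)."""
    persistent = (half_life_marine_water_d > 60
                  or half_life_fresh_water_d > 40
                  or half_life_soil_d > 120)
    bioaccumulative = bcf > 2000
    toxic = (noec_mg_l is not None and noec_mg_l < 0.01) or cmr_classified or endocrine_evidence
    # All three properties must be fulfilled for a PBT classification
    return persistent and bioaccumulative and toxic

# Hypothetical substance: persistent in soil, strongly bioaccumulative, chronically toxic
print(is_pbt(half_life_marine_water_d=30, half_life_fresh_water_d=20, half_life_soil_d=200,
             bcf=3500, noec_mg_l=0.004))  # -> True
```

A vPvB screen would follow the same pattern with the stricter half-life thresholds and a BCF above 5000, and without the toxicity test.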
1.3. Some notions of Environmental Epidemiology

A complementary, observational approach to the study of scientific evidence of associations between environment and disease is epidemiology. Epidemiology can be defined as "the study of how often diseases occur and why, based on the measurement of disease outcome in a study sample in relation to a population at risk" (Coggon et al. 2003). Environmental epidemiology refers to the study of patterns of disease and health related to exposures that are exogenous and involuntary. Such exposures generally occur in the air, water, diet or soil and include physical, chemical and biological agents. The extent to which environmental epidemiology is considered to include social, political, cultural, and engineering or architectural factors affecting human contact with such agents varies between authors. In some contexts, the environment can refer to all non-genetic factors, although dietary habits are generally excluded, despite the facts that some deficiency diseases are environmentally determined and that nutritional status may also modify the impact of an environmental exposure (Steenland and Savitz 1997; Hertz-Picciotto 1998).

Most of environmental epidemiology is concerned with endemics, in other words acute or chronic disease occurring at relatively low frequency in the general population, due partly to a common and often unsuspected exposure, rather than epidemics, or acute outbreaks of disease affecting a limited population shortly after the introduction of an unusual known or unknown agent. Measuring such low-level exposure of the general public may be difficult if not impossible, particularly when seeking historical estimates of exposure to predict future disease. Estimating very small changes in the incidence of health effects of low-level, common, multiple exposures on common diseases with multifactorial etiologies is particularly difficult, because greater variability may often be expected for other reasons, and environmental epidemiology has to rely on natural experiments that, unlike controlled experiments, are subject to confounding by other, often unknown, risk factors. However, it may still be of importance from a public health perspective, as small effects in a large population can have large attributable risks if the disease is common (Steenland and Savitz 1997; Coggon et al. 2003).

1.3.1. Definitions

What is a case? The definition of a case generally requires a dichotomy, i.e. for a given condition, people can be divided into two discrete classes: the affected and the non-affected. It increasingly appears that diseases exist in a continuum of severity within a population rather than as an all-or-nothing phenomenon. For practical reasons, a cut-off point to divide the diagnostic continuum into 'cases' and 'non-cases' is therefore required. This can be done on a statistical, clinical, prognostic or operational basis. On a statistical basis, the 'norm' is often defined as within two standard deviations of the age-specific mean, thereby arbitrarily fixing the frequency of abnormal values at around 5% in every population. Moreover, it should be noted that what is usual is not necessarily good. A clinical case may be defined by the level of a variable above which symptoms and complications have been found to become more frequent. On a prognostic basis, some clinical findings may carry an adverse prognosis, yet be symptomless. When none of the other approaches is satisfactory, an operational threshold will need to be defined, e.g. based on a threshold for treatment (Coggon et al. 2003).
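A quick numerical check of the statistical case definition above (illustrative only; the marker, mean and standard deviation are invented): for roughly normally distributed values, a cut-off of two standard deviations from the age-specific mean labels about 5% of the population as 'abnormal', whatever the units of measurement.

```python
import random
import statistics

# Hypothetical measurements for one age group (e.g. a blood marker, arbitrary units)
random.seed(1)
values = [random.gauss(mu=100, sigma=15) for _ in range(10_000)]

mean = statistics.fmean(values)
sd = statistics.stdev(values)

# Statistical case definition: 'abnormal' = more than 2 SD away from the age-specific mean
cases = [v for v in values if abs(v - mean) > 2 * sd]
print(f"Fraction classified as 'cases': {len(cases) / len(values):.1%}")  # about 5% for normal data
```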
Incidence, prevalence and mortality

The incidence of a disease is the rate at which new cases occur in a population during a specified period, i.e. the frequency of new cases:

Incidence = number of new cases / (population at risk x period during which the cases were collected)

The prevalence of a disease is the proportion of the population that are cases at a given point in time. This measure is appropriate only in relatively stable conditions and is unsuitable for acute disorders. Even in a chronic disease, the manifestations are often intermittent, and a point prevalence will tend to underestimate the frequency of the condition. A better measure, when possible, is the period prevalence, defined as the proportion of a population that are cases at any time within a stated period.
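As a small worked illustration of the two measures (with invented numbers, not data from the text): incidence relates new cases to the population at risk over a stated period, while point prevalence is a simple proportion of existing cases at one moment.

```python
# Hypothetical follow-up study: 10,000 people at risk followed for 2 years,
# during which 150 new cases of the disease are diagnosed.
new_cases = 150
population_at_risk = 10_000
follow_up_years = 2

incidence_rate = new_cases / (population_at_risk * follow_up_years)
print(f"Incidence: {incidence_rate * 1000:.1f} new cases per 1,000 person-years")

# Hypothetical cross-sectional survey: 320 existing cases found among 8,000 people.
existing_cases = 320
surveyed_population = 8_000

point_prevalence = existing_cases / surveyed_population
print(f"Point prevalence: {point_prevalence:.1%} of the population are cases")
```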

Wednesday, November 13, 2019

The Evolution of Religion Essay -- Philosophy Religion Essays

The Evolution of Religion

Near the end of his book, Darwin's Dangerous Idea, Daniel Dennett questions religion and contends that it was an evolutionary process to keep humans entertained. He says "they [religions] have kept Homo Sapiens civilized enough, for long enough, for us to have learned how to reflect more systematically and accurately on our position of the universe" (519). Dennett's position is a controversial one, and it is difficult to argue because it is such an abstract subject. Religion is associated with free will, and has been part of humans for thousands of years. Is religion as we know it useless now; have we arrived at the point in evolution where it is no longer necessary? Dennett never completely dismisses current religion, but he does not support its perpetuation either.

Dennett's view of religion is as a function, something that humans need, like opposable thumbs. He claims that religion has become merely about the actions, and that soon religions will die out and belong in museums and "zoos". Dennett elaborates this thought: "what, then, of all the glories of our religious traditions? They should certainly be preserved, as should the languages, the art, the costumes, the rituals, the monuments" (519). Is this right? Should only the material aspects be saved? Have they served their only purpose? Dennett seems to say that humans no longer need religions, and that since they have existed for so long they are no longer needed; it is their time for extinction. Will religions disappear, leaving only the materials and traditions, as Dennett seems to suggest they will, or will they evolve and change to meet our modern world?

In Karen Armstrong's History of God she says "for 4,000 years it [the idea of God] has cons... ...sappeared, but they became infused into other religions. The ancient Hellenistic religion became infused into Christianity, and the Sumerian religion was an influence for the writers of the Old Testament (http://www.comparative-religion.com/ancient/). In that sense the ancient religions continue to exist; they have merely taken a different form. Will the modern religions of today follow a path decreed by Dennett and Armstrong and disappear, or will they merely become influences on the next wave of religion? The major religions today have been in existence for thousands of years, but that does not mean that they will not evolve. As people and culture change, so will the world's religions. People will always have faith, and humans have not reached a point in evolution where religion is no longer needed; it is highly unlikely that we will ever reach that point.