Wednesday, November 27, 2019

To Uphold The Law Through The Investigation Of Violations Of Federal Criminal Law

To uphold the law through the investigation of violations of federal criminal law; to protect the U.S. from foreign intelligence and terrorist activities; to provide leadership and law enforcement assistance to federal, state, local, and international agencies; and to perform these responsibilities in a manner that is responsive to the needs of the public and is faithful to the Constitution of the U.S.: this is the mission of the Federal Bureau of Investigation. The agency now known as the Federal Bureau of Investigation was founded in 1908, when the Attorney General appointed an unnamed force of Special Agents to be the investigative force of the Department of Justice (DOJ). Before that time, the DOJ had to borrow agents from the U.S. Secret Service to investigate violations of federal criminal laws within its jurisdiction. In 1909, the Special Agent force was renamed the Bureau of Investigation, and after a series of name changes, it received its present official name in 1935. During the early period of the FBI's history, its agents mainly investigated bankruptcy fraud, antitrust crimes, and neutrality violations. During World War One, the Bureau was given the responsibility of investigating espionage, sabotage, sedition (resistance against lawful authority), and draft violations. The passage of the National Motor Vehicle Theft Act in 1919 further broadened the Bureau's jurisdiction. After the passage of Prohibition in 1920, the gangster era began, bringing with it a whole new type of crime. Criminals engaged in kidnapping and bank robbery, which were not federal crimes at that time. This changed in 1932 with the passage of a federal kidnapping statute. In 1934, many other federal criminal statutes were passed, and Congress gave Special Agents the authority to make arrests and to carry firearms. The FBI's size and jurisdiction increased greatly during the Second World War and came to include intelligence matters in South America. 
With the end of that war and the arrival of the Atomic Age, the FBI began conducting background security investigations for the White House and other government agencies, as well as probes into internal security matters for the executive branch of the government. In the 1960s, civil rights and organized crime became major concerns of the FBI, followed by counterterrorism, drugs, financial crime, and violent crimes in the 1970s. These remain the FBI's major concerns today, only now to a greater extent. With all of this responsibility, it is fair to say that the FBI is a field-oriented organization. It has nine divisions and four offices at FBI Headquarters in Washington, D.C. These divisions and offices provide direction and support services to 56 field offices and approximately 10,100 Special Agents and 13,700 other employees. Each FBI field office is overseen by a Special Agent in Charge, except for those located in New York City and Washington, D.C.; due to their large size, those offices are each managed by an Assistant Director in Charge. FBI field offices conduct their official business both directly from their headquarters and through approximately 400 satellite offices, known as resident agencies. The FBI also operates specialized field installations: two Regional Computer Support Centers, one in Pocatello, Idaho, and one in Fort Monmouth, New Jersey, and two Information Technology Centers (ITCs), one at Butte, Montana, and one at Savannah, Georgia. The ITCs provide information services to support field investigative and administrative operations. Because the FBI has so much responsibility, its investigative authority is the broadest of all federal law enforcement agencies. The FBI also stresses long-term, complex investigations and emphasizes close relations and information sharing with other federal, state, local, and foreign law enforcement and intelligence agencies. 
A significant number of FBI investigations are conducted with other law enforcement agencies or as part of joint task forces. As part of this process, the FBI has divided its investigations into the following programs:

Applicant Program: Department of Energy and Nuclear Regulatory Commission applicants; Department of Justice candidates; FBI Special Agent and support applicants; and others.
Civil Rights Program: Civil Rights Act of 1964; discrimination in housing; Equal Credit Opportunity Act.
Counterterrorism Program: hostage taking; sabotage; attempted or actual bombings; and others.
Financial Crime Program: bank fraud and embezzlement; environmental crimes; fraud against the government; and others.
Foreign Counterintelligence Program: espionage; foreign counterintelligence matters.
Organized Crime/Drug Program: drug matters; money laundering; Organized Crime/Drug Enforcement Task Force matters; and others.
Violent Crimes and Major Offenders Program: theft of government property; crime aboard aircraft; kidnapping and extortion; and others.

These programs cover most of what the FBI investigates, and individual cases within a program often receive extensive investigative attention because of their size, potential impact, or sensitivity. Because FBI Special Agents are responsible for handling so many different things, they

Sunday, November 24, 2019

Essay on Global Conflict

The recent Ukrainian crisis of 2014, which began with the Euromaidan Revolution and the overthrow of the corrupt regime of Viktor Yanukovych, and the subsequent annexation of the Crimean Peninsula and the armed conflict in Donbas (a region in eastern Ukraine), became the epicenter of global conflict and dramatically affected the international correlation of forces. This conflict can be most effectively considered through the prism of the realist paradigm, and the Billiard Ball Theory in particular, which implies that international relations can be understood in terms of the movements of certain states in respect to others and the pressure they apply as the result of power manipulations (Baysha, 2014). Thus, contemporary Ukraine, by virtue of its geographical location and the characteristics of its political and economic situation (poverty, pervasive corruption and a shadow economy), has found itself in the zone of geopolitical interests of the major players, and is in fact a small coin in the geopolitical games of the US, the EU, Russia and China. Here, the interests of the US, acting through the EU, and primarily through Poland, lie in using Ukraine to weaken Russia and prevent the development and strengthening of the Eurasian economic and military bloc. Russia's interests consist in using Ukraine to weaken the United States and to avoid the disposition of NATO troops and bases on the Russian border, as well as to limit the economic expansion of the EU. China's interests lie in using Ukraine as a transport corridor for Chinese products to the EU and in turning Ukraine into its raw-material appendage with a cheap labor force. 
All geopolitical players are using Ukraine exclusively for their own purposes, and all their actions are aggressive, with the only difference that the aggression of the US, the EU and Russia has an overt economic and military component, whereas the aggression of China is determined by a new latent technology: the seizure of territories through economic expansion (Milevski, 27-29). A real manifestation of these interests was revealed in relation to Ukraine's decision to sign the association agreement with the EU, with the prospect of joining the EU and NATO membership. Initially, the Revolution of Dignity was caused by the general crisis of representative democracy in Ukraine, and was then actively inspired and supported by the US and the EU, clearly aimed at Ukraine's geopolitical absorption. The success of their common policies led to a change of the political regime in Ukraine: after the pro-Russian President Viktor Yanukovych fled the country, politicians fighting for Ukraine's European choice came to power. This, however, could not be easily tolerated by Russia. Under the guise of protecting the Russian-speaking citizens of Ukraine in Crimea, Russia, a permanent member of the UN Security Council with veto power, having flagrantly violated international law, the resolutions of the UN, OSCE and PACE, the postwar agreement on the status quo, the 1993 nuclear disarmament of Ukraine, and the Constitution of Ukraine, first captured and later annexed Crimea (Tannenbaum, 8). Indeed, the unilateral secession of Crimea from Ukraine and its joining Russia cannot be considered legitimate. At the time of the collapse of the Soviet Union in 1991, Crimea was part of the Ukrainian Soviet Socialist Republic voluntarily, according to a popular referendum; therefore the secession of Crimea from Ukraine, especially under the pressure of ill-concealed aggression from a neighboring state, is a violation of the principle of territorial integrity. 
Despite the official recognition of the occupation by UN General Assembly Resolution 68/262 of March 27, 2014, in support of the territorial integrity of Ukraine, and the general condemnation of Russia's actions on the part of world powers, their policy of containment and even appeasement failed (Milevski, 29-33). Moreover, steadily following the idea of the restoration of Russian civilizational space and the establishment of controlled regimes in the occupied territories, Russia initiated the emergence of the breakaway republics DPR (Donetsk People's Republic) and LPR (Lugansk People's Republic) in April 2014, and supports the illegal armed groups operating in the Donbas, supplying them with ammunition, modern weapons, heavy military equipment, etc. (Wawrzonek, 760-761). There are also compelling data from the monitoring mission of the Office of the UN High Commissioner for Human Rights that a significant part of the regular troops of Russia, along with tanks, infantry fighting vehicles and other military equipment, also operates on the territory of Ukraine. Thus, Amnesty International considers the ongoing events an international armed conflict with Russia involved in it. Fighting in eastern Ukraine, continuing since April 2014, has led to significant destruction of civilian infrastructure and hundreds of thousands of refugees. According to the UN, by October 29 at least 4,035 people had been killed (including 298 passengers on board MH17) and 9,336 wounded in eastern Ukraine. According to UN data from the beginning of October, over 5 million people live within the conflict zone, 379,000 people have moved to other regions of Ukraine, and more than 426,000 have moved to neighboring countries (United Nations). The response to Russia's military invasion of the eastern borders of Ukraine has become an economic war with Russia. 
Official sanctions, alongside an unofficial policy of reducing oil prices as the basis of Russia's well-being, have today led to a record drop in the ruble (the Russian currency), mass withdrawal of foreign investment, postponement of the construction of the South Stream gas pipeline from Russia to the EU, and the forced necessity to sign a gas agreement at the lowest competitive prices. The financial pressure is now forcing Russian companies to ask for public funding as a palliative, which creates an additional burden on the state budget. In this way, the Russian strategic decision to maintain the Ukrainian crisis in order to ensure the long-term profitability of the Russian gas company Gazprom has not yet been realized, while the US and its allies have managed to temporarily weaken Russia, as well as create opportunities for future energy independence. It should be noted that the European Union, with the second largest trade balance after China, has always been a profitable strategic partner of NATO, and in addition to the political significance, this alliance promises great benefits to American producers of shale gas. In addition, Ukraine's neighboring countries Turkey, Poland and Romania are intensively making every effort to strengthen their regional positions (Wawrzonek, 773-780). At the same time, there are several pro-Russian governments in the European Union. It is primarily Hungary, where the authoritarian statist regime of Orban is trying to balance its dependence on Brussels with projects with Moscow, including projects for the construction of nuclear power plants using a Russian loan. In turn, Slovakia and Greece are also traditional partners of Russia, as is the Czech Republic, where President Zeman openly takes a very pro-Russian stance. However, all of these players are beneficiaries, not donors, of the EU. 
At the same time, the growing rhetoric of contradictions and mutual threats directly refers society back to the days of the Cold War (Roskin, 5-9). Indeed, the forgotten bipolar world system is clearly being revived. We can say that the era that began with the famous Yalta meeting between Churchill, Roosevelt and Stalin, when the foundations of the post-war architecture were determined, ended in that very same Yalta with the annexation of Crimea. The post-Yalta architecture assumed a structure of the world in which the great powers, which at the time were on the verge of developing nuclear weapons, understood that the world is not entirely fair but that there was a need to fix the status quo. With the status quo fixed among the large countries, it was permissible to fight for areas of interest somewhere in Afghanistan, Africa or Korea, that is, in the Third World. Now that world order has been broken. This essentially means that the UN Security Council does not work, because in an architecture where each permanent member has a veto, Russia will not veto itself, or rather will not vote for sanctions against itself and condemnation of its aggression. The OSCE does not work either, and we see that two countries that are members of the Organization for Security and Cooperation in Europe are actually in a state of undeclared war. The effectiveness of the Council of Europe is also questioned, as it is based on democratic values, including the inviolability of borders, with both Russia and Ukraine being members of it. Thus, the greatest global challenge of the Ukrainian crisis is the fact that what was built after World War II and reformed in the early 1990s has now completely ceased to function. 
We are facing a global problem of filling the void: will the world fall apart into small artificially national countries, each creating temporary alliances to suit its interests, or will we reanimate the international security architecture, or will we create something new instead of the current system? As for the local aspects of this problem, a possible solution to the conflict in Ukraine is the signing of a tripartite agreement (the EU, Ukraine and the Customs Union) on joint sustainable development, using the attention of all the countries to the situation in Ukraine. The objectives of the Ukrainian authorities are to find instruments to influence the geopolitical players and to implement a global reform of the country. Today, Ukraine receives the next (and possibly the last) chance in its history, passing a kind of turning point, which can be overcome in favor of statehood only by a strong focus of all efforts on achieving the strategic goal of transforming the country into a powerful unitary state, a regional leader capable of guaranteeing security and a high social standard of living for its citizens. From the standpoint of liberalism, the internal way out of this political crisis is possible by switching to a cluster system of territorial administration and people power with elements of direct democracy.

Thursday, November 21, 2019

Murder by Death Movie Review Example | Topics and Well Written Essays - 500 words

Murder by Death - Movie Review Example The fact that he is not really murdered ultimately proves Twain's superiority as a detective over the celebrity detectives, as he claims before the assembly of the detectives in his house. Therefore, Twain's hold on the whole scheme of the movie's plot clearly indicates that he is the main suspect behind his own murder. Even though Dick Charleston appears in such a way as to unexpectedly suggest that he is the only person motivated strongly enough to commit the crime of murder, in no way can he be associated with the murder. If a prying eye is focused deep into the actions that he performs during the assembly in the millionaire Lionel Twain's home, one will necessarily be convinced that he is the only character who is mentally powerful, whereas the other characters are too busy with their own oddities to be engaged with the intention of murder. The strong motive evolves from his ego, as he speaks of Twain's suicide: "The motive is simple: ego. If we were not to solve this crime, he would indeed be named the world's foremost detective. And with an ego like his, the fact that he had to die for it would be a small price to pay" (Simon, Murder by Death). Charleston's evaluation of Twain is true to some extent, but it is flawed on the point that Twain's motive is to be the greatest detective in the world. At the news of the butler's death, Charleston does not get the opportunity to go to the kitchen. As a result he does not get the opportunity to obtain the butcher's knife with which Twain was killed. So Charleston is out of suspicion. When Monsieur Perrier's motive and opportunities are analyzed, he, in no way, can be taken into account as the assumed murderer of Twain. Perrier got the opportunity to go and investigate the butler's death. They found the butler dead. The key that he

Wednesday, November 20, 2019

Literature Research and PICO Question Annotated Bibliography

Literature Research and PICO Question - Annotated Bibliography Example Indeed, the authors argued that ascertaining the wishes or choices of the patient for their care is a fundamental requirement in establishing an effective care plan. The study found that around 42% of the 380 participants with advanced cancer preferred palliative care of a more conservative nature, with the said percentage of patients actually choosing only one or two modes of treatment. Still, in addition to this preference, the authors also looked at what demographic characteristics predict the choice between CPM and AAMM. Maida, et al. (2010) found that younger, non-Caucasian cancer patients who have substitute decision makers (SDM) are more likely to prefer more aggressive means of coping with the disease condition. This study was chosen as significant literature because it aimed to quantify the characteristics and preferences of patients regarding their end-of-life care. By doing so, the study provides a rich background for the PICO question exploring which could be more effective in providing comfort at the end of life, CPM or AAMM, as perceived by the patients themselves. Rose, J. H., O'Toole, E. E., Dawson, N. V., Lawrence, R., Gurley, D., Thomas, C., et al. (2004). Perspectives, Preferences, Care Practices, and Outcomes Among Older and Middle-Aged Patients With Late-Stage Cancer. Journal of Clinical Oncology, 22(24), 4907-4917. Much like the earlier study by Maida, et al. (2010), this research aimed to look into the preferences of terminal cancer patients for their care at the end of life. However, this study took a more qualitative approach, actually exploring the care preferences of the patients and the degree to which these preferences were perceived to have provided comfort to the patient before their death. 
By utilizing a more in-depth exploration of the perceived effectiveness of different palliative methods, the researchers were able to point out which methods were most effective in promoting comfort.

Sunday, November 17, 2019

Toyota's strategy for production efficiency Essay

Toyota's strategy for production efficiency - Essay Example Chrysler's end worked hand in hand to develop the Sienna minivan so that in return Toyota could provide the much needed production values and techniques, as far as automobile manufacturing was concerned, to Chrysler. This was seen as a one-off exercise aimed at bridging the gap between Chrysler and Toyota, since each of these companies was willing to learn a thing or two from the other's realm. Chrysler was ready to share the information because it sought the help of Toyota, with its state-of-the-art production techniques, in manufacturing automobiles (Clifford 1998). Chrysler wanted to touch the benchmark in the industry, and for that it was ready to go all out and work with the key automobile manufacturer, so that the benefits were mutual in the end. This was a very important collaboration for the two automobile manufacturers, as the link provided for understanding each other's strengths in their respective areas of expertise. Chrysler was willing to share its minivan know-how as it wanted to acquire the best manufacturing skills and techniques, which was a very good initiative by Chrysler. In the end, the result was a win-win situation for everyone, as quality was improved on both sides. 2. Many companies seek to cut costs and improve quality by introducing techniques such as just-in-time and quality circles. The results, however, often fall short of those achieved at Toyota. Why do you think this is the case? The results are usually lower than expected. This is because the estimates are always based on best practices, but the on-the-ground realities are usually different. Just-in-time and related quality mechanisms bank on the provision of quality at all costs; however, what these processes forget is that it is not always possible to reach new heights and break fresh ground. 
The companies thus have a hard time dealing with the improvement in quality, and these techniques are not given much room to be exploited in the

Friday, November 15, 2019

Resistance To Change A Critical Analysis Management Essay

Nowadays, organizations are required to make changes for their survival. It is very important to respond quickly to modern technological advancement and to competition at internal and external levels (Edmonds, 2011). So change is an everyday experience in private and governmental organizations and central to their development. The purpose of this study is to analyse the issue of managing organizational change through various approaches. The paper will argue concisely on the factors of resistance to change and how resistance is handled for the successful implementation of a change plan, through reviewing relevant literature on the topic. It will further examine the scope of effective management of the organisational change process. In this paper, the analysis of effective management of resistance to organisational change proceeds through three main sections. Firstly, change is defined in the light of organisational development. Secondly, factors influencing change and resistance to change are discussed analytically in two consecutive sections. Finally, the management of resistance to change is discussed elaborately before concluding.

What is change?

Change is defined as "any alteration of the status quo" (Bartol and Martin, 1994: 199). Organizational change may be defined as "new ways of organizing and working…" (Dawson, 2003: 11). Breu and Benwell (1999), Ragsdell (2000) as well as Bamford and Forrester (2003) define organisational change as the process of moving an organisation from some present status to a new status, whether planned or unplanned. Organizational change is a form of departure from a long-standing old position in order to introduce a new idea and action for better performance and adjustment to a new environment (Schalk et al., 1998). 
From different perspectives we can observe different types of change, but in general organisational changes can be classified into two types: incremental and radical (Ragsdell, 2000; McAdam, 2003; Milling & Zimmermann, 2010). The literature argues that incremental change is a small-scale, continuous change to present structure and functions, while radical change involves large-scale fundamental change (McAdam, 2003; Cunha, et al, 2003; Romanelli & Tushman, 1994). Furthermore, Beugelsdijk et al (2002) argue that the organisational change process initially begins with radical change and is followed by incremental change that creates a prospect or a threat. In contrast, Del Val and Fuentes (2003) state that change is a general procedure of response to organisational settings, because real changes are not only incremental or transformational but also a mixture of both. However, Bamford and Forrester (2003) have further classified organisational change as planned and emergent. The planned approach to organisational change highlights the different statuses through which an organisation will have to shift from an unacceptable position to a recognized desired position (Eldrod II and Tippett, 2002). The emergent approach suggests that change is an unpredictable and often undesired continuous process of adjustment to changing circumstances (Burnes, 1996, 2004; Dawson, 1994). The uncertainty of circumstances makes the emergent approach more significant than the planned approach (Bamford and Forrester, 2003). So it is important for any organisation to identify the requirements of its prospects and how to deal with the required changes; this is an inseparable part of an organization's strategy (Burnes, 2004; Rieley and Clarkson, 2001). Managerial proficiency is very much needed for successful change (Senior, 2002), and for survival and effective competition, successful management of change is highly required (Luecke, 2003; Okumus and Hemmington, 1998). 
Factors Influencing Change

Hughes (2006) argues that different factors can influence organisational change, from the effect of internal control, to external shifts in consumer behaviour, to changes in the business environment. The most common reasons are: legislation, mergers or acquisitions, a competitive market, the world economy, structural change, technological advancement and strategic re-organisation. Moreover, Haikonen et al (2004) identify important internal and external factors which influence change, such as policy, structure, control systems, organisational culture, and power distribution. Moreover, Saka (2003) states that external factors such as national or international rules and regulations influence the organization to adopt new strategies to survive in a changed situation. Furthermore, many other factors related to market competition, economic growth, and living standards also oblige organisations to commence change programmes to keep up with and manage external forces (Beugelsdijk, et al, 2002; Breu & Benwell, 1999; Carr & Hancock, 2006). Consequently, technological advancement creates internal and external demands on organizations to build their capabilities and to assess their strategies regularly (Harris & Wegg-Prosser, 2007; Ragsdell, 2000; Shaft, et al, 2008). Finally, Eisenbach et al (1999) also recognized different factors that compel change, such as innovation, new technology, the workforce, productivity and working quality. Similarly, McAdam (2003) and Mukherji and Mukherji (1998) emphasize that the availability of skilled employees, changing customer behavior, the free flow of information and cultural change have a great impact on organizations, compelling them to modify their activities and to readjust or undertake large-scale change to transform from deadlock to effectiveness. 
Finally, internal change factors like leadership, organizational culture, employee relationships, workload, reward systems, internal politics, and communication systems compel the organization to take up a change strategy (Bhatnagar, et al, 2010; Potter, 2001; Van Marrewijk, et al, 2010; Young, 1999). On the whole, Breu and Benwell (1999) as well as Rees and Hassard (2010) emphasized the development of managers' capabilities to evaluate the situation exactly, across different factors, for effective management of resistance to a change program.

Resistance to Change

Resistance is a phenomenon which affects the change process by slowing down its start, obstructing its accomplishment and raising its costs (Ansoff, 1990; Del Val & Fuentes, 2003; Young, 1999). From another perspective, resistance is conduct that tries to maintain the status quo, so it is comparable to inertia in that it tries to avoid change (Maurer, 1996; Rumelt, 1995). Similarly, Jansen (1996), Potter (2001) as well as Romanelli and Tushman (1994) argue that organisational change generates resistance from individuals as their comfort zones are affected, creating stress, insecurity and uncertainty. Moreover, Ford et al (2002) as well as Reissner (2010) confirm that resistance comes about when a change program threatens existing status, or causes fear of supposed consequences like threats to personal security and apprehension about the new capabilities and skills needed to perform in the changed surroundings. On the other hand, resistance by the workforce may be seen as a normal part of any change process and in this manner a valuable source of knowledge, useful in learning how to manage a successful change process (Antonacopoulou & Gabriel, 2001; Bhatnagar, et al, 2010; Bovey & Hede, 2001). 
Furthermore, Antonacopoulou and Gabriel (2001) and Lamb and Cox (1999) argue that people will usually resist any change program for various reasons, including misunderstanding, inconvenience, negative rumor, economic implications, low tolerance for change and fear of the unknown. Moreover, the perceived disruption of long-standing customs associated with change initiatives ultimately contributes to the appearance of resistance, mainly from middle managers, who resist because of the perceived threat to their current position and supremacy (Marjanovic, 2000; Ragsdell, 2000; Saka, 2000). Moreover, a mechanistic business environment, where the major focus is on productivity and centralisation, experiences a higher rate of resistance than business units having a more open culture, giving freedom to explore new capacities and technologies (Mirow, et al, 2008; Valle, 2002). Accordingly, Lamb and Cox (1999) and Trader-Leigh (2002) indicate that resistance in the public sector is much higher than in the private sector. However, Bovey and Hede (2001) as well as Del Val and Fuentes (2003) find that workers show resistance to change when change principles and organizational principles differ widely, while individual anxiety, ineffective management, precedents of failure, low motivation, insufficient strategic vision and pessimism are several further sources of resistance. So, if the ground for change is not well planned and competently managed, then the employees may block the change initiatives and will apply protective tactics to resist, out of apprehension that they will be oppressed by others (Bovey & Hede, 2001; Perren & Megginson, 1996). 
Nevertheless, Jones et al (2008) argue that employees do not generally resist the change itself, but rather the anticipated undesirable results of change or the process of executing the change. For that reason, all managers need to give appropriate attention to human and socio-cultural issues and to adopt a distinct policy for successful implementation of change (Diefenbach, 2007; Lamb & Cox, 1999).

How to manage resistance

Resistance to change is an important matter in change management, and a participatory approach is the best way to manage resistance for successful change (Pardo-del-Val et al., 2012). Potter (2001) and Ragsdell (2000) maintain that resistance to organisational change has to be viewed as an opportunity, preparing people for change as well as permitting them to participate actively in the change process. Furthermore, Conner (1998) affirms that the negative effects of resistance arising from major changes can be minimized by open discussion. Moreover, Judson (1991) asserts that effective change can be achieved and resistance reduced through the commitment and participation of employees. In addition, contemporary managers are required to examine and categorize all the stakeholders as change agents, neutrals, conservatives or resistors, according to their role in resistance to change, so as to apply the appropriate approach to each type of people so that they feel willing to accommodate the change program (Chrusciel & Field, 2006; Lamb & Cox, 1999). Moreover, it is essential to engage people in all stages of the procedure for the successful completion of change, where effective communication of change objectives can play one of the most important roles (Becker, 2010; Beugelsdijk, et al, 2002; Frahm & Brown, 2007; Lamb & Cox, 1999). 
Accordingly, Potter (2001) as well as Van Hoek et al. (2010) suggest that for managing resistance to change successfully, organisations must build up the capability to anticipate changes and ways of working with them, and thereby engage employees to face the challenges sincerely and with complete preparation. Similarly, Caldwell (2003) and Macadam (1996) propose that for the smooth running of the organisation, managers should be open to the involvement of employees at every step of decision-making and production. Moreover, resistance usually arises from misinterpretation among people, and hence in each change programme it is essential that everyone concerned, from the upper level to the lower level, understands the reasons behind the change, where training and cooperation may speed up the process (Beugelsdijk et al., 2002; Bovey & Hede, 2001; Johnson, 2004; Taylor, 1999). In addition, at moments of crisis and ambiguity people require results, accomplishments and successful communication, which help reduce anxiety and eventually produce enthusiasm for change amongst employees (Hill & Collins, 2000a; Potter, 2001). Consequently, the new public management emphasises new kinds of policies that presume a flexible, open and more creative structure, and therefore proactively illustrating targets, setting superior examples and creating an exciting vision might be regarded as some of the core leadership capabilities essential for steering change (Beugelsdijk et al., 2002; Chrusciel & Field, 2006; Harris & Wegg-Prosser, 2007). Moreover, Aladwani (2001) argues that unlocking the human abilities of workers, by permitting them to use their intelligence and be innovative at work, is important, and that the function of managers has to be redefined from manager to coach, so as to contribute continuously to confidence-building across the business.
Furthermore, against the background of rapidly growing technological improvement and deregulation since the early 1990s, traditional approaches can no longer cope with the modern conditions of severe ambiguity and persistent change; rather, decentralised organisations are likely to empower their employees (Caldwell, 2003; Harris & Wegg-Prosser, 2007). In addition, Andrews et al. (2008) and Caldwell (2003) agree with Frahm and Brown (2007) that, unlike the conventional top-down bureaucratic systems, present-day managers must adopt a bottom-up participatory strategy by consulting stakeholders. Caldwell (2003) further recommends that change managers should share ownership of the change approach with the stakeholders by involving them in the process, since it is they who know the realities of the business and usually they who hold the keys to its problems. Lastly, as contextualisation is the main element of any societal and organisational change, in the twenty-first-century context the status quo is not a suitable option, and organisations must become lean and agile for the modern world of digital convergence (Carr & Hancock, 2006; Harris & Wegg-Prosser, 2007; Milling & Zimmermann, 2010). Moreover, Bamford and Forrester (2003), Diefenbach (2007) and Eisenbach et al. (1999) agree that in the emergent approach to managing change, senior managers transform themselves from administrators into facilitators, with the main responsibility for execution vested in the middle managers. Also, Diefenbach (2007) further highlights that middle managers should cooperate with peers, divisions, customers, suppliers and also with senior management, as they are the key players in organisational change programmes.
Furthermore, Bamford and Forrester (2003) as well as Diefenbach (2007), drawing on Lewin's (1958) three-step model of unfreezing, moving and refreezing, maintain that before any new behaviour can be effectively adopted, the old one has to be unlearned.

Tuesday, November 12, 2019

Stem Cell Therapy Essay

Sepulveda Bio. Anthro. Tues 6-9 Cell Replacement and Stem Cell Therapy to Treat Neurodegenerative Disease Stem cell therapy is being used to treat neurodegenerative diseases such as amyotrophic lateral sclerosis or ALS, commonly referred to as Lou Gehrig's disease. The disease itself, new therapies and treatments, along with a cure, are currently being studied by universities and stem cell researchers. ALS is a progressive neurodegenerative disease which attacks the neurons in the brain and spinal cord that control voluntary movement, eventually leading to respiratory failure and death (Kamel et al. 2008). The current course of action for a patient with ALS is physical therapy and, if their budget allows, cell replacement therapy. However, there is presently no cure, and the patient will eventually develop respiratory problems and die from the disease. Adult stem cells (ASCs) and blastocyst or embryonic stem cells (ESCs) are being used to treat amyotrophic lateral sclerosis in cell replacement therapy, yet this only slows the degeneration of the neurons (Goldman & Windrem, 2006). Research into both adult stem cell and blastocyst stem cell technologies is the only practical option in approaching a cure or a more effective treatment for ALS. Both of these technologies require stem cells, which are challenging to safely retrieve and utilize with current treatment methods, which is why it is essential to continue to support and fund this research. Cell replacement therapy is currently the only stem cell treatment for neurodegenerative diseases such as ALS, but researchers are trying to find new ways of treating and possibly curing ALS. Cell augmentation using stem cells could be the future of treatment for ALS, but scientists are currently working to increase the availability of the ESCs and ASCs needed to treat patients using cell replacement therapy.
There are three different ways to harvest the necessary stem cells for neuron replacement: growing ESCs in vitro, harvesting stem cells from the brain or spinal cord of a live donor through biopsy, and harvesting from the brain or spinal cord of a donor post mortem (Sohur et al., 2006). The goal of treatment of ALS is to slow and eventually stop cell loss before it progresses to the point of functional impairment. To accomplish this goal, protecting the remaining neurons as well as replacing and augmenting damaged neurons is important. The ultimate goal, to cure ALS, is to fully restore authentic neuronal circuitry, or "full systems reconstruction" (Ormerod et al., 2008). Full systems reconstruction would consist of recreating a map of precisely patterned neurons of the correct type, using the stem cells to send projections to the appropriate field within the brain. The cure seems virtually impossible with the technology currently available, but recreating neurogenesis may be possible in the future. Adult stem cell harvesting is difficult and costly when retrieving the stem cells needed to treat neurodegenerative diseases from brain matter or spinal fluid. Neurons are very specific cells in the brain and spinal cord and possess a special set of neurotransmitters depending on their function; this poses problems when harvesting ASCs (Zhang et al., 2006). The ASCs needed to treat ALS must be able to specialize and replace the degenerating neurons affected by the disease. This procedure would not be possible without using stem cells to replace the damaged and degenerating neurons. However, a problem associated with ASCs is the rejection of foreign cells when transplanting ASCs taken through biopsy from a donor. Although biopsy from the patient receiving treatment is an option, the ASCs required come from the brain or spinal cord and can be very dangerous to harvest this way.
Adult neural stem cells can be harvested from brain tissue, either from a deceased donor or through biopsy, and then grown in a culture (Ormerod et al., 2008). ASCs will not expand nearly as much as ESCs in culture and will differentiate into a limited number of neuron types. When using ESCs, which conform to the necessary specialized type of neurons, the lack of flexibility encountered in the ASCs is eliminated. Human embryonic stem cells (ESCs), however difficult to harvest initially, will multiply greatly when grown in culture. The ESCs are generated by in vitro fertilization and grown to the blastocyst stage before harvesting. The advantages of ESCs are considerable; the results of the therapy would not be obtainable without the use of stem cells to replace the damaged cells. The ease and frequency with which ESCs can be expanded in culture is a significant advantage over ASCs. Growing such high numbers of stem cells in this fashion can prove problematic, though: while the cells reproduce indefinitely, they become more susceptible to mutation and may cause tumors following transplant (Ormerod et al., 2008). Thus, a challenge arises to differentiate the cells fully before transplant, or to grow many more cultures from different donor eggs, which are difficult and expensive to obtain. ESCs are more easily specialized into the neurons, oligodendrocytes, and glia needed to treat ALS than ASCs; but the possibility of tumors forming in the patient, along with the cost and complication of creating new lines of blastocysts from donor eggs, poses a disadvantage of using this technology (Ormerod et al., 2008). Taking into consideration ESC technology's advantages and disadvantages, it is equally as viable an approach to a cure for ALS as ASC technology. ALS is an extremely destructive disease which unfortunately plagues a large population. ALS is difficult to treat because it is a neurodegenerative disease and requires brain surgery and neuron replacement.
Both adult stem cell and embryonic stem cell therapies have the potential to increase the quality of life for patients with ALS, but each carries its own inherent risks that must be taken into account by the patient and doctors when choosing a stem cell therapy method. Donors are few and far between, and the necessary cells are very specific for this particular procedure. Through an increase in research and development of new ways to multiply and store stem cells, along with an increase in donors, the road toward a cure will be a short one. Hopefully in the future the treatment will become easier, less costly, and less dangerous for the patient.

Works Cited

Larsen, C. S. (2010). Essentials of Physical Anthropology: Discovering Our Origins. New York and London: W. W. Norton & Company.
Ormerod, B. K., Palmer, T. D., & Maeve, A. C. (2008). Neurodegeneration and cell replacement. Philosophical Transactions: Biological Sciences, 363(1489), 153-170. Retrieved from http://www.jstor.org/stable/20210044
Sohur, U. S., Emsley, J. G., Mitchell, B. D., & Macklis, J. D. (2006). Adult neurogenesis and cellular brain repair with neural progenitors, precursors and stem cells. Philosophical Transactions: Biological Sciences, 361(1473), 1477-1497. Retrieved from http://www.jstor.org/stable/20209745
Kamel, F., Umbach, D. M., Stallone, L., Richards, M., Hu, H., & Sandler, D. P. (2008). Association of lead exposure with survival in amyotrophic lateral sclerosis. Environmental Health Perspectives, 116(7), 943-947. Retrieved from http://www.jstor.org/stable/25071103
Goldman, S. A., & Windrem, M. S. (2006). Cell replacement therapy in neurological disease. Philosophical Transactions: Biological Sciences, 361(1473), 1463-1475. Retrieved from http://www.jstor.org/stable/20209744
Zhang, S., Li, X., Johnson, A., & Pankratz, M. T. (2006). Human embryonic stem cells for brain repair? Philosophical Transactions: Biological Sciences, 363(1489), 87-99. Retrieved from http://www.jstor.org/stable/20210040

Sunday, November 10, 2019

Advantages and Disadvantages to Society Essay

Humans have become so dependent on electricity that society's evolution has to a great extent been based on it. Without lights, computers, and most methods of transportation and communication, the last hundred years of advancement could be set back. With these things considered, electricity could clearly be regarded as man's greatest discovery. However, as much as electricity has played a major role in the progress of humankind, it has also contributed widely to the slow destruction of society. Therefore, electricity has both advantageous and disadvantageous effects on society. Electricity is an invisible form of energy created by the movement of charged particles, a phenomenon that results from the existence of electrical charge. It flows into our homes along wires and can be easily converted into other energy forms, such as heat and light. It can be stored in batteries or sent along wires to make electric trains, computers, light bulbs and other devices work. The comprehension of electricity has led to the invention of generators, computers, nuclear-energy systems, X-ray devices, motors, telephones, radio and television. (Grolier Encyclopedia of Knowledge, 2002) Everything in the world, including humans and the air they breathe, is made of atoms. Each of these tiny particles has a positively charged center, called the nucleus, with smaller, negatively charged electrons whizzing around it. Electricity is created when one of the electrons jumps to another atom. This can be caused by the magnetic field in a generator, by chemicals in a battery, or by friction (rubbing materials together).

Early History

The breakthrough discovery that an electric charge could be created by rubbing two materials together was first made by the Greek philosopher Thales around 600 BC. He found that if he rubbed the fossilized tree sap, amber, with silk, it attracted small light objects such as feathers and dust.
However, the first practical device for the generation of electrical energy was not invented until 1800, when the Italian physicist Alessandro Volta constructed the first crude battery. For centuries, this strange, puzzling property was thought to be limited to amber. Two thousand years later, in the 16th century, William Gilbert provided evidence that many other substances are electric. He gave these substances the Latin name electrica, originating from the Greek word elektron (which means "amber"). According to the 2008 Encyclopedia Americana, the word magnet comes from the Greek name for the black stones from Magnesia in Asia Minor. Sir Thomas Browne, an English writer and physician, first used the word electricity in 1646. Relationships between electricity and magnetism were established in 1820 by the Danish physicist H. C. Oersted and the French physicist D. F. J. Arago from studies of the effects of a current-carrying conductor on a compass needle or iron filings. That same year, the French physicist Andre Ampere showed that an electric current flowing through a wire created a magnetic field similar to that of a permanent magnet. In 1831, the English physicist Michael Faraday conceived a device for converting mechanical energy to electrical energy. Faraday's machine, the first dynamo (DC generator), was made up of a copper disk rotating between the poles of a permanent magnet. A year later, Hippolyte Pixii of France built both an AC generator and a DC generator, the latter fitted with a commutator. Such primitive generators were widely used for experimental purposes. Nonetheless, they could not generate a great deal of power, because the field strength of their permanent magnets was slight. In 1866, the German inventor Werner von Siemens initiated the use of electromagnets instead of permanent magnets for the field poles of a DC generator.
In 1870, the Belgian inventor Zenobe Gramme further improved the performance of DC generators by using armatures of iron wound with rings of insulated copper wire. Powered by reciprocating steam engines, Gramme's generators were used to supply current for arc lamps in lighthouses and factories. Electric arc street lamps were installed in Paris in 1879, in Cleveland, Ohio, in 1879, and in New York City in 1880. However, the carbon filament incandescent lamp invented by Thomas Edison and Joseph Swan in 1880 provided a far better and more suitable source of light than arc lamps did. This invention created a great demand for electric power, and it marked the beginning of the electric power industry. Electricity was a mystifying force. It did not seem to occur naturally at first sight, except in the frightening form of lightning. Researchers had to do an atypical thing to study electricity: they had to manufacture the phenomenon before they could analyze it. We have come to realize that electricity is everywhere and that all matter is electrical in nature. Many innovators in the study of magnetism and electricity became known between the late 1700s and the early 1800s, many of whom left their names on electrical units. These scientists include Charles Augustin de Coulomb (the unit of charge), Andre Ampere (current), Georg Ohm (resistance), James Watt (electrical power), and James Joule (energy). Luigi Galvani gave us the galvanometer, a device for measuring currents, while Alessandro Volta gave us the volt, a unit of potential, or electromotive force. Similarly, C. F. Gauss, Hans Christian Oersted, and W. E. Weber all made their mark and left their names in electrical engineering. Only Benjamin Franklin failed to leave his name on any electrical unit, despite his noteworthy contributions. All of the afore-mentioned scientists contributed to the study of electricity.
However, the two real giants in the field were 19th-century Englishmen, Michael Faraday and James Clerk Maxwell. The widespread use of electricity as a source of power is largely due to the work of pioneering American engineers and inventors such as Nikola Tesla and Charles Proteus Steinmetz during the late 19th and early 20th centuries (Microsoft Encarta Reference Library 2002). One of the most well-known, perhaps, is Thomas Alva Edison, most famous for his development of the first commercially practical incandescent lamp. He was one of the most prolific inventors of the late 19th century, and his greatest contribution is the development of the world's first central electric-light power station. By the time he died in West Orange, New Jersey, he had patented over 1,000 inventions. (Jenkins, R. 2000)

II. BODY

Electrical activity takes place constantly everywhere in the universe. Electrical forces hold molecules together. The nervous systems of animals work by way of weak electric signals transmitted between nerve cells called neurons. Electricity is generated, transmitted, and converted into other forms of energy such as heat, light and motion through natural processes, as well as by devices built by people. Over the period from 1950 to 1999, the most recent year for which data are available, annual world electric power production and consumption rose from slightly less than 1,000 billion kilowatt-hours (kWh) to 14,028 billion kWh. A change also took place in the type of power generation. In 1950, about two-thirds of the electricity came from thermal or steam-generating sources and about one-third from hydroelectric sources. In 1998, thermal sources produced sixty-three percent of the power, but hydropower had declined to nineteen percent, and nuclear power accounted for seventeen percent of the total. The growth in nuclear power slowed in some countries, notably the United States, in reaction to concerns about safety. Nuclear plants generated twenty percent of U.S.
electricity in 1999; in France, the world leader, the figure was 76 percent.

Friday, November 8, 2019

Astronomers Essay

Part One: Brief Descriptions of the Following Astronomers

Walter Baade: Baade was a German-born American whose work gave new estimates for the age and size of the universe. During wartime, blackouts aided his observations and allowed him to identify and classify stars in a new and useful way, which led him to revise and improve Hubble's values for the size and age of the universe (to the great relief of geologists). He also worked on supernovae and radio stars. Milton Humason: Humason was a colleague of Edwin Hubble's at Mt. Wilson and Palomar Mountain who was instrumental in measuring faint galaxy spectra, providing evidence for the expansion of the universe. Jan Oort: In 1927, this Dutch astronomer proved by observation (at the Leiden observatory) that our galaxy is rotating, and calculated the distance of the sun from the centre of the galaxy and the period of its orbit. In 1950 he suggested the existence of a sphere of incipient cometary material surrounding the solar system, which is now called the 'Oort cloud.' He proposed that comets detach themselves from this Oort cloud and go into orbit around the sun. From 1940 onwards he carried out notable work in radio astronomy. Harlow Shapley: Shapley deduced that the Sun lies near the central plane of the Galaxy, some 30,000 light-years away from the centre. In 1911 Shapley, working with results given by Henry N. Russell, began finding the dimensions of stars in a number of binary systems from measurements of their light variation when they eclipse one another. These methods remained the standard procedure for more than 30 years. Shapley also showed that Cepheid variables cannot be star pairs that eclipse each other. He was the first to propose that they are pulsating stars.
At the Mount Wilson Observatory, Pasadena, Calif., in 1914, he made a study of the distribution of the globular clusters in the Galaxy; these clusters are immense, densely packed groups of stars, some containing as many as 1,000,000 members. He found that of the 100 clusters known at the time, one-third lay within the boundary of the constellation Sagittarius. Utilizing the newly developed concept that variable stars accurately reveal their distance through their period of variation and apparent brightness, he found that the clusters were distributed roughly in a sphere whose centre lay in Sagittarius. Since the clusters assumed a spherical arrangement, it was logical to conclude that they would cluster around the centre of the Galaxy; from this conclusion and his other distance data Shapley deduced that the Sun lies at a distance of 50,000 light-years from the centre of the Galaxy; the number was later corrected to 30,000 light-years. Before Shapley, the Sun was believed to lie near the centre of the Galaxy. His work, which led to the first realistic estimate for the actual size of the Galaxy, thus was a milestone in galactic astronomy. Allan Sandage: Sandage (U.S.) discovered the first quasi-stellar radio source (quasar), a starlike object that is a strong emitter of radio waves. He made the discovery in collaboration with the U.S. radio astronomer Thomas A. Matthews. Sandage became a member of the staff of the Hale Observatories (now the Mount Wilson and Palomar Observatories), in California, in 1952 and carried out most of his investigations there. Pursuing the theoretical work of several astronomers on the evolution of stars, Sandage, with Harold L. Johnson, demonstrated in the early 1950s that the observed characteristics of the light and colour of the brightest stars in various globular clusters indicate that the clusters can be arranged in order according to their age. This information provided insight into stellar evolution and galactic structure.
Later, Sandage became a leader in the study of quasi-stellar radio sources, comparing accurate positions of radio sources with photographic sky maps and then using a large optical telescope to find a visual starlike source at the point where the strong radio waves are being emitted. Sandage and Matthews identified the first of many such objects. Sandage later discovered that some of the remote, starlike objects with similar characteristics are not radio sources. He also found that the light from a number of the sources varies rapidly and irregularly in intensity.

Part Two

Cerro Tololo Interamerican

Wednesday, November 6, 2019

North American Birch Tree Identification

Most everyone has some recognition of the birch tree, a tree with light-colored white, yellow, or grayish bark that often separates into thin papery plates and is characteristically marked with long horizontal dark raised lines (also known as lenticels). But how can you identify birch trees and their leaves in order to tell the different types apart?

Characteristics of North American Birch Trees

Birch species are generally small- or medium-sized trees or large shrubs, mostly found in northern temperate climates in Asia, Europe, and North America. The simple leaves may be toothed or pointed with serrated edges, and the fruit is a small samara, a small seed with papery wings. Many types of birch grow in clumps of two to four closely spaced separate trunks. All North American birches have double-toothed leaves that turn yellow and showy in the fall. Male catkins appear in late summer near the tips of small twigs or long shoots. The female cone-like catkins follow in the spring and bear the small winged samaras that later drop from the mature structure. Birch trees are sometimes confused with beech and alder trees. Alders, of the genus Alnus, are very similar to the birch; the principal distinguishing feature is that alders have catkins that are woody and do not disintegrate in the way that birch catkins do. Birches also have bark that more readily layers into segments; alder bark is fairly smooth and uniform. The confusion with beech trees stems from the fact that the beech also has light-colored bark and serrated leaves. But unlike the birch, beeches have smooth bark that often has a skin-like appearance, and they tend to grow considerably taller than birches, with thicker trunks and branches. In their native environment, birches are considered pioneer species, which means that they tend to colonize open, grassy areas, such as spaces cleared by forest fire or abandoned farms.
You will often find them in meadowy areas, including meadows where cleared farmland is in the process of reverting to woodland. Interestingly, the sweet sap of the birch can be reduced into syrup and was once used to make birch beer. The tree is valuable to wildlife species that depend on the catkins and seeds for food, and the trees are an important timber for woodworking and cabinetry.

Taxonomy

All birches fall into the general plant family of Betulaceae, which is closely related to the Fagaceae family, including beeches and oaks. The various birch species fall into the Betula genus, and there are several that are common North American trees in natural environments or used for landscape design purposes. Because in all birch species the leaves and catkins are similar and they all have very much the same foliage color, the main way to distinguish the species is by close examination of the bark.

4 Common Birch Species

The four most common birch species in North America are described below. Paper birch (Betula papyrifera): Also known as canoe birch, silver birch, or white birch, this is the species most widely recognized as the iconic birch. In its native environment, it can be found in forest borders across the northern and central U.S. Its bark is dark when the tree is young, but it quickly develops the characteristic bright white bark that peels so readily in thick layers that it was once used to make bark canoes. The species grows to about 60 feet tall but is relatively short-lived. It is susceptible to borer insects and is no longer used widely in landscape design due to its susceptibility to damage. River birch (Betula nigra): Sometimes called black birch, this species has a much darker trunk than the paper birch, but still has the characteristic flaky surface. In its native environment, it is common in the eastern third of the U.S.
Its trunk has a much rougher, coarser appearance than most of the other birches, and it is bigger than the paper birch, sometimes growing to 80 feet or more. It prefers moist soil, and although short-lived, it is relatively immune to most diseases. It is a common choice in residential landscape design. Yellow birch (Betula alleghaniensis): This tree is native to forests of the northeast U.S. and is also known as the swamp birch because it is often found in marshy areas. It is the largest of the birches, easily growing to 100 feet in height. It has silvery-yellow bark that peels in very thin layers. Its bark does not have the thick layers seen in paper birches nor the very rough texture seen in river birches. Sweet birch (Betula lenta): This species, also known in some areas as the cherry birch, is native to the eastern U.S., especially the Appalachian region. Growing to 80 feet, its bark is dark in color, but unlike that of the dark river birch, the bark is relatively tight and smooth, with deep vertical scores. From a distance, the impression is of a smooth, silver bark marked by irregular vertical black lines.

Sunday, November 3, 2019

W3Q-Executing and implementing project portfolio management Assignment

W3Q-Executing and implementing project portfolio management - Assignment Example NDT-Solutions, a private sector company specializing in the development and construction of facilities for non-destructive testing laboratories, takes a conventional approach to project management, mainly by assigning projects to departmental managers. This approach requires functional managers to act as project managers while performing their primary duties in parallel, a practice which was very common in the 1960s (Kerzner 2010). A major reason for such an approach was the lack of appreciation for project management methodology and best practices as identified by the Project Management Institute in the PMBOK (2008). Thus projects in the company were normally initiated from within the departments without considering their relevance to the organizational business strategy, and the majority of projects were a result of self-initiative, often without the support and consent of management. In addition, internal politics and individual interests kept projects and their outcomes from being visible to other departments until they materialized. Thus other departments which might have had a significant role in project execution, or might have been impacted by the project outcome, were in fact alienated from the project. Every department struggled to portray its project as a success while endeavoring to undermine the efforts made in other departments. This limited the company's overall ability to achieve synergy among its resources. The end result was duplication of projects, inefficient utilization of resources and the discouragement of any innovative ideas at the organizational level.
Despite its drawbacks, the approach has advantages in terms of expanding experience in project management and improving skills; however, the lack of training in, and appreciation of, a standardized project portfolio management approach would increase the risk of inefficient resource utilization and duplication of effort, especially when the company is managing multiple projects. According to Kerzner (2010), "portfolio management is

Friday, November 1, 2019

Assignment Example | Topics and Well Written Essays - 500 words - 24

Assignment Example Without reducing aggression and violence, it is impossible to create the peace, harmony and law and order across Africa that are the prerequisites for development (Brito, 2010). It should be pointed out that the Rwandan ethnic violence began in April 1994, during which more than 0.8 million civilians and politicians (mostly Tutsis), police officers and military soldiers lost their lives. It was one of the worst genocides in history, triggered by the assassination of the Rwandan President, Juvenal Habyarimana, a representative of the majority Hutu group. The Tutsis were accused of targeting the President's plane with rockets in an attempt to debilitate the Hutus across Rwanda. However, it should not be forgotten that tensions between Hutus and Tutsis already existed; therefore, the President's assassination should be considered merely the spark that ignited this brutal massacre, not its underlying cause. It is worthwhile to mention that Paul Kagame was among the founding leaders of the Rwandan Patriotic Front (RPF), which was a rebel group operating with the explicit and implicit support of Tutsi refugees in Rwanda, assisted by 'moderate Hutus'. Indeed, a major driver of this ethnic turmoil was the unilateral and open support for Hutu civilians by the Rwandan military, political groups, police and entrepreneurs who were strictly against the presence of Tutsis in Rwanda. They also organized a militia named 'Interahamwe' to carry out the genocide (BBC Report, 2008). It is worthwhile to point out that the African continent has remained a centre of civil disobedience movements, internal conflicts, ethnic turmoil, rebellions, militancy, religious intolerance, wars among nations, bad governance and political unrest because of weaknesses in systems and political institutions. Indeed, some major countries where the aforementioned factors have been observed